Dataset schema (field: type, observed range):
relevant_id: string, length 19
earliest_claim_jurisdiction: string, 68 distinct values
jurisdiction: list, length 1 to 44
ipcr_codes_str: string, length 8 to 1.46k
earliest_claim_date: timestamp[ms], 1964-06-26 to 2023-06-20
earliest_claim_year: date, 1964-01-01 to 2023-01-01
classifications_ipcr_list_first_three_chars_list: list, length 1 to 20
title_en: string, length 3 to 600
abstract_en: string, length 20 to 10.5k
claims_text: string, length 33 to 221k
description_en: string, length 30 to 3.47M
relevant_id: 112-167-974-722-606
earliest_claim_jurisdiction: DE
jurisdiction: [ "JP", "WO", "US", "EP" ]
ipcr_codes_str: G01B9/02,G01B11/30
earliest_claim_date: 1999-10-09T00:00:00
earliest_claim_year: 1999
classifications_ipcr_list_first_three_chars_list: [ "G01" ]
title_en: interferometric measuring device for form measurement
the invention relates to an interferometric measuring device for the form measurement, especially of rough surfaces of a measurement object (o). said measuring device comprises a unit for producing a beam (sld), said unit emitting a short-coherent beam, and a beam-splitting device (st1) for producing a first and a second partial beam (t1, t2). the first partial beam is directed onto the measurement object (o) via an object light path and the second partial beam is directed onto a reflective reference plane (rsp) via a reference light path. the measuring device is further provided with a superimposition element on which the measuring beam coming from the measurement object (o) and the reference plane (rsp) are caused to interfere. an image converter (bs) that receives the superimposed radiation supplies corresponding signals to an evaluation device. for the purpose of measurement, the optical path length of the object light path relative to the optical path length of the reference light path is modified. the aim of the invention is to provide a device that allows an exact measurement of object surfaces in narrow cavities in all three dimensions and with a very high precision. to this end, an optical probe (os, oso) is disposed in the object light path and is provided with an optical system that produces at least one optical intermediate image.
1. an interferometric measuring device for measuring shape, including surfaces of a measured object, comprising: 2. the measuring device according to claim 1, wherein the at least one intermediate image is generated in the object light path. 3. the measuring device according to claim 2, wherein both the radiation directed to the measured object and the radiation returning from the measured object pass through the optical probe. 4. the measuring device according to claim 1, further comprising, in the reference light path, one of a further optical probe and a glass device for compensating for a glass proportion present in the optical probe with regard to the elements for the intermediate image. 5. the measuring device according to claim 1, 6. the measuring device according to claim 5, wherein the reference mirror is provided on one of a flat face-plate and a prism. 7. the measuring device as recited in claim 6, further comprising a fiber optic element positioned between the beam splitter and the further beam splitter. 8. the measuring device according to claim 1,
background information the present invention relates to an interferometric measuring device for measuring the shape especially of rough surfaces of a measured object, having a radiation-producing unit emitting short-coherent radiation, a beam splitter for forming a first and a second beam component, of which the first is directed via an object light path to the measured object and the second is directed via a reference light path to a reflective reference plane, having a superposition element at which the radiation coming from the measured object and the reference plane are brought to superposition, and an image converter which receives the superposed radiation and sends corresponding signals to a device for evaluation, the optical path length of the object light path being changed relative to the optical path length of the reference light path. such an interferometric measuring device is known from german de 197 21 842 c2. in the case of this known measuring device, a radiation-producing unit, such as a light-emitting diode or a superluminescent diode, emits a short-coherent radiation, which is split via a beam splitter into a first beam component guided over an object light path, and a second beam component guided over a reference light path. the reference light path is periodically changed, using two deflector elements and a stationary diffraction grating positioned behind it, by activating the deflector elements, so as to scan the object surface in the depth direction. if the lengths of the object light path and the reference light path coincide, a maximum interference contrast results, which is detected using an evaluation device connected downstream of the photodetector device. an interferometric measuring device representative of the measuring principle (white-light interferometry or short-coherent interferometry) is also specified in german de 41 08 944 a1. here, however, a moved mirror is used to change the light path in the reference beam path.
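The short-coherence measuring principle described above — interference contrast appears only where the object and reference path lengths agree to within the coherence length — can be illustrated numerically. The following sketch is not part of the patent; the wavelength and coherence length are made-up values:

```python
import numpy as np

# Short-coherence (white-light) interference: fringe contrast is governed
# by a coherence envelope that peaks where the object and reference path
# lengths match. Parameter values are illustrative, not from the patent.
WAVELENGTH = 0.8e-6      # assumed centre wavelength of the SLD source (m)
COHERENCE_LEN = 10e-6    # assumed coherence length (m)

def interference_intensity(delta):
    """Normalized detector intensity for a path difference `delta` (m)."""
    envelope = np.exp(-(delta / COHERENCE_LEN) ** 2)
    return 1.0 + envelope * np.cos(2 * np.pi * delta / WAVELENGTH)

deltas = np.linspace(-50e-6, 50e-6, 20001)
contrast = np.exp(-(deltas / COHERENCE_LEN) ** 2)

# The contrast (the envelope) is maximal at zero path difference, which is
# the condition the evaluation device detects.
best = deltas[np.argmax(contrast)]
```

At `delta = 0` the intensity reaches its maximum of 2 (constructive interference with full contrast); for `|delta|` well beyond the coherence length the cosine term is suppressed and only the constant background remains.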
in this method, the surface of the object is imaged on the photodetector device, using an optical system, it being difficult, however, to conduct measurements in cavities. additional such interferometric measuring devices and interferometric measuring methods based on white-light interferometry are described by p. de groot, l. deck, surface profiling by analysis of white-light interferograms in the spatial frequency domain, j. mod. opt., vol. 42, no. 2, 389-401, 1995, and t. maack, g. notni, w. schreiber, w.-d. prenzel, endoskopisches 3-d-formmesssystem (endoscopic 3-d shape measuring system), in jahrbuch für optik und feinmechanik, ed. w.-d. prenzel, verlag schiele und schoen, berlin, 231-240, 1998. in the case of the interferometric measuring devices and measuring methods named, one difficulty is making measurements in deep cavities or narrow ducts. one suggestion for a measuring device in which measurements can be performed even in cavities, using white-light interferometry, is shown in german de 197 21 843 c1. it is proposed there to split a first beam component further into a reference beam component and at least one measuring beam component, an additional beam splitter and the reference mirror being positioned in a common measuring probe. such a measuring probe can indeed be introduced into cavities; however, using this device, only a small, point-like location on the surface can be scanned in each measurement. in order to measure further locations on the surface in the depth direction, relative motion between measured object and measuring probe is required, exact lateral coordination, however, being costly and difficult. the object of the present invention is to make available an interferometric measuring device, of the kind mentioned at the outset, which especially makes possible simplified measurements in deep cavities with great accuracy. this object is achieved by the features of claim 1.
according to this, an optical probe having an optical device for generating at least one optical intermediate image is provided in the object light path. similarly to an endoscope or a borescope, the intermediate images produced by the optical device make it possible to image the observed surface not only at high longitudinal resolution but also at high lateral resolution over a path which is long compared to the diameter of the imaging optics. for example, the optical probe can be introduced into the bores of valve seats or into vessels of organisms for the purpose of medical measurements. in contrast to the usual endoscope, quantitative depth information is now obtained. in this connection, an advantageous embodiment is one in which the at least one intermediate image is generated in the object light path. for this, the same optical device is used for illuminating the measured location on the measured object as for transmitting the radiation coming from the measured object to the photodetector device, if it is provided that both the radiation going to the measured object and the radiation coming back from it pass through the optical probe. the optical image on the photodetector device can be improved by providing, in the reference light path, an identical further optical probe or at least a glass device for compensating for a glass proportion present in the optical probe with regard to the elements for the intermediate image(s).
a favorable construction, as far as handling is concerned, is one in which the optical path difference between the first and the second arm is greater than the coherence length of the radiation; the radiation coming from the first mirror and the reflecting element are guided through a common optical probe (common path) via a further beam splitter; in the optical probe, a reference mirror is arranged at such a distance from the measured object that the path difference between the first mirror and the reflecting element is canceled, and one part of the radiation incident on the reference mirror is reflected to the photodetector device and one part is allowed to pass through to the measured object and is reflected from there to the photodetector device. a further benefit of this design is that the object and reference waves pass through virtually the identical optics assembly, so that aberrations are substantially compensated for. moreover, this set-up is more resistant to mechanical shocks. in this connection, two embodiment possibilities are for the reference mirror to be provided on a flat face-plate or on a prism. in this connection, handling can further be simplified by arranging a fiber optic element between the beam splitter and the further beam splitter. in this design too, splitting essentially into a probe part and a part having a modulation arrangement is realized, handling being also favored. the present invention is elucidated in the following on the basis of exemplary embodiments, with reference to the drawings. the figures show: fig. 1 a first exemplary embodiment of an interferometric measuring device having an optical probe in a measured light path. fig. 2 a second exemplary embodiment in which an optical probe is provided both in the measured light path and in the reference light path. fig. 3 a design of the interferometric measuring device having a common reference and measured light path. fig.
4 a further exemplary embodiment in which, compared to fig. 3 , fiber optics are provided between the first and a further beam splitter. fig. 5 a further design example of the interferometric measuring device. fig. 1 shows an interferometric measuring device having a radiation-producing unit sld emitting short-coherent radiation, such as, for example, a light-emitting diode or a superluminescent diode, whose radiation is split by a beam splitter st 1 into a first beam component t 1 of a measured light path and a second beam component t 2 of a reference light path. the design is like that of a michelson interferometer. in the reference light path, the second beam component is reflected by a reference plane in the form of a reference mirror rsp, the reference light path being periodically changed by moving the reference mirror rsp or by acoustooptical deflectors, as described in german de 197 21 842 c2, mentioned at the outset. if the change of the light path is performed using two acoustooptical deflectors, a mechanically moved reflecting element becomes unnecessary, but instead, a fixed element, particularly a diffraction grating, can be used. by using a glass block g, the dispersion of an optical probe oso arranged in the object light path can be corrected as necessary. in the object light path, the radiation is coupled into optical probe oso, so that the radiation illuminates the surface to be measured of measured object o. the surface of the object is imaged by optical probe oso via one or more intermediate images on photodetector equipment in the form of an image converter or image sensor bs, for instance, a ccd camera. the image of measured object o on image sensor bs is superposed with the reference wave of the second beam component. a high interference contrast occurs in the image of measured object o when the path difference between the reference light path and the measured light path is less than the coherence length.
with regard to this, the measuring principle is based on white-light interferometry (short-coherent interferometry), as is described in greater detail in the documents mentioned at the outset. the length of the reference light path is varied over the entire measuring range for scanning in the depth direction of the surface to be measured, the length of the reference light path being detected for each measured point at which the greatest interference contrast appears. the intermediate images make it possible to image the surface of the measured object at a high lateral resolution over a range that is large compared to the diameter of the imaging optics. optical probe oso resembles an endoscope or a borescope; however, both the illumination and the return of the radiation coming from the measured surface occur via the same optical device through at least one intermediate image. fig. 1 shows schematically some lenses l as further imaging elements. the actual intermediate images are created in optical probe oso. for applications in which an exact compensation for the influence of the imaging lenses of optical probe oso is required, an identical optical probe osr is also integrated in the reference light path or reference arm between beam splitter st 1 and reference mirror rsp, matching the one in the object light path between beam splitter st 1 and measuring object o, as shown in fig. 2 . in a modified design according to fig. 3 , the interferometric measuring device may also be realized as a device having common reference and measuring arms (common path device). the interferometric measuring device is again illuminated by a short-coherent (broadband) radiation-producing unit. beam splitter st 1 splits the light into two arms, into first beam component t 1 and second beam component t 2 , first beam component t 1 falling on a first, fixed mirror sp 1 , and second beam component t 2 falling on reflecting element rsp in the form of a reference mirror.
the optical path difference between the arms thus formed is greater than the coherence length of the radiation produced by radiation-producing unit sld. starting from the two mirrors sp 1 and rsp, the reflected radiation is fed to optical probe os via beam splitter st 1 and a further beam splitter st 2 . the special quality of this design is that there is a reference mirror rsp 2 in optical probe os itself. a part of the radiation is reflected by this reference mirror rsp 2 , while the other part of the radiation illuminates the surface to be measured. reference mirror rsp 2 may be mounted on a flat face-plate or on a prism. by using a prism, the wave front of the radiation illuminating the object surface, i.e. of the object wave, can be adapted to the geometry (e.g. inclination) of the surface to be measured. with the aid of optical probe os, measured object o is in turn imaged via one or more intermediate images on image sensor bs, and superposed by the reference wave. in order to obtain height information, reflecting element rsp is made to traverse the measuring range, or the light path is changed as described above. in the image of measured object o, great interference contrast appears when the path difference between fixed mirror sp 1 and reflecting element rsp, or of the light paths of the two arms, is exactly the same as the optical path difference between reference mirror rsp 2 and measured object o. in order to obtain the height profile, known methods for detecting the greatest interference contrast are used in each image point (pixel). the benefit of this design is that the object and reference waves pass through virtually the identical optics assembly, so that aberrations are substantially compensated for. moreover, this set-up is more rugged and, therefore, less susceptible to mechanical shocks.
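The height-profile extraction described here — find, per pixel, the scan position at which the interference contrast is greatest — can be sketched as a simulation. None of this code is the patent's implementation; the heights, scan range, and noise level are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated depth scan: for each pixel, the interference contrast peaks
# at the scan position matching that pixel's surface height.
scan_positions = np.linspace(0.0, 100.0, 501)        # reference positions (um)
true_height = rng.uniform(20.0, 80.0, size=(4, 4))   # surface height per pixel (um)
coherence_len = 5.0                                  # envelope width (um)

# Contrast envelope for every pixel and scan position, plus a little noise.
delta = scan_positions[None, None, :] - true_height[..., None]
contrast = np.exp(-(delta / coherence_len) ** 2)
contrast += 0.005 * rng.standard_normal(contrast.shape)

# Height profile: the scan position of maximum contrast in each pixel,
# as in the per-pixel peak detection the text describes.
height = scan_positions[np.argmax(contrast, axis=-1)]
```

The recovered `height` array agrees with `true_height` to within the scan step, which mirrors the statement that depth resolution comes from locating the contrast maximum rather than from the imaging optics.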
for even simpler handling of the measuring device, the radiation of beam splitter st 1 can also be transmitted to further beam splitter st 2 , using fiber optics lf, as is shown in fig. 4 . a further alternative design is shown in fig. 5 . as an alternative to the design having the common reference path and measuring light path as in figs. 3 and 4 , a combined mach-zehnder-michelson arrangement is provided. again, a broadband radiation-producing unit sld is used, whose radiation is coupled into a fiber optic element. first beam splitter st 1 splits the radiation into an object arm oa and a reference arm ra. in object arm oa, first beam component t 1 is coupled out of the corresponding light-conducting fiber and coupled into optical probe oso via further beam splitter st 2 , so that the surface to be measured of measured object o is illuminated. the object surface is imaged by optical probe oso via one or more intermediate images on image sensor bs. in reference arm ra, light is coupled out of the corresponding light-conducting fiber, is then propagated, if necessary, through an optical probe osr identical to the one applied in object arm oa, and is coupled in by a second fiber coupler r 2 to a light-conducting fiber positioned there. the reference wave reaches further beam splitter st 2 via the light-conducting fiber. there it is uncoupled and superposed with the object wave on image sensor bs via further beam splitter st 2 . in both arms, the optical paths in air, in optical probes oso or osr as well as in the light-conducting fibers have to be adjusted. tuning of the path lengths in reference arm ra is performed here, for example, by shifting second fiber coupler r 2 , so that the optical air path in the reference arm is changed.
relevant_id: 113-824-444-904-043
earliest_claim_jurisdiction: CN
jurisdiction: [ "US", "EP", "JP" ]
ipcr_codes_str: A61B5/04,A61B5/00,A61B5/044,A61B5/0452,A61B5/117,G06F21/32,G06T7/00
earliest_claim_date: 2016-01-06T00:00:00
earliest_claim_year: 2016
classifications_ipcr_list_first_three_chars_list: [ "A61", "G06" ]
title_en: electrocardiogram (ecg) authentication method and apparatus
disclosed are an electrocardiogram (ecg) authentication method and apparatus, and a training method and apparatus for training a neural network model used for ecg authentication, the ecg authentication apparatus being configured to acquire an ecg signal of a subject, extract a semantic feature of the ecg signal, and authenticate the subject based on the extracted semantic feature.
1. an electrocardiogram (ecg) authentication method comprising: acquiring an ecg signal of a subject; filtering the ecg signal using a band pass filter; detecting a fiducial point from the ecg signal after filtering; acquiring a data segment from the filtered ecg signal based on the fiducial point; extracting a semantic feature of the acquired ecg signal using a neural network model; and authenticating the subject based on the extracted semantic feature, wherein the neural network model is trained using a candidate neural network model selected from candidate neural network models whose cross-entropy of its uppermost layer has been minimized. 2. the ecg authentication method of claim 1, wherein the extracting of the semantic feature of the ecg signal comprises extracting a semantic feature from the data segment using the neural network model. 3. the ecg authentication method of claim 1, wherein the fiducial point comprises a peak point of the filtered ecg signal and a minimum point close to the peak point. 4. the ecg authentication method of claim 1, wherein the authenticating comprises: calculating a similarity between the extracted semantic feature and a registered feature; and determining an authentication of the ecg signal based on a comparison of the similarity and a threshold. 5. the ecg authentication method of claim 1, wherein the neural network model is a semantic feature extraction model trained using a deep learning scheme based on ecg training data. 6. the ecg authentication method of claim 1, wherein the fiducial point comprises at least one of: the peak point of the filtered ecg signal, a left minimum point, a right minimum point close to the peak point, the peak point and the left minimum point, or the peak point and the right minimum point. 7. a non-transitory computer-readable storage medium storing instructions that, when executed by a processor, cause the processor to perform the method of claim 1. 8.
an electrocardiogram (ecg) authentication apparatus comprising: a processor configured to: receive an ecg signal of a subject; filter the ecg signal using a band pass filter; extract a semantic feature of the filtered ecg signal using a neural network model; and authenticate the subject based on the extracted semantic feature, wherein the neural network model is trained by detecting a fiducial point from the ecg signal after filtering and using a candidate neural network model selected from candidate neural network models whose cross-entropy of its uppermost layer has been minimized; and acquiring a data segment from the filtered ecg signal based on the fiducial point. 9. the training method of claim 8, wherein the training of the neural network model comprises training the neural network model based on an identification signal for identifying an entity corresponding to the ecg training data and an authentication signal for verifying whether items of ecg training data correspond to the same entity. 10. the ecg authentication device of claim 1, wherein the processor is further configured to receive the ecg signal, measured by another device, using an antenna. 11. a training method comprising: receiving electrocardiogram (ecg) training data; augmenting the ecg data; and filtering the ecg signal using a band pass filter; training a neural network model for ecg authentication based on the augmented ecg training data; training candidate neural network models for the augmented ecg training data; selecting a candidate neural network model from the candidate neural network models by minimizing a cross-entropy based on an uppermost layer of the candidate neural network model, wherein the augmenting comprises detecting a fiducial point from the ecg data after filtering and acquiring a data segment from the filtered ecg signal based on the fiducial point. 12.
the training method of claim 11, wherein the filtering of the ecg training data comprises filtering the ecg training data using a band pass filter having different passbands. 13. the training method of claim 11, wherein the filtering of the ecg training data comprises filtering the ecg training data using a band pass filter having a fixed passband. 14. the training method of claim 11, wherein the fiducial point comprises a peak point of the filtered ecg training data and a minimum point close to the peak point. 15. the training method of claim 11, wherein the augmenting of the ecg training data further comprises: normalizing a data segment obtained through the offset. 16. the training method of claim 11, wherein the training of the neural network model comprises: training candidate neural network models for each item of the augmented ecg training data; and selecting a candidate neural network model from the candidate neural network models based on an accuracy of a candidate semantic feature extracted using each of the candidate neural network models. 17. the training method of claim 16, wherein a final neural network model used for the ecg authentication is determined based on the selected candidate neural network model. 18. the training method of claim 16, further comprising: selecting a second candidate neural network model from the remaining candidate neural network models to increase the accuracy based on the semantic feature corresponding to the second candidate neural network model being combined with the selected candidate semantic feature. 19. the training method of claim 11, wherein the training of the neural network model comprises minimizing a value of a loss function wherein f denotes an output of a last fully connected layer of the candidate neural network model and the output indicates a semantic feature extracted from the ecg training data, t denotes an index of an actual type of entity, θ_id denotes a parameter of the soft-max layer corresponding to a last layer of the candidate neural network model, p_i denotes an actual probability distribution corresponding to the ecg training data and p̂_i denotes a probability distribution estimated using the candidate neural network model. 20. an electrocardiogram (ecg) authentication device comprising: an antenna; a cellular radio configured to transmit and receive data via the antenna according to a cellular communications standard; a touch-sensitive display; a sensor configured to measure an ecg signal of a subject; a memory configured to store instructions; and a band pass filter to filter the ecg signal; a processor configured to receive the ecg signal, to extract a semantic feature of the ecg signal using a neural network model, to authenticate the subject based on the extracted semantic feature, and to display the result of the authentication on the touch-sensitive display, wherein the neural network model is trained by detecting a fiducial point from the ecg signal after filtering and using a candidate neural network model selected from candidate neural network models whose cross-entropy of its uppermost layer has been minimized; and acquiring a data segment from the filtered ecg signal based on the fiducial point.
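Claim 19 names the symbols of its loss function but the formula itself does not survive in this text (the equation was presumably rendered as an image in the original filing and is left elided here). Based solely on the symbol definitions given — p_i the actual distribution, p̂_i the distribution estimated via the soft-max layer with parameter θ_id applied to the output f, and t the actual entity index — a standard soft-max cross-entropy consistent with those definitions would read (a reconstruction, not the claim's verbatim formula):

```latex
\hat{p}_i = \operatorname{softmax}(f;\,\theta_{id})_i, \qquad
L_{id}(f, t) = -\sum_i p_i \log \hat{p}_i = -\log \hat{p}_t,
```

where p is the one-hot distribution concentrated on the actual entity index t, so minimizing L_id maximizes the estimated probability of the correct entity.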
cross-reference to related application(s) this application claims the benefit under 35 usc § 119(a) of chinese patent application no. 201610007772.7 filed on jan. 6, 2016, in the state intellectual property office of the people's republic of china and korean patent application no. 10-2016-0119392 filed on sep. 19, 2016 in the korean intellectual property office, the entire disclosures of which are incorporated herein by reference for all purposes. background 1. field the following description relates to biometric authentication technology for authenticating a user based on a biosignal. 2. description of related art biometric authentication is technology for identifying a user based on individual biological or behavioral features such as, for example, an iris, a fingerprint, a pulse pattern, or a gait. in biometric authentication, electrocardiogram (ecg) authentication is a method of identifying a user based on an ecg signal. because these unique biosignals are not easily stolen or accidentally lost and are robust against forgery or falsification, the application of ecg authentication in security technology is promising. summary this summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. this summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter. in one general aspect, there is provided an electrocardiogram (ecg) authentication method including acquiring an ecg signal of a subject, extracting a semantic feature of the acquired ecg signal using a neural network model, and authenticating the subject based on the extracted semantic feature.
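The general aspect just described — extract a semantic feature, then authenticate by comparing it against a registered feature (claim 4 adds the similarity-and-threshold detail) — might look like this in outline. The cosine-similarity measure, the threshold value, and all feature vectors below are assumptions, not specified by the patent:

```python
import numpy as np

def authenticate(feature, registered, threshold=0.9):
    """Accept the subject if the cosine similarity between the extracted
    semantic feature and the registered feature exceeds a threshold.
    (Cosine similarity and the 0.9 threshold are illustrative choices.)"""
    sim = np.dot(feature, registered) / (
        np.linalg.norm(feature) * np.linalg.norm(registered))
    return bool(sim >= threshold), float(sim)

# Hypothetical feature vectors: one close to the enrolled user, one not.
registered = np.array([0.2, 0.9, 0.1, 0.4])
same_user = registered + 0.01 * np.array([1.0, -1.0, 0.5, 0.0])
other_user = np.array([0.9, 0.1, 0.8, 0.0])

ok, s1 = authenticate(same_user, registered)
bad, s2 = authenticate(other_user, registered)
print(ok, bad)   # True False
```

The choice of similarity measure is independent of the neural network model; any metric comparable against a fixed threshold would fit the claim language.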
the ecg authentication method may include preprocessing the ecg signal before extracting the semantic feature, wherein the preprocessing includes filtering the acquired ecg signal, detecting at least one fiducial point from the filtered ecg signal based on a fiducial point corresponding to the neural network model, and acquiring a data segment from the filtered ecg signal based on the at least one fiducial point. the extracting of the semantic feature of the ecg signal may include extracting a semantic feature from the data segment using the neural network model. the at least one fiducial point may include a peak point of the filtered ecg signal and a minimum point close to the peak point. the authenticating may include calculating a similarity between the extracted semantic feature and a registered feature, and determining an authentication of the ecg signal based on a comparison of the similarity and a threshold. the neural network model may be a semantic feature extraction model trained using a deep learning scheme based on ecg training data. the at least one fiducial point may include at least one of: the peak point of the filtered ecg signal and a left and a right minimum point close to the peak point, the peak point and the left minimum point, or the peak point and the right minimum point. in another general aspect, there is provided an electrocardiogram (ecg) authentication apparatus including a processor configured to receive an ecg signal of a subject, extract a semantic feature of the ecg signal using a neural network model, and authenticate the subject based on the extracted semantic feature. in one general aspect, there is provided a training method including receiving electrocardiogram (ecg) training data, augmenting the ecg data, and training a neural network model for ecg authentication based on the augmented ecg training data.
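The preprocessing chain described above (filter, detect a fiducial point, cut a data segment) can be sketched end to end. The crude FFT-mask band-pass, the global-maximum "R-peak" detector, and every signal parameter are illustrative stand-ins, not the patent's implementation:

```python
import numpy as np

def bandpass_fft(signal, fs, low, high):
    """Crude FFT band-pass filter (a real system would likely use a
    proper FIR/IIR design; this zeroes out-of-band bins)."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spectrum[(freqs < low) | (freqs > high)] = 0.0
    return np.fft.irfft(spectrum, n=len(signal))

def detect_peak(signal):
    """Fiducial point: global maximum, a stand-in for an R-peak detector."""
    return int(np.argmax(signal))

def extract_segment(signal, fiducial, half_width):
    """Fixed-length data segment centred on the fiducial point."""
    lo = max(fiducial - half_width, 0)
    return signal[lo:lo + 2 * half_width]

# Synthetic "ECG": baseline wander + one sharp peak + mains interference.
fs = 250.0
t = np.arange(0, 2.0, 1.0 / fs)
ecg = 0.5 * np.sin(2 * np.pi * 0.3 * t)                 # baseline wander
ecg += np.exp(-((t - 1.0) ** 2) / (2 * 0.01 ** 2))      # peak at t = 1 s
ecg += 0.1 * np.sin(2 * np.pi * 50.0 * t)               # mains interference

filtered = bandpass_fft(ecg, fs, 0.5, 40.0)
peak = detect_peak(filtered)
segment = extract_segment(filtered, peak, half_width=50)
```

The 0.5–40 Hz passband suppresses both the slow drift and the 50 Hz interference while keeping the sharp peak, so the detected fiducial point lands near t = 1 s and the segment is centred on it.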
the augmenting of the ecg data may include filtering the ecg training data using a filter, detecting at least one fiducial point from the filtered ecg training data, and acquiring a plurality of data segments having different lengths from the filtered ecg training data based on the at least one fiducial point. the filtering of the ecg training data may include filtering the ecg training data using a band pass filter having different passbands. the filtering of the ecg training data may include filtering the ecg training data using a band pass filter having a fixed passband. the at least one fiducial point may include a peak point of the filtered ecg training data and a minimum point close to the peak point. the augmenting of the ecg training data may include selecting a fiducial point from a current data segment of the ecg training data, performing an offset on the current data segment based on the fiducial point, and normalizing a data segment obtained through the offset. the training of the neural network model may include training a plurality of candidate neural network models for each item of the augmented ecg training data, and selecting at least one candidate neural network model from the candidate neural network models based on an accuracy of a candidate semantic feature extracted using each of the candidate neural network models. a final neural network model used for the ecg authentication may be determined based on the selected at least one candidate neural network model. the training method may include selecting a second candidate neural network model from the remaining candidate neural network models to increase the accuracy based on the semantic feature corresponding to the second candidate neural network model being combined with the selected candidate semantic feature.
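The augmentation step just described — offsetting the current data segment around a fiducial point and normalizing each result — might be sketched like this. The window length, the offset list, and the z-score normalization are assumptions, not values from the patent:

```python
import numpy as np

def augment_segment(segment, fiducial, offsets, window=64):
    """Generate shifted copies of a data segment around its fiducial
    point and z-normalize each one (illustrative augmentation scheme)."""
    out = []
    for off in offsets:
        lo = fiducial + off
        win = segment[lo:lo + window]
        if len(win) < window:        # skip shifts that fall off the end
            continue
        win = (win - win.mean()) / (win.std() + 1e-8)   # normalization
        out.append(win)
    return np.stack(out)

rng = np.random.default_rng(1)
seg = rng.standard_normal(256)       # stand-in for one filtered segment
augmented = augment_segment(seg, fiducial=100, offsets=[-4, -2, 0, 2, 4])
print(augmented.shape)   # (5, 64)
```

Each shifted window is normalized independently, so the training data sees the same heartbeat shape at several alignments, which is the point of the offset-based augmentation.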
the training of the neural network model may include training the neural network model based on an identification signal for identifying an entity corresponding to the ecg training data and an authentication signal for verifying whether items of ecg training data correspond to the same entity. in another general aspect, there is provided an electrocardiogram (ecg) authentication device including an antenna, a cellular radio configured to transmit and receive data via the antenna according to a cellular communications standard, a touch-sensitive display, a sensor configured to measure an ecg signal of a subject, a memory configured to store instructions, and a processor configured to receive the ecg signal, to extract a semantic feature of the ecg signal using a neural network model, to authenticate the subject based on the extracted semantic feature, and to display the result of the authentication on the touch-sensitive display. the processor may receive the ecg signal, measured by another device, using the antenna. other features and aspects will be apparent from the following detailed description, the drawings, and the claims. brief description of the drawings fig. 1 illustrates an example of authenticating a user based on an electrocardiogram (ecg) signal. fig. 2 illustrates an example of an operation of an ecg authentication method. fig. 3 illustrates an example of training a neural network model based on ecg training data. figs. 4a through 4e illustrate examples of a filtering result of ecg training data. fig. 5 illustrates an example of a fiducial point detected from ecg training data. figs. 6a through 6i illustrate examples of data segments acquired based on a fiducial point. figs. 7a through 7d illustrate examples of a result obtained by performing additional data processing on ecg training data. fig. 8a illustrates an example of a neural network model. fig. 8b illustrates an example of a neural network model. fig.
9 illustrates an example of selecting an optimal candidate neural network model from candidate neural network models using a greedy algorithm. fig. 10 illustrates an example of an ecg authentication apparatus. fig. 11 illustrates an example of a training apparatus. throughout the drawings and the detailed description, unless otherwise described or provided, the same drawing reference numerals will be understood to refer to the same elements, features, and structures. the drawings may not be to scale, and the relative size, proportions, and depiction of elements in the drawings may be exaggerated for clarity, illustration, and convenience. detailed description the following detailed description is provided to assist the reader in gaining a comprehensive understanding of the methods, apparatuses, and/or systems described herein. however, various changes, modifications, and equivalents of the methods, apparatuses, and/or systems described herein will be apparent after an understanding of the disclosure of this application. for example, the sequences of operations described herein are merely examples, and are not limited to those set forth herein, but may be changed as will be apparent after an understanding of the disclosure of this application, with the exception of operations necessarily occurring in a certain order. also, descriptions of features that are known in the art may be omitted for increased clarity and conciseness. the features described herein may be embodied in different forms, and are not to be construed as being limited to the examples described herein. rather, the examples described herein have been provided merely to illustrate some of the many possible ways of implementing the methods, apparatuses, and/or systems described herein that will be apparent after an understanding of the disclosure of this application. various alterations and modifications may be made to the examples.
here, the examples are not construed as limited to the disclosure and should be understood to include all changes, equivalents, and replacements within the idea and the technical scope of the disclosure. terms such as first, second, a, b, (a), (b), and the like may be used herein to describe components. each of these terminologies is not used to define an essence, order or sequence of a corresponding component but used merely to distinguish the corresponding component from other component(s). for example, a first component may be referred to as a second component, and similarly the second component may also be referred to as the first component. it should be noted that if it is described in the specification that one component is “connected,” “coupled,” or “joined” to another component, a third component may be “connected,” “coupled,” and “joined” between the first and second components, although the first component may be directly connected, coupled or joined to the second component. the terminology used herein is for the purpose of describing particular examples only, and is not to be used to limit the disclosure. as used herein, the terms “a,” “an,” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. as used herein, the terms “include,” “comprise,” and “have” specify the presence of stated features, numbers, operations, elements, components, and/or combinations thereof, but do not preclude the presence or addition of one or more other features, numbers, operations, elements, components, and/or combinations thereof. fig. 1 illustrates an example of authenticating a user based on an electrocardiogram (ecg) signal. an ecg authentication apparatus 110 performs ecg authentication based on an ecg signal of a user 120. an ecg signal is a signal including information on an electrical activity of a heart.
in an example, the ecg signal is measured by contacting electrodes included in the ecg authentication apparatus 110 to the skin of the user 120. an ecg authentication includes a process of determining whether the user 120 is a preregistered user based on an ecg signal measured from a body of the user 120. the ecg authentication is applicable to various applications such as, for example, an access control, financial transactions, a check-in at an airport, a health care service, and a security service. in the example of fig. 1, the ecg authentication apparatus 110 may be embedded in or interoperate with various digital devices such as, for example, a mobile phone, a cellular phone, a smart phone, a personal computer (pc), a laptop, a notebook, a subnotebook, a netbook, an ultra-mobile pc (umpc), a tablet personal computer (tablet), a phablet, a mobile internet device (mid), a personal digital assistant (pda), an enterprise digital assistant (eda), a digital camera, a digital video camera, a portable game console, an mp3 player, a portable/personal multimedia player (pmp), a handheld e-book, a portable lab-top pc, a global positioning system (gps) navigation, a personal navigation device or portable navigation device (pnd), a handheld game console, an e-book, and devices such as a high definition television (hdtv), an optical disc player, a dvd player, a blu-ray player, a set-top box, robot cleaners, a home appliance, content players, communication systems, image processing systems, graphics processing systems, other consumer electronics/information technology (ce/it) devices, or any other device capable of wireless communication or network communication consistent with that disclosed herein.
the ecg authentication apparatus 110 may be embedded in or interoperate with a smart appliance, an intelligent vehicle, an apparatus for automatic driving, a smart home environment, a smart building environment, a smart office environment, office automation, and a smart electronic secretary system. the digital devices may also be implemented as a wearable device, which is worn on a body of a user. in one example, a wearable device may be self-mountable on the body of the user, such as, for example, a ring, a watch, a pair of glasses, a glasses-type device, a bracelet, an ankle bracelet, a belt, a band, an anklet, a necklace, an earring, a headband, a helmet, a device embedded in clothes, or an eye glass display (egd), which includes one-eyed glass or two-eyed glasses. in another non-exhaustive example, the wearable device may be mounted on the body of the user through an attaching device, such as, for example, attaching a smart phone or a tablet to the arm of a user using an armband, incorporating the wearable device in the clothing of the user, or hanging the wearable device around the neck of a user using a lanyard. in an example, when the user 120 wearing the wearable device on one hand contacts one of the electrodes in the wearable device using another hand, an electric closed circuit may be formed in the body of the user 120. in such an electric closed circuit, a change in current due to a heartbeat may be measured as a change in ecg. the ecg authentication apparatus 110 extracts a feature of an acquired ecg signal using a neural network model and determines whether to authenticate the user based on the extracted feature. for example, the ecg authentication apparatus 110 calculates a similarity between the extracted feature and a preregistered feature.
in this example, the ecg authentication apparatus 110 determines that an authentication succeeds when the similarity is greater than or equal to a threshold, and determines that the authentication fails when the similarity is less than the threshold. the neural network model is a statistical model obtained through an imitation of a biological neural network and acquires a problem-solving skill through a training process. parameters of the neural network model are adjusted through the training process. the ecg authentication apparatus 110 acquires the feature of the ecg signal representing a unique biometric feature of the user 120 using the neural network model trained based on various training data. the feature of the ecg signal acquired by the ecg authentication apparatus 110 is a feature acquired using the trained neural network model. the feature is also referred to as, for example, a semantic feature. fig. 2 illustrates an example of an ecg authentication method. the operations in fig. 2 may be performed in the sequence and manner as shown, although the order of some operations may be changed or some of the operations omitted without departing from the spirit and scope of the illustrative examples described. many of the operations shown in fig. 2 may be performed in parallel or concurrently. in addition to the description of fig. 2 below, the above descriptions of fig. 1 are also applicable to fig. 2, and are incorporated herein by reference. thus, the above description may not be repeated here. referring to fig. 2, in 210, an ecg authentication apparatus, for example, the ecg authentication apparatus 110 of fig. 1 or the ecg authentication apparatus 1000 of fig. 10, acquires an ecg signal of a user. for example, the ecg authentication apparatus acquires the ecg signal using a sensor included in the ecg authentication apparatus or receives an ecg signal measured by another device. in 220, the ecg authentication apparatus preprocesses the ecg signal.
the preprocessing includes detection of a fiducial point and acquisition of a data segment. for example, the ecg authentication apparatus filters the ecg signal using a band pass filter configured to pass a predefined frequency band. a characteristic of the passband of the band pass filter is determined in a process of training a neural network model used for extracting a semantic feature of the ecg signal. through filtering, noise included in the ecg signal is removed or an ecg signal corresponding to a frequency band of interest is acquired from the ecg signal. the ecg authentication apparatus detects at least one fiducial point from the filtered ecg signal. the fiducial point includes at least one of a peak point and at least one minimum point close to the peak point. the fiducial point is also referred to as a key point. the ecg signal has a plurality of maximum points and minimum points. minimum points close to the peak point are defined based on the peak point among the maximum points and minimum points. a fiducial point to be detected from the filtered ecg signal is determined in advance in a process of training the neural network model. for example, a type of the fiducial point to be detected is determined in advance, such as detecting the peak point or detecting the minimum point close to the peak point from the filtered ecg signal. when a plurality of fiducial points is used, accuracy of identification may increase. the ecg authentication apparatus acquires a data segment from the filtered ecg signal based on the detected fiducial point. for example, the ecg authentication apparatus acquires, as the data segment, a signal having a length defined in advance based on the peak point in the filtered ecg signal. in this example, the length defined in advance based on the peak point may be a predefined time interval measured before and after the peak point. depending on an example, the preprocessing may not be performed in some cases.
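the preprocessing described above (band pass filtering, fiducial point detection, and data segment acquisition) can be sketched as follows. the sampling rate, passband, thresholds, and segment lengths are assumed example values, not values fixed by this description; an fft-based zeroing of out-of-band components stands in for the band pass filter:

```python
import numpy as np

def band_pass(ecg, fs, low, high):
    # zero out spectral components outside the passband (an fft-based
    # stand-in for the band pass filter described in the text)
    spec = np.fft.rfft(ecg)
    freqs = np.fft.rfftfreq(len(ecg), d=1.0 / fs)
    spec[(freqs < low) | (freqs > high)] = 0.0
    return np.fft.irfft(spec, n=len(ecg))

def detect_peaks(signal, min_gap, height):
    # local maxima above "height", kept at least "min_gap" samples apart
    peaks = [i for i in range(1, len(signal) - 1)
             if signal[i] > height
             and signal[i] >= signal[i - 1] and signal[i] > signal[i + 1]]
    kept = []
    for p in peaks:
        if not kept or p - kept[-1] >= min_gap:
            kept.append(p)
    return kept

def segment(signal, peaks, before=63, after=96):
    # "before" points + the fiducial point + "after" points -> 160 points
    return [signal[p - before:p + after + 1]
            for p in peaks if p - before >= 0 and p + after < len(signal)]
```

with the default `before=63` and `after=96`, each data segment has 63+1+96=160 sample points, matching the r-160 segmentation described later.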
in 230, the ecg authentication apparatus extracts the semantic feature of the ecg signal using the neural network model. the neural network model is a feature extracting model trained in advance based on training data. the neural network model outputs the semantic feature of the ecg signal used for ecg authentication based on input data. the training process of the neural network model will also be described with reference to fig. 3. in an example, the ecg signal on which the preprocessing is performed may be input to the neural network model. in another example, the ecg signal in a non-preprocessed state, acquired in 210, may be input to the neural network model. the neural network model extracts the semantic feature to be used for an authentication from the data segment acquired during the preprocessing or from the ecg signal. in 240, the ecg authentication apparatus authenticates a user based on the extracted semantic feature. the ecg authentication apparatus calculates a similarity between the semantic feature and a predefined registered feature or a reference feature corresponding to a target to be compared with the semantic feature. the authentication apparatus determines an authentication result to be a success in authentication or a failure in authentication based on a comparison result of the calculated similarity and a threshold. for example, a cosine similarity between a vector of the semantic feature and a vector of the registered feature is used as a method of measuring the similarity. in this example, according to an increase in the cosine similarity, the similarity between the semantic feature and the registered feature increases. thus, the ecg authentication apparatus determines that the authentication succeeds when the cosine similarity is greater than or equal to the threshold and determines that the authentication fails when the cosine similarity is less than the threshold.
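the similarity comparison in 240 can be sketched as follows; the threshold value is an assumed example, not a value fixed by this description:

```python
import numpy as np

def authenticate(semantic_feature, registered_feature, threshold=0.9):
    # cosine similarity between the extracted semantic feature vector
    # and the registered feature vector
    sim = np.dot(semantic_feature, registered_feature) / (
        np.linalg.norm(semantic_feature) * np.linalg.norm(registered_feature))
    # authentication succeeds when the similarity reaches the threshold
    return sim >= threshold, sim
```

for identical vectors the cosine similarity is 1 and the authentication succeeds; for orthogonal vectors it is 0 and the authentication fails.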
other methods of calculating the similarity between the semantic feature and the registered feature, other than cosine similarity, are considered to be well within the scope of the present disclosure. fig. 3 illustrates an example of training a neural network model based on ecg training data. the operations in fig. 3 may be performed in the sequence and manner as shown, although the order of some operations may be changed or some of the operations omitted without departing from the spirit and scope of the illustrative examples described. many of the operations shown in fig. 3 may be performed in parallel or concurrently. in addition to the description of fig. 3 below, the above descriptions of figs. 1-2 are also applicable to fig. 3, and are incorporated herein by reference. thus, the above description may not be repeated here. a training device, for example, a training device 1100 of fig. 11, trains a neural network model to be used by the ecg authentication apparatus based on ecg training data. referring to fig. 3, in 310, the training device receives the ecg training data. in 320, the training device augments the ecg training data. through filtering, data segmentation based on a fiducial point, and an offset processing, the training device acquires an amount of ecg training data greater than an original amount of ecg training data. increasing the ecg training data in this way is also referred to as a data augmentation processing. when the neural network model is appropriately trained based on various ecg training data, the trained neural network model may extract a feature having an accurate identification skill from the ecg signal. in an example, operation 320 includes operations 321 through 326. in an example, operations 321 through 326 may be fully or partially performed. in 321, the training device filters the ecg training data using a filter. the training device removes noise included in the ecg training data through the filtering.
for example, the training device filters the ecg training data using a band pass filter having a fixed passband. also, the training device acquires ecg training data having various frequency bands using a plurality of band pass filters corresponding to different passbands. in 322, the training device detects at least one fiducial point from the filtered ecg training data. the training device extracts at least one of a peak point of the filtered ecg training data and at least one minimum point close to the peak point, and sets a detected point to be a fiducial point. an ecg signal has a plurality of maximum points and minimum points. among the maximum points and minimum points, minimum points are formed close to the peak point. the training device detects a fiducial point from the filtered ecg training data based on at least one of the peak point and the left and right minimum points close to the peak point, the peak point and the left minimum point close to the peak point, or the peak point and the right minimum point close to the peak point. in 323, the training device acquires a plurality of data segments having different lengths from the filtered ecg training data based on the at least one fiducial point. for example, the training device acquires, from the filtered ecg training data, data segments having different lengths defined in advance based on the peak point and data segments having different lengths defined in advance based on a minimum point close to the peak point. to augment the ecg training data, the training device selectively performs operations 325 and 326. the training device performs an offset and a normalization on the data segments in an additional processing of 325 and 326, respectively. the additional processing is also referred to as a disturbance processing. the ecg training data may be diversified through the additional processing. in 324, the training device selects a fiducial point to perform the offset on a current data segment.
in 325, the training device performs the offset on the current data segment based on the selected fiducial point. for example, when the peak point is selected as the fiducial point in the data segment, the training device acquires a plurality of data segments having different lengths based on the peak point. in 326, the training device normalizes the data segments to have the same length after the offset. the training device performs the data augmentation processing and the disturbance processing on the ecg training data and acquires the augmented ecg training data in various forms. a number of items of the augmented ecg training data corresponds to “the number of filters having different passbands*the number of fiducial points*the number of data segments having different lengths”. in general, the ecg training data may be insufficiently provided to train the neural network model. in augmenting the ecg training data, meaningful data increases relative to the original ecg training data. by training the neural network model based on a large amount of ecg training data including the meaningful data, a performance of the neural network model for extracting a distinctive semantic feature from an ecg may be improved. when ecg training data collected from one user is augmented, a collection time of the ecg training data or a heart rate may include a relatively large difference. through the data augmentation processing of the ecg training data, the training device increases an amount of meaningful data relative to the original ecg training data, and improves the performance of the neural network model with respect to the difference in the heart rate or a collection environment of the ecg training data. in 330, the training device trains the neural network model to be used for an ecg authentication based on the augmented training data. the neural network model is trained based on a deep training method.
the deep training method indicates a machine learning algorithm for attempting a high-level abstraction by combining various non-linear transformation schemes. the neural network model trained based on the deep training method includes an input layer, an output layer, and at least one hidden layer located between the input layer and the output layer. in one example, the training device trains the neural network model based on an identification signal and a verification signal. in an example, the identification signal indicates a signal used to identify an entity corresponding to a first ecg, and the verification signal indicates a signal used to verify whether the entity corresponding to the first ecg matches an entity corresponding to a second ecg. the identification signal and the verification signal are also referred to as supervision signals. the neural network model increases a difference in semantic feature between items of ecg training data of different entities and reduces a similarity in semantic feature between the entities in a feature space. in an example, operation 330 includes operations 331 and 332. in 331, the training device trains a plurality of candidate neural network models for each item of the augmented ecg training data. each of the trained candidate neural network models corresponds to a frequency passband, a type of a fiducial point, a number of fiducial points, and a length of a data segment. thus, the training device acquires candidate neural network models corresponding to “the number of filters having different passbands*the number of fiducial points*the number of data segments having different lengths”. for example, after the data augmentation processing and the disturbance processing are performed, 261 data segments are acquired: 261=29 (the number of filters having different passbands)*3 (the number of fiducial points)*3 (the number of data segments having different lengths).
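the counting rule for the augmented data, and hence for the candidate models, can be checked directly:

```python
num_filters = 29          # filters having different passbands (cf. table 1)
num_fiducial_points = 3   # point r, point q, and point s
num_segment_lengths = 3   # data segments having different lengths

num_candidates = num_filters * num_fiducial_points * num_segment_lengths
print(num_candidates)  # 261
```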
the training device acquires 261 trained candidate neural network models by training a candidate neural network model for each of the 261 data segments independently of one another. each candidate neural network model includes a plurality of layers. a node corresponding to each entity is located in an uppermost layer that is an output layer of the candidate neural network model. for example, a layer preceding a last layer of a neural network model is a fully connected layer, and the last layer is a soft-max layer including a plurality of nodes. in this example, the plurality of nodes corresponds respectively to the plurality of entities and thus, the number of nodes is the same as the number of entities. the training device trains the candidate neural network models based on a supervised training method. a supervision signal used for the supervised learning is a signal to be compared with a signal output from the candidate neural network model. also, the supervision signal is used for adjusting a weight of neurons included in the candidate neural network models. the supervision signal includes an identification signal and a verification signal. the identification signal is used for determining whether a result of identification performed on an entity of a type corresponding to the ecg training data through a nonlinear mapping of layers is valid. the verification signal is used for determining whether a result obtained by verifying whether two items of ecg training data belong to the same entity through the nonlinear mapping of the layers is valid. the training device compares a signal output from the uppermost layer of the candidate neural network model to the supervision signal. the training device adjusts the weight of the neurons included in the candidate neural network model such that a value of an error function indicating a comparison result is less than or equal to a threshold.
in this example, the error function may be a function used to measure a difference between the supervision signal and the signal output from the candidate neural network model. the training device uses, for example, a cross-entropy loss function as the error function. the training device adjusts the weight of the neurons included in the candidate neural network model by minimizing a cross-entropy according to equation 1. equation 1 represents a loss function corresponding to the identification signal. the training device adjusts the weight of the neurons to reduce the output value of equation 1. through this, the training device increases a difference between semantic features in ecg training data of different entities in a feature space and reduces a similarity between the semantic features. in equation 1, f denotes an output of a last fully connected layer of the candidate neural network model, and the output indicates a semantic feature extracted from the ecg training data, t denotes an index of an actual type of entity, θ_d denotes a parameter of the soft-max layer corresponding to the last layer of the candidate neural network model, and p_i denotes an actual probability distribution corresponding to the ecg training data. here, if i=t, p_i=1; otherwise, p_i=0. p̂_i denotes a probability distribution estimated using the candidate neural network model. in equation 1, a performance of the candidate neural network model increases according to an increase in a prediction probability of the actual entity type t and a decrease in a value of the loss function of equation 1. equation 2 represents a loss function corresponding to the verification signal. the training device extracts a feature from the same entity and performs a training process for verification. a pair of items of ecg training data extracted from the same entity is referred to as a positive sample, for example, y being equal to +1.
a pair of items of ecg training data extracted from different entities is referred to as a negative sample, for example, y being equal to zero. in equation 2, x_i and x_j each denote an item of the ecg training data, and f_i and f_j each denote a semantic feature corresponding to the ecg training data. if y_ij=1, a euclidean distance between the semantic features of x_i and x_j is minimized, and thus x_i and x_j belong to the same entity. if y_ij=0, the euclidean distance between the semantic features of x_i and x_j is driven to be greater than m, and thus x_i and x_j belong to different entities, respectively. here, m is a predetermined constant. the training device selects pairs of items of ecg training data until a parameter converges to a predetermined value. when a relatively great amount of ecg training data is provided, a probability that a selected pair of items of ecg training data is a positive sample is relatively low. to increase the probability that the selected items of ecg training data form a positive sample, the training device divides the ecg training data into a plurality of groups, for example, mini-batches, and searches each of the groups for a pair of positive samples. by searching each of the groups for the pair of positive samples, the training device generates a greater number of pairs of positive samples in comparison to a case in which one pair of positive samples is selected at one time. a loss function of each of the groups is obtained by correcting equation 1 and equation 2. the obtained loss function is represented by equation 3 as below. in equation 3, n denotes a total number of samples included in a group, d_i=σ_{j=1}^{n} y_ij, f=[f_1, f_2, . . . , f_n], and l=d−y, where d denotes the diagonal matrix having the d_i as its diagonal entries. λ denotes a weight between the two terms and tr( ) denotes a trace calculation. in the loss function loss, a portion associated with an i-th sample is represented by equation 4. the training device calculates a gradient of x_i using equation 4.
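the bodies of equations 1 and 2 are not reproduced above, but the identification loss (a cross-entropy over entity types) and the verification loss (a euclidean-distance term with margin m) they describe can be sketched as follows; the exact forms in the original equations may differ, and the soft-max parameterization here is an assumption:

```python
import numpy as np

def identification_loss(f, theta_d, t):
    # soft-max over entity types computed from the semantic feature f,
    # followed by cross-entropy against the actual entity type t
    logits = theta_d @ f
    p_hat = np.exp(logits - logits.max())
    p_hat /= p_hat.sum()
    # equals -sum_i p_i * log(p_hat_i) with p_i one-hot at index t
    return -np.log(p_hat[t])

def verification_loss(f_i, f_j, y_ij, m=1.0):
    # contrastive-style loss: pull positive pairs (y_ij = 1) together,
    # push negative pairs (y_ij = 0) at least a margin m apart
    d = np.linalg.norm(f_i - f_j)
    if y_ij == 1:
        return 0.5 * d ** 2
    return 0.5 * max(0.0, m - d) ** 2
```

for a positive pair the loss vanishes only when the two semantic features coincide; for a negative pair it vanishes as soon as their distance exceeds the margin m.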
after calculating the gradient, the training device enters an optimizing process. in one example, the training device updates each parameter of the candidate neural network model based on a back propagation method. the back propagation method includes a forward propagation and an error back propagation. a node of an input layer receives information input from the outside and transfers the received information to an intermediate layer. the intermediate layer is a data processing layer in the candidate neural network model and performs a data exchange. the intermediate layer includes at least one hidden layer. the forward propagation is completed after information to be transferred to each node of an output layer is processed by a last hidden layer. thereafter, the training device outputs a data processing result through the output layer. when an actual output of the candidate neural network model differs from an expected output, the training device performs the error back propagation to adjust a parameter, for example, the weight of neurons, of the candidate neural network model. the training device corrects a weight of each layer included in the candidate neural network model using an error gradient descent method based on a difference between the actual output and the expected output of the candidate neural network model. as such, the training device corrects weights starting from the output layer and proceeding through the hidden layer to the input layer. the training device adjusts the weight of each layer by repetitively performing the forward propagation and the error back propagation and completes the training process. the foregoing training process is repetitively performed for a preset number of training iterations or until the difference between the actual output and the expected output of the candidate neural network model is less than the threshold.
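the forward propagation / error back propagation cycle described above can be sketched numerically for a single hidden layer; squared error stands in for the error function, and the layer sizes, learning rate, and tolerance are assumed example values:

```python
import numpy as np

def train(x, target, hidden=8, lr=0.1, max_iters=1000, tol=1e-4, seed=0):
    rng = np.random.default_rng(seed)
    w1 = rng.normal(0.0, 0.5, (hidden, x.size))       # input -> hidden weights
    w2 = rng.normal(0.0, 0.5, (target.size, hidden))  # hidden -> output weights
    for _ in range(max_iters):
        # forward propagation through the intermediate (hidden) layer
        h = np.tanh(w1 @ x)
        out = w2 @ h
        err = out - target
        # stop when the error is below the threshold
        if 0.5 * (err ** 2).sum() < tol:
            break
        # error back propagation: gradient of the error w.r.t. each weight,
        # correcting weights from the output layer back to the input layer
        grad_w2 = np.outer(err, h)
        grad_h = w2.T @ err
        grad_w1 = np.outer(grad_h * (1.0 - h ** 2), x)
        w2 -= lr * grad_w2
        w1 -= lr * grad_w1
    return w1, w2
```

repeating the two propagations drives the actual output toward the expected output, as in the training process described above.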
in 332, the training device selects at least one candidate neural network model from the candidate neural network models based on accuracies of candidate semantic features extracted by the candidate neural network models. the training device determines a final neural network model to be used for ecg signal authentication based on the selected at least one candidate neural network model. for example, the training device selects a candidate neural network model that outputs a candidate semantic feature having a highest accuracy from the candidate neural network models. also, the training device selects at least one candidate semantic feature determined as having a relatively high accuracy from a plurality of candidate semantic features using a greedy algorithm. a candidate neural network model corresponding to the selected semantic feature is determined to be the final neural network model. for example, the training device uses a forward greedy algorithm and a backward greedy algorithm to select the candidate semantic feature having a high accuracy from the plurality of candidate semantic features. the training device calculates an accuracy of each of the candidate semantic features using the forward greedy algorithm and selects a candidate semantic feature having the highest accuracy from the candidate semantic features. from the remaining candidate semantic features, the training device then selects the candidate semantic feature that maximizes the accuracy when combined with the already selected semantic features. the foregoing process is repetitively performed until the number of selected semantic features reaches a first number of semantic features set in advance or the accuracy does not increase. using the backward greedy algorithm, the training device then removes, from the candidate semantic features selected using the forward greedy algorithm, the one candidate semantic feature whose removal maximizes the accuracy of the combination of the remaining selected candidate semantic features.
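the forward and backward greedy selection just described can be sketched as follows; `accuracy` is a hypothetical scoring callback (for example, validation accuracy of the combined candidate semantic features), and the stopping rules are a simplification of the ones stated in the text:

```python
def greedy_select(features, accuracy, max_keep=5):
    # forward greedy: repeatedly add the candidate feature that most
    # increases the accuracy of the selected combination
    selected, remaining = [], list(features)
    best_acc = 0.0
    while remaining and len(selected) < max_keep:
        cand = max(remaining, key=lambda f: accuracy(selected + [f]))
        acc = accuracy(selected + [cand])
        if acc <= best_acc:          # stop when the accuracy does not increase
            break
        selected.append(cand)
        remaining.remove(cand)
        best_acc = acc
    # backward greedy: drop a selected feature whenever the remaining
    # combination is at least as accurate without it
    improved = True
    while improved and len(selected) > 1:
        improved = False
        for f in list(selected):
            rest = [g for g in selected if g is not f]
            if accuracy(rest) >= best_acc:
                selected, best_acc, improved = rest, accuracy(rest), True
                break
    return selected
```

with a scoring callback that rewards two specific features and penalizes the others, the sketch keeps exactly those two, mirroring the selection of the final candidate semantic features.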
the foregoing process is repetitively performed until the number of remaining, non-removed semantic features reaches a second number of semantic features set in advance or the accuracy does not increase. in an example, the second number of semantic features is less than or equal to the first number of semantic features. figs. 4a through 4e illustrate examples of a filtering result of ecg training data. a training device filters ecg training data using a filter. for example, the training device filters the ecg training data using band pass filters having different frequency passbands. filtering results obtained in such a process are shown in figs. 4a through 4e. the training device removes noise from the ecg training data through a filtering process. using band pass filters having a plurality of different passbands, the training device acquires a greater amount of ecg training data including meaningful information when compared to a case in which a filter having a single fixed passband is used. a plurality of passbands of the band pass filters used in the filtering process is shown in table 1 as below.

table 1
1-10 hz, 1-15 hz, 1-20 hz, 1-30 hz, 1-40 hz, 1-50 hz
3-15 hz, 3-20 hz, 3-30 hz, 3-40 hz, 3-50 hz
5-15 hz, 5-20 hz, 5-30 hz, 5-40 hz, 5-50 hz
10-20 hz, 10-30 hz, 10-40 hz, 10-50 hz
15-25 hz, 15-35 hz, 15-45 hz
20-30 hz, 20-40 hz, 20-50 hz
25-35 hz, 25-45 hz
35-45 hz

as shown in table 1, a start frequency of a passband of the band pass filter is in a range between 1 hertz (hz) and 35 hz, and an end frequency is in a range between 10 hz and 50 hz. the passbands in table 1 may each be a frequency segment or a frequency passband allowing the ecg training data to incorporate a great amount of meaningful information while reducing the noise. as illustrated in figs. 4a through 4e, when the different passbands are used, different filtering results are obtained. according to an increase in a bandwidth of the passband, an amount of noise included in the filtered ecg training data also increases.
for example, the amount of noise included in a filtering result decreases in the order of a passband of 1 to 40 hz, a passband of 1 to 30 hz, a passband of 3 to 30 hz, and a passband of 3 to 15 hz, and thus the smoothness of the graph of the ecg training data increases in that order. also, the wider the bandwidth of the passband, the greater the amount of salient information on the ecg signal acquired through the filtering. by using band pass filters having different passbands, the training device acquires ecg training data including a greater amount of salient information than when a band pass filter having a single passband is used. fig. 5 illustrates an example of a fiducial point detected from ecg training data. referring to fig. 5, ecg training data has a plurality of maximum points or minimum points, and a point q and a point s are the minimum points closest to a point r corresponding to a peak point. a training device sets one of the point r, the point q, and the point s to be a fiducial point. in an example, the training device sets the point r to be the fiducial point. based on the point r, the training device acquires data segments having different lengths of, for example, 160 sample points, 190 sample points, or 220 sample points. the data segments acquired based on the point r include overall information associated with a single heartbeat. for example, one data segment having the length of 160 sample points corresponds to a length of "63 sample points before the point r + the point r + 96 sample points after the point r". in an example, the training device sets the point q or the point s to be the fiducial point and acquires data segments having a relatively small length corresponding to 30 through 50 sample points. the data segments acquired based on the point q or the point s include partial information associated with the heartbeat. figs.
6a through 6i illustrate examples of data segments acquired based on a fiducial point, having different lengths depending on whether the point r, the point q, or the point s is selected. referring to fig. 6a, a graph "q: l=30" represents a data segment having a length of 30 sample points based on the point q. as illustrated in figs. 6a through 6i, data segments acquired based on the point q and the point s have lengths less than the length of a data segment acquired based on the point r. a graph of a data segment acquired with the point q or the point s selected as the fiducial point represents a portion of a heartbeat; in contrast, a graph of a data segment acquired with the point r selected as the fiducial point represents the full information associated with the heartbeat. table 2 shows a plurality of data segments acquired based on the point r, the point q, and the point s.

table 2
segment | before r | after r | total length
r-160   |    63    |    96   |     160
r-190   |    63    |   126   |     190
r-220   |    73    |   146   |     220
r-30    |     9    |    20   |      30
r-50    |    19    |    31   |      50

segment | before q | after q | total length
q-30    |     9    |    20   |      30
q-50    |    19    |    30   |      50

segment | before s | after s | total length
s-30    |     9    |    20   |      30
s-50    |    19    |    30   |      50

figs. 7a through 7d illustrate examples of a result obtained by performing additional data processing on ecg training data. a training device selectively performs data augmentation processing for each data segment, using the length of the data segment and the fiducial point that served as the reference for acquiring it. the training device performs an offset on both ends of the data segment while maintaining the baseline of the ecg training data, and after the offset normalizes the data segments to have the same length. for example, when the point r is set as the fiducial point and a data segment is set to have 63 sample points before the point r and 96 sample points after the point r, the total length of the data segment is 160 sample points.
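a sketch of the segment cutting and the offset variants is below; the synthetic trace, the peak position, and the function name are illustrative assumptions, with the reference, first-case, and second-case lengths taken from the text:

```python
import numpy as np

def extract_segment(ecg, r_index, before, after):
    """cut `before` sample points, the fiducial point itself, and `after` sample points."""
    return ecg[r_index - before : r_index + after + 1]

# synthetic trace with one spike standing in for the r point.
ecg = np.zeros(500)
ecg[250] = 1.0
r = int(np.argmax(ecg))

reference = extract_segment(ecg, r, 63, 96)    # 63 + r + 96 = 160 sample points
first_case = extract_segment(ecg, r, 61, 93)   # offset variant, 155 sample points
second_case = extract_segment(ecg, r, 65, 99)  # offset variant, 165 sample points
```

the two offset variants are data segments of different lengths cut from the same trace; the normalization step described next brings them back to the common 160-point length.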
as shown in table 3, the training device acquires data segments having different lengths through additional data processing including a first case and a second case. in table 3, the numbers indicate numbers of sample points.

table 3
case        | before point r | after point r | total length
reference   |       63       |       96      |     160
first case  |       61       |       93      |     155
second case |       65       |       99      |     165

figs. 7a through 7c illustrate data segments having a length of 160 sample points, 155 sample points, and 165 sample points based on the point r as a fiducial point. the data segments have different lengths with respect to the same ecg training data: the training device performs an offset on the data segment of the reference case to generate the data segments of different lengths shown as the first case and the second case. referring to fig. 7d, the training device normalizes the data segments of figs. 7a through 7c such that they have the same length. the training device adjusts the lengths of the data segments of the first case and the second case such that each has 63 sample points before the point r, as in the reference case. for example, in the graphs of figs. 7b and 7c, the training device normalizes the 61 sample points and 65 sample points to 63 sample points. the training device also performs normalization such that the number of sample points after the point r corresponds to 96 sample points; for example, in the graphs of figs. 7b and 7c, it adjusts the 93 sample points and 99 sample points to 96 sample points. as shown in the graph of fig. 7d, each of the data segments of figs. 7a through 7c thereby keeps the length of 160 sample points. the training device similarly performs this normalization on data segments corresponding to different lengths or different fiducial points, and thereby acquires various items of ecg training data. in figs.
7a through 7d, l denotes the length of a data segment, corresponding to the number of sample points configuring the data segment. fig. 8a illustrates an example of a neural network model, and fig. 8b illustrates another example of a neural network model. in the example of fig. 8a, a pair of items of ecg training data is input to the input layer of a candidate neural network model. the lowermost layer is the input layer, and the number of nodes or neurons included in the input layer is the same as the size of a dimension of the ecg training data. the uppermost layer is the output layer, which is trained based on an identification signal. hidden layers are located between the input layer and the output layer, and the output of the last hidden layer indicates a learned semantic feature. a pair of features output from the last hidden layer corresponds to one authentication output, for example, +1 or −1. in the example of fig. 8b, a pair of items of ecg training data is input to two neural network models, and the weights of the hidden layers are shared between the two models. in the two neural network models, the output of the hidden layer preceding the last hidden layer is determined to be the semantic feature. fig. 9 illustrates an example of selecting an optimal candidate neural network model from candidate neural network models using a greedy algorithm. a training device acquires a plurality of candidate neural network models as a training result. because not all of the candidate neural network models may provide high performance, the training device selects a candidate neural network model providing the highest performance. the training device extracts a candidate semantic feature from the ecg training data using each of the candidate neural network models.
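the shared-weight pair model of fig. 8b can be sketched roughly as follows; the layer sizes, the tanh activation, and the cosine threshold are invented for illustration and are not taken from the patent:

```python
import numpy as np

rng = np.random.default_rng(0)
# shared hidden-layer weights: both items of the pair pass through the same layers.
w1 = rng.standard_normal((160, 64))
w2 = rng.standard_normal((64, 32))

def semantic_feature(segment):
    """forward pass; a hidden layer's output serves as the semantic feature."""
    return np.tanh(np.tanh(segment @ w1) @ w2)

pair = rng.standard_normal((2, 160))  # a pair of 160-point data segments
f_a, f_b = semantic_feature(pair[0]), semantic_feature(pair[1])
# one authentication output for the pair, e.g. +1 (same person) or -1 (different)
cos = float(np.dot(f_a, f_b) / (np.linalg.norm(f_a) * np.linalg.norm(f_b)))
output = 1 if cos > 0.5 else -1
```

in training, the weights would of course be learned from the identification signal rather than drawn at random; the sketch only shows how a pair of segments maps to one +1/−1 output through shared layers.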
for example, the training device selects at least one candidate semantic feature satisfying a reference from the plurality of candidate semantic features using a greedy algorithm, namely a forward greedy algorithm and a backward greedy algorithm. in the forward greedy algorithm, the training device evaluates the performance of each single candidate neural network model and selects the candidate neural network model corresponding to the highest performance; the evaluation of a candidate neural network model is performed based on the candidate semantic feature corresponding to that model. in the example of fig. 9, the performance of a candidate neural network model increases as the value of its model score decreases. based on the forward greedy algorithm, the training device selects a candidate neural network model 1 at round 1. hereinafter, a candidate neural network model is also referred to as a model. at round 2, the training device selects, from the remaining models (the model 2 through a model n), the model that obtains the highest performance when combined with the model 1 selected in the previous round, round 1. such a selecting process is repetitively performed until the number of selected candidate semantic features reaches a first number of features set in advance, or until the accuracy of the candidate semantic features no longer increases. the backward greedy algorithm is used to complement the forward greedy algorithm, because it is difficult to correct an error occurring at an earlier round using the forward greedy algorithm alone. for example, once the model 1 is selected at round 1, the model 1 is not removed from the finally selected candidate neural network models even if a combination of the model 2 selected at round 2 and a model 4 selected at round 3 provides the highest performance.
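the forward selection and backward removal described here can be sketched as below; the toy "models" and the size-penalized score function are invented stand-ins for candidate models and the accuracies of their semantic features:

```python
def forward_greedy(models, score, max_features):
    """add, one model per round, the model whose feature most improves the
    combined score; stop at the limit or when the score no longer increases."""
    selected, best, candidates = [], float("-inf"), list(models)
    while candidates and len(selected) < max_features:
        top_score, top = max((score(selected + [m]), m) for m in candidates)
        if top_score <= best:
            break
        best, selected = top_score, selected + [top]
        candidates.remove(top)
    return selected

def backward_greedy(selected, score, min_features):
    """remove the model whose removal maximizes the remaining combination's
    score, to correct errors made in earlier forward rounds."""
    kept = list(selected)
    while len(kept) > min_features:
        top_score, idx = max((score(kept[:i] + kept[i + 1:]), i) for i in range(len(kept)))
        if top_score <= score(kept):
            break
        kept.pop(idx)
    return kept

# toy scoring: each "model" stands in for a candidate semantic feature;
# a combination scores the sum of its values minus a size penalty.
models = [5, 3, 4, 1]
score = lambda combo: sum(combo) - 1.5 * len(combo)
chosen = backward_greedy(forward_greedy(models, score, 3), score, 1)
```

in this toy run the forward pass picks the three best-scoring models and the backward pass then checks whether dropping any selected model improves the combined score.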
in other words, when the model 1 is selected at round 1, the combination of the model 2 and the model 4 may not be selected although that combination provides the highest performance. using the backward greedy algorithm, the training device evaluates the performances of m models at every round. through such evaluation, the training device removes one model and evaluates the performances of combinations of the remaining models; the evaluation of a model is performed based on the candidate semantic feature output from the model. in this example, the model whose removal maximizes the performance is selected as the model to be removed. the foregoing process is repetitively performed until the number of unselected or non-removed remaining candidate semantic features reaches a second number of features set in advance, or until the accuracy of the candidate semantic features no longer increases. in an example, the second number of features is less than or equal to the first number of features. fig. 10 illustrates an example of an ecg authentication apparatus. referring to fig. 10, an ecg authentication apparatus 1000 includes a processor 1010, a memory 1020, and a display 1040. the processor 1010 performs at least one of the operations described with reference to figs. 1 and 2. for example, the processor 1010 performs ecg authentication based on an ecg signal of a user acquired through an ecg measurement device 1030; depending on the example, the ecg measurement device 1030 may be included in the ecg authentication apparatus 1000. the processor 1010 extracts a semantic feature from the ecg signal using a neural network model and performs the ecg authentication by comparing the extracted semantic feature with a registered feature registered in advance. the memory 1020 stores instructions for performing at least one of the operations described with reference to figs.
1 and 2, or stores results and data acquired through an operation of the ecg authentication apparatus 1000. in some examples, the memory 1020 includes a non-transitory computer-readable medium as described below. when an ecg authentication process is completed, the ecg authentication apparatus 1000 provides an ecg authentication result to a user using, for example, the display 1040, a speaker, or a vibration feedback sensor. in an example, the display 1040 may be a physical structure that includes one or more hardware components that provide the ability to render a user interface and/or receive user input. the display 1040 can encompass any combination of a display region, a gesture capture region, a touch sensitive display, and/or a configurable area. in an example, the display 1040 is an external peripheral device that may be attached to and detached from the ecg authentication apparatus 1000. the display 1040 may be a single-screen or a multi-screen display. a single physical screen can include multiple displays that are managed as separate logical displays, permitting different content to be displayed on separate displays although they are part of the same physical screen. the display 1040 may also be implemented as an eye glass display (egd), which includes one-eyed glass or two-eyed glasses. in an example, the ecg authentication apparatus 1000 generates a control signal allowing access by the user in response to a success in authentication, and restricts the access of the user in response to a failure in authentication. fig. 11 illustrates an example of a training apparatus. referring to fig. 11, a training apparatus 1100 includes a processor 1110 and a memory 1120. the processor 1110 performs at least one of the operations described with reference to figs. 3 through 9. for example, the processor 1110 trains a neural network model based on ecg training data.
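the comparison the apparatus 1000 performs between an extracted semantic feature and the registered feature can be sketched as below; the cosine-similarity measure and the threshold value are assumptions, as the patent does not fix a particular comparison rule:

```python
import numpy as np

def authenticate(extracted, registered, threshold=0.9):
    """grant access when the cosine similarity between the extracted semantic
    feature and the registered feature reaches the (assumed) threshold."""
    cos = float(np.dot(extracted, registered)
                / (np.linalg.norm(extracted) * np.linalg.norm(registered)))
    return cos >= threshold  # True -> allow access, False -> restrict access

registered = np.array([0.2, 0.8, -0.1, 0.5])
ok = authenticate(registered, registered)        # identical feature: success
denied = authenticate(-registered, registered)   # opposite feature: failure
```

a success would drive the control signal allowing the user's access; a failure would restrict it, as described above.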
the processor 1110 performs signal processing for increasing the ecg training data and trains each candidate neural network model based on the ecg training data. the processor 1110 selects at least one candidate neural network model to be used for ecg authentication from the candidate neural network models based on the accuracy of the candidate semantic features output by those models. the memory 1120 stores instructions for performing at least one of the operations described with reference to figs. 3 through 9, or stores data and results acquired through an operation of the training apparatus 1100. the apparatuses, units, modules, devices, and other components described herein are implemented by hardware components. examples of hardware components include controllers, sensors, generators, drivers, and any other electronic components known to one of ordinary skill in the art. in one example, the hardware components are implemented by one or more processors or computers. a processor or computer is implemented by one or more processing elements, such as an array of logic gates, a controller and an arithmetic logic unit, a digital signal processor, a microcomputer, a programmable logic controller, a field-programmable gate array, a programmable logic array, a microprocessor, or any other device or combination of devices known to one of ordinary skill in the art that is capable of responding to and executing instructions in a defined manner to achieve a desired result. in one example, a processor or computer includes, or is connected to, one or more memories storing instructions or software that are executed by the processor or computer. hardware components implemented by a processor or computer execute instructions or software, such as an operating system (os) and one or more software applications that run on the os, to perform the operations described herein.
the hardware components also access, manipulate, process, create, and store data in response to execution of the instructions or software. for simplicity, the singular term “processor” or “computer” may be used in the description of the examples described herein, but in other examples multiple processors or computers are used, or a processor or computer includes multiple processing elements, or multiple types of processing elements, or both. in one example, a hardware component includes multiple processors, and in another example, a hardware component includes a processor and a controller. a hardware component has any one or more of different processing configurations, examples of which include a single processor, independent processors, parallel processors, single-instruction single-data (sisd) multiprocessing, single-instruction multiple-data (simd) multiprocessing, multiple-instruction single-data (misd) multiprocessing, and multiple-instruction multiple-data (mimd) multiprocessing. the methods illustrated in figs. 2-3 that perform the operations described in this application are performed by computing hardware, for example, by one or more processors or computers, implemented as described above executing instructions or software to perform the operations described in this application that are performed by the methods. for example, a single operation or two or more operations may be performed by a single processor, or two or more processors, or a processor and a controller. one or more operations may be performed by one or more processors, or a processor and a controller, and one or more other operations may be performed by one or more other processors, or another processor and another controller. one or more processors, or a processor and a controller, may perform a single operation, or two or more operations. 
the instructions or software to control a processor or computer to implement the hardware components and perform the methods as described above, and any associated data, data files, and data structures, are recorded, stored, or fixed in or on one or more non-transitory computer-readable storage media. examples of a non-transitory computer-readable storage medium include read-only memory (rom), electrically erasable programmable read-only memory (eeprom), random-access memory (ram), dynamic random access memory (dram), static random access memory (sram), flash memory, non-volatile memory, cd-roms, cd-rs, cd+rs, cd-rws, cd+rws, dvd-roms, dvd-rs, dvd+rs, dvd-rws, dvd+rws, dvd-rams, bd-roms, bd-rs, bd-r lths, bd-res, blu-ray or optical disk storage, hard disk drive (hdd), solid state drive (ssd), magnetic tapes, floppy disks, magneto-optical data storage devices, optical data storage devices, hard disks, solid-state disks, and any other device that is configured to store the instructions or software and any associated data, data files, and data structures in a non-transitory manner and provide the instructions or software and any associated data, data files, and data structures to a processor or computer so that the processor or computer can execute the instructions. in one example, the instructions or software and any associated data, data files, and data structures are distributed over network-coupled computer systems so that the instructions and software and any associated data, data files, and data structures are stored, accessed, and executed in a distributed fashion by the processor or computer. while this disclosure includes specific examples, it will be apparent to one of ordinary skill in the art that various changes in form and details may be made in these examples without departing from the spirit and scope of the claims and their equivalents.
the examples described herein are to be considered in a descriptive sense only, and not for purposes of limitation. descriptions of features or aspects in each example are to be considered as being applicable to similar features or aspects in other examples. suitable results may be achieved if the described techniques are performed in a different order, and/or if components in a described system, architecture, device, or circuit are combined in a different manner, and/or replaced or supplemented by other components or their equivalents. therefore, the scope of the disclosure is defined not by the detailed description, but by the claims and their equivalents, and all variations within the scope of the claims and their equivalents are to be construed as being included in the disclosure.
114-077-128-552-568
FR
[ "MX", "FR", "BR", "CA", "EP", "NO", "DK", "CN", "AU", "US", "WO", "DE", "EA", "JP", "AT", "OA", "GC", "GB" ]
C04B7/02,C04B14/04,C04B14/24,C04B16/08,E21B33/14,C04B20/00,C04B22/14,C04B28/02,C04B28/08,C04B111/70,C09K8/46,C09K8/473,C09K8/58,E21B33/13,C04B38/08,C09K8/03
1999-07-29T00:00:00
1999
[ "C04", "E21", "C09" ]
a low-density and low-porosity cementing slurry for oil wells or the like.
the invention relates to a cement slurry for cementing an oil well or the like, the slurry having a density lying in the range 0.9 g/cm³ to 1.3 g/cm³, and being constituted by a solid fraction and a liquid fraction, having a porosity (volume ratio of liquid fraction over solid fraction) lying in the range 38% to 50%. the solid fraction is constituted by a mixture of lightweight particles, micro-cement and optionally portland cement and gypsum. such cements have remarkable mechanical properties due to their very low porosity in spite of having very low density.
1. a well cementing slurry, comprising a solid fraction and a liquid fraction and having a density in the range 0.9 g/cm³ to 1.3 g/cm³ and a porosity from about 38% to about 50%, wherein the solid fraction comprises: 2. a cementing slurry as claimed in claim 1, wherein the particles of the first component have approximately at least 100 times the mean particle size of the second component. 3. a cementing slurry as claimed in claim 1, wherein the first component has a mean particle size of greater than 100 microns. 4. a cementing slurry as claimed in claim 1, wherein the second component comprises portland cement. 5. a cementing slurry as claimed in claim 4, wherein the second component comprises a mixture of portland cement and microslag. 6. a cementing slurry as claimed in claim 1, wherein the first component comprises particles having a density of less than 0.8 g/cm³. 7. a cementing slurry as claimed in claim 1, wherein the first component comprises at least one particle selected from the group consisting of hollow microspheres, silico-aluminate cenospheres, hollow glass beads, sodium-calcium-borosilicate glass beads, ceramic microspheres, silica-alumina microspheres, plastics materials and polypropylene beads. 8. a cementing slurry as claimed in claim 1, further comprising at least one additive selected from the group consisting of dispersants, antifreezes, water retainers, cement setting accelerators, cement setting retarders and foam stabilizers. 9. a well cementing slurry, comprising a solid fraction and a liquid fraction and having a density in the range 0.9 g/cm³ to 1.3 g/cm³ and a porosity from about 38% to about 50%, wherein the solid fraction comprises: 10. a cementing slurry as claimed in claim 9, wherein the second component comprises particles having a density of less than 2 g/cm³. 11.
a cement slurry as claimed in claim 10, wherein the second component comprises particles having a density of less than 0.8 g/cm³. 12. a cementing slurry as claimed in claim 10, wherein the second component comprises at least one particle selected from the group consisting of hollow microspheres, silico-aluminate cenospheres, hollow glass beads, sodium-calcium-borosilicate glass beads, ceramic microspheres, silica-alumina microspheres, plastics materials and polypropylene beads. 13. a cementing slurry as claimed in claim 9, wherein the second component comprises a cement. 14. a cementing slurry as claimed in claim 13, wherein the cement comprises portland cement. 15. a cementing slurry as claimed in claim 9, wherein the particles of the first component have approximately at least 100 times the mean particle size of the third component. 16. a cementing slurry as claimed in claim 9, wherein the first component has a mean particle size of greater than 100 microns. 17. a cementing slurry as claimed in claim 9, wherein the third component comprises portland cement. 18. a cementing slurry as claimed in claim 17, wherein the third component comprises a mixture of portland cement and microslag. 19. a cementing slurry as claimed in claim 9, wherein the first component comprises particles having a density of less than 0.8 g/cm³. 20. a cementing slurry as claimed in claim 9, wherein the first component comprises at least one particle selected from the group consisting of hollow microspheres, silico-aluminate cenospheres, hollow glass beads, sodium-calcium-borosilicate glass beads, ceramic microspheres, silica-alumina microspheres, plastics materials and polypropylene beads. 21. a cementing slurry as claimed in claim 9, further comprising at least one additive selected from the group consisting of dispersants, antifreezes, water retainers, cement setting accelerators, cement setting retarders and foam stabilizers. 22.
a cementing slurry as claimed in claim 9, further comprising 0% to 30% by volume of the solid fraction of gypsum. 23. a well cementing slurry, comprising a solid fraction and a liquid fraction and having a density in the range 0.9 g/cm³ to 1.3 g/cm³ and a porosity from about 38% to about 50%, wherein the solid fraction comprises: 24. a cementing slurry as claimed in claim 23, wherein the slurry has a porosity of less than about 45%. 25. a cementing slurry as claimed in claim 23, wherein the first component comprises particles having a density of less than 0.8 g/cm³. 26. a cementing slurry as claimed in claim 23, wherein the first component comprises at least one particle selected from the group consisting of hollow microspheres, silico-aluminate cenospheres, hollow glass beads, sodium-calcium-borosilicate glass beads, ceramic microspheres, silica-alumina microspheres, plastics materials and polypropylene beads. 27. a cementing slurry as claimed in claim 23, further comprising at least one additive selected from the group consisting of dispersants, antifreezes, water retainers, cement setting accelerators, cement setting retarders and foam stabilizers.
the present invention relates to drilling techniques for oil wells, gas wells, water wells, geothermal wells, and the like. more precisely, the invention relates to cementing slurries of low density and low porosity. after an oil well or the like has been drilled, casing or coiled tubing is lowered down the borehole and is cemented over all or part of its height. cementing serves in particular to eliminate any fluid interchange between the various formation layers that the borehole passes through, preventing gas from rising via the annulus surrounding the casing, or indeed it serves to limit ingress of water into a well in production. naturally, another main objective of cementing is to consolidate the borehole and to protect the casing. while it is being prepared and then injected into the well so as to be placed in the zone that is to be cemented, the cementing slurry must present relatively low viscosity and it must have rheological properties that are practically constant. however, once it is in place, an ideal cement would rapidly develop high compression strength so as to enable other work on the well that is being built to start again quickly, and in particular so as to enable drilling to be continued. the density of the cement must be adjusted so that the pressure at the bottom of the well compensates at least for the pore pressure in the geological formations through which the well passes, so as to avoid any risk of eruption. as well as this lower limit on density, there is also an upper limit: the hydrostatic pressure generated by the column of cement, plus the head losses due to the circulation of the fluids being pumped, must remain below the fracturing pressure of the rocks in the section being cemented. certain geological formations are very fragile and require densities close to that of water or even lower. the risk of eruption diminishes with column height, so the density required for compensating pore pressure is then lower.
in addition, cementing a large height of column is advantageous since that makes it possible to reduce the number of sections that are cemented. after a section has been cemented, drilling must be restarted at a smaller diameter, so having a large number of sections requires a hole to be drilled near the surface that is of large diameter, thereby giving rise to excess cost due to the large volume of rock to be drilled and due to the large weight of steel required for the sections of casing, given their large diameters. all of those factors favor the use of cement slurries of very low density. the cement slurries in the most widespread use have densities of about 1900 kg/m³, which is about twice the density desired for certain deposits. to lighten them, the simplest technique is to increase the quantity of water while adding stabilizing additives (known as extenders) to the slurry for the purpose of avoiding particles settling and/or free water forming at the surface of the slurry. manifestly, that technique cannot get down to a density close to 1000 kg/m³. furthermore, hardened cements formed from such slurries have greatly reduced compression strength, a high degree of permeability, and poor adhesive capacity. for these reasons, that technique cannot be used to go below densities of about 1300 kg/m³ while still conserving good isolation between geological layers and providing sufficient reinforcement for the casing. another technique consists in lightening the cement slurry by injecting gas into it (generally air or nitrogen) before it sets. the quantity of air or nitrogen added is such as to reach the required density. it can be such as to form a cement foam. that technique provides performance that is a little better than the preceding technique since the density of gas is lower than that of water, so less needs to be added.
nevertheless, in oil industry applications density remains limited in practice to densities greater than 1100 kg/m³, even when starting with slurries that have already been lightened with water. above a certain quality of foam, i.e. a certain ratio of gas volume to volume of the foamed slurry, the stability of the foam falls off very quickly, the compression strength of the foam after it has set becomes too low, and its permeability becomes too high, thereby compromising durability in a hot aqueous medium which includes ions that are aggressive to a greater or lesser extent for cement. u.s. pat. no. 3,804,058 and gb 2,027,687a describe the use of hollow glass or ceramic microspheres to produce low density cement slurries for use in the oil and gas industry. an object of the present invention is to provide cementing slurries that are more particularly adapted to cementing oil wells or the like, having both low density and low porosity, and that are obtained without introducing gas. according to the invention, this object is achieved by a cement slurry for cementing an oil well or the like, the slurry having a density lying in the range 0.9 g/cm³ to 1.3 g/cm³, in particular in the range 0.9 g/cm³ to 1.1 g/cm³, and being constituted by a solid fraction and a liquid fraction, having a porosity (volume ratio of liquid fraction over solid fraction) lying in the range 38% to 50%, and preferably less than 45%. the solid fraction is preferably constituted by a mixture comprising: 60% to 90% (by volume) of lightweight particles having a mean size lying in the range 20 microns (µm) to 350 µm; 10% to 30% (by volume) of micro-cement having a mean particle diameter lying in the range 0.5 µm to 5 µm; 0 to 20% (by volume) of portland cement, having particles with a mean diameter lying in the range 20 µm to 50 µm; and 0 to 30% (by volume) of gypsum. the low porosities achieved make it possible to optimize mechanical properties and permeability.
by presenting mechanical properties that are much better than those of conventional lightened systems, and permeabilities that are lower, the leakproofing and adhesion properties of ultralightweight cement and the resistance of such formulations to chemical attack are thus much better than with the systems presently in use for low densities, even though the invention makes it possible to reach densities that are exceptionally low, and in particular that are lower than the density of water. in addition, slurries of the invention do not require gas, thus making it possible to avoid the logistics that would otherwise be required for manufacturing foamed cements. the method of the invention is characterized in that particulate additives are incorporated in the cement slurry, such that in combination with one another and with the other particulate components of the slurry, and in particular with the particles of micro-cement (or comparable hydraulic binder), they give rise to a grain-size distribution that significantly alters the properties of the slurry. the said particulate additives are organic or inorganic and they are selected for their low density. the low density is obtained by combining lightweight particles and cement (or a comparable hydraulic binder). nevertheless, rheological and mechanical properties will only be satisfactory if the size of the particles and the volume distribution thereof are selected in such a manner as to maximize the compactness of the solid mixture. for a solid mixture having two components (the lightweight particles and the micro-cement), this maximum compactness is generally obtained for a volume ratio of lightweight particles to micro-cement lying in the range 70:30 to 85:15, and preferably in the range 75:25 to 80:20, for lightweight particles selected to be of a size that is at least 100 times approximately the size of the particles of micro-cement, i.e. in general, particles that are greater than 100 µm in size.
These values can vary, in particular as a function of the greater or lesser dispersion in the grain-size distribution of the lightweight particles. Particles having a mean size greater than 20 microns can also be used, but performance is not as good. Particles greater than 350 microns are generally not used because of the narrowness of the annular gaps to be cemented.

Mixtures having three or more components are preferred, since they make it possible to obtain greater compactness provided the mean sizes of the various components are significantly different. For example, it is possible to use a mixture of lightweight particles having a mean size of 150 μm, lightweight particles having a mean size of 30 μm, and micro-cement, at a volume ratio lying close to 55:35:10, or departing a little from these optimum proportions, the mixture being constituted by 50% to 60% (by volume) of the first lightweight particles of mean diameter lying in the range 100 μm to 400 μm, 30% to 45% of the second lightweight particles of mean diameter lying in the range 20 μm to 40 μm, and 5% to 20% of micro-cement. Depending on the application, the fraction of lightweight particles of intermediate size can be replaced by Portland cement of ordinary size, in particular Class G Portland cement.

The term "micro-cement" is used in the invention to designate any hydraulic binder made up of particles of mean size of about 3 μm and including no, or at least no significant number of, particles of size greater than 10 μm. Such binders have a specific surface area per unit weight, as determined by the air permeability test, that is generally about 0.8 m²/g. The micro-cement can be constituted essentially by Portland cement, in particular a Class G Portland cement typically comprising about 65% lime, 25% silica, 4% alumina, 4% iron oxides, and less than 1% magnesium oxide, or equally well by a mixture of Portland micro-cement with micro-slag, i.e. a mixture making use essentially of compositions made from clinker comprising 45% lime, 30% silica, 10% alumina, 1% iron oxides, and 5% to 6% magnesium oxide (only the principal oxides are mentioned here, and these concentrations can naturally vary slightly as a function of the supplier).

For very low temperature applications (<30° C.), Portland micro-cement is preferable to a mixture of micro-cement and slag because of its reactivity. If a right-angle set is required, plaster (gypsum) can be used for all or some of the middle-sized particles.

The lightweight particles typically have a density of less than 2 g/cm³, and generally less than 0.8 g/cm³. By way of example, it is possible to use hollow microspheres, in particular of silico-aluminate, known as cenospheres, a residue obtained from burning coal and having a mean diameter of about 150 μm. It is also possible to use synthetic materials such as hollow glass beads; more particularly preferred are beads of sodium-calcium-borosilicate glass presenting high compression strength, or indeed microspheres of a ceramic, e.g. of the silica-alumina type. These lightweight particles can also be particles of a plastics material, such as beads of polypropylene.

In general, the density of the slurry is adjusted essentially as a function of which lightweight particles are chosen, but it is also possible to vary the ratio of water to solid (keeping it in the range 38% to 50% by volume), to vary the quantity of micro-cement or of comparable hydraulic binder (in the range 10% to 30%), and to add Portland cement of ordinary size as a replacement for a portion of the lightweight particles. Naturally, the slurry can also include one or more additives of types such as dispersants, antifreeze agents, water retainers, cement-setting accelerators or retarders, and/or foam stabilizers, which additives are usually added to the liquid phase or, where appropriate, incorporated in the solid phase.
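The point that density is adjusted essentially through the choice of lightweight particles, rather than through the water-to-solid ratio, can be checked numerically. A sketch with assumed particle densities (not patent figures): because the solid blend's density is close to that of water, sweeping porosity over the permitted 38% to 50% barely moves the slurry density, while switching the lightweight particle grade moves it substantially.

```python
def slurry_density(porosity, rho_solid, rho_liquid=1.00):
    # porosity = liquid volume / total slurry volume
    return porosity * rho_liquid + (1 - porosity) * rho_solid

def blend_density(fractions_and_densities):
    return sum(v * rho for v, rho in fractions_and_densities)

# 90% lightweight particles + 10% micro-cement (assumed 3.0 g/cm3),
# for two hypothetical lightweight grades (0.60 and 0.75 g/cm3):
for rho_lw in (0.60, 0.75):
    rho_s = blend_density([(0.90, rho_lw), (0.10, 3.00)])
    d50 = slurry_density(0.50, rho_s)  # wettest permitted mix
    d38 = slurry_density(0.38, rho_s)  # driest permitted mix
    print(rho_lw, round(d38, 3), round(d50, 3))
```

With these numbers, the full porosity sweep changes the estimated density by under 0.02 g/cm³, whereas swapping the lightweight grade shifts it by roughly 0.08 g/cm³, consistent with the design guidance above.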
Formulations made in accordance with the invention have mechanical properties that are significantly better than those of foamed cements having the same density. Compression strengths are very high and porosities very low. As a result, permeabilities are smaller by several orders of magnitude than those of same-density foamed cements, thereby conferring remarkable durability on such systems. The method of the invention also considerably simplifies the cementing operation, since it avoids any need for logistics of the kind required for foaming. Slurries prepared in accordance with the invention have the further advantage of enabling all of the characteristics of the slurry (rheology, setting time, compression strength, etc.) to be determined in advance for the slurry as placed in the well, unlike foamed slurries, where certain parameters (such as setting time) can be measured only on the slurry prior to the introduction of gas.

The following examples illustrate the invention without limiting its scope.

Example 1

Low-density and low-porosity slurries can be obtained from mixtures of particles of two or three (or even more) different sizes, so long as the packing volume fraction (PVF) is optimized. The properties of three slurries prepared in accordance with the invention are described below and compared with those of a conventional low-density extended slurry and of a foamed system.

Slurry A: a mixture of powders was prepared. It comprised 55% by volume of hollow spheres taken from cenospheres having a mean size of 150 μm (specific gravity 0.75); 35% by volume of glass microspheres having a mean size of 30 μm; and 10% by volume of a mixture of Portland micro-cement and slag having a mean size of about 3 μm.
The microspheres used are sold by 3M under the name Scotchlite S60/10,000; such microspheres have a density of 0.6 g/cm³ and a grain-size distribution such that 10% of the particles (by volume) have a size of less than 15 μm, 50% less than 30 μm, and 90% less than 70 μm. These particles were selected in particular because of their high compression strength (90% of the particles withstand isostatic compression of 68.9 MPa, or 10,000 psi). Water and the following additives were mixed with this powder so as to ensure that the volume percentage of liquid in the slurry was 42%: a water retainer based on 2-acrylamido-2-methylpropane sulfonic acid (AMPS) at 0.2% (percent by weight of powder, i.e. all of the solid particles taken together: micro-cement, microspheres and cenospheres for this slurry A); an antifoaming agent at 0.03 gallons per bag of powder; and a super-plasticizer based on polynaphthalene sulfonate at 0.07 gallons per bag of powder. It should be observed that a "bag" of powder is defined by analogy with bags of cement as being a bag containing 45.359 kg of mixture; in other words, 1 gpb ≈ 0.03834 liters of additive per kg of mixture.

Slurry B: a mixture of powders was prepared. It comprised 78% by volume of hollow spheres obtained from cenospheres having a mean size of 150 μm and a density of 0.63 g/cm³, and 22% by volume of a mixture of Portland micro-cement and slag having a mean size of about 3 μm. Water and the following additives were mixed with the powder so that the volume percentage of liquid in the slurry was 42%: a water retainer based on AMPS polymer at 0.2% by weight of powder; an antifoaming agent at 0.03 gallons per bag of powder; and a super-plasticizer based on polynaphthalene sulfonate at 0.1 gallons per bag of powder.

Slurry C: a mixture of powders was prepared.
It comprised 78% by volume of Scotchlite glass microspheres having a mean size of 30 μm, and 22% by volume of a mixture of Portland micro-cement and slag having a mean size of about 3 μm. Water and the following additives were mixed with said powder so that the volume percentage of liquid in the slurry was 45%: a water retainer based on AMPS polymer at 0.2% by weight of powder; an antifoaming agent at 0.03 gallons per bag of powder; and a super-plasticizer based on polynaphthalene sulfonate at 0.145 gallons per bag of powder.

Slurry D: a mixture of powders was prepared. It comprised 78.4% by volume of hollow spheres derived from cenospheres having a mean size of 150 μm (density 0.72 g/cm³) and 21.6% by volume of Class G Portland cement. Water and the following additive were mixed with said powder so that the volume percentage of liquid in the slurry was 57%: an antifoaming agent at 0.03 gallons per bag of powder.

Slurry E: a conventional slurry of density 1900 kg/m³ was prepared based on a Class G Portland cement. The slurry was foamed with a foam quality of 50% so as to obtain a slurry whose final density was 950 kg/m³.

Slurry      A            B            C            D           E
Density     924 (7.7)    1068 (8.9)   1056 (8.8)   1130 (9.4)  950 (7.9)
Porosity    42%          42%          45%          57%         78%*
PV          87           68           65           -           -
TY          3.7 (7.7)    8.6 (18)     3.4 (7.2)    -           -
CS          11.7 (1700)  19.3 (2800)  14.5 (2100)  2.48 (360)  4.62 (670)

Densities are expressed in kg/m³ (and in pounds per gallon in parentheses). Rheology is expressed by a flow threshold TY in pascals (and in pounds-force per 100 square feet in parentheses), and by a plastic viscosity PV in mPa·s (centipoise), using the Bingham fluid model. These parameters were determined at ambient temperature. CS means compression strength after 24 hours for cement set at 60° C. (140° F.) at a pressure of 6.9 MPa (1000 psi), and it is expressed in MPa (and in pounds per square inch in parentheses).
* In this case, porosity was calculated as the volume of gas + water over the total volume of the slurry.

It can be seen that, for the slurries prepared in accordance with the invention, compression strength is particularly high for densities that are so low, and that these slurries present excellent rheology in spite of their low porosity.

Example 2

For slurries having a density greater than 8 pounds per gallon (ppg), a portion of the lightweight particles can be substituted by Class G cement.

Slurry A: a mixture of powders was prepared. It comprised 55% by volume of hollow spheres derived from cenospheres having a mean size of 150 μm, 35% by volume of Scotchlite glass microspheres having a mean size of 30 μm, and 10% by volume of a mixture of Portland micro-cement and slag having a mean size of about 3 μm. Water and the following additives were mixed with the powder so that the volume percentage of the liquid in the slurry was 42%: a water retainer based on AMPS polymer at 0.2% by weight of powder; an antifoaming agent at 0.03 gallons per bag of powder; and a super-plasticizer based on polynaphthalene sulfonate at 0.07 gallons per bag of powder.

Slurry B: a mixture of powders was prepared. It comprised 55% by volume of hollow spheres derived from cenospheres having a mean size of 150 μm, 25% by volume of Scotchlite glass microspheres having a mean size of 30 μm, 10% by volume of a Class G Portland cement, and 10% by volume of a mixture of Portland micro-cement and slag having a mean size of about 3 μm. Water and the following additives were mixed with the powder so as to obtain a volume percentage of liquid in the slurry of 42%: a water retainer based on AMPS polymer at 0.2% by weight of powder; an antifoaming agent at 0.03 gallons per bag of powder; and a super-plasticizer based on polynaphthalene sulfonate at 0.01 gallons per bag of powder.

Slurry C: a mixture of powders was prepared.
It comprised 55% by volume of hollow spheres derived from cenospheres having a mean size of 150 μm, 20% by volume of Scotchlite glass microspheres having a mean size of 30 μm, 15% by volume of a Class G Portland cement, and 10% by volume of a mixture of Portland micro-cement and slag having a mean size of about 3 μm. Water and the following additives were mixed with the powder so that the volume percentage of liquid in the slurry was 42%: a water retainer based on AMPS polymer at 0.2% by weight of powder; an antifoaming agent at 0.03 gallons per bag of powder; and a super-plasticizer based on polynaphthalene sulfonate at 0.01 gallons per bag of powder.

Slurry D: a mixture of powders was prepared. It comprised 55% by volume of hollow spheres derived from cenospheres having a mean size of 150 μm, 15% by volume of Scotchlite glass microspheres having a mean size of 30 μm, 20% by volume of a Class G Portland cement, and 10% by volume of a mixture of Portland micro-cement and slag having a mean size of about 3 μm. Water and the following additives were mixed with the powder so as to obtain a volume percentage of liquid in the slurry of 42%: a water retainer based on AMPS polymer at 0.2% by weight of powder; an antifoaming agent at 0.03 gallons per bag of powder; and a super-plasticizer based on polynaphthalene sulfonate at 0.01 gallons per bag of powder.

Densities are expressed in kg/m³ (and in pounds per gallon in parentheses). Rheology is expressed by the flow threshold TY in pascals (and in pounds-force per 100 square feet in parentheses), and by the plastic viscosity PV in mPa·s (centipoise), using the Bingham fluid model. These parameters were determined at ambient temperature. CS stands for compression strength after 24 hours and after 48 hours for cement set at 60° C. at a pressure of 6.9 MPa (1000 psi), expressed in MPa (and in pounds per square inch in parentheses).
Slurry      A            B            C            D
Density     924 (7.7)    1068 (8.9)   1140 (9.5)   1218 (10.15)
Porosity    42%          42%          42%          42%
PV          87           90           100          109
TY          7.7          8.8          9.0          11.2
CS (24 h)   7.58 (1100)  18.3 (2650)  19.7 (2850)  20.7 (3000)
CS (48 h)   9.0 (1300)   19.0 (2750)  29.7 (4300)  28.3 (4100)

Adding Portland cement as a portion of the medium-sized particles makes it possible to cover the entire range of densities from 8 ppg to 11 ppg and significantly improves compression strength. This addition does not disturb the good rheological properties in any way.

Example 3

For slurries having a density greater than 8 ppg, a portion of the lightweight particles can be substituted by micro-cement or by a mixture of micro-cement and slag.

Slurry A: a mixture of powders was prepared. It comprised 55% by volume of hollow spheres derived from cenospheres having a mean size of 150 μm, 30% by volume of Scotchlite glass microspheres having a mean size of 30 μm, and 15% by volume of a mixture of Portland micro-cement and slag having a mean size of about 3 μm. Water and the following additives were mixed with the powder so that the volume percentage of the liquid in the slurry was 42%: a water retainer based on AMPS polymer at 0.2% by weight of powder; an antifoaming agent at 0.03 gallons per bag of powder; and a super-plasticizer based on polynaphthalene sulfonate at 0.07 gallons per bag of powder.

Slurry B: a mixture of powders was prepared. It comprised 55% by volume of hollow spheres derived from cenospheres having a mean size of 150 μm, 25% by volume of Scotchlite glass microspheres having a mean size of 30 μm, and 20% by volume of a mixture of Portland micro-cement and slag having a mean size of about 3 μm.
Water and the following additives were mixed with the powder so that the volume percentage of the liquid in the slurry was 42%: a water retainer based on AMPS polymer at 0.2% by weight of powder; an antifoaming agent at 0.03 gallons per bag of powder; and a super-plasticizer based on polynaphthalene sulfonate at 0.07 gallons per bag of powder.

Slurry      A            B
Density     990 (8.25)   1056 (8.8)
Porosity    42%          42%
CS (24 h)   11.2 (1630)  21.4 (3100)
CS (48 h)   11.7 (1700)  22.1 (3200)

Densities are expressed in kg/m³ (and in pounds per gallon in parentheses). CS means compression strength after 24 hours and 48 hours for cement set at 60° C. under a pressure of 6.9 MPa (1000 psi), expressed in MPa (and in pounds per square inch in parentheses).

Increasing the content of the micro-cement and slag mixture gives rise to exceptional compression strength performance at 9 ppg.

Example 4

Depending on the desired mechanical properties (flexibility, ability to withstand high pressures), various lightweight particles can be used so long as the PVF is optimized.

Slurry A: a mixture of powders was prepared. It comprised 55% by volume of hollow spheres derived from cenospheres having a mean size of 150 μm, 30% by volume of hollow spheres derived from cenospheres having a mean size of 45 μm, and 15% by volume of a mixture of Portland micro-cement and slag having a mean size of about 3 μm. Water and the following additives were mixed with the powder so that the volume percentage of the liquid in the slurry was 42%: a water retainer based on AMPS polymer at 0.2% by weight of powder; an antifoaming agent at 0.03 gallons per bag of powder; and a super-plasticizer based on polynaphthalene sulfonate at 0.07 gallons per bag of powder.

Slurry B: a mixture of powders was prepared.
It comprised 55% by volume of particles of polypropylene having a mean size of 300 μm, 30% by volume of Scotchlite glass microspheres having a mean size of 30 μm, and 15% by volume of a mixture of Portland micro-cement and slag having a mean size of about 3 μm. Water and the following additives were mixed with the powder so that the volume percentage of the liquid in the slurry was 42%: a retarder based on purified lignosulfonates at 0.22% by weight of powder; a water retainer based on AMPS polymer at 0.2% by weight of powder; and a super-plasticizer based on polynaphthalene sulfonate at 0.05 gallons per bag of powder.

Slurry      A            B
Density     990 (8.25)   1068 (8.9)
Porosity    42%          42%
PV          93           116
TY          20           9.3
CS (24 h)   18.3 (2640)  10.3 (1500)*
CS (48 h)   18.7 (2700)  22.1 (3200)*

Densities are expressed in kg/m³ (and in pounds per gallon in parentheses). Rheology is expressed by the flow threshold TY in pascals (and in pounds-force per 100 square feet in parentheses), and by the plastic viscosity PV in mPa·s (centipoise), using the Bingham fluid model. These parameters were determined at ambient temperature. CS means compression strength at 24 hours and at 48 hours for cement set at 60° C. under 6.9 MPa (1000 psi), expressed in MPa (and in pounds per square inch in parentheses).
* For slurry B, cement set at 104° C. (220° F.) under a pressure of 20.7 MPa (3000 psi), expressed in MPa (and in psi in parentheses).

Example 5

For low-temperature applications, the mixture of micro-cement and slag can be substituted by pure micro-cement, or plaster can be added to replace the medium-sized particles. We have compared a formulation of the invention with a foamed plaster formulation.

Slurry A: a mixture of powders was prepared.
It comprised 42.7% by volume of hollow spheres derived from cenospheres having a mean size of 150 μm, 20% by volume of hollow spheres derived from cenospheres having a mean size of 45 μm, 27.3% by volume of gypsum, and 10% by volume of a mixture of Portland micro-cement and slag having a mean size of about 3 μm. Water and the following additives were mixed with the powder so that the volume percentage of liquid in the slurry was 42%: a retarder based on purified lignosulfonates at 0.05 gallons per bag of powder; the water retainer of the preceding examples at 0.04 gallons per bag of powder; and an antifoaming agent at 0.03 gallons per bag of powder.

Slurry B (reference): this slurry corresponds to the prior art. A mixture of powders was prepared. It comprised 40% by volume of Class G cement and 60% by volume of plaster. Water and additives were mixed with the powder so that the density of the slurry was 1900 kg/m³ (15.8 ppg). To foam this slurry, entirely conventional wetting agents were added: D138 and F052.1 in a 1:1 ratio. The quantity added depends on the foam quality; it was adjusted so as to obtain a slurry having a density of 1320 kg/m³ (11 pounds per gallon).

Slurry A (of the invention):
  Density: 1218 (10.15)
  Foam quality Q: 0
  PV: 112
  TY: 6.7
  CS (at 12 hours, cement set at 4° C. under 6.9 MPa): 2.41 (350)
  CS (at 24 hours, cement set at 25° C. under 6.9 MPa): 14.8 (2150)

Slurry B (reference):
  Density: 1320 (11)
  Foam quality Q: 30%
  CS (at 24 hours, cement set at 18° C. under atmospheric pressure): 2.96 (430)
  CS (at 48 hours, cement set at 18° C. under atmospheric pressure): 4.55 (660)

Densities are expressed in kg/m³ (and in pounds per gallon in parentheses). Rheology is expressed by the flow threshold TY in pascals (and in pounds-force per 100 square feet in parentheses), and by the plastic viscosity PV in mPa·s (centipoise), using the Bingham fluid model. These parameters were determined at ambient temperature.
CS stands for compression strength under the conditions stated in the table, expressed in MPa (and in pounds per square inch in parentheses).
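The paired units used throughout the tables above (kg/m³ with pounds per gallon, MPa with psi, Pa with lbf/100 ft²) can be cross-checked against standard conversion factors. A small sketch; the factors below are standard values, not figures from the patent:

```python
# Standard conversion factors:
PPG_TO_KG_M3 = 119.826      # 1 lb/US gal in kg/m3
PSI_TO_MPA = 0.00689476     # 1 psi in MPa
LBF_100FT2_TO_PA = 0.47880  # 1 lbf/100 ft2 in Pa

# Spot-check a few paired values quoted in the tables:
print(round(7.7 * PPG_TO_KG_M3))         # ~923 kg/m3 (table: 924)
print(round(1700 * PSI_TO_MPA, 1))       # 11.7 MPa   (table: 11.7)
print(round(7.7 * LBF_100FT2_TO_PA, 1))  # 3.7 Pa     (table: 3.7)
```

The close agreement of the flow-threshold pairs (e.g. 7.7 lbf/100 ft² against 3.7 Pa) confirms that the rheology figures are quoted in pounds-force per 100 square feet, the usual oilfield unit.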
114-292-205-292-854
US
[ "WO", "CN", "US", "EP" ]
H04W4/029,G06F3/038,G06F3/04886,G06F3/04895,G06F3/04817,G06F3/0482,G06F3/04883,G06F3/14,G06F3/0488,G06F3/0489
2020-09-25T00:00:00
2020
[ "H04", "G06" ]
user interfaces for tracking and finding items
in some embodiments, an electronic device presents user interfaces for defining identifiers for remote locator objects. in some embodiments, an electronic device locates a remote locator object. in some embodiments, an electronic device provides information associated with a remote locator object. in some embodiments, an electronic device displays notifications associated with a trackable device. in some embodiments, a first device generates alerts.
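The identifier-editing behavior summarized above (and claimed below) splits a remote locator object's identifier into a graphic first portion and a textual second portion, with a tap on either portion routed to its own editor. A minimal, hypothetical sketch of that dispatch logic (the names and structure here are illustrative only, not taken from any real implementation):

```python
from dataclasses import dataclass

@dataclass
class Identifier:
    graphic: str  # first portion, e.g. an emoji
    text: str     # second portion, free-form characters

def editor_for(tap_target: str) -> str:
    """Return which editor UI to present for a tap on the identifier row."""
    if tap_target == "first_portion":
        return "graphic_picker"  # e.g. a soft emoji keyboard
    if tap_target == "second_portion":
        return "text_keyboard"   # e.g. a soft text keyboard
    raise ValueError(f"unknown tap target: {tap_target}")

print(editor_for("first_portion"))   # graphic_picker
print(editor_for("second_portion"))  # text_keyboard
```

This mirrors the two branches of claim 1: the same input event resolves to a different editor depending on which representation of the identifier it selects.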
claims 1. a method comprising: at an electronic device in communication with one or more wireless antenna, a display generation component, and one or more input devices: while displaying, via the display generation component, a respective user interface for inputting an identifier for a remote locator object, wherein the respective user interface includes a representation of a first portion of the identifier and a representation of a second portion of the identifier, receiving, via the one or more input devices, a respective input; and in response to receiving the respective input: in accordance with a determination that the respective input corresponds to selection of the representation of the first portion of the identifier, displaying, via the display generation component, a first user interface for selecting a graphic for the first portion of the identifier; and in accordance with a determination that the respective input corresponds to selection of the representation of the second portion of the identifier, displaying, via the display generation component, a second user interface for selecting one or more text characters for the second portion of the identifier. 2. the method of claim 1, wherein the first user interface is displayed in a first portion of the respective user interface, and the second user interface is displayed in the first portion of the respective user interface. 3. 
the method of any of claims 1-2, wherein the respective user interface includes a respective user interface element for selecting from a plurality of predefined options for the second portion of the identifier for the remote locator object, the method further comprising: in response to receiving the respective input, and in accordance with a determination that the respective input is directed to the respective user interface element: in accordance with a determination that the respective input corresponds to a request to select a first respective predefined option of the plurality of predefined options for the second portion for the identifier, displaying: a first graphic in the representation of the first portion of the identifier that corresponds to the first respective predefined option; and first text corresponding to the first respective predefined option in the representation of the second portion of the identifier; and in accordance with a determination that the respective input corresponds to a request to select a second respective predefined option of the plurality of predefined options for the second portion of the identifier, displaying: a second graphic, different from the first graphic, in the representation of the first portion of the identifier that corresponds to the second respective predefined option; and second text corresponding to the second respective predefined option in the representation of the second portion of the identifier, wherein the second text is different from the first text. 4. 
the method of claim 3, wherein: the first text corresponding to the first respective predefined option in the representation of the second portion of the identifier are displayed concurrently with text that is selected based on a name of a user of the electronic device, and the second text corresponding to the second respective predefined option in the representation of the second portion of the identifier are displayed concurrently with the text that is selected based on the name of the user of the electronic device. 5. the method of any of claims 1-4, wherein the first user interface includes a soft emoji keyboard for selecting the graphic for the first portion of the identifier. 6. the method of any of claims 1-5, wherein the second user interface includes a text keyboard for selecting the one or more text characters for the second portion of the identifier. 7. the method of any of claims 1-6, wherein: the second user interface includes a selectable option that is selectable to transition from the second user interface to the first user interface, and the first user interface does not include a selectable option that is selectable to transition from the first user interface to the second user interface. 8. 
the method of any of claims 1-7, wherein the respective user interface includes a respective user interface element for selecting from a plurality of predefined options for the identifier for the remote locator object, the method further comprising: in response to receiving the respective input, and in accordance with a determination that the respective input is directed to the respective user interface element: in accordance with a determination that the respective input corresponds to a request to select a first respective predefined option of the plurality of predefined options for the second portion of the identifier, displaying, in the respective user interface, first text corresponding to the first respective predefined option in the representation of the second portion of the identifier appended to a name of the user of the electronic device; and in accordance with a determination that the respective input corresponds to a request to select a second respective predefined option of the plurality of predefined options for the second portion of the identifier, displaying, in the respective user interface, second text corresponding to the second respective predefined option in the representation of the second portion of the identifier appended to the name of the user of the electronic device, wherein the second text is different from the first text. 9. the method of any of claims 1-8, wherein the respective user interface is displayed in response to selection of a respective option included in a respective user interface element, the respective option corresponding to a request to provide a non-predefined identifier for the remote locator object, and the respective user interface element further includes a plurality of options for selecting from a plurality of predefined options for the second portion of the identifier for the remote locator object. 10.
the method of any of claims 1-9, wherein the respective user interface is displayed in response to selection of a selectable option displayed in a user interface associated with the remote locator object. 11. the method of any of claims 1-10, further comprising: in response to receiving the respective input: in accordance with the determination that the respective input corresponds to selection of the representation of the first portion of the identifier, visually distinguishing the representation of the first portion of the identifier from the representation of the second portion of the identifier; and in accordance with the determination that the respective input corresponds to selection of the representation of the second portion of the identifier, visually distinguishing the representation of the second portion of the identifier from the representation of the first portion of the identifier. 12. the method of any of claims 1-11, further comprising: displaying, via the display generation component, a map user interface that includes a representation of a map that indicates locations of one or more objects, including the remote locator object, wherein the map user interface includes the representation of the first portion of the identifier of the remote locator object displayed at a location on the representation of the map that corresponds to a current location of the remote locator object. 13. 
the method of any of claims 1-12, further comprising: displaying, via the display generation component, a map user interface that includes a representation of a map that indicates locations of one or more objects, including the remote locator object, wherein: in accordance with a determination that a plurality of objects, including a first object and a second object, satisfy one or more criteria, the map user interface includes a respective representation of the plurality of objects without including a first representation of the first object and a second representation of the second object, and in accordance with a determination that the plurality of objects do not satisfy the one or more criteria, the map user interface includes the first representation of the first object and the second representation of the second object. 14. the method of claim 13, wherein the one or more criteria include a criterion that is satisfied when the plurality of objects are within a threshold distance of a respective electronic device. 15. the method of any of claims 13-14, wherein the one or more criteria include a criterion that is satisfied when the plurality of objects are in wireless communication with a respective electronic device. 16. the method of any of claims 13-15, further comprising: while displaying the respective representation of the plurality of objects in the map user interface, receiving, via the one or more input devices, selection of the respective representation of the plurality of objects; and in response to receiving the selection of the respective representation of the plurality of objects, displaying, in the map user interface, the first representation of the first object and the second representation of the second object. 17. 
An electronic device, comprising: one or more processors; memory; and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions for: while displaying, via a display generation component, a respective user interface for inputting an identifier for a remote locator object, wherein the respective user interface includes a representation of a first portion of the identifier and a representation of a second portion of the identifier, receiving, via one or more input devices, a respective input; and in response to receiving the respective input: in accordance with a determination that the respective input corresponds to selection of the representation of the first portion of the identifier, displaying, via the display generation component, a first user interface for selecting a graphic for the first portion of the identifier; and in accordance with a determination that the respective input corresponds to selection of the representation of the second portion of the identifier, displaying, via the display generation component, a second user interface for selecting one or more text characters for the second portion of the identifier.

18. A non-transitory computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by one or more processors of an electronic device, cause the electronic device to perform a method comprising: while displaying, via a display generation component, a respective user interface for inputting an identifier for a remote locator object, wherein the respective user interface includes a representation of a first portion of the identifier and a representation of a second portion of the identifier, receiving, via one or more input devices, a respective input; and in response to receiving the respective input: in accordance with a determination that the respective input corresponds to selection of the representation of the first portion of the identifier, displaying, via the display generation component, a first user interface for selecting a graphic for the first portion of the identifier; and in accordance with a determination that the respective input corresponds to selection of the representation of the second portion of the identifier, displaying, via the display generation component, a second user interface for selecting one or more text characters for the second portion of the identifier.

19. An electronic device, comprising: one or more processors; memory; and means for, while displaying, via a display generation component, a respective user interface for inputting an identifier for a remote locator object, wherein the respective user interface includes a representation of a first portion of the identifier and a representation of a second portion of the identifier, receiving, via one or more input devices, a respective input; and means for, in response to receiving the respective input: in accordance with a determination that the respective input corresponds to selection of the representation of the first portion of the identifier, displaying, via the display generation component, a first user interface for selecting a graphic for the first portion of the identifier; and in accordance with a determination that the respective input corresponds to selection of the representation of the second portion of the identifier, displaying, via the display generation component, a second user interface for selecting one or more text characters for the second portion of the identifier.

20. An information processing apparatus for use in an electronic device, the information processing apparatus comprising: means for, while displaying, via a display generation component, a respective user interface for inputting an identifier for a remote locator object, wherein the respective user interface includes a representation of a first portion of the identifier and a representation of a second portion of the identifier, receiving, via one or more input devices, a respective input; and means for, in response to receiving the respective input: in accordance with a determination that the respective input corresponds to selection of the representation of the first portion of the identifier, displaying, via the display generation component, a first user interface for selecting a graphic for the first portion of the identifier; and in accordance with a determination that the respective input corresponds to selection of the representation of the second portion of the identifier, displaying, via the display generation component, a second user interface for selecting one or more text characters for the second portion of the identifier.

21. An electronic device, comprising: one or more processors; memory; and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions for performing any of the methods of claims 1-16.

22. A non-transitory computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by one or more processors of an electronic device, cause the electronic device to perform any of the methods of claims 1-16.

23. An electronic device, comprising: one or more processors; memory; and means for performing any of the methods of claims 1-16.

24.
An information processing apparatus for use in an electronic device, the information processing apparatus comprising: means for performing any of the methods of claims 1-16.

25. A method comprising: at an electronic device in communication with one or more wireless antenna, a display generation component and one or more input devices: displaying, via the display generation component, a first user interface; while displaying the first user interface, receiving a request, via the one or more input devices, to locate a remote locator object; and in response to receiving the request to locate the remote locator object, displaying, via the display generation component, a user interface for locating the remote locator object, including: in accordance with a determination that one or more criteria are satisfied, displaying, in the user interface, a selectable option that is selectable to emit light from a lighting element of the electronic device; and in accordance with a determination that the one or more criteria are not satisfied, forgoing displaying, in the user interface, the selectable option that is selectable to emit light from the lighting element of the electronic device.

26. The method of claim 25, wherein the electronic device includes one or more cameras that are used to determine a location of the electronic device relative to the remote locator object.

27. The method of claim 26, wherein the lighting element of the electronic device, when emitting light, emits light onto a portion of a physical environment of the electronic device that is within a field of view of the one or more cameras.

28. The method of any of claims 26-27, wherein the one or more cameras are located on a first side of the electronic device, and the lighting element is located on the first side of the electronic device.

29. The method of any of claims 26-28, wherein the lighting element is used as a flash for the one or more cameras when the electronic device is capturing media using the one or more cameras in a media capture application.

30. The method of any of claims 26-29, wherein the user interface for locating the remote locator object includes a representation of a portion of a physical environment of the electronic device that is within a field of view of the one or more cameras.

31. The method of any of claims 25-30, further comprising: in accordance with the determination that the one or more criteria are satisfied, displaying, in the user interface, an indication that additional light is needed to locate the remote locator object.

32. The method of any of claims 25-31, wherein the user interface includes an indication of an identifier associated with the remote locator object.

33. The method of any of claims 25-32, further comprising: while not displaying the selectable option that is selectable to emit light from the lighting element of the electronic device in the user interface, determining that the one or more criteria have become satisfied; and in response to determining that the one or more criteria have become satisfied, updating the user interface to include the selectable option that is selectable to emit light from the lighting element of the electronic device.

34. The method of any of claims 25-33, further comprising: while displaying the selectable option that is selectable to emit light from the lighting element of the electronic device in the user interface, determining that a second set of criteria are not satisfied; and in response to determining that the second set of criteria are not satisfied, ceasing to display the selectable option that is selectable to emit light from the lighting element of the electronic device.

35. The method of any of claims 25-34, wherein the one or more criteria include one or more of a criterion that is satisfied when a level of ambient light in a physical environment of the electronic device is less than a threshold level, and a criterion that is satisfied when a distance between the electronic device and the remote locator object is less than a threshold distance.

36. The method of any of claims 25-35, further comprising: receiving, via the one or more input devices, selection of the selectable option to emit light from the lighting element of the electronic device; in response to receiving the selection of the selectable option: emitting light from the lighting element of the electronic device; and updating the user interface to include a second selectable option that is selectable to cease emitting light from the lighting element of the electronic device.

37. The method of any of claims 25-36, further comprising: while displaying the user interface for locating the remote locator object: while the electronic device is further than a threshold distance from the remote locator object, displaying, in the user interface, a first user interface for locating the remote locator object; while displaying the first user interface for locating the remote locator object, determining that the electronic device is closer than the threshold distance from the remote locator object; and in response to determining that the electronic device is closer than the threshold distance from the remote locator object, updating the user interface to include a second user interface, different from the first user interface, for locating the remote locator object.

38. The method of any of claims 25-37, wherein the first user interface is a user interface that includes information about the remote locator object.

39.
An electronic device, comprising: one or more processors; memory; and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions for: displaying, via a display generation component, a first user interface; while displaying the first user interface, receiving a request, via one or more input devices, to locate a remote locator object; and in response to receiving the request to locate the remote locator object, displaying, via the display generation component, a user interface for locating the remote locator object, including: in accordance with a determination that one or more criteria are satisfied, displaying, in the user interface, a selectable option that is selectable to emit light from a lighting element of the electronic device; and in accordance with a determination that the one or more criteria are not satisfied, forgoing displaying, in the user interface, the selectable option that is selectable to emit light from the lighting element of the electronic device.

40. A non-transitory computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by one or more processors of an electronic device, cause the electronic device to perform a method comprising: displaying, via a display generation component, a first user interface; while displaying the first user interface, receiving a request, via one or more input devices, to locate a remote locator object; and in response to receiving the request to locate the remote locator object, displaying, via the display generation component, a user interface for locating the remote locator object, including: in accordance with a determination that one or more criteria are satisfied, displaying, in the user interface, a selectable option that is selectable to emit light from a lighting element of the electronic device; and in accordance with a determination that the one or more criteria are not satisfied, forgoing displaying, in the user interface, the selectable option that is selectable to emit light from the lighting element of the electronic device.

41. An electronic device, comprising: one or more processors; memory; and means for displaying, via a display generation component, a first user interface; means for, while displaying the first user interface, receiving a request, via one or more input devices, to locate a remote locator object; and means for, in response to receiving the request to locate the remote locator object, displaying, via the display generation component, a user interface for locating the remote locator object, including: in accordance with a determination that one or more criteria are satisfied, displaying, in the user interface, a selectable option that is selectable to emit light from a lighting element of the electronic device; and in accordance with a determination that the one or more criteria are not satisfied, forgoing displaying, in the user interface, the selectable option that is selectable to emit light from the lighting element of the electronic device.

42. An information processing apparatus for use in an electronic device, the information processing apparatus comprising: means for displaying, via a display generation component, a first user interface; means for, while displaying the first user interface, receiving a request, via one or more input devices, to locate a remote locator object; and means for, in response to receiving the request to locate the remote locator object, displaying, via the display generation component, a user interface for locating the remote locator object, including: in accordance with a determination that one or more criteria are satisfied, displaying, in the user interface, a selectable option that is selectable to emit light from a lighting element of the electronic device; and in accordance with a determination that the one or more criteria are not satisfied, forgoing displaying, in the user interface, the selectable option that is selectable to emit light from the lighting element of the electronic device.

43.
An electronic device, comprising: one or more processors; memory; and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions for performing any of the methods of claims 25-38.

44. A non-transitory computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by one or more processors of an electronic device, cause the electronic device to perform any of the methods of claims 25-38.

45. An electronic device, comprising: one or more processors; memory; and means for performing any of the methods of claims 25-38.

46. An information processing apparatus for use in an electronic device, the information processing apparatus comprising: means for performing any of the methods of claims 25-38.

47. A method comprising: at an electronic device in communication with one or more wireless antenna, a display generation component and one or more input devices: while displaying, via the display generation component, a map user interface that includes a representation of a remote locator object, receiving, via the one or more input devices, an input corresponding to a request to display additional information about the remote locator object; and in response to receiving the input corresponding to the request to display the additional information about the remote locator object, updating the map user interface to include a respective user interface associated with the remote locator object, wherein: in accordance with a determination that the remote locator object satisfies one or more first criteria, the respective user interface includes a respective user interface element that includes first information about the remote locator object; and in accordance with a determination that the remote locator object does not satisfy the one or more first criteria, the respective user interface does not include the respective user interface element that includes the first information about the remote locator object.

48. The method of claim 47, wherein the respective user interface includes a selectable option that is selectable to initiate a process to obtain directions to a location associated with the remote locator object.

49. The method of any of claims 47-48, wherein the respective user interface includes a selectable option that is selectable to initiate a process to cause the remote locator object to generate audio.

50. The method of any of claims 47-49, wherein the first information includes information about an ability of the electronic device to communicate with the remote locator object.

51. The method of claim 50, wherein the information about the ability of the electronic device to communicate with the remote locator object includes information that a wireless communication functionality of the electronic device is disabled.

52. The method of any of claims 47-51, wherein the first information includes an indication that a process to generate audio at the remote locator object is in progress.

53. The method of any of claims 47-52, wherein the first information includes an indication that a process to configure the remote locator object is in progress.

54. The method of any of claims 47-53, wherein the first information includes an indication that a battery level of the remote locator object is below a threshold.

55. The method of any of claims 47-54, wherein the first information includes an indication that a location of the remote locator object is shared with a user that is not associated with the electronic device.

56. The method of any of claims 47-55, wherein the first information includes an indication that the remote locator object has been designated as being lost and is operating in a lost mode.

57. The method of any of claims 47-55, wherein the first information includes an indication that the remote locator object has been designated as being lost and will operate in a lost mode in response to one or more connection criteria being satisfied.

58. The method of any of claims 47-57, wherein the first information includes information associated with an ability of a user that is not associated with the electronic device to determine a location of the remote locator object.

59. The method of claim 58, wherein the information associated with the ability of the user that is not associated with the electronic device to determine the location of the remote locator object includes an indication of an identity of the user.

60. The method of claim 58, wherein the information associated with the ability of the user that is not associated with the electronic device to determine the location of the remote locator object does not include an indication of an identity of the user.

61. The method of any of claims 47-60, further comprising: while displaying, via the display generation component, the respective user interface element, receiving, via the one or more input devices, an input directed to the respective user interface element; and in response to receiving the input directed to the respective user interface element, displaying, via the display generation component, second information, different from the first information, associated with the remote locator object.

62. The method of claim 61, wherein the second information includes a selectable option that is selectable to initiate a process to set a current location of the remote locator object as a safe zone.

63. The method of any of claims 61-62, wherein the second information includes information about changing a battery of the remote locator object.

64. The method of any of claims 61-63, wherein the second information includes an indication of a remaining duration that a location of the remote locator object is shared with a user that is not associated with the electronic device.

65. The method of any of claims 61-64, further comprising: in response to receiving the input directed to the respective user interface element, displaying, via the display generation component, a selectable option for requesting sharing of a location of the remote locator object from an owner of the remote locator object.

66. The method of any of claims 61-65, further comprising: in response to receiving the input directed to the respective user interface element, changing a wireless communication functionality of the electronic device.

67. The method of any of claims 47-66, wherein: in accordance with a determination that the remote locator object satisfies one or more second criteria, the respective user interface includes a second respective user interface element that includes second information about the remote locator object, and in accordance with a determination that the remote locator object does not satisfy the one or more second criteria, the respective user interface does not include the second respective user interface element that includes second information about the remote locator object.

68. The method of any of claims 47-67, further comprising: while displaying, via the display generation component, the respective user interface element, receiving, via the one or more input devices, an input directed to the respective user interface element; and in response to receiving the input directed to the respective user interface element, changing a setting associated with the remote locator object.

69. The method of claim 68, wherein changing the setting associated with the remote locator object includes enabling a wireless communication functionality of the electronic device to communicate with the remote locator object.

70.
The method of any of claims 47-69, further comprising: while displaying the respective user interface that includes the respective user interface element, determining that the remote locator object no longer satisfies the one or more first criteria; and in response to determining that the remote locator object no longer satisfies the one or more first criteria, ceasing to display the respective user interface element.

71. An electronic device, comprising: one or more processors; memory; and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions for: while displaying, via a display generation component, a map user interface that includes a representation of a remote locator object, receiving, via one or more input devices, an input corresponding to a request to display additional information about the remote locator object; and in response to receiving the input corresponding to the request to display the additional information about the remote locator object, updating the map user interface to include a respective user interface associated with the remote locator object, wherein: in accordance with a determination that the remote locator object satisfies one or more first criteria, the respective user interface includes a respective user interface element that includes first information about the remote locator object; and in accordance with a determination that the remote locator object does not satisfy the one or more first criteria, the respective user interface does not include the respective user interface element that includes the first information about the remote locator object.

72. A non-transitory computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by one or more processors of an electronic device, cause the electronic device to perform a method comprising: while displaying, via a display generation component, a map user interface that includes a representation of a remote locator object, receiving, via one or more input devices, an input corresponding to a request to display additional information about the remote locator object; and in response to receiving the input corresponding to the request to display the additional information about the remote locator object, updating the map user interface to include a respective user interface associated with the remote locator object, wherein: in accordance with a determination that the remote locator object satisfies one or more first criteria, the respective user interface includes a respective user interface element that includes first information about the remote locator object; and in accordance with a determination that the remote locator object does not satisfy the one or more first criteria, the respective user interface does not include the respective user interface element that includes the first information about the remote locator object.

73. An electronic device, comprising: one or more processors; memory; and means for, while displaying, via a display generation component, a map user interface that includes a representation of a remote locator object, receiving, via one or more input devices, an input corresponding to a request to display additional information about the remote locator object; and means for, in response to receiving the input corresponding to the request to display the additional information about the remote locator object, updating the map user interface to include a respective user interface associated with the remote locator object, wherein: in accordance with a determination that the remote locator object satisfies one or more first criteria, the respective user interface includes a respective user interface element that includes first information about the remote locator object; and in accordance with a determination that the remote locator object does not satisfy the one or more first criteria, the respective user interface does not include the respective user interface element that includes the first information about the remote locator object.

74. An information processing apparatus for use in an electronic device, the information processing apparatus comprising: means for, while displaying, via a display generation component, a map user interface that includes a representation of a remote locator object, receiving, via one or more input devices, an input corresponding to a request to display additional information about the remote locator object; and means for, in response to receiving the input corresponding to the request to display the additional information about the remote locator object, updating the map user interface to include a respective user interface associated with the remote locator object, wherein: in accordance with a determination that the remote locator object satisfies one or more first criteria, the respective user interface includes a respective user interface element that includes first information about the remote locator object; and in accordance with a determination that the remote locator object does not satisfy the one or more first criteria, the respective user interface does not include the respective user interface element that includes the first information about the remote locator object.

75. An electronic device, comprising: one or more processors; memory; and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions for performing any of the methods of claims 47-70.

76. A non-transitory computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by one or more processors of an electronic device, cause the electronic device to perform any of the methods of claims 47-70.

77. An electronic device, comprising: one or more processors; memory; and means for performing any of the methods of claims 47-70.

78.
An information processing apparatus for use in an electronic device, the information processing apparatus comprising: means for performing any of the methods of claims 47-70.

79. A method comprising: at an electronic device in communication with one or more wireless antenna, a display generation component and one or more input devices: while a remote locator object that is associated with a user other than a user of the electronic device is near the electronic device: in accordance with a determination that one or more first criteria are satisfied, automatically presenting, without user input, a tracking alert that indicates that the remote locator object that is not associated with the user of the electronic device satisfies the one or more first criteria, wherein the one or more first criteria include: a first criterion that is satisfied when the remote locator object has remained within a first threshold distance of the electronic device while the electronic device has moved more than a second threshold distance, wherein the second threshold distance is greater than the first threshold distance, and a second criterion that is satisfied when the remote locator object has remained within a third threshold distance of the electronic device for longer than a time threshold after the electronic device moved more than the second threshold distance.

80. The method of claim 79, further comprising: while the remote locator object that is associated with the user other than the user of the electronic device is near the electronic device: in accordance with a determination that the one or more first criteria are not satisfied, forgoing automatically presenting the tracking alert.

81. The method of any of claims 79-80, further comprising: while the remote locator object that is associated with the user other than the user of the electronic device is near the electronic device and before the one or more first criteria are satisfied: in accordance with a determination that one or more second criteria are satisfied, including a criterion that is satisfied when the user other than the user of the electronic device has attempted to locate the remote locator object, automatically presenting, without user input, a tracking alert that indicates that the remote locator object that is not associated with the user of the electronic device satisfies the one or more second criteria.

82. The method of any of claims 79-81, further comprising: while the remote locator object that is associated with the user other than the user of the electronic device is near the electronic device and before the one or more first criteria are satisfied: in accordance with a determination that one or more second criteria are satisfied, including a criterion that is satisfied when a current location of the electronic device is within a threshold distance of a predetermined location associated with the user of the electronic device, automatically presenting, without user input, a tracking alert that indicates that the remote locator object that is not associated with the user of the electronic device satisfies the one or more second criteria.

83. The method of any of claims 79-82, further comprising: while the remote locator object that is associated with the user other than the user of the electronic device is near the electronic device and before the one or more first criteria are satisfied: in accordance with a determination that one or more second criteria are satisfied, including a criterion that is satisfied when a current time is within a threshold time of a new identifier being selected for the remote locator object, automatically presenting, without user input, a tracking alert that indicates that the remote locator object that is not associated with the user of the electronic device satisfies the one or more second criteria.

84. The method of any of claims 79-83, wherein the one or more first criteria include a criterion that is satisfied if no tracking alert associated with the remote locator object has been presented by the electronic device within a predefined time period.

85. The method of any of claims 79-84, wherein the first threshold distance is 10 feet.

86. The method of any of claims 79-85, wherein the third threshold distance is a value between 1 and 30 feet.

87. The method of any of claims 79-86, wherein the electronic device has moved more than the second threshold distance when the electronic device has moved from a first location to a second location that is more than 200 feet from the first location.

88. The method of any of claims 79-87, wherein the one or more first criteria include a criterion that is satisfied when the remote locator object is not near a second electronic device that is associated with the user other than the user of the electronic device.

89. The method of any of claims 79-88, wherein the one or more first criteria include a criterion that is satisfied when the electronic device has moved less than a fourth threshold distance after moving more than the second threshold distance during a second time threshold.

90. The method of any of claims 79-89, further comprising: receiving, via the one or more input devices, a request to associate the electronic device with a respective object; and in response to receiving the request to associate the electronic device with the respective object: in accordance with a determination that the respective object satisfies one or more second criteria, including a criterion that is satisfied when the respective object is a trackable object, automatically presenting an alert that indicates that the respective object is a trackable object.

91. The method of any of claims 79-90, further comprising: receiving, via the one or more input devices, a request to view information about one or more trackable objects in an environment of the electronic device; and in response to receiving the request to view the information about the one or more trackable objects in the environment of the electronic device, displaying, via the display generation component, one or more representations of the one or more trackable objects in the environment of the electronic device.

92. The method of claim 91, wherein the one or more trackable objects include a first trackable object associated with a first representation of the one or more representations, and the first representation is displayed with a representation of a respective user, other than the user of the electronic device, associated with the first trackable object.

93. The method of any of claims 91-92, further comprising: in accordance with a determination that at least one trackable object is in the environment of the electronic device, displaying, via the display generation component, a visual indication that at least one trackable object is in the environment of the electronic device, wherein the request to view the information about the one or more trackable objects in the environment of the electronic device comprises selection of the visual indication that at least one trackable object is in the environment of the electronic device.

94. An electronic device, comprising: one or more processors; memory; and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions for: while a remote locator object that is associated with a user other than a user of the electronic device is near the electronic device: in accordance with a determination that one or more first criteria are satisfied, automatically presenting, without user input, a tracking alert that indicates that the remote locator object that is not associated with the user of the electronic device satisfies the one or more first criteria, wherein the one or more first criteria include: a first criterion that is satisfied when the remote locator object has remained within a first threshold distance of the electronic device while the electronic device has moved more than a second threshold distance, wherein the second threshold distance is greater than the first threshold distance, and a second criterion that is satisfied when the remote locator object has remained within a third threshold distance of the electronic device for longer than a time threshold after the electronic device moved more than the second threshold distance.

95.
a non-transitory computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by one or more processors of an electronic device, cause the electronic device to perform a method comprising: while a remote locator object that is associated with a user other than a user of the electronic device is near the electronic device: in accordance with a determination that one or more first criteria are satisfied, automatically presenting, without user input, a tracking alert that indicates that the remote locator object that is not associated with the user of the electronic device satisfies the one or more first criteria, wherein the one or more first criteria include: a first criterion that is satisfied when the remote locator object has remained within a first threshold distance of the electronic device while the electronic device has moved more than a second threshold distance, wherein the second threshold distance is greater than the first threshold distance, and a second criterion that is satisfied when the remote locator object has remained within a third threshold distance of the electronic device for longer than a time threshold after the electronic device moved more than the second threshold distance. 96. 
an electronic device, comprising: one or more processors; memory; and means for, while a remote locator object that is associated with a user other than a user of the electronic device is near the electronic device: in accordance with a determination that one or more first criteria are satisfied, automatically presenting, without user input, a tracking alert that indicates that the remote locator object that is not associated with the user of the electronic device satisfies the one or more first criteria, wherein the one or more first criteria include: a first criterion that is satisfied when the remote locator object has remained within a first threshold distance of the electronic device while the electronic device has moved more than a second threshold distance, wherein the second threshold distance is greater than the first threshold distance, and a second criterion that is satisfied when the remote locator object has remained within a third threshold distance of the electronic device for longer than a time threshold after the electronic device moved more than the second threshold distance. 97. 
an information processing apparatus for use in an electronic device, the information processing apparatus comprising: means for, while a remote locator object that is associated with a user other than a user of the electronic device is near the electronic device: in accordance with a determination that one or more first criteria are satisfied, automatically presenting, without user input, a tracking alert that indicates that the remote locator object that is not associated with the user of the electronic device satisfies the one or more first criteria, wherein the one or more first criteria include: a first criterion that is satisfied when the remote locator object has remained within a first threshold distance of the electronic device while the electronic device has moved more than a second threshold distance, wherein the second threshold distance is greater than the first threshold distance, and a second criterion that is satisfied when the remote locator object has remained within a third threshold distance of the electronic device for longer than a time threshold after the electronic device moved more than the second threshold distance. 98. an electronic device, comprising: one or more processors; memory; and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions for performing any of the methods of claims 79-93. 99. a non-transitory computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by one or more processors of an electronic device, cause the electronic device to perform any of the methods of claims 79-93. 100. an electronic device, comprising: one or more processors; memory; and means for performing any of the methods of claims 79-93. 101.
an information processing apparatus for use in an electronic device, the information processing apparatus comprising: means for performing any of the methods of claims 79-93. 102. a method, comprising: at a first device with one or more motion detecting sensors and one or more wireless transmission elements and one or more output devices: detecting, via the one or more motion detecting sensors, motion of the first device; and in response to detecting the motion of the first device: in accordance with a determination that first alert criteria are met, wherein the first alert criteria include a requirement that the first device has not been in wireless communication with a second device that is capable of tracking a location of the first device within a predetermined period of time, prior to detecting the motion, generating an alert via the one or more output devices; and in accordance with a determination that the first device was in wireless communication with the second device that is capable of tracking the location of the first device within the predetermined period of time prior to detecting the motion, forgoing generating the alert via the one or more output devices. 103. the method of claim 102, including: after generating the alert, continuing to detect motion of the first device, and in response to continuing to detect the motion of the first device, continuing to generate the alert via the one or more output devices. 104. the method of any of claims 102-103, including: after generating the alert, ceasing to detect, via the one or more motion detecting sensors, motion of the first device; and in response to ceasing to detect the motion of the first device, ceasing to generate the alert via the one or more output devices. 105. the method of any of claims 102-104, wherein the alert includes one or more of, an audio alert, a haptic alert, and a visual alert. 106. 
the method of any of claims 102-105, wherein the first device is a remote tracking device and the second device is a personal communication device. 107. the method of any of claims 102-106, wherein the first alert criteria include a requirement that the first device is not currently within a predetermined distance of an electronic device that is capable of displaying alerts about the presence of the first device. 108. the method of any of claims 102-107, wherein the first alert criteria include a requirement that the first device has not been temporarily associated with a second user account that is different than a first user account with which the first device is associated. 109. the method of any of claims 102-108, including, in response to detecting the motion of the first device: in accordance with a determination that second alert criteria are met, wherein the second alert criteria include a requirement that the first device has not been in wireless communication with the second device within the predetermined period of time prior to detecting the motion and that the first device is currently within a predetermined distance of a third device that is capable of displaying alerts about the presence of the first device, transmitting, via the one or more wireless transmission elements, information to the third device that, when received by the third device, will cause the third device to output a second alert about the presence of the first device. 110.
the method of claim 109, including, in response to detecting the motion of the first device: in accordance with a determination that the second alert criteria are met, wherein the second alert criteria include the requirement that the first device has not been in wireless communication with the second device within the predetermined period of time prior to detecting the motion and that the first device is currently within the predetermined distance of the third device that is capable of displaying alerts about the presence of the first device, transmitting, via the one or more wireless transmission elements, the information to the third device that, when received by the third device will cause the third device to output the second alert about the presence of the first device and forgoing outputting the alert via the one or more output devices of the first device. 111. the method of any of claims 109-110, including, in response to detecting the motion of the first device: in accordance with a determination that the second alert criteria are met, wherein the second alert criteria include the requirement that the first device has not been in wireless communication with the second device within the predetermined period of time prior to detecting the motion and that the first device is currently within the predetermined distance of the third device that is capable of displaying alerts about the presence of the first device: transmitting, via the one or more wireless transmission elements, the information to the third device that, when received by the third device, will cause the third device to output the second alert about the presence of the first device, and outputting the alert via the one or more output devices of the first device. 112. 
a first device, comprising: one or more processors; memory; and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions for: detecting, via one or more motion detecting sensors, motion of the first device; and in response to detecting the motion of the first device: in accordance with a determination that first alert criteria are met, wherein the first alert criteria include a requirement that the first device has not been in wireless communication with a second device that is capable of tracking a location of the first device within a predetermined period of time, prior to detecting the motion, generating an alert via one or more output devices; and in accordance with a determination that the first device was in wireless communication with the second device that is capable of tracking the location of the first device within the predetermined period of time prior to detecting the motion, forgoing generating the alert via the one or more output devices. 113. 
a non-transitory computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by one or more processors of a first device, cause the first device to perform a method comprising: detecting, via one or more motion detecting sensors, motion of the first device; and in response to detecting the motion of the first device: in accordance with a determination that first alert criteria are met, wherein the first alert criteria include a requirement that the first device has not been in wireless communication with a second device that is capable of tracking a location of the first device within a predetermined period of time, prior to detecting the motion, generating an alert via one or more output devices; and in accordance with a determination that the first device was in wireless communication with the second device that is capable of tracking the location of the first device within the predetermined period of time prior to detecting the motion, forgoing generating the alert via the one or more output devices. 114. 
a first device, comprising: one or more processors; memory; and means for detecting, via one or more motion detecting sensors, motion of the first device; and means for in response to detecting the motion of the first device: in accordance with a determination that first alert criteria are met, wherein the first alert criteria include a requirement that the first device has not been in wireless communication with a second device that is capable of tracking a location of the first device within a predetermined period of time, prior to detecting the motion, generating an alert via one or more output devices; and in accordance with a determination that the first device was in wireless communication with the second device that is capable of tracking the location of the first device within the predetermined period of time prior to detecting the motion, forgoing generating the alert via the one or more output devices. 115. a first device, comprising: one or more processors; memory; and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions for performing any of the methods of claims 102-111. 116. a non-transitory computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by one or more processors of a first device, cause the first device to perform any of the methods of claims 102-111. 117. a first device, comprising: one or more processors; memory; and means for performing any of the methods of claims 102-111.
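as a non-normative illustration only (not part of the claimed subject matter), the motion-triggered alert logic recited in claim 102 can be sketched in python; the class name, method names, and the eight-hour grace period below are hypothetical choices for illustration, not values taken from the claims:

```python
import time

# Hypothetical sketch of the claim-102 logic: a tracked first device
# generates an alert when it detects motion while it has been out of
# wireless communication with its owner's (second) device for longer
# than a predetermined period; otherwise it forgoes the alert.
SEPARATION_PERIOD_S = 8 * 60 * 60  # assumed "predetermined period of time"

class TrackerDevice:
    def __init__(self):
        self.last_owner_contact = time.monotonic()
        self.alerting = False

    def on_owner_contact(self):
        """Called whenever wireless communication with the owner device occurs."""
        self.last_owner_contact = time.monotonic()

    def on_motion(self, now=None):
        """Called by the motion-detecting sensor; returns True if an alert is generated."""
        now = time.monotonic() if now is None else now
        separated = (now - self.last_owner_contact) > SEPARATION_PERIOD_S
        self.alerting = separated  # forgo the alert if recently in contact
        return self.alerting

    def on_motion_stopped(self):
        """Ceasing to detect motion ceases the alert (cf. claim 104)."""
        self.alerting = False
```

the sketch also reflects claims 103-104, in that the alert persists while motion continues to be detected and ceases when motion ceases.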
user interfaces for tracking and finding items cross-reference to related applications [0001] this application claims the benefit of u.s. provisional application no. 63/083,735, filed september 25, 2020, u.s. provisional application no. 63/110,715, filed november 6, 2020, and u.s. provisional application no. 63/176,883, filed april 19, 2021, the contents of which are herein incorporated by reference in their entireties for all purposes. field of the disclosure [0002] this relates generally to user interfaces that enable a user to track and find items using an electronic device. background of the disclosure [0003] user interaction with electronic devices has increased significantly in recent years. these devices can be devices such as televisions, multimedia devices, mobile devices, computers, tablet computers, and the like. [0004] in some circumstances, users may wish to use such devices to track and/or find items. enhancing the user’s interactions with the device improves the user's experience with the device and decreases user interaction time, which is particularly important where input devices are battery-operated. [0005] it is well understood that personally identifiable information data should be managed and handled so as to minimize risks of unintentional or unauthorized access or use, and the nature of authorized use should be clearly indicated to users. in particular, the use of personally identifiable information should follow privacy policies and practices that are generally recognized as meeting or exceeding industry or governmental requirements for maintaining the privacy of users. summary of the disclosure [0006] some embodiments described in this disclosure are directed to user interfaces for defining identifiers for remote locator objects. some embodiments described in this disclosure are directed to locating a remote locator object. 
some embodiments described in this disclosure are directed to providing information associated with a remote locator object. some embodiments described in this disclosure are directed to displaying notifications associated with a trackable device. some embodiments described in this disclosure are directed to generating alerts. the full descriptions of the embodiments are provided in the drawings and the detailed description, and it is understood that the summary provided above does not limit the scope of the disclosure in any way. brief description of the drawings [0007] for a better understanding of the various described embodiments, reference should be made to the detailed description below, in conjunction with the following drawings in which like reference numerals refer to corresponding parts throughout the figures. [0008] fig. 1a is a block diagram illustrating a portable multifunction device with a touch-sensitive display in accordance with some embodiments. [0009] fig. 1b is a block diagram illustrating exemplary components for event handling in accordance with some embodiments. [0010] fig. 2 illustrates a portable multifunction device having a touch screen in accordance with some embodiments. [0011] fig. 3 is a block diagram of an exemplary multifunction device with a display and a touch-sensitive surface in accordance with some embodiments. [0012] fig. 4a illustrates an exemplary user interface for a menu of applications on a portable multifunction device in accordance with some embodiments. [0013] fig. 4b illustrates an exemplary user interface for a multifunction device with a touch-sensitive surface that is separate from the display in accordance with some embodiments. [0014] fig. 5a illustrates a personal electronic device in accordance with some embodiments. [0015] fig. 5b is a block diagram illustrating a personal electronic device in accordance with some embodiments. [0016] figs.
5c-5d illustrate exemplary components of a personal electronic device having a touch-sensitive display and intensity sensors in accordance with some embodiments. [0017] figs. 5e-5h illustrate exemplary components and user interfaces of a personal electronic device in accordance with some embodiments. [0018] figs. 6a-6r illustrate exemplary ways in which an electronic device provides user interfaces for defining identifiers for remote locator objects in accordance with some embodiments of the disclosure. [0019] figs. 7a-7h are flow diagrams illustrating a method of providing user interfaces for defining identifiers for remote locator objects in accordance with some embodiments of the disclosure. [0020] figs. 8a-8i illustrate exemplary ways in which an electronic device locates a remote locator object in accordance with some embodiments of the disclosure. [0021] figs. 9a-9g are flow diagrams illustrating a method of locating a remote locator object in accordance with some embodiments of the disclosure. [0022] figs. 10a-10t illustrate exemplary ways in which an electronic device provides information associated with a remote locator object and/or provides mechanisms for adjusting operation of the remote locator object or the electronic device in accordance with some embodiments of the disclosure. [0023] figs. 11a-11i are flow diagrams illustrating a method of providing information associated with a remote locator object and/or providing mechanisms for adjusting operation of the remote locator object or the electronic device in accordance with some embodiments of the disclosure. [0024] figs. 12a-12g illustrate exemplary ways in which an electronic device displays notifications associated with a trackable device in accordance with some embodiments of the disclosure. [0025] figs. 13a-13f are flow diagrams illustrating a method of displaying notifications associated with a trackable device in accordance with some embodiments of the disclosure. [0026] figs.
14a-14r illustrate an electronic device displaying notifications of tracking by an unknown remote locator object. [0027] figs. 15a-15e are flow diagrams illustrating a method of generating alerts in accordance with some embodiments. detailed description [0028] the following description sets forth exemplary methods, parameters, and the like. it should be recognized, however, that such description is not intended as a limitation on the scope of the present disclosure but is instead provided as a description of exemplary embodiments. [0029] there is a need for electronic devices to name remote locator objects and/or locate remote locator objects. such techniques can reduce the cognitive burden on a user who uses such devices and/or wishes to control their use of such devices. further, such techniques can reduce processor and battery power otherwise wasted on redundant user inputs. [0030] although the following description uses terms “first,” “second,” etc. to describe various elements, these elements should not be limited by the terms. for example, a first touch could be termed a second touch, and, similarly, a second touch could be termed a first touch, without departing from the scope of the various described embodiments. the first touch and the second touch are both touches, but they are not the same touch. these terms are only used to distinguish one element from another. [0031] the terminology used in the description of the various described embodiments herein is for the purpose of describing particular embodiments only and is not intended to be limiting. it will be understood that the term “and/or” as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items. 
it will be further understood that the terms “includes,” “including,” “comprises,” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. as used in the description of the various described embodiments and the appended claims, the singular forms “a,” “an,” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. [0032] the phrase “if it is determined” or “if [a stated condition or event] is detected” is, optionally, construed to mean “upon determining” or “in response to determining” or “upon detecting [the stated condition or event]” or “in response to detecting [the stated condition or event],” depending on the context. the term “if” is, optionally, construed to mean “when” or “upon” or “in response to determining” or “in response to detecting,” depending on the context. [0033] embodiments of electronic devices, user interfaces for such devices, and associated processes for using such devices are described. in some embodiments, the device is a portable communications device, such as a mobile telephone, that also contains other functions, such as pda and/or music player functions. it should also be understood that, in some embodiments, the device is not a portable communications device, but is a desktop computer with a touch-sensitive surface (e.g., a touch screen display and/or a touchpad). other portable electronic devices, such as laptops or tablet computers with touch-sensitive surfaces (e.g., touch screen displays and/or touchpads), are, optionally, used. [0034] it should be understood that the electronic device optionally includes one or more other physical user-interface devices, such as a physical keyboard, a mouse, and/or a joystick.
in the discussion that follows, an electronic device that includes a display and a touch-sensitive surface is described. [0035] the device typically supports a variety of applications, such as one or more of the following: a web browsing application, a website creation application, a word processing application, a disk authoring application, a spreadsheet application, a gaming application, a telephone application, a drawing application, a presentation application, a video conferencing application, a workout support application, a digital camera application, a digital video camera application, a photo management application, an e-mail application, an instant messaging application, a digital music player application, and/or a digital video player application. [0036] one or more functions of the touch-sensitive surface as well as corresponding information displayed on the device are, optionally, adjusted and/or varied from one application to the next and/or within a respective application. in this way, a common physical architecture (such as the touch-sensitive surface) of the device optionally supports the variety of applications with user interfaces that are intuitive and transparent to the user. the various applications that are executed on the device optionally use at least one common physical user-interface device, such as the touch-sensitive surface. [0037] attention is now directed toward embodiments of portable devices with touch-sensitive displays. fig. 1a is a block diagram illustrating portable multifunction device 100 with touch-sensitive display system 112 in accordance with some embodiments. device 100 includes memory 102 (which optionally includes one or more computer-readable storage mediums), memory controller 122, one or more processing units (cpus) 120, audio circuitry 110, speaker 111, microphone 113, input/output (i/o) subsystem 106, peripherals interface 118, rf circuitry 108, other input control devices 116, and external port 124.
touch-sensitive display 112 is sometimes called a “touch screen” for convenience and is sometimes known as or called a “touch-sensitive display system.” device 100 optionally includes one or more contact intensity sensors 165 for detecting intensity of contacts on device 100 (e.g., a touch-sensitive surface such as touch-sensitive display system 112 of device 100). device 100 optionally includes one or more tactile output generators 167 for generating tactile outputs on device 100 (e.g., generating tactile outputs on a touch-sensitive surface such as touch-sensitive display system 112 of device 100 or touchpad 355 of device 300). device 100 optionally includes one or more optical sensors 164. these components optionally communicate over one or more communication buses or signal lines 103. [0038] using the intensity of a contact as an attribute of a user input allows for user access to additional device functionality that may otherwise not be accessible by the user on a reduced-size device with limited real estate for receiving user input (e.g., via a touch-sensitive display, a touch-sensitive surface, or a physical/mechanical control such as a knob or a button) and/or displaying affordances (e.g., on a touch-sensitive display). as used in the specification and claims, the term “intensity” of a contact on a touch-sensitive surface refers to the force or pressure (force per unit area) of a contact (e.g., a finger contact) on the touch-sensitive surface, or to a substitute (proxy) for the force or pressure of a contact on the touch-sensitive surface. intensity of a contact is, optionally, determined (or measured) using various approaches and various sensors or combinations of sensors. for example, one or more force sensors underneath or adjacent to the touch-sensitive surface are, optionally, used to measure force at various points on the touch-sensitive surface. 
the intensity of a contact has a range of values that includes at least four distinct values and more typically includes hundreds of distinct values (e.g., at least 256). in some implementations, force measurements from multiple force sensors are combined (e.g., a weighted average) to determine an estimated force of a contact. similarly, a pressure-sensitive tip of a stylus is, optionally, used to determine a pressure of the stylus on the touch-sensitive surface. alternatively, the size of the contact area detected on the touch-sensitive surface and/or changes thereto, the capacitance of the touch-sensitive surface proximate to the contact and/or changes thereto, and/or the resistance of the touch-sensitive surface proximate to the contact and/or changes thereto are, optionally, used as a substitute for the force or pressure of the contact on the touch-sensitive surface. in some implementations, the substitute measurements for contact force or pressure are converted to an estimated force or pressure, and the estimated force or pressure is used to determine whether an intensity threshold has been exceeded (e.g., the intensity threshold is a pressure threshold measured in units of pressure). in some implementations, the substitute measurements for contact force or pressure are used directly to determine whether an intensity threshold has been exceeded (e.g., the intensity threshold is described in units corresponding to the substitute measurements). [0039] as used in the specification and claims, the term “tactile output” refers to physical displacement of a device relative to a previous position of the device, physical displacement of a component (e.g., a touch-sensitive surface) of a device relative to another component (e.g., housing) of the device, or displacement of the component relative to a center of mass of the device that will be detected by a user with the user’s sense of touch.
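the weighted-average combination of force-sensor readings and the threshold test described above can be illustrated with a short sketch; the function names, weights, and threshold value below are hypothetical examples, not values from this specification:

```python
# Illustrative sketch (not the specification's implementation) of
# combining per-sensor force readings into one estimated contact force
# via a weighted average, then testing it against an intensity threshold.

def estimate_intensity(readings, weights):
    """Weighted average of force readings from multiple sensors."""
    total_weight = sum(weights)
    return sum(r * w for r, w in zip(readings, weights)) / total_weight

INTENSITY_THRESHOLD = 1.5  # hypothetical threshold in arbitrary force units

def exceeds_threshold(readings, weights):
    """True when the estimated intensity exceeds the intensity threshold."""
    return estimate_intensity(readings, weights) > INTENSITY_THRESHOLD
```

a substitute measurement (e.g., contact area or capacitance) could be fed through the same shape of computation, either after conversion to an estimated force or compared directly against a threshold expressed in the substitute's own units.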
in some cases, a user will feel a tactile sensation such as a “down click” or “up click” even when there is no movement of a physical actuator button associated with the touch-sensitive surface that is physically pressed (e.g., displaced) by the user’s movements. for example, in situations where the device or the component of the device is in contact with a surface of a user that is sensitive to touch (e.g., a finger, palm, or other part of a user’s hand), the tactile output generated by the physical displacement will be interpreted by the user as a tactile sensation corresponding to a perceived change in physical characteristics of the device or the component of the device. for example, movement of a touch-sensitive surface (e.g., a touch-sensitive display or trackpad) is, optionally, interpreted by the user as a “down click” or “up click” of a physical actuator button. as another example, movement of the touch-sensitive surface is, optionally, interpreted or sensed by the user as “roughness” of the touch-sensitive surface, even when there is no change in smoothness of the touch-sensitive surface. while such interpretations of touch by a user will be subject to the individualized sensory perceptions of the user, there are many sensory perceptions of touch that are common to a large majority of users. thus, when a tactile output is described as corresponding to a particular sensory perception of a user (e.g., an “up click,” a “down click,” “roughness”), unless otherwise stated, the generated tactile output corresponds to physical displacement of the device or a component thereof that will generate the described sensory perception for a typical (or average) user. [0040] the various components shown in fig. 1a are implemented in hardware, software, or a combination of both hardware and software, including one or more signal processing and/or application-specific integrated circuits.
it should be appreciated that device 100 is only one example of a portable multifunction device, and that device 100 optionally has more or fewer components than shown, optionally combines two or more components, or optionally has a different configuration or arrangement of the components. [0041] memory controller 122 optionally controls access to memory 102 by other components of device 100. memory 102 optionally includes high-speed random access memory and optionally also includes non-volatile memory, such as one or more flash memory devices, magnetic disk storage devices, or other non-volatile solid-state memory devices. [0042] the one or more processors 120 run or execute various software programs and/or sets of instructions stored in memory 102 to perform various functions for device 100 and to process data. peripherals interface 118 can be used to couple input and output peripherals of the device to cpu 120 and memory 102. in some embodiments, peripherals interface 118, memory controller 122, and cpu 120 are, optionally, implemented on a single chip, such as chip 104. in some other embodiments, they are, optionally, implemented on separate chips. [0043] rf (radio frequency) circuitry 108 receives and sends rf signals, also called electromagnetic signals. rf circuitry 108 converts electrical signals to/from electromagnetic signals and communicates with communications networks and other communications devices via the electromagnetic signals. rf circuitry 108 optionally includes well-known circuitry for performing these functions, including but not limited to an antenna system, an rf transceiver, one or more amplifiers, a tuner, one or more oscillators, a digital signal processor, a codec chipset, a subscriber identity module (sim) card, memory, and so forth. the rf circuitry 108 optionally includes well-known circuitry for detecting near field communication (nfc) fields, such as by a short-range communication radio. 
rf circuitry 108 optionally communicates with networks, such as the internet, also referred to as the world wide web (www), an intranet and/or a wireless network, such as a cellular telephone network, a wireless local area network (lan) and/or a metropolitan area network (man), and other devices by wireless communication. the wireless communication optionally uses any of a plurality of communications standards, protocols, and technologies, including but not limited to high-speed uplink packet access (hsupa), evolution, data-only (ev-do), hspa, hspa+, dual-cell hspa (dc-hspda), global system for mobile communications (gsm), enhanced data gsm environment (edge), high-speed downlink packet access (hsdpa), long term evolution (lte), near field communication (nfc), wideband code division multiple access (w-cdma), code division multiple access (cdma), time division multiple access (tdma), wireless fidelity (wi-fi) (e.g., ieee 802.11a, ieee 802.11b, ieee 802.11g, ieee 802.11n, and/or ieee 802.11ac), bluetooth, bluetooth low energy (btle), voice over internet protocol (voip), wi-max, a protocol for e-mail (e.g., internet message access protocol (imap) and/or post office protocol (pop)), instant messaging (e.g., extensible messaging and presence protocol (xmpp), session initiation protocol for instant messaging and presence leveraging extensions (simple), instant messaging and presence service (imps)), and/or short message service (sms), or any other suitable communication protocol, including communication protocols not yet developed as of the filing date of this document. [0044] audio circuitry 110, speaker 111, and microphone 113 provide an audio interface between a user and device 100. audio circuitry 110 receives audio data from peripherals interface 118, converts the audio data to an electrical signal, and transmits the electrical signal to speaker 111. speaker 111 converts the electrical signal to human-audible sound waves.
audio circuitry 110 also receives electrical signals converted by microphone 113 from sound waves. audio circuitry 110 converts the electrical signal to audio data and transmits the audio data to peripherals interface 118 for processing. audio data is, optionally, retrieved from and/or transmitted to memory 102 and/or rf circuitry 108 by peripherals interface 118. in some embodiments, audio circuitry 110 also includes a headset jack (e.g., 212, fig. 2). the headset jack provides an interface between audio circuitry 110 and removable audio input/output peripherals, such as output-only headphones or a headset with both output (e.g., a headphone for one or both ears) and input (e.g., a microphone). [0045] i/o subsystem 106 couples input/output peripherals on device 100, such as touch screen 112 and other input control devices 116, to peripherals interface 118. i/o subsystem 106 optionally includes display controller 156, optical sensor controller 158, intensity sensor controller 159, haptic feedback controller 161, and one or more input controllers 160 for other input or control devices. the one or more input controllers 160 receive/send electrical signals from/to other input control devices 116. in some embodiments, input controller(s) 160 are, optionally, coupled to any (or none) of the following: a keyboard, an infrared port, a usb port, and a pointer device such as a mouse. the one or more buttons (e.g., 208, fig. 2) optionally include an up/down button for volume control of speaker 111 and/or microphone 113. the one or more buttons optionally include a push button (e.g., 206, fig. 2). other input control devices 116 optionally include physical buttons (e.g., push buttons, rocker buttons, etc.), dials, slider switches, joysticks, click wheels, and so forth. [0046] a quick press of the push button optionally disengages a lock of touch screen 112 or optionally begins a process that uses gestures on the touch screen to unlock the device. 
a longer press of the push button (e.g., 206) optionally turns power to device 100 on or off. touch screen 112 is used to implement virtual or soft buttons and one or more soft keyboards. the functionality of one or more of the buttons is, optionally, user-customizable. [0047] touch-sensitive display 112 provides an input interface and an output interface between the device and a user. touch screen 112 displays visual output to the user. in some embodiments, some or all of the visual output optionally corresponds to user-interface objects. the visual output optionally includes graphics, text, icons, video, and any combination thereof (collectively termed “graphics”). display controller 156 receives and/or sends electrical signals from/to touch screen 112. [0048] touch screen 112 and display controller 156 (along with any associated modules and/or sets of instructions in memory 102) detect contact (and any movement or breaking of the contact) on touch screen 112 and convert the detected contact into interaction with user-interface objects (e.g., one or more soft keys, icons, web pages, or images) that are displayed on touch screen 112. touch screen 112 has a touch-sensitive surface, sensor, or set of sensors that accepts input from the user based on haptic and/or tactile contact. in an exemplary embodiment, a point of contact between touch screen 112 and the user corresponds to a finger of the user. [0049] touch screen 112 and display controller 156 optionally detect contact and any movement or breaking thereof using any of a plurality of touch sensing technologies now known or later developed, including but not limited to capacitive, resistive, infrared, and surface acoustic wave technologies, as well as other proximity sensor arrays or other elements for determining one or more points of contact with touch screen 112. in an exemplary embodiment, projected mutual capacitance sensing technology is used.
touch screen 112 optionally uses lcd (liquid crystal display) technology, lpd (light emitting polymer display) technology, or led (light emitting diode) technology, although other display technologies are used in other embodiments. [0050] a touch-sensitive display in some embodiments of touch screen 112 is, optionally, analogous to multi-touch sensitive touchpads. however, touch screen 112 displays visual output from device 100, whereas touch-sensitive touchpads do not provide visual output. [0051] the user optionally makes contact with touch screen 112 using any suitable object or appendage, such as a stylus, a finger, and so forth. in some embodiments, the device translates the rough finger-based input into a precise pointer/cursor position or command for performing the actions desired by the user. in some embodiments, the user interface is designed to work primarily with finger-based contacts and gestures, which can be less precise than stylus-based input due to the larger area of contact of a finger on the touch screen. touch screen 112 optionally has a video resolution in excess of 100 dpi. in some embodiments, the touch screen has a video resolution of approximately 160 dpi. [0052] in some embodiments, in addition to the touch screen, device 100 optionally includes a touchpad (not shown) for activating or deactivating particular functions. the touchpad is, optionally, a touch-sensitive surface that is separate from touch screen 112 or an extension of the touch-sensitive surface formed by the touch screen. in some embodiments, the touchpad is a touch-sensitive area of the device that, unlike the touch screen, does not display visual output. [0053] device 100 also includes power system 162 for powering the various components.
power system 162 optionally includes a power management system, one or more power sources (e.g., battery, alternating current (ac)), a power converter or inverter, a power status indicator (e.g., a light-emitting diode (led)), a recharging system, a power failure detection circuit, and any other components associated with the generation, management and distribution of power in portable devices. [0054] device 100 optionally also includes one or more optical sensors 164. fig. 1a shows an optical sensor coupled to optical sensor controller 158 in i/o subsystem 106. optical sensor 164 receives light from the environment, projected through one or more lenses, and converts the light to data representing an image. optical sensor 164 optionally includes charge-coupled device (ccd) or complementary metal-oxide semiconductor (cmos) phototransistors. in conjunction with imaging module 143 (also called a camera module), optical sensor 164 optionally captures still images or video. in some embodiments, an optical sensor is located on the front of the device so that the user’s image is, optionally, obtained for video conferencing while the user views the other video conference participants on the touch screen display. in some embodiments, an optical sensor is located on the back of device 100, opposite touch screen display 112 on the front of the device so that the touch screen display is enabled for use as a viewfinder for still and/or video image acquisition. in some embodiments, the position of optical sensor 164 can be changed by the user (e.g., by rotating the lens and the sensor in the device housing) so that a single optical sensor 164 is used along with the touch screen display for both video conferencing and still and/or video image acquisition. [0055] device 100 optionally also includes one or more contact intensity sensors 165. fig. 1a shows a contact intensity sensor coupled to intensity sensor controller 159 in i/o subsystem 106.
contact intensity sensor 165 optionally includes one or more piezoresistive strain gauges, capacitive force sensors, electric force sensors, piezoelectric force sensors, optical force sensors, capacitive touch-sensitive surfaces, or other intensity sensors (e.g., sensors used to measure the force (or pressure) of a contact on a touch-sensitive surface). contact intensity sensor 165 receives contact intensity information (e.g., pressure information or a proxy for pressure information) from the environment. in some embodiments, at least one contact intensity sensor is located on the back of device 100, opposite touch screen display 112, which is located on the front of device 100. in some embodiments, at least one contact intensity sensor is collocated with, or proximate to, a touch-sensitive surface (e.g., touch-sensitive display system 112). [0056] device 100 optionally also includes one or more proximity sensors 166. fig. 1a shows proximity sensor 166 coupled to peripherals interface 118. in some embodiments, the proximity sensor turns off and disables touch screen 112 when the multifunction device is placed near the user’s ear (e.g., when the user is making a phone call). alternately, proximity sensor 166 is, optionally, coupled to input controller 160 in i/o subsystem 106. [0057] device 100 optionally also includes one or more tactile output generators 167. fig. 1a shows a tactile output generator coupled to haptic feedback controller 161 in i/o subsystem 106. tactile output generator 167 optionally includes one or more electroacoustic devices such as speakers or other audio components and/or electromechanical devices that convert energy into linear motion such as a motor, solenoid, electroactive polymer, piezoelectric actuator, electrostatic actuator, or other tactile output generating component (e.g., a component that converts electrical signals into tactile outputs on the device).
tactile output generator 167 receives tactile feedback generation instructions from haptic feedback module 133 and generates tactile outputs on device 100 that are capable of being sensed by a user of device 100. in some embodiments, at least one tactile output generator sensor is located on the back of device 100, opposite touch screen display 112, which is located on the front of device 100. in some embodiments, at least one tactile output generator is collocated with, or proximate to, a touch-sensitive surface (e.g., touch-sensitive display system 112) and, optionally, generates a tactile output by moving the touch-sensitive surface vertically (e.g., in/out of a surface of device 100) or laterally (e.g., back and forth in the same plane as a surface of device 100). [0058] device 100 optionally also includes one or more accelerometers 168. fig. 1a shows accelerometer 168 coupled to peripherals interface 118. alternately, accelerometer 168 is, optionally, coupled to an input controller 160 in i/o subsystem 106. in some embodiments, information is displayed on the touch screen display in a portrait view or a landscape view based on an analysis of data received from the one or more accelerometers. device 100 optionally includes, in addition to accelerometer(s) 168, a magnetometer (not shown) and a gps (or glonass or other global navigation system) receiver (not shown) for obtaining information concerning the location and orientation (e.g., portrait or landscape) of device 100. [0059] in some embodiments, the software components stored in memory 102 include operating system 126, applications (or sets of instructions) 136, communication module (or set of instructions) 128, contact/motion module (or set of instructions) 130, text input module (or set of instructions) 134, graphics module (or set of instructions) 132, and global positioning system (gps) module (or set of instructions) 135. furthermore, in some embodiments, memory 102 (fig. 1a) or 370 (fig.
3) stores device/global internal state 157, as shown in figs. 1a and 3. device/global internal state 157 includes one or more of: active application state, indicating which applications, if any, are currently active; display state, indicating what applications, views or other information occupy various regions of touch screen display 112; sensor state, including information obtained from the device’s various sensors and input control devices 116; and location information concerning the device’s location and/or attitude. [0060] operating system 126 (e.g., windows, darwin, rtxc, linux, unix, os x, ios, or an embedded operating system such as vxworks) includes various software components and/or drivers for controlling and managing general system tasks (e.g., memory management, storage device control, power management, etc.) and facilitates communication between various hardware and software components. [0061] communication module 128 facilitates communication with other devices over one or more external ports 124 and also includes various software components for handling data received by rf circuitry 108 and/or external port 124. in some embodiments, the external port is a multi-pin (e.g., 30-pin) connector. external port 124 (e.g., universal serial bus (usb), firewire, etc.) is adapted for coupling directly to other devices or indirectly over a network (e.g., the internet, wireless lan, etc.). [0062] contact/motion module 130 optionally detects contact with touch screen 112 (in conjunction with display controller 156) and other touch-sensitive devices (e.g., a touchpad or physical click wheel). contact/motion module 130 receives contact data from the touch-sensitive surface.
contact/motion module 130 includes various software components for performing various operations related to detection of contact, such as determining if contact has occurred (e.g., detecting a finger-down event), determining an intensity of the contact (e.g., the force or pressure of the contact or a substitute for the force or pressure of the contact), determining if there is movement of the contact and tracking the movement across the touch-sensitive surface (e.g., detecting one or more finger-dragging events), and determining if the contact has ceased (e.g., detecting a finger-up event or a break in contact). determining movement of the point of contact, which is represented by a series of contact data, optionally includes determining speed (magnitude), velocity (magnitude and direction), and/or an acceleration (a change in magnitude and/or direction) of the point of contact. these operations are, optionally, applied to single contacts (e.g., one finger contacts) or to multiple simultaneous contacts (e.g., “multitouch”/multiple finger contacts). in some embodiments, contact/motion module 130 and display controller 156 detect contact on a touchpad. [0063] in some embodiments, contact/motion module 130 uses a set of one or more intensity thresholds to determine whether an operation has been performed by a user (e.g., to determine whether a user has “clicked” on an icon). for example, a mouse “click” threshold of a trackpad or touch screen display can be set to any of a large range of predefined threshold values without changing the trackpad or touch screen display hardware. in some embodiments, at least a subset of the intensity thresholds are determined in accordance with software parameters (e.g., the intensity thresholds are not determined by the activation thresholds of particular physical actuators and can be adjusted without changing the physical hardware of device 100).
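The movement determination described above, reducing a series of contact data points to speed (magnitude) and velocity (magnitude and direction), might be sketched as follows. The sampling interval and the sample track are made-up values for illustration, not part of the disclosure.

```python
# Hypothetical sketch: derive average velocity and speed from a series of
# (x, y) contact points sampled at a fixed interval dt (seconds).
import math

def velocity(points, dt):
    """Average velocity (vx, vy) over a series of (x, y) contact points."""
    (x0, y0), (x1, y1) = points[0], points[-1]
    elapsed = dt * (len(points) - 1)
    return ((x1 - x0) / elapsed, (y1 - y0) / elapsed)

def speed(points, dt):
    """Speed is the magnitude of the velocity vector."""
    vx, vy = velocity(points, dt)
    return math.hypot(vx, vy)

# A contact sampled every 10 ms while the finger drags right and up:
track = [(0.0, 0.0), (3.0, 4.0), (6.0, 8.0)]
print(speed(track, dt=0.01))  # 10 units of travel over 20 ms, roughly 500 units/s
```

Acceleration (a change in magnitude and/or direction, as the text notes) would follow the same pattern, differencing successive velocity samples rather than positions.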
additionally, in some implementations, a user of the device is provided with software settings for adjusting one or more of the set of intensity thresholds (e.g., by adjusting individual intensity thresholds and/or by adjusting a plurality of intensity thresholds at once with a system-level click “intensity” parameter). [0064] contact/motion module 130 optionally detects a gesture input by a user. different gestures on the touch-sensitive surface have different contact patterns (e.g., different motions, timings, and/or intensities of detected contacts). for example, detecting a finger tap gesture includes detecting a finger-down event followed by detecting a finger-up (liftoff) event at the same position (or substantially the same position) as the finger-down event (e.g., at the position of an icon). thus, a gesture is, optionally, detected by detecting a particular contact pattern. as another example, detecting a finger swipe gesture on the touch-sensitive surface includes detecting a finger-down event followed by detecting one or more finger-dragging events, and subsequently followed by detecting a finger-up (liftoff) event. [0065] as used herein, the term “graphics” includes any object that can be displayed to a user, including, without limitation, text, web pages, icons (such as user-interface objects including soft keys), digital images, videos, animations, and the like. graphics module 132 includes various known software components for rendering and displaying graphics on touch screen 112 or other display, including components for changing the visual impact (e.g., brightness, transparency, saturation, contrast, or other visual property) of graphics that are displayed. [0066] in some embodiments, graphics module 132 stores data representing graphics to be used. 
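The contact patterns described above, a tap as a finger-down event followed by a finger-up event at substantially the same position, and a swipe as finger-down, one or more finger-dragging events, then finger-up, can be illustrated with a toy classifier. The event representation and the distance tolerance are assumptions made for this sketch only.

```python
# Hypothetical sketch: classify a sequence of contact events as a tap or a
# swipe based on the contact pattern. Event tuples and the tolerance are
# illustrative, not the device's actual representation.
import math

TAP_TOLERANCE = 5.0  # assumed max movement (in points) still counted as a tap

def classify(events):
    """events: list of (kind, x, y) with kind in {'down', 'drag', 'up'}."""
    if not events or events[0][0] != "down" or events[-1][0] != "up":
        return None  # not a complete gesture
    (_, x0, y0), (_, x1, y1) = events[0], events[-1]
    moved = math.hypot(x1 - x0, y1 - y0)
    # Tap: ends where it started (within tolerance) with no dragging.
    if moved <= TAP_TOLERANCE and all(e[0] != "drag" for e in events[1:-1]):
        return "tap"
    return "swipe"

print(classify([("down", 10, 10), ("up", 11, 10)]))
print(classify([("down", 10, 10), ("drag", 60, 10), ("up", 120, 10)]))
```

Timings and intensities, which the text also names as parts of a contact pattern, could be added as further fields on each event without changing the overall shape of the check.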
graphics module 132 receives, from applications etc., one or more codes specifying graphics to be displayed along with, if necessary, coordinate data and other graphic property data, and then generates screen image data to output to display controller 156. each graphic is, optionally, assigned a corresponding code. [0067] haptic feedback module 133 includes various software components for generating instructions used by tactile output generator(s) 167 to produce tactile outputs, in response to user interactions with device 100, at one or more locations on device 100. [0068] text input module 134, which is, optionally, a component of graphics module 132, provides soft keyboards for entering text in various applications (e.g., contacts 137, browser 147, im 141, e-mail 140, and any other application that needs text input). [0069] gps module 135 determines the location of the device and provides this information for use in various applications (e.g., to camera 143 as picture/video metadata; to telephone 138 for use in location-based dialing; and to applications that provide location-based services such as local yellow page widgets, weather widgets, and map/navigation widgets). 
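The code-to-graphic mapping mentioned above (each graphic is, optionally, assigned a corresponding code, and graphics module 132 generates screen image data from received codes and property data) might look schematically like this. The registry, the codes, and the output format are invented purely for illustration.

```python
# Hypothetical sketch: a registry maps a graphic code to a renderer, and a
# render pass turns coded requests (plus property data) into a flat
# description of screen content. All names here are made up.

RENDERERS = {
    "TXT": lambda props: f"text({props['string']})",
    "ICO": lambda props: f"icon({props['name']})",
}

def render(requests):
    """Generate screen content descriptions from (code, properties) requests."""
    return [RENDERERS[code](props) for code, props in requests]

screen = render([("TXT", {"string": "hello"}), ("ICO", {"name": "mail"})])
print(screen)
```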
[0070] applications 136 optionally include the following modules (or sets of instructions), or a subset or superset thereof:
• video player module;
• music player module;
• contacts module 137 (sometimes called an address book or contact list);
• telephone module 138;
• video conference module 139;
• e-mail client module 140;
• instant messaging (im) module 141;
• workout support module 142;
• camera module 143 for still and/or video images;
• image management module 144;
• browser module 147;
• calendar module 148;
• widget modules 149, which optionally include one or more of: weather widget 149-1, stocks widget 149-2, calculator widget 149-3, alarm clock widget 149-4, dictionary widget 149-5, and other widgets obtained by the user, as well as user-created widgets 149-6;
• widget creator module 150 for making user-created widgets 149-6;
• search module 151;
• video and music player module 152, which merges music player module and video player module;
• notes module 153;
• map module 154; and/or
• online video module 155.
[0071] examples of other applications 136 that are, optionally, stored in memory 102 include java-enabled applications, other word processing applications, drawing applications, presentation applications, other image editing applications, encryption, digital rights management, voice recognition, and voice replication.
[0072] in conjunction with touch screen 112, contact/motion module 130, graphics module 132, text input module 134, and display controller 156, contacts module 137 is, optionally, used to manage an address book or contact list (e.g., stored in application internal state 192 of contacts module 137 in memory 102 or memory 370), including: adding name(s) to the address book; deleting name(s) from the address book; associating telephone number(s), physical address(es), e-mail address(es) or other information with a name; associating an image with a name; categorizing and sorting names; providing telephone numbers or e-mail addresses to initiate and/or facilitate communications by telephone 138, video conference module 139, e-mail 140, or im 141; and so forth. [0073] as noted above, the wireless communication optionally uses any of a plurality of communications standards, protocols, and technologies. in conjunction with rf circuitry 108, audio circuitry 110, speaker 111, microphone 113, touch screen 112, contact/motion module 130, graphics module 132, text input module 134, and display controller 156, telephone module 138 is, optionally, used to enter a sequence of characters corresponding to a telephone number, access one or more telephone numbers in contacts module 137, modify a telephone number that has been entered, dial a respective telephone number, conduct a conversation, and disconnect or hang up when the conversation is completed. [0074] in conjunction with rf circuitry 108, audio circuitry 110, speaker 111, microphone 113, touch screen 112, contact/motion module 130, graphics module 132, text input module 134, contacts module 137, telephone module 138, display controller 156, optical sensor controller 158, and optical sensor 164, video conference module 139 includes executable instructions to initiate, conduct, and terminate a video conference between a user and one or more other participants in accordance with user instructions.
[0075] in conjunction with rf circuitry 108, touch screen 112, contact/motion module 130, graphics module 132, text input module 134, and display controller 156, e-mail client module 140 includes executable instructions to create, send, receive, and manage e-mail in response to user instructions. in conjunction with image management module 144, e-mail client module 140 makes it very easy to create and send e-mails with still or video images taken with camera module 143. [0076] as used herein, “instant messaging” refers to both telephony-based messages (e.g., messages sent using sms or mms) and internet-based messages (e.g., messages sent using xmpp, simple, or imps). in conjunction with rf circuitry 108, touch screen 112, contact/motion module 130, graphics module 132, text input module 134, and display controller 156, the instant messaging module 141 includes executable instructions to enter a sequence of characters corresponding to an instant message, to modify previously entered characters, to transmit a respective instant message (for example, using a short message service (sms) or multimedia message service (mms) protocol for telephony-based instant messages or using simple, xmpp, or imps for internet-based instant messages), to receive instant messages, and to view received instant messages. in some embodiments, transmitted and/or received instant messages optionally include graphics, photos, audio files, video files and/or other attachments as are supported in an mms and/or an enhanced messaging service (ems).
[0077] in conjunction with rf circuitry 108, touch screen 112, contact/motion module 130, graphics module 132, text input module 134, gps module 135, map module 154, display controller 156, and music player module, workout support module 142 includes executable instructions to create workouts (e.g., with time, distance, and/or calorie burning goals); select and play music for a workout; communicate with workout sensors (sports devices); receive workout sensor data; calibrate sensors used to monitor a workout; and display, store, and transmit workout data. [0078] in conjunction with touch screen 112, contact/motion module 130, graphics module 132, image management module 144, display controller 156, optical sensor(s) 164, and optical sensor controller 158, camera module 143 includes executable instructions to capture still images or video (including a video stream) and store them into memory 102, modify characteristics of a still image or video, or delete a still image or video from memory 102. [0079] in conjunction with touch screen 112, contact/motion module 130, graphics module 132, text input module 134, display controller 156, and camera module 143, image management module 144 includes executable instructions to arrange, label, delete, modify (e.g., edit), or otherwise manipulate, present (e.g., in a digital slide show or album), and store still and/or video images. [0080] in conjunction with rf circuitry 108, touch screen 112, contact/motion module 130, graphics module 132, text input module 134, and display controller 156, browser module 147 includes executable instructions to browse the internet in accordance with user instructions, including searching, linking to, receiving, and displaying web pages or portions thereof, as well as attachments and other files linked to web pages.
[0081] in conjunction with rf circuitry 108, touch screen 112, contact/motion module 130, graphics module 132, text input module 134, e-mail client module 140, display controller 156, and browser module 147, calendar module 148 includes executable instructions to create, display, modify, and store calendars and data associated with calendars (e.g., calendar entries, to-do lists, etc.) in accordance with user instructions. [0082] in conjunction with rf circuitry 108, touch screen 112, display controller 156, contact/motion module 130, graphics module 132, text input module 134, and browser module 147, widget modules 149 are mini-applications that are, optionally, downloaded and used by a user (e.g., weather widget 149-1, stocks widget 149-2, calculator widget 149-3, alarm clock widget 149-4, and dictionary widget 149-5) or created by the user (e.g., user-created widget 149-6). in some embodiments, a widget includes an xml (extensible markup language) file and a javascript file (e.g., yahoo! widgets). in some embodiments, a widget includes an html (hypertext markup language) file, a css (cascading style sheets) file, and a javascript file. [0083] in conjunction with rf circuitry 108, touch screen 112, contact/motion module 130, graphics module 132, text input module 134, display controller 156, and browser module 147, the widget creator module 150 is, optionally, used by a user to create widgets (e.g., turning a user-specified portion of a web page into a widget). [0084] in conjunction with touch screen 112, contact/motion module 130, graphics module 132, text input module 134, and display controller 156, search module 151 includes executable instructions to search for text, sound, music, image, video, and/or other files in memory 102 that match one or more search criteria (e.g., one or more user-specified search terms) in accordance with user instructions. [0085] in some embodiments, device 100 optionally includes the functionality of an mp3 player.
in conjunction with touch screen 112, display controller 156, contact/motion module 130, graphics module 132, audio circuitry 110, speaker 111, rf circuitry 108, and browser module 147, video and music player module 152 includes executable instructions that allow the user to download and play back recorded music and other sound files stored in one or more file formats, such as mp3 or aac files, and executable instructions to display, present, or otherwise play back videos (e.g., on touch screen 112 or on an external, connected display via external port 124). [0086] in conjunction with touch screen 112, display controller 156, contact/motion module 130, graphics module 132, and text input module 134, notes module 153 includes executable instructions to create and manage to-do lists, notes, and the like in accordance with user instructions. [0087] in conjunction with rf circuitry 108, touch screen 112, contact/motion module 130, graphics module 132, text input module 134, gps module 135, browser module 147, and display controller 156, map module 154 are, optionally, used to receive, display, modify, and store maps and data associated with maps (e.g., driving directions, data on stores and other points of interest at or near a particular location, and other location-based data) in accordance with user instructions. [0088] in conjunction with touch screen 112, contact/motion module 130, graphics module 132, audio circuitry 110, speaker 111, rf circuitry 108, text input module 134, e-mail client module 140, browser module 147, and display controller 156, online video module 155 includes instructions that allow the user to receive, access, browse (e.g., by streaming and/or download), play back (e.g., on the touch screen or on an external, connected display via external port 124), send an e-mail with a link to a particular online video, and otherwise manage online videos in one or more file formats, such as h.264.
in some embodiments, instant messaging module 141, rather than e-mail client module 140, is used to send a link to a particular online video. [0089] each of the above-identified modules and applications corresponds to a set of executable instructions for performing one or more functions described above and the methods described in this application (e.g., the computer-implemented methods and other information processing methods described herein). these modules (e.g., sets of instructions) need not be implemented as separate software programs, procedures, or modules, and thus various subsets of these modules are, optionally, combined or otherwise rearranged in various embodiments. in some embodiments, memory 102 optionally stores a subset of the modules and data structures identified above. for example, video player module is, optionally, combined with music player module into a single module (e.g., video and music player module 152, fig. 1a). furthermore, memory 102 optionally stores additional modules and data structures not described above. [0090] by using a touch screen and/or a touchpad as the primary input control device for operation of device 100, the number of physical input control devices (such as push buttons, dials, and the like) on device 100 is, optionally, reduced. in some embodiments, device 100 is a device where operation of a predefined set of functions on the device is performed exclusively through a touch screen and/or a touchpad. [0091] the predefined set of functions that are performed exclusively through a touch screen and/or a touchpad optionally include navigation between user interfaces. in some embodiments, the touchpad, when touched by the user, navigates device 100 to a main, home, or root menu from any user interface that is displayed on device 100. in some embodiments, the menu button is a physical push button or other physical input control device instead of a touchpad. 
in other embodiments, a “menu button” is implemented using a touchpad. [0092] fig. 1b is a block diagram illustrating exemplary components for event handling in accordance with some embodiments. in some embodiments, memory 102 (fig. 1a) or 370 (fig. 3) includes a respective application 136-1 (e.g., any of the aforementioned applications 137-151, 155, 380-390) and event sorter 170 (e.g., in operating system 126). [0093] event sorter 170 includes event monitor 171 and event dispatcher module 174. event sorter 170 receives event information and determines the application 136-1 and application view 191 of application 136-1 to which to deliver the event information. in some embodiments, application 136-1 includes application internal state 192, which indicates the current application view(s) displayed on touch-sensitive display 112 when the application is active or executing. in some embodiments, device/global internal state 157 is used by event sorter 170 to determine which application(s) is (are) currently active, and application internal state 192 is used by event sorter 170 to determine application views 191 to which to deliver event information. [0094] in some embodiments, application internal state 192 includes additional information, such as one or more of: user interface state information that indicates information being displayed or that is ready for display by application 136-1, resume information to be used when application 136-1 resumes execution, a state queue for enabling the user to go back to a prior state or view of application 136-1, and a redo/undo queue of previous actions taken by the user. [0095] event monitor 171 receives event information from peripherals interface 118. peripherals interface 118 transmits information it receives from i/o subsystem 106 or a sensor, such as proximity sensor 166, accelerometer(s) 168, and/or microphone 113 (through audio circuitry 110).
event information includes information about a sub-event (e.g., a user touch on touch-sensitive display 112, as part of a multi-touch gesture). information that peripherals interface 118 receives from i/o subsystem 106 includes information from touch-sensitive display 112 or a touch-sensitive surface. [0096] in some embodiments, peripherals interface 118 transmits event information only when there is a significant event (e.g., receiving an input above a predetermined noise threshold and/or for more than a predetermined duration). in other embodiments, event monitor 171 sends requests to the peripherals interface 118 at predetermined intervals. in response, peripherals interface 118 transmits event information. [0097] in some embodiments, event sorter 170 also includes an active event recognizer determination module 173 and/or a hit view determination module 172. [0098] views are made up of controls and other elements that a user can see on the display. hit view determination module 172 provides software procedures for determining where a sub-event has taken place within one or more views when touch-sensitive display 112 displays more than one view. [0099] another aspect of the user interface associated with an application is a set of views, sometimes herein called application views or user interface windows, in which information is displayed and touch-based gestures occur. in some embodiments, the lowest level view in which a touch is detected is, optionally, called the hit view, and the set of events that are recognized as proper inputs are, optionally, determined based, at least in part, on the hit view of the initial touch that begins a touch-based gesture. thus, the application views (of a respective application) in which a touch is detected optionally correspond to programmatic levels within a programmatic or view hierarchy of the application.
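the hit-view rule described above — the hit view is the lowest (deepest) view in the hierarchy whose bounds contain the location of the initiating sub-event — can be sketched as follows. this is a minimal illustrative model only; the class and function names are assumptions for the example, not the modules recited in this specification.

```python
# Illustrative sketch of hit-view determination: find the deepest view in a
# view hierarchy whose frame contains the touch point. Names are hypothetical.

class View:
    def __init__(self, name, frame, subviews=()):
        self.name = name          # label for the view
        self.frame = frame        # (x, y, width, height)
        self.subviews = list(subviews)

    def contains(self, point):
        x, y = point
        fx, fy, fw, fh = self.frame
        return fx <= x < fx + fw and fy <= y < fy + fh

def hit_view(view, point):
    """Return the deepest view containing `point`, or None."""
    if not view.contains(point):
        return None
    # Search subviews first: a deeper hit wins over its ancestors.
    for sub in view.subviews:
        found = hit_view(sub, point)
        if found is not None:
            return found
    return view

button = View("button", (10, 10, 40, 20))
window = View("window", (0, 0, 320, 480),
              [View("panel", (0, 0, 100, 100), [button])])

print(hit_view(window, (20, 15)).name)    # the deepest view under the touch
print(hit_view(window, (200, 200)).name)  # only the window contains this point
```

once identified this way, the hit view would typically receive all subsequent sub-events of the same touch, consistent with the behavior described for hit view determination module 172.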
[00100] when an application has multiple views organized in a hierarchy, hit view determination module 172 identifies a hit view as the lowest view in the hierarchy which should handle the sub-event. hit view determination module 172 receives information related to sub-events of a touch-based gesture. in most circumstances, the hit view is the lowest level view in which an initiating sub-event occurs (e.g., the first sub-event in the sequence of sub-events that form an event or potential event). once the hit view is identified by the hit view determination module 172, the hit view typically receives all sub-events related to the same touch or input source for which it was identified as the hit view. [00101] active event recognizer determination module 173 determines which view or views within a view hierarchy should receive a particular sequence of sub-events. in some embodiments, active event recognizer determination module 173 determines that all views that include the physical location of a sub-event are actively involved views, and therefore determines that all actively involved views should receive a particular sequence of sub-events. in other embodiments, active event recognizer determination module 173 determines that only the hit view should receive a particular sequence of sub-events. in other embodiments, even if touch sub-events were entirely confined to the area associated with one particular view, views higher in the hierarchy would still remain as actively involved views. [00102] event dispatcher module 174 dispatches the event information to an event recognizer (e.g., event recognizer 180). in some embodiments, event dispatcher module 174 stores in an event queue the event information, which is retrieved by a respective event receiver 182. 
in embodiments including active event recognizer determination module 173, event dispatcher module 174 delivers the event information to an event recognizer determined by active event recognizer determination module 173. [00103] in some embodiments, operating system 126 includes event sorter 170. alternatively, application 136-1 includes event sorter 170. in yet other embodiments, event sorter 170 is a part of another module stored in memory 102, such as contact/motion module 130, or is a stand-alone module. [00104] in some embodiments, application 136-1 includes a plurality of event handlers 190 and one or more application views 191, each of which includes instructions for handling touch events that occur within a respective view of the application’s user interface. typically, a respective application view 191 includes a plurality of event recognizers 180. each application view 191 of the application 136-1 includes one or more event recognizers 180. in other embodiments, one or more of event recognizers 180 are part of a separate module, such as a user interface kit (not shown) or a higher level object from which application 136-1 inherits methods and other properties. in some embodiments, a respective event handler 190 includes one or more of: data updater 176, object updater 177, gui updater 178, and/or event data 179 received from event sorter 170. event handler 190 optionally utilizes or calls data updater 176, object updater 177, or gui updater 178 to update the application internal state 192. also, in some embodiments, one or more of data updater 176, object updater 177, and gui updater 178 are included in a respective application view 191. alternatively, one or more of the application views 191 include one or more respective event handlers 190. [00105] a respective event recognizer 180 receives event information (e.g., event data 179) from event sorter 170 and identifies an event from the event information. 
in some embodiments, event recognizer 180 also includes at least a subset of: metadata 183, and event delivery instructions 188 (which optionally include sub-event delivery instructions). event recognizer 180 includes event receiver 182 and event comparator 184. [00106] event receiver 182 receives event information from event sorter 170. the event information includes information about a sub-event, for example, a touch or a touch movement. when the sub-event concerns motion of a touch, the event information optionally also includes speed and direction of the sub-event. depending on the sub-event, the event information also includes additional information, such as location of the sub-event. in some embodiments, events include rotation of the device from one orientation to another (e.g., from a portrait orientation to a landscape orientation, or vice versa), and the event information includes corresponding information about the current orientation (also called device attitude) of the device. [00107] in some embodiments, event comparator 184 includes event definitions 186. event comparator 184 compares the event information to predefined event or sub-event definitions and, based on the comparison, determines an event or sub-event, or determines or updates the state of an event or sub-event. event definitions 186 contain definitions of events (e.g., predefined sequences of sub-events), for example, event 1 (187-1), event 2 (187-2), and others. in some embodiments, sub-events in an event (187) include, for example, touch begin, touch end, touch movement, touch cancellation, and multiple touching. in one example, the definition for event 2 (187-2) is a dragging on a displayed object. the dragging, for example, comprises a touch (or contact) on the displayed object for a predetermined phase, a movement of the touch across touch-sensitive display 112, and liftoff of the touch (touch end). 
in another example, the definition for event 1 (187-1) is a double tap on a displayed object. the double tap, for example, comprises a first touch (touch begin) on the displayed object for a predetermined phase, a first liftoff (touch end) for a predetermined phase, a second touch (touch begin) on the displayed object for a predetermined phase, and a second liftoff (touch end) for a predetermined phase. in some embodiments, the event also includes information for one or more associated event handlers 190. [00108] in some embodiments, event comparator 184 performs a hit test to determine which user-interface object is associated with a sub-event. for example, in an application view in which three user-interface objects are displayed on touch-sensitive display 112, when a touch is detected on touch-sensitive display 112, event comparator 184 performs a hit test to determine which of the three user-interface objects is associated with the touch (sub-event). if each displayed object is associated with a respective event handler 190, the event comparator uses the result of the hit test to determine which event handler 190 should be activated. for example, event comparator 184 selects an event handler associated with the sub-event and the object triggering the hit test. in some embodiments, event definition 187 includes a definition of an event for a respective user-interface object. [00109] in some embodiments, the definition for a respective event (187) also includes delayed actions that delay delivery of the event information until after it has been determined whether the sequence of sub-events does or does not correspond to the event recognizer’s event type. 
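the matching of sub-event sequences against predefined event definitions, in the spirit of event comparator 184 and event definitions 186 described above, can be sketched as follows. the sub-event names and sequences are assumptions for the example (timing of the "predetermined phase" is omitted), not an actual api.

```python
# Hypothetical sketch of comparing a sub-event sequence against predefined
# event definitions, e.g., a double tap or a drag on a displayed object.

EVENT_DEFINITIONS = {
    # double tap: touch begin/end twice on the same displayed object
    "double_tap": ["touch_begin", "touch_end", "touch_begin", "touch_end"],
    # drag: a touch, movement across the surface, then liftoff
    "drag": ["touch_begin", "touch_move", "touch_end"],
}

def recognize(sub_events):
    """Return a matching event name, 'possible' while a definition is still a
    viable prefix, or 'failed' when no definition can match (after which
    subsequent sub-events of the gesture would be disregarded)."""
    for name, definition in EVENT_DEFINITIONS.items():
        if sub_events == definition:
            return name
    if any(d[:len(sub_events)] == sub_events
           for d in EVENT_DEFINITIONS.values()):
        return "possible"   # keep tracking further sub-events
    return "failed"

print(recognize(["touch_begin", "touch_move", "touch_end"]))  # drag
print(recognize(["touch_begin", "touch_end"]))  # possible (prefix of double_tap)
print(recognize(["touch_move"]))                # failed
```

the "failed" return corresponds to the state in which a recognizer stops tracking the gesture, while other recognizers still active for the hit view may continue to process its sub-events.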
[00110] when a respective event recognizer 180 determines that the series of sub-events do not match any of the events in event definitions 186, the respective event recognizer 180 enters an event failed, event impossible, or event ended state, after which it disregards subsequent sub-events of the touch-based gesture. in this situation, other event recognizers, if any, that remain active for the hit view continue to track and process sub-events of an ongoing touch-based gesture. [00111] in some embodiments, a respective event recognizer 180 includes metadata 183 with configurable properties, flags, and/or lists that indicate how the event delivery system should perform sub-event delivery to actively involved event recognizers. in some embodiments, metadata 183 includes configurable properties, flags, and/or lists that indicate whether sub-events are delivered to varying levels in the view or programmatic hierarchy. in some embodiments, metadata 183 includes configurable properties, flags, and/or lists that indicate how event recognizers interact, or are enabled to interact, with one another. [00112] in some embodiments, a respective event recognizer 180 activates event handler 190 associated with an event when one or more particular sub-events of an event are recognized. activating an event handler 190 is distinct from sending (and deferred sending) sub-events to a respective hit view. in some embodiments, a respective event recognizer 180 delivers event information associated with the event to event handler 190. in some embodiments, event recognizer 180 throws a flag associated with the recognized event, and event handler 190 associated with the flag catches the flag and performs a predefined process. [00113] in some embodiments, event delivery instructions 188 include sub-event delivery instructions that deliver event information about a sub-event without activating an event handler.
instead, the sub-event delivery instructions deliver event information to event handlers associated with the series of sub-events or to actively involved views. event handlers associated with actively involved views or with the series of sub-events receive the event information and perform a predetermined process. [00114] in some embodiments, object updater 177 creates and updates objects used in application 136-1. for example, object updater 177 creates a new user-interface object or updates the position of a user-interface object. gui updater 178 updates the gui. for example, gui updater 178 prepares display information and sends it to graphics module 132 for display on a touch-sensitive display. in some embodiments, data updater 176 creates and updates data used in application 136-1. for example, data updater 176 updates the telephone number used in contacts module 137, or stores a video file used in video player module. [00115] in some embodiments, event handler(s) 190 includes or has access to data updater 176, object updater 177, and gui updater 178. in other embodiments, they are included in two or more software modules. in some embodiments, data updater 176, object updater 177, and gui updater 178 are included in a single module of a respective application 136-1 or application view 191. [00116] it shall be understood that the foregoing discussion regarding event handling of user touches on touch-sensitive displays also applies to other forms of user inputs to operate multifunction devices 100 with input devices, not all of which are initiated on touch screens. for example, oral instructions; mouse movement and mouse button presses, optionally coordinated with single or multiple keyboard presses or holds; pen stylus inputs; contact movements such as taps, drags, scrolls, etc. 
on touchpads; movement of the device; detected eye movements; biometric inputs; and/or any combination thereof are optionally utilized as inputs corresponding to sub-events which define an event to be recognized. [00117] fig. 2 illustrates a portable multifunction device 100 having a touch screen 112 in accordance with some embodiments. the touch screen optionally displays one or more graphics within user interface (ui) 200. in this embodiment, as well as others described below, a user is enabled to select one or more of the graphics by making a gesture on the graphics, for example, with one or more fingers 202 (not drawn to scale in the figure) or one or more styluses 203 (not drawn to scale in the figure). in some embodiments, the gesture optionally includes one or more taps, one or more swipes (from left to right, right to left, upward and/or downward), and/or a rolling of a finger (from right to left, left to right, upward and/or downward) that has made contact with device 100. in some embodiments, selection of one or more graphics occurs when the user breaks contact with the one or more graphics. in some implementations or circumstances, inadvertent contact with a graphic does not select the graphic. for example, a swipe gesture that sweeps over an application icon optionally does not select the corresponding application when the gesture corresponding to selection is a tap. [00118] device 100 optionally also includes one or more physical buttons, such as “home” or menu button 204. alternatively, in some embodiments, the menu button is implemented as a soft key in a gui displayed on touch screen 112. as described previously, menu button 204 is, optionally, used to navigate to any application 136 in a set of applications that are, optionally, executed on device 100.
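the selection rule described above — a contact that lifts off near where it began counts as a tap and selects the graphic, while a contact that sweeps across an icon counts as a swipe and does not select it — can be sketched as follows. the distance threshold is an assumed value for illustration only.

```python
# Minimal sketch, under an assumed threshold, of distinguishing a tap (which
# selects a graphic on liftoff) from a swipe (which does not).

TAP_THRESHOLD = 10.0  # max movement, in points, still treated as a tap (assumed)

def classify_gesture(touch_down, touch_up):
    dx = touch_up[0] - touch_down[0]
    dy = touch_up[1] - touch_down[1]
    distance = (dx * dx + dy * dy) ** 0.5
    return "tap" if distance <= TAP_THRESHOLD else "swipe"

def selects_icon(touch_down, touch_up):
    # Selection occurs when the user breaks contact, and only for a tap:
    # a swipe that merely sweeps over an application icon does not select it.
    return classify_gesture(touch_down, touch_up) == "tap"

print(classify_gesture((100, 100), (103, 102)))  # tap
print(selects_icon((100, 100), (180, 100)))      # False: swipe over the icon
```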
[00119] in some embodiments, device 100 includes touch screen 112, menu button 204, push button 206 for powering the device on/off and locking the device, volume adjustment button(s) 208, subscriber identity module (sim) card slot 210, headset jack 212, and docking/charging external port 124. device 100 also, optionally, includes one or more contact intensity sensors 165 for detecting intensity of contacts on touch screen 112 and/or one or more tactile output generators 167 for generating tactile outputs for a user of device 100. push button 206 is, optionally, used to turn the power on/off on the device by depressing the button and holding the button in the depressed state for a predefined time interval; to lock the device by depressing the button and releasing the button before the predefined time interval has elapsed; and/or to unlock the device or initiate an unlock process. in an alternative embodiment, device 100 also accepts verbal input for activation or deactivation of some functions through microphone 113. [00120] fig. 3 is a block diagram of an exemplary multifunction device with a display and a touch-sensitive surface in accordance with some embodiments. in some embodiments, device 300 is a laptop computer, a desktop computer, a tablet computer, a multimedia player device, a navigation device, an educational device (such as a child’s learning toy), a gaming system, or a control device (e.g., a home or industrial controller). device 300 need not be portable. device 300 typically includes one or more processing units (cpus) 310, one or more network or other communications interfaces 360, memory 370, and one or more communication buses 320 for interconnecting these components. communication buses 320 optionally include circuitry (sometimes called a chipset) that interconnects and controls communications between system components. device 300 includes input/output (i/o) interface 330 comprising display 340, which is typically a touch screen display.
i/o interface 330 also optionally includes a keyboard and/or mouse (or other pointing device) 350 and touchpad 355, tactile output generator 357 for generating tactile outputs on device 300 (e.g., similar to tactile output generator(s) 167 described above with reference to fig. 1a), sensors 359 (e.g., optical, acceleration, proximity, touch-sensitive, and/or contact intensity sensors similar to contact intensity sensor(s) 165 described above with reference to fig. 1a). memory 370 optionally includes one or more storage devices remotely located from cpu(s) 310. memory 370 includes high-speed random access memory, such as dram, sram, ddr ram, or other random access solid state memory devices; and optionally includes non-volatile memory, such as one or more magnetic disk storage devices, optical disk storage devices, flash memory devices, or other non-volatile solid state storage devices. in some embodiments, memory 370 stores programs, modules, and data structures analogous to the programs, modules, and data structures stored in memory 102 of portable multifunction device 100 (fig. 1a), or a subset thereof. furthermore, memory 370 optionally stores additional programs, modules, and data structures not present in memory 102 of portable multifunction device 100. for example, memory 370 of device 300 optionally stores drawing module 380, presentation module 382, word processing module 384, website creation module 386, disk authoring module 388, and/or spreadsheet module 390, while memory 102 of portable multifunction device 100 (fig. 1a) optionally does not store these modules. [00121] each of the above-identified elements in fig. 3 is, optionally, stored in one or more of the previously mentioned memory devices. each of the above-identified modules corresponds to a set of instructions for performing a function described above. in some embodiments, memory 370 optionally stores a subset of the modules and data structures identified above.
furthermore, memory 370 optionally stores additional modules and data structures not described above. the above-identified modules or programs (e.g., sets of instructions) need not be implemented as separate software programs, procedures, or modules, and thus various subsets of these modules are, optionally, combined or otherwise rearranged in various embodiments. [00122] attention is now directed towards embodiments of user interfaces that are, optionally, implemented on, for example, portable multifunction device 100. [00123] fig. 4a illustrates an exemplary user interface for a menu of applications on portable multifunction device 100 in accordance with some embodiments. similar user interfaces are, optionally, implemented on device 300. in some embodiments, user interface 400 includes the following elements, or a subset or superset thereof:
• signal strength indicator(s) 402 for wireless communication(s), such as cellular and wi-fi signals;
• time 404;
• bluetooth indicator 405;
• battery status indicator 406;
• tray 408 with icons for frequently used applications, such as:
o icon 416 for telephone module 138, which optionally includes an indicator 414 of the number of missed calls or voicemail messages;
o icon 418 for e-mail client module 140, which optionally includes an indicator 410 of the number of unread e-mails;
o icon 422 for video and music player module 152; and
o icon 420 for browser module 147; and
• icons for other applications, such as:
o icon 424 for im module 141;
o icon 442 for workout support module 142;
o icon 430 for camera module 143;
o icon 428 for image management module 144;
o icon 426 for calendar module 148;
o icon 438 for weather widget 149-1;
o icon 434 for stocks widget 149-2;
o icon 440 for alarm clock widget 149-4;
o icon 444 for notes module 153;
o icon 436 for map module 154;
o icon 432 for online video module 155; and
o icon 446 for a settings application or module, which provides access to settings for device 100 and its various
applications 136. [00124] in some embodiments, a label for a respective application icon includes a name of an application corresponding to the respective application icon. in some embodiments, a label for a particular application icon is distinct from a name of an application corresponding to the particular application icon. [00125] fig. 4b illustrates an exemplary user interface on a device (e.g., device 300, fig. 3) with a touch-sensitive surface 451 (e.g., a tablet or touchpad 355, fig. 3) that is separate from the display 450 (e.g., touch screen display 112). device 300 also, optionally, includes one or more tactile output generators 357 for generating tactile outputs for a user of device 300 and/or one or more contact intensity sensors (e.g., one or more of sensors 359) for detecting intensity of contacts on touch-sensitive surface 451. [00126] although some of the examples that follow will be given with reference to inputs on touch screen display 112 (where the touch-sensitive surface and the display are combined), in some embodiments, the device detects inputs on a touch-sensitive surface that is separate from the display, as shown in fig. 4b. in accordance with these embodiments, the device detects contacts (e.g., 460 and 462 in fig. 4b) with the touch-sensitive surface 451 at locations that correspond to respective locations on the display (e.g., in fig. 4b, 460 corresponds to 468 and 462 corresponds to 470). in some embodiments, the touch-sensitive surface (e.g., 451 in fig. 4b) has a primary axis (e.g., 452 in fig. 4b) that corresponds to a primary axis (e.g., 453 in fig. 4b) on the display (e.g., 450). in this way, user inputs (e.g., contacts 460 and 462, and movements thereof) detected by the device on the touch-sensitive surface (e.g., 451 in fig. 4b) are used by the device to manipulate the user interface on the display (e.g., 450 in fig. 4b) of the multifunction device when the touch-sensitive surface is separate from the display. 
it should be understood that similar methods are, optionally, used for other user interfaces described herein. [00127] additionally, while the following examples are given primarily with reference to finger inputs (e.g., finger contacts, finger tap gestures, finger swipe gestures), it should be understood that, in some embodiments, one or more of the finger inputs are replaced with input from another input device (e.g., a mouse-based input or stylus input). for example, a tap gesture is, optionally, replaced with a mouse click while the cursor is located over the location of the tap gesture (e.g., instead of detection of the contact followed by ceasing to detect the contact). as another example, a swipe gesture is, optionally, replaced with a mouse click (e.g., instead of a contact) followed by movement of the cursor along the path of the swipe (e.g., instead of movement of the contact). similarly, when multiple user inputs are simultaneously detected, it should be understood that multiple computer mice are, optionally, used simultaneously, or a mouse and finger contacts are, optionally, used simultaneously. [00128] fig. 5a illustrates exemplary personal electronic device 500. in some embodiments, device 500 can include some or all of the features described with respect to devices 100 and 300 (e.g., figs. 1a-4b). device 500 includes body 502. in some embodiments, device 500 has touch-sensitive display screen 504, hereafter touch screen 504. alternatively, or in addition to touch screen 504, device 500 has a display and a touch-sensitive surface. as with devices 100 and 300, in some embodiments, touch screen 504 (or the touch-sensitive surface) optionally includes one or more intensity sensors for detecting intensity of contacts (e.g., touches) being applied. the one or more intensity sensors of touch screen 504 (or the touch-sensitive surface) can provide output data that represents the intensity of touches.
the user interface of device 500 can respond to touches based on their intensity, meaning that touches of different intensities can invoke different user interface operations on device 500. [00129] in some embodiments, device 500 has one or more input mechanisms 506 and 508. examples of physical input mechanisms include push buttons and rotatable mechanisms. input mechanisms 506 and 508, if included, can be physical. in some embodiments, device 500 has one or more attachment mechanisms. these attachment mechanisms permit device 500 to be worn by a user. such attachment mechanisms, if included, can permit attachment of device 500 with, for example, hats, eyewear, earrings, necklaces, shirts, jackets, bracelets, watch straps, chains, trousers, belts, shoes, purses, backpacks, and so forth. [00130] fig. 5b depicts exemplary personal electronic device 500. in some embodiments, device 500 can include some or all of the components described with respect to figs. 1a, 1b, and 3. device 500 can include input mechanisms 506 and/or 508. input mechanism 506 is, optionally, a rotatable input device or a depressible and rotatable input device, for example. input mechanism 508 is, optionally, a button, in some examples. device 500 has bus 512 that operatively couples i/o section 514 with one or more computer processors 516 and memory 518. i/o section 514 can be connected to display 504, which can have touch-sensitive component 522 and, optionally, intensity sensor 524 (e.g., contact intensity sensor). in addition, i/o section 514 can be connected with communication unit 530 for receiving application and operating system data, using wi-fi, bluetooth, near field communication (nfc), cellular, and/or other wireless communication techniques.
[00131] personal electronic device 500 optionally includes various sensors, such as gps sensor 532, accelerometer 534, directional sensor 540 (e.g., compass), gyroscope 536, motion sensor 538, and/or a combination thereof, all of which can be operatively connected to i/o section 514. input mechanism 508 is, optionally, a microphone, in some examples. [00132] memory 518 of personal electronic device 500 can include one or more non-transitory computer-readable storage mediums, for storing computer-executable instructions, which, when executed by one or more computer processors 516, for example, can cause the computer processors to perform the techniques described below, including processes 700, 900, 1100, 1300 and 1500 (figs. 7, 9, 11, 13 and 15). in some examples, the storage medium is a transitory computer-readable storage medium. in some examples, the storage medium is a non-transitory computer-readable storage medium. the non-transitory computer-readable storage medium can include, but is not limited to, magnetic, optical, and/or semiconductor storages. examples of such storage include magnetic disks, optical discs based on cd, dvd, or blu-ray technologies, as well as persistent solid-state memory such as flash, solid-state drives, and the like. a computer-readable storage medium can be any medium that can tangibly contain or store computer-executable instructions for use by or in connection with the instruction execution system, apparatus, or device. personal electronic device 500 is not limited to the components and configuration of fig. 5b, but can include other or additional components in multiple configurations.
[00133] in addition, in methods described herein where one or more steps are contingent upon one or more conditions having been met, it should be understood that the described method can be repeated in multiple repetitions so that over the course of the repetitions all of the conditions upon which steps in the method are contingent have been met in different repetitions of the method. for example, if a method requires performing a first step if a condition is satisfied, and a second step if the condition is not satisfied, then a person of ordinary skill would appreciate that the claimed steps are repeated until the condition has been both satisfied and not satisfied, in no particular order. thus, a method described with one or more steps that are contingent upon one or more conditions having been met could be rewritten as a method that is repeated until each of the conditions described in the method has been met. this, however, is not required of system or computer readable medium claims where the system or computer readable medium contains instructions for performing the contingent operations based on the satisfaction of the corresponding one or more conditions and thus is capable of determining whether the contingency has or has not been satisfied without explicitly repeating steps of a method until all of the conditions upon which steps in the method are contingent have been met. a person having ordinary skill in the art would also understand that, similar to a method with contingent steps, a system or computer readable storage medium can repeat the steps of a method as many times as are needed to ensure that all of the contingent steps have been performed. [00134] as used here, the term “affordance” refers to a user-interactive graphical user interface object that is, optionally, displayed on the display screen of devices 100, 300, and/or 500 (figs. 1a, 3, and 5a-5b).
for example, a button, an image (e.g., icon), and text (e.g., hyperlink) each optionally constitute an affordance. [00135] as used herein, the term “focus selector” refers to an input element that indicates a current part of a user interface with which a user is interacting. in some implementations that include a cursor or other location marker, the cursor acts as a “focus selector” so that when an input (e.g., a press input) is detected on a touch-sensitive surface (e.g., touchpad 355 in fig. 3 or touch-sensitive surface 451 in fig. 4b) while the cursor is over a particular user interface element (e.g., a button, window, slider, or other user interface element), the particular user interface element is adjusted in accordance with the detected input. in some implementations, focus is moved from one region of a user interface to another region of the user interface without corresponding movement of a cursor or movement of a contact on a touch screen display (e.g., by using a tab key or arrow keys to move focus from one button to another button); in these implementations, the focus selector moves in accordance with movement of focus between different regions of the user interface. in some implementations that include a touch screen display (e.g., touch-sensitive display system 112 in fig. 1a or touch screen 112 in fig. 4a) that enables direct interaction with user interface elements on the touch screen display, a detected contact on the touch screen acts as a “focus selector” so that when an input (e.g., a press input by the contact) is detected on the touch screen display at a location of a particular user interface element (e.g., a button, window, slider, or other user interface element), the particular user interface element is adjusted in accordance with the detected input.
without regard to the specific form taken by the focus selector, the focus selector is generally the user interface element (or contact on a touch screen display) that is controlled by the user so as to communicate the user’s intended interaction with the user interface (e.g., by indicating, to the device, the element of the user interface with which the user is intending to interact). for example, the location of a focus selector (e.g., a cursor, a contact, or a selection box) over a respective button while a press input is detected on the touch-sensitive surface (e.g., a touchpad or touch screen) will indicate that the user is intending to activate the respective button (as opposed to other user interface elements shown on a display of the device). [00136] as used in the specification and claims, the term “characteristic intensity” of a contact refers to a characteristic of the contact based on one or more intensities of the contact. the characteristic intensity is, optionally, based on a predefined number of intensity samples, or a set of intensity samples collected during a predetermined time period (e.g., 0.05, 0.1, 0.2, 0.5, 1, 2, 5, 10 seconds) relative to a predefined event (e.g., after detecting the contact, prior to detecting liftoff of the contact, before or after detecting a start of movement of the contact, prior to detecting an end of the contact, before or after detecting an increase in intensity of the contact, and/or before or after detecting a decrease in intensity of the contact). in some embodiments, the characteristic intensity is based on multiple intensity samples. 
a characteristic intensity of a contact is, optionally, based on one or more of: a maximum value of the intensities of the contact, an average value of the intensities of the contact, a mean value of the intensities of the contact, a top 10 percentile value of the intensities of the contact, a value at the half maximum of the intensities of the contact, a value at the 90 percent maximum of the intensities of the contact, or the like. in some embodiments, the characteristic intensity is compared to a set of one or more intensity thresholds to determine whether an operation has been performed by a user. for example, the set of one or more intensity thresholds optionally includes a first intensity threshold and a second intensity threshold. in this example, a contact with a characteristic intensity that does not exceed the first threshold results in a first operation, a contact with a characteristic intensity that exceeds the first intensity threshold and does not exceed the second intensity threshold results in a second operation, and a contact with a characteristic intensity that exceeds the second threshold results in a third operation. in some embodiments, the duration of the contact is used in determining the characteristic intensity (e.g., when the characteristic intensity is an average of the intensity of the contact over time). in some embodiments, a comparison between the characteristic intensity and one or more thresholds is used to determine whether or not to perform one or more operations (e.g., whether to perform a respective operation or forgo performing the respective operation), rather than being used to determine whether to perform a first operation or a second operation. [00137] fig. 5c illustrates detecting a plurality of contacts 552a-552e on touch-sensitive display screen 504 with a plurality of intensity sensors 524a-524d. fig. 
5c additionally includes intensity diagrams that show the current intensity measurements of the intensity sensors 524a-524d relative to units of intensity. in this example, the intensity measurements of intensity sensors 524a and 524d are each 9 units of intensity, and the intensity measurements of intensity sensors 524b and 524c are each 7 units of intensity. in some implementations, an aggregate intensity is the sum of the intensity measurements of the plurality of intensity sensors 524a-524d, which in this example is 32 intensity units. in some embodiments, each contact is assigned a respective intensity that is a portion of the aggregate intensity. fig. 5d illustrates assigning the aggregate intensity to contacts 552a-552e based on their distance from the center of force 554. more generally, in some implementations, each contact j is assigned a respective intensity ij that is a portion of the aggregate intensity, a, in accordance with a predefined mathematical function, ij = a·(dj/Σdi), where dj is the distance of the respective contact j to the center of force, and Σdi is the sum of the distances of all the respective contacts (e.g., i=1 to last) to the center of force. in this example, each of contacts 552a, 552b, and 552e are assigned an intensity of contact of 8 intensity units of the aggregate intensity, and each of contacts 552c and 552d are assigned an intensity of contact of 4 intensity units of the aggregate intensity. the operations described with reference to figs. 5c-5d can be performed using an electronic device similar or identical to device 100, 300, or 500. in some embodiments, the intensity sensors are used to determine a single characteristic intensity (e.g., a single characteristic intensity of a single contact). in some embodiments, a characteristic intensity of a contact is based on one or more intensities of the contact. it should be noted that the intensity diagrams are not part of a displayed user interface, but are included in figs.
5c-5d to aid the reader. [00138] in some embodiments, a smoothing algorithm is, optionally, applied to the intensities of the swipe contact prior to determining the characteristic intensity of the contact. for example, the smoothing algorithm optionally includes one or more of: a triangular smoothing algorithm, an unweighted sliding-average smoothing algorithm, a median filter smoothing algorithm, and/or an exponential smoothing algorithm. in some circumstances, these smoothing algorithms eliminate narrow spikes or dips in the intensities of the swipe contact for purposes of determining a characteristic intensity. in some embodiments, a portion of a gesture is identified for purposes of determining a characteristic intensity. for example, a touch-sensitive surface optionally receives a continuous swipe contact transitioning from a start location and reaching an end location, at which point the intensity of the contact increases. in this example, the characteristic intensity of the contact at the end location is, optionally, based on only a portion of the continuous swipe contact, and not the entire swipe contact (e.g., only the portion of the swipe contact at the end location). [00139] the intensity of a contact on the touch-sensitive surface is, optionally, characterized relative to one or more intensity thresholds, such as a contact-detection intensity threshold, a light press intensity threshold, a deep press intensity threshold, and/or one or more other intensity thresholds. in some embodiments, the deep press intensity threshold corresponds to an intensity at which the device will perform operations that are different from operations typically associated with clicking a button of a physical mouse or a trackpad. in some embodiments, the light press intensity threshold corresponds to an intensity at which the device will perform operations typically associated with clicking a button of a physical mouse or a trackpad. 
in some embodiments, when a contact is detected with a characteristic intensity below the light press intensity threshold (e.g., and above a nominal contact-detection intensity threshold below which the contact is no longer detected), the device will move a focus selector in accordance with movement of the contact on the touch-sensitive surface without performing an operation associated with the light press intensity threshold or the deep press intensity threshold. generally, unless otherwise stated, these intensity thresholds are consistent between different sets of user interface figures. [00140] an increase of characteristic intensity of the contact from an intensity below the deep press intensity threshold to an intensity above the deep press intensity threshold is sometimes referred to as a “deep press” input. an increase of characteristic intensity of the contact from an intensity below the light press intensity threshold to an intensity between the light press intensity threshold and the deep press intensity threshold is sometimes referred to as a “light press” input. a decrease of characteristic intensity of the contact from an intensity above the contact-detection intensity threshold to an intensity below the contact-detection intensity threshold is sometimes referred to as detecting liftoff of the contact from the touch-surface. an increase of characteristic intensity of the contact from an intensity below the contact-detection intensity threshold to an intensity between the contact-detection intensity threshold and the light press intensity threshold is sometimes referred to as detecting the contact on the touch-surface. in some embodiments, the contact-detection intensity threshold is zero. in some embodiments, the contact-detection intensity threshold is greater than zero. 
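the aggregate-intensity distribution described in paragraph [00137] above — ij = a·(dj/Σdi), where a is the aggregate intensity and dj is contact j's distance from the center of force — can be sketched in a few lines of python. the function name and the example distances are illustrative, not from the specification; the distances below are merely chosen so the result reproduces the 8/8/4/4/8 split of contacts 552a-552e in figs. 5c-5d:

```python
def distribute_intensity(aggregate, distances):
    """Assign each contact j an intensity Ij = A * (Dj / sum(Di)).

    `aggregate` is the summed reading of all intensity sensors (A), and
    `distances` holds each contact's distance Dj from the center of force.
    """
    total = sum(distances)
    return [aggregate * d / total for d in distances]

# worked example matching figs. 5c-5d: sensors 524a-524d read 9, 9, 7, 7
# units, giving an aggregate of 32 units; the (hypothetical) distances are
# picked so contacts 552a/552b/552e get 8 units and 552c/552d get 4 units
print(distribute_intensity(9 + 9 + 7 + 7, [2, 2, 1, 1, 2]))
# → [8.0, 8.0, 4.0, 4.0, 8.0]
```

note that the assigned intensities always sum back to the aggregate, since the distance fractions sum to 1.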
[00141] in some embodiments described herein, one or more operations are performed in response to detecting a gesture that includes a respective press input or in response to detecting the respective press input performed with a respective contact (or a plurality of contacts), where the respective press input is detected based at least in part on detecting an increase in intensity of the contact (or plurality of contacts) above a press-input intensity threshold. in some embodiments, the press input includes an increase in intensity of the respective contact above the press-input intensity threshold and a subsequent decrease in intensity of the contact below the press-input intensity threshold, and the respective operation is performed in response to detecting the subsequent decrease in intensity of the respective contact below the press-input threshold (e.g., an “up stroke” of the respective press input). in some embodiments, the respective operation is performed in response to detecting the increase in intensity of the respective contact above the press-input intensity threshold (e.g., a “down stroke” of the respective press input). [00142] figs. 5e-5h illustrate detection of a gesture that includes a press input that corresponds to an increase in intensity of a contact 562 from an intensity below a light press intensity threshold (e.g., “itl”) in fig. 5e, to an intensity above a deep press intensity threshold (e.g., “itd”) in fig. 5h. the gesture performed with contact 562 is detected on touch-sensitive surface 560 while cursor 576 is displayed over application icon 572b corresponding to app 2, on a displayed user interface 570 that includes application icons 572a-572d displayed in predefined region 574. in some embodiments, the gesture is detected on touch-sensitive display 504. the intensity sensors detect the intensity of contacts on touch-sensitive surface 560. contact 562 is maintained on touch-sensitive surface 560. 
the device determines that the intensity of contact 562 peaked above the deep press intensity threshold (e.g., “itd”). in some embodiments, the intensity, which is compared to the one or more intensity thresholds, is the characteristic intensity of a contact. in response to the detection of the gesture, and in accordance with contact 562 having an intensity that goes above the deep press intensity threshold (e.g., “itd”) during the gesture, reduced-scale representations 578a-578c (e.g., thumbnails) of recently opened documents for app 2 are displayed, as shown in figs. 5f-5h. it should be noted that the intensity diagram for contact 562 is not part of a displayed user interface, but is included in figs. 5e-5h to aid the reader. [00143] in some embodiments, the display of representations 578a-578c includes an animation. for example, representation 578a is initially displayed in proximity of application icon 572b, as shown in fig. 5f. representations 578a-578c form an array above icon 572b. as the animation proceeds, representation 578a moves upward and representation 578b is displayed in proximity of application icon 572b, as shown in fig. 5g. then, representation 578a moves upward, 578b moves upward toward representation 578a, and representation 578c is displayed in proximity of application icon 572b, as shown in fig. 5h. in some embodiments, the animation progresses in accordance with an intensity of contact 562, as shown in figs. 5f-5g, where the representations 578a-578c appear and move upwards as the intensity of contact 562 increases toward the deep press intensity threshold (e.g., “itd”). in some embodiments, the intensity, on which the progress of the animation is based, is the characteristic intensity of the contact. the operations described with reference to figs. 5e-5h can be performed using an electronic device similar or identical to device 100, 300, or 500.
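taken together, paragraphs [00136]–[00140] and the gesture of figs. 5e-5h describe a pipeline: sample the contact's intensity, optionally smooth it, reduce the samples to a characteristic intensity (here the maximum value, one of the options listed in [00136]), and compare it against the light and deep press thresholds. a minimal sketch under those assumptions — the function names and threshold values are invented, with `it_light`/`it_deep` standing in for the “itl”/“itd” thresholds:

```python
def smooth(samples, window=3):
    """Unweighted sliding-average smoothing, one of the options in [00138]."""
    out = []
    for i in range(len(samples)):
        lo = max(0, i - window + 1)
        out.append(sum(samples[lo:i + 1]) / (i + 1 - lo))
    return out

def classify_press(samples, it_light, it_deep):
    """Classify a gesture by the characteristic (peak) smoothed intensity."""
    characteristic = max(smooth(samples))
    if characteristic >= it_deep:
        return "deep press"       # e.g., show previews 578a-578c as in fig. 5h
    if characteristic >= it_light:
        return "light press"
    return "below light press"    # just move the focus selector

# a contact whose intensity rises past the light threshold and peaks
# above the deep press threshold, as contact 562 does in figs. 5e-5h
print(classify_press([0, 2, 5, 9, 9, 4], it_light=3, it_deep=6))  # → deep press
```

smoothing first means a single narrow spike just above `it_deep` need not count as a deep press, matching the jitter-rejection motivation in [00138].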
[00145] for ease of explanation, the descriptions of operations performed in response to a press input associated with a press-input intensity threshold or in response to a gesture including the press input are, optionally, triggered in response to detecting either: an increase in intensity of a contact from an intensity below the hysteresis intensity threshold to an intensity above the press-input intensity threshold, an increase in intensity of a contact above the press-input intensity threshold, a decrease in intensity of the contact below the hysteresis intensity threshold corresponding to the press-input intensity threshold, and/or a decrease in intensity of the contact below the press-input intensity threshold.
[00145] for ease of explanation, the descriptions of operations performed in response to a press input associated with a press-input intensity threshold or in response to a gesture including the press input are, optionally, triggered in response to detecting either: an increase in intensity of a contact from an intensity below the hysteresis intensity threshold to an intensity above the press-input intensity threshold, an increase in intensity of a contact above the press-input intensity threshold, a decrease in intensity of the contact below the hysteresis intensity threshold corresponding to the press-input intensity threshold, and/or a decrease in intensity of the contact below the press-input intensity threshold. additionally, in examples where an operation is described as being performed in response to detecting a decrease in intensity of a contact below the press-input intensity threshold, the operation is, optionally, performed in response to detecting a decrease in intensity of the contact below a hysteresis intensity threshold corresponding to, and lower than, the press-input intensity threshold. [00146] in some embodiments, a downloaded application becomes an installed application by way of an installation program that extracts program portions from a downloaded package and integrates the extracted portions with the operating system of the computer system. as used herein, an “installed application” refers to a software application that has been downloaded onto an electronic device (e.g., devices 100, 300, and/or 500) and is ready to be launched (e.g., become opened) on the device. [00147] as used herein, the terms “executing application” or “open application” refer to a software application with retained state information (e.g., as part of device/global internal state 157 and/or application internal state 192). 
an open or executing application is, optionally, any one of the following types of applications: • an active application, which is currently displayed on a display screen of the device that the application is being used on; • a suspended or hibernated application, which is not running, but has state information that is stored in memory (volatile and non-volatile, respectively) and that can be used to resume execution of the application; and • a background application (or background processes), which is not currently displayed, but one or more processes for the application are being processed by one or more processors. [00148] generally, opening a second application while in a first application does not close the first application. when the second application is displayed and the first application ceases to be displayed, the first application becomes a background application. as used herein, the term “closed application” refers to software applications without retained state information (e.g., state information for closed applications is not stored in a memory of the device). accordingly, closing an application includes stopping and/or removing application processes for the application and removing state information for the application from the memory of the device. [00149] attention is now directed towards embodiments of user interfaces (“ui”) and associated processes that are implemented on an electronic device, such as portable multifunction device 100, device 300, or device 500. user interfaces and associated processes naming a remote locator object [00150] users interact with electronic devices in many different manners. in some embodiments, an electronic device is able to track the location of an object such as a remote locator object. in some embodiments, the remote locator object, which supports location tracking functions, can be given a user-selected identifier (e.g., user-selected name). 
the embodiments described below provide ways in which an electronic device provides user interfaces for defining the identifier for a remote locator object, thus enhancing the user’s interactions with the electronic device. enhancing interactions with a device reduces the amount of time needed by a user to perform operations, and thus reduces the power usage of the device and increases battery life for battery-powered devices. it is understood that people use devices. when a person uses a device, that person is optionally referred to as a user of the device. [00151] figs. 6a-6r illustrate exemplary ways in which an electronic device 500 provides user interfaces for defining identifiers for remote locator objects in accordance with some embodiments of the disclosure. the embodiments in these figures are used to illustrate the processes described below, including the processes described with reference to figs. 7a-7h. [00152] fig. 6a illustrates electronic device 500 displaying user interface 600 (e.g., via a display device, etc.). in some embodiments, user interface 600 is displayed via a display generation component. in some embodiments, the display generation component is a hardware component (e.g., including electrical components) capable of receiving display data and displaying a user interface. in some embodiments, examples of a display generation component include a touch screen display (such as touch screen 504), a monitor, a television, a projector, an integrated, discrete, or external display device, or any other suitable display device that is in communication with device 500. [00153] in some embodiments, user interface 600 is a user interface associated with a respective remote locator object, optionally for managing and changing one or more settings associated with the respective remote locator object, for viewing information about the respective remote locator object, and/or for locating the respective remote locator object. in fig. 
6a, user interface 600 is the user interface for a remote locator object referred to as “john’s keys”. for example, the respective remote locator object has been named by the user of device 500 as “john’s keys,” because, for example, the respective remote locator object is physically attached to john’s keys such that the respective remote locator object allows a user (e.g., john) to keep track of the location of john’s keys. [00154] in some embodiments, a remote locator object is a device with a battery, one or more wireless antenna and a low power processor that enables the device to function as a special- purpose remote locator object when associated with another physical object (e.g., wallet, purse, backpack, suitcase, car, set of keys, or the like). in some embodiments, the remote locator object is a multi-purpose device with location tracking capabilities such as a smartphone, tablet, computer, or watch. in some embodiments, a remote locator object is capable of transmitting location data to an electronic device (such as device 500). for example, a remote locator object optionally includes a gps locator. in some embodiments, a remote locator object does not include location tracking capability and relies on other electronic devices (e.g., such as device 500) to receive location data. in some embodiments, a remote locator object is able to wirelessly communicate with other electronic devices, such as electronic device 500 (e.g., over bluetooth, rf, ir, nfc, etc ). [00155] in some embodiments, user interface 600 includes a representation of identifier 604 and current location 606. in some embodiments, identifier 604 is a user-selected identifier (e.g., name) for the respective remote locator object indicating that user interface 600 is the user interface for john’s keys. 
in some embodiments, current location 606 is the determined current geographic location of john’s keys, optionally indicating if the current location is associated a known labeled location, such as “home”, “work”, “you”, etc., and/or when the current location was most recently updated (e.g., “just now”). for example, in fig. 6a, current location 606 indicates that john’s keys are near a location defined as “home”, that john’s keys are with the user (e.g., within a threshold distance, such as within 1 foot, 3 feet, 6 feet, 10 feet, etc., of the user’s device, such as device 500), and that the location information was most recently received “just now” (e.g., within the past 30 seconds, 1 minute, 3 minutes, 5 minutes, 10 minutes, etc.). [00156] in some embodiments, user interface 600 includes one or more selectable options for performing operations associated with the remote locator object and/or viewing and/or changing one or more settings associated with the remote locator object. in some embodiments, user interface 600 includes additional information associated with the status of the remote locator object in fig. 6a, user interface 600 includes selectable option 608 which is selectable to initiate a process to find and/or locate the respective remote locator object (e.g., in a manner similar to described below with respect to method 900, selectable option 610 to cause the respective remote locator object to emit an audible sound, notification settings 612 for managing one or more notification settings associated with the remote locator object, sharing settings 614 for managing the settings for sharing the location of the remote locator object with other people (e.g., other users), and selectable option 616 for renaming the remote locator object (e.g., for editing the identifier of the remote locator object). [00157] in fig. 6a, a user input 603 (e.g., a tap on touch screen 504) is received selecting selectable option 616. 
in some embodiments, in response to receiving user input 603, device 500 initiates a process to rename the remote locator object, including displaying user interface 618, as shown in fig. 6b. in some embodiments, user interface 618 includes list 620, which includes one or more predefined options for the identifier, and preview 626, which displays a preview of the currently selected identifier. [00158] in fig. 6b, list 620 includes predefined options 622a to 622d, which are selectable to select the respective option as the new name of the remote locator object. in some embodiments, list 620 is scrollable to display more predefined options. in some embodiments, predefined options 622a to 622d are predefined textual identifiers. for example, in fig. 6b, predefined option 622c corresponding to “keys” is the currently selected option (e.g., as illustrated by the box around predefined option 622c). in some embodiments, the predefined textual identifiers are associated with respective predefined graphical identifiers (e.g., emojis, icons, etc.). for example, a graphical identifier for the textual identifier “keys” is optionally a key emoji or key icon 628. in some embodiments, list 620 does not include representations of the corresponding graphical identifiers. as will be discussed in further detail below, the graphical identifier and textual identifiers are optionally used to refer to the remote locator object and in certain situations, the graphical identifier is used to refer to the remote locator object while in other situations, the textual identifier is used to refer to the remote locator object (optionally in some situations, both identifiers are used in combination to refer to the remote locator object). 
in some embodiments, list 620 includes custom option 624, which is not associated with a predefined textual identifier and is selectable to allow the user to provide a custom name for the remote locator object, as will be described in further detail below with respect to figs. 6d-6k. [00159] in some embodiments, preview 626 includes a preview of the identifier for the remote locator object based on the currently selected option from list 620. for example, in fig. 6b, preview 626 includes icon 628 and text field 630 corresponding to the graphical identifier and the textual identifier, respectively, for the remote locator object. in some embodiments, because predefined option 622c corresponding to the “key” option is currently selected, icon 628 corresponds to the graphical representation of “keys” (e.g., a key image) and text field 630 reads “john’s keys”. as shown in fig. 6b, the name of the owner of the remote locator object (e.g., “john”) is optionally prepended to the selected predefined option (e.g., “keys”). in some embodiments, the owner of a remote locator object is the user whose electronic device (e.g., device 500) is paired with the remote locator object and/or the user that initialized the remote locator object and has been associated with the remote locator object as the owner and who optionally is authorized to change one or more settings of the remote locator object. in some embodiments, icon 628 includes a representation of the corresponding graphical identifier associated with the selected predefined textual identifier. for example, in fig. 6b, icon 628 includes a representation of a key emoji. [00160] in fig. 6c, while displaying user interface 618, a user input 603 is received selecting predefined option 622d corresponding to the “bag” textual identifier. 
in some embodiments, in response to receiving user input 603 selecting predefined option 622d, list 620 is updated to indicate that predefined option 622d is the currently selected option and preview 626 is updated to reflect the updated selection, as shown in fig. 6d. in fig. 6d, the items in list 620 are scrolled upwards such that predefined option 622d is centered in list 620 (e.g., and optionally displayed with a selection and/or focus indicator), and preview 626 is updated such that icon 628 includes a representation of a bag icon (e.g., or bag emoji, which is a predefined graphical identifier associated with the “bag” predefined textual identifier), and text field 630 is updated to display the name of the owner of the device prepended to “bag” (e.g., the textual identifier associated with selectable option 622d). thus, as shown in fig. 6d, selecting a predefined option from list 620 causes a selection of both a graphical identifier and a textual identifier as the identifier for the remote locator object, which optionally causes preview 626 to update both the preview of the graphical identifier (e.g., icon 628) to reflect the predefined graphical identifier associated with the selected predefined textual identifier and the preview of the textual identifier (e.g., text field 630) to reflect the selected predefined textual identifier. [00161] in fig. 6d, a user input 603 is received selecting custom option 624 from list 620. in some embodiments, in response to receiving user input 603 selecting custom option 624, device 500 updates preview 626 such that icon 628 includes a generic or blank emoji icon (e.g., not associated with a predefined option) indicating that a graphical identifier has not been selected and text field 630 is blank (optionally text 630 includes textual instructions indicating that a custom name should be provided), as shown in fig. 6e. [00162] in fig. 6f, a user input 603 is received selecting icon 628. 
in some embodiments, the user input 603 selecting icon 628 is interpreted as a request to provide a custom graphical icon as the graphical identifier for the remote locator object. in some embodiments, in response to receiving user input 603, device 500 displays an emoji keyboard 632 for selecting an emoji as the graphical identifier for the remote locator object, as shown in fig. 6g. as shown in fig. 6g, emoji keyboard 632 is displayed at or near the bottom of user interface 618 (e.g., below preview 626 and list 620). in some embodiments, emoji keyboard 632 includes a plurality of emojis (e.g., graphical representations, icons, etc.) from which a graphical identifier is selected. in some embodiments, emoji keyboard 632 does not include an option for causing display of a text keyboard, as will be described in further detail below. in some embodiments, in response to receiving user input 603 in fig. 6f, device 500 highlights or otherwise visually distinguishes icon 628 to indicate that icon 628 has the current focus and that the graphical identifier for the remote locator object is being currently edited and/or selected (e.g., via emoji keyboard 632), as shown in fig. 6g. [00163] in some embodiments, selecting a respective emoji from emoji keyboard 632 causes the respective emoji to be selected as the graphical identifier of the remote locator object and to be displayed as icon 628. for example, in fig. 6g, a user input 603 is received selecting icon 5 from emoji keyboard 632. in some embodiments, in response to receiving the user input 603 selecting icon 5, icon 5 is selected as the graphical identifier for the remote locator object and preview 626 is updated such that icon 628 includes icon 5. in some embodiments, only one emoji or icon is used as the graphical identifier (e.g., a selection of a second emoji from the emoji keyboard overrides the previous selection). [00164] in fig. 6h, a user input 603 is received selecting text field 630 of preview 626. 
in some embodiments, the user input 603 selecting text field 630 is interpreted as a request to provide a custom name as the textual identifier for the remote locator object. in some embodiments, in response to receiving user input 603, device 500 highlights or otherwise visually distinguishes text field 630 to indicate that text field 630 has the current focus and that the textual identifier for the remote locator object is being currently edited and/or selected and displays text keyboard 634 in user interface 618, as shown in fig. 6i. as shown in fig. 6i, text keyboard 634 is displayed at or near the bottom of user interface 618 (e.g., at or near the same location in user interface 618 that emoji keyboard 632 was displayed). in some embodiments, text keyboard 634 replaces emoji keyboard 632. in some embodiments, text keyboard 634 is a soft (e.g., virtual) keyboard that includes a plurality of keys that are selectable to insert the corresponding letter (or number) into text field 630. for example, in fig. 6j, in response to receiving user inputs selecting letters from text keyboard 634, text field 630 is populated accordingly. as shown in fig. 6j, when providing a custom textual identifier, the textual identifier is optionally not automatically prepended with the name of the owner of the remote locator object (e.g., optionally the owner is the user of device 500). in some embodiments, instead, the textual identifier is the custom textual identifier, without the name of the owner. in some embodiments, a user is able to manually enter the name of the owner (e.g., via text keyboard 634), as desired. in some embodiments, when providing a custom textual identifier, the textual identifier is automatically prepended with the name of the owner of the remote locator object (e.g., in a manner similar to predefined names, described above). [00165] in some embodiments, text keyboard 634 includes a selectable option that is selectable to cause display of emoji keyboard 632.
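the naming rule just described can be summarized in a small helper. `display_name`, its parameters, and the possessive formatting are hypothetical illustrations of the embodiment in which predefined names are prepended with the owner's name but custom names are not:

```python
def display_name(text: str, owner: str, is_custom: bool) -> str:
    """return the textual identifier shown for a remote locator object.

    in the embodiment described above, a predefined name is prepended
    with the owner's name in possessive form (e.g., "John's bag"),
    while a custom name is used verbatim unless the user manually
    types the owner's name.
    """
    if is_custom:
        return text
    return f"{owner}'s {text}"
```

for example, `display_name("bag", "John", is_custom=False)` yields "John's bag", while `display_name("Wallet", "John", is_custom=True)` yields "Wallet".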
for example, in fig. 6j, a user input 603 is received selecting an emoji button on text keyboard 634. in some embodiments, in response to receiving the user input 603 selecting the emoji button, device 500 replaces display of text keyboard 634 with emoji keyboard 632, as shown in fig. 6k. in some embodiments, in response to the display of emoji keyboard 632, the focus moves from text field 630 in preview 626 to icon 628 in preview 626 such that selection of an emoji from emoji keyboard 632 causes the selected emoji to be selected as the graphical identifier for the remote locator object (e.g., similarly to as described above with respect to figs. 6g-6h). thus, while editing the textual identifier for the remote locator object, the user is optionally able to switch to editing the graphical identifier by selecting a respective option on the text keyboard, but while editing the graphical identifier, the user is optionally not able to switch to editing the textual identifier via an option on the emoji keyboard. in some embodiments, a user is able to switch from editing the textual identifier to editing the graphical identifier or vice versa by selecting the respective field in preview 626 (e.g., selecting icon 628 to edit the graphical identifier and selecting text field 630 to edit the textual identifier). in some embodiments, text cannot be used for the graphical identifier and an emoji cannot be used for the textual identifier. [00166] it is understood that the above-described method of providing a custom graphical identifier and textual identifier for a remote locator object can be applied to editing a predefined identifier for the remote locator object. for example, after selecting a predefined identifier from list 620 (e.g., such as selecting “bag” in fig. 
6c), a user is optionally able to select icon 628 and/or text field 630 to cause display of the emoji keyboard or text keyboard, respectively, to edit or otherwise modify the predefined identifier (e.g., optionally without causing the owner’s name to be removed from the textual identifier). [00167] figs. 6l-6n illustrate an embodiment in which remote locator objects are referred to by their graphical identifier and/or textual identifier, including grouping a plurality of remote locator objects. in fig. 6l, device 500 is displaying user interface 636 corresponding to a user interface for displaying a plurality of tracked objects. for example, user interface 636 includes a representation 638 of a map that includes one or more representations of tracked objects. in some embodiments, representation 638 of a map includes group 640 and icon 642. in some embodiments, group 640 corresponds to a plurality of tracked objects (e.g., such as a remote locator object) that are paired with device 500, or within a threshold distance from device 500 (e.g., 2 feet, 5 feet, 15 feet, etc.) and icon 642 corresponds to the remote locator object associated with “spouse’s keys”. in some embodiments, a plurality of tracked objects are grouped together if they are within a threshold distance from each other (e.g., 2 feet, 5 feet, 15 feet, etc.), and/or if they are paired to the same electronic device (e.g., optionally the primary device of the owner of the tracked objects, such as the user’s phone, the user’s computer, etc., not necessarily device 500). in some embodiments, group 640 includes one or more graphical representations of the objects in the group (e.g., icons from the identifiers of the objects), optionally with an indication that additional objects are in the group (e.g., if there are more than a threshold number of objects in the group, such as 2, 3, 6 items, etc.).
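the keyboard-switching behavior described above with respect to figs. 6f-6k (the text keyboard offers an emoji button that also moves focus to the icon, while the emoji keyboard offers no reverse option) can be sketched as a small state machine. the class and method names here are hypothetical, not the described implementation:

```python
class NamingUI:
    """hypothetical sketch of the focus/keyboard behavior of figs. 6f-6k."""

    def __init__(self) -> None:
        self.keyboard = None  # "text", "emoji", or None
        self.focus = None     # "icon", "text_field", or None

    def tap_icon(self) -> None:
        # selecting the icon edits the graphical identifier via the
        # emoji keyboard
        self.focus = "icon"
        self.keyboard = "emoji"

    def tap_text_field(self) -> None:
        # selecting the text field edits the textual identifier via the
        # text keyboard (replacing the emoji keyboard, if shown)
        self.focus = "text_field"
        self.keyboard = "text"

    def tap_emoji_button(self) -> None:
        # only the text keyboard offers an emoji button; pressing it
        # also moves focus to the icon so emoji selections apply to the
        # graphical identifier. the emoji keyboard has no corresponding
        # text button in this embodiment, so this is a no-op there.
        if self.keyboard == "text":
            self.keyboard = "emoji"
            self.focus = "icon"
```

note the asymmetry: switching from text editing to emoji editing is possible from the keyboard itself, but the only way back to text editing is tapping the text field.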
as will be described in further detail below, a group of tracked objects is optionally able to be expanded to display all objects in the group. [00168] as shown above, the graphical identifiers of remote locator objects are used to represent the remote locator objects in a representation of a map. for example, icon 642 is the graphical identifier for “spouse’s keys” and represents the location of the respective remote locator object on the representation of the map. similarly, “icon 5” and “key icon” in group 640 are the graphical identifiers for “wallet” and “john’s bag”, respectively. thus, in some embodiments, the graphical identifier for a remote locator object is used to refer to a remote locator object, for example, on graphical user interface elements, such as representation 638 of a map. [00169] in fig. 6l, user interface 636 includes list 644 that includes an entry for remote locator objects and/or trackable objects for which device 500 receives location information, optionally sorted by distance. for example, in fig. 6l, list 644 includes entry 646-1 corresponding to a remote locator object associated with the user’s wallet, entry 646-2 corresponding to a remote locator object associated with the user’s bag, entry 646-3 corresponding to the user’s phone, and entry 646-4 corresponding to a remote locator object associated with the user’s spouse’s keys. in some embodiments, the entries include a graphical and/or textual indicator of the respective remote locator object (e.g., that optionally was selected via a process described above with respect to figs. 6a-6k) and/or an indication of the distance of the object from the user. for example, entry 646-1 includes a graphic corresponding to the graphical identifier for the respective remote locator object (e.g., “icon 5”), a textual description (e.g., “wallet”), and an indication that the respective remote locator object is with the user and determined to be 1 foot away. as shown in fig.
6l, entry 646-1 corresponding to the remote locator object associated with the user’s wallet does not include an indication of the user’s name (e.g., does not include the label “john’s”). in some embodiments, entry 646-1 does not include an indication of the user’s name because the remote locator object is identified using a custom name, in a process similar to that described above with respect to figs. 6d-6k. in some embodiments, the entries are selectable to display a user interface associated with the respective remote locator object, as will be described in further detail below with respect to figs. 6n-6o. [00170] in fig. 6m, a user input 603 is received selecting group 640. in some embodiments, in response to receiving user input 603 selecting group 640, device 500 displays list 648, as shown in fig. 6n. in some embodiments, list 648 is a listing of the remote locator objects and/or tracked objects that are included in group 640. for example, in fig. 6n, list 648 includes entry 650-1 corresponding to a remote locator object associated with the user’s wallet, entry 650-2 corresponding to a remote locator object associated with the user’s bag, and entry 650-3 corresponding to the user’s phone, which have been determined to be with the user. in fig. 6n, list 648 does not include an entry corresponding to a remote locator object associated with spouse’s keys (e.g., entry 646-4 from fig. 6m) because, for example, the respective remote locator object has not been determined to be with the user (e.g., within the threshold distance of the user and/or paired with the user’s device). thus, in response to a user input selecting a group of a plurality of remote locator objects, device 500 updates the user interface to display the remote locator objects in the group and cease displaying the remote locator objects that are not in the group.
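the grouping behavior described above (tracked objects grouped together when within a threshold distance of each other, e.g., 2, 5, or 15 feet) could be approximated with a simple single-link grouping pass. this is a hypothetical sketch; the threshold value, data model, and greedy strategy are assumptions, not the described implementation:

```python
import math
from dataclasses import dataclass

GROUP_THRESHOLD_FT = 5.0  # e.g., 2, 5, or 15 feet in the embodiments above

@dataclass
class TrackedObject:
    name: str
    x: float  # position in feet (hypothetical coordinate system)
    y: float

def group_objects(objects):
    """greedy single-link grouping: an object joins the first group that
    already contains a member within the threshold distance; otherwise
    it starts a new group."""
    groups = []
    for obj in objects:
        for group in groups:
            if any(math.hypot(obj.x - m.x, obj.y - m.y) <= GROUP_THRESHOLD_FT
                   for m in group):
                group.append(obj)
                break
        else:
            groups.append([obj])
    return groups
```

with the wallet, bag, and phone near the user and the spouse's keys far away, this pass produces one group of three objects (comparable to group 640) and a lone object (comparable to icon 642).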
in some embodiments, the representation 638 of the map no longer includes icon 642 (e.g., the icon indicating the location of the remote locator object associated with spouse’s keys) and is optionally shifted such that group 640 is centered in the representation 638 of the map (e.g., optionally representation 638 is zoomed into the location of group 640). [00171] in fig. 6n, a user input 603 is received selecting entry 650-1 corresponding to a remote locator object associated with the user’s wallet. in some embodiments, in response to user input 603, device 500 displays user interface 600 (e.g., similar to user interface 600 illustrated in fig. 6a), as shown in fig. 6o. as shown in fig. 6o, identifier 604 in user interface 600 uses the textual identifier for the remote locator object to refer to the remote locator object (e.g., as opposed to the graphical identifier). thus, in some embodiments, for example, when the remote locator object is being referred to in a textual context (e.g., as opposed to a graphical context such as a representation of a map), the textual identifier is used to refer to the remote locator object. [00172] figs. 6p-6r illustrate an exemplary method of selecting an identifier for a remote locator object that has not previously been paired with device 500 and/or is not currently paired with device 500. in fig. 6p, device 500 detects that remote locator object 601 is within a threshold distance from device 500 (e.g., 1 inch, 2 inches, 5 inches, 1 foot, etc.). in some embodiments, in response to detecting that remote locator object 601 is within the threshold distance from device 500, device 500 pairs with remote locator object 601 or otherwise establishes a wireless communication session with remote locator object 601. in fig.
6p, remote locator object 601 is in an uninitialized state such that upon pairing with device 500 for the first time, device 500 initiates a process to set up (e.g., initialize) remote locator object 601, including displaying user interface 654. in some embodiments, user interface 654 includes a representation 656 of remote locator object 601 and indicates that a new remote locator object has been detected. in some embodiments, user interface 654 includes selectable option 658 that is selectable to continue the process to set up remote locator object 601. [00173] in fig. 6q, after continuing the process to set up remote locator object 601 (e.g., in response to receiving an input selecting selectable option 658 in fig. 6p), device 500 displays user interface 660 for selecting an identifier for remote locator object 601. in some embodiments, user interface 660 includes one or more predefined options for the identifier of remote locator object 601. in fig. 6q, user interface 660 includes predefined options 622a to 622d (e.g., similar to predefined options 622a to 622d described above with respect to fig. 6b) and custom option 624 (e.g., similar to custom option 624 described above with respect to fig. 6b). in some embodiments, the list of predefined options is scrollable (e.g., upwards and/or downwards) to display other predefined options. in some embodiments, the predefined options are selectable to select the respective predefined option as the textual identifier for remote locator object 601 (e.g., and optionally also select the corresponding predefined graphical identifier associated with the selected textual identifier for remote locator object 601, similar to as described with reference to figs. 6b-6k). in some embodiments, user interface 660 does not include a preview of the selected identifier for remote locator object 601. thus, in the embodiment illustrated in fig.
6q, selecting a predefined option does not cause display of a corresponding predefined graphical identifier in a preview user interface element, but optionally does cause the corresponding predefined graphical identifier to be selected as the graphical identifier for remote locator object 601 (e.g., even though it is not displayed). in some embodiments, user interface 660 includes a preview of the selected identifier, similar to preview 626 described above with respect to figs. 6b-6k. [00174] in fig. 6q, a user input 603 is received selecting custom option 624 for providing a custom name for remote locator object 601. in some embodiments, in response to receiving user input 603 selecting custom option 624, device 500 displays text keyboard 634, as shown in fig. 6r. in some embodiments, text keyboard 634 is displayed below user interface 664 and user interface 664 is optionally displaced upwards (e.g., or text keyboard 634 is displayed in the bottom region of user interface 664). in some embodiments, custom option 624 is replaced with a content entry field including icon 628 and text entry field 630. in some embodiments, icon 628 and text entry field 630 share features similar to icon 628 and text field 630 described above with respect to figs. 6e-6k (e.g., being selectable to display an emoji keyboard or a text keyboard, respectively, etc., for selecting a graphical identifier and/or a textual identifier for remote locator object 601). details for selecting the graphical identifier and/or textual identifier for remote locator object 601 in user interface 664 are optionally the same as those described with reference to figs. 6e-6k. [00175] figs. 7a-7h are flow diagrams illustrating a method 700 of providing user interfaces for defining identifiers for remote locator objects in accordance with some embodiments, such as illustrated in figs. 6a-6r.
the method 700 is optionally performed at an electronic device such as device 100, device 300, device 500 as described above with reference to figs. 1a-1b, 2-3, 4a-4b and 5a-5h. some operations in method 700 are, optionally, combined and/or the order of some operations is, optionally, changed. [00176] as described below, the method 700 provides ways to define identifiers for remote locator objects. the method reduces the cognitive burden on a user when interacting with a user interface of the device of the disclosure, thereby creating a more efficient human-machine interface. for battery-operated electronic devices, increasing the efficiency of the user’s interaction with the user interface conserves power and increases the time between battery charges. [00177] in some embodiments, an electronic device in communication with one or more wireless antennas, a display generation component, and one or more input devices (e.g., electronic device 500, a mobile device (e.g., a tablet, a smartphone, a media player, or a wearable device) including wireless communication circuitry, optionally in communication with one or more of a mouse (e.g., external), trackpad (optionally integrated or external), touchpad (optionally integrated or external), remote control device (e.g., external), another mobile device (e.g., separate from the electronic device), a handheld device (e.g., external), and/or a controller (e.g., external), etc.), while displaying, via the display generation component, a respective user interface for inputting an identifier for a remote locator object, wherein the respective user interface includes a representation of a first portion of the identifier and a representation of a second portion of the identifier, receives (702), via the one or more input devices, a respective input, such as user input 603 selecting icon 628 or text field 630 in figs.
6f and 6h, respectively (e.g., a respective remote locator object is able to be identified by a user-selected identifier (e.g., the name of the remote locator object)). [00178] in some embodiments, the identifier for the respective remote locator object includes a graphic portion and a text portion. in some embodiments, the graphic portion is an icon, picture, symbol, emoji, or any other suitable graphical identifier. in some embodiments, the text portion is a textual description, name, or other suitable textual identifier. for example, if the remote locator object is associated with the user’s keys, the user is able to set the graphic portion of the identifier as a key icon or key emoji and the text portion of the identifier as the word “key”. in some embodiments, the remote locator object is referred to by either the first portion of the identifier, the second portion of the identifier, or a combination of both the first and second portions of the identifier. for example, when referring to the remote locator object on a representation of a map, the first portion of the identifier is used to identify the remote locator object (e.g., as an emoji, icon, symbol, or graphic), and when referring to the remote locator object on a list of devices, the second portion of the identifier is used to identify the remote locator object. in some embodiments, the user interface for defining, inputting, and/or selecting the identifier for the remote locator object includes a representation of the first portion of the identifier that is interactable to define the graphical identifier and a representation of the second portion of the identifier that is interactable to define the textual identifier. in some embodiments, the representations are two different user interface elements and/or fields. in some embodiments, the representations are two portions of one user interface element and/or field. 
for example, the user interface includes a “name” field that includes a graphical identifier prepended to a textual identifier. in some embodiments, the respective input is a selection of a respective portion of the identifier, such as a tap input on a touch-sensitive display at a location associated with the respective portion of the identifier. [00179] in some embodiments, the display generation component is a display integrated with the electronic device (optionally a touch screen display), external display such as a monitor, projector, television, or a hardware component (optionally integrated or external) for projecting a user interface or causing a user interface to be visible to one or more users, etc. [00180] in some embodiments, in response to receiving the respective input (704), in accordance with a determination that the respective input corresponds to selection of the representation of the first portion of the identifier, the electronic device displays (706), via the display generation component, a first user interface for selecting a graphic for the first portion of the identifier, such as displaying emoji keyboard 632 in fig. 6g in response to user input 603 selecting icon 628 in fig. 6f (e.g., if the user input selected the first portion of the identifier associated with the graphical identifier portion for the remote locator object, then display a user interface for selecting, configuring, and/or defining the graphical identifier for the remote locator object). [00181] for example, the user interface includes an emoji keyboard from which the user is able to select an emoji as the graphical identifier for the remote locator object. in some embodiments, the user interface includes a scrollable list of available options for graphical identifiers. in some embodiments, the user interface includes an interface to search for or upload a graphical image for use as a graphical identifier. 
in some embodiments, the first user interface is displayed concurrently with the representation of a first portion of the identifier and a representation of a second portion of the identifier. for example, an emoji keyboard is displayed below the representation of the first and second portions of the identifier. [00182] in some embodiments, in accordance with a determination that the respective input corresponds to selection of the representation of the second portion of the identifier, the electronic device displays (708), via the display generation component, a second user interface for selecting one or more text characters (e.g., numbers and/or letters) for the second portion of the identifier, such as displaying text keyboard 634 in fig. 6i in response to receiving user input 603 selecting text field 630 in fig. 6h (e.g., if the user input selected the second portion of the identifier that is associated with the textual identifier portion for the remote locator object, then display a second user interface for selecting, configuring, and/or defining the textual identifier for the remote locator object). [00183] for example, the second user interface includes a soft or virtual keyboard from which the user is able to enter a name for the remote locator object. in some embodiments, the second user interface includes an interface to search for or upload a graphical image for use as a graphical identifier. in some embodiments, the second user interface is displayed concurrently with the representation of a first portion of the identifier and a representation of a second portion of the identifier. for example, a soft keyboard (e.g., text keyboard) is displayed below the representation of the first and second portions of the identifier.
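the input dispatch of method 700 described above (selection of the first portion brings up the graphic-selection interface, selection of the second portion brings up the text-selection interface) reduces to a simple branch; the function and string names here are hypothetical:

```python
def handle_identifier_input(selected_portion: str) -> str:
    """return which user interface to display in response to a respective
    input directed at the identifier, per the determination in method 700."""
    if selected_portion == "first":
        # first portion of the identifier: the graphic (e.g., emoji keyboard)
        return "emoji_keyboard"
    if selected_portion == "second":
        # second portion of the identifier: the text (e.g., soft text keyboard)
        return "text_keyboard"
    raise ValueError(f"unknown portion: {selected_portion}")
```

both interfaces would then be presented in the same region of the respective user interface, consistent with the behavior described for the first portion of the respective user interface.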
[00184] the above-described manner of selecting an identifier for a remote locator object (e.g., by displaying a user interface for selecting a graphical identifier in response to a user selection of a representation of the graphical identifier or displaying a user interface for selecting a textual identifier in response to a user selection of a representation of the textual identifier) provides a quick and efficient way of selecting the graphical and textual identifier for the remote locator object (e.g., by providing the user with the option to set a particular portion of the identifier, without setting the other portions of the identifier, without requiring the user to perform additional inputs when setting just one portion of the identifier), which simplifies the interaction between the user and the electronic device and enhances the operability of the electronic device and makes the user-device interface more efficient, which additionally reduces power usage and improves battery life of the electronic device by enabling the user to use the electronic device more quickly and efficiently, while reducing errors in usage. [00185] in some embodiments, the first user interface is displayed in a first portion of the respective user interface, and the second user interface is displayed in the first portion of the respective user interface (710), such as emoji keyboard 632 in fig. 6g being displayed in the same portion of the user interface as text keyboard 634 in fig. 6i (e.g., the first user interface occupies a subset of the respective user interface and is displayed at a particular position in the respective user interface). [00186] for example, the first user interface is an emoji keyboard and is displayed at or near the lower portion of the respective user interface.
in some embodiments, the second user interface occupies a subset of the respective user interface (optionally the same amount, less, or more than the first user interface), and is displayed at or near the same portion that is occupied by the first user interface (e.g., the lower portion of the respective user interface). in some embodiments, display of the first and second user interface does not obscure the display of the representation of the first and second portions of the identifier (e.g., optionally the representations are moved such that the first and second user interface does not obscure the representations). [00187] the above-described manner of displaying user interfaces for selecting a graphic and text as the identifier for a remote locator object (e.g., at the same portion in the respective user interface) provides a quick and efficient way of selecting the graphical and textual identifier for the remote locator object (e.g., by displaying the respective user interfaces at the same location in the respective user interface), which simplifies the interaction between the user and the electronic device and enhances the operability of the electronic device and makes the user-device interface more efficient, which additionally reduces power usage and improves battery life of the electronic device by enabling the user to use the electronic device more quickly and efficiently, while reducing errors in usage. [00188] in some embodiments, the respective user interface includes a respective user interface element for selecting from a plurality of predefined options for the second portion of the identifier for the remote locator object (712), such as list 620 including a plurality of predefined options in fig. 6b (e.g., the respective user interface includes a selectable list, a drop down menu, or any other element for selecting an option from a plurality of predefined options as the textual identifier for the remote locator object).
in some embodiments, the respective user interface element is pre-populated with a plurality of predefined options for naming the remote locator object. in some embodiments, the list includes a plurality of common items that the remote locator object is attached to (e.g., for the purpose of tracking the location of those objects). for example, the list includes keys, bag, backpack, purse, car, suitcase, etc. [00189] in some embodiments, in response to receiving the respective input, and in accordance with a determination that the respective input is directed to the respective user interface element (714), such as user input 603 selecting a predefined option in fig. 6c (e.g., the respective input corresponds to a selection of an option in the respective user interface element), in accordance with a determination that the respective input corresponds to a request to select a first respective predefined option of the plurality of predefined options for the second portion of the identifier (e.g., the respective input selected a first option from the list of options), the electronic device displays (716) a first graphic in the representation of the first portion of the identifier that corresponds to the first respective predefined option (718), such as displaying a bag emoji in icon 628 in fig. 6d (e.g., the first respective predefined option is associated with a first respective predefined graphic such that selecting the first respective predefined option for the second portion of the identifier causes the first respective predefined graphic to be selected for the first portion of the identifier (e.g., the graphical identifier for the remote locator object)) and first text corresponding to the first respective predefined option in the representation of the second portion of the identifier (720), such as including the text “bag” in text field 630 in fig.
6d (e.g., selecting the text associated with the first respective predefined option as the textual identifier (e.g., the second portion of the identifier) for the remote locator object). [00190] for example, selecting the “key” option causes a key emoji to be selected for the first portion of the identifier. thus, in some embodiments, a first graphic associated with the first option is displayed in the representation of the first portion of the identifier. thus, the first respective predefined option is optionally displayed in the representation of the second portion of the identifier. [00191] in some embodiments, in accordance with a determination that the respective input corresponds to a request to select a second respective predefined option of the plurality of predefined options for the second portion of the identifier, such as if user input 603 selected a different predefined option in fig. 6c (e.g., the respective input selected a second option from the list of options), the electronic device displays (722) a second graphic, different from the first graphic, in the representation of the first portion of the identifier that corresponds to the second respective predefined option (724), such as if icon 628 included an emoji associated with the selected predefined option in fig. 6d (e.g., the second respective predefined option is associated with a second respective predefined graphic such that selecting the second respective predefined option for the second portion of the identifier causes the second respective predefined graphic to be selected for the first portion of the identifier (e.g., the graphical identifier for the remote locator object)) and second text corresponding to the second respective predefined option in the representation of the second portion of the identifier, wherein the second text is different from the first text (726), such as if text field 630 included the text associated with the selected predefined option in fig.
6d (e.g., selecting the text associated with the second respective predefined option as the textual identifier (e.g., the second portion of the identifier) for the remote locator object). [00192] for example, selecting the “bag” option causes a bag emoji to be selected for the first portion of the identifier. thus, in some embodiments, a second graphic associated with the second option is displayed in the representation of the first portion of the identifier. thus, the second respective predefined option is optionally displayed in the representation of the second portion of the identifier. [00193] the above-described manner of selecting from a list of predefined identifiers for a remote locator object (e.g., by receiving an input selecting a predefined identifier, and in response, setting the textual identifier as the selected identifier and automatically setting the graphical identifier to a predefined graphic associated with the selected identifier) provides a quick and efficient way of selecting the graphical and textual identifier for the remote locator object (e.g., without requiring the user to perform additional inputs to select from a list of predefined graphical identifiers), which simplifies the interaction between the user and the electronic device and enhances the operability of the electronic device and makes the user-device interface more efficient, which additionally reduces power usage and improves battery life of the electronic device by enabling the user to use the electronic device more quickly and efficiently, while reducing errors in usage. [00194] in some embodiments, the first text corresponding to the first respective predefined option in the representation of the second portion of the identifier is displayed concurrently with text that is selected based on a name of a user of the electronic device (728), such as “key” being displayed with the owner’s name “john” (e.g., optionally in the possessive form) in fig.
6c (e.g., the text associated with the selected option for the textual identifier is displayed in the representation of the second portion of the identifier appended (e.g., prepended, optionally in possessive form) with the name of the owner of the remote locator object). for example, if the user selected the option for “keys”, the representation of the second portion of the identifier reads “john’s keys”. in some embodiments, the name of the owner of the remote locator object is automatically prepended to the selected options. [00195] in some embodiments, the second text corresponding to the second respective predefined option in the representation of the second portion of the identifier are displayed concurrently with the text that is selected based on the name of the user of the electronic device (730), such as text 630 including the owner’s name “john” in fig. 6c (e.g., the text associated with the selected option for the textual identifier is displayed in the representation of the second portion of the identifier appended (e.g., prepended) with the name of the owner of the remote locator object). for example, if the user selected the option for “bag”, the representation of the second portion of the identifier reads “john’s bag”. in some embodiments, the name of the owner of the remote locator object is automatically prepended to the selected options. 
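The predefined-option behavior described above (selecting an option picks both a matching graphic and an owner-prefixed textual identifier) can be sketched as follows. This is a minimal illustration under stated assumptions, not the disclosed implementation; the option-to-emoji mapping, the function name, and the possessive formatting are all hypothetical:

```python
# Hypothetical mapping from predefined options to their associated graphics.
# The disclosure names "keys" and "bag" as examples; the emoji choices and
# the data structure here are assumptions for illustration only.
PREDEFINED_OPTIONS = {
    "keys": "🔑",   # selecting "keys" also selects the key emoji
    "bag": "👜",    # selecting "bag" also selects the bag emoji
    "backpack": "🎒",
}

def select_predefined_identifier(option, owner_name):
    """Return (graphical identifier, textual identifier) for a predefined option.

    The graphic is the emoji associated with the option, and the text is the
    owner's name in possessive form prepended to the option's label.
    """
    graphic = PREDEFINED_OPTIONS[option]
    text = f"{owner_name}'s {option}"
    return graphic, text
```

Under these assumptions, `select_predefined_identifier("keys", "john")` would yield the key emoji together with the text "john's keys", matching the example in the paragraphs above.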
[00196] the above-described manner of setting the identifier for a remote locator object (e.g., by automatically adding the owner of the user’s name to the selected textual identifier) provides a quick and efficient way of selecting the graphical and textual identifier for the remote locator object (e.g., without requiring the user to perform additional inputs to add his or her name to the textual identifier), which simplifies the interaction between the user and the electronic device and enhances the operability of the electronic device and makes the user-device interface more efficient, which additionally reduces power usage and improves battery life of the electronic device by enabling the user to use the electronic device more quickly and efficiently, while reducing errors in usage. [00197] in some embodiments, the first user interface includes a soft emoji keyboard for selecting the graphic for the first portion of the identifier (731), such as emoji keyboard 632 in fig. 6g (e.g., a soft or virtual keyboard specifically for selecting emojis). in some embodiments, the emoji keyboard includes one or more tabs or pages associated with different categories of emojis, which are selectable to display emojis associated with the selected category. in some embodiments, the emoji keyboard does not include an option to switch to displaying a textual keyboard (e.g., for selecting numbers and/or letters). 
[00198] the above-described manner of selecting the graphical identifier for a remote locator object (e.g., by displaying an emoji keyboard from which an emoji is able to be selected as the graphical identifier for the remote locator object) provides a quick and efficient way of selecting the graphical and textual identifier for the remote locator object (e.g., without requiring the user to perform additional inputs to cause display of an emoji keyboard), which simplifies the interaction between the user and the electronic device and enhances the operability of the electronic device and makes the user-device interface more efficient, which additionally reduces power usage and improves battery life of the electronic device by enabling the user to use the electronic device more quickly and efficiently, while reducing errors in usage. [00199] in some embodiments, the second user interface includes a text keyboard for selecting the one or more text characters for the second portion of the identifier (732), such as text keyboard 634 in fig. 6i (e.g., a soft or virtual keyboard for selecting numbers and/or letters). in some embodiments, the text keyboard includes a plurality of keys that are selectable to insert the selected number and/or letter in the representation of the second portion of the identifier. in some embodiments, the text keyboard includes an option for switching to an emoji keyboard. in some embodiments, in response to a user input selecting the option for switching to an emoji keyboard, the text keyboard is replaced with an emoji keyboard for selecting emojis for the graphical identifier (e.g., the device switches from editing the textual identifier to editing the graphical identifier based on whether the keyboard being displayed is the text keyboard or the emoji keyboard). 
[00200] the above-described manner of selecting the textual identifier for a remote locator object (e.g., by displaying a text keyboard to insert and/or edit text for the textual identifier for the remote locator object) provides a quick and efficient way of selecting the graphical and textual identifier for the remote locator object (e.g., without requiring the user to perform additional inputs to cause display of a text keyboard), which simplifies the interaction between the user and the electronic device and enhances the operability of the electronic device and makes the user-device interface more efficient, which additionally reduces power usage and improves battery life of the electronic device by enabling the user to use the electronic device more quickly and efficiently, while reducing errors in usage. [00201] in some embodiments, the second user interface includes a selectable option that is selectable to transition from the second user interface to the first user interface (734), such as text keyboard 634 including an emoji button that is selectable to switch to displaying an emoji keyboard as in figs. 6j-6k (e.g., the text keyboard includes an option for switching to an emoji keyboard, which optionally causes the device to switch from editing the textual identifier to editing the graphical identifier). [00202] in some embodiments, the first user interface does not include a selectable option that is selectable to transition from the first user interface to the second user interface (736), such as in figs. 6j-6k (e.g., the emoji keyboard does not include an option to switch to a text keyboard). 
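The asymmetric keyboard switching described above (the text keyboard offers a switch to the emoji keyboard, but the emoji keyboard offers no switch back) can be captured as a small state sketch. The class and method names are illustrative assumptions, not part of the disclosure:

```python
# Hypothetical sketch of the asymmetric keyboard-switching behavior: only the
# text keyboard exposes an option to switch (to the emoji keyboard); the emoji
# keyboard exposes no switch option.
class IdentifierEditor:
    def __init__(self):
        self.keyboard = "text"   # the text keyboard edits the textual portion

    def available_switches(self):
        # Only the text keyboard includes a selectable switch option.
        return ["emoji"] if self.keyboard == "text" else []

    def switch_to(self, keyboard):
        # A switch succeeds only if the current keyboard offers that option.
        if keyboard in self.available_switches():
            self.keyboard = keyboard   # now editing the graphical portion
            return True
        return False
```

In this sketch, switching from the text keyboard to the emoji keyboard succeeds, while the reverse request fails, mirroring the one-way transition described in paragraphs [00201]-[00202].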
[00203] the above-described manner of selecting an identifier for a remote locator object (e.g., by displaying a text keyboard that includes an option to switch to the emoji keyboard) provides a quick and efficient way of switching from editing the textual identifier to editing the graphical identifier (e.g., without requiring the user to perform additional inputs to complete the editing process for the textual identifier and then initiate the editing process for the graphical identifier), which simplifies the interaction between the user and the electronic device and enhances the operability of the electronic device and makes the user-device interface more efficient, which additionally reduces power usage and improves battery life of the electronic device by enabling the user to use the electronic device more quickly and efficiently, while reducing errors in usage. [00204] in some embodiments, the respective user interface includes a respective user interface element for selecting from a plurality of predefined options for the identifier for the remote locator object (738), such as in fig. 6c (e.g., the respective user interface includes a selectable list, a drop-down menu, or any other element for selecting an option from a plurality of predefined options as the textual identifier for the remote locator object). in some embodiments, the respective user interface element is pre-populated with a plurality of predefined options for naming the remote locator object. in some embodiments, the list includes a plurality of common items that the remote locator object is attached to (e.g., for the purpose of tracking the location of those objects). for example, the list includes keys, bag, backpack, purse, car, suitcase, etc. 
[00205] in some embodiments, in response to receiving the respective input, and in accordance with a determination that the respective input is directed to the respective user interface element (740) (e.g., the respective input corresponds to a selection of an option in the respective user interface element), in accordance with a determination that the respective input corresponds to a request to select a first respective predefined option of the plurality of predefined options for the second portion of the identifier, the electronic device displays (742), in the respective user interface, first text corresponding to the first respective predefined option in the representation of the second portion of the identifier appended to a name of the user of the electronic device, such as in fig. 6d (e.g., when selecting an option from the list of predefined options, the representation of the second portion of the identifier includes the name of the owner of the remote locator object). [00206] for example, in response to selecting the “key” option, the representation of the second portion of the identifier reads “john’s keys”. in some embodiments, the name of the owner of the remote locator object is not appended (e.g., prepended) to the textual identifier if the textual identifier is not a predefined textual identifier. for example, if the user provided a custom textual identifier, then the representation of the second portion of the identifier includes the custom textual identifier, but does not include the name of the owner of the remote locator object. 
[00207] in some embodiments, in accordance with a determination that the respective input corresponds to a request to select a second respective predefined option of the plurality of predefined options for the second portion of the identifier, the electronic device displays (744), in the respective user interface, second text corresponding to the second respective predefined option in the representation of the second portion of the identifier appended to the name of the user of the electronic device, wherein the second text is different from the first text, such as in fig. 6d (e.g., if the user input selected a second option from the list of predefined options, then the representation of the second portion of the identifier includes the name of the owner of the remote locator object appended (e.g., prepended) to the selected second option). [00208] the above-described manner of defining the identifier for a remote locator object (e.g., by automatically appending the name of the owner of the device to the textual identifier selected by the user) provides a quick and efficient way of defining the identifier for the remote locator object (e.g., without requiring the user to perform additional inputs to provide the owner’s name when setting the textual identifier), which simplifies the interaction between the user and the electronic device and enhances the operability of the electronic device and makes the user-device interface more efficient, which additionally reduces power usage and improves battery life of the electronic device by enabling the user to use the electronic device more quickly and efficiently, while reducing errors in usage. 
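The naming rule in the paragraphs above (the owner's name is prepended only when the textual identifier is one of the predefined options, not when the user supplies a custom identifier) can be sketched as a single function. The set of predefined options is taken from the examples in paragraph [00204]; the function name is an assumption:

```python
# Predefined options listed as examples in the disclosure; the set type and
# function name are illustrative assumptions.
PREDEFINED = {"keys", "bag", "backpack", "purse", "car", "suitcase"}

def textual_identifier(selection, owner):
    """Return the textual identifier for a selection, per the described rule."""
    if selection in PREDEFINED:
        return f"{owner}'s {selection}"   # e.g., "john's keys"
    return selection                       # custom identifiers are kept as-is
```

Under this sketch, a predefined selection such as "keys" becomes "john's keys", while a custom identifier is displayed without the owner's name, as paragraph [00206] describes.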
[00209] in some embodiments, the respective user interface is displayed in response to selection of a respective option included in a respective user interface element, the respective option corresponding to a request to provide a non-predefined (e.g., first and/or second portion for the) identifier for the remote locator object, and the respective user interface element further includes a plurality of options for selecting from a plurality of predefined options for the second portion of the identifier for the remote locator object (746), such as user input 603 selecting custom option 624 in fig. 6q causing display of icon 628 and text field 630 in fig. 6r (e.g., the user interface includes a list of predefined options as the textual identifier for the remote locator object). [00210] in some embodiments, the list of predefined options includes a “custom” or “other” option, the selection of which provides the user the option to provide a custom name for the remote locator object. in some embodiments, selection of the “custom” or “other” option causes the display of the respective user interface object that includes a representation of the first portion (e.g., graphical portion) and second portion (e.g., textual portion) of the identifier, which are selectable to select the graphical identifier and textual identifier, respectively (e.g., and optionally cause display of an emoji keyboard or text keyboard, respectively, as described above). 
[00211] the above-described manner of defining a custom identifier for a remote locator object (e.g., by selecting a custom option from a list of predefined names) provides a quick and efficient way of providing a custom name (e.g., without limiting the user to only predefined names), which simplifies the interaction between the user and the electronic device and enhances the operability of the electronic device and makes the user-device interface more efficient (e.g., by reducing confusion between remote locator objects that have the same identifier if the identifiers were limited to only the predefined options), which additionally reduces power usage and improves battery life of the electronic device by enabling the user to use the electronic device more quickly and efficiently, while reducing errors in usage. [00212] in some embodiments, the respective user interface is displayed in response to selection of a selectable option displayed in a user interface associated with the remote locator object (748), such as user input 603 selecting selectable option 616 in fig. 6a (e.g., on a user interface associated with the remote locator object, such as a settings user interface for managing the remote locator object or changing one or more settings of the remote locator object, a selectable option to rename the remote locator object is displayed (e.g., a user interface that includes additional information about the remote locator device)). in some embodiments, the user interface associated with the remote locator object includes a selectable option for finding and/or locating the remote locator object (e.g., in a manner similar to that described below with respect to method 900). 
[00213] the above-described manner of renaming a remote locator object (e.g., by selecting a selectable option to rename the remote locator object from a user interface associated with the remote locator object) provides a quick and efficient way of renaming a remote locator object (e.g., without requiring the user to reset the settings for the remote locator object and re initialize the remote locator object to change the name of the remote locator object), which simplifies the interaction between the user and the electronic device and enhances the operability of the electronic device and makes the user-device interface more efficient, which additionally reduces power usage and improves battery life of the electronic device by enabling the user to use the electronic device more quickly and efficiently, while reducing errors in usage. [00214] in some embodiments, in response to receiving the respective input (750), in accordance with the determination that the respective input corresponds to selection of the representation of the first portion of the identifier, the electronic device visually distinguishes (752) the representation of the first portion of the identifier from the representation of the second portion of the identifier, such as in fig. 6g (e.g., while the first user interface is displayed, visually highlight the representation of the first portion of the identifier or any other suitable visual indication, to indicate that the first portion of the identifier is being edited). for example, when the first portion of the identifier is visually distinguished, selecting an option from a soft keyboard (e.g., emoji keyboard or text keyboard) causes the first portion of the identifier to be edited according to the selection on the soft keyboard (e.g., and the second portion of the identifier is not edited). 
[00215] in some embodiments, in accordance with the determination that the respective input corresponds to selection of the representation of the second portion of the identifier, the electronic device visually distinguishes (754) the representation of the second portion of the identifier from the representation of the first portion of the identifier, such as in fig. 6i (e.g., while the second user interface is displayed, visually highlight the representation of the second portion of the identifier or any other suitable visual indication, to indicate that the second portion of the identifier is being edited). for example, when the second portion of the identifier is visually distinguished, selecting an option from a soft keyboard (e.g., emoji keyboard or text keyboard) causes the second portion of the identifier to be edited according to the selection on the soft keyboard (e.g., and the first portion of the identifier is not edited). [00216] the above-described manner of indicating the portion of the identifier for the remote locator object being edited (e.g., by visually distinguishing the representation of the portion that was selected) provides a quick and efficient way of indicating the portion of the identifier that will be edited in response to an editing input (e.g., without requiring the user to perform inputs to determine whether the first portion or the second portion of the identifier is being edited), which simplifies the interaction between the user and the electronic device and enhances the operability of the electronic device and makes the user-device interface more efficient, which additionally reduces power usage and improves battery life of the electronic device by enabling the user to use the electronic device more quickly and efficiently, while reducing errors in usage. 
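The editing-focus behavior described above (soft-keyboard input edits only the visually distinguished portion of the identifier, leaving the other portion unchanged) can be illustrated with a minimal sketch. The class, attribute names, and the append-vs-replace choice are assumptions made for illustration:

```python
# Hypothetical sketch: input from the soft keyboard edits only the portion of
# the identifier that is currently visually distinguished ("active").
class Identifier:
    def __init__(self, graphic, text):
        self.graphic = graphic
        self.text = text
        self.active = "text"   # which portion is visually distinguished

    def keyboard_input(self, value):
        if self.active == "graphic":
            self.graphic = value           # emoji keyboard replaces the graphic
        else:
            self.text += value             # text keyboard appends characters
```

In this sketch, a key press edits the textual portion while it is active, and an emoji selection edits only the graphical portion once focus moves there, matching paragraphs [00214]-[00215].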
[00217] in some embodiments, the electronic device displays (756), via the display generation component, a map user interface that includes a representation of a map that indicates locations of one or more objects, including the remote locator object, wherein the map user interface includes the representation of the first portion of the identifier of the remote locator object displayed at a location on the representation of the map that corresponds to a current location of the remote locator object, such as in fig. 6l (e.g., a map user interface includes a representation of the remote locator object that indicates the location of the remote locator object in the map (and which optionally includes one or more representations of other objects of which the location is known)). [00218] in some embodiments, the remote locator object is represented by the first portion of the identifier (e.g., the graphical identifier). for example, the map user interface includes one or more graphical icons that represent the locations of one or more objects (including the remote locator object) in the map user interface. in some embodiments, the second portion of the identifier is not displayed with the graphical icons. in some embodiments, in response to selecting the graphical icon, the map user interface is updated to display information about the corresponding remote locator object, including optionally referring to the remote locator object using the textual identifier (e.g., the second portion of the identifier). 
[00219] the above-described manner of representing a remote locator object (e.g., by representing the remote locator object using the graphical indicator) provides a quick and efficient way of representing a remote locator object (e.g., in a concise fashion, without requiring the display of the textual description, thus reducing the display area requirements), which simplifies the interaction between the user and the electronic device and enhances the operability of the electronic device and makes the user-device interface more efficient, which additionally reduces power usage and improves battery life of the electronic device by enabling the user to use the electronic device more quickly and efficiently, while reducing errors in usage. [00220] in some embodiments, the electronic device displays (758), via the display generation component, a map user interface that includes a representation of a map that indicates locations of one or more objects, including the remote locator object, such as in fig. 6l (e.g., a map user interface includes a representation of the remote locator object that indicates the location of the remote locator object in the map (and which optionally includes one or more representations of other objects of which the location is known and/or tracked)). 
[00221] in some embodiments, in accordance with a determination that a plurality of objects, including a first object and a second object, satisfy one or more criteria (e.g., if more than a threshold number of tracked objects (e.g., 2 objects, 3 objects, 5 objects, 10 objects) are determined to be located at a respective location or within a threshold distance of each other (e.g., within 2 feet, within 10 feet, within ¼ mile, within 5 miles, etc.), or if a threshold number of tracked objects are paired with the electronic device), the map user interface includes a respective representation of the plurality of objects without including a first representation of the first object and a second representation of the second object (760), such as group 640 in fig. 6l (e.g., the plurality of objects are grouped together and represented as a set of objects). [00222] in some embodiments, the electronic device is the user’s primary device (e.g., the device is the user’s phone or the user’s computer, and optionally not the user’s tablet or the user’s watch). in some embodiments, the representation of the set of objects includes one or more representations of some objects in the set and optionally does not include representations of other objects in the set. for example, if the group includes four objects, then the representation of the set includes a representation of two of the objects and does not include representations of the other two objects. in some embodiments, the map user interface includes a user interface element that indicates the location on the representation of the map at which the set of objects is located. for example, the map includes a black dot and the representation of the set of objects includes a graphical element (e.g., arrow, a dot, etc.) pointing towards the black dot. 
[00223] in some embodiments, in accordance with a determination that the plurality of objects do not satisfy the one or more criteria, the map user interface includes the first representation of the first object and the second representation of the second object (762), such as icon 642 in fig. 6l (e.g., without including the respective representation of the plurality of objects). [00224] in some embodiments, if less than the threshold number of tracked objects are determined to be located at the respective location or within the threshold distance of each other, then the objects are not grouped together and are optionally represented individually by their identifiers (optionally only by their graphical identifiers). for example, the map user interface includes a plurality of black dots and the individual representations of the objects include graphical elements pointing towards their respective black dots. [00225] the above-described manner of displaying the location of one or more tracked objects (e.g., by grouping together a set of objects and representing the group as one set if the set are close in proximity or by representing each object individually if the objects are not close in proximity) provides a quick and efficient way of indicating the location of multiple objects that are close together (e.g., without displaying a representation of each object, even if the objects are close together), which simplifies the interaction between the user and the electronic device and enhances the operability of the electronic device and makes the user-device interface more efficient (e.g., by conserving display space and increasing the visibility of the displayed objects), which additionally reduces power usage and improves battery life of the electronic device by enabling the user to use the electronic device more quickly and efficiently, while reducing errors in usage. 
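The grouping behavior in paragraphs [00221]-[00224] (objects within a threshold distance of each other, and above a count threshold, collapse into a single group representation, while other objects keep individual representations) can be sketched as a simple clustering function. The threshold values, the naive first-member distance test, and the data structures are assumptions for illustration only:

```python
# Hedged sketch of the described grouping criterion. Objects are clustered by
# distance to the first member of an existing cluster; clusters that reach the
# count threshold are shown as one group, others as individual objects.
def map_representations(objects, positions, distance_threshold=10.0, count_threshold=2):
    """objects: list of names; positions: parallel list of (x, y) points."""
    clusters = []
    for name, pos in zip(objects, positions):
        for cluster in clusters:
            cx, cy = cluster["positions"][0]
            if ((pos[0] - cx) ** 2 + (pos[1] - cy) ** 2) ** 0.5 <= distance_threshold:
                cluster["members"].append(name)
                cluster["positions"].append(pos)
                break
        else:
            clusters.append({"members": [name], "positions": [pos]})
    reps = []
    for cluster in clusters:
        if len(cluster["members"]) >= count_threshold:
            reps.append(("group", cluster["members"]))     # one shared representation
        else:
            reps.extend(("object", [m]) for m in cluster["members"])
    return reps
```

With these assumed thresholds, two nearby objects would collapse into one group representation while a distant third object keeps its own representation, mirroring group 640 and icon 642 in fig. 6l.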
[00226] in some embodiments, the one or more criteria include a criterion that is satisfied when the plurality of objects are within a threshold distance of a respective electronic device (764), such as described in fig. 6l (e.g., the plurality of objects are within 2 feet, 5 feet, 200 feet, ½ mile, 1 mile, 10 miles, etc. of the device). in some embodiments, the plurality of objects are within the threshold distance of the device if, on the representation of the map, the objects would otherwise be displayed within 1 mm, 5 mm, or 1 cm of the location of the device. in some embodiments, the respective electronic device is the user’s primary device (e.g., the user’s phone, the user’s laptop, etc.) and not necessarily the device that is displaying the user interface (e.g., the respective electronic device is not necessarily the device performing method 700, but can be another electronic device). in some embodiments, the respective electronic device is the device that is displaying the user interface and performing method 700. [00227] the above-described manner of displaying a group of tracked objects (e.g., as a group, if the objects are within a threshold distance of the device) provides a quick and efficient way of indicating the location of multiple objects that are close together (e.g., without individually displaying a representation of each object), which simplifies the interaction between the user and the electronic device and enhances the operability of the electronic device and makes the user-device interface more efficient, which additionally reduces power usage and improves battery life of the electronic device by enabling the user to use the electronic device more quickly and efficiently, while reducing errors in usage. [00228] in some embodiments, the one or more criteria include a criterion that is satisfied when the plurality of objects are in wireless communication with a respective electronic device (766), such as described in fig. 
6l (e.g., the plurality of objects are paired with the respective electronic device (e.g., via bluetooth, wifi, nfc, etc.)). for example, the plurality of objects are paired with the electronic device displaying the user interface. in another example, the plurality of objects are paired with the user’s primary electronic device, which is optionally a different electronic device than the device that is displaying the user interface. [00229] the above-described manner of displaying a group of tracked objects (e.g., as a group, if the objects are paired with the electronic device) provides a quick and efficient way of indicating the location of multiple objects that are close together (e.g., without individually displaying a representation of each object), which simplifies the interaction between the user and the electronic device and enhances the operability of the electronic device and makes the user-device interface more efficient, which additionally reduces power usage and improves battery life of the electronic device by enabling the user to use the electronic device more quickly and efficiently, while reducing errors in usage. [00230] in some embodiments, while displaying the respective representation of the plurality of objects in the map user interface, the electronic device receives (768), via the one or more input devices, selection of the respective representation of the plurality of objects, such as in fig. 6m (e.g., while the plurality of objects are grouped together and represented as a set of objects, receiving a user input selecting the representation of the set of objects). [00231] in some embodiments, in response to receiving the selection of the respective representation of the plurality of objects, the electronic device displays (770), in the map user interface, the first representation of the first object and the second representation of the second object, such as in fig. 
6n (e.g., expanding the set of objects and displaying representations of the objects in the set of objects (e.g., optionally displaying representations of each object)). [00232] in some embodiments, the user interface includes a list of the objects in the set of objects. in some embodiments, in response to receiving the selection of the respective representation of the plurality of objects, the map user interface is updated to cease displaying representations of other objects that are not in the plurality of objects (e.g., other objects that are not paired with the device, or other objects that are not within the threshold distance from the device). in some embodiments, in response to receiving the selection of the respective representation of the plurality of objects, the map user interface is updated to reposition the representation of the map such that the respective representation of the plurality of objects is centered. in some embodiments, the user interface displays more and/or different information about the set of objects (e.g., more and/or different information about the objects in the set) than was previously displayed before receiving the user input. for example, the user interface optionally includes entries for more objects in the group than were previously displayed. in some embodiments, the user interface displays a textual indication of the location of the group of objects (e.g., “with you”, “near home”, “with spouse”, etc.). 
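The expand-on-selection behavior described above (selecting the group's representation replaces it with individual representations of the grouped objects and ceases display of objects outside the group) can be sketched as a single function. The representation tuples and function name are assumptions:

```python
# Hypothetical sketch: after the group is selected, only the group's members
# are shown, each with an individual representation; objects outside the
# group are no longer displayed.
def select_group(group_members, all_objects):
    """Return the representations shown after the group is selected."""
    members = set(group_members)
    return [("object", name) for name in all_objects if name in members]
```

Under this sketch, selecting a group containing "keys" and "bag" expands it into individual entries for those two objects while a third, ungrouped object drops out of the view, consistent with paragraph [00232].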
[00233] the above-described manner of displaying a group of tracked objects (e.g., displaying the objects in the group in response to an input selecting the representation of the group) provides a quick and efficient way of indicating objects that are near the device (e.g., by displaying the objects that are near the device in a single user interface, optionally without displaying other objects that are not near the device), which simplifies the interaction between the user and the electronic device and enhances the operability of the electronic device and makes the user-device interface more efficient, which additionally reduces power usage and improves battery life of the electronic device by enabling the user to use the electronic device more quickly and efficiently, while reducing errors in usage. [00234] it should be understood that the particular order in which the operations in figs. 7a-7h have been described is merely exemplary and is not intended to indicate that the described order is the only order in which the operations could be performed. one of ordinary skill in the art would recognize various ways to reorder the operations described herein. additionally, it should be noted that details of other processes described herein with respect to other methods described herein (e.g., methods 900, 1100, and 1300) are also applicable in an analogous manner to method 700 described above with respect to figs. 7a-7h. for example, providing user interfaces for defining identifiers for remote locator objects described above with reference to method 700 optionally has one or more of the characteristics of locating a remote locator object, providing information associated with a remote locator object, displaying notifications associated with a trackable device, etc., described herein with reference to other methods described herein (e.g., methods 900, 1100, and 1300). for brevity, these details are not repeated here. 
[00235] the operations in the information processing methods described above are, optionally, implemented by running one or more functional modules in an information processing apparatus such as general purpose processors (e.g., as described with respect to figs. 1a-1b, 3, 5a-5h) or application specific chips. further, the operations described above with reference to figs. 7a-7h are, optionally, implemented by components depicted in figs. 1a-1b. for example, displaying operations 706, 708, 716, 722, 742, 744, 756, 758, and 770 and receiving operations 702 and 768 are, optionally, implemented by event sorter 170, event recognizer 180, and event handler 190. event monitor 171 in event sorter 170 detects a contact on touch screen 504, and event dispatcher module 174 delivers the event information to application 136-1. a respective event recognizer 180 of application 136-1 compares the event information to respective event definitions 186, and determines whether a first contact at a first location on the touch screen corresponds to a predefined event or sub-event, such as selection of an object on a user interface. when a respective predefined event or sub-event is detected, event recognizer 180 activates an event handler 190 associated with the detection of the event or sub-event. event handler 190 optionally utilizes or calls data updater 176 or object updater 177 to update the application internal state 192. in some embodiments, event handler 190 accesses a respective gui updater 178 to update what is displayed by the application. similarly, it would be clear to a person having ordinary skill in the art how other processes can be implemented based on the components depicted in figs. 1a-1b.

locating a remote locator object

[00236] users interact with electronic devices in many different manners. in some embodiments, an electronic device is able to track the location of an object such as a remote locator object. 
in some embodiments, the remote locator object, which supports location tracking functions, can be attached to items that do not support location tracking functions. the embodiments described below provide ways in which an electronic device locates a remote locator object, thus enhancing the user’s interactions with the electronic device. enhancing interactions with a device reduces the amount of time needed by a user to perform operations, and thus reduces the power usage of the device and increases battery life for battery-powered devices. it is understood that people use devices. when a person uses a device, that person is optionally referred to as a user of the device. [00237] figs. 8a-8i illustrate exemplary ways in which an electronic device 500 locates a remote locator object in accordance with some embodiments of the disclosure. the embodiments in these figures are used to illustrate the processes described below, including the processes described with reference to figs. 9a-9g. [00238] fig. 8a illustrates electronic device 500 displaying user interface 800 (e.g., via a display device, etc.). in some embodiments, user interface 800 is displayed via a display generation component. in some embodiments, the display generation component is a hardware component (e.g., including electrical components) capable of receiving display data and displaying a user interface. in some embodiments, examples of a display generation component include a touch screen display (such as touch screen 504), a monitor, a television, a projector, an integrated, discrete, or external display device, or any other suitable display device that is in communication with device 500. 
[00239] in some embodiments, user interface 800 is a user interface associated with a respective remote locator object, optionally for managing and/or changing one or more settings associated with the respective remote locator object, for viewing information about the respective remote locator object, and/or for locating the respective remote locator object, similar to user interface 600 described above with respect to fig. 6a. [00240] as shown in fig. 8a, the respective remote locator object named “john’s keys” is determined to be (e.g., roughly) 30 feet from device 500. in fig. 8a, a user input 803 is received selecting selectable option 806 to locate the respective remote locator object. in some embodiments, in response to receiving user input 803 selecting selectable option 806, device 500 initiates a process to locate the respective remote locator object, as shown in fig. 8b. in some embodiments, the process to locate the respective remote locator object includes a plurality of different finding modes, and the finding mode that is used is optionally based on the distance that the remote locator object is from device 500. for example, if the distance between the remote locator object and device 500 is above a first threshold distance (e.g., more than 50 feet, 100 feet, ¼ mile, ½ mile, 1 mile, etc.), then the process to locate the respective remote locator object involves displaying one or more navigation directions on a representation of a map to travel from the current location to the determined location of the respective remote locator object (e.g., a map style finding mode). in some embodiments, if the distance is less than the first threshold distance, then the process to locate the respective remote locator object involves displaying one or more indications that are biased towards or point towards the location of the remote locator object to guide the user to move towards the remote locator object (e.g., a “compass” style finding mode).
in some embodiments, either of the map style finding mode and the compass style finding mode has sub-modes based on the distance between the remote locator object and the device in which the user interface is updated or changes to provide a better finding experience, as will be discussed in more detail below. [00241] in fig. 8b, because the distance between device 500 and the remote locator object (e.g., remote locator object 830) is less than the first threshold distance (e.g., the distance is 30 feet), device 500 enters into the compass style finding mode and displays user interface 816. in some embodiments, user interface 816 includes a textual indication 818 of the remote locator object being located (e.g., the textual indicator of the remote locator object, which was optionally selected according to method 700 described above), exit affordance 824 that is selectable to exit the process of locating the remote locator object, and audio affordance 826 that is selectable to cause the remote locator object to generate an audible output. [00242] in some embodiments, user interface 816 further includes a plurality of user interface elements 820 (e.g., a “point cloud”) that, in combination, indicate the general location of remote locator object 830 (e.g., relative to device 500). in fig. 8b, because remote locator object 830 is farther than a second threshold distance from device 500 (e.g., more than 10 feet, 20 feet, 30 feet, 50 feet, etc. away from device 500), user interface 816 includes the plurality of user interface elements 820 that move around in user interface 816 and are optionally biased towards the location of remote locator object 830 (e.g., device 500 is in the first sub-mode of the compass-style finding mode). for example, a majority of the plurality of user interface elements 820 are located at the portion of user interface 816 that is closer to the location of remote locator object 830. 
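the distance-thresholded choice of finding mode described above can be sketched as follows; the threshold values and mode labels here are illustrative assumptions for the purpose of the sketch, not values taken from the disclosure:

```python
# hypothetical sketch of choosing a finding mode from the distance between
# the device and the remote locator object; the thresholds are assumed values.

MAP_MODE_THRESHOLD_FT = 100.0    # beyond this, navigate with a map
POINT_CLOUD_THRESHOLD_FT = 20.0  # beyond this (but within map range), show point cloud

def select_finding_mode(distance_ft: float) -> str:
    """Return a finding-mode label based on the distance to the locator object."""
    if distance_ft > MAP_MODE_THRESHOLD_FT:
        return "map"                  # navigation directions on a representation of a map
    if distance_ft > POINT_CLOUD_THRESHOLD_FT:
        return "compass/point-cloud"  # first sub-mode: elements biased toward the object
    return "compass/arrow"            # second sub-mode: arrow plus distance text
```

with these assumed thresholds, the 30-foot distance shown in fig. 8b would select the compass-style point-cloud sub-mode rather than the map-style mode.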
[00243] in some embodiments, while performing the process to locate a remote locator object, device 500 optionally uses (e.g., automatically) one or more cameras of device 500 to capture images of the environment around device 500 (e.g., environment 828). in some embodiments, in addition to or alternatively to using the one or more cameras of device 500, device 500 uses wireless communication circuitry (e.g., bluetooth, nfc, etc.) to locate and/or identify the remote locator object. in some embodiments, device 500 analyzes the captured images to facilitate identifying remote locator object 830 in environment 828 and/or determining the location of remote locator object 830 in environment 828. in some embodiments, the one or more cameras that are used to capture images of environment 828 are located on the side of device 500 opposite to the display generation component (e.g., on the opposite side of touch screen 504). in some embodiments, the one or more cameras that are used to capture images of environment 828 are the same cameras that are used to take photographs and/or videos using a camera application installed on device 500. [00244] in some embodiments, while the one or more cameras of device 500 are capturing images of environment 828 (e.g., continuously or at a predetermined interval, such as once every 0.5 seconds, every 1 second, every 5 seconds, every 10 seconds, etc.), user interface 816 includes representation 832 of the captured images of environment 828. in some embodiments, representation 832 is a visually modified version of the captured images (e.g., blurred, shaded, darkened, etc.). for example, in fig. 8b, environment 828 includes a table and remote locator object 830 placed on top of the table.
in some embodiments, representation 832 is a blurred representation of the captured images of environment 828 (e.g., a blurred image of a table and a remote locator object on the table) displayed in the background of user interface 816 (e.g., the elements of user interface 816, such as the plurality of user interface elements 820, are displayed overlaid on representation 832). in some embodiments, displaying representation 832 indicates that the one or more cameras of device 500 are in use to help locate remote locator object 830. [00245] in fig. 8c, device 500 has moved in environment 828 such that remote locator object 830 is 25 feet away from device 500 (e.g., and located ahead and to the right of device 500). in some embodiments, because the distance between device 500 and remote locator object 830 is less than the second threshold distance, device 500 is in a second sub-mode of the compass-style finding mode. for example, in fig. 8c, user interface 816 is updated to include arrow 834 that points in the direction of remote locator object 830 (e.g., relative to device 500) and a textual description of the distance and direction of remote locator object 830. as shown in fig. 8c, representation 832 of the captured images of environment 828 shows that the table on which remote locator object 830 is located is now on the right side of device 500 and is now closer to device 500 (e.g., the representation of the table and/or remote locator object is larger) as compared to fig. 8b. [00246] in fig. 8c, the ambient luminance level of environment 828 is above a threshold level (e.g., above 10 lux, 50 lux, 100 lux, 500 lux, etc.). in some embodiments, device 500 determines the ambient luminance using an ambient light sensor of device 500 (e.g., such as optical sensor 164 and/or proximity sensor 166 described above with respect to fig. 4a). 
in some embodiments, because the ambient luminance level of environment 828 is such that environment 828 is bright enough for the one or more cameras to be able to capture a sufficiently clear image of environment 828 and/or of remote locator object 830 (e.g., enough detail, enough resolution, enough contrast, etc.) and for device 500 to identify remote locator object 830, user interface 816 does not include a selectable option for turning on a lighting element of device 500 during the finding mode. [00247] in fig. 8d, during the finding mode, device 500 determines that the ambient luminance of environment 828 has dropped below the threshold level (e.g., the lights have turned off, the sun has set, the user has walked into a dark room, for example). as shown in fig. 8d, because environment 828 is darker than in fig. 8c, representation 832 of the captured images of environment 828 reflects the darkened environment. in some embodiments, in response to determining that the ambient luminance of environment 828 has dropped below the threshold level, device 500 displays selectable option 836 in user interface 816, which is selectable to turn on the lighting element of device 500. in some embodiments, the lighting element that is turned on is the same lighting element used as a flash when taking pictures or videos with the one or more cameras of device 500 (e.g., in a camera application on device 500). in some embodiments, the lighting element is located on the same side of device 500 as the one or more cameras that are capturing images of environment 828. in some embodiments, the lighting element is able to light up at least a part of the environment that is captured by the one or more cameras of device 500. 
in some embodiments, in response to determining that the ambient luminance of environment 828 has dropped below the threshold level, user interface 816 includes textual description 838 that more light is required (e.g., suggesting that the user turn on the lighting element of device 500). [00248] in fig. 8e, while displaying selectable option 836 and textual description 838, device 500 detects that the ambient luminance of environment 828 has increased back above the threshold level. in some embodiments, in response to detecting that the ambient luminance of environment 828 has increased back above the threshold level, user interface 816 is updated to remove selectable option 836 and textual description 838. in some embodiments, if the lighting element was turned on when device 500 detects that the ambient luminance of environment 828 has increased back above the threshold level, device 500 turns off the lighting element of device 500 (e.g., optionally only if the lighting element was on in response to selecting selectable option 836). [00249] in some embodiments, the threshold level above which the selectable option is ceased to be displayed (e.g., as in fig. 8e) is different than the threshold level below which the selectable option is displayed (e.g., as in fig. 8d). in some embodiments, the threshold level above which the selectable option is ceased to be displayed is more than the threshold level below which the selectable option is displayed. for example, while displaying selectable option 836, the ambient luminance has to increase to a level that is greater than the level that caused selectable option 836 to be displayed (e.g., 10 lux greater, 50 lux greater, 100 lux greater, 500 lux greater, 10% greater, 30% greater, 50% greater, 100% greater, etc.) in order for device 500 to cease displaying selectable option 836 in user interface 816.
thus, device 500 optionally implements a hysteresis effect for displaying selectable option 836 and ceasing display of selectable option 836. in some embodiments, implementing a hysteresis effect prevents selectable option 836 from flickering in and out of user interface 816 (e.g., prevents selectable option 836 from switching back and forth from being displayed and not being displayed) if, for example, the ambient luminance is near the threshold level. [00250] in fig. 8f, while the ambient luminance of environment 828 is below the threshold level and user interface 816 includes selectable option 836 and textual description 838, a user input 803 is received selecting selectable option 836. in some embodiments, in response to receiving the user input 803 selecting selectable option 836, device 500 enables lighting element 840 of device 500, as shown in fig. 8g. in fig. 8g, a portion of environment 828 is illuminated by lighting element 840 such that the table and remote locator object 830 are illuminated. in some embodiments, representation 832 of the captured images of environment 828 reflects that the environment has been illuminated (e.g., the area that is illuminated is brighter than the area that is not illuminated). in some embodiments, selectable option 836 is updated to indicate that lighting element 840 is enabled. for example, in fig. 8g, the colors of selectable option 836 are inverted, although it is understood that any visual indication on selectable option 836 that lighting element 840 is enabled is possible. in some embodiments, in response to enabling lighting element 840, textual description 838 is removed from user interface 816 (e.g., due to no longer needing to indicate that more light is required). 
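the hysteresis effect described above can be sketched as follows; the two lux thresholds are illustrative assumptions, and the function names are hypothetical:

```python
# hypothetical sketch of the hysteresis applied to showing/hiding the
# flashlight option; the lux thresholds are assumed, not from the disclosure.

SHOW_BELOW_LUX = 50.0    # display the option once ambient light falls below this
HIDE_ABOVE_LUX = 100.0   # hide it only after ambient light rises above this

def flashlight_option_visible(currently_shown: bool, ambient_lux: float) -> bool:
    """Decide visibility of the flashlight option, with hysteresis."""
    if currently_shown:
        # once shown, require the brighter threshold before hiding
        return ambient_lux <= HIDE_ABOVE_LUX
    # once hidden, require the darker threshold before showing
    return ambient_lux < SHOW_BELOW_LUX
```

readings that hover between the two thresholds (e.g., around 75 lux in this sketch) leave the option in its current state, which is what prevents the option from flickering in and out of the user interface.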
in some embodiments, selection of selectable option 836 when lighting element 840 is on causes lighting element 840 to be turned off (e.g., which optionally causes selectable option 836 to indicate that lighting element 840 is not enabled and optionally causes textual description 838 to be displayed in user interface 816). [00251] thus, as described above, while performing a process to locate a remote locator object (e.g., optionally while in a compass-style finding mode shown in figs. 8b-8g), if device 500 detects that the luminance level of the environment is below a threshold such that more light is needed to increase the accuracy and/or efficacy of locating the remote locator object in the environment, device 500 automatically displays a selectable option to turn on a lighting element of device 500 to increase the luminance level of the environment (e.g., a portion of the environment), optionally increasing the accuracy and/or efficacy of locating the remote locator object in the environment. [00252] in some embodiments, while device 500 is in certain finding modes, device 500 does not display a selectable option to turn on the lighting element of device 500, even if the ambient luminance of the environment is below the threshold (e.g., even if all other criteria that cause display of the selectable option are satisfied). [00253] for example, in fig. 8h, remote locator object 830 is less than a third threshold distance away from device 500 (e.g., less than ½ foot, 1 foot, 2 feet, 3 feet, 5 feet, etc.). in some embodiments, in response to determining that remote locator object 830 is less than the third threshold distance away from device 500, device 500 enters into a third sub-mode of the compass-style finding mode, as shown in fig. 8h. in fig. 
8h, user interface 816 has been updated to display a representation 842 of remote locator object 830 and a bounding shape that closes into and merges with representation 842 of remote locator object 830 as device 500 approaches and reaches the location of remote locator object 830. [00254] in some embodiments, because device 500 is in the third sub-mode in which remote locator object 830 is less than the third threshold distance away from device 500, even if environment 828 has an ambient luminance level below the threshold level, user interface 816 does not include a selectable option to turn on the lighting element of device 500. in some embodiments, while in the third sub-mode, device 500 does not use the one or more cameras to help locate the remote locator object and enabling the lighting element would not help device 500 in locating the remote locator object. in some embodiments, while in the third sub-mode, device 500 wirelessly communicates directly with remote locator object 830 to determine its location (e.g., via radio communication circuitry). [00255] fig. 8i illustrates another situation in which device 500 does not display a selectable option to turn on the lighting element of device 500 while in a finding mode. in fig. 8i, the distance between the remote locator object and device 500 is more than the first threshold distance (e.g., more than 50 feet, 100 feet, ¼ mile, ½ mile, 1 mile, etc.), and in response to determining that the distance between the remote locator object and device 500 is more than the first threshold distance, device 500 operates in a map style finding mode, which includes displaying one or more driving and/or navigation directions to travel from the current location of device 500 to the determined location of the remote locator object.
in some embodiments, while in the map style finding mode, device 500 does not use the one or more cameras of device 500 to help locate the remote locator object and thus, device 500 does not display a selectable option to turn on the lighting element of device 500 in user interface 844, even though the ambient luminance level of the environment around device 500 is below the threshold value. as discussed above, in some embodiments, the remote locator object is optionally able to communicate with electronic devices in the vicinity of the remote locator object (e.g., devices which optionally do not have a previous relationship with the remote locator object) such that the remote locator object is able to cause its location to be updated and sent to device 500 (e.g., via a server). in some embodiments, in this way, device 500 is able to receive updates and/or access the location of the remote locator object (e.g., by querying a server that receives updated location information from the remote locator object) even if the remote locator object is not able to directly communicate with device 500. [00256] figs. 9a-9g are flow diagrams illustrating a method 900 of locating a remote locator object in accordance with some embodiments, such as in figs. 8a-8i. the method 900 is optionally performed at an electronic device such as device 100, device 300, device 500 as described above with reference to figs. 1a-1b, 2-3, 4a-4b and 5a-5h. some operations in method 900 are, optionally, combined and/or the order of some operations is, optionally, changed. [00257] as described below, the method 900 provides ways to locate a remote locator object. the method reduces the cognitive burden on a user when interacting with a user interface of the device of the disclosure, thereby creating a more efficient human-machine interface.
for battery-operated electronic devices, increasing the efficiency of the user’s interaction with the user interface conserves power and increases the time between battery charges. [00258] in some embodiments, an electronic device in communication with one or more wireless antenna, a display generation component and one or more input devices (e.g., electronic device 500, a mobile device (e.g., a tablet, a smartphone, a media player, or a wearable device) including wireless communication circuitry, optionally in communication with one or more of a mouse (e.g., external), trackpad (optionally integrated or external), touchpad (optionally integrated or external), remote control device (e.g., external), another mobile device (e.g., separate from the electronic device), a handheld device (e.g., external), and/or a controller (e.g., external), etc.) displays (902) a first user interface (e.g., via the display generation component), such as user interface 800 in fig. 8a (e.g., a user interface that includes information about one or more remote locator objects, a home screen user interface with a plurality of app launch icons, an application user interface, a virtual assistant user interface, or any other suitable user interface). [00259] in some embodiments, the display generation component is a display integrated with the electronic device (optionally a touch screen display), external display such as a monitor, projector, television, or a hardware component (optionally integrated or external) for projecting a user interface or causing a user interface to be visible to one or more users, etc. [00260] in some embodiments, while displaying the first user interface, the electronic device receives (904) a request, via the one or more input devices, to locate a remote locator object, such as user input 803 selecting selectable option 806 for locating the respective remote locator object in fig. 
8a (e.g., a user input tapping on a “find remote locator object” affordance or a request to a virtual assistant (e.g., voice request) to “find remote locator object”). [00261] in some embodiments, in response to receiving the request to locate the remote locator object, the electronic device displays (906), via the display generation component, a user interface for locating the remote locator object, such as user interface 816 in fig. 8b (e.g., initiate a process for finding and/or locating the remote locator object). [00262] in some embodiments, while the device is in a process for finding and/or locating a remote locator object, the device displays a user interface for guiding the user to locate the remote locator object. in some embodiments, the electronic device is used as a compass-like device for locating the remote locator object (e.g., a compass-style finding mode). for example, the device is able to determine the direction of the remote locator object and guide the user to move in the determined direction. in some embodiments, in the finding mode, the user interface includes visual indicators that are displayed via the display generation component to indicate the direction and/or distance of the remote locator object (e.g., arrows pointing in the direction of the remote locator object and/or a textual indication of the approximate distance that the remote locator object is from the device). in some embodiments, the device determines the location of the remote locator object (e.g., direction and distance) based on wireless communication with the remote locator object, such as via the one or more wireless antenna (e.g., via bluetooth, wifi, an ad-hoc wireless network, etc.). in some embodiments, the device determines the location of the remote locator object by using one or more cameras of the device to capture images of the environment around the device and analyze the images to identify and locate the remote locator object.
in some embodiments, while using the one or more cameras of the device to find and identify the remote locator object, the device displays an augmented-reality environment to guide the user to the location of the remote locator object (e.g., an augmented reality finding mode). for example, the augmented-reality environment includes a representation of the real world environment being captured by the one or more cameras (e.g., a photorealistic live image of what is being captured by the cameras) that is modified to include one or more electronically generated elements that indicate the identified position of the remote locator object. for example, the electronically generated elements include an arrow pointing towards the remote locator object, a circle around the remote locator object, and/or a flag or balloon that appears attached to the remote locator object that is able to indicate the location of the remote locator object even if it is obscured behind a physical object, etc. [00263] in some embodiments, in accordance with a determination that one or more criteria are satisfied, the electronic device displays (908), in the user interface, a selectable option that is selectable to emit light from a lighting element of the electronic device, such as selectable option 836 in fig. 8d (e.g., while in a process for finding and/or locating a remote locator object, if one or more criteria are satisfied, the user interface includes a flashlight affordance that is selectable to activate a flashlight or any other suitable lighting element associated with the electronic device to assist the user in finding the remote locator object). [00264] in some embodiments, the criteria are satisfied if the amount of ambient light (optionally determined using one or more ambient light sensors) is less than a threshold amount (e.g., less than 20 lux, less than 50 lux, less than 100 lux, less than 500 lux, etc.). 
in some embodiments, enabling the lighting element helps the user visibly identify the remote locator object in the environment. in some embodiments, enabling the lighting element helps the device capture images of the environment for the purpose of accurately identifying the remote locator object. [00265] in some embodiments, in accordance with a determination that the one or more criteria are not satisfied, the electronic device forgoes displaying (910), in the user interface, the selectable option that is selectable to emit light from the lighting element of the electronic device, such as the lack of selectable option 836 in user interface 816 in fig. 8c (e.g., while in the process for finding and/or locating the remote locator object, if the one or more criteria are not satisfied, the user interface does not include a flashlight affordance for enabling or disabling a lighting element of the electronic device). for example, if the ambient luminance is above the threshold, the user interface does not include a flashlight affordance for enabling the lighting element of the device. [00266] the above-described manner of displaying a selectable option that is selectable to turn on a lighting element if certain criteria are satisfied provides a quick and efficient way of improving visibility while looking for the remote locator object (e.g., by automatically displaying the selectable option for enabling the lighting element when needed, without requiring the user to perform additional inputs to determine whether enabling a lighting element would help with locating the remote locator object), which simplifies the interaction between the user and the electronic device and enhances the operability of the electronic device and makes the user-device interface more efficient, which additionally reduces power usage and improves battery life of the electronic device by enabling the user to use the electronic device more quickly and efficiently, while reducing errors in usage.
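as a rough sketch, the one or more criteria above can be modeled as a conjunction of the current finding mode actually using the cameras and the ambient light being below the threshold; the mode labels, function name, and lux threshold are assumptions for illustration:

```python
# hypothetical model of the criteria gating the flashlight option; the mode
# labels and the lux threshold are assumed values, not from the disclosure.

LUX_THRESHOLD = 50.0
# modes in which the device uses its cameras to help locate the object; the
# map-style mode and the very-near sub-mode are assumed not to use them.
CAMERA_BASED_MODES = {"compass/point-cloud", "compass/arrow"}

def should_offer_flashlight(finding_mode: str, ambient_lux: float) -> bool:
    """Offer the flashlight option only in camera-based modes in dim light."""
    return finding_mode in CAMERA_BASED_MODES and ambient_lux < LUX_THRESHOLD
```

under this sketch, the map-style mode of fig. 8i and the very-near sub-mode of fig. 8h would never surface the option, even in dim light, consistent with the behavior described above.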
[00267] in some embodiments, the electronic device includes one or more cameras that are used to determine a location of the electronic device relative to the remote locator object (912), such as described in fig. 8b (e.g., the one or more cameras of the electronic device capture one or more images of the environment around the electronic device, and the electronic device analyzes the one or more captured images to identify and locate the remote locator object). in some embodiments, based on the analysis, the electronic device is able to determine the location of the remote locator object and guide the user to the determined location. in some embodiments, the images captured by the one or more cameras are analyzed to determine the orientation of the electronic device with respect to objects in the environment around the electronic device. in some embodiments, the location of the remote locator object is determined based both on the analysis of the images captured by the one or more cameras and wireless communication with the remote locator object. [00268] the above-described manner of finding a remote locator object (e.g., using one or more cameras of the electronic device to visually find the remote locator object) provides a quick and efficient way of finding the remote locator object, which simplifies the interaction between the user and the electronic device and enhances the operability of the electronic device and makes the user-device interface more efficient, which additionally reduces power usage and improves battery life of the electronic device by enabling the user to use the electronic device more quickly and efficiently, while reducing errors in usage. [00269] in some embodiments, the lighting element of the electronic device, when emitting light, emits light onto a portion of a physical environment of the electronic device that is within a field of view of the one or more cameras (914), such as described in fig.
8d (e.g., the lighting element is facing a respective direction such that when the lighting element is turned on, the scene that is captured by the one or more cameras is brightened due to the lighting element). [00270] thus, the effective area of the lighting element (e.g., the portion of the environment that is brightened by the lighting element) at least partially overlaps with the field of view of the one or more cameras (e.g., the portion of the environment that is captured by the one or more cameras). in some embodiments, the one or more cameras and/or the lighting element are located on a side other than the side on which the display generation component is located. for example, the one or more cameras and the lighting element are located on the opposite side of the display generation component such that the user is able to see the display while the one or more cameras capture images to find the remote locator object. [00271] the above-described manner of finding a remote locator object (e.g., using one or more lighting elements to brighten the environment to improve the ability to identify and find the remote locator object) provides a quick and efficient way of finding the remote locator object (e.g., by using lighting elements to increase the brightness of the environment that is being captured by the one or more cameras of the device), which simplifies the interaction between the user and the electronic device and enhances the operability of the electronic device and makes the user-device interface more efficient, which additionally reduces power usage and improves battery life of the electronic device by enabling the user to use the electronic device more quickly and efficiently, while reducing errors in usage. [00272] in some embodiments, the one or more cameras are located on a first side of the electronic device, and the lighting element is located on the first side of the electronic device (916), such as described in fig.
8d (e.g., the lighting element and the one or more cameras are located on the same side of the electronic device, optionally opposite of the display generation component). [00273] the above-described manner of finding a remote locator object (e.g., using one or more lighting elements that are located on the same side of the electronic device as the one or more cameras to illuminate the environment to improve the ability to identify and find the remote locator object) provides a quick and efficient way of finding the remote locator object, which simplifies the interaction between the user and the electronic device and enhances the operability of the electronic device and makes the user-device interface more efficient, which additionally reduces power usage and improves battery life of the electronic device by enabling the user to use the electronic device more quickly and efficiently, while reducing errors in usage. [00274] in some embodiments, the lighting element is used as a flash for the one or more cameras when the electronic device is capturing media using the one or more cameras in a media capture application (918), such as described in fig. 8d (e.g., the lighting element used to brighten the environment to locate the remote locator object is the same lighting element that is used as a flash when using the one or more cameras to take pictures and/or videos using a camera application on the electronic device).
[00275] the above-described manner of illuminating the environment to assist in finding the remote locator object (e.g., using the same lighting element to illuminate the environment that is used as a flash when taking a picture or video using the one or more cameras) provides a quick and efficient way of finding the remote locator object, which simplifies the interaction between the user and the electronic device and enhances the operability of the electronic device and makes the user-device interface more efficient (e.g., without requiring multiple lighting elements), which additionally reduces power usage and improves battery life of the electronic device by enabling the user to use the electronic device more quickly and efficiently, while reducing errors in usage. [00276] in some embodiments, the user interface for locating the remote locator object includes a representation of a portion of a physical environment of the electronic device that is within a field of view of the one or more cameras (920), such as representation 832 in fig. 8b (e.g., the user interface includes a representation of the environment that is captured by the one or more cameras). [00277] in some embodiments, the representation of the environment is visually modified to blur, obscure, reduce the resolution and/or reduce the level of detail of the captured images. in some embodiments, the representation of the environment is displayed in the background of the user interface. in some embodiments, displaying the representation of the environment provides an indication that the one or more cameras have been enabled and/or are assisting in locating the remote locator object. in some embodiments, if the one or more cameras are not enabled, the user interface does not include a representation of the captured environment.
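The conditional, visually modified background described above can be sketched as follows; this is a minimal illustration only, in which the function name, the list-of-pixel-intensities stand-in for a captured frame, and the halving of values (standing in for blurring/reducing detail) are all assumptions, not the disclosed implementation.

```python
def background_layer(cameras_enabled, frame=None):
    """Sketch: the finding UI shows a visually softened representation
    of the captured environment only while the one or more cameras are
    enabled and assisting in locating the remote locator object.

    `frame` is a toy stand-in for a captured image (a list of pixel
    intensities); halving each value stands in for blurring or
    reducing detail. Both representations are illustrative assumptions.
    """
    if not cameras_enabled or frame is None:
        return None  # cameras not assisting: no background representation
    return [p // 2 for p in frame]  # softened (reduced-detail) representation
```

For example, `background_layer(True, [10, 20])` yields a dimmed `[5, 10]`, while `background_layer(False, [10, 20])` yields no background at all, matching the behavior where the representation is absent when the cameras are not enabled.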
[00278] the above-described manner of indicating that the one or more cameras of the device are capturing images of the environment to locate the remote locator object (e.g., by displaying a representation of the environment that is being captured by the one or more cameras) provides a quick and efficient way of indicating that the one or more cameras are in use, which simplifies the interaction between the user and the electronic device and enhances the operability of the electronic device and makes the user-device interface more efficient, which additionally reduces power usage and improves battery life of the electronic device by enabling the user to use the electronic device more quickly and efficiently, while reducing errors in usage. [00279] in some embodiments, in accordance with the determination that the one or more criteria are satisfied, the electronic device displays (922), in the user interface, an indication that additional light is needed to locate the remote locator object, such as textual description 838 in fig. 8d (e.g., when the one or more criteria are satisfied such that the user interface includes the selectable option that is selectable to emit light from a lighting element, the user interface includes an indication that the one or more criteria are satisfied and that more light is required and/or that enabling the lighting element is recommended (e.g., to assist in locating the remote locator object)). in some embodiments, the indication is a textual description that more light is required. in some embodiments, the indication is a graphical element that indicates that more light is required. 
[00280] the above-described manner of locating a remote locator object (e.g., by displaying an indication that more light is required when the ambient light is below a threshold luminance) provides a quick and efficient way of finding the remote locator object (e.g., by automatically determining that more light is required and instructing the user to enable a lighting element to illuminate the environment), which simplifies the interaction between the user and the electronic device and enhances the operability of the electronic device and makes the user-device interface more efficient, which additionally reduces power usage and improves battery life of the electronic device by enabling the user to use the electronic device more quickly and efficiently, while reducing errors in usage. [00281] in some embodiments, the user interface includes an indication of an identifier associated with the remote locator object (924), such as textual indication 818 in fig. 8b (e.g., the user interface includes the identifier of the remote locator object, for example, to indicate which remote locator object is being located). in some embodiments, the user interface includes a graphical identifier, a textual identifier, or any other suitable identifier, optionally including the name of the owner of the remote locator object. for example, the user interface includes a textual description “john’s keys”. in some embodiments, the graphical and/or textual identifier is selected via a process described above with respect to method 700.
[00282] the above-described manner of indicating the remote locator object being located (e.g., by displaying an indication of the identifier being located) provides a quick and efficient way of identifying the remote locator object being located, which simplifies the interaction between the user and the electronic device and enhances the operability of the electronic device and makes the user-device interface more efficient (e.g., without requiring the user to perform additional inputs and/or interrupt the finding process to determine which remote locator object is being located), which additionally reduces power usage and improves battery life of the electronic device by enabling the user to use the electronic device more quickly and efficiently, while reducing errors in usage. [00283] in some embodiments, while not displaying the selectable option that is selectable to emit light from the lighting element of the electronic device in the user interface, the electronic device determines (926) that the one or more criteria have become satisfied, such as in fig. 8c illustrating device 500 not displaying selectable option 836 and subsequently detecting that the ambient luminance has dropped below the threshold level in fig. 8d (e.g., while displaying the user interface without the selectable option that is selectable to emit light from a lighting element of the electronic device, determining that the criteria for displaying the selectable option have become satisfied). for example, while in the process to find the remote locator object, detecting that the ambient light has reduced to below a threshold amount of luminance (e.g., the user walked into a dark room, the sun set, a light turned off, etc.). 
[00284] in some embodiments, in response to determining that the one or more criteria have become satisfied, the electronic device updates (928) the user interface to include the selectable option that is selectable to emit light from the lighting element of the electronic device, such as in fig. 8d (e.g., in response to determining that the criteria have become satisfied, displaying the selectable option for enabling the lighting element). [00285] the above-described manner of displaying a selectable option that is selectable to turn on a lighting element if certain criteria are satisfied provides a quick and efficient way of improving visibility while looking for the remote locator object (e.g., by automatically displaying the selectable option for enabling the lighting element when needed, without requiring the user to perform additional inputs to determine whether enabling a lighting element would help with locating the remote locator object), which simplifies the interaction between the user and the electronic device and enhances the operability of the electronic device and makes the user-device interface more efficient, which additionally reduces power usage and improves battery life of the electronic device by enabling the user to use the electronic device more quickly and efficiently, while reducing errors in usage. [00286] in some embodiments, while displaying the selectable option that is selectable to emit light from the lighting element of the electronic device in the user interface, the electronic device determines (930) that a second set of one or more criteria are satisfied, such as in fig. 8d illustrating device 500 displaying selectable option 836 and subsequently detecting that the ambient luminance has risen above the threshold level in fig.
8e (e.g., while displaying the user interface with the selectable option that is selectable to emit light from a lighting element of the electronic device, determining that a second set of criteria are satisfied). [00287] in some embodiments, the second set of criteria are satisfied when the first set of criteria are no longer satisfied. for example, while in the process to find the remote locator object, detecting that the ambient light has increased to above the threshold amount of luminance (e.g., the user walked into a brighter room, a light turned on, etc.). in some embodiments, the second set of criteria includes the same luminance threshold as the luminance threshold of the first set of criteria. in some embodiments, the second set of criteria includes a different luminance threshold than the luminance threshold of the first set of criteria. for example, the luminance threshold exhibits a hysteresis effect such that the luminance threshold for the first set of criteria is lower than the luminance threshold for the second criteria (e.g., lower by 10 lux, 50 lux, 100 lux, 500 lux, 5%, 10%, 30%, 50%, etc.). for example, when the selectable option is displayed, the ambient light level has to increase to above a level that is higher than the level that caused the selectable option to be displayed in order for the selectable option to be removed from display. in some embodiments, implementing a hysteresis effect prevents the selectable option from rapidly switching back and forth from being displayed and not being displayed, for example, if the ambient luminance is at or near the threshold level. [00288] in some embodiments, in response to determining that the second set of one or more criteria are satisfied, the electronic device ceases (932) to display the selectable option that is selectable to emit light from the lighting element of the electronic device, such as in fig.
8e (e.g., while displaying the user interface with the selectable option that is selectable to emit light from a lighting element of the electronic device, determining that the second criteria for ceasing display of the selectable option are satisfied, and in response to determining that the second criteria are satisfied, ceasing display of the selectable option for enabling the lighting element). in some embodiments, if the second criteria are satisfied, the lighting element is automatically disabled (e.g., turned off, optionally only if the lighting element was turned on as a result of a user input selecting the selectable option). [00289] the above-described manner of displaying a selectable option that is selectable to turn on a lighting element (e.g., when certain criteria are satisfied, but ceasing display of the selectable option if the criteria are no longer satisfied) provides a quick and efficient way of improving visibility while looking for the remote locator object (e.g., by automatically ceasing display of the selectable option for enabling the lighting element when no longer needed, without requiring the user to perform additional inputs to determine whether enabling a lighting element would help with locating the remote locator object), which simplifies the interaction between the user and the electronic device and enhances the operability of the electronic device and makes the user-device interface more efficient, which additionally reduces power usage and improves battery life of the electronic device by enabling the user to use the electronic device more quickly and efficiently, while reducing errors in usage.
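The hysteresis behavior described above can be sketched as follows; this is a minimal illustration only, in which the class name and the particular threshold values are assumptions drawn from the example ranges in the text, not the disclosed implementation.

```python
class FlashlightOptionController:
    """Sketch of the hysteresis described above: the selectable option
    appears when ambient luminance drops below `show_below` and is
    removed only once luminance rises above the higher `hide_above`
    threshold, preventing the option from rapidly flickering in and
    out of display near the boundary. Threshold values are illustrative.
    """

    def __init__(self, show_below=100.0, hide_above=150.0):
        assert hide_above >= show_below  # the removal level sits above the display level
        self.show_below = show_below
        self.hide_above = hide_above
        self.option_visible = False

    def update(self, ambient_lux):
        """Return whether the selectable option is displayed at this light level."""
        if not self.option_visible and ambient_lux < self.show_below:
            self.option_visible = True   # first set of criteria satisfied
        elif self.option_visible and ambient_lux > self.hide_above:
            self.option_visible = False  # second set of criteria satisfied
        return self.option_visible
```

With these illustrative thresholds, luminance of 120 lux leaves the option hidden; dropping to 90 lux displays it; returning to 120 lux keeps it displayed (inside the hysteresis band); only rising above 150 lux removes it.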
[00290] in some embodiments, the one or more criteria include one or more of a criterion that is satisfied when a level of ambient light in a physical environment of the electronic device is less than a threshold level, and a criterion that is satisfied when a distance between the electronic device and the remote locator object is less than a threshold distance (934), such as described in fig. 8d and fig. 8i (e.g., the one or more criteria includes a requirement that the ambient light is less than a luminance threshold (e.g., less than 10 lux, 50 lux, 100 lux, 500 lux, 1000 lux, etc.)). [00291] in some embodiments, the one or more criteria includes a requirement that the current time of day is within a predetermined time window (e.g., after sunrise, after 30 minutes before sunrise, etc., before sunset, before 30 minutes after sunset, etc.), optionally alternatively to the requirement that the ambient light is less than the luminance threshold. in some embodiments, the one or more criteria includes a requirement that the distance between the device and the remote locator object is less than a first threshold distance (e.g., less than 5 feet, 10 feet, 30 feet, 50 feet, 100 feet, etc.). in some embodiments, the first threshold distance is the distance within which the device initiates a compass-style finding mode to find the remote locator object (e.g., as opposed to a map navigation mode). in some embodiments, the first threshold distance is the distance within which the one or more cameras of the device are able to accurately identify the remote locator object and/or the distance within which the lighting element is able to illuminate the environment around the remote locator object. in some embodiments, the one or more criteria includes a requirement that the distance between the device and the remote locator object is more than a second threshold distance (e.g., more than 1 foot, 3 feet, 6 feet, 10 feet, etc.).
in some embodiments, the second threshold distance is a distance within which the device is able to directly communicate with the remote locator object to determine an accurate position of the remote locator object (e.g., the distance within which the device is connected with the remote locator object via bluetooth). in some embodiments, the second threshold distance is a distance within which the one or more cameras of the device are not used to determine the location of the remote locator object and enabling the lighting element optionally does not assist in locating the remote locator object. [00292] the above-described manner of displaying a selectable option that is selectable to turn on a lighting element (e.g., when the remote locator object is within a threshold distance of the device and when the ambient light is less than a threshold amount) provides a quick and efficient way of improving visibility while looking for the remote locator object (e.g., by automatically displaying the selectable option for enabling the lighting element if the lighting element is able to assist in locating the remote locator object, without requiring the user to perform additional inputs to determine whether enabling a lighting element would help with locating the remote locator object), which simplifies the interaction between the user and the electronic device and enhances the operability of the electronic device and makes the user-device interface more efficient, which additionally reduces power usage and improves battery life of the electronic device by enabling the user to use the electronic device more quickly and efficiently, while reducing errors in usage. [00293] in some embodiments, the electronic device receives (936), via the one or more input devices, selection of the selectable option to emit light from the lighting element of the electronic device, such as in fig. 8f (e.g., a tap input on the selectable option for turning on the lighting element).
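The combined light and distance criteria described above can be sketched as follows; this is a minimal illustration only, in which the function name and the specific threshold values are assumptions drawn from the example ranges in the text, not the disclosed implementation.

```python
def should_offer_flashlight(ambient_lux, distance_ft,
                            lux_threshold=100.0,
                            near_threshold_ft=3.0,
                            far_threshold_ft=30.0):
    """Sketch of the criteria described above: the selectable option is
    offered when the environment is dark and the remote locator object
    is close enough for the cameras and lighting element to help, but
    not so close that direct radio ranging (e.g., bluetooth) is used
    instead. All threshold values are illustrative assumptions.
    """
    dark_enough = ambient_lux < lux_threshold
    in_camera_range = near_threshold_ft < distance_ft < far_threshold_ft
    return dark_enough and in_camera_range
```

Under these illustrative values, a dark room (50 lux) with the object 10 feet away satisfies the criteria, while a bright room, an object within direct-communication range (1 foot), or an object too far for the cameras (100 feet) does not.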
[00294] in some embodiments, in response to receiving the selection of the selectable option (938), the electronic device emits (940) light from the lighting element of the electronic device, such as in fig. 8g (e.g., turning on the lighting element such that the environment is illuminated by the lighting element). [00295] in some embodiments, the electronic device updates (942) the user interface to include a second selectable option that is selectable to cease emitting light from the lighting element of the electronic device, such as selectable option 836 being updated to become selectable to turn off the lighting element in fig. 8g (e.g., replacing the selectable option with a second selectable option or updating the selectable option to be selectable to cause the lighting element to turn off). [00296] the above-described manner of disabling the lighting element (e.g., while the lighting element is on, replacing the selectable option for turning on the lighting element with a selectable option for turning off the lighting element) provides a quick and efficient way of disabling the lighting element, which simplifies the interaction between the user and the electronic device and enhances the operability of the electronic device and makes the user-device interface more efficient, which additionally reduces power usage and improves battery life of the electronic device by enabling the user to use the electronic device more quickly and efficiently, while reducing errors in usage. [00297] in some embodiments, while displaying the user interface for locating the remote locator object (944), while the electronic device is further than a threshold distance from the remote locator object, the electronic device displays (946), in the user interface, a first user interface for locating the remote locator object, such as in fig. 8b (e.g., if the device is farther than a threshold distance from the remote locator object, the device is in a first locator mode).
[00298] for example, if the device is more than 30 feet, 50 feet, 100 feet, ½ mile, etc. from the remote locator object, then the process to find the remote locator object includes displaying a representation of a map and directions to travel to the location of the remote locator object. in some embodiments, if the device is more than a threshold distance such as 10 feet, 20 feet, 30 feet, etc. from the remote locator object, then the user interface includes one or more graphical elements (e.g., a point cloud) that are optionally biased in the direction of the remote locator object. [00299] in some embodiments, while displaying the first user interface for locating the remote locator object, the electronic device determines (948) that the electronic device is closer than the threshold distance from the remote locator object, such as in fig. 8c (e.g., if the device is less than 10 feet, 20 feet, 30 feet, etc. from the remote locator object, the user interface replaces display of the point cloud with an arrow that is pointing toward the direction of the remote locator object (e.g., a compass style arrow), which optionally includes an indication of the distance between the device and the remote locator object). [00300] thus, as the device changes orientation and/or as the device moves around the physical environment, the arrow is updated to point towards the remote locator object. in some embodiments, if the device is less than 1 foot, 3 feet, 6 feet, etc. from the remote locator object, the user interface replaces display of the arrow with a representation of the remote locator object and a circular indicator around the remote locator object that reduces in size and merges into the representation of the remote locator object as the user approaches the remote locator object and reaches the location of the remote locator object.
[00301] in some embodiments, in response to determining that the electronic device is closer than the threshold distance from the remote locator object, the electronic device updates (950) the user interface to include a second user interface, different from the first user interface, for locating the remote locator object, such as in fig. 8c (e.g., updating the user interface to display a different user interface element for indicating the location of the remote locator object). [00302] for example, if the device is less than 10 feet, 20 feet, 30 feet, etc. from the remote locator object, the user interface replaces display of the point cloud with an arrow that is pointing toward the direction of the remote locator object (e.g., a compass style arrow). in some embodiments, the user interface provides live feedback of the distance and location of the remote locator object relative to the electronic device. in some embodiments, if the device is less than a threshold distance from the remote locator object (e.g., 10 feet, 30 feet, 50 feet, etc.), and the device is held upwards to face the remote locator object, the device enters into an augmented reality finding mode in which a representation of the environment is displayed in the user interface, optionally with a virtual element that indicates the location of the remote locator object (e.g., a virtual representation of a balloon attached to the remote locator object, a virtual arrow pointed at the remote locator object, etc.).
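The distance-tiered finding modes described above can be sketched as follows; this is a minimal illustration only, in which the function name, the mode labels, and the threshold values are assumptions drawn from the example ranges in the text, not the disclosed implementation.

```python
def finding_mode(distance_ft,
                 map_threshold_ft=50.0,
                 compass_threshold_ft=30.0,
                 proximity_threshold_ft=6.0):
    """Sketch of selecting a finding user interface based on the
    distance between the device and the remote locator object, per the
    tiers described above: map directions when far, a direction-biased
    point cloud, then a compass-style arrow, then a close-proximity
    view that merges a shrinking circular indicator into the object's
    representation. Thresholds and labels are illustrative assumptions.
    """
    if distance_ft > map_threshold_ft:
        return "map"            # map and travel directions
    if distance_ft > compass_threshold_ft:
        return "point_cloud"    # graphical elements biased toward the object
    if distance_ft > proximity_threshold_ft:
        return "compass_arrow"  # arrow pointing toward the object
    return "proximity"          # representation of the object with circular indicator
```

As the user walks toward the object, the returned mode changes at each illustrative threshold, mirroring the user interface replacing the point cloud with the arrow and then the close-proximity view.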
[00303] the above-described manner of displaying different user interfaces for finding the remote locator object based on the distance to the remote locator object provides a quick and efficient way of finding the remote locator object (e.g., by updating the user interface as the distance to the remote locator object changes to optimize the finding experience and provide a process that’s optimized for the distance), which simplifies the interaction between the user and the electronic device and enhances the operability of the electronic device and makes the user-device interface more efficient (e.g., without requiring the user to perform additional user inputs to change the type of finding mode that is used), which additionally reduces power usage and improves battery life of the electronic device by enabling the user to use the electronic device more quickly and efficiently, while reducing errors in usage. [00304] in some embodiments, the first user interface is a user interface that includes information about the remote locator object (952), such as user interface 800 in fig. 8a (e.g., the user interface associated with the remote locator object includes a selectable option that is selectable to initiate the process to find the remote locator object). in some embodiments, the user interface associated with the remote locator object includes options for changing one or more settings of the remote locator object, such as to rename the remote locator object (e.g., as discussed in more detail above with respect to method 700) and/or includes one or more respective user interface elements that include information about the remote locator object.
[00305] the above-described manner of initiating a process to find a remote locator object (e.g., in response to selection of a selectable option from a user interface associated with the remote locator object) provides a quick and efficient way of finding the remote locator object, which simplifies the interaction between the user and the electronic device and enhances the operability of the electronic device and makes the user-device interface more efficient (e.g., without requiring the user to perform additional user inputs to navigate to specific user interfaces to initiate the process to find the remote locator object), which additionally reduces power usage and improves battery life of the electronic device by enabling the user to use the electronic device more quickly and efficiently, while reducing errors in usage. [00306] it should be understood that the particular order in which the operations in figs. 9a-9g have been described is merely exemplary and is not intended to indicate that the described order is the only order in which the operations could be performed. one of ordinary skill in the art would recognize various ways to reorder the operations described herein. additionally, it should be noted that details of other processes described herein with respect to other methods described herein (e.g., method 700, 1100, and 1300) are also applicable in an analogous manner to method 900 described above with respect to figs. 9a-9g. for example, locating a remote locator object described above with reference to method 900 optionally has one or more of the characteristics of providing user interfaces for defining identifiers for remote locator objects, providing information associated with a remote locator object, displaying notifications associated with a trackable device, etc., described herein with reference to other methods described herein (e.g., methods 700, 1100, and 1300). for brevity, these details are not repeated here.
[00307] the operations in the information processing methods described above are, optionally, implemented by running one or more functional modules in an information processing apparatus such as general purpose processors (e.g., as described with respect to figs. 1a-1b, 3, 5a-5h) or application specific chips. further, the operations described above with reference to figs. 9a-9g are, optionally, implemented by components depicted in figs. 1a-1b. for example, displaying operations 902, 906, 908, 922, and 946 and receiving operations 904 and 936 are, optionally, implemented by event sorter 170, event recognizer 180, and event handler 190. event monitor 171 in event sorter 170 detects a contact on touch screen 504, and event dispatcher module 174 delivers the event information to application 136-1. a respective event recognizer 180 of application 136-1 compares the event information to respective event definitions 186, and determines whether a first contact at a first location on the touch screen corresponds to a predefined event or sub-event, such as selection of an object on a user interface. when a respective predefined event or sub-event is detected, event recognizer 180 activates an event handler 190 associated with the detection of the event or sub-event. event handler 190 optionally utilizes or calls data updater 176 or object updater 177 to update the application internal state 192. in some embodiments, event handler 190 accesses a respective gui updater 178 to update what is displayed by the application. similarly, it would be clear to a person having ordinary skill in the art how other processes can be implemented based on the components depicted in figs. 1a-1b.

providing information associated with a remote locator object

[00308] users interact with electronic devices in many different manners. in some embodiments, an electronic device is able to track the location of an object such as a remote locator object.
in some embodiments, one or more settings of a remote locator object and/or of the electronic device and/or the status of the remote locator object and/or electronic device can affect the functionality of the remote locator object, such as the remote locator object’s ability to provide location information, for example. the embodiments described below provide ways in which an electronic device provides information associated with a remote locator object and/or provides mechanisms for adjusting operation of the remote locator object or the electronic device, thus enhancing the user’s interactions with the electronic device. enhancing interactions with a device reduces the amount of time needed by a user to perform operations, and thus reduces the power usage of the device and increases battery life for battery-powered devices. it is understood that people use devices. when a person uses a device, that person is optionally referred to as a user of the device. [00309] figs. 10a-10t illustrate exemplary ways in which an electronic device 500 provides information associated with a remote locator object and/or provides mechanisms for adjusting operation of the remote locator object or the electronic device in accordance with some embodiments of the disclosure. the embodiments in these figures are used to illustrate the processes described below, including the processes described with reference to figs. 11a-11i. [00310] fig. 10a illustrates electronic device 500 displaying user interface 1000 (e.g., via a display device, etc.). in some embodiments, user interface 1000 is displayed via a display generation component. in some embodiments, the display generation component is a hardware component (e.g., including electrical components) capable of receiving display data and displaying a user interface.
in some embodiments, examples of a display generation component include a touch screen display (such as touch screen 504), a monitor, a television, a projector, an integrated, discrete, or external display device, or any other suitable display device that is in communication with device 500. [00311] in some embodiments, user interface 1000 is a user interface for displaying a plurality of tracked objects, similar to user interface 636 described above with respect to figs. 6l-6n (e.g., sharing similar characteristics and behaviors as user interface 636). in some embodiments, user interface 1000 includes representation 1002 of a map that includes one or more representations of tracked objects. for example, representation 1002 of the map includes icon 1004 corresponding to the “wallet” tracked object and is displayed at a location on representation 1002 of the map associated with the determined location of the “wallet” tracked object. similarly, representation 1002 of the map optionally includes icon 1006 corresponding to the “spouse’s keys” tracked object and is displayed at a location on representation 1002 of the map associated with the determined location of the “spouse’s keys” tracked object. in some embodiments, the “wallet” and “spouse’s keys” tracked objects are remote locator objects that are associated with (e.g., attached to) the user’s wallet and the user’s spouse’s keys. in some embodiments, tracked objects other than remote locator objects are displayed in representation 1002 of the map, such as mobile phones, computers, laptops, wearable devices, headphones, gps trackers, or any other suitable electronic device capable of determining location information. [00312] in some embodiments, user interface 1000 includes list 1008 (e.g., similar to list 644 described above with respect to figs. 6l-6n) that includes one or more entries associated with the one or more trackable items that are displayed on representation 1002 of the map. for example, in fig.
10a, list 1008 includes entry 1010-1 and entry 1010-2. in some embodiments, entry 1010-1 corresponds to the “wallet” tracked object (e.g., represented on representation 1002 of the map by icon 1004), and entry 1010-2 corresponds to the “spouse’s keys” tracked object (e.g., represented on representation 1002 of the map by icon 1006). [00313] in some embodiments, selection of a respective icon on representation 1002 and/or a respective entry on list 1008 causes display of a user interface associated with the respective tracked object associated with the selected item. for example, in fig. 10b, a user input 1003 (e.g., a tap input) is received selecting icon 1004 corresponding to the “wallet” tracked object. in some embodiments, in response to receiving user input 1003, device 500 displays user interface 1012 (e.g., optionally replacing list 1008 with user interface 1012 and displayed concurrently with representation 1002 of the map), as shown in fig. 10c. in some embodiments, user interface 1012 is a user interface associated with the “wallet” tracked object, similar to user interface 600 described above with respect to fig. 6a. as shown in fig. 10c, user interface 1012 encompasses less than the entire display area of touch screen 504. for example, user interface 1012 has a preview mode and a full screen mode, as will be described in further detail below. [00314] as shown in fig. 10c, user interface 1012 includes a representation of an identifier 1014 for the “wallet” remote locator object, and a representation of the current location 1016 of the “wallet” remote locator object. in some embodiments, identifier 1014 is a user-selected identifier (e.g., that was optionally selected via a process described above with respect to method 700) for the respective remote locator object indicating that user interface 1012 is the user interface for the remote locator object identified as “wallet”.
in some embodiments, user interface 1012 includes selectable option 1020 that is selectable to initiate a process to find and/or locate the respective remote locator object (e.g., in a manner similar to described above with respect to method 900) and selectable option 1022 to cause the respective remote locator object to emit an audible sound. [00315] in some embodiments, user interface 1012 includes one or more information modules that provide status information that is relevant to the respective remote locator object. in some embodiments, the one or more information modules are included on user interface 1012 when certain criteria associated with the respective information module are satisfied (and are optionally not included on user interface 1012 when the criteria are not satisfied). for example, in fig. 10c, user interface 1012 includes information module 1018 that indicates that the “wallet” remote locator object is in the process of finishing setup. in some embodiments, user interface 1012 includes information module 1018 if the respective remote locator object has not yet completed setup. in some embodiments, a remote locator object is in the process of finishing setup if one or more settings are being configured and/or one or more initialization steps are being performed (e.g., optionally which was initiated by device 500 or another device, in a process similar to described above with respect to figs. 6p-6r). in some embodiments, one or more functions of the remote locator object are not available until setup is completed. in some embodiments, module 1018 is selectable to display more information associated with the respective module (e.g., information associated with the conditions that caused the respective module to be included in user interface 1012). for example, module 1018 is optionally selectable to display the status of the setup, to interrupt the setup, and/or to change one or more setup settings of the respective remote locator object.
in some embodiments, module 1018 is not selectable to display additional information. [00316] in fig. 10d, an upward swipe of contact 1003 in user interface 1012 is received. in some embodiments, in response to receiving the upward swipe from contact 1003, user interface 1012 is updated to expand the size of user interface 1012, as shown in fig. 10d (e.g., optionally to encompass more of the display area of touch screen 504). as shown, in some embodiments, user interface 1012 is optionally displayed in a small mode and concurrently with another user interface or optionally displayed in a full screen mode. in some embodiments, user interface 1012 is not displayed in a small mode and in response to user input 1003 in fig. 10b, device 500 displays user interface 1012 in a full screen mode such as in fig. 10d (e.g., without requiring an upward swipe of contact 1003 as shown in fig. 10d). [00317] in fig. 10e, the criteria for displaying module 1018 is no longer satisfied (e.g., the remote locator object is not in the process of completing setup because it has completed setup), and the criteria for module 1019 is satisfied such that module 1019 is displayed in user interface 1012. in some embodiments, module 1019 indicates that the bluetooth protocol/functionality of device 500 is disabled and the criteria for module 1019 is that the bluetooth protocol of device 500 is disabled. for example, if bluetooth is disabled such that bluetooth devices are unable to communicate with device 500, user interface 1012 includes module 1019. in some embodiments, the “wallet” remote locator object communicates with device 500 via the bluetooth protocol (e.g., continuously, periodically, at least at some times, etc.) such that if the bluetooth protocol is disabled, one or more features of the remote locator object are optionally unavailable. 
for example, in some embodiments, the remote locator object is not able to provide location information directly to device 500 and the location information of the remote locator object may be delayed or disabled. in such circumstances, device 500 is optionally not able to directly communicate with the remote locator object and/or device 500 is optionally not able to issue commands to the remote locator object directly (optionally device 500 is still able to issue commands via another electronic device that is able to communicate directly with the remote locator object, for example by issuing the command to the other electronic device, which forwards the command to the remote locator object). [00318] for example, in fig. 10e, a user input 1003 is received selecting selectable option 1022. in some embodiments, because bluetooth is disabled, in response to receiving user input 1003, device 500 is unable to communicate directly with the remote locator object to issue the command to cause the remote locator object to emit a sound. in some embodiments, in response to receiving user input 1003, because the process to cause the remote locator object to emit the sound is in progress, user interface 1012 displays module 1028 indicating that emission of the sound is pending (e.g., optionally concurrently with module 1019), as shown in fig. 10f. in some embodiments, module 1028 is displayed when the remote locator object is currently emitting a sound (e.g., optionally instead of when the command to cause the remote locator object to emit a sound is pending). in some embodiments, module 1028 is selectable to cancel the command to cause the remote locator object to emit a sound (e.g., or optionally to cause the remote locator object to stop emitting a sound). [00319] in some embodiments, module 1019 is selectable to change the bluetooth settings of device 500. for example, in fig. 
10f, while user interface 1012 includes module 1019, a user input 1003 is received selecting module 1019. in some embodiments, in response to receiving user input 1003, device 500 enables the bluetooth protocol of device 500 such that device 500 is able to connect to bluetooth devices (e.g., without displaying another user interface and/or without ceasing display of user interface 1012), such as remote locator object 1001, as shown in fig. 10g. in some embodiments, in response to receiving user input 1003, device 500 optionally displays a user interface for enabling bluetooth and/or managing one or more connectivity settings (e.g., wifi, airplane mode, etc.). in fig. 10g, because bluetooth has been enabled (e.g., in response to receiving user input 1003 in fig. 10f), device 500 is able to establish a wireless connection with remote locator object 1001 (e.g., via bluetooth) and transmit the command to emit a sound to remote locator object 1001. as shown in fig. 10g, remote locator object 1001 begins emitting a sound in response to receiving the command from device 500. in fig. 10g, because bluetooth is no longer disabled and device 500 is able to transmit the command to emit a sound to remote locator object 1001, user interface 1012 no longer includes module 1019 and module 1028. [00320] fig. 10h illustrates an embodiment in which airplane mode is enabled on device 500 and the user has marked the respective remote locator object as lost (e.g., via selection of a selectable option for marking the remote locator object as lost in user interface 1012). in some embodiments, marking a respective remote locator object as lost transmits a command to an external server (e.g., optionally a server that maintains and/or operates remote locator objects) that the remote locator object is lost.
in some embodiments, when a respective remote locator object is marked as lost, users are able to see that the remote locator object is lost (e.g., for example, as a module on a user interface of the remote locator object, such as module 1036, as will be described below). in some embodiments, when a remote locator object is lost, a user that finds the remote locator object is able to see a message from the owner of the remote locator object and/or contact the owner of the remote locator object (e.g., to provide the owner with location information, to email, to call, and/or to text the owner with location information). in some embodiments, when a respective remote locator object is marked as lost, one or more personally identifiable information associated with the owner of the remote locator object is anonymized to protect the privacy of the owner of the remote locator object (e.g., name, address, contacts, location history, etc.). [00321] in some embodiments, enabling airplane mode on device 500 causes one or more wireless connectivity protocols of device 500 (e.g., wifi, bluetooth, nfc, etc.) to be disabled (e.g., optionally causes all of the wireless connectivity protocols to be disabled). in some embodiments, when airplane mode is enabled on device 500, device 500 is unable to directly communicate with the respective remote locator object (e.g., in a manner similar to discussed above with respect to figs. 10e-10g). similarly, when airplane mode is enabled on device 500, device 500 is optionally unable to issue a command to an external server indicating that the remote locator object is lost. thus, in fig. 10h, because airplane mode is enabled, device 500 is unable to mark the remote locator as lost (e.g., which optionally includes transmitting appropriate commands to the remote locator object and/or an external server in communication with the remote locator object from device 500). 
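the deferred-command behavior described above (a "play sound" or "mark as lost" request that stays pending while the device cannot reach the object or server, then completes once connectivity returns) can be sketched roughly as follows. this is an illustrative assumption, not the actual implementation; the class name `LocatorLink` and its methods are invented for the sketch:

```python
class LocatorLink:
    """Minimal sketch of deferred command delivery: commands issued while
    disconnected (e.g., airplane mode or bluetooth off) are queued as
    "pending" (cf. modules 1028/1034) and flushed when connectivity returns."""

    def __init__(self):
        self.connected = False
        self.pending = []      # commands awaiting connectivity
        self.delivered = []    # commands successfully transmitted

    def send(self, command: str) -> None:
        if self.connected:
            self.delivered.append(command)
        else:
            # cannot reach the object/server; hold the command and let the
            # ui surface a "pending" status module
            self.pending.append(command)

    def set_connected(self, connected: bool) -> None:
        self.connected = connected
        if connected:
            # connectivity restored: flush everything that was queued
            self.delivered.extend(self.pending)
            self.pending.clear()
```

a queue like this also explains why the pending module is selectable to cancel: removing the command from `pending` before connectivity returns prevents it from ever being delivered.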
in some embodiments, because airplane mode is enabled, user interface 1012 includes module 1032 that indicates that airplane mode is enabled. in some embodiments, because device 500 is in the process of marking the respective remote locator object as lost (e.g., actively issuing the command to an external server, or waiting until device 500 is able to communicate with the external server), user interface 1012 includes module 1034 that indicates that the respective remote locator object is in the process of being marked as lost (e.g., being configured to lost mode). in some embodiments, module 1034 is selectable to cancel the command to cause the remote locator object to be marked as lost. [00322] in fig. 10h, while user interface 1012 includes module 1032, a user input 1003 is received selecting module 1032. in some embodiments, in response to receiving user input 1003, device 500 disables airplane mode such that device 500 is able to wirelessly connect to the remote locator object (e.g., if the respective remote locator object is within the effective range of device 500) and/or external servers (e.g., without displaying another user interface and/or without ceasing display of user interface 1012), as shown in fig. 10i. in some embodiments, in response to receiving user input 1003, device 500 optionally displays a user interface for enabling and/or managing one or more connectivity settings (e.g., wifi, airplane mode, bluetooth, etc.). in fig. 10i, because airplane mode has been disabled (e.g., in response to receiving user input 1003 in fig. 10h), device 500 is able to establish a connection with an external server to mark the respective remote locator object as lost. as shown in fig.
10i, the respective remote locator object has been successfully marked as lost and in response, user interface 1012 includes module 1036 that indicates that the respective remote locator object is operating in lost mode (e.g., and optionally no longer includes module 1032 and module 1034). [00323] fig. 10j illustrates an embodiment in which the battery of the respective remote locator object is at a low level and the location of the respective remote locator object is being shared with the user’s spouse. in some embodiments, in accordance with a determination that the respective remote locator object has a low battery level, user interface 1012 includes module 1038 indicating that the respective remote locator object has a low battery level. in some embodiments, in accordance with a determination that the location of the respective remote locator object is being shared with another user, user interface 1012 includes module 1040 indicating that the location of the respective remote locator object is being shared with another user. in some embodiments, module 1040 includes an identifier (e.g., the name, the title, etc.) of the person with whom the location of the respective remote locator is shared. in some embodiments, module 1040 is selectable to change one or more sharing settings of the respective remote locator object, such as to add and/or remove people with whom the location of the respective remote locator object is shared and/or to change the duration of the sharing. [00324] in fig. 10j, a user input 1003 is received selecting module 1038. in some embodiments, in response to receiving user input 1003, device 500 displays user interface 1042, as shown in fig. 10k. in some embodiments, user interface 1042 includes instructions for changing the battery of the respective remote locator object. for example, in fig.
10k, user interface 1042 includes a representation 1044 of a remote locator object, which is optionally animated to illustrate the process for disassembling the remote locator object and changing the battery. in some embodiments, user interface 1042 includes textual instructions 1046 of how to disassemble the remote locator object and change the battery. fig. 10l illustrates representation 1044 animating to illustrate the disassembly of the remote locator object (e.g., twisting and opening to reveal the battery compartment). [00325] fig. 10m illustrates an embodiment in which device 500 determines that the respective remote locator object is not with the user (e.g., has been separated from device 500, is at a location that is more than the threshold distance from device 500, such as 50 feet, 100 feet, 500 feet, 1 mile, etc., and/or is farther than the threshold distance from a safe and/or trusted location). in some embodiments, in accordance with a determination that the respective remote locator object is not with the user, user interface 1012 includes module 1048 that indicates that the respective remote locator object is not with the user. in some embodiments, module 1048 is selectable to display the current determined location of the respective remote locator object (e.g., display a map user interface, similar to user interface 1000 described above with respect to fig. 10a). [00326] in fig. 10m, user interface 1012 includes module 1050 that indicates that the location of the respective remote locator object is shared with the spouse of the user. in some embodiments, the location of a remote locator object is able to be shared with other users such that other users are able to see the location of the remote locator object (e.g., using their own electronic devices). in some embodiments, the location of a remote locator object can be shared indefinitely or for a preset duration (e.g., for 1 hour, 2 hours, 12 hours, for the rest of the day, for 24 hours, etc.).
in some embodiments, user interface 1012 includes module 1050 in accordance with a determination that the remote locator object is shared with another user. as shown in fig. 10m, module 1050 includes an indication of the user with which the location is shared (e.g., the name of the user, the title of the user, etc.). in some embodiments, module 1050 is selectable to view and/or change one or more sharing settings of the remote locator object. for example, selection of module 1050 optionally causes display of a user interface in which a user is able to share location with a new person (or person with which the location was previously shared), terminate sharing with a currently shared person, and/or change the duration of sharing for currently shared people. [00327] in some embodiments, module 1048 is selectable to mark the current location of the respective remote locator object as a safe and/or trusted location. for example, in fig. 10m, a user input 1003 is received selecting module 1048. in some embodiments, in response to receiving user input 1003, device 500 displays user interface 1052, as shown in fig. 10n. in some embodiments, user interface 1052 is a user interface for setting a trusted location for the respective remote locator object. in some embodiments, if a respective remote locator object is separated from the user (e.g., from device 500) by more than a threshold distance (e.g., more than 50 feet, 100 feet, 500 feet, 1 mile, etc.) but is located at or in a trusted location, a notification is not generated at device 500 to alert the user that the remote locator object is separated from the user. thus, a trusted location is a location within which the remote locator object can be located without alerting the user that the remote locator object may be misplaced. in some embodiments, trusted locations can be fixed or dynamic locations. examples of trusted locations can be the user’s workplace, the user’s home, the location of the user’s car, etc. 
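the separation-notification suppression described above (notify only when the object is beyond a threshold distance from the user and outside every trusted-location geofence) can be sketched as a simple predicate. the function name, the planar distance approximation, and the 500-foot default are illustrative assumptions; a real implementation would use geodesic distances and the configured radii:

```python
import math


def _distance_ft(a, b):
    # planar approximation for the sketch; real code would compute
    # geodesic (e.g., haversine) distance between coordinates
    return math.hypot(a[0] - b[0], a[1] - b[1])


def should_notify_separation(object_pos, owner_pos, trusted_zones,
                             separation_threshold_ft=500.0):
    """trusted_zones: list of (center, radius_ft) geofences, fixed or
    anchored to another device. a notification fires only when the object
    is both far from the owner and outside every trusted zone."""
    if _distance_ft(object_pos, owner_pos) <= separation_threshold_ft:
        return False  # object is still "with" the user
    # far from the user: suppress only if inside some trusted zone
    return not any(_distance_ft(object_pos, center) <= radius
                   for center, radius in trusted_zones)
```

a dynamic trusted location (e.g., a radius around a child's phone, as described above) fits the same shape: its `(center, radius_ft)` entry is simply recomputed from that device's current location before each check.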
[00328] in some embodiments, user interface 1052 includes text entry field 1052 in which a user is able to enter an address and add the trusted location via address entry. in some embodiments, user interface 1052 includes map 1054. in some embodiments, map 1054 includes an indication 1055 of the current location of device 500. in some embodiments, map 1054 includes pin 1056 which is initially located at the current determined location of the remote locator object, which the user is able to interact with and move around the map. in some embodiments, the trusted location can be added by moving pin 1056 to the desired location and setting the location as the trusted location (e.g., by clicking the “done” affordance). in some embodiments, pin 1056 is fixed to the center of map 1054 and the user is able to set the trusted location by panning the map such that the pin is at the intended location. in some embodiments, pin 1056 is initially set to the currently determined location of the remote locator object. [00329] in some embodiments, user interface 1052 includes radius options 1058-1 to 1058-4 for selecting the radius for the trusted location. for example, the user can select a small (e.g., selectable option 1058-1 for 50 feet, 100 feet, 200 feet, etc.), medium (e.g., selectable option 1058-2 for 100 feet, 200 feet, 500 feet, etc.), or large radius (e.g., selectable option 1058-3 for 400 feet, 600 feet, 1000 feet, etc.) around the pin 1056 in which separation notifications are not triggered. in some embodiments, the user can select selectable option 1058-4 to provide a custom radius for the trusted location. in some embodiments, map 1054 displays a visual indication of the radius selected by the user (e.g., shown as a dotted circle around pin 1056). in some embodiments, the user is able to perform a pinch gesture on map 1054 to enlarge or reduce the size of the dotted circle and provide a custom radius.
in some embodiments, in response to the user’s gesture enlarging or reducing the size of the dotted circle, device 500 automatically moves the radius selection to selectable option 1058-4 corresponding to the custom radius option. in some embodiments, other methods of identifying and/or selecting a geographic location for a trusted location are possible, as are other methods of drawing a boundary for a trusted location. in some embodiments, a trusted location is a non-fixed location. for example, a trusted location can be an electronic device such that a pre-determined radius around the location of the electronic device is considered a trusted location. for example, if a remote locator object is within 10 feet of a user’s child’s primary electronic device (e.g., the user’s child’s phone), the remote locator object is considered to be in a trusted location (e.g., even if the remote locator object (and/or the user’s child’s primary electronic device) is more than the threshold distance from fixed trusted locations). [00330] fig. 10o illustrates an embodiment in which the location of the respective remote locator object is shared with another user for a limited duration. in fig. 10o, in accordance with a determination that the location of the respective remote locator object is shared with a user named “mike”, user interface 1012 includes module 1060 indicating that the location of the respective remote locator object is shared with mike. in some embodiments, if the sharing is of limited duration, module 1060 optionally includes an indication of the remaining duration of the sharing. for example, in fig. 10o, the remaining duration for sharing location with mike is two hours and module 1060 indicates that there are 2 hours of sharing remaining. in some embodiments, module 1060 is selectable to view and/or change one or more sharing settings of the remote locator object.
for example, selection of module 1060 optionally causes display of a user interface in which a user is able to share location with a new person (or person with which the location was previously shared), terminate sharing with a currently shared person, and/or change the duration of sharing for currently shared people (e.g., similarly to selection of module 1050 described above). [00331] fig. 10p illustrates an embodiment in which device 500 is displaying a user interface for a trackable object that is not owned by or otherwise associated with the user of device 500. in some embodiments, a trackable object is “owned” by a user (e.g., associated with the user’s account) whose electronic device was first paired with the trackable object (e.g., the first person to pair with and initialize the trackable object) and/or who has been marked as the owner of the trackable object (e.g., the person that has set himself or herself as the owner of the trackable object or otherwise claimed ownership of the trackable object). for example, in fig. 10p, user interface 1012 is associated with bob’s headphones (e.g., the trackable object is associated and/or paired with an account that is not the account of the user of device 500 and/or is not the currently active account on device 500). in some embodiments, bob’s headphones is a trackable device of which bob is able to see the location. in some embodiments, because bob is able to see the location of bob’s headphones, user interface 1012 includes module 1062 that indicates that bob is able to see the location of bob’s headphones. in some embodiments, user interface 1012 includes the name of the owner of the device and/or the name of the user that can see the location of the trackable object because the user has a trusted relationship with the respective person. for example, the user of device 500 optionally is friends with bob and/or has bob as a known contact.
in some embodiments, if the user does not have a pre-existing relationship with the owner of the device, user interface 1012 optionally does not include the name of the owner of the device or the name of the user that can see the location of the device. thus, as described above, module 1062 is displayed in accordance with a determination that a criterion that user interface 1012 is associated with a trackable object that is owned by a user other than the user of device 500 (e.g., device 500 that is displaying user interface 1012 is not the device of the owner of the trackable object) is satisfied. [00332] in fig. 10q, device 500 is displaying user interface 1012 associated with a trackable object (e.g., a remote locator object) owned by a user that does not have a relationship with the user of device 500. in some embodiments, a respective user does not have a relationship with the user of device 500 if, for example, the respective user is not a contact of the user of device 500, is not a member of the family group that includes the user of device 500, has not previously shared the location of the trackable object and/or any trackable object with the user of device 500, etc. in some embodiments, because the user does not have an existing relationship with the owner of the respective remote locator object, user interface 1012 indicates that the respective remote locator object is “someone’s” locator, and includes module 1064 that indicates that other people can see the location of the remote locator object (e.g., without displaying the name or identifier of that other user). in some embodiments, module 1064 is selectable to display instructions for disabling the respective remote locator object, thus preventing an unknown person from tracking the location of the user of device 500. [00333] for example, in fig. 10q, a user input 1003 is received selecting module 1064.
in some embodiments, in response to receiving user input 1003, device 500 displays user interface 1066, as shown in fig. 10r. in some embodiments, user interface 1066 includes instructions for disassembling and disabling the remote locator object. for example, user interface 1066 includes textual instructions 1070 for disassembling and disabling the remote locator object. in some embodiments, user interface 1066 includes representation 1068 that animates to illustrate how to disassemble and disable the remote locator object. for example, in fig. 10r, representation 1068 animates to illustrate the remote locator object being twisted open and in fig. 10s, representation 1068 animates to illustrate the remote locator object being opened, revealing the battery, which can be removed to disable the remote locator object. [00334] fig. 10t illustrates an embodiment in which device 500 is displaying a user interface for a trackable object that is owned by a friend of the user of device 500 (e.g., the trackable object is associated and/or paired with the account of a contact of the user of device 500). in some embodiments, a friend of the user is a user that has been marked as a friend of the user of device 500. in some embodiments, a friend of the user is a user that has marked the user of device 500 as a friend. in some embodiments, user interface 1012 includes module 1072. in some embodiments, module 1072 is the same or similar to module 1062 described above with respect to fig. 10p (e.g., the text of module 1072 indicates that the location of the respective remote locator object can be seen by the friend of the user of device 500). in some embodiments, module 1072 is selectable to transmit a request to the owner of the remote locator object (e.g., the friend) to share the location of the respective remote locator object with the user of device 500. 
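the ownership-module behavior described in figs. 10p-10t (name the owner only when a relationship exists, otherwise withhold identity as "someone") can be sketched as a small selection function. the function name, its parameters, and the module strings are illustrative assumptions, not the actual implementation:

```python
def owner_visibility_module(viewer_id: str, owner_id: str,
                            owner_name: str, is_known_contact: bool):
    """Return the text of the ownership module to display, or None when
    no such module applies (the viewer is the owner)."""
    if viewer_id == owner_id:
        return None  # the owner's own device shows no third-party module
    if is_known_contact:
        # existing relationship: the owner may be named (cf. modules 1062/1072)
        return f"{owner_name} can see the location of this item"
    # no relationship: the owner's identity is withheld (cf. module 1064)
    return "someone can see the location of this item"
```

keeping the decision in one place makes the privacy rule easy to audit: the owner's name appears in the ui only on the branch gated by an existing relationship.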
[00335] it is understood that the user interfaces discussed above can include any number and any combination of the modules above. for example, if the criteria for a first respective module is satisfied, the user interface can include the first respective module and if the criteria for a second respective module is satisfied, the user interface can include the second respective module. in some embodiments, if the criteria for a first respective module is satisfied, and the criteria for other modules are not satisfied, the user interface includes the first respective module and does not include other modules whose criteria are not satisfied. in some embodiments, certain modules are included in the user interface without regard to whether other modules are also included in the user interface. in some embodiments, certain modules interact with other modules such that the fact that a respective module is included in the user interface is a factor (e.g., criterion) in whether another module is included in the user interface (and/or the criteria for a certain module can share at least one criterion with another module). in some embodiments, as discussed above, in response to determining that a respective criteria for a respective module is no longer satisfied, the respective module is automatically removed from the user interface. in some embodiments, the respective module is automatically removed from the user interface when the respective criteria ceases to be satisfied, optionally while the user interface is still being displayed (e.g., the respective module is updated “live”). in some embodiments, the respective module is automatically removed from the user interface when the user interface is refreshed (e.g., after the device navigates away from the user interface and re-displays the user interface, at a future time). [00336] figs. 
11a-11i are flow diagrams illustrating a method 1100 of providing information associated with a remote locator object and/or providing mechanisms for adjusting operation of the remote locator object or the electronic device in accordance with some embodiments, such as in figs. 10a-10t. the method 1100 is optionally performed at an electronic device such as device 100, device 300, device 500 as described above with reference to figs. 1a-1b, 2-3, 4a-4b and 5a-5h. some operations in method 1100 are, optionally, combined and/or the order of some operations is, optionally, changed. [00337] as described below, the method 1100 provides information associated with a remote locator object and/or provides mechanisms for adjusting operation of the remote locator object or the electronic device. the method reduces the cognitive burden on a user when interacting with a user interface of the device of the disclosure, thereby creating a more efficient human-machine interface. for battery-operated electronic devices, increasing the efficiency of the user’s interaction with the user interface conserves power and increases the time between battery charges. [00338] in some embodiments, an electronic device in communication with one or more wireless antennas, a display generation component and one or more input devices (e.g., electronic device 500, a mobile device (e.g., a tablet, a smartphone, a media player, or a wearable device) including wireless communication circuitry, optionally in communication with one or more of a mouse (e.g., external), trackpad (optionally integrated or external), touchpad (optionally integrated or external), remote control device (e.g., external), another mobile device (e.g., separate from the electronic device), a handheld device (e.g., external), and/or a controller (e.g., external), etc.), displays, via the display generation component, a map user interface that includes a representation of a remote locator object, such as user interface 1000 in fig.
10a (e.g., a user interface that includes a representation of a map). [00339] in some embodiments, a representation of a remote locator object (e.g., an icon) is displayed in the representation of the map indicating the location of the remote locator object on the map. in some embodiments, the map user interface includes information about the remote locator object. in some embodiments, the map user interface includes information about other devices whose location information are available. in some embodiments, the representation of the map includes a plurality of representations of a plurality of objects (e.g., remote locator objects, electronic devices, etc.) indicating the locations of the plurality of objects on the map. in some embodiments, the map user interface includes a list of one or more objects, including the remote locator object, whose location information are available. [00340] in some embodiments, a display generation component is a display integrated with the electronic device (optionally a touch screen display), external display such as a monitor, projector, television, or a hardware component (optionally integrated or external) for projecting a user interface or causing a user interface to be visible to one or more users, etc. [00341] in some embodiments, while displaying the map user interface that includes the representation of a remote locator object, the electronic device receives (1102), via the one or more input devices, an input corresponding to a request to display additional information about the remote locator object, such as user input 1003 selecting icon 1004 in fig. 10b (e.g., receiving a selection input, such as a tap, on the representation of the remote locator object). in some embodiments, a selection input on the representation of the remote locator object is interpreted as a request to display additional information about the remote locator object. 
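the selection flow just described (a tap on either the map icon or the corresponding list entry resolves to the same tracked object and opens the same detail user interface) can be sketched as follows. the class and method names are invented for illustration; they are not the actual implementation:

```python
class MapUserInterface:
    """Sketch: icon taps and list-entry taps both route to one detail view,
    since both representations refer to the same tracked object."""

    def __init__(self, tracked_objects: dict):
        self.tracked_objects = tracked_objects  # object id -> identifier
        self.detail_for = None                  # id of object whose detail ui is shown

    def select_icon(self, object_id: str) -> str:
        return self._show_detail(object_id)

    def select_list_entry(self, object_id: str) -> str:
        return self._show_detail(object_id)

    def _show_detail(self, object_id: str) -> str:
        if object_id not in self.tracked_objects:
            raise KeyError(f"unknown tracked object: {object_id}")
        # display the detail user interface concurrently with the map
        self.detail_for = object_id
        return self.tracked_objects[object_id]
```

routing both entry points through one `_show_detail` helper mirrors the behavior where selection of icon 1004 or entry 1010-1 yields the same user interface 1012.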
[00342] in some embodiments, in response to receiving the input corresponding to the request to display the additional information about the remote locator object, the electronic device updates (1104) the map user interface to include a respective user interface associated with the remote locator object, such as user interface 1012 in fig. 10c (e.g., displaying a user interface that includes information about the respective remote locator object). in some embodiments, the user interface associated with the respective remote locator object includes information such as the identifier (e.g., name) of the remote locator object, its current location, and/or its current status, etc. in some embodiments, the user interface associated with the respective remote locator object includes one or more selectable options for performing operations associated with the respective remote locator object, such as changing one or more settings of the respective remote locator object, changing the name of the remote locator object, etc. in some embodiments, the user interface associated with the respective remote locator object is displayed concurrently with the representation of the map (e.g., overlaid on a portion of the representation of the map or displayed below the representation of the map). [00343] in some embodiments, in accordance with a determination that the remote locator object satisfies one or more first criteria, the respective user interface includes a respective user interface element that includes first information about the remote locator object (1106), such as module 1018 in fig. 10c (e.g., the user interface associated with the respective remote locator object includes one or more user interface elements associated with a current status of the remote locator object). [00344] for example, if the battery for the remote locator object is low, then the user interface includes a user interface element indicating that the battery is low.
in another example, if the device is unable to wirelessly communicate with the remote locator object, then the user interface includes a user interface element indicating that the device is unable to wirelessly communicate with the remote locator object and optionally suggests to the user to enable one or more wireless communication protocols (e.g., enable bluetooth). in some embodiments, the user interface includes multiple user interface elements, each corresponding to a different state of the remote locator object. [00345] in some embodiments, in accordance with a determination that the remote locator object does not satisfy the one or more first criteria, the respective user interface does not include the respective user interface element that includes the first information about the remote locator object (1108), such as if user interface 1012 does not include module 1018 in fig. 10c (e.g., if the criteria associated with a respective state or condition is not satisfied, then the user interface does not include a user interface element associated with the respective state or condition). for example, if the device has bluetooth (or another communication profile) enabled and is able to wirelessly communicate with the remote locator object, the user interface does not include an element that indicates that the device is unable to wirelessly communicate with the remote locator object.
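The conditional inclusion of status elements described in paragraphs [00343]-[00345] can be modeled as a simple predicate check per condition. The state keys and module names below are illustrative assumptions for the sketch, not identifiers from the disclosure.

```python
# Hypothetical sketch of the criteria-gated status modules: a module is
# included in the respective user interface only when its condition's
# criteria are satisfied; otherwise it is omitted entirely.

def build_status_modules(state):
    """Return the status modules to show for an assumed `state` dict."""
    modules = []
    if state.get("battery_low"):
        # first-criteria example: low battery -> show a battery module
        modules.append("battery-low")
    if not state.get("bluetooth_enabled", True):
        # communication example: device cannot reach the object
        modules.append("enable-bluetooth")
    return modules
```

When no criteria are met, the returned list is empty and the respective user interface contains none of these elements.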
[00346] the above-described manner of providing information about a remote locator object (e.g., displaying one or more user interface elements associated with different conditions if certain criteria are satisfied) provides a quick and efficient way of providing status information about the remote locator object (e.g., by displaying information about a certain state or condition only if certain criteria are satisfied, but not displaying the information if the criteria are not satisfied, without requiring the user to perform additional inputs to determine whether action is required to resolve an issue), which simplifies the interaction between the user and the electronic device and enhances the operability of the electronic device and makes the user-device interface more efficient, which additionally reduces power usage and improves battery life of the electronic device by enabling the user to use the electronic device more quickly and efficiently, while reducing errors in usage. [00347] in some embodiments, the respective user interface includes a selectable option that is selectable to initiate a process to obtain directions to a location associated with the remote locator object (1110), such as selectable option 1020 in fig. 10c (e.g., the user interface associated with the respective remote locator object includes a selectable option to locate the remote locator object). [00348] in some embodiments, in response to selecting the selectable option to locate the remote locator object, the device initiates a finding mode. in some embodiments, if the distance to the remote locator object is above a threshold (e.g., 20 feet, 50 feet, 300 feet, ¼ mile, 1 mile, 3 miles, etc.), the finding mode is a map-based navigation mode and if the distance to the remote locator object is below the threshold, the finding mode is a compass-style navigation mode, similar to as described below with respect to method 1300.
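The distance-dependent choice of finding mode in paragraph [00348] amounts to a single threshold comparison. In this sketch the 50-foot threshold is just one of the example values listed in the text, and the function name is hypothetical.

```python
# Illustrative sketch: above the threshold, finding uses map-based
# navigation; at or below it, a compass-style navigation mode is used.

def finding_mode(distance_feet, threshold_feet=50):
    """Pick a finding mode from the distance to the remote locator object."""
    return "map" if distance_feet > threshold_feet else "compass"
```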
[00349] the above-described manner of finding a remote locator object (e.g., by providing a selectable option to initiate a process to find the remote locator object in a user interface associated with the remote locator object) provides a quick and efficient way of finding the remote locator object (e.g., by displaying the selectable option in the same user interface for managing the settings of the remote locator object and that includes information about the remote locator object), which simplifies the interaction between the user and the electronic device and enhances the operability of the electronic device and makes the user-device interface more efficient (e.g., without requiring the user to perform additional inputs or navigate to a different user interface to initiate the process to find the remote locator object), which additionally reduces power usage and improves battery life of the electronic device by enabling the user to use the electronic device more quickly and efficiently, while reducing errors in usage. [00350] in some embodiments, the respective user interface includes a selectable option that is selectable to initiate a process to cause the remote locator object to generate audio (1112), such as selectable option 1022 in fig. 10c (e.g., the user interface associated with the respective remote locator object includes a selectable option to cause the remote locator object to generate an audible tone (e.g., for the purpose of finding the remote locator object)). [00351] in some embodiments, in response to selecting the selectable option to cause the remote locator object to generate an audible tone, the electronic device issues a command to the remote locator object to generate an audible tone. in some embodiments, the remote locator object generates an audible tone until the electronic device receives an input selecting the selectable option to turn off the audible tone.
thus, in some embodiments, the selectable option toggles the audible tone on and off. in some embodiments, the selectable option causes the remote locator object to generate an audible tone for a predetermined amount of time (e.g., 3 seconds, 5 seconds, 10 seconds, 30 seconds, 1 minute, etc.) and automatically stop generating the audible tone after the predetermined amount of time. [00352] the above-described manner of finding a remote locator object (e.g., by providing a selectable option to cause the remote locator object to generate audio) provides a quick and efficient way of finding the remote locator object (e.g., by displaying the selectable option in the same user interface for managing the settings of the remote locator object and that includes information about the remote locator object), which simplifies the interaction between the user and the electronic device and enhances the operability of the electronic device and makes the user-device interface more efficient (e.g., without requiring the user to perform additional inputs or navigate to a different user interface to cause the remote locator object to generate audio), which additionally reduces power usage and improves battery life of the electronic device by enabling the user to use the electronic device more quickly and efficiently, while reducing errors in usage. [00353] in some embodiments, the first information includes information about an ability of the electronic device to communicate with the remote locator object (1114), such as module 1019 in fig. 10c and module 1032 in fig.
10h (e.g., the user interface associated with the respective remote locator object includes information about the current status of the tracking devices, such as information associated with the connectivity with the remote locator object, information about the battery level of the remote locator object, information about the location of the remote locator object, information about who is able to see the location of the remote locator object, etc.). [00354] for example, if the electronic device is not able to wirelessly communicate with the remote locator object, the user interface displays an indication that the electronic device is not able to communicate with the remote locator object and optionally displays a selectable option to change a respective setting to enable communication with the remote locator object. for example, the user interface optionally displays an indication that the device is in airplane mode (e.g., in which the wireless communication circuitry of the electronic device is optionally disabled) and is unable to communicate with the remote locator object. in some embodiments, the indication is optionally selectable to cause the electronic device to exit airplane mode. thus, in some embodiments, the user interface includes one or more indications of the status of the remote locator object that affects the operability of the remote locator object (e.g., the remote locator object’s ability to track location and/or the electronic device’s ability to receive location information from the remote locator object, etc.). 
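The connectivity indications of paragraph [00354] — one indication per setting that blocks communication with the remote locator object, each selectable to correct that setting — can be sketched as follows. The settings keys and indication labels are assumptions made for this sketch only.

```python
# Hypothetical sketch: each disabled communication setting produces an
# (indication, corrective-action) pair; selecting the indication would
# trigger the paired action (e.g., exit airplane mode).

def connectivity_indications(settings):
    indications = []
    if settings.get("airplane_mode"):
        # airplane mode disables the wireless communication circuitry
        indications.append(("airplane-mode-on", "exit-airplane-mode"))
    if not settings.get("bluetooth", True):
        # a disabled protocol also prevents reaching the object
        indications.append(("bluetooth-off", "enable-bluetooth"))
    return indications
```

With all settings in their normal state, no connectivity indication is shown.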
[00355] the above-described manner of displaying information about the current status of the remote locator object (e.g., by displaying indications of the status of the remote locator object in the user interface for managing the settings of the remote locator object) provides a quick and efficient way of providing an indication of the operation of the remote locator object, which simplifies the interaction between the user and the electronic device and enhances the operability of the electronic device and makes the user-device interface more efficient (e.g., without requiring the user to perform additional inputs or navigate to a different user interface to view different types of status information for the remote locator object), which additionally reduces power usage and improves battery life of the electronic device by enabling the user to use the electronic device more quickly and efficiently, while reducing errors in usage. [00356] in some embodiments, the information about the ability of the electronic device to communicate with the remote locator object includes information that a wireless communication functionality of the electronic device is disabled (1116), such as module 1019 in fig. 10c and module 1032 in fig. 10h (e.g., the user interface includes an indication of one or more settings associated with the wireless communication circuitry used to wirelessly communicate with the remote locator object). [00357] for example, the user interface includes an indication that airplane mode is enabled such that the electronic device is unable to wirelessly communicate with the remote locator object. in another example, the user interface includes an indication that a communication protocol (e.g., bluetooth, wifi, etc.) is disabled such that the electronic device is unable to wirelessly communicate with the remote locator object.
in some embodiments, the indications are selectable to change the respective setting of the electronic device to enable the electronic device to wirelessly communicate with the remote locator object. for example, selecting the indication that the device is in airplane mode causes the device to exit airplane mode, and selecting the indication that the bluetooth circuitry is disabled causes the device to enable the bluetooth circuitry. [00358] the above-described manner of displaying connectivity information that affects the ability of the device to communicate with the remote locator object (e.g., by displaying indications of the status of one or more wireless communication circuitry that is used to communicate with the remote locator object) provides a quick and efficient way of providing an indication that the device is in a state in which it is unable to communicate with the remote locator object and receive location information from the remote locator object, which simplifies the interaction between the user and the electronic device and enhances the operability of the electronic device and makes the user-device interface more efficient (e.g., without requiring the user to perform additional inputs or navigate to a different user interface to determine whether the settings associated with wireless communication circuitry are set to the correct values), which additionally reduces power usage and improves battery life of the electronic device by enabling the user to use the electronic device more quickly and efficiently, while reducing errors in usage. [00359] in some embodiments, the first information includes an indication that a process to generate audio at the remote locator object is in progress (1118), such as module 1028 in fig. 10f (e.g., the user interface includes an indication that a command has been sent or is being sent to the remote locator object to generate an audio output). 
in some embodiments, the indication is displayed while the command is being sent, optionally before receiving an acknowledgement that the remote locator object is generating the audio output. in some embodiments, the indication is updated to indicate that the remote locator object is generating audio, optionally in response to receiving an acknowledgement that the remote locator object has received the command and is generating audio output. in some embodiments, the indication is selectable to display the current initialization status of the remote locator object (e.g., which initialization step is being performed, how many steps are remaining, the estimated time to completion, etc.). [00360] the above-described manner of displaying an indication that the process to initiate audio to be generated at the remote locator object is in progress (e.g., by displaying indications that a command has been sent to the remote locator object to generate audio output) provides a quick and efficient way of indicating that the process to generate audio at the remote locator object is in progress, which simplifies the interaction between the user and the electronic device and enhances the operability of the electronic device and makes the user-device interface more efficient (e.g., without requiring the user to perform additional inputs or wait to determine whether the process to cause audio to be generated at the remote locator object is successful), which additionally reduces power usage and improves battery life of the electronic device by enabling the user to use the electronic device more quickly and efficiently, while reducing errors in usage. [00361] in some embodiments, the first information includes an indication that a process to configure the remote locator object is in progress (1120), such as module 1018 in fig. 10c (e.g., the user interface includes an indication that the remote locator object is still being initialized).
for example, the remote locator object is receiving information from the electronic device and/or configuring internal settings to enable its location tracking features. in some embodiments, in response to determining that initialization has completed, the indication that the remote locator object is still being initialized is automatically dismissed. in some embodiments, in response to determining that initialization has completed, the indication is updated to indicate that setup has completed. [00362] the above-described manner of displaying an indication that the remote locator object is initializing provides a quick and efficient way of indicating that the full functionality of the remote locator object is not yet ready, which simplifies the interaction between the user and the electronic device and enhances the operability of the electronic device and makes the user- device interface more efficient (e.g., without requiring the user to perform additional inputs or wait to determine whether initialization has completed), which additionally reduces power usage and improves battery life of the electronic device by enabling the user to use the electronic device more quickly and efficiently, while reducing errors in usage. [00363] in some embodiments, the first information includes an indication that a battery level of the remote locator object is below a threshold (1122), such as module 1038 in fig. 10j (e.g., the user interface includes an indication of the current battery level of the remote locator object and/or an indication that the battery level of the remote locator object is low (e.g., less than 5%, 10%, 30% battery level)). in some embodiments, the electronic device receives battery level information from the remote locator object. in some embodiments, the indication is selectable to display a tutorial for how to change the batteries of the remote locator object. 
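The low-battery indication of paragraph [00363] reduces to comparing the reported battery level against a threshold. The 10% threshold below is one of the example values from the text; the function name is hypothetical.

```python
# Illustrative sketch: show a low-battery indication only when the
# battery level reported by the remote locator object is below the
# threshold; otherwise no indication is shown.

def battery_indication(level, threshold=0.10):
    """`level` is the reported battery level as a fraction (0.0-1.0)."""
    return "battery-low" if level < threshold else None
```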
[00364] the above-described manner of displaying an indication of the battery level of the remote locator object provides a quick and efficient way of indicating that the battery of the remote locator object should be changed soon, which simplifies the interaction between the user and the electronic device and enhances the operability of the electronic device and makes the user-device interface more efficient (e.g., by preventing the remote locator object from running out of battery unexpectedly and/or without requiring the user to separately determine the battery level of the remote locator object), which additionally reduces power usage and improves battery life of the electronic device by enabling the user to use the electronic device more quickly and efficiently, while reducing errors in usage. [00365] in some embodiments, the first information includes an indication that a location of the remote locator object is shared with a user that is not associated with the electronic device (1124), such as module 1050 in fig. 10m and module 1060 in fig. 10o (e.g., if the location of the remote locator object is being shared with another user, the user interface includes an indication that the location of the remote locator object is being shared with another user). [00366] in some embodiments, the indication indicates the user that is receiving the location information and/or the duration of the sharing. for example, the location of the remote locator object is able to be shared indefinitely (e.g., until the user explicitly ends sharing) or shared for a preset duration (e.g., for 1 hour, for 2 hours, for 12 hours, for the rest of the day, for 24 hours, etc.), and the indication indicates the amount of time remaining (e.g., if the sharing is for a preset duration).
in some embodiments, the indication is selectable to change the sharing settings of the remote locator object (e.g., to disable sharing, to extend the duration of the sharing, to see a list of who is receiving location information, etc.). [00367] the above-described manner of displaying an indication that the remote locator object is being shared with another user provides a quick and efficient way of indicating that the location of the remote locator object can be seen by another user, which simplifies the interaction between the user and the electronic device and enhances the operability of the electronic device and makes the user-device interface more efficient (e.g., without requiring the user to perform additional user inputs and/or navigate to another user interface to determine whether the remote locator object is being shared with another user), which additionally reduces power usage and improves battery life of the electronic device by enabling the user to use the electronic device more quickly and efficiently, while reducing errors in usage. [00368] in some embodiments, the first information includes an indication that the remote locator object has been designated as being lost and is operating in a lost mode (1126), such as module 1036 in fig. 10i (e.g., if the remote locator object has been marked as lost (e.g., by the electronic device of the owner of the remote locator object), the user interface includes an indication that the location of the remote locator object has been marked as lost). [00369] in some embodiments, the indication is selectable to display information about the lost mode, to display the current location of the remote locator object, to display the last known location of the remote locator object, and/or to disable lost mode, etc. in some embodiments, the remote locator object is owned by another user and the indication is selectable to display information for how to contact the owner of the remote locator object.
in some embodiments, the owner of the remote locator object is the user whose electronic device is paired with the remote locator object and/or the user that initialized the remote locator object and has been associated with the remote locator object as the owner and who optionally is authorized to change one or more settings of the remote locator object. [00370] the above-described manner of displaying an indication that the remote locator object has been marked as lost provides a quick and efficient way of indicating the current lost status of the remote locator object, which simplifies the interaction between the user and the electronic device and enhances the operability of the electronic device and makes the user-device interface more efficient (e.g., without requiring the user to perform additional user inputs and/or navigate to another user interface to determine whether the remote locator object has successfully been marked as lost), which additionally reduces power usage and improves battery life of the electronic device by enabling the user to use the electronic device more quickly and efficiently, while reducing errors in usage. [00371] in some embodiments, the first information includes an indication that the remote locator object has been designated as being lost and will operate in a lost mode in response to one or more connection criteria being satisfied (1128), such as module 1034 in fig. 10h (e.g., if the remote locator object has been marked as lost, but the device has not received an acknowledgement that the lost status of the remote locator object has been enabled yet, the user interface includes an indication that the remote locator object is in the process of being marked as lost). [00372] in some embodiments, the indication is displayed in response to receiving a user input to mark the remote locator object as lost. for example, the user interface optionally includes a selectable option to mark the remote locator object as lost. 
in some embodiments, the remote locator object has been marked as lost, but has not yet enabled lost mode if, for example, the electronic device is in airplane mode and is unable to wirelessly transmit the request to mark the object as lost to a server associated with the remote locator object and/or to the remote locator object. in some embodiments, the indication is selectable to display information about lost mode and/or to initiate a process to terminate the request to mark the remote locator object as lost. [00373] the above-described manner of displaying an indication that the remote locator object is in the process of being marked as lost provides a quick and efficient way of acknowledging the request to mark the remote locator object as lost, which simplifies the interaction between the user and the electronic device and enhances the operability of the electronic device and makes the user-device interface more efficient (e.g., without requiring the user to perform additional user inputs, navigate to another user interface, and/or wait to determine whether the remote locator object has successfully been marked as lost), which additionally reduces power usage and improves battery life of the electronic device by enabling the user to use the electronic device more quickly and efficiently, while reducing errors in usage. [00374] in some embodiments, the first information includes information associated with an ability of a user that is not associated with the electronic device to determine a location of the remote locator object (1130), such as module 1062 in fig. 10p and module 1064 in fig. 10q (e.g., if the remote locator object is being shared with another user or if the remote locator object is owned by a user other than the user of the electronic device such that the other user is able to see the location of the remote locator object, the user interface includes an indication that the location of the remote locator object can be seen by another user).
[00375] in some embodiments, the indication that the location of the remote locator object can be seen by another user includes an indication of the name of the other user. for example, if the user owns the remote locator object and shared the location of the remote locator object with a contact, the indication indicates the contact with whom the remote locator object is shared. in some embodiments, if the remote locator object is owned by a user other than the user of the device, the indication does not indicate the name of the user that is able to see the location of the remote locator object. in some embodiments, the indication does indicate the name of the user that is able to see the location of the remote locator object. for example, if the remote locator object is owned by a contact of the user and/or is sharing the remote locator object with the user, then the indication indicates the name of the owner of the remote locator object. in some embodiments, if the remote locator object is owned by a user that is unknown to the user and/or being shared with a user that is unknown to the user (e.g., not in the contact list of the device), then the indication does not include the name of the owner of the device or the person with whom the remote locator object is shared. in some embodiments, the indicator is selectable to display more information about the sharing feature, to display more information about who can see the location of the object, and/or to display a tutorial of how to disable the remote locator object (e.g., to terminate location tracking). 
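The identity-disclosure behavior described in paragraph [00375] — name the other user only when that user is a known contact, and otherwise withhold the name to protect the owner's privacy — can be sketched as a single lookup. The wording of the two indication strings is an illustrative assumption, not text from the disclosure.

```python
# Hypothetical sketch: the sharing indication includes the other user's
# name only when that user appears in the device's contact list;
# otherwise the indication is anonymized.

def sharing_indication(other_user, contacts):
    if other_user in contacts:
        return f"{other_user} can see the location of this item"
    return "people other than you can see the location of this item"
```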
[00376] the above-described manner of displaying an indication that a user other than the user of the device is able to see the location of a remote locator object provides a quick and efficient way of notifying the user that the location of the user may be viewable by someone else, which provides privacy and security benefits to the user by alerting the user of potentially unknown tracking, and which simplifies the interaction between the user and the electronic device and enhances the operability of the electronic device and makes the user-device interface more efficient (e.g., without requiring the user to perform additional user inputs, navigate to another user interface to determine whether another user is able to see the location of the remote locator object), and which additionally reduces power usage and improves battery life of the electronic device by enabling the user to use the electronic device more quickly and efficiently, while reducing errors in usage. [00377] in some embodiments, the information associated with the ability of the user that is not associated with the electronic device to determine the location of the remote locator object includes an indication of an identity of the user (1132), such as module 1062 in fig. 10p indicating that a user named “bob” can see the location of the remote locator object (e.g., the indication includes the name of the person that is able to view the location of the remote locator object). for example, if the remote locator object is owned by a contact of the user (optionally who has shared the remote locator object with the user) and/or is shared with a contact of the user (e.g., optionally a mutual contact), then the indication indicates the name of the owner of the remote locator object and/or the person with whom the remote locator object is shared. 
in some embodiments, if the remote locator object is owned by the user, the indication indicates the name of the person with whom the remote locator object is shared. in this way, providing the name of the person that can see the remote locator object’s location allows the user to determine whether the tracking is unintended, unexpected, or acceptable. [00378] the above-described manner of displaying the name of the person that is able to see the location of a remote locator object provides a quick and efficient way of notifying the user of the person that is able to view the location of the user and/or the remote locator object, which provides privacy and security benefits to the user by alerting the user of potentially unknown tracking and which simplifies the interaction between the user and the electronic device and enhances the operability of the electronic device and makes the user-device interface more efficient (e.g., without requiring the user to perform additional user inputs or navigate to another user interface to determine who owns the remote locator object or is otherwise able to see the location of the remote locator object), and which additionally reduces power usage and improves battery life of the electronic device by enabling the user to use the electronic device more quickly and efficiently, while reducing errors in usage. [00379] in some embodiments, the information associated with the ability of the user that is not associated with the electronic device to determine the location of the remote locator object does not include an indication of an identity of the user (1134), such as module 1064 in fig. 10q not indicating the name of the user that can see the location of the remote locator object (e.g., the indication optionally does not include the name of the person that is able to view the location of the remote locator object and indicates that people other than the user are able to view the location of the remote locator object). 
[00380] for example, if the remote locator object is owned by a user that is unknown to the user and/or being shared with a user that is unknown to the user (e.g., not in the contact list of the device), then the indication does not include the name of the owner of the device or the person with whom the remote locator object is shared. in this way, the privacy of the owner of the remote locator object is protected, for example, if the user finds the remote locator object amongst other objects (e.g., in the same bag), the user is not able to associate the other objects with the name of the owner. [00381] the above-described manner of indicating that people other than the user are able to see the location of a remote locator object while concealing the name(s) of those people provides a quick and efficient way of notifying the user of the location of the remote locator object may be trackable by others, which further provides privacy and security benefits to the user by alerting the user of potentially unknown tracking and which simplifies the interaction between the user and the electronic device and enhances the operability of the electronic device and makes the user-device interface more efficient (e.g., without requiring the user to perform additional user inputs or navigate to another user interface to determine whether the remote locator object is enabled and able to provide location information to its owner and/or other people), and which additionally reduces power usage and improves battery life of the electronic device by enabling the user to use the electronic device more quickly and efficiently, while reducing errors in usage. [00382] in some embodiments, while displaying, via the display generation component, the respective user interface element, the electronic device receives (1136), via the one or more input devices, an input directed to the respective user interface element, such as user input 1003 in fig. 
10j (e.g., receiving an input (e.g., tap input) selecting the indications). [00383] in some embodiments, in response to receiving the input directed to the respective user interface element, the electronic device displays (1138), via the display generation component, second information, different from the first information, associated with the remote locator object, such as information about how to change the batteries in fig. 10k (e.g., updating the user interface to display information associated with the indications). for example, the user interface is replaced with another user interface that includes information about the respective indication, such as a tutorial user interface or a settings user interface, or the user interface is updated to include information about the respective indications (e.g., as a pop-up or embedded in the user interface). for example, in response to the user selecting an indication that the remote locator object is shared with another user, the device optionally displays information about the remaining duration of the sharing, the person or people with whom the remote locator object is shared, etc. [00384] the above-described manner of displaying information associated with the displayed indication provides a quick and efficient way of providing additional information to the user associated with the condition that caused the indication to be displayed, which simplifies the interaction between the user and the electronic device and enhances the operability of the electronic device and makes the user-device interface more efficient (e.g., without requiring the user to perform additional user inputs or navigate to another user interface to determine what caused the indication to be displayed and how to properly respond to the indication), which additionally reduces power usage and improves battery life of the electronic device by enabling the user to use the electronic device more quickly and efficiently, while reducing errors in usage. 
[00385] in some embodiments, the second information includes a selectable option that is selectable to initiate a process to set a current location of the remote locator object as a safe zone (1140), such as in fig. 10n (e.g., a respective indication displayed in the user interface is selectable to set the current location of the remote locator object and/or the current location of the device as a trusted location). [00386] for example, the user interface optionally includes an indication that the remote locator object (which is optionally owned by the user of the device) is separated from the user (e.g., the location of the remote locator object is determined to be farther than a threshold distance from the user’s personal electronic device, such as 100 feet, 500 feet, 1 mile, 5 miles, 10 miles, etc.), and the indication is selectable to set the determined current location of the remote locator object as a trusted location. in some embodiments, a trusted location for a remote locator object is a location (e.g., a geographic area) within which the remote locator object does not cause separation alerts to be generated. in some embodiments, a separation alert is a notification and/or an alert that is generated at the electronic device of the owner of the remote locator object in accordance with a determination that the remote locator object has become physically separated from the electronic device of the owner (e.g., optionally by a threshold distance, such as 50 feet, 200 feet, 500 feet, ½ mile, 1 mile, etc., optionally for a threshold amount of time, such as 10 minutes, 30 minutes, 1 hour, etc.). in some embodiments, if the remote locator object has been determined to be physically separated from the electronic device, but is determined to be within a safe zone, a separation alert is not generated. for example, a user is able to set the location of the user’s home as a trusted location, the location of the user’s work, etc. 
in some embodiments, a trusted location is a fixed location or a moveable location. for example, the location of the user’s spouse is able to be set as a trusted location such that if the user’s remote locator object is with the user’s spouse, the remote locator object does not generate a separation alert and/or does not cause display of an indication that the remote locator object is separated from the user. in some embodiments, a user is able to set the radius of the trusted location (e.g., a radius around the current determined location of the remote locator object). [00387] the above-described manner of setting the location of the remote locator object as a trusted location provides a quick and efficient way of preventing the current location of the remote locator object from generating further alerts, which simplifies the interaction between the user and the electronic device and enhances the operability of the electronic device and makes the user-device interface more efficient (e.g., without requiring the user to perform additional user inputs or navigate to another user interface to add the current location of the remote locator object as a trusted location), which additionally reduces power usage and improves battery life of the electronic device by enabling the user to use the electronic device more quickly and efficiently, while reducing errors in usage. [00388] in some embodiments, the second information includes information about changing a battery of the remote locator object (1142), such as in fig. 10k (e.g., in response to receiving a selection of an indication of the current battery level of the remote locator object and/or an indication that the current battery level of the remote locator object is low, the electronic device displays a tutorial for how to change the battery of the remote locator object). 
in some embodiments, the tutorial includes an animation of how to disassemble the remote locator object, how to remove the battery, how to insert a new battery, and/or the type of battery to use. [00389] the above-described manner of displaying information for how to change the battery of the remote locator object provides a quick and efficient way of guiding the user to change the battery, which simplifies the interaction between the user and the electronic device and enhances the operability of the electronic device and makes the user-device interface more efficient (e.g., without requiring the user to perform additional user inputs or perform independent research to determine how to change the battery of the remote locator object), which additionally reduces power usage and improves battery life of the electronic device by enabling the user to use the electronic device more quickly and efficiently, while reducing errors in usage. [00390] in some embodiments, the second information includes an indication of a remaining duration that a location of the remote locator object is shared with a user that is not associated with the electronic device (1144), such as in response to selection of module 1060 in fig. 10o (e.g., in response to receiving a user input selecting an indication that the location of the remote locator object is temporarily being shared with another user, the electronic device displays a remaining duration of the sharing with the other user). [00391] for example, if the user shared the location of the remote locator object for a preset amount of time (e.g., 2 hours, 6 hours, the rest of the day, etc.), then the user interface includes an indication of that sharing and in response to a selection of the indication, the device displays an indication of the amount of time remaining (e.g., 2 hours remaining, 1 hour remaining, etc.). 
in some embodiments, the indication that the location of the remote locator object is being shared with another user itself includes an indication of the amount of time remaining in the sharing. in some embodiments, in response to selecting the indication, the device provides one or more options for changing the sharing setting, such as changing the sharing to an indefinite duration, extending the duration, shortening the duration, and/or ending the sharing. [00392] the above-described manner of displaying the remaining duration of the sharing of the remote locator object with another user provides a quick and efficient way of indicating when sharing of the remote locator object will end, which simplifies the interaction between the user and the electronic device and enhances the operability of the electronic device and makes the user-device interface more efficient (e.g., without requiring the user to perform additional user inputs or navigate to another user interface to determine whether the sharing is indefinite or temporary and how much time is remaining), which additionally reduces power usage and improves battery life of the electronic device by enabling the user to use the electronic device more quickly and efficiently, while reducing errors in usage. [00393] in some embodiments, in response to receiving the input directed to the respective user interface element, the electronic device displays (1146), via the display generation component, a selectable option for requesting sharing of a location of the remote locator object from an owner of the remote locator object, such as module 1072 in fig. 10t (e.g., if the remote locator object is owned by a user other than the user of the electronic device, the user interface includes an indication that the remote locator object is owned by a user other than the user of the device (e.g., an indication that a location of the remote locator object is viewable by the owner of the device)). 
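The temporary-sharing countdown described above (e.g., "2 hours remaining, 1 hour remaining") can be sketched as follows. This is an illustrative sketch only, not part of the disclosed embodiments; the function names and the coarse formatting rules are assumptions:

```python
from datetime import datetime, timedelta

def remaining_share_time(share_start: datetime, share_duration: timedelta,
                         now: datetime) -> timedelta:
    """Time left in a temporary location share, clamped to zero once expired."""
    return max(share_start + share_duration - now, timedelta(0))

def format_remaining(remaining: timedelta) -> str:
    """Render a coarse countdown label like the indication described above."""
    if remaining <= timedelta(0):
        return "sharing ended"
    hours = int(remaining.total_seconds() // 3600)
    if hours >= 1:
        return f"{hours} hour{'s' if hours != 1 else ''} remaining"
    minutes = int(remaining.total_seconds() // 60)
    return f"{minutes} minutes remaining"
```

In the same spirit, selecting the indication could surface options that adjust `share_duration` (extend, shorten, or end the share).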
[00394] in some embodiments, the indication is selectable to display an option that is selectable to request that the location of the remote locator object be shared with the user of the electronic device. in some embodiments, the request is transmitted to the owner of the remote locator object. in some embodiments, if one or more requests for sharing are pending for a respective remote locator object, the user interface includes an indication that one or more sharing requests are pending. in some embodiments, the indication that one or more sharing requests are pending includes an indication of the person that is requesting the sharing and is optionally selectable to enable sharing with the respective person (optionally for a preset duration of time, or indefinitely) or to dismiss the sharing request (e.g., optionally deny the request). [00395] the above-described manner of providing a selectable option for requesting sharing of the location of the remote locator object provides a quick and efficient way of requesting access to that location from the owner of the remote locator object, which simplifies the interaction between the user and the electronic device and enhances the operability of the electronic device and makes the user-device interface more efficient (e.g., without requiring the user to perform additional user inputs or navigate to another user interface to request sharing from the owner of the device), which additionally reduces power usage and improves battery life of the electronic device by enabling the user to use the electronic device more quickly and efficiently, while reducing errors in usage. [00396] in some embodiments, in response to receiving the input directed to the respective user interface element, the electronic device changes (1148) a wireless communication functionality of the electronic device, such as device 500 enabling bluetooth functionality in fig. 10g in response to receiving user input 1003 in fig.
10f (e.g., if the user interface includes an indication that a wireless communication protocol used to communicate with the remote locator object (e.g., bluetooth, wifi, etc.) is disabled, in response to receiving a selection of the indication, the device enables the respective wireless communication protocol). in some embodiments, in response to enabling the respective wireless communication protocol, the indication is removed from the user interface (e.g., no longer displayed). in some embodiments, the indication is updated to indicate that the respective wireless communication protocol has been enabled. [00397] the above-described manner of enabling a wireless communication functionality (e.g., in response to receiving an input selecting an indication that the respective wireless communication functionality is disabled) provides a quick and efficient way of enabling communication with the remote locator object (e.g., by determining that the device is unable to communicate with the remote locator object, determining that the reason that the device is unable to communicate with the remote locator object is that a wireless communication protocol is disabled, and providing an option to enable the respective wireless communication protocol), which simplifies the interaction between the user and the electronic device and enhances the operability of the electronic device and makes the user-device interface more efficient (e.g., without requiring the user to perform additional user inputs to determine that the wireless communication functionality is disabled that is preventing the electronic device from communicating with the remote locator object and then perform additional inputs to enable the wireless communication functionality), which additionally reduces power usage and improves battery life of the electronic device by enabling the user to use the electronic device more quickly and efficiently, while reducing errors in usage. 
[00398] in some embodiments, in accordance with a determination that the remote locator object satisfies one or more second criteria, the respective user interface includes a second respective user interface element that includes second information about the remote locator object (1150), such as user interface 1012 including module 1018 and module 1028 in fig. 10f (e.g., different indications are associated with different criteria that cause the respective indication to be displayed in the user interface). [00399] thus, in some embodiments, if the criteria for a first respective indication are satisfied, the user interface includes the first respective indication and if the criteria for a second respective indication are satisfied, the user interface includes the second respective indication (e.g., in addition to the first respective indication). in some embodiments, multiple indications are displayed in the user interface if their respective criteria are satisfied. for example, the user interface optionally includes both an indication that a wireless communication protocol is disabled and an indication that the location of the remote locator object is being shared with another user (e.g., if the criteria for displaying the indication that a wireless communication protocol is disabled are satisfied and the criteria for displaying an indication that the remote locator object is being shared with another user are satisfied). [00400] in some embodiments, in accordance with a determination that the remote locator object does not satisfy the one or more second criteria, the respective user interface does not include the second respective user interface element that includes second information about the remote locator object (1152), such as in fig. 10e (e.g., if the conditions associated with a respective indication are not satisfied, do not include the respective indication in the user interface). 
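The per-indication behavior described above can be modeled as independent predicates evaluated against the object's current state, so that zero, one, or several indications are included at once. The sketch below is illustrative; the status fields, labels, and criteria are assumptions, not the actual implementation:

```python
from dataclasses import dataclass

@dataclass
class ObjectStatus:
    """Hypothetical snapshot of a remote locator object's state."""
    bluetooth_enabled: bool
    battery_low: bool
    shared_with_others: bool

# Each candidate indication pairs a label with the criteria gating its display;
# the labels and fields here are illustrative, not the actual UI strings.
INDICATIONS = [
    ("Bluetooth is off", lambda s: not s.bluetooth_enabled),
    ("Battery is low", lambda s: s.battery_low),
    ("Location shared with others", lambda s: s.shared_with_others),
]

def indications_to_display(status: ObjectStatus) -> list:
    """Return every indication whose criteria are satisfied: zero, one, or many."""
    return [label for label, criteria in INDICATIONS if criteria(status)]
```

Because each predicate is evaluated independently, multiple indications naturally appear together when several conditions hold, matching the behavior described in paragraph [00399].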
[00401] the above-described manner of displaying one or more indications (e.g., in response to determining that the respective criteria for a respective indication are satisfied) provides a quick and efficient way of providing information to the user (e.g., by displaying multiple indications, without being limited to displaying only one indication at a time), which simplifies the interaction between the user and the electronic device and enhances the operability of the electronic device and makes the user-device interface more efficient (e.g., without requiring the user to perform additional user inputs to view a plurality of status information), which additionally reduces power usage and improves battery life of the electronic device by enabling the user to use the electronic device more quickly and efficiently, while reducing errors in usage. [00402] in some embodiments, while displaying, via the display generation component, the respective user interface element, the electronic device receives (1154), via the one or more input devices, an input directed to the respective user interface element, such as in fig. 10f (e.g., while displaying an indication in the user interface, receiving a user input selecting the indication). [00403] in some embodiments, in response to receiving the input directed to the respective user interface element, the electronic device changes (1156) a setting associated with the remote locator object, such as in fig. 10g (e.g., changing one or more settings of the electronic device and/or one or more settings of the remote locator object). [00404] in some embodiments, the indication is associated with a respective setting of the electronic device or the remote locator object and selecting the indication initiates a process to change the respective setting. for example, if the indication indicates that a wireless communication protocol is disabled, selecting the indication initiates a process to enable the wireless communication protocol. 
in some embodiments, if the indication indicates that the location of the remote locator object is shared with another user, selecting the indication initiates a process to change the sharing settings of the remote locator object (e.g., disable sharing, enable more sharing, change the sharing duration, etc.). [00405] the above-described manner of changing a setting associated with the remote locator object (e.g., in response to selection of an indication associated with the setting) provides a quick and efficient way of changing a setting relevant to the functionality of the remote locator object, which simplifies the interaction between the user and the electronic device and enhances the operability of the electronic device and makes the user-device interface more efficient (e.g., without requiring the user to perform additional user inputs to navigate to a different user interface to change the relevant settings associated with the displayed indication), which additionally reduces power usage and improves battery life of the electronic device by enabling the user to use the electronic device more quickly and efficiently, while reducing errors in usage. [00406] in some embodiments, changing the setting associated with the remote locator object includes enabling a wireless communication functionality of the electronic device to communicate with the remote locator object (1158), such as in fig. 10g (e.g., the user interface includes an indication associated with a wireless communication protocol that is used to communicate with the remote locator object (e.g., bluetooth, wifi, etc.), and selecting the indication initiates a process to change a setting associated with the respective wireless communication protocol, such as to enable or disable the respective wireless communication protocol).
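A minimal sketch of dispatching a selected indication to the setting change it initiates, under the assumption that settings are modeled as a simple dictionary; the indication identifiers and setting keys are hypothetical:

```python
def handle_indication_selection(indication_id: str, settings: dict) -> dict:
    """Map a tapped indication to the setting change it initiates.

    Returns an updated copy of the settings; unrecognized indications
    leave the settings unchanged.
    """
    if indication_id == "bluetooth_disabled":
        # e.g., enable the wireless protocol used to reach the locator object
        return {**settings, "bluetooth_enabled": True}
    if indication_id == "sharing_active":
        # e.g., end the share when the user chooses to stop sharing
        return {**settings, "sharing_enabled": False}
    return settings
```

Returning a copy rather than mutating in place keeps the original settings available for comparison, e.g., to decide whether the indication should cease to be displayed.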
[00407] the above-described manner of changing a setting associated with a wireless communication protocol (e.g., in response to selection of an indication associated with the setting) provides a quick and efficient way of enabling communication with the remote locator object by enabling the setting associated with the wireless communication protocol, which simplifies the interaction between the user and the electronic device and enhances the operability of the electronic device and makes the user-device interface more efficient (e.g., without requiring the user to perform additional user inputs to navigate to a different user interface to change the settings associated with the wireless communication protocol), which additionally reduces power usage and improves battery life of the electronic device by enabling the user to use the electronic device more quickly and efficiently, while reducing errors in usage. [00408] in some embodiments, while displaying the respective user interface that includes the respective user interface element, the electronic device determines (1160) that the remote locator object no longer satisfies the one or more first criteria, such as in fig. 10g (e.g., while displaying a respective indication that is displayed in response to determining that first criteria are satisfied, determining that the first criteria are no longer satisfied). for example, if an indication indicates that a wireless communication protocol is disabled, if the device determines that the wireless communication protocol has been enabled, for example, via a settings user interface other than the respective user interface. [00409] in some embodiments, in response to determining that the remote locator object no longer satisfies the one or more first criteria, the electronic device ceases (1162) to display the respective user interface element, such as in fig. 
10g (e.g., in response to determining that the first criteria are no longer satisfied, automatically (e.g., without receiving a user input for doing so) ceasing display of the respective indication associated with the first criteria (e.g., optionally with an animation of the respective indication being removed)). [00410] for example, in response to determining that the wireless communication protocol has been enabled, the device ceases display of the indication that the wireless communication protocol is disabled. in some embodiments, if an indication indicates that the location of the remote locator object is shared with another user for a duration of time, after the duration of time has elapsed, the indication automatically ceases to be displayed. in some embodiments, if the indication indicates that the battery level of the remote locator object is low, then in response to a determination that the battery level of the remote locator object is not low (e.g., due to the user replacing the battery), the indication automatically ceases to be displayed. in some embodiments, the indication automatically ceases to be displayed even if the condition that caused display of the indication is resolved in a manner independent of the indication (e.g., via a process other than selection of the indication). in some embodiments, the indication remains displayed until the device navigates away from the user interface and navigates back to the user interface (e.g., refreshes display of the user interface). in such embodiments, in response to navigating back to the user interface, the information included in the user interface is refreshed such that if the criteria associated with a respective indication have ceased to be met, the user interface no longer includes the respective indication.
in some embodiments, if the criteria associated with a respective indication ceased to be met while the device is not displaying the user interface, then at a future time when device 500 displays the user interface, the user interface optionally does not include the respective indication. thus, in some embodiments, when the device displays the user interface (e.g., when the device begins to display the user interface from not displaying the user interface, or optionally while the device is displaying the user interface), the device optionally determines whether the criteria associated with the one or more indications are met and either includes or does not include the indications accordingly. [00411] the above-described manner of ceasing display of a respective indication (e.g., automatically, in response to a determination that the criteria that caused display of the respective indications are no longer satisfied) provides a quick and efficient way of providing the most updated information about the remote locator object (e.g., by automatically displaying relevant indications and removing indications that are no longer relevant), which simplifies the interaction between the user and the electronic device and enhances the operability of the electronic device and makes the user-device interface more efficient (e.g., without requiring the user to perform additional user inputs to determine whether an indication is still applicable or valid), which additionally reduces power usage and improves battery life of the electronic device by enabling the user to use the electronic device more quickly and efficiently, while reducing errors in usage. [00412] it should be understood that the particular order in which the operations in figs. 11a-11i have been described is merely exemplary and is not intended to indicate that the described order is the only order in which the operations could be performed.
one of ordinary skill in the art would recognize various ways to reorder the operations described herein. additionally, it should be noted that details of other processes described herein with respect to other methods described herein (e.g., methods 700, 900, and 1300) are also applicable in an analogous manner to method 1100 described above with respect to figs. 11a-11i. for example, providing information associated with a remote locator object described above with reference to method 1100 optionally has one or more of the characteristics of providing user interfaces for defining identifiers for remote locator objects, locating a remote locator object, displaying notifications associated with a trackable device, etc., described herein with reference to other methods described herein (e.g., methods 700, 900, and 1300). for brevity, these details are not repeated here. [00413] the operations in the information processing methods described above are, optionally, implemented by running one or more functional modules in an information processing apparatus such as general purpose processors (e.g., as described with respect to figs. 1a-1b, 3, 5a-5h) or application specific chips. further, the operations described above with reference to figs. 11a-11i are, optionally, implemented by components depicted in figs. 1a-1b. for example, displaying operations 1138, 1146 and receiving operations 1102, 1136, and 1154 are, optionally, implemented by event sorter 170, event recognizer 180, and event handler 190. event monitor 171 in event sorter 170 detects a contact on touch screen 504, and event dispatcher module 174 delivers the event information to application 136-1. a respective event recognizer 180 of application 136-1 compares the event information to respective event definitions 186, and determines whether a first contact at a first location on the touch screen corresponds to a predefined event or sub-event, such as selection of an object on a user interface.
when a respective predefined event or sub-event is detected, event recognizer 180 activates an event handler 190 associated with the detection of the event or sub-event. event handler 190 optionally utilizes or calls data updater 176 or object updater 177 to update the application internal state 192. in some embodiments, event handler 190 accesses a respective gui updater 178 to update what is displayed by the application. similarly, it would be clear to a person having ordinary skill in the art how other processes can be implemented based on the components depicted in figs. 1a-1b.

displaying notifications associated with a trackable device

[00414] users interact with electronic devices in many different manners. in some embodiments, an electronic device is able to track the location of a trackable device (e.g., a remote locator object, a trackable phone, a trackable tablet, a trackable headphone, a trackable media player, etc.). the embodiments described below provide ways in which an electronic device displays notifications indicating that a trackable device may be unexpectedly tracking the location of the electronic device associated with a user, thus enhancing the user’s interactions with the electronic device. enhancing interactions with a device reduces the amount of time needed by a user to perform operations, and thus reduces the power usage of the device and increases battery life for battery-powered devices. it is understood that people use devices. when a person uses a device, that person is optionally referred to as a user of the device. [00415] figs. 12a-12g illustrate exemplary ways in which an electronic device 500 displays notifications associated with a trackable device in accordance with some embodiments of the disclosure. the embodiments in these figures are used to illustrate the processes described below, including the processes described with reference to figs. 13a-13f.
[00416] in some embodiments, an electronic device (e.g., electronic device 500) optionally determines that a trackable object (such as the remote locator objects described above with respect to methods 700, 900, and 1100) is unexpectedly following the location of the electronic device. in some embodiments, a trackable object is “unexpectedly” following the location of the electronic device if the trackable object has not been explicitly approved by the user of electronic device to follow the location of the electronic device and/or if the trackable object is associated with a user that does not have a pre-existing relationship with the user (e.g., is not a family member of the user, and/or is not an existing contact of the user, etc.). in some embodiments, if a trackable object (e.g., either known or unknown to the user) appears to be following the location of the device unexpectedly, the electronic device optionally determines that an alert should be presented indicating that a trackable object is or has been following the location of the user and that the owner of the trackable object (e.g., a user whose account is associated with or paired with the trackable object) is able to access the location of the trackable object. providing unauthorized tracking alerts provides privacy and security benefits to the user of the electronic device. in some embodiments, whether and when to present an alert requires a balance to reduce false positives and false negatives. in addition, how often to present an alert can affect the efficacy of the alert itself. for example, presenting too many alerts too often can cause a user to disable alerts altogether, or ignore alerts when they are presented, thus reducing or eliminating the benefits of the alerts. the embodiments below describe example situations in which the electronic device determines that an alert should be presented, and situations in which an alert is not presented. [00417] fig. 
12a illustrates scenario 1201 in which electronic device 500 presents an unauthorized tracking alert. in fig. 12a, at time t0 (represented by map 1202a), the electronic device (e.g., device 500) is located at geographic position 1204a and a trackable object is located at geographic position 1206a. in some embodiments, geographic position 1204a and geographic position 1206a are within a threshold distance from each other (e.g., within 1 foot, 3 feet, 10 feet, 50 feet, 100 feet, etc.). in some embodiments, device 500 determines at time t0 that the trackable object is within the threshold distance of device 500. in some embodiments, device 500 determines that the trackable object is within the threshold distance of device 500 by wirelessly polling the environment around device 500 to determine whether trackable objects are in the environment around device 500, or any other suitable method of wirelessly discovering the existence of and/or location of electronic devices. in some embodiments, in response to wirelessly polling the trackable object, device 500 receives a unique identifier (e.g., serial number, identifier, etc.) of the trackable object. in some embodiments, device 500 uses the unique identifier received from the trackable object during this process to determine whether the trackable object that is detected to be near device 500 is the same trackable object (e.g., the same trackable object that was previously observed and/or detected). in some embodiments, a first criterion for determining whether to provide an alert that a trackable object is tracking the location of a device is that the trackable object is within the threshold distance of device 500. thus, as shown in fig. 12a, at time t0, the first criterion is satisfied. [00418] in fig.
12a, at time t1 (represented by map 1202b), which is a time after time t0, device 500 and the trackable object have both moved to a new geographic location: device 500 to geographic location 1204b and the trackable object to geographic location 1206b. in some embodiments, geographic location 1204b and geographic location 1206b are within a threshold distance from each other (e.g., within 1 foot, 3 feet, 10 feet, 50 feet, 100 feet, etc.). in some embodiments, a second criterion for determining whether to provide an alert that a trackable object is tracking the location of a device is that the trackable object is within a threshold distance of device 500 after the electronic device has moved (and/or remains within the threshold distance of device 500 while device 500 is moving) by more than a threshold amount (e.g., 20 feet, 50 feet, 100 feet, 500 feet, ½ mile, etc.). in some embodiments, the threshold amount that device 500 has to move to satisfy the second criterion is more than the threshold distance requirement for the distance between device 500 and the trackable object (e.g., device 500 has to move by more than the distance between device 500 and the trackable object that satisfies the first criteria). [00419] in some embodiments, the threshold distance requirement for the distance between device 500 and the trackable object after device 500 has moved is different from the threshold distance requirement from before the movement of device 500 (e.g., more or less). in some embodiments, the threshold distance requirement for the distance between the electronic device and the trackable object after the electronic device has moved is the same as the threshold distance requirement from before the electronic device has moved. thus, as shown in fig. 12a, at time t1, the second criterion is satisfied.
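The first and second criteria above reduce to geographic-distance comparisons. The following is a minimal sketch, not part of the specification: the function names, the use of the haversine formula, and the concrete thresholds of 100 feet and 500 feet are illustrative assumptions (the document only gives example ranges).

```python
import math

# Illustrative thresholds; the specification gives example ranges, not fixed values.
NEAR_FEET = 100    # "within a threshold distance" for the first criterion
MOVED_FEET = 500   # device movement required by the second criterion

def haversine_feet(lat1, lon1, lat2, lon2):
    """Great-circle distance between two latitude/longitude points, in feet."""
    earth_radius_feet = 20_902_231  # ~6371 km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlat = math.radians(lat2 - lat1)
    dlon = math.radians(lon2 - lon1)
    a = math.sin(dlat / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2
    return 2 * earth_radius_feet * math.asin(math.sqrt(a))

def first_criterion(device_pos, object_pos):
    """The trackable object is within the threshold distance of the device."""
    return haversine_feet(*device_pos, *object_pos) <= NEAR_FEET

def second_criterion(device_start, device_now, object_now):
    """The device moved more than MOVED_FEET while the object stayed near it."""
    moved = haversine_feet(*device_start, *device_now) > MOVED_FEET
    return moved and first_criterion(device_now, object_now)
```

As in the maps of fig. 12a, the second criterion only passes when the device's own displacement exceeds its movement threshold while the object's distance to the device stays under the proximity threshold.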
in some embodiments, the second criterion ensures that the locator object is truly following the electronic device, rather than simply having been placed at a static location near the electronic device (which will be described in more detail below with respect to fig. 12b). [00420] in fig. 12a, at time t2 (represented by map 1202c), which is a time after time t1, both device 500 and the trackable object have remained at their previous locations (e.g., geographic location 1204b and 1206b, respectively, the locations at time t1, or within a threshold distance from the locations at t1, such as 10 feet, 50 feet, 100 feet, 200 feet, 500 feet, etc.) for more than a threshold amount of time (e.g., 1 minute, 5 minutes, 30 minutes, 60 minutes, etc.). for example, in fig. 12a, time t2 is optionally 1 hour after time t1. in some embodiments, a third criterion for determining whether to provide an alert that a trackable object is tracking the location of a device is that the trackable object remains within the threshold distance of the electronic device (e.g., after device 500 has moved by more than a threshold amount) for at least the threshold amount of time during which device 500 moves by less than a threshold distance (e.g., less than 10 feet, 30 feet, 50 feet, 100 feet, 500 feet, etc.). in some embodiments, device 500 determines that the trackable object remains within the threshold distance of device 500 by continuously or periodically polling the trackable object to determine whether the trackable object remains within the threshold distance of device 500. in some embodiments, device 500 polls the trackable object every 30 minutes, every hour, every two hours, every four hours, every six hours, etc.
as discussed above, device 500 determines that it is the same trackable object that has been tracking device 500 by determining that the identifier of the trackable object (which is optionally received in response to polling and/or querying the trackable object) is the same identifier that was received during previous polling and/or querying steps. thus, as shown in fig. 12a, at time t2, the third criterion is satisfied. as will be described below, after the third criterion is satisfied, device 500 is optionally able to move by more than the threshold distance without causing the third criterion to no longer be satisfied. in some embodiments, the third criterion ensures that the trackable object is truly following the electronic device, rather than simply moving along the same path as the electronic device (which will be described in more detail below with respect to fig. 12c). [00421] in some embodiments, after the third criterion is satisfied, device 500 optionally does not display a notification until a threshold amount of time has elapsed (e.g., 2 hours, 4 hours, 6 hours, 12 hours, 24 hours, etc.) while the trackable object remains within a threshold distance of device 500 (e.g., 10 feet, 50 feet, 100 feet, 200 feet, 500 feet, etc., optionally the same or different than the other threshold distances), optionally without regard to whether device 500 moves by more than the threshold amount described above. thus, in some embodiments, a fourth criterion for determining whether to provide an alert that a trackable object is tracking the location of a device is that the trackable object remains within the threshold distance of device 500 for at least the threshold amount of time (e.g., 1 hour, 2 hours, 4 hours, 6 hours, 12 hours, 24 hours, etc.). in some embodiments, the fourth criterion ensures that the trackable object is truly following device 500 and reduces the frequency of providing alerts to the user (e.g., to avoid producing too many alerts). 
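Taken together, the first through fourth criteria described above amount to a small state machine evaluated per trackable object (matched across polls by its unique identifier). The sketch below is a hypothetical summary, not the specification's implementation; the class, its field names, and the fixed thresholds are assumptions (the document lists ranges such as 1 to 24 hours rather than fixed values).

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical fixed thresholds; the specification describes example ranges instead.
MOVE_FEET = 500           # device movement required by the second criterion
DWELL_SECONDS = 3600      # stationary dwell required by the third criterion
DELAY_SECONDS = 4 * 3600  # total time near the device required by the fourth criterion

@dataclass
class TrackingAlertEvaluator:
    """Evaluates one trackable object, identified by its unique identifier."""
    object_id: str
    first_seen_near: Optional[float] = None  # criterion 1: when the object was first near
    moved_while_near: bool = False           # criterion 2
    dwell_satisfied: bool = False            # criterion 3

    def observe(self, now, object_near, device_moved_feet, device_stationary_seconds):
        """Feed one observation; returns True when an alert should be presented."""
        if not object_near:
            return False  # object out of range: the criteria do not advance
        if self.first_seen_near is None:
            self.first_seen_near = now                       # criterion 1 satisfied
        if device_moved_feet > MOVE_FEET:
            self.moved_while_near = True                     # criterion 2 satisfied
        if self.moved_while_near and device_stationary_seconds >= DWELL_SECONDS:
            self.dwell_satisfied = True                      # criterion 3 satisfied
        # criterion 4: the object has remained near the device long enough overall
        return self.dwell_satisfied and (now - self.first_seen_near) >= DELAY_SECONDS
```

In the scenario of fig. 12b the object never stays near a moving device, so `moved_while_near` never becomes true; in fig. 12c the object departs before the stationary dwell completes, so `dwell_satisfied` never becomes true; in either case no alert fires.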
[00422] thus, in fig. 12a, at time t3, which is a time after time t2, in accordance with and/or in response to a determination that the one or more criteria are satisfied (e.g., one or more of all of the first, second, third, and fourth criteria described above), device 500 displays notification 1210 on user interface 1208 (e.g., overlaid on top of the user interface that was displayed before notification 1210 was displayed) that indicates that an unknown locator object is tracking the location of device 500 and that the owner of the unknown locator object (e.g., the user whose account is associated with the unknown locator object) is able to see the location of the unknown locator. in some embodiments, as discussed above, the one or more criteria are satisfied if the unknown locator is within a first threshold distance of device 500, remains within a second threshold distance from device 500 while device 500 moves for more than a threshold distance, and remains within the second threshold distance from device 500 while device 500 does not move (optionally for at least a threshold amount of time, such as 5 minutes, 10 minutes, 30 minutes, 1 hour, 2 hours, etc.). [00423] it is understood that any of the above described criterion can be optional and/or the order of the criterion can be changed. for example, the electronic device displays notification 1210 in response to the third criterion having been satisfied (e.g., after the first and second criterion are satisfied), without requiring the fourth criterion to be satisfied (e.g., the fourth criterion is optional and notification 1210 is displayed when and/or in response to the third criterion being satisfied, without waiting for the fourth criterion to be satisfied). [00424] in some embodiments, the one or more criteria for determining whether to provide an alert that a trackable object is tracking the location of device 500 includes additional criterion not discussed above. 
for example, in some embodiments, the one or more criteria includes a criterion that the trackable object is separated from its owner (e.g., more than a threshold distance from the owner’s device, such as more than 20 feet, 50 feet, 100 feet, 500 feet, etc.). in some embodiments, the one or more criteria do not include a criterion that the trackable object is separated from its owner (e.g., the one or more criteria can be satisfied even if the trackable object is not separated from its owner). in some embodiments, the one or more criteria optionally include a criterion that the trackable object is not owned by a contact of the user and/or not owned by a family member of the user of device 500 (e.g., not owned by a user that is in the user’s family group). [00425] fig. 12b illustrates scenario 1211 in which electronic device 500 does not present an unauthorized tracking alert. in fig. 12b, at time t0 (represented by map 1212a), the electronic device (e.g., device 500) is located at geographic position 1214a and a trackable object is located at geographic position 1216a. in some embodiments, geographic position 1214a and geographic position 1216a are within a threshold distance of each other (e.g., within 1 foot, 3 feet, 10 feet, 50 feet, 100 feet, etc.). thus, at time t0, the first criterion previously described is satisfied. [00426] in fig. 12b, at time t1 (represented by map 1212b), which is a time after time t0, device 500 has moved to geographic location 1214b while the trackable object remained at geographic location 1216a (or optionally moves to a different geographic location that is farther than a threshold distance from device 500). in some embodiments, geographic location 1214b is farther than a threshold distance from geographic location 1216a (e.g., more than 1 foot, 3 feet, 10 feet, 50 feet, 100 feet, etc.). thus, because device 500 is more than the threshold distance from the trackable object, the second criterion previously described is not satisfied.
in some embodiments, in accordance with and/or in response to a determination that the second criteria is not satisfied, device 500 does not display a notification (e.g., such as notification 1210 described above with respect to fig. 12a) that indicates that an unknown locator is tracking the location of device 500. in some embodiments, as discussed above, the one or more criteria are not satisfied if the unknown locator does not remain within the second threshold distance from device 500 while device 500 moves for more than a threshold distance. [00427] as shown above, the second criterion provides the benefit of reducing false positives, for example, if an unknown trackable object is placed at a stationary location that happens to be within a threshold distance of device 500 or if an unknown trackable object is in the possession of the owner of the object and is not following the user. [00428] fig. 12c illustrates scenario 1221 in which electronic device 500 does not present an unauthorized tracking alert. in fig. 12c, at time t0 (represented by map 1222a), the electronic device (e.g., device 500) is located at geographic position 1224a and a trackable object is located at geographic position 1226a. in some embodiments, geographic position 1224a and geographic position 1226a are within a threshold distance of each other (e.g., within 1 foot, 3 feet, 10 feet, 50 feet, 100 feet, etc.). thus, at time t0, the first criterion previously described is satisfied. [00429] in fig. 12c, at time t1 (represented by map 1222b), which is a time after time t0, the electronic device and the trackable object have both moved to a new geographic location: the electronic device to geographic location 1224b and the trackable object to geographic location 1226b. in some embodiments, geographic location 1224b and geographic location 1226b are within a threshold distance from each other (e.g., within 1 foot, 3 feet, 10 feet, 50 feet, 100 feet, etc.).
thus, at time t1, the second criterion is satisfied. [00430] in fig. 12c, at time t2 (represented by map 1222c), which is a time after time t1, device 500 remained at its previous location (e.g., geographic location 1224b) while the trackable object moved to a new geographic location 1226c that is more than a threshold distance from geographic location 1224b. thus, at time t2, the trackable object is no longer within the threshold distance from device 500 and did not remain at its previous location (e.g., or within a threshold distance from its previous location) for more than the threshold amount of time. thus, at time t2, the third criterion previously described is not satisfied. in some embodiments, in accordance with and/or in response to a determination that the third criteria is not satisfied, device 500 does not display a notification (e.g., such as notification 1210 described above with respect to fig. 12a) that indicates that an unknown locator is tracking the location of device 500. in some embodiments, as discussed above, the one or more criteria are not satisfied if the unknown locator does not remain within the second threshold distance of device 500 while device 500 does not move (optionally for at least a threshold amount of time, such as 5 minutes, 10 minutes, 30 minutes, 1 hour, 2 hours, etc.). [00431] as shown above, the third criterion provides the benefit of reducing false positives, for example, if device 500 and an unknown trackable object are both traveling on a common transport (e.g., taxi, bus, subway, etc.) and the unknown trackable object happens to be within a threshold distance of device 500 (e.g., in which case, when the user exits the common transport, the unknown trackable object may continue onwards). [00432] in some embodiments, device 500 generates a notification (e.g., such as notification 1210 described above with respect to fig. 12a) before the one or more criteria have been fully satisfied.
for example, if the first and second criteria are satisfied and while waiting for the third criteria to become satisfied (e.g., due to the time duration requirement), device 500 detects that one or more early notification criteria are satisfied, which causes device 500 to generate a notification (e.g., such as notification 1210 described above with respect to fig. 12a), even though not all of the criterion of the one or more criteria have been satisfied. [00433] in some embodiments, the one or more early notification criteria are satisfied if device 500 determines that the owner of the trackable object has initiated a process to find the trackable object (e.g., in a manner similar to that described above with respect to method 900). in some embodiments, because the owner of the trackable object has initiated a process to find the trackable object, the owner of the trackable object is actively collecting and/or looking at the location of the trackable object, which potentially provides the owner with the location of device 500 (e.g., and thus the location of the user). thus, in some embodiments, device 500 provides an early notification (e.g., similar to notification 1210) to the user (e.g., earlier than otherwise would be provided, and before the normally required criteria are satisfied) in response to detecting that the owner of the trackable object has initiated a process to find the trackable object. [00434] in some embodiments, the one or more early notification criteria are additionally or alternatively satisfied if device 500 determines that device 500 is approaching one or more of the user’s safe and/or trusted locations (e.g., approaches within a threshold distance of the safe and/or trusted location, such as 200 feet, 500 feet, ¼ mile, ½ mile, 1 mile, etc.). 
in some embodiments, a user’s safe and/or trusted locations are locations previously indicated by the user as a safe and/or trusted location (e.g., the user’s home, the user’s place of work, etc.), as described previously. for example, if device 500 moves to a location that is within the threshold distance to the user’s home (which optionally has been set as a safe location) while the trackable object is within a threshold distance from device 500, device 500 optionally provides an early notification (e.g., similar to notification 1210) to the user (e.g., earlier than otherwise would be provided, and before the normally required criteria are satisfied). [00435] in some embodiments, the one or more early notification criteria are additionally or alternatively satisfied if device 500 determines that the trackable object will (e.g., is about to) change its unique identifier (e.g., or is within a threshold time before when the trackable object will change its unique identifier). in some embodiments, because device 500 optionally uses the unique identifier of the trackable object to determine whether a respective trackable object that is potentially tracking the user’s device is the same trackable object and not a different trackable object (e.g., in which case, the test(s) for determining whether to generate an alert resets for the new trackable object), if a trackable object changes its unique identifier, device 500 is optionally unable to determine whether the trackable object in question is a different trackable object or the same trackable object. thus, before the trackable object changes its unique identifier (e.g., at the time that the unique identifier is changed, 5 minutes before, 10 minutes before, 30 minutes before, an hour before, etc.), device 500 provides an early notification (e.g., similar to notification 1210) to the user (e.g., earlier than otherwise would be provided, and before the normally required criteria are satisfied). 
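The three early-notification conditions described above (owner-initiated finding, approaching a safe or trusted location, and an imminent identifier change) can be sketched as a single check. This is an illustrative assumption, not the specification's logic: the function, its parameters, and the threshold values are hypothetical, and only the requirement that the object currently be near the device is carried over from the first criterion.

```python
# Hypothetical thresholds; the specification lists example ranges.
SAFE_LOCATION_FEET = 1000        # "approaching" radius around a trusted location
ID_ROTATION_WARNING_S = 30 * 60  # warn within 30 minutes of an identifier change

def early_notification_due(object_near_device, owner_initiated_find,
                           feet_to_safe_location, seconds_until_id_rotation):
    """Returns True if the alert should fire before the normal criteria are fully met.

    Any one of the three early conditions suffices, but the trackable object
    must currently be near the device."""
    if not object_near_device:
        return False
    return (owner_initiated_find
            or feet_to_safe_location <= SAFE_LOCATION_FEET
            or seconds_until_id_rotation <= ID_ROTATION_WARNING_S)
```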
in some embodiments, trackable objects change their unique identifiers at a predetermined interval and/or at a predetermined time. thus, in some embodiments, device 500 generates an early notification to the user at or before the predetermined time and/or interval (e.g., every 3 hours, every 6 hours, every 12 hours, every 24 hours, every week, etc.). in some embodiments, device 500 is able to determine when the trackable object will change its unique identifier by querying the trackable object and/or querying an external server to determine the schedule associated with the trackable object for refreshing the unique identifier. [00436] in some embodiments, the early notification is generated only if certain criterion of the one or more criteria are satisfied when the early notification criteria is satisfied. for example, in some embodiments, the early notification criteria includes a requirement that the trackable object be within the threshold distance of the electronic device (e.g., the first criterion of the one or more criteria). in some embodiments, the early notification criteria additionally or alternatively includes a requirement that the trackable object is within a threshold distance of the electronic device after the electronic device has moved (or while the electronic device is moving) by more than a threshold amount (e.g., the second criterion of the one or more criteria). in some embodiments, the early notification criteria does not include the second criterion of the one or more criteria (e.g., the second criterion need not be satisfied for the early notification criteria to be satisfied). [00437] in some embodiments, criteria for generating an alert (e.g., the early notification criteria and/or the non-early notification criteria) includes a notification limiting and/or notification throttling feature. 
in some embodiments, even if all other criterion of the respective criteria are satisfied, device 500 only displays a predetermined maximum number of notifications (e.g., 1 notification, 3 notifications, 5 notifications, 10 notifications, etc.) for a predetermined period of time (e.g., every 1 hour, 3 hours, 6 hours, 12 hours, 24 hours, 48 hours, etc.). for example, the electronic device optionally displays a maximum of one tracking notification for each 24 hour period (e.g., in response to the first time the respective criteria are satisfied during the 24 hour period), even if the respective criteria are satisfied more than once during the 24 hour period. in some embodiments, device 500 will display notifications in response to the one or more criteria being satisfied until device 500 reaches the predetermined maximum number of notifications. in some embodiments, after reaching the predetermined maximum number of notifications, device 500 optionally will not display any further unauthorized tracking notifications until the predetermined period of time elapses. in some embodiments, the notification limiting and/or notification throttling feature is unique to a respective trackable object. for example, even if the maximum number of notifications has been reached for a first trackable object, device 500 optionally is able to display unauthorized tracking notifications for a second trackable object (e.g., if the respective criteria for the second trackable object are satisfied). in some embodiments, the notification limiting and/or notification throttling feature applies for all unauthorized tracking notifications, and applies to all trackable objects (e.g., is not unique to a respective trackable object). 
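The per-object notification throttling described above can be sketched as a sliding-window limiter. This is a hypothetical implementation: the class name and the default of one alert per 24-hour window are assumptions drawn from the examples in the text, and the per-object keying corresponds to the variant in which throttling is unique to each trackable object.

```python
from collections import defaultdict, deque

class NotificationThrottle:
    """Allows at most max_count tracking alerts per window_seconds, tracked
    per trackable-object identifier (the specification also describes a
    global variant covering all trackable objects)."""

    def __init__(self, max_count=1, window_seconds=24 * 3600):
        self.max_count = max_count
        self.window_seconds = window_seconds
        self.history = defaultdict(deque)  # object_id -> times of recent alerts

    def allow(self, object_id, now):
        """Record and permit an alert for object_id at time now, or refuse it."""
        times = self.history[object_id]
        while times and now - times[0] >= self.window_seconds:
            times.popleft()  # drop alerts that have aged out of the window
        if len(times) < self.max_count:
            times.append(now)
            return True
        return False
```

Under this policy, a second alert for the same object within the window is suppressed, while an alert for a different object is still permitted.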
in some embodiments, implementing a notification limiting and/or notification throttling feature reduces the number of potentially repetitive notifications that are presented to the user, reduces the risk that the user will disable or ignore notifications, and/or increases the likelihood that the user will engage with the notifications when they are presented. [00438] figs. 12d-12f illustrate an embodiment in which device 500 displays an indication that a trackable object is near device 500. in fig. 12d, device 500 is displaying user interface 1232 (e.g., a home screen user interface, similar to user interface 400 described above with respect to fig. 4a). in some embodiments, device 500 detects that a trackable object 1230 is near device 500. in some embodiments, trackable object 1230 is near device 500 if trackable object 1230 is within a threshold distance of device 500 (e.g., within 2 feet, 5 feet, 10 feet, 20 feet, 50 feet, etc.). in some embodiments, trackable object 1230 is near device 500 if trackable object 1230 is within an effective range of a wireless communication protocol (e.g., bluetooth, zigbee, nfc, etc.), such that device 500 is able to wirelessly communicate with trackable object 1230. [00439] in some embodiments, trackable object 1230 is any electronic device that is able to determine and/or report its geographic location to another electronic device (e.g., optionally the owner of trackable object 1230). in some embodiments, trackable object 1230 is able to determine its geographic location via one or more location identification circuitry, such as gps circuitry. in some embodiments, trackable object 1230 is able to determine its geographic location by communicating with another electronic device (e.g., such as device 500) and receiving location information from the other electronic device (e.g., the other electronic device is able to determine its own location via one or more location identification circuitry). in fig.
12d, trackable object 1230 is a pair of wireless headphones. [00440] in some embodiments, in response to and/or in accordance with a determination that trackable object 1230 is near device 500 (optionally additionally in accordance with a determination that trackable object 1230 is not paired with device 500), device 500 displays indication 1234. in some embodiments, indication 1234 is displayed at or near a respective edge and/or corner of touch screen 504 (e.g., near the top edge, near the left edge, near the top-left corner, etc.). in some embodiments, indication 1234 replaces one or more system indications that were previously displayed at the respective location of indication 1234. in some embodiments, indication 1234 indicates that device 500 has detected that a trackable device is near device 500. [00441] in fig. 12e, a user input 1203 (e.g., a tap input) is received selecting indication 1234. in some embodiments, in response to receiving user input 1203, device 500 displays user interface 1236, as shown in fig. 12f. in some embodiments, user interface 1236 is a user interface for displaying (e.g., locations of) a plurality of trackable objects, similar to user interface 636 described above with respect to fig. 6l. in some embodiments, user interface 1236 includes list 1238 that includes one or more entries of trackable items that are unknown to device 500. in some embodiments, a device is unknown to device 500 if device 500 does not have a current relationship with the respective device. for example, if the respective trackable device and/or trackable object is not paired with device 500 and/or if the respective trackable device and/or trackable object is not a device registered to the same user as the user of device 500, then the respective trackable device and/or trackable object is unknown to device 500. 
in some embodiments, additionally or alternatively, a respective trackable device and/or trackable object is unknown if the respective trackable device and/or trackable object is owned by another user (e.g., has been paired to another user’s device and/or associated with the account of another user). in some embodiments, a respective trackable device and/or trackable object is optionally unknown even if it is owned by a contact of the user (e.g., owned by someone that the user knows). [00442] in fig. 12f, list 1238 includes entry 1240-1 corresponding to bob’s headphones (e.g., trackable object 1230), and entry 1240-2 corresponding to an unknown user’s umbrella. in some embodiments, entry 1240-1 includes an indication of the owner’s name because the owner is a contact of the user (and/or because the user of device 500 is a contact of the owner). in some embodiments, entry 1240-2 does not include an indication of the owner’s name because the owner is not a contact of the user (and/or because the user of device 500 is not a contact of the owner). as shown in fig. 12f, list 1238 does not include entries for trackable objects that are known to the user and optionally only displays entries for trackable objects that are unknown to the user (e.g., optionally because user interface 1236 was displayed in response to a user input selecting indication 1234 in fig. 12e, as opposed to user interface 636 described above with respect to fig. 6l, which includes entries for known trackable objects). in some embodiments, entry 1240-1 and entry 1240-2 are selectable to display a user interface associated with the respective trackable object (e.g., to view information about and/or perform one or more functions associated with the respective trackable object, similar to the process described above with respect to method 1100).
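The classification of nearby trackable objects as "unknown", and the decision of whether to show an owner's name in a list such as list 1238, can be sketched as follows. The data-structure fields and function names are hypothetical illustrations of the rules just described, not part of the specification.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TrackableObject:
    object_id: str
    owner_account: Optional[str]  # account the object is registered to, if any
    paired_with_device: bool      # whether this device has paired with it

def is_unknown(obj, my_account):
    """An object is unknown if it is not paired with this device and not
    registered to this device's user; being owned by a contact of the user
    does not, by itself, make it known."""
    return not obj.paired_with_device and obj.owner_account != my_account

def owner_name_for_entry(obj, my_contacts):
    """Show the owner's name in the list entry only when the owner is a
    contact of the user; otherwise the owner is labeled as unknown."""
    return obj.owner_account if obj.owner_account in my_contacts else None
```

This reproduces the behavior of fig. 12f: bob's headphones are unknown but listed with bob's name, while the stranger's umbrella is listed without an owner name.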
thus, in some embodiments, device 500 is able to provide an indication that trackable objects are in the vicinity of device 500 and allow the user to see a list of the trackable objects to determine whether to take appropriate action. [00443] in fig. 12g, device 500 initiates a process to pair with trackable object 1230. for example, device 500 received a sequence of user inputs to pair with and/or connect to trackable object 1230 (e.g., via a bluetooth wireless protocol). in some embodiments, in accordance with a determination that trackable object 1230 is trackable and optionally in accordance with a determination that trackable object 1230 is trackable by a user other than the user of device 500 (e.g., trackable object 1230 is owned by a user other than the user of device 500), device 500 displays popup user interface element 1242 (e.g., optionally overlaid on at least a portion of the user interface that was displayed when popup user interface element 1242 was displayed), as shown in fig. 12g. in some embodiments, popup 1242 indicates that the device with which device 500 is pairing is a trackable object and that the owner of the trackable object will be able to see the location of the trackable object. in some embodiments, popup 1242 includes selectable option 1244-1 that is selectable to continue the pairing process and selectable option 1244-2 that is selectable to display more information about trackable objects, about trackable object 1230 (e.g., which optionally provides the user with an option to cancel the pairing process, or automatically pauses the pairing process until and/or unless the user performs an additional input to continue the pairing process), etc. in some embodiments, popup 1242 includes a selectable option to cancel the pairing process. in some embodiments, device 500 optionally displays a notification instead of popup 1242. in some embodiments, device 500 optionally displays a banner instead of popup 1242.
in some embodiments, providing an indication to the user that the trackable object with which the user is pairing is trackable informs the user that the trackable object, which the user may not know is a trackable object, is trackable such that the owner is able to see the location of the object. [00444] figs. 13a-13f are flow diagrams illustrating a method 1300 of displaying notifications associated with a trackable device in accordance with some embodiments, such as in figs. 12a-12g. the method 1300 is optionally performed at an electronic device such as device 100, device 300, device 500 as described above with reference to figs. 1a-1b, 2-3, 4a-4b and 5a-5h. some operations in method 1300 are, optionally, combined and/or the order of some operations is, optionally, changed. [00445] as described below, the method 1300 provides ways of displaying notifications associated with a trackable device. the method reduces the cognitive burden on a user when interacting with a user interface of the device of the disclosure, thereby creating a more efficient human-machine interface. for battery-operated electronic devices, increasing the efficiency of the user’s interaction with the user interface conserves power and increases the time between battery charges. [00446] in some embodiments, an electronic device in communication with one or more wireless antenna, a display generation component and one or more input devices (e.g., electronic device 500, a mobile device (e.g., a tablet, a smartphone, a media player, or a wearable device) including wireless communication circuitry, optionally in communication with one or more of a mouse (e.g., external), trackpad (optionally integrated or external), touchpad (optionally integrated or external), remote control device (e.g., external), another mobile device (e.g., separate from the electronic device), a handheld device (e.g., external), and/or a controller (e.g., external), etc.)
is near a remote locator object that is associated with a user other than a user of the electronic device (e.g., within 6 inches, 1 foot, 3 feet, 10 feet, etc. of the user and/or the electronic device), such as in fig. 12a at time t0 when device 500 is within the threshold distance from the remote locator object. [00447] in some embodiments, while the remote locator object that is associated with the user other than the user of the electronic device is near the electronic device (1302), such as in fig. 12a, in accordance with a determination that one or more first criteria are satisfied, the electronic device automatically presents (1304), without user input, a tracking alert that indicates that the remote locator object that is not associated with the user of the electronic device satisfies the one or more first criteria, such as notification 1210 in fig. 12a (e.g., generating an alert that indicates that the remote locator object is tracking or otherwise following the location of the electronic device). in some embodiments, generating the alert includes displaying a visual notification, generating an audible notification, generating a tactile output, etc. [00448] in some embodiments, the one or more first criteria include (e.g., the one or more criteria includes criterion and/or factors that indicate that an unknown or unexpected remote locator object is tracking or otherwise following the location of the user and/or the electronic device) a first criterion that is satisfied when the remote locator object has remained within a first threshold distance of the electronic device while the electronic device has moved more than a second threshold distance, wherein the second threshold distance is greater than the first threshold distance (1306), such as in fig.
12a at time t1 when device 500 moved by more than a second threshold distance while remaining within the first threshold distance from the remote locator object (e.g., more than twice the first threshold distance, more than five times the first threshold distance, more than ten times the first threshold distance) (e.g., the remote locator object remains within a threshold distance from the electronic device (e.g., 6 inches, 1 feet, 3 feet, 10 feet, etc.) while the electronic device moves or otherwise changes location by a threshold amount (e.g., the device moves by 3 feet, 50 feet, 500 feet, half a mile, 1 mile, 5 miles, etc.)) and a second criterion that is satisfied when the remote locator object has remained within a third threshold distance of the electronic device for longer than a time threshold after the electronic device moved more than the second threshold distance (1308), such as in fig. 12a at time t2 when device 500 remains within the third threshold distance from the remote locator object for longer than the time threshold (e.g., after the first criterion is satisfied, the remote locator object remains within a threshold distance of the electronic device for longer than a time threshold, such as 10 minutes, 30 minutes, 1 hour, 4 hours, 8 hours, 12 hours, etc.). [00449] in some embodiments, the first criteria includes a criterion that is satisfied if the electronic device detects a remote locator object that is not recognized by the electronic device. for example, the remote locator object is not currently paired with the electronic device or has not been paired with the electronic device in the past. in some embodiments, the first criteria includes a criterion that is satisfied if a remote locator object is not expected to be following the location of the user and/or electronic device (e.g., even if the device has previously paired with the remote locator object or has previously allowed tracking by the remote locator object).
for example, the device may have previously approved of tracking by a respective remote locator object such that the electronic device has a previous relationship with the respective remote locator object (e.g., the remote locator object is not necessarily unknown to the device), but may not yet have approved of the current instance of tracking by the respective remote locator object (e.g., the time window for a previous approval has elapsed). in some embodiments, the first criteria includes a criterion that is satisfied if a remote locator object is paired with another electronic device or is associated with a user other than the user of the electronic device (e.g., associated with another user account, another user profile, etc.). in some embodiments, the first criteria includes one or more tracking criteria that suggest that the remote locator object is tracking or otherwise following the location of the user and/or electronic device, such as the first criterion and second criterion described in further detail below. in some embodiments, the electronic device detects the presence of a remote locator object via bluetooth, wifi, nfc, wifi direct, an ad-hoc wireless network, or any other suitable wireless communication protocol. [00450] in some embodiments, the first criterion is satisfied if the remote locator object remains within the first threshold distance while the device is in motion for the second threshold distance (e.g., the remote locator object changes distance from the device but remains within threshold distance from the device). in some embodiments, the first criterion is satisfied if the remote locator object remains at the same distance from the device, within the first threshold distance, while the device is in motion for the second threshold distance (e.g., the remote locator object remains at the same distance from the device during the entirety of the movement). [00451] in some embodiments, the third threshold distance is the same as the first threshold distance.
in some embodiments, the third threshold distance is more or less than the first threshold distance. thus, in some embodiments, the one or more first criteria includes a two-part test for triggering a tracking alert to notify the user that a remote locator object may be tracking the user’s location. in some embodiments, the first part of the test determines whether the remote locator object is actually physically following the user and the second part of the test determines, after determining that the remote locator object is actually physically following the user, that the remote locator object remains following the user for a long enough time period. in some embodiments, the first part of the test determines whether the remote locator object remains with the user for a long enough time period and the second part of the test determines, after determining that the remote locator object remains with the user for a long enough time period, whether the remote locator is actually physically following the user. in some embodiments, the electronic device periodically polls the remote locator object to determine whether the remote locator object is still within the first threshold distance of the electronic device. in some embodiments, the second criteria is satisfied if the remote locator object is still within the first threshold distance of the electronic device for a threshold number of polls (e.g., 2 polls, 4 polls, 10 polls, etc.). for example, the electronic device polls the remote locator object (optionally polls for any object near the electronic device) every 2 hours and if the same remote locator object is found to be within the first threshold distance of the electronic device after four polls (e.g., after 8 hours), then the second criteria is satisfied. 
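The two-part test and the polling behavior described above can be sketched as follows. This is a minimal illustration only; the threshold values, the poll-count interpretation of the time threshold, and all names (`TrackingState`, `update`, etc.) are assumptions drawn from the example ranges in the text, not an implementation prescribed by the disclosure:

```python
from dataclasses import dataclass

# Hypothetical parameter values chosen from the example ranges in the text.
FIRST_THRESHOLD_FT = 10      # object must stay within this distance of the device
SECOND_THRESHOLD_FT = 500    # device itself must move at least this far
REQUIRED_POLLS = 4           # object must still be nearby after this many polls

@dataclass
class TrackingState:
    device_travel_ft: float = 0.0   # cumulative movement of the device
    nearby_polls: int = 0           # polls with the object nearby after part 1 is met

def update(state: TrackingState, moved_ft: float, object_distance_ft: float) -> bool:
    """One poll cycle; returns True when a tracking alert should fire."""
    if object_distance_ft > FIRST_THRESHOLD_FT:
        # Object fell out of range: restart both parts of the test.
        state.device_travel_ft = 0.0
        state.nearby_polls = 0
        return False
    state.device_travel_ft += moved_ft
    # Part 1: the device has moved far enough with the object still nearby.
    if state.device_travel_ft > SECOND_THRESHOLD_FT:
        # Part 2: the object remains nearby for enough consecutive polls.
        state.nearby_polls += 1
        return state.nearby_polls >= REQUIRED_POLLS
    return False
```

With a 2-hour poll interval, four consecutive nearby polls after the long movement correspond to the roughly 8-hour dwell example given in the text.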
[00452] in some embodiments, a display generation component is a display integrated with the electronic device (optionally a touch screen display), an external display such as a monitor, projector, or television, or a hardware component (optionally integrated or external) for projecting a user interface or causing a user interface to be visible to one or more users, etc. [00453] the above-described manner of generating an alert that a remote locator object is tracking the location of the electronic device (e.g., in accordance with a determination that the remote locator object is following the electronic device for a threshold distance and for a threshold amount of time) provides a quick and efficient way of alerting the user of a potential unauthorized tracking (e.g., without requiring the user to determine whether a remote locator object has been tracking the location of the device for far enough and long enough), which further provides privacy and security benefits to the user by alerting the user of potential unauthorized tracking, and simplifies the interaction between the user and the electronic device and enhances the operability of the electronic device and makes the user-device interface more efficient, which additionally reduces power usage and improves battery life of the electronic device by enabling the user to use the electronic device more quickly and efficiently, while reducing errors in usage. [00454] in some embodiments, while the remote locator object that is associated with the user other than the user of the electronic device is near the electronic device (1310) (e.g., within 6 inches, 1 feet, 3 feet, 10 feet, etc. of the user and/or the electronic device), in accordance with a determination that the one or more first criteria are not satisfied, the electronic device forgoes (1312) automatically presenting the tracking alert, such as in fig. 12b and fig.
12c (e.g., if the one or more first criteria are not satisfied, do not generate the alert that indicates that the remote locator object is tracking the location of the electronic device). [00455] the above-described manner of generating an alert that a remote locator object is tracking the location of the electronic device (e.g., in accordance with a determination that the remote locator object is following the electronic device for a threshold distance and for a threshold amount of time, but not generating an alert if the remote locator object is not determined to be following the device for a threshold distance and for a threshold amount of time) provides a quick and efficient way of alerting the user of a potential unauthorized tracking (e.g., by reducing the possibility of false positives and/or reducing the frequency of generating notifications, which could cause the user to ignore and/or disable unauthorized tracking notifications), which simplifies the interaction between the user and the electronic device and enhances the operability of the electronic device and makes the user-device interface more efficient, which additionally reduces power usage and improves battery life of the electronic device by enabling the user to use the electronic device more quickly and efficiently, while reducing errors in usage. [00456] in some embodiments, while the remote locator object that is associated with the user other than the user of the electronic device is near the electronic device (e.g., within 6 inches, 1 feet, 3 feet, 10 feet, etc. 
of the user and/or the electronic device) and before the one or more first criteria are satisfied (1314) (e.g., before the first criteria are satisfied that would cause generation of an alert), in accordance with a determination that one or more second criteria are satisfied, including a criterion that is satisfied when the user other than the user of the electronic device has attempted to locate the remote locator object, the electronic device automatically presents (1316), without user input, a tracking alert that indicates that the remote locator object that is not associated with the user of the electronic device satisfies the one or more second criteria, such as if device 500 determines that the owner of the remote locator object initiated a process to find the remote locator object at time t1 in fig. 12a (e.g., if one or more second criteria are satisfied, generate an alert that indicates that the remote locator object is tracking or otherwise following the location of the electronic device). [00457] in some embodiments, generating the alert includes displaying a visual notification, generating an audible notification, generating a tactile output, etc. thus, in some embodiments, an alert is generated even though the one or more first criteria are not satisfied. in some embodiments, because the first criteria are not satisfied, the confidence level that the unknown remote locator object is tracking the user is lower than if the first criteria were satisfied. in some embodiments, the one or more second criteria are satisfied before the one or more first criteria would otherwise be satisfied and thus, when the one or more second criteria are satisfied, an early warning alert is generated. for example, the one or more second criteria include a criterion that is satisfied when the electronic device approaches within a threshold distance of a trusted location, such as home or work (e.g., within 100 feet, within 500 feet, within 1 mile, within 3 miles, etc.).
in some embodiments, the one or more second criteria includes a criterion that is satisfied when the remote locator object receives a request to provide its current location information to the owner of the remote locator object, other than the user of the electronic device. in some embodiments, the owner of the remote locator object is the user whose electronic device is paired with the remote locator object and/or the user that initialized the remote locator object and has been associated with the remote locator object as the owner and who optionally is authorized to change one or more settings of the remote locator object. [00458] the above-described manner of generating an alert that a remote locator object is tracking the location of the electronic device (e.g., before the first criteria is satisfied, in accordance with a determination that the owner of the unknown remote locator object is requesting the location of the unknown remote locator object) provides a quick and efficient way of generating an early warning alert of a potential unauthorized tracking (e.g., by detecting that the owner of the remote locator object is attempting to gather the remote locator object’s location, potentially revealing the user’s current location, and generating an early warning alert), which simplifies the interaction between the user and the electronic device and enhances the operability of the electronic device and makes the user-device interface more efficient, which additionally reduces power usage and improves battery life of the electronic device by enabling the user to use the electronic device more quickly and efficiently, while reducing errors in usage. [00459] in some embodiments, while the remote locator object that is associated with the user other than the user of the electronic device is near the electronic device (e.g., within 6 inches, 1 feet, 3 feet, 10 feet, etc. 
of the user and/or device) and before the one or more first criteria are satisfied (1318) (e.g., before the first criteria are satisfied that would cause generation of an alert), in accordance with a determination that one or more second criteria are satisfied, including a criterion that is satisfied when a current location of the electronic device is within a threshold distance of a predetermined location associated with the user of the electronic device (e.g., within 100 feet, 300 feet, 500 feet, ½ mile, 1 mile, 5 miles, etc. of a trusted location (e.g., a safe zone)), the electronic device automatically presents (1320), without user input, a tracking alert that indicates that the remote locator object that is not associated with the user of the electronic device satisfies the one or more second criteria, such as if device 500 determines that device 500 is approaching the user’s home at time t1 in fig. 12a (e.g., generating an early warning alert that indicates that an unknown remote locator object is potentially tracking the user’s location). [00460] in some embodiments, a trusted location is associated with the electronic device and/or the user, such as a location defined by the user and/or the user’s contacts as the user’s home, the user’s work, the user’s school, the user’s family member’s schools, the user’s contact’s trusted locations, etc. in some embodiments, the trusted location is a location within which a remote locator object (e.g., the user’s remote locator object, which is optionally not the unknown remote locator object that is being determined as following the user) would not cause generation of an alert that the remote locator object has been separated from the user. in some embodiments, generating an early warning alert reduces the possibility that the owner of the unknown remote locator object is able to determine the location of the user’s trusted location, such as the user’s home.
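The safe-zone proximity check behind this early-warning criterion could be sketched as below. The planar coordinate model, the 500-foot radius, and the function name are illustrative assumptions only; a real implementation would use geographic coordinates and geodesic distance:

```python
import math

# Hypothetical trusted locations ("safe zones") as planar (x, y) points in feet.
TRUSTED_LOCATIONS = [(0.0, 0.0)]     # e.g., the user's home
EARLY_WARNING_RADIUS_FT = 500.0      # the text gives examples from 100 feet to 5 miles

def approaching_trusted_location(device_xy: tuple[float, float]) -> bool:
    """True when the device is within the early-warning radius of any safe zone,
    which per the text can trigger a tracking alert before the one or more
    first criteria are fully satisfied."""
    return any(math.dist(device_xy, loc) <= EARLY_WARNING_RADIUS_FT
               for loc in TRUSTED_LOCATIONS)
```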
[00461] the above-described manner of generating an early warning alert that a remote locator object is tracking the location of the electronic device (e.g., before the first criteria is satisfied, in accordance with a determination that the device is within a threshold distance of a predefined location associated with the user of the electronic device) provides a quick and efficient way of generating an early warning alert of a potential unauthorized tracking (e.g., by detecting that the user is approaching a trusted location and the owner of the unknown remote locator object may be able to determine the location of the user’s trusted location via the remote locator object, and generating an early warning alert), which simplifies the interaction between the user and the electronic device and enhances the operability of the electronic device and makes the user-device interface more efficient, which additionally reduces power usage and improves battery life of the electronic device by enabling the user to use the electronic device more quickly and efficiently, while reducing errors in usage. [00462] in some embodiments, while the remote locator object that is associated with the user other than the user of the electronic device is near the electronic device (e.g., within 6 inches, 1 feet, 3 feet, 10 feet, etc. 
of the user and/or device) and before the one or more first criteria are satisfied (1322) (e.g., before the first criteria are satisfied that would cause generation of an alert), in accordance with a determination that one or more second criteria are satisfied, including a criterion that is satisfied when a current time is within a threshold time of a new identifier being selected for the remote locator object, the electronic device automatically presents (1324), without user input, a tracking alert that indicates that the remote locator object that is not associated with the user of the electronic device satisfies the one or more second criteria, such as if device 500 in fig. 12a determines at time t1 that the remote locator object will change its unique identifier within a threshold amount of time (e.g., generate an early warning alert if the current time is within 1 minute, 5 minutes, 30 minutes, 1 hour, 3 hours, etc. of when the unknown remote locator object resets its unique identifier to a new unique identifier). [00463] in some embodiments, remote locator objects reset their unique identifiers at a predetermined interval, such as every six hours, once a day, once a week, once a month, etc. thus, in some embodiments, when a remote locator object resets its unique identifier, the remote locator object optionally appears as if it is a different remote locator object than the one that has been tracking the user’s location. in such a situation, it may be desirable to generate an early warning alert before a remote locator object resets its unique identifier so that it does not appear, to the device, as if the remote locator object has stopped following the user and a new, different remote locator object has begun following the user.
in some embodiments, resetting the unique identifier of a remote locator object to a new unique identifier prevents an unauthorized user from tracking the remote locator object because, for example, after the unique identifier is reset, a remote locator object with a new unique identifier is not able to be matched to information associated with the previous unique identifier, thus providing a security and privacy benefit to the owner of the remote locator object. [00464] the above-described manner of generating an early warning alert that a remote locator object is tracking the location of the electronic device (e.g., before the first criteria is satisfied, in accordance with a determination that the remote locator object will reset its unique identifier soon) provides a quick and efficient way of generating an early warning alert of a potential unauthorized tracking (e.g., by detecting that the remote locator object may reset its identifier soon such that the electronic device will be unable to determine whether it is the same remote locator object that is tracking the user, and generating an early warning alert), which simplifies the interaction between the user and the electronic device and enhances the operability of the electronic device and makes the user-device interface more efficient, which additionally reduces power usage and improves battery life of the electronic device by enabling the user to use the electronic device more quickly and efficiently, while reducing errors in usage. [00465] in some embodiments, the one or more first criteria include a criterion that is satisfied if no tracking alert associated with the remote locator object has been presented by the electronic device within a predefined time period (1326), such as if device 500 presents one alert every 12 hours in fig. 12a (e.g., the first criteria includes a criterion that a tracking alert has not yet been generated within a predefined time period).
[00466] for example, for each unknown remote locator object, a tracking alert is generated once every predetermined interval of time, such as once every six hours, once every 12 hours, once a day, etc. in some embodiments, managing the frequency of tracking alerts prevents too many alerts from being generated (e.g., even if multiple conditions have occurred that would otherwise be sufficient to cause generation of a tracking alert) such that a user may be tempted to ignore tracking alerts or disable tracking alerts altogether. in some embodiments, the predefined time period is the amount of time that a remote locator object maintains its unique identifier without resetting to a new unique identifier. for example, only one tracking alert is generated for a particular unique identifier. in some embodiments, when a remote locator object resets its unique identifier to a new unique identifier, the electronic device restarts the process of determining whether the remote locator object satisfies the first criteria (e.g., the device discards the data associated with the previous unique identifier and generates new data for the new unique identifier). 
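The once-per-interval throttling described above, keyed to the object's current unique identifier, could look roughly like the sketch below. The 12-hour interval and all names are assumptions for illustration; a new unique identifier starting with a clean slate matches the text's note that data for the previous identifier is discarded:

```python
ALERT_INTERVAL_S = 12 * 60 * 60          # hypothetical: at most one alert per 12 hours
_last_alert_time: dict[str, float] = {}  # last alert time, keyed by unique identifier

def should_present_alert(identifier: str, now_s: float) -> bool:
    """Allow one tracking alert per identifier per interval. When the object
    rotates to a new unique identifier, that identifier has no recorded alert
    yet, so the first alert for it is allowed."""
    last = _last_alert_time.get(identifier)
    if last is not None and now_s - last < ALERT_INTERVAL_S:
        return False
    _last_alert_time[identifier] = now_s
    return True
```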
[00467] the above-described manner of managing the frequency of tracking alerts that a remote locator object is tracking the location of the electronic device (e.g., by generating one alert for a particular interval of time) provides a quick and efficient way of limiting the number of tracking alerts that are generated, which simplifies the interaction between the user and the electronic device and enhances the operability of the electronic device and makes the user-device interface more efficient (e.g., by limiting the frequency of tracking alerts, which reduces the chances that a user will ignore or disable alerts, thus increasing the efficacy of each tracking alert), which additionally reduces power usage and improves battery life of the electronic device by enabling the user to use the electronic device more quickly and efficiently, while reducing errors in usage. [00468] in some embodiments, the first threshold distance is 10 feet (1328), such as if geographic location 1204b is within the first threshold distance from geographic location 1206b in fig. 12a (e.g., the first criterion is satisfied when the unknown remote locator object remains within 10 feet of the electronic device while the electronic device is moving by at least the second threshold distance). in some embodiments, the first criterion is satisfied when the unknown remote locator object remains within 10 feet of the electronic device during the entirety of the time when the electronic device is moving by more than the second threshold distance. in some embodiments, the first criterion is satisfied when the unknown remote locator object is within 10 feet of the electronic device after the electronic device has moved by more than the second threshold distance (e.g., optionally without regard to whether the unknown remote locator object becomes farther than 10 feet from the electronic device while the electronic device is moving).
in some embodiments, the first threshold distance is other distances such as 1 foot, 3 feet, 5 feet, 20 feet, 50 feet, 100 feet, etc. [00469] the above-described manner of generating an alert that a remote locator object is tracking the location of the electronic device (e.g., in accordance with a determination that the remote locator object is within 10 feet of the electronic device) provides a quick and efficient way of alerting the user of a potential unauthorized tracking (e.g., by requiring that the remote locator object be within 10 feet to be considered to be tracking the location of the device, which reduces the possibility of false positives and/or reducing the frequency of generating notifications, which could cause the user to ignore and/or disable unauthorized tracking notifications), which simplifies the interaction between the user and the electronic device and enhances the operability of the electronic device and makes the user-device interface more efficient, which additionally reduces power usage and improves battery life of the electronic device by enabling the user to use the electronic device more quickly and efficiently, while reducing errors in usage. [00470] in some embodiments, the third threshold distance is a value between 1 and 30 feet (1330), such as if geographic location 1204c is within the third threshold distance from geographic location 1206c in fig. 12a (e.g., the second criterion is satisfied when the unknown remote locator object remains within 10 feet of the electronic device for the threshold amount of time after the first criterion is satisfied). in some embodiments, the second criterion is satisfied when the unknown remote locator object remains within 10 feet of the electronic device during the entirety of the threshold amount of time. 
in some embodiments, the second criterion is satisfied when the unknown remote locator object is within 10 feet of the electronic device at the beginning and end of the threshold amount of time (e.g., optionally without regard to whether the unknown remote locator object becomes farther than 10 feet from the electronic device at some point during the threshold time window). in some embodiments, the third threshold distance is other distances such as 1 foot, 3 feet, 5 feet, 20 feet, 50 feet, 100 feet, etc. [00471] the above-described manner of generating an alert that a remote locator object is tracking the location of the electronic device (e.g., in accordance with a determination that the remote locator object is within 10 feet of the electronic device for at least a threshold amount of time) provides a quick and efficient way of alerting the user of a potential unauthorized tracking (e.g., by requiring that the remote locator object be within 10 feet of the device for a threshold amount of time to be considered to be tracking the location of the device, which reduces the possibility of false positives and/or reducing the frequency of generating notifications, which could cause the user to ignore and/or disable unauthorized tracking notifications), which simplifies the interaction between the user and the electronic device and enhances the operability of the electronic device and makes the user-device interface more efficient, which additionally reduces power usage and improves battery life of the electronic device by enabling the user to use the electronic device more quickly and efficiently, while reducing errors in usage. [00472] in some embodiments, the electronic device has moved more than the second threshold distance when the electronic device has moved from a first location to a second location that is more than 200 feet from the first location (1332), such as if geographic location 1204b is more than 200 feet from geographic location 1204a in fig.
12a (e.g., the first criterion is satisfied if the remote locator object remains within the first threshold distance while the electronic device is moving more than 500 feet). [00473] in some embodiments, requiring that the electronic device move at least 500 feet ensures that the remote locator object is truly following the electronic device, rather than the remote locator object having been left at a static location and the electronic device happening to be near that static location. thus, if the electronic device moves more than 500 feet and the remote locator object remains within the first threshold distance from the electronic device, then it can be determined that the remote locator object is following the electronic device because the remote locator object must have also moved by 500 feet. in some embodiments, the second threshold distance is other distances such as 200 feet, 400 feet, 800 feet, ¼ mile, ½ mile, 1 mile, etc. [00474] the above-described manner of generating an alert that a remote locator object is tracking the location of the electronic device (e.g., in accordance with a determination that the remote locator object is within a threshold distance of the electronic device while the electronic device moves by at least 500 feet) provides a quick and efficient way of alerting the user of a potential unauthorized tracking (e.g., by reducing the possibility of false positives and/or reducing the frequency of generating notifications, which could cause the user to ignore and/or disable unauthorized tracking notifications), which simplifies the interaction between the user and the electronic device and enhances the operability of the electronic device and makes the user-device interface more efficient, which additionally reduces power usage and improves battery life of the electronic device by enabling the user to use the electronic device more quickly and efficiently, while reducing errors in usage. 
[00475] in some embodiments, the one or more first criteria include a criterion that is satisfied when the remote locator object is not near a second electronic device that is associated with the user other than the user of the electronic device (1334), such as if the remote locator object in fig. 12a is separated from the device of the owner of the remote locator object (e.g., the unknown remote locator object is considered to be tracking the user only if the unknown remote locator object is separated from its owner’s device). [00476] in some embodiments, the unknown remote locator object is separated from its owner’s device if it is more than a threshold distance from the owner’s device (e.g., 5 feet, 10 feet, 50 feet, 300 feet, 500 feet, etc.) or if the remote locator object is farther than the effective distance to establish wireless communication with the owner’s device (e.g., out of bluetooth range, not connected to the same wifi network, etc.). in some embodiments, the first criteria requires that the unknown remote locator object be separated from its owner’s device while the electronic device is moving more than the second threshold distance and during the time threshold after the electronic device has moved more than the second threshold distance.
[00477] the above-described manner of generating an alert that a remote locator object is tracking the location of the electronic device (e.g., in accordance with a determination that the remote locator object is separated from its owner’s device) provides a quick and efficient way of alerting the user of a potential unauthorized tracking (e.g., by requiring that the unknown remote locator object be separated from its owner for the remote locator object to be considered to be following the user, thus reducing the possibility of false positives and/or reducing the frequency of generating notifications, which could cause the user to ignore and/or disable unauthorized tracking notifications), which simplifies the interaction between the user and the electronic device and enhances the operability of the electronic device and makes the user-device interface more efficient, which additionally reduces power usage and improves battery life of the electronic device by enabling the user to use the electronic device more quickly and efficiently, while reducing errors in usage. [00478] in some embodiments, the one or more first criteria include a criterion that is satisfied when the electronic device has moved less than a fourth threshold distance after moving more than the second threshold distance during a second time threshold (1336), such as device 500 not moving by more than the fourth threshold distance from time t1 to time t2 in fig. 12a (e.g., after moving by more than the second threshold distance, the electronic device does not move by more than a fourth threshold distance for a second threshold amount of time). [00479] in some embodiments, the fourth threshold distance is 5 feet, 10 feet, 50 feet, 100 feet, etc. in some embodiments, the second threshold amount of time is 1 minute, 5 minutes, 10 minutes, 30 minutes, 1 hour, etc.
in some embodiments, the first criteria include a requirement that the electronic device does not return to the original location where the electronic device initially detected that the unknown remote locator object is potentially tracking the electronic device (e.g., or does not return to within the second threshold distance from the original location). thus, in some embodiments, requiring that the device move by less than the fourth threshold distance during a second time threshold ensures that the remote locator object is still following the electronic device after reaching a stationary position, thus avoiding a false positive determination if the remote locator object is left at a mobile location that the user is also at. for example, if the remote locator object was left in the back seat of a taxicab that the user happens to be traveling in, the above-described requirement prevents an unauthorized tracking determination while the user is in the taxicab (e.g., due to the criterion not being satisfied until the user exits the taxicab, which optionally would cause the remote locator object to no longer be within the threshold distance of the electronic device). 
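the settle-and-stay requirement above (including the taxicab case) can be sketched as follows; the sampling model, parameter names, and the 50-foot choice are hypothetical:

```python
# hedged sketch of the "device settles while the tag stays nearby" criterion;
# the threshold value and the sampling representation are assumptions
FOURTH_THRESHOLD_FT = 50  # example value; could be 5, 10, or 100 feet

def tracking_criterion_met(distances_from_settle_point, tag_nearby_samples):
    """both lists sample the same window (the second time threshold, e.g.,
    10 minutes). the criterion is met only if the device moved less than
    the fourth threshold for the whole window while the tag stayed within
    range; a tag left in a taxicab the user is riding in fails the check
    once the user exits and the tag is no longer nearby."""
    stayed_put = all(d < FOURTH_THRESHOLD_FT for d in distances_from_settle_point)
    return stayed_put and all(tag_nearby_samples)
```

in this sketch the criterion fails either when the device keeps moving (still in the taxicab) or when the tag drops out of range (the user has walked away from it).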
[00480] the above-described manner of generating an alert that a remote locator object is tracking the location of the electronic device (e.g., in accordance with a determination that the remote locator object has moved by less than a threshold distance during a second threshold time period) provides a quick and efficient way of alerting the user of a potential unauthorized tracking (e.g., by requiring that the device remain relatively stationary for a second threshold amount of time while the remote locator object remains within the threshold distance from the device for the remote locator object to be considered to be following the user, thus reducing the possibility of false positives and/or reducing the frequency of generating notifications, which could cause the user to ignore and/or disable unauthorized tracking notifications), which simplifies the interaction between the user and the electronic device and enhances the operability of the electronic device and makes the user-device interface more efficient, which additionally reduces power usage and improves battery life of the electronic device by enabling the user to use the electronic device more quickly and efficiently, while reducing errors in usage. [00481] in some embodiments, the electronic device receives (1338), via the one or more input devices, a request to associate the electronic device with a respective object, such as in fig. 12g (e.g., a request to pair the electronic device with another electronic device (e.g., a respective object)). in some embodiments, pairing the electronic device with the respective object includes establishing a wired or wireless communication relationship (e.g., bluetooth, nfc, etc.) with the respective object. 
[00482] in some embodiments, in response to receiving the request to associate the electronic device with the respective object (1340), in accordance with a determination that the respective object satisfies one or more second criteria, including a criterion that is satisfied when the respective object is a trackable object, the electronic device automatically presents (1342) an alert that indicates that the respective object is a trackable object, such as popup 1242 in fig. 12g (e.g., if the respective object that the electronic device is attempting to pair with is an object that supports location tracking and/or has location tracking enabled, displaying an alert to notify the user that the object’s location may be tracked by the user or someone other than the user). [00483] for example, if the respective object supports location tracking and belongs to another user such that the other user is able to track the location of the respective object, then the electronic device generates an alert that the other user may be able to track the location of the object. for example, a pair of headphones may support location tracking and if the user borrows the headphones from a friend (e.g., the headphones are associated with an electronic device associated with the friend and/or the friend is marked as the owner of the headphones), then in response to pairing with the headphones, the device generates an alert to indicate that the friend may be able to track the location of the headphones. in some embodiments, the alert is presented only if the object has been configured to provide location information to the owner of the device. in some embodiments, the alert is presented even if the object has not been configured to provide location information to the owner. 
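the pairing-time decision of paragraphs [00482]-[00483] reduces to a short predicate; the record fields below are assumptions, and require_tracking_enabled selects between the two embodiments described above (alert only if configured to share location, or alert regardless):

```python
# hypothetical sketch of the "alert on pairing with a trackable object" check;
# field names are assumptions for illustration
def should_alert_on_pairing(obj, user_id, require_tracking_enabled=False):
    """alert when the object supports location tracking and belongs to a
    user other than the one pairing with it; some embodiments also require
    that location tracking actually be enabled on the object."""
    if not obj["supports_tracking"]:
        return False
    if require_tracking_enabled and not obj["tracking_enabled"]:
        return False
    return obj["owner_id"] != user_id
```

so borrowing a friend's trackable headphones would trigger the alert, while pairing with the user's own devices would not.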
[00484] the above-described manner of generating an alert when attempting to pair with an object (e.g., in accordance with a determination that the object is trackable and/or tracked by a user other than the user of the device) provides a quick and efficient way of alerting the user of a potential unexpected and/or unknown tracking, which simplifies the interaction between the user and the electronic device and enhances the operability of the electronic device and makes the user-device interface more efficient (e.g., by alerting the user that the object may be tracked by another user, which ensures that the security and/or privacy of the user is protected), which additionally reduces power usage and improves battery life of the electronic device by enabling the user to use the electronic device more quickly and efficiently, while reducing errors in usage. [00485] in some embodiments, the electronic device receives (1344), via the one or more input devices, a request to view information about one or more trackable objects in an environment of the electronic device, such as user input 1003 in fig. 12e (e.g., a user input selecting a selectable option for displaying the trackable items that are near or within a threshold distance of the device (e.g., 2 feet, 5 feet, 10 feet, 50 feet, etc.)). [00486] in some embodiments, in response to receiving the request to view the information about the one or more trackable objects in the environment of the electronic device, the electronic device displays (1346), via the display generation component, one or more representations of the one or more trackable objects in the environment of the electronic device, such as in fig. 12f (e.g., displaying representations of the objects that are trackable that are within the threshold distance of the device). 
[00487] in some embodiments, the displayed objects are those that are not currently paired with the electronic device (e.g., objects that are paired with the device are optionally not displayed). in some embodiments, the displayed objects are those that have not been shared with the user of the electronic device (e.g., objects that have been shared with the user are optionally not displayed). in some embodiments, the displayed objects are trackable objects that the user and/or the electronic device does not know about (e.g., does not have a history with, have not previously paired with, are owned by people who are not contacts of the user, etc.). in some embodiments, the representations are displayed in a representation of the map. in some embodiments, the representations are displayed in a scrollable list. in some embodiments, the representations are selectable to display a user interface associated with the corresponding trackable object (e.g., to view information about the object and/or perform one or more operations with respect to the trackable object). [00488] the above-described manner of displaying a list of the trackable objects near the device (e.g., in response to a user input requesting display of the list of trackable objects) provides a quick and efficient way of displaying the objects near the user whose locations may be tracked, which simplifies the interaction between the user and the electronic device and enhances the operability of the electronic device and makes the user-device interface more efficient (e.g., by alerting the user to the objects near the user whose locations may be tracked, potentially by someone other than the user, which ensures that the security and/or privacy of the user is protected), which additionally reduces power usage and improves battery life of the electronic device by enabling the user to use the electronic device more quickly and efficiently, while reducing errors in usage. 
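the filtering rules of paragraph [00487] (which nearby trackables to show) can be sketched as a single filter; every field name and the 50-foot value are illustrative assumptions:

```python
# hedged sketch of the nearby-trackables filter of paragraph [00487];
# the record fields and the threshold are assumptions for illustration
NEARBY_THRESHOLD_FT = 50  # example value; could be 2, 5, or 10 feet

def objects_to_display(nearby_objects, user_contacts):
    """show only trackables within the threshold that are not paired with
    this device, not shared with this user, and not owned by a contact."""
    return [
        o for o in nearby_objects
        if o["distance_ft"] <= NEARBY_THRESHOLD_FT
        and not o["paired_with_device"]
        and not o["shared_with_user"]
        and o["owner_id"] not in user_contacts
    ]
```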
[00489] in some embodiments, the one or more trackable objects include a first trackable object associated with a first representation of the one or more representations, and the first representation is displayed with a representation of a respective user, other than the user of the electronic device, associated with the first trackable object (1348), such as in fig. 12f (e.g., a respective representation of a trackable object optionally includes an indication of the name of the owner of the trackable object). [00490] for example, trackable headphones that are owned and/or tracked by bob are optionally referred to as “bob’s headphones”. in some embodiments, the respective representation displays the name of the owner only if the owner is a contact of the user. in some embodiments, the respective representation displays the name of the owner only if the respective object is paired with or has previously been paired with the electronic device. in some embodiments, the respective representation displays the name of the owner only if the owner has shared the location of the object with the user of the device. in this way, the user is able to determine the person that is potentially tracking the location of the object and optionally use this information to determine whether to unpair from the object, disable the object, move away from the object, or otherwise cause the object to be unable to track the user. in some embodiments, the respective representation does not display the name of the owner if the owner is not a contact of the user. 
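the privacy rule of paragraph [00490] (display the owner's name only under certain conditions) can be sketched as follows; the dictionary fields are assumptions, and the three conditions are the alternatives listed in the text, any one of which suffices in this sketch:

```python
# illustrative sketch of the owner-name privacy rule of paragraph [00490];
# field names are assumptions for illustration
def owner_name_to_display(obj, user):
    """return the owner's name only if the owner is a contact, the object
    is or was paired with this device, or the owner shared the object's
    location with this user; otherwise withhold it."""
    if (obj["owner_id"] in user["contacts"]
            or obj["paired_now_or_previously"]
            or obj["location_shared_with_user"]):
        return obj["owner_name"]
    return None  # privacy-preserving default
```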
[00491] the above-described manner of displaying a representation of a trackable object (e.g., with the name of the owner of the object that may be tracking the object) provides a quick and efficient way of indicating the person who may be tracking the trackable object, which simplifies the interaction between the user and the electronic device and enhances the operability of the electronic device and makes the user-device interface more efficient (e.g., by alerting the user to the person that may be tracking the user, which ensures that the security and/or privacy of the user is protected), which additionally reduces power usage and improves battery life of the electronic device by enabling the user to use the electronic device more quickly and efficiently, while reducing errors in usage. [00492] in some embodiments, in accordance with a determination that at least one trackable object is in the environment of the electronic device, the electronic device displays (1350), via the display generation component, a visual indication that at least one trackable object is in the environment of the electronic device, such as indication 1234 in fig. 12d (e.g., displaying a visual indication that an object near the electronic device (e.g., within 2 feet, 5 feet, 10 feet, 50 feet, etc.), is trackable and optionally is configured to provide location information to a user other than the user of the electronic device). in some embodiments, the visual indication is displayed at or near the top of the user interface. [00493] in some embodiments, the request to view the information about the one or more trackable objects in the environment of the electronic device comprises selection of the visual indication that at least one trackable object is in the environment of the electronic device (1352), such as user input 1203 in fig. 
12e (e.g., the visual indication is selectable to cause display of a user interface that includes a list of trackable objects that are in the vicinity of the electronic device). [00494] in some embodiments, the visual indication is displayed after the trackable object has been determined to be near the electronic device for a threshold amount of time (e.g., 5 minutes, 10 minutes, 30 minutes, 60 minutes, etc.). in some embodiments, the visual indication is displayed at a location in the user interface associated with one or more status indicators. for example, the visual indication is displayed at a location that also includes an indication of the battery level of the device, the wireless connectivity status, the date and/or time, etc. in some embodiments, the visual indication replaces one or more status indicators. [00495] the above-described manner of displaying a representation of a trackable object (e.g., by displaying a visual indication that objects around the user may be trackable, which is selectable to display a list of the objects around the user that may be trackable) provides a quick and efficient way of indicating the person who may be tracking the trackable object, which simplifies the interaction between the user and the electronic device and enhances the operability of the electronic device and makes the user-device interface more efficient (e.g., by displaying the objects near the user that may be trackable, which ensures that the security and/or privacy of the user is protected), which additionally reduces power usage and improves battery life of the electronic device by enabling the user to use the electronic device more quickly and efficiently, while reducing errors in usage. [00496] it should be understood that the particular order in which the operations in figs. 13a-13f have been described is merely exemplary and is not intended to indicate that the described order is the only order in which the operations could be performed. 
one of ordinary skill in the art would recognize various ways to reorder the operations described herein. additionally, it should be noted that details of other processes described herein with respect to other methods described herein (e.g., methods 700, 900, and 1100) are also applicable in an analogous manner to method 1300 described above with respect to figs. 13a-13f. for example, displaying notifications associated with a trackable device described above with reference to method 1300 optionally has one or more of the characteristics of providing user interfaces for defining identifiers for remote locator objects, locating a remote locator object, providing information associated with a remote locator object, etc., described herein with reference to other methods described herein (e.g., methods 700, 900, and 1100). for brevity, these details are not repeated here. [00497] the operations in the information processing methods described above are, optionally, implemented by running one or more functional modules in an information processing apparatus such as general purpose processors (e.g., as described with respect to figs. 1a-1b, 3, 5a-5h) or application specific chips. further, the operations described above with reference to figs. 13a-13f are, optionally, implemented by components depicted in figs. 1a-1b. for example, displaying operations 1346 and 1350 and receiving operations 1338 and 1344 are, optionally, implemented by event sorter 170, event recognizer 180, and event handler 190. event monitor 171 in event sorter 170 detects a contact on touch screen 504, and event dispatcher module 174 delivers the event information to application 136-1. a respective event recognizer 180 of application 136-1 compares the event information to respective event definitions 186, and determines whether a first contact at a first location on the touch screen corresponds to a predefined event or sub-event, such as selection of an object on a user interface. 
when a respective predefined event or sub-event is detected, event recognizer 180 activates an event handler 190 associated with the detection of the event or sub-event. event handler 190 optionally utilizes or calls data updater 176 or object updater 177 to update the application internal state 192. in some embodiments, event handler 190 accesses a respective gui updater 178 to update what is displayed by the application. similarly, it would be clear to a person having ordinary skill in the art how other processes can be implemented based on the components depicted in figs. 1a-1b. [00498] figs. 14a-14r illustrate an electronic device 500 displaying notifications of tracking by an unknown remote locator object. fig. 14a illustrates an exemplary device 500 that includes touch screen 504. as shown in fig. 14a, the electronic device 500 presents a lock screen user interface 1400 (e.g., a wake screen user interface). in some embodiments, lock screen user interface 1400 is the user interface that is displayed when electronic device 500 is awoken (e.g., from a sleep or locked state). in some embodiments, lock screen user interface 1400 includes notification 1402. in some embodiments, notification 1402 notifies the user that an unknown remote locator object (e.g., optionally a "tag") is tracking (e.g., following) the user's location. in some embodiments, notification 1402 hides the owner of the remote locator object's personal information, such as the label of the object and the owner's name. in some embodiments, notification 1402 indicates to the user that the owner of the unknown remote locator object is able to see the location of the remote locator object. [00499] in some embodiments, notification 1402 is displayed when electronic device 500 (e.g., or a server) determines that the remote locator object's location has been following the user's location. 
in some embodiments, the remote locator object is determined to be following the user’s location if the position of the remote locator object is the same as (or within a threshold distance of, such as 5 feet, 10 feet, 20 feet) the user’s location for a threshold amount of time (e.g., 30 minutes, 1 hour, 2 hours). in some embodiments, the remote locator object is determined to be following the user’s location if the position of the remote locator object is the same as the user’s position after moving for a threshold distance (e.g., 1 mile, 2 miles, 3 miles). in some embodiments, the remote locator object is determined to be following the user’s location if the position of the remote locator object is within a threshold distance from the user (e.g., 2 feet, 3 feet, 4 feet, 10 feet). in some embodiments, a respective remote locator object is determined to be unknown if the respective remote locator object is not associated with the user/user account of device 500 and is not being shared with the user/user account of device 500 (e.g., is associated with another user account). in some embodiments, a remote locator object that has previously been shared with the user but is not currently shared with the user is also considered to be an unknown remote locator object that would trigger tracking alerts. in some embodiments, any combination of the above can be factors or requirements for determining whether the remote locator object is following the user. [00500] it is understood that although notification 1402 is illustrated as displayed on lock screen user interface 1400, notification 1402 can be displayed on other user interfaces (e.g., in all situations in which other notifications can be displayed). [00501] in fig. 14a, user input 1403 is received selecting notification 1402. in some embodiments, in response to the user input, electronic device 500 displays user interface 1411, as shown in fig. 14b. 
in some embodiments, user interface 1411 is a card user interface that is overlaid over another user interface (e.g., such as a home screen user interface). in some embodiments, user interface 1411 includes map 1412 that indicates the current location of the user (e.g., and thus, of the remote locator object that is tracking the user). in some embodiments, user interface 1411 includes selectable options 1414-1 to 1414-3 for performing functions with respect to the remote locator object that is tracking the user. in some embodiments, selectable option 1414-1 is selectable to allow the unknown remote locator object to track the user for the rest of the day (e.g., and thus suppress future tracking alerts for the respective unknown remote locator object for the rest of the day). in some embodiments, selectable option 1414-2 is selectable to allow the unknown remote locator object to track the user indefinitely (e.g., and thus suppress all future tracking alerts for the respective unknown remote locator object). in some embodiments, selectable option 1414-3 is selectable to provide more information regarding the remote locator object. [00502] in fig. 14b, user input 1403 is received selecting selectable option 1414-1. in some embodiments, in response to the user input, device 500 initiates a process for allowing the unknown remote locator object to track the user's location for the rest of the day. in some embodiments, when the unknown remote locator object is allowed to track the user's location, tracking alerts (e.g., such as notification 1402) are no longer displayed on device 500 for the remainder of the current day. in some embodiments, after tracking by the unknown remote locator object is allowed, the unknown remote locator object is added to the user's application for tracking and finding items and is optionally displayed on user interface 1420 as an item that device 500 is tracking, such as in fig. 14c. 
in some embodiments, user interface 1420 is similar to user interface 670. in some embodiments, user interface 1420 lists item 1426-1 corresponding to the unknown remote locator object. in some embodiments, item 1426-1 indicates the length of time for which tracking alerts are suppressed (e.g., for another 8 hours and 13 minutes). in some embodiments, item 1426-1 does not reveal the name of the owner or the label of the remote locator object to preserve the privacy of the owner of (e.g., user account associated with) the remote locator object. in some embodiments, while tracking by the unknown remote locator object is allowed, the user is able to receive separation alerts if the unknown remote locator object separates from the user’s location by more than a threshold distance (e.g., 10 feet, 30 feet, 100 feet), similar to separation alert 802 described above with respect to figs. 8a-8p. [00503] in fig. 14d, user input 1403 is received selecting selectable option 1414-2 in user interface 1411. in some embodiments, in response to the user input, device 500 displays user interface 1430, as shown in fig. 14e. in some embodiments, to allow tracking indefinitely, device 500 requires the user to bring device 500 within a threshold distance (e.g., 1 inch, 3 inches, 5 inches) from the unknown remote locator object. in some embodiments, this ensures that the user has found the unknown remote locator object and/or that the user knows exactly what item is tracking the user’s location (e.g., and to not mistakenly approve the incorrect object). in some embodiments, user interface 1430 instructs the user to tap the unknown remote locator object using device 500 (e.g., bring device 500 within the threshold distance to the unknown remote locator object). in some embodiments, user interface 1430 includes an illustration 1432 of tapping the remote locator object with device 500 (e.g., a still image, a short video, an animation, etc.). 
in some embodiments, user interface 1430 includes selectable option 1434 that is selectable to cause the unknown remote locator object to emit an audible sound. [00504] in fig. 14f, the user brings device 500 within the above threshold distance to unknown remote locator object 1400. in some embodiments, in response to bringing device 500 within the threshold distance to unknown remote locator object 1400, communication is established between device 500 and unknown remote locator object 1400. in some embodiments, device 500 confirms that unknown remote locator object 1400 is the unknown remote locator object that is tracking the user's location. in some embodiments, in response to bringing device 500 within the threshold distance to unknown remote locator object 1400, device 500 initiates a process for allowing the unknown remote locator object to track the user's location for the rest of the day (e.g., or optionally until the user removes the authorization). in some embodiments, after the unknown remote locator object is allowed, the unknown remote locator object is added to user interface 1420, as shown in fig. 14g (e.g., similarly to that described above with respect to fig. 14c). in some embodiments, item 1426-1 is displayed with an indicator that the remote locator object is ignored indefinitely. in some embodiments, item 1426-1 is selectable to change the user's permission settings (e.g., such as to set a time limit on ignoring the object or to remove the authorization). [00505] in fig. 14h, user input 1403 is received selecting selectable option 1414-3 in user interface 1411. in some embodiments, in response to the user input, device 500 displays user interface 1440, as shown in fig. 14i. in some embodiments, user interface 1440 displays a representation 1442 of the remote locator object that is tracking the user. in some embodiments, representation 1442 is an icon of the remote locator object. 
in some embodiments, representation 1442 is an interactable model of the remote locator object. for example, in some embodiments, a user input on representation 1442 optionally causes representation 1442 to spin or rotate in accordance with the user input. in some embodiments, representation 1442 spins, rotates or otherwise animates on its own (e.g., without user involvement). [00506] in some embodiments, user interface 1440 includes selectable options 1444-1, 1444-2 and 1444-3. in some embodiments, selectable option 1444-1 is selectable to cause the remote locator object to emit an audible sound to enable the user to find the remote locator object. in some embodiments, selectable option 1444-2 is selectable to allow the user to ignore the remote locator object (e.g., in a similar process as described above with respect to figs. 14b- 14g). in some embodiments, selectable option 1444-3 is selectable to display instructions for disabling the remote locator object. for example, in fig. 14j, a user input 1403 is received selecting selectable option 1444-3. in some embodiments, in response to the user input, device 500 displays user interface 1450. in some embodiments, user interface 1450 displays a representation 1452 of the remote locator object. in some embodiments, representation 1452 is an animation that illustrates steps for disassembling and disabling the remote locator object (e.g., optionally removing the batteries in remote locator object), as shown in figs. 14k-14m. selection of selectable option 1454 causes device 500 to cease displaying user interface 1450 without allowing the remote locator object to track the location of the user. [00507] in some embodiments, generating an alert when motion is detected by a first device that is not in communication with a device that is configured to track the location of the first device enables a person who is unaware that the first device is near them to easily identify the first device. 
continuing to generate the alert while the first device is being moved enables the person to identify the presence of the first device, locate the first device and then remove, disable, and/or dispose of the first device to prevent unauthorized tracking by the first device. [00508] the first device could be a standalone remote locator object or a remote locator object embedded in another object such as a pair of headphones, a suitcase, a bicycle, or the like. [00509] the alert can be disabled by bringing a respective device that is capable of communicating with the first device within range (e.g., short range wireless communication range) of the first device (e.g., to display a visual/interactive unauthorized tracking alert). in response to the first device being within range of the respective device, the respective device will display an alert and selection of the alert or a portion of the alert will initiate a process to disable the motion based alert generated by the first device. [00510] figs. 15a-15e are flow diagrams illustrating a method 1500 of generating alerts in accordance with some embodiments. for example, in some embodiments, a method 1500 is performed at a first device (e.g., a remote locator object, as described with reference to methods 700, 900, 1100 and/or 1300) with one or more motion detecting sensors (e.g., a gyroscope, accelerometer, magnetometer and/or inertial measurement unit) and one or more wireless transmission elements (e.g., wireless antenna), and one or more output devices (e.g., a speaker, a tactile output device, a display). 
in some embodiments, the method includes detecting (1502), via the one or more motion detecting sensors, motion of the first device, (e.g., the first device is associated with a user account), and in response to detecting the motion of the first device (1504): in accordance with a determination that first alert criteria are met, wherein the first alert criteria include a requirement that the first device has not been in wireless communication with a second device that is capable of tracking a location of the first device (e.g., because the second device is associated with a same user account as the first device or because the second device is associated with a different user account that has accepted an explicit invitation to track the location of the first device, such as device 500 in method 700, device 500 in method 900, and/or device 500 in method 1100) within a predetermined period of time (e.g., a predetermined period of time selected from 6 to 100 hours, such as 6, 12, 18, 24, 36, 48, 72, 96, etc. hours), prior to detecting the motion (e.g., movement of the first device above a motion threshold, such as motion above an acceleration threshold, motion above a velocity threshold, and/or motion above a position/distance threshold), generating (1506) an alert via the one or more output devices (e.g., an alert generated by the speakers/etc. of the first device). in some embodiments, the alert generated by the first device is in addition to and/or independent of unauthorized tracking alerts generated by a second device based on the presence of the first device, such as described with reference to method 1300 and/or figs. 14a-14r. further, a period of time criterion used by the second device to generate alerts according to method 1300 and/or figs. 14a-14r is optionally independent of (e.g., different from) the predetermined period of time used by the first device to generate the alert via the one or more output devices. 
in some embodiments, in accordance with a determination that the first device was in wireless communication with the second device that is capable of tracking the location of the first device (e.g., associated with the same user account as the first device) within the predetermined period of time prior to detecting the motion, forgoing (1508) generating the alert via the one or more output devices. [00511] in some embodiments, the method includes after generating the alert, continuing (1510) to detect motion of the first device (e.g., motion above the motion threshold, motion above or below the motion threshold, etc.), and in response to continuing to detect motion of the first device (e.g., continuing to detect movement of the first device above the motion threshold), continuing (1512) to generate the alert via the one or more output devices. [00512] in some embodiments, the method includes after generating the alert, ceasing (1514) to detect, via the one or more motion sensors, motion of the first device (e.g., detecting movement of the first device that is below the motion threshold for at least a threshold amount of time), and in response to ceasing to detect motion of the first device, ceasing (1516) to generate the alert via the one or more output devices. [00513] in some embodiments, the alert includes one or more of, an audio alert, a haptic alert, and a visual alert (e.g., one, two or three of an audio alert, a haptic alert, or a visual alert) (1518). [00514] in some embodiments, the first device is a remote tracking device (e.g., a low energy device that does not have a display and has a battery life of more than 6 months under typical usage conditions, such as the remote locator objects described with reference to methods 700, 900, 1100 and/or 1300 and/or figs. 14a-14r) and the second device is a personal communication device (1520) (e.g., a smartphone, watch, headset, tablet, or computer, such as device 500). 
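steps (1502)-(1516) describe a small state machine on the first device itself: alert on motion when a device capable of tracking it has been out of contact too long, keep alerting while motion continues, and stop when motion ceases. a minimal sketch, assuming a 72-hour period (one value from the 6-100 hour range above) and hypothetical class and method names:

```python
# minimal sketch of the motion-triggered alert logic of steps (1502)-(1516);
# the class, method names, and the 72-hour choice are assumptions
PREDETERMINED_PERIOD_S = 72 * 3600  # selected from the 6-100 hour range

class TagAlerter:
    def __init__(self):
        self.alerting = False  # whether audio/haptic/visual output is active

    def on_motion(self, seconds_since_owner_contact, temporarily_borrowed):
        """first alert criteria: no wireless contact with a device capable
        of tracking this tag within the period, and not officially borrowed
        by another user account; otherwise forgo the alert (1508)."""
        if (seconds_since_owner_contact > PREDETERMINED_PERIOD_S
                and not temporarily_borrowed):
            self.alerting = True  # (1506); repeated calls keep it on (1512)
        return self.alerting

    def on_motion_ceased(self):
        """ceasing to detect motion ceases the alert (1514)-(1516)."""
        self.alerting = False
        return self.alerting
```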
[00515] in some embodiments, the first alert criteria include a requirement that the first device is not currently within a predetermined distance (e.g., a short range communication distance) of an electronic device that is capable of displaying alerts about the presence of the first device (1522) (e.g., alerts indicating separation of the first device from the electronic device and/or alerts that the first device is tracking the location of the electronic device, such as device 500 described with reference to method 1300 and/or figs. 14a-14r). [00516] in some embodiments, the first alert criteria include a requirement that the first device has not been temporarily associated with a second user account that is different than a first user account with which the first device is associated (1524) (e.g., the device has not been officially “borrowed” by another user that has accepted an explicit invitation to track the location of the first device, such as borrowing as described with reference to method 1300 and/or figs. 14a-14r). [00517] in some embodiments, the method includes in response to detecting motion of the first device (1526): in accordance with a determination that second alert criteria are met (e.g., different from the first alert criteria, such as the unauthorized tracking criteria of method 1300), wherein the second alert criteria include a requirement that the first device has not been in wireless communication with a second device (e.g., a second device that is capable of tracking a location of the first device (e.g., because the second device is associated with a same user account as the first device or because the second device is associated with a different user account that has accepted an explicit invitation to track the location of the first device, such as device 500 in method 700, device 500 in method 900, and/or device 500 in method 1100 and/or figs. 
14a-14r) within a predetermined period of time (e.g., a predetermined period of time selected from 6 to 100 hours, such as 6, 12, 18, 24, 36, 48, 72, 96, etc. hours) prior to detecting the motion and that the first device is currently within a predetermined distance (e.g., a short range communication distance) of a third device that is capable of displaying alerts about the presence of the first device (e.g., a personal communication device (e.g., a smartphone, watch, headset, tablet, or computer capable of generating alerts indicating separation of the first device from the third device and/or alerts that the first device is tracking the location of the third device, such as device 500 described with reference to method 1300 and/or figs. 14a-14r)), transmitting (1528), via the one or more wireless transmission elements, information to the third device that, when received by the third device, will cause the third device to output a second alert about the presence of the first device (e.g., an alert indicating that the first device is tracking the location of the third device, such as device 500 described with reference to method 1300 and/or figs. 14a-14r). 
[00518] in some embodiments, the method includes in response to detecting motion of the first device (1530): in accordance with a determination that the second alert criteria are met, wherein the second alert criteria include a requirement that the first device has not been in wireless communication with a second device within a predetermined period of time prior to detecting the motion and that the first device is currently within a predetermined distance (e.g., a short range communication distance) of a third device that is capable of displaying alerts about the presence of the first device, transmitting (1532), via the one or more wireless transmission elements, information to the third device that, when received by the third device will cause the third device to output a second alert about the presence of the first device and forgoing outputting the alert via the one or more output devices of the first device. [00519] in some embodiments, the method includes in response to detecting motion of the first device (1534): in accordance with a determination that second alert criteria are met, wherein the second alert criteria include a requirement that the first device has not been in wireless communication with a second device within a predetermined period of time prior to detecting the motion and that the first device is currently within a predetermined distance (e.g., a short-range communication distance) of a third device that is capable of displaying alerts about the presence of the first device (1536): transmitting (1538), via the one or more wireless transmission elements, information to the third device that, when received by the third device, will cause the third device to output a second alert about the presence of the first device, and outputting (1540) the alert via the one or more output devices of the first device. 
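the variants in paragraphs [00517]-[00519] differ only in whether the tracker also outputs its own alert when an alert-capable third device is nearby. a hedged sketch of that branching (names are hypothetical, not from the disclosure):

```python
def second_alert_actions(motion_detected: bool,
                         owner_seen_in_window: bool,
                         third_device_nearby: bool,
                         also_alert_locally: bool = False) -> list:
    """sketch of the second alert criteria: when an alert-capable third device
    is within short range, transmit information that makes it output an alert
    about the tracker's presence; depending on the embodiment the tracker also
    outputs its own alert (as in [00519]) or stays silent (as in [00518])."""
    actions = []
    if not motion_detected or owner_seen_in_window:
        return actions  # criteria not met: do nothing
    if third_device_nearby:
        actions.append("transmit_to_third_device")
        if also_alert_locally:
            actions.append("local_alert")
    else:
        actions.append("local_alert")  # fall back to the first-alert behavior
    return actions
```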
[00520] as described above, one aspect of the present technology is the gathering and use of data available from specific and legitimate sources to improve the ability for users to track and locate items that may be of interest to them. the present disclosure contemplates that in some instances, this gathered data may include personal information data that uniquely identifies or can be used to identify a specific person. such personal information data can include location- based data, online identifiers, demographic data, telephone numbers, email addresses, home addresses, data or records relating to a user’s health or level of fitness (e.g., vital signs measurements, medication information, exercise information), date of birth, or any other personal information. [00521] the present disclosure recognizes that the use of such personal information data, in the present technology, can be used to the benefit of users. further, other uses for personal information data that benefit the user are also contemplated by the present disclosure. in some embodiments, the personal information data can be used to identify the location of remote locator objects and/or identify the location of the user. accordingly, use of such personal information data enables users to identify, find, and otherwise interact with remote locator objects. [00522] the present disclosure contemplates that those entities responsible for the collection, analysis, disclosure, transfer, storage, or other use of such personal information data will comply with well-established privacy policies and/or privacy practices. in particular, such entities would be expected to implement and consistently apply privacy practices that are generally recognized as meeting or exceeding industry or governmental requirements for maintaining the privacy of users. further, such collection/ sharing should occur only after receiving the consent of the users or other legitimate basis specified in applicable law. 
personal information from users should be collected for legitimate uses only. such information regarding the use of personal data should be prominent and easily accessible by users, and should be updated as the collection and/or use of data changes. additionally, such entities should consider taking any needed steps for safeguarding and securing access to such personal information data and ensuring that others with access to the personal information data adhere to their privacy policies and procedures. further, such entities can subject themselves to evaluation by third parties to certify their adherence to widely accepted privacy policies and practices. in addition, policies and practices should be adapted for the particular types of personal information data being collected and/or accessed and adapted to applicable laws and standards, including jurisdiction-specific considerations that may serve to impose a higher standard. for instance, in the us, collection of or access to certain health data may be governed by federal and/or state laws, such as the health insurance portability and accountability act (hipaa); whereas health data in other countries may be subject to other regulations and policies and should be handled accordingly. [00523] despite the foregoing, the present disclosure also contemplates embodiments in which users selectively block the use of, or access to, personal information data. for example, users can opt not to collect location information from remote locator objects. in another example, users can select to limit the length that location data is maintained or entirely block the storage of location data. in addition to providing “opt in” and “opt out” options, the present disclosure contemplates providing notifications relating to the access or use of personal information. 
for instance, a user may be notified upon accessing an application that their personal information data will be accessed and then reminded again just before personal information data is accessed by the application. that is, the present disclosure contemplates that hardware and/or software elements can be provided to prevent or block access to such personal information data. [00524] risk can be minimized by limiting the collection of data and deleting data once it is no longer needed. in addition, and when applicable, including in certain health related applications, data de-identification can be used to protect a user’s privacy. de-identification may be facilitated, when appropriate, by removing identifiers, controlling the amount or specificity of data stored (e.g., collecting location data at city level rather than at an address level), controlling how data is stored (e.g., aggregating data across users), and/or other methods such as differential privacy. moreover, it is the intent of the present disclosure that personal information data should be managed and handled in a way to minimize risks of unintentional or unauthorized access or use. [00525] therefore, although the present disclosure broadly covers use of personal information data to implement one or more various disclosed embodiments, the present disclosure also contemplates that the various embodiments can also be implemented without the need for accessing such personal information data. for example, location data and notifications can be delivered to users based on aggregated non-personal information data or a bare minimum amount of personal information. that is, the various embodiments of the present technology are not rendered inoperable due to the lack of all or a portion of such personal information data. 
[00526] personally identifiable information data should be managed and handled so as to minimize risks of unintentional or unauthorized access or use, and the nature of authorized use should be clearly indicated to users. it is well understood that the use of personally identifiable information should follow privacy policies and practices that are generally recognized as meeting or exceeding industry or governmental requirements for maintaining the privacy of users. [00527] the foregoing description, for purpose of explanation, has been described with reference to specific embodiments. however, the illustrative discussions above are not intended to be exhaustive or to limit the invention to the precise forms disclosed. many modifications and variations are possible in view of the above teachings. the embodiments were chosen and described in order to best explain the principles of the invention and its practical applications, to thereby enable others skilled in the art to best use the invention and various described embodiments with various modifications as are suited to the particular use contemplated.
115-091-404-007-097
PT
[ "WO", "CN", "EP", "US" ]
A61B5/00,H05K1/03,G06F3/044,H05K1/16,H05K3/32,H05K3/46,D03D15/00,H03K17/96
2015-06-09T00:00:00
2015
[ "A61", "H05", "G06", "D03", "H03" ]
multifunctional textile sensor
the present application describes the creation of a flexible textile structure with sensing and lighting capabilities that retains the important features of a typical textile, for instance comfort, seamlessness and mechanical flexibility. three different sensing approaches, which may or may not work together in the same system, are described: a directly printed self-capacitive sensor, a knitted textile sensor and the integration of temperature/humidity bulk capacitive sensors directly on the textile. for decorative and signage lighting, two approaches that can work individually or together are used: an electroluminescent sensing device and a hybrid sensor that combines smd leds with a printed self-capacitive sensor. the sensing and lighting applications previously described can be used, as an example, inside an automobile passenger compartment, since they are easily integrated on seats with different geometries, armrests and central panels to substitute common mechanical buttons and sensing devices, creating a cleaner and more seamless environment in line with current tendencies in car interiors.
claims 1. multifunctional textile sensor comprising: - a knitted self-capacitive sensor; - printed conductive tracks; - knitted conductive yarns; - printed self-capacitive sensors; - printed electroluminescent device. 2. multifunctional textile according to the previous claim, wherein the knitted self-capacitive sensor uses a specific jersey structure or double structures, such as interlock, spacer and double faced. 3. multifunctional textile sensor according to any of the previous claims, wherein a conductive yarn is used to create the knitted self-capacitive sensor. 4. multifunctional textile sensor according to the previous claim, wherein the printed conductive tracks are connected to a very thin conventional capacitive temperature/humidity sensor, with thicknesses between 0.6 and 0.8mm, on the back of the textile structure. 5. multifunctional textile sensor according to any of the previous claims, wherein conductive yarns are directly welded to the printed circuit board that supports the capacitive temperature/humidity sensor. 6. multifunctional textile sensor according to any of the previous claims, comprising a printed self-capacitive sensor composed of single or double electrodes. 7. multifunctional textile sensor according to any of the previous claims, wherein the printed self-capacitive sensor is formed by tracks with sheet resistance comprised between 10 and 60 mΩ/sq/mil. 8. multifunctional textile sensor according to any of the previous claims, wherein the textile substrate is made of polyethersulfone, cotton, polyamide or a mixture between these fibers. 9. multifunctional textile sensor according to any of the previous claims, wherein the textile substrate comprises the use of a jersey structure or double structures. 10. multifunctional textile sensor according to any of the previous claims, comprising a polymeric membrane between the flexible textile structure and the printed elements. 11. 
multifunctional textile sensor according to any of the previous claims, comprising a printed self-capacitive sensor, used in conjunction with the electroluminescent device and placed around said device, consisting of a single printed electrode. 12. multifunctional textile sensor according to any of the previous claims, wherein the printed self-capacitive sensor comprises a support layer, printed conductive tracks, a barrier film for electrical and mechanical protection and an electronic control system coupled to the multi-structure for touch/proximity calibration and for electrical power. 13. multifunctional textile sensor according to any of the previous claims, wherein the flexible textile structure is knitted with a mix between polyethersulfone/cotton. 14. multifunctional textile sensor according to any of the previous claims, wherein the electronic system comprises a microcontroller, a sensor interface and a dc-dc converter with high noise immunity. 15. multifunctional textile sensor according to any of the previous claims, wherein the electroluminescent device is comprised of printed conductive, transparent and/or non-transparent, electroluminescent and dielectric layers/tracks, with the conductive layers having sheet resistance comprised between 1 and 500 Ω/sq/mil and the dielectric layers having a dielectric constant between 3 and 20, a barrier polymeric layer for electrical and mechanical protection and an electronic control system coupled to the textile structure for touch/proximity calibration and for electrical power supply. 16. multifunctional textile sensor according to any of the previous claims, wherein the transparent conductive layer comprises a transmittance between 60 and 90% on the visible region of the electromagnetic spectrum. 17. multifunctional textile sensor according to any of the previous claims, wherein an electromagnetic barrier is placed between the touch sensor and the electroluminescent device using printed conductive tracks. 18. 
method for printing on a sheet to sheet system, using screen printing and/or inkjet technology, on flexible substrates of the printed self-capacitive sensor used in the multifunctional textile sensor described in any of the claims 1 to 17 comprising the following steps: • elaboration of the digital design of the self-capacitive sensor that is intended to print; • printing of the conductive material over the flexible textile substrate; • thermal curing of the conductive material pattern at temperatures comprised between 80 and 140°c, for 10 to 20 minutes; • placement of leds using pick & place system; • dispensing of silver paste and/or a conductive adhesive to glue the leds to the printed sensor; • thermal curing of the dispensed conductive material at temperatures comprised between 80 and 140°c, for 10 to 20 minutes; • lamination and/or coating of the polymeric barrier material. 19. method for printing on a roll-to-roll system, using rotary screen printing technology and/or rotogravure, on flexible substrates of the printed self-capacitive sensor used in the multifunctional textile sensor described in any of the claims 1 to 17 comprising the following steps: • elaboration of the digital design of the self-capacitive sensor that is intended to print; • printing of the conductive material at speeds comprised between 0.1 and 10 m/min over the flexible textile substrate; • thermal curing of the conductive pattern at temperatures comprised between 80 and 140°c, at speeds comprised between 0.1 and 10 m/min; • placement of leds using pick & place system; • dispensing of silver paste and/or a conductive adhesive to glue the leds to the printed sensor; • thermal curing of the dispensed conductive material at temperatures comprised between 80 and 140°c, for 10 to 20 minutes; • lamination and/or coating of the polymeric barrier material at speeds comprised between 0.1 and 10 m/min. 20. 
method for printing the electroluminescent device and associated capacitive sensor on a sheet to sheet system, using screen printing technology and/or inkjet technology, used in the multifunctional textile sensor described in any of the claims 1 to 17 comprising the following steps: • printing of the transparent conductive material over the flexible textile substrate; • thermal curing of the transparent conductive layer at temperatures comprised between 80 and 100°c, for 10 to 15 minutes; • printing of the electroluminescent material over the transparent conductive layer; • thermal curing of the electroluminescent layer at temperatures comprised between 100 and 150°c, for 10 to 15 minutes; • printing of the dielectric material over the electroluminescent layer; • thermal curing of the dielectric layer at temperatures comprised between 100 and 150°c, for 10 to 15 minutes; • printing of a second dielectric material over the first dielectric layer; • thermal curing of the second dielectric layer at temperatures comprised between 100 and 150°c, for 10 to 15 minutes; • printing of a conductive layer over the second dielectric layer and of conductive tracks over the flexible substrate; • thermal curing of the conductive layer and tracks at temperatures comprised between 80 and 140°c, for 10 to 20 minutes; • lamination and/or coating of the polymeric barrier material. 21. use of the multifunctional textile described in any of the claims 1 to 17 in automotive, aeronautic, medical, sports and military areas. lisbon, 09 june 2015
description "multifunctional textile sensor" technical domain the present application describes a flexible textile structure with sensing and lighting capabilities without losing the principal features of a typical textile, such as comfort, seamlessness and mechanical flexibility. prior art the document wo2013063188a1 presents a method for producing a mutual capacitance touch sensor by flexographic printing and/or electroless plating and/or spray coating/deposition. this touch sensor is composed of two different electrodes that are printed on each side of the dielectric substrate using one or two different inks from the following group: copper, gold, nickel, tin and palladium, or alloys thereof. the technology now disclosed uses a different operating principle, typically named self-capacitance, which allows the use of only one electrode and one type of ink, making the printing production steps easier and faster. the possibility of using different kinds of printing technologies that do not involve baths that could damage the substrate, as well as the use of different textile structures as substrate, makes this application different from the above mentioned document. additionally, the possibility of using decorative lighting directly integrated on the printed self-capacitive sensor device without interfering with its functionality is also a difference that should be considered. the document us8044665b2 presents a method for producing a sensor product for electric field sensing in the form of an array or matrix of electrically conductive areas and conductors that comprises the use of several layers attached to each other. the use of different layers implies the use of different materials. this product is produced using technologies such as etching, screen printing - including flat bed or rotary, gravure, offset, flexography, inkjet printing, electrostatography, electroplating and chemical plating. 
it is also mentioned the use of polyethylene terephthalate (pet), polypropylene (pp) or polyethylene (pe) as flexible substrate and the protective foil in the form of nonwoven, fabric or foil, or a dielectric acrylic based coating. however, the use of only one electrode and one type of ink in the technology now disclosed makes the printing production steps of the device easier and faster. the possibility of using different kinds of printing technologies that do not involve steps that could damage the substrate, as well as the use of different textile structures as substrate, makes this application different from the one in the present document. the document us7719007b2 presents the construction of a multilayer flexible electronic device having the functionalities of electroluminescent lighting and touch detection. it is a multilayer stacked structure wherein the capacitive sensor layers are built on top of the electroluminescent device, requiring two electrodes and a dielectric layer between them, with one of its electrodes being shared with the electroluminescent device. the use of textiles is only mentioned as a possible source of the dielectric layer necessary for the capacitive sensor or as a possible coating. in our application, the capacitive sensing system is independent and placed around the electroluminescent system, using only one electrode. this allows the production of a simpler and thinner final device that requires the application of fewer layers. the document us6773129b2 presents a multilayer structure comprising a layer of textile on top of another substrate, such as foam, with an electroluminescent panel and touch switches disposed inside the multilayer structure. the disclosed electronic systems are of bulk nature and are simply placed inside the multilayer structure, whereas, in our application, the electronic devices are created using a textile substrate and, as such, cannot be separated from said textile. 
the document us8952610b2 presents the creation of an electroluminescent textile by printing an electroluminescent device directly on a textile substrate using screen printing technology. although the application of the electroluminescent device is similar to that disclosed in this application, our technology involves the direct printing of a self-capacitive sensor onto the same textile substrate, which can be used to turn the electroluminescent device on or off. the document ep1927825a1 presents a textile capacitive sensor electrode. the sensor includes a planar capacitive sensitive region for detecting electric field changes in the environment close to the surface of the electrode. the textile capacitive sensor comprises at least one fabric including electrically conductive carbon fibers arranged in such a way as to define the substantially planar capacitive sensitive region. the present application discloses a different process to produce the textile capacitive sensor. the document pt105517b presents electrodes based on textile substrates. this electrode is based on tubular knitted fabrics. the fabric disclosed in this application is made in an open way, allowing it to be adjusted to any format and solution. this versatility and complexity of the process allow us to create any kind of shape and to have the conductive yarn appearing only when needed, without changing the yarn on the feeders or changing the input. across the width of the fabric it is possible to make different shapes, not only along the length. this flexible textile knitting method results in a textile self-capacitive sensor different from the one disclosed in the above mentioned document. summary the present application discloses a flexible textile structure with sensing and lighting capabilities without losing the principal features of a typical textile, such as comfort, seamlessness and mechanical flexibility. 
three different sensing approaches are described in this application: a directly printed self-capacitive sensor, a knitted textile sensor and the integration of a temperature/humidity bulk capacitive sensor directly on the textile. for decorative and signage lighting, two different approaches are used: an electroluminescent sensing device and a hybrid sensor associating surface mount device light emitting diodes (smd leds) with a printed self-capacitive sensor. the sensing device comprises a printed conductive electrode responsible for sensing and an outside printed conductive track that allows the control of the electric field around the sensor, together with the incorporation of decorative red green blue (rgb) or single colour leds for lighting purposes. several problems had to be overcome during the development of the disclosed textile, namely the large open areas present in the textile web, which caused difficulties during the printing steps, and electric interference caused by the electroluminescent device when the junction of the two technologies is necessary, which caused malfunctions of the capacitive sensor. these problems were resolved by filling the gaps in the textile web, through the application of a polymeric membrane layer or by printing a thick polymeric layer, and by printing a grounded conductive track element between the electroluminescent and the sensing device, respectively. the lighting device consists of an electroluminescent device composed of several layers printed directly onto a textile substrate. these layers may be composed of materials presenting different properties, such as conductivity, electroluminescence and/or dielectric properties. 
the use of a specific knitted textile structure, typically called a jersey structure, or double structures such as interlock, spacer and double faced, for the textile sensor production allows the use of mixtures of different yarn materials, which results in a functional and flexible self-capacitive sensor with the mechanical properties of a typical textile structure and the same touch comfort. the use of hybrid solutions that include printed conductive tracks connected to a conventional capacitive temperature/humidity sensor, with thicknesses between 0.6 and 0.8mm, on the back of the textile structure allows the use of a lamination and/or heated press to fix the conventional devices to the textile structures without changing the typical stability of these materials. alternatively, conductive yarns can be directly welded to the pcb that supports the capacitive temperature/humidity sensor. it is possible to use the described multifunctional textile in every situation where a textile substrate is applied as a covering or coating for a physical structure, for example, a car seat, arm rest or central console. the disclosed multifunctional textile structure permits the substitution of common mechanical buttons with signage, reducing the visual noise and creating a cleaner environment, more in line with current tendencies. as main characteristics, the present application describes a self-capacitive printed sensor that comprises: • flexible substrate composed of different textile materials with different structure constructions; • printed conductive tracks that form the self-capacitive printed sensor, which comprise materials with sheet resistance comprised between 10 and 60 mΩ/sq/mil; • barrier polymeric layer for electrical and mechanical protection; • an electronic control system coupled to the textile structure for touch/proximity calibration and for electrical power supply. 
in an embodiment, a textile material comprises the use of synthetic or natural fibers, such as polyethersulfone (pes), cotton (co), polyamide (pa) or a mixture between these fibers. in another embodiment, the textile comprises the use of a specific jersey structure or double structures, such as interlock, spacer and double faced. it is also disclosed in the present application the use of a textile structure with a lower value of gaps in the textile web, which allows the achievement of homogeneous printed conductive tracks. in an embodiment, the elongation of the textile structure has a maximum of 30-40% in length and 60-70% across. in another embodiment, a closed and flat structure has a shrinkage maximum value of 3 or 4%. in even another embodiment, a polymeric membrane is used between the flexible textile structure and the printed tracks. in an embodiment, the polymeric membrane is made of polyethylene terephthalate (pet) and/or polyurethane (pu) and/or polyethylene naphthalate (pen) and/or polyvinyl chloride (pvc) and/or thermoplastic polyolefin (tpo). in another embodiment, the printed conductive tracks that form the self-capacitive sensor present lengths ranging between 10 and 300mm. in even another embodiment, the printed conductive track presents widths ranging between 200μm and 3000μm, forming the self-capacitance sensor. in an embodiment, the distance between the printed conductive tracks is comprised between 200μm and 10000μm. in another embodiment, the printed conductive track presents thicknesses between 20μm and 500μm, forming the self-capacitive sensor. in even another embodiment, the printed conductive tracks present a roughness between 20 and 100nm. in an embodiment, the printed conductive tracks present an object detection sensitivity between 0 and 20mm of height. in another embodiment, the printed conductive tracks work only for object and/or person detection. 
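the dimensional ranges listed in these embodiments can be collected into a simple design-rule check. this is an illustrative sketch only: the dictionary keys and function name below are not part of the disclosure.

```python
# ranges taken from the embodiments above (length in mm, lateral dimensions
# and thickness in micrometers, roughness in nanometers)
TRACK_RANGES = {
    "length_mm": (10, 300),
    "width_um": (200, 3000),
    "spacing_um": (200, 10000),
    "thickness_um": (20, 500),
    "roughness_nm": (20, 100),
}

def within_disclosed_ranges(design: dict) -> bool:
    """return True when every track parameter of a candidate design falls
    inside the range stated for it in the embodiments above."""
    return all(lo <= design[name] <= hi for name, (lo, hi) in TRACK_RANGES.items())

# a candidate track design sitting comfortably inside all ranges
candidate = {"length_mm": 150, "width_um": 1000, "spacing_um": 500,
             "thickness_um": 100, "roughness_nm": 60}
```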
in even another embodiment, the materials used in the printed conductive tracks of the self-capacitive sensor are silver and/or copper and/or aluminium and/or polymeric materials. it is also disclosed in the present application the method for printing on a sheet to sheet system, using screen printing and/or inkjet technology, on flexible substrates of the printed self-capacitive sensor device, which comprises the following steps: • elaboration of the digital design of the self-capacitive sensor that is intended to print; • printing of the conductive material over the flexible textile substrate; • thermal curing of the conductive material pattern at temperatures comprised between 80 and 140°c, for 10 to 20 minutes; • placement of leds using pick & place system; • dispensing of silver paste and/or a conductive adhesive to glue the leds to the printed sensor; • thermal curing of the dispensed conductive material at temperatures comprised between 80 and 140°c, for 10 to 20 minutes; • lamination and/or coating of the polymeric barrier material. 
it is also disclosed in the present application the method for printing, on a roll-to-roll system using rotary screen printing technology and/or rotogravure on flexible textile substrates, of the printed self-capacitive sensor device described, which comprises the following steps:
• elaboration of the digital design of the self-capacitive sensor that is intended to be printed;
• printing of the conductive material at speeds comprised between 0.1 and 10 m/min over the flexible textile substrate;
• thermal curing of the conductive pattern at temperatures comprised between 80 and 140°c, at speeds comprised between 0.1 and 10 m/min;
• placement of leds using a pick & place system;
• dispensing of silver paste and/or a conductive adhesive to glue the leds to the printed sensor;
• thermal curing of the dispensed conductive material at temperatures comprised between 80 and 140°c, for 10 to 20 minutes;
• lamination and/or coating of the polymeric barrier material at speeds comprised between 0.1 and 10 m/min.
in an embodiment, a polymeric material is laminated on top of the textile before the printing steps in all the previously mentioned methods. in another embodiment, the polymeric membrane is applied using a heated press. in even another embodiment, the polymeric membrane is applied using a hot lamination system. in an embodiment, the polymeric layer is applied using a coating technique, for example slot die. in another embodiment, the polymeric layer is applied using a printing technique, for example screen printing. in even another embodiment, the printed self-capacitive sensor works as a touch and/or proximity sensor. in an embodiment, an electroluminescent device or leds are introduced into the textile substrate and used as lighting devices. in another embodiment, one of the previously referred lighting devices is always used in conjunction with a self-capacitive sensor.
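as an illustrative aside (not part of the claimed method), the relation between the quoted roll-to-roll line speeds (0.1 to 10 m/min) and the residence time of the web inside the curing oven can be sketched as below; the 5 m oven length is a hypothetical value chosen only for the example.

```python
def dwell_time_min(oven_length_m: float, web_speed_m_per_min: float) -> float:
    """Time the moving web spends inside the curing oven, in minutes."""
    if web_speed_m_per_min <= 0:
        raise ValueError("web speed must be positive")
    return oven_length_m / web_speed_m_per_min

# With a hypothetical 5 m oven: at the slowest quoted speed (0.1 m/min)
# the web dwells 50 min; at the fastest (10 m/min) only 0.5 min, so oven
# length and speed must be matched to keep the 80-140 °C cure complete.
```

this makes explicit why the slow end of the speed range is paired with the same cure window as the batch (sheet-to-sheet) process.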
in even another embodiment, the electroluminescent device and self-capacitive sensor comprise thin conductive layers, electroluminescent and dielectric materials, applied using at least one printing and/or coating technique. in an embodiment, an electromagnetic barrier is placed between the touch sensor and the electroluminescent device using printed conductive tracks. in another embodiment, the printed conductive tracks that form the self-capacitive sensor and the electromagnetic barrier present lengths ranging between 10 and 300mm. in even another embodiment, the printed conductive tracks that form the self-capacitive sensor and electromagnetic barrier present widths ranging between 200μm and 3000μm. in another embodiment, the distance between the printed conductive tracks and printed layers of the electroluminescent device is comprised between 200μm and 10000μm. in even another embodiment, the printed conductive tracks that form the self-capacitive sensor and the electromagnetic barrier present thicknesses between 20μm and 500μm. in an embodiment, the printed conductive tracks present a roughness between 20 and 100nm. in another embodiment, the printed conductive tracks present an object detection sensitivity between 0 and 20mm of height. in even another embodiment, the printed conductive tracks work only for object and/or person detection. in an embodiment, the materials used in the printed conductive tracks of the self-capacitive sensor are silver and/or copper and/or aluminium and/or conductive polymeric materials.
in another embodiment, the electroluminescent device coupled with the self-capacitive sensor comprises:
• printed conductive, electroluminescent and dielectric layers/tracks, with the conductive layers having a sheet resistance comprised between 1 and 500 Ω/sq/mil and the dielectric layers having a dielectric constant between 3 and 20;
• a barrier polymeric layer for electrical and mechanical protection;
• an electronic control system coupled to the textile structure for touch/proximity calibration and for electrical power supply.
in even another embodiment, a transparent conductive layer is used as an electrode in the building of the electroluminescent device. in an embodiment, the transparent conductive layer presents a thickness between 5μm and 30μm. in another embodiment, the electroluminescent and dielectric layers present a thickness between 5μm and 30μm. in even another embodiment, the electroluminescent and dielectric layers present a roughness between 10 and 500nm. in an embodiment, the transparent conductive layer possesses a transmittance between 60 and 90% in the visible region of the electromagnetic spectrum.
it is also disclosed in the present application the method for printing, on a sheet-to-sheet system using screen printing and/or inkjet technology, of the electroluminescent device and associated self-capacitive sensor, which comprises the following steps:
• elaboration of the digital design of the self-capacitive sensor and the electroluminescent device that are intended to be printed;
• printing of the transparent conductive material over the flexible textile substrate;
• thermal curing of the transparent conductive layer at temperatures comprised between 80 and 100°c, for 10 to 15 minutes;
• printing of the electroluminescent material over the transparent conductive layer;
• thermal curing of the electroluminescent layer at temperatures comprised between 100 and 150°c, for 10 to 15 minutes;
• printing of the dielectric material over the electroluminescent layer;
• thermal curing of the dielectric layer at temperatures comprised between 100 and 150°c, for 10 to 15 minutes;
• printing of a second dielectric material over the first dielectric layer;
• thermal curing of the second dielectric layer at temperatures comprised between 100 and 150°c, for 10 to 15 minutes;
• printing of a conductive layer over the second dielectric layer and of conductive tracks over the flexible substrate;
• thermal curing of the conductive layer and tracks at temperatures comprised between 80 and 140°c, for 10 to 20 minutes;
• lamination and/or coating of the polymeric barrier material.
it is also disclosed in the present application the method for printing, on a roll-to-roll system using rotary screen printing and/or rotogravure technology, of the electroluminescent device and associated self-capacitive sensor, which comprises the following steps:
• elaboration of the digital design of the self-capacitive sensor and the electroluminescent device that are intended to be printed;
• printing of the transparent conductive material over the flexible textile substrate, at speeds comprised between 0.1 and 10 m/min;
• thermal curing of the transparent conductive layer at temperatures comprised between 80 and 100°c, at speeds comprised between 0.1 and 10 m/min;
• printing of the electroluminescent material over the transparent conductive layer;
• thermal curing of the electroluminescent layer at temperatures comprised between 100 and 150°c, for 10 to 15 minutes;
• printing of the dielectric material over the electroluminescent layer, at speeds comprised between 0.1 and 10 m/min;
• thermal curing of the dielectric layer at temperatures comprised between 100 and 150°c, at speeds comprised between 0.1 and 10 m/min;
• printing of a second dielectric material over the first dielectric layer, at speeds comprised between 0.1 and 10 m/min;
• thermal curing of the second dielectric layer at temperatures comprised between 100 and 150°c, at speeds comprised between 0.1 and 10 m/min;
• printing of a conductive layer over the second dielectric layer and of conductive tracks over the flexible substrate, at speeds comprised between 0.1 and 10 m/min;
• thermal curing of the conductive layer and tracks at temperatures comprised between 80 and 140°c, at speeds comprised between 0.1 and 10 m/min;
• lamination and/or coating of the polymeric barrier material, at speeds comprised between 0.1 and 10 m/min.
in an embodiment, the leds and the integrated temperature/humidity sensors are bulk electronic devices.
in another embodiment, self-capacitive and temperature/humidity sensors are embedded into the textile substrate. it is also disclosed in the present application the use of the printed self-capacitive sensor for decorative lighting purposes in areas such as the automotive and aeronautics industry, since the final multifunctional textile can be easily integrated on seats with different geometries, armrests and central panels. it is also disclosed in the present application the use of a knitted textile self-capacitive sensor that comprises the use of a specific jersey structure or double structures, such as interlock, spacer and double faced. in an embodiment, the textile sensor structure uses a mix between pes/cotton (co) and a conductive yarn forming an interlock structure. in another embodiment, the proportion between pes/co and conductive yarn is 2 plies of pes ne50 and 1 ply of stainless steel yarn of 0.035mm. in even another embodiment, the conductive yarns are composed of a polyester base yarn (70-85%) and stainless steel yarn (30-83%). in an embodiment, the electrical resistance is between 10 and 20 ohm/meter. in another embodiment, the yarn title should be between nm 16 and nm 60. in even another embodiment, the knitted self-capacitive sensor is controlled by an electronic system. in an embodiment, the electronic system is composed of a microcontroller (mcu), a sensor interface and a dc-dc (cc-cc) converter with high noise immunity (emi). in another embodiment, the temperature/humidity sensor is a single device that uses capacitive technology to measure these properties. in even another embodiment, the temperature and humidity sensor is the hdc1000 model from texas instruments®, with an i2c interface. in an embodiment, the sensor is assembled to a support structure. in another embodiment, the support structure is a printed circuit board with reduced thickness based on polyester or fr-4.
in even another embodiment, the support structure is interconnected to the textile structure. in an embodiment, the interconnection is based on conductive printed tracks, adhesives and paints. in another embodiment, the interconnection is based on conductive yarns.
general description
the present application describes a multifunctional textile obtained through the integration of lighting and sensing capabilities using innovative methods and technologies. the introduction of lighting capabilities is made possible by using two possible types of devices, namely electroluminescent devices or leds. through the use of different technologies, temperature, humidity, touch and proximity sensing capabilities are also introduced into a textile substrate. the multifunctional textile described in this application can be applied in various areas like automotive, aeronautic, medical, sports, military and others. in the automotive area, for example, the intelligent textile can be used in vehicle interiors to operate as sensor-actuators located on the arm rest, gear console or on the seat. the sensor can perform functions such as opening or closing the window, opening the trunk, opening or blocking the fuel tank, among others. it can also be used as functional lighting or decorative lighting to indicate to the user the position of the touch/proximity sensors. to produce such a textile it was necessary to overcome difficulties related to the printing steps, due to the large open areas present in the web, and problems related to the placement of a capacitive printed sensor in close proximity to the electroluminescent device, due to the electric field interference that the sensor experienced when the electroluminescent device was turned on.
the mentioned problems related to the printing steps were overcome through the introduction of an optional primer polymeric layer, whose function was to fill the gaps in the textile web, reducing the open areas and preventing cracks and non-homogeneous thin printed tracks, or by increasing the number of printing operations for the first layers belonging to the electroluminescent device. to eliminate the electric field interference, a conductive element was printed, and grounded, between the electroluminescent device and the capacitive printed sensor, introducing an electromagnetic barrier. in relation to their structure and composition, the electroluminescent device and touch sensor comprise thin layers of conductive, electroluminescent and dielectric materials, applied using at least one printing and/or coating technique. the layers of conductive, electroluminescent and dielectric materials have between 5 and 60 μm of thickness. as for the leds and temperature/humidity sensors, these are bulk electronic devices. a self-capacitive sensor is also created by the introduction of conductive wires during the knitting process of the textile substrate itself. this self-capacitive sensor can be used as a touch/proximity sensor. the touch sensors may be constructed and used in two possible forms. in one embodiment, the touch sensor is composed of single or double electrodes, with the possibility of placing leds, with typical smd dimensions, on a printed conductive track placed around the touch sensor using a pick & place system. the leds' function is to help identify the location of the touch sensor and/or indicate its state, on or off, depending on the electronic control system used. in another embodiment, the touch sensor is used in conjunction with the electroluminescent device and is placed around said device, consisting of a single printed electrode.
in this case, the touch sensor is used to activate or deactivate the electroluminescent device, in the cases where said device has decorative functions, or, similarly to the leds, the electroluminescent device is used to help identify the location of the touch sensor and/or indicate its state, on or off. because the electroluminescent device, when active, produced electric interference on the touch sensor and caused its malfunction, an electromagnetic barrier was introduced between the electroluminescent device and the touch sensor, in the form of a grounded conductive track. in the case of the touch sensor used in conjunction with smd leds, the conductive track where the leds are placed can also function as a barrier that isolates the sensor from any outside interference. the large open areas present in the web caused problems during the printing steps required for the creation of the touch sensors and electroluminescent device, and to overcome these problems it was necessary to introduce an optional primer polymeric layer, whose function was to fill the gaps in the web, reducing the open areas and preventing cracks and non-homogeneous thin printed tracks, or to increase the number of printing operations for the first layers belonging to the electroluminescent device. the touch sensor, when used alone or with leds, is comprised of an optional support layer, printed conductive tracks, a barrier film for electrical and mechanical protection and an electronic control system coupled to the multi-structure for touch/proximity calibration and for electrical power. the conductive tracks can be printed using several different types of sheet-to-sheet or roll-to-roll systems, such as screen printing, rotary screen printing, rotogravure and/or inkjet; as for the barrier film, a heated press or a hot lamination system can be used for its application, as illustrated in figure 1.
the electroluminescent device coupled with a touch sensor is comprised of, starting from the textile substrate, an optional support layer, deposited using a heated press or a hot lamination system, a first transparent conductive layer, an electroluminescent layer, a dielectric layer, a second conductive layer, two conductive printed elements with sensing and electromagnetic shielding properties, a barrier film for electrical and mechanical protection and an electronic control system coupled to the multi-structure for electrical power. the method for printing the electroluminescent device and associated capacitive sensor comprises several alternating printing and thermal curing steps with a final laminating step, as illustrated in figures 2 and 3. the capacitive sensors described in the present application are of the self-capacitance type and operate based on the electronic controls continuously measuring the current from each electrode to ground in order to establish a steady-state current. when a finger or an object approaches the sensor, a change occurs in the electric field, increasing the current drawn as it creates a path to ground. the dimensioning of the printed conductive tracks of the self-capacitive sensor enables the calibration of the sensor for obtaining different ranges of sensitivity, operating voltages and currents. thus, the self-capacitive sensor is able to have different geometries presenting different advantages, like the possibility of using this kind of technology in different three-dimensional objects without loss of their touch/proximity sensitivity, mechanical stability and electrical percolation. another advantage is the possibility of achieving an ideal combination between the widths and thicknesses of the printed conductive tracks, associated with the correct dimensioning of the electronic controls depending on the final product to which the textile will be adapted.
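the self-capacitance measurement principle described above (a steady-state current that increases when a finger or object adds a path to ground) can be sketched, in simplified form, as a baseline-tracking threshold detector. this is an illustrative sketch only; the sample values, the smoothing factor `alpha` and the threshold are hypothetical calibration figures, not values taken from the application.

```python
def detect_touch(samples, baseline, threshold, alpha=0.05):
    """Self-capacitance style detection: slowly track an idle baseline and
    flag any sample whose deviation from it exceeds a calibration threshold.
    Returns (list of per-sample touch flags, final baseline)."""
    events = []
    for s in samples:
        delta = s - baseline
        if delta > threshold:
            events.append(True)            # finger/object increases the drawn current
        else:
            events.append(False)
            baseline += alpha * delta      # update baseline only while idle
    return events, baseline

# Hypothetical raw readings: idle around 100 counts, a touch near 140
events, _ = detect_touch([100, 101, 100, 140, 141, 100],
                         baseline=100.0, threshold=20.0)
# events -> [False, False, False, True, True, False]
```

the baseline update while idle is what lets the electronics re-establish the steady-state current despite drift in temperature or humidity.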
the textile substrates where the printed electronic devices are applied can present jersey structures or double structures such as interlock, spacer or double faced. these jersey structures allow the creation of a textile structure with a closed loop. the substrates can be created using synthetic or natural fibres, such as polyethersulfone (pes), cotton (co), polyamide (pa) and mixtures between these fibres. as for the knitted self-capacitive sensor, it uses an interlock structure and its production is conducted on an electronic knitting machine. this sensing textile substrate is knitted with a mix between pes/co and the conductive yarn, and the knitted sensors may present different geometries and dimensions, as illustrated in figure 4. it operates like a field effect sensor: when the user enters or approaches the field, it detects the change and indicates that an event has occurred with a corresponding output signal. the input stimulus to the field in this case can be the human body or an object. the problems identified for this specific knitting application are related to the maintenance of the mechanical stability of the complete textile structure, the maintenance of the same touch comfort for the end users, since the use of metallic yarns causes some comfort changes related to the cold sensation of the metal, and the connections between the knitted sensor and the electronic control system. the use of a specific knitted textile structure, typically called a jersey structure, that allows the use of different mixtures of yarns of different materials, results in a functional and flexible sensor with the mechanical properties of a typical textile structure and the same touch comfort.
temperature and relative humidity sensing capabilities are achieved using a digital sensor with an i2c interface, for example model hdc1000 by texas instruments®, which uses capacitive technology to measure the above-mentioned parameters, avoiding the need for perforation of the textile substrate. the sensor is integrated into the textile substrate in two stages. in the first stage, the sensor is assembled on a support structure, preferably a printed circuit board with reduced thickness (0.6 to 0.8mm) based on polyester or fr-4, in order to increase the mechanical robustness of the structure. in the second stage, the previously assembled structure is interconnected with the textile using two possible pathways, namely through silver printed conductive tracks or by textile conductive tracks. in the first approach, illustrated in figure 5, the connection is based on a support structure with metal pads (smt) that allow the electric contact with the conductive tracks, using adhesives or conductive inks. using textile conductive tracks, the interconnection method uses an associated supporting structure based on metallized holes, as illustrated in figure 6, which serve to support lead wire ends that are attached through these holes. a thin layer of epoxy is further added in order to improve the fixing of the wires to the frame. the main problem related to the use of the bulk temperature and humidity sensor was the embedding of such a bulk device directly on the textile structures without significant changes on the visible textile surface. the use of hybrid solutions that included the use of printed conductive tracks connected to a very thin conventional capacitive temperature/humidity sensor on the back of the textile structure allowed the use of a lamination and/or heated press to fix the conventional devices to the textile structures without producing changes in the typical stability of these materials.
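for context, the hdc1000 returns 16-bit raw values over i2c, and its datasheet gives linear conversion formulas for temperature and relative humidity. the sketch below shows only those conversions (the i2c bus transaction itself is omitted); the formulas are taken from the hdc1000 datasheet as the editor recalls them and should be verified against it.

```python
def hdc1000_temperature_c(raw: int) -> float:
    # Datasheet conversion: T(°C) = (raw / 2^16) * 165 - 40
    return (raw / 65536.0) * 165.0 - 40.0

def hdc1000_humidity_rh(raw: int) -> float:
    # Datasheet conversion: RH(%) = (raw / 2^16) * 100
    return (raw / 65536.0) * 100.0

# A raw temperature word of 0 maps to -40 °C (sensor minimum);
# a raw humidity word of 0x8000 maps to 50 %RH.
```

the capacitive measurement principle of this part is what allows it to sit behind the textile without perforating it, as described above.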
all of the electronic devices referred to in this application use the same electronic control system, where the output signal is received by a compact electronic circuit and sent to the corresponding actuator systems. the electronic system is composed of a microcontroller (mcu), a sensor interface and a dc-dc (cc-cc) converter with high noise immunity (emi), like the architecture represented in figure 7. as sensing applications, three different approaches are described that may or may not work together in the same system: a directly printed self-capacitive sensor, a knitted textile sensor and the integration of a temperature/humidity bulk capacitive sensor directly on the textile. as lighting applications for decorative and signage purposes, two different approaches are used that may or may not work together: an electroluminescent sensing device and the use of a hybrid sensor that includes the use of smd leds and a printed self-capacitive sensor. the present application uses electrical conductive yarns knitted simultaneously with different materials that are part of the textile structure, for the sensor production. the materials used in the present application as conductive wires are copper and/or stainless steel and/or silver and/or tungsten and/or molybdenum, with diameters between 25 and 150μm. the use of inorganic conductive materials in the construction of the present textile capacitive sensor and the possibility of knitting the textile sensor and textile structure together guarantee the possibility of maintaining the comfort, seamlessness and mechanical flexibility of the textile structure.
brief description of figures
for easier understanding of this application, figures are attached in the annex that represent different embodiments which nevertheless are not intended to limit the technology disclosed herein.
figure 1 illustrates a schematic representation of a printed self-capacitive sensor with embedded leds, where the reference numbers are related with: 1 - polymeric membranes; 2 - printed conductive tracks; 3 - textile substrate; 4 - smd leds; 5 - touch sensor.
figure 2 illustrates a schematic representation of a printed electroluminescent device with self-capacitive sensors (cross-section), where the reference numbers are related with: 1 - polymeric membranes; 2 - printed conductive tracks; 3 - textile substrate; 6 - printed conductive layer; 7 - printed dielectric layer; 8 - printed electroluminescent layer; 9 - printed transparent conductive layer.
figure 3 illustrates a schematic representation of a printed electroluminescent device with self-capacitive sensors (top view), where the reference numbers are related with: 2 - printed conductive tracks; 3 - textile substrate; 10 - printed electroluminescent device.
figure 4 illustrates a schematic representation of possible geometries for the textile sensor.
figure 5 illustrates a schematic representation of the pcb support structure used with printed conductive tracks on the temperature/humidity sensor, where the reference numbers are related with: 11 - conductive pads; 12 - temperature/humidity sensor.
figure 6 illustrates a schematic representation of the pcb support structure used with printed conductive wires on the temperature/humidity sensor, where the reference numbers are related with: 12 - temperature/humidity sensor; 13 - holes for wire connections.
figure 7 illustrates an electronic system architecture.
description of embodiments
the present application describes a multifunctional textile that comprises the integration of lighting and sensing capabilities using innovative methods and technologies. the introduction of lighting capabilities is made possible by using two possible types of devices, namely electroluminescent devices or leds.
through the use of different technologies, temperature, humidity, touch and proximity sensing capabilities are also introduced into a textile substrate. in relation to their structure and composition, the electroluminescent device and touch sensor comprise thin layers of conductive, electroluminescent and dielectric materials, applied using at least one printing and/or coating technique. as for the leds and temperature and humidity sensors, these are bulk electronic devices. a self-capacitive sensor was also created by the introduction of conductive wires during the knitting process of the textile substrate itself. the touch sensors may be constructed and used in two possible forms. in one embodiment, the touch sensor is composed of a single electrode, with the possibility of placing leds, with typical smd dimensions, on a printed conductive track placed around the touch sensor using a pick & place system. the leds' function is to help identify the location of the touch sensor and/or indicate its state, on or off, depending on the electronic control system used. in another embodiment, the touch sensor is used in conjunction with the electroluminescent device and is placed around said device, comprising a single printed electrode. in this case, the touch sensor is used to activate or deactivate the electroluminescent device, in the cases where said device has decorative functions, or, similarly to the leds, the electroluminescent device is used to help identify the location of the touch sensor and/or indicate its state, on or off. because the electroluminescent device, when active, produced electric interference on the touch sensor and caused its malfunction, an electromagnetic barrier was introduced between the electroluminescent device and the touch sensor, in the form of a grounded conductive track.
the touch sensor, when used alone or with leds, is comprised of an optional support layer, printed conductive tracks with a sheet resistance comprised between 10 and 60 mΩ/sq/mil, a barrier film for electrical and mechanical protection and an electronic control system coupled to the multi-structure for touch/proximity calibration and for electrical power. the printed conductive tracks present lengths and widths ranging between 10-300mm and 200-300μm, respectively, and the distance between the printed conductive tracks and layers is comprised between 200μm and 10000μm. they also present a thickness between 20μm and 500μm, and a roughness between 20 and 100nm. in terms of object detection sensitivity, they can detect the approach of an object, for example a finger, at a distance of up to 20mm. the optional support layer may be composed of, for example, polyethylene terephthalate (pet) and/or polyurethane (pu) and/or polyethylene naphthalate (pen) and/or polyvinyl chloride (pvc) and/or thermoplastic polyolefin (tpo). as for its deposition, it can be done using a heated press or a hot lamination system. several materials can be used in the creation of the printed conductive tracks, namely silver, copper, aluminium and/or polymeric materials. these materials can be applied using several different types of sheet-to-sheet or roll-to-roll systems, such as screen printing, rotary screen printing, rotogravure and/or inkjet.
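the quoted sheet resistances and track geometries determine the end-to-end resistance of a track through the standard relation R = Rs x (length / width), i.e. sheet resistance times the number of "squares". the sketch below illustrates this sizing; it assumes an effective sheet resistance Rs in Ω/sq at the printed thickness (the document quotes values per mil of thickness, so Rs must first be scaled to the actual thickness).

```python
def track_resistance_ohm(sheet_res_ohm_sq: float,
                         length_mm: float, width_mm: float) -> float:
    """End-to-end DC resistance of a rectangular printed track:
    R = Rs * (number of squares) = Rs * L / W (units of L and W cancel)."""
    return sheet_res_ohm_sq * (length_mm / width_mm)

# A track at the extremes of the quoted geometry, 300 mm long and 0.3 mm
# (300 um) wide, is 1000 squares; with an assumed Rs of 0.02 ohm/sq it
# measures about 20 ohm end to end.
```

this is why long, narrow sensor tracks need the low milliohm-per-square silver pastes mentioned above, while short wide electrodes tolerate more resistive inks.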
when the touch sensors are printed using screen printing or inkjet, their method of production comprises the following steps:
• elaboration of the digital design of the self-capacitive sensor that is intended to be printed;
• printing of the conductive material over the flexible textile substrate;
• thermal curing of the conductive material pattern at temperatures comprised between 80 and 140°c, for 10 to 20 minutes;
• placement of the leds using a pick & place system;
• dispensing of silver paste and/or a conductive adhesive to glue the leds to the printed sensor;
• thermal curing of the conductive adhesives or inks at temperatures comprised between 80 and 140°c, for 10 to 20 minutes;
• lamination and/or coating of the polymeric barrier material.
when the chosen techniques are rotary screen printing or rotogravure, the touch sensors are printed according to the following steps:
• printing of the conductive material at speeds comprised between 0.1 and 10 m/min over the flexible textile substrate;
• thermal curing of the conductive pattern at temperatures comprised between 80 and 140°c, at speeds comprised between 0.1 and 10 m/min;
• placement of the leds using a pick & place system;
• dispensing of silver paste and/or a conductive adhesive to glue the leds to the printed sensor;
• thermal curing of the conductive adhesives or inks at temperatures comprised between 80 and 140°c, for 10 to 20 minutes;
• lamination and/or coating of the polymeric barrier material at speeds comprised between 0.1 and 10 m/min.
the electroluminescent device coupled with a touch sensor comprises, starting from the textile substrate:
• an optional support layer. this layer may be composed of, for example, polyethylene terephthalate (pet) and/or polyurethane (pu) and/or polyethylene naphthalate (pen) and/or polyvinyl chloride (pvc) and/or thermoplastic polyolefin (tpo).
as for its deposition, it can be done using a heated press or a hot lamination system;
• a first transparent conductive layer. this layer has a sheet resistance comprised between 100 and 500 Ω/sq/mil, a transmittance between 60 and 90% in the visible region and a thickness between 5 and 15μm;
• an electroluminescent layer. this layer has a thickness between 5 and 30μm and a roughness between 10 and 500nm;
• a dielectric layer. this layer has a thickness between 10 and 60μm, a dielectric constant between 3 and 20 and a roughness between 10 and 500nm;
• a second conductive layer, with a sheet resistance comprised between 10 and 60 mΩ/sq/mil, a thickness between 5μm and 50μm, and a roughness between 20nm and 100nm;
• two conductive printed elements with sensing and electromagnetic shielding properties, with a sheet resistance comprised between 10 and 60 mΩ/sq/mil, lengths and widths ranging between 10-300mm and 200-300μm, respectively, and a distance between the printed conductive elements and the transparent conductive layer between 200μm and 10000μm. they also present a thickness between 5μm and 50μm, and a roughness between 20 and 100nm;
• a barrier film for electrical and mechanical protection, made of the same materials as the optional support layer;
• an electronic control system coupled to the multi-structure for electrical power.
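the stack above (transparent electrode / electroluminescent layer / dielectric / back electrode) behaves electrically as a parallel-plate capacitor, so the quoted dielectric constants (3-20) and dielectric thicknesses (10-60 μm) fix its capacitance per unit area via C/A = ε0·εr/d. a minimal sketch of that estimate, with the example values chosen from inside the quoted ranges:

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def capacitance_per_cm2(eps_r: float, thickness_um: float) -> float:
    """Parallel-plate estimate C/A = eps0 * eps_r / d, returned in pF/cm^2."""
    d_m = thickness_um * 1e-6
    c_per_m2 = EPS0 * eps_r / d_m          # F/m^2
    return c_per_m2 * 1e-4 * 1e12          # convert F/m^2 -> pF/cm^2

# Mid-range example: eps_r = 10, d = 30 um -> roughly 295 pF/cm^2,
# the load the AC driver of the electroluminescent device must charge.
```

this is one reason the driver electronics and the electromagnetic barrier matter: the capacitively coupled AC drive field is what disturbs the nearby touch sensor.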
a method for printing the electroluminescent device and associated capacitive sensor on a sheet-to-sheet system, using screen printing technology and/or inkjet technology, comprises the following steps:
• printing of the transparent conductive material over the flexible textile substrate;
• thermal curing of the transparent conductive layer at temperatures comprised between 80 and 100°c, for 10 to 15 minutes;
• printing of the electroluminescent material over the transparent conductive layer;
• thermal curing of the electroluminescent layer at temperatures comprised between 100 and 150°c, for 10 to 15 minutes;
• printing of the dielectric material over the electroluminescent layer;
• thermal curing of the dielectric layer at temperatures comprised between 100 and 150°c, for 10 to 15 minutes;
• printing of a second dielectric material over the first dielectric layer;
• thermal curing of the second dielectric layer at temperatures comprised between 100 and 150°c, for 10 to 15 minutes;
• printing of a conductive layer over the second dielectric layer and of conductive tracks over the flexible substrate;
• thermal curing of the conductive layer and tracks at temperatures comprised between 80 and 140°c, for 10 to 20 minutes;
• lamination and/or coating of the polymeric barrier material.
the textile substrates where the printed electronic devices are applied can present jersey structures or double structures such as interlock, spacer or double faced. the elongation of these substrates must be a maximum of 30-40% in length and 60-70% across. a closed and flat structure is needed, and shrinkage values should not exceed 3-4%. the substrates can be created using synthetic or natural fibres, such as polyethersulfone (pes), cotton (co), polyamide (pa) and mixtures between these fibres.
temperature and relative humidity sensing capabilities are achieved using a digital sensor with an i2c interface, for example model hdc1000 by texas instruments®, which uses capacitive technology to measure the mentioned parameters. the sensor is integrated into the textile substrate in two stages. in the first stage, the sensor is assembled on a support structure, preferably a printed circuit board with a thickness between 0.6-0.8 mm based on polyester or fr-4, in order to increase the mechanical robustness of the structure. in a second stage, the previously assembled structure is interconnected with the textile using two possible methods: • through silver-based conductive tracks previously printed on the textile. this method uses an associated supporting structure based on metallized pads (smt), which carry the electrical contact to the tracks via conductive adhesives and paints. the mentioned conductive tracks have a sheet resistance comprised between 10 and 60 mΩ/sq/mil, lengths and widths ranging between 10-300 mm and 200-300 μm, respectively, and a distance between the printed conductive elements and the transparent conductive layer between 200 μm and 10000 μm. they also present a thickness between 5 μm and 50 μm, and a roughness between 20 and 100 nm; • through conductive yarns. this method uses an associated supporting structure based on metallized holes, which serve to support lead wire ends that are attached through these holes. a thin layer of epoxy is further added in order to improve the fixing of the wires to the frame. the conductive yarns are composed of a base polyester yarn (70-85%) and stainless steel yarn (30-83%); their resistance is between 10 and 20 ohm/meter and the yarn count should be between ne 16 and ne 60. as for the knitted self-capacitive sensor, it uses an interlock structure and its production is conducted on an electronic knitting machine.
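the hdc1000 named above returns 16-bit raw words over i2c. the raw-to-physical conversions below follow the texas instruments datasheet; they are not stated in the text, so treat the register details in the comments as datasheet facts rather than part of this disclosure:

```python
# conversion helpers for the TI HDC1000 digital sensor named above.
# formulas are from the HDC1000 datasheet; the I2C transaction itself
# (trigger a measurement at address 0x40, then read two big-endian
# 16-bit words) is hardware-dependent and only indicated as a comment.

def hdc1000_temperature_c(raw: int) -> float:
    """Temperature register word -> degrees Celsius."""
    return (raw / 65536.0) * 165.0 - 40.0

def hdc1000_humidity_pct(raw: int) -> float:
    """Humidity register word -> percent relative humidity."""
    return (raw / 65536.0) * 100.0

# e.g. a raw temperature word of 0x6666 corresponds to about 26 °C,
# and the same raw word read as humidity to about 40 %RH.
```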
this sensing textile substrate is knitted with a mix between pes/co and the conductive yarn, in the following proportions: 2 plies of pes ne 50 and 1 ply of inox yarn with 0.035 mm. the conductive yarns are composed of a base polyester yarn (70-85%) and stainless steel yarn (30-83%); their resistance is between 10 and 20 ohm/meter and the yarn count should be between nm 16 and nm 60. the number of needles working on the developed structures was 1800x2, divided with 1800 in the cylinder and 1800 in the disc. the knitted self-capacitive sensor, as well as the temperature and relative humidity sensor, is controlled by an electronic system composed of a microcontroller (mcu), a sensor interface and a dc-dc converter with high noise immunity (emi). best modes example of the application of an electroluminescent device with an integrated touch sensor a pes/co textile substrate was used as the base material. a copolyester web membrane (9b8, from protechnic) was applied on the reverse side of the textile substrate using a heated press, applying 130°c and 3 bar of pressure during 15 seconds. a transparent conductive layer was then applied on top of the membrane using a synthetic polymer-based 3,4-polyethylenedioxithiophene (pedot) dispersion (clevios sv3, from heraeus) over an area of 3x3 cm, and subsequently cured at 100°c during 10 min. this was followed by the application of an electroluminescent paste (luxprint 8150b, from dupont) on top of the previous conductive layer over an area of 2x2 cm, and subsequent curing at 100°c during 15 min. two layers of a dielectric paste (luxprint 8153, from dupont) were then printed on top of the previous layer, completely covering it, and cured at 130°c during 15 min after each layer was printed. a silver-based paste (pe871, from dupont) was then used to print a previously chosen design on top of the dielectric layers and two conductive lines surrounding the first transparent conductive layer but not contacting it.
these were cured applying 130°c during 15 min. a distance of 1 mm was left between each conductive line and the silver layer printed on top of the dielectric layer. each printing step was conducted using screen printing equipment model rp 2.2, from rokuprint, and a screen with a 230 polyester mesh. in the curing steps a box oven was used. finally, a copolyester film membrane (92m, protechnic) was applied on top of the previously printed layers using a heated press, applying 150°c and 3 bar of pressure during 15 seconds. example of the application of an integrated touch sensor a pes/co textile substrate was used as the base material. a copolyester web membrane (9b8, from protechnic) was applied on the reverse side of the textile substrate using a heated press, applying 130°c and 3 bar of pressure during 15 seconds. a silver-based paste (pe871, from dupont) was then used to print a previously chosen design of the self-capacitive sensor on top of the web membrane and two conductive lines surrounding the first sensor that allow the subsequent parallel electric connection of the smd leds using a pick & place system. the silver pastes were cured applying 100°c during 15 min. a distance of 2 mm was left between the conductive tracks. each printing step was conducted using screen printing equipment model rp 2.2, from rokuprint, and a screen with a 230 polyester mesh. in the curing steps a box oven was used. finally, a copolyester film membrane (92m, protechnic) was applied on top of the previously printed layers using a heated press, applying 150°c and 3 bar of pressure during 15 seconds. example of a knitted textile sensor a textile substrate with a knitted self-capacitive textile sensor was produced on an electronic knitting machine by mayer & cie. the machine used a mixture of pes/co and conductive yarn (stainless steel - 316l from chori), built from 2 plies of pes and 1 ply of inox yarn with 0.035 mm.
an interlock regular structure was knitted, using the previously mentioned knitting machine, with the conductive yarn only appearing in the sensor area and creating a rectangular shape with 15x5 cm and a 5 mm thickness. the geometry is developed in associated software and then transferred to the machine, where the needles receive an electrical input and work only when needed. because the structure requires working with special yarns, the speed of production was 16 rpm. on a conductive track a metallic crimp was applied to allow the connection to the electronic circuit through a manual soldering process. the electronic circuit is composed of a microcontroller (mcu), a sensor interface and a dc-dc converter with high noise immunity (emi). these components were assembled on a small and compact printed circuit board to allow system miniaturization. this description is of course not in any way restricted to the embodiments presented herein and any person with an average knowledge of the area can provide many possibilities for modification thereof without departing from the general idea as defined by the claims. the embodiments described above can obviously be combined with each other. the following claims further define different embodiments. lisbon, june 09, 2015
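as a rough plausibility check (ours, not part of the text), the printed self-capacitive sensor can be treated as a parallel-plate capacitor whose dielectric constant (3-20) and dielectric thickness (10-60 μm) are the ranges quoted earlier for the dielectric layer:

```python
# back-of-envelope parallel-plate estimate for the printed capacitive
# stack; the epsilon_r and thickness ranges are taken from the text,
# while the model itself is an illustrative simplification.

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def capacitance_per_cm2(eps_r: float, thickness_um: float) -> float:
    """Capacitance per square centimetre of electrode overlap, in farads."""
    area_m2_per_cm2 = 1e-4
    return EPS0 * eps_r / (thickness_um * 1e-6) * area_m2_per_cm2

# spanning the quoted ranges gives roughly 44 pF/cm^2 (eps_r = 3,
# 60 um) up to about 1.8 nF/cm^2 (eps_r = 20, 10 um).
```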
116-265-142-472-151
IT
[ "RU", "WO", "HU", "PL", "KR", "EP", "CN", "BR", "JP", "US" ]
B29D30/00,B29D30/20,B29D30/08,B29D30/06,B29D30/60
2004-12-16T00:00:00
2004
[ "B29" ]
method and installation for manufacturing tires for vehicle wheels
field: technological processes. substance: according to the method, a substantially cylindrical carcass structure comprising a carcass ply connected to annular anchoring structures axially spaced apart from each other is built at a building station. at a finishing station, a cylindrical sleeve comprising a tread band placed in a radially external position relative to the belt structure is produced. the cylindrical sleeve is transferred from the picking position of the finishing station to a radially external position relative to the carcass structure. the working positions of the finishing station are angularly offset from each other. effect: increased speed and quality of the manufactured products. 55 cl, 3 dwg
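the cyclic, angularly offset working positions described in the abstract and claims can be pictured with a toy turret model. position labels follow the claims (a: belt assembly, b: tread application, d: sleeve picking); the code is an illustration of the scheduling idea only, not the patented apparatus:

```python
# toy model of the finishing-station turret: auxiliary drums are carried
# round angularly offset positions, so every position stays occupied and
# the working steps run at least partly in parallel.

from collections import deque

def cycle_turret(drums, steps):
    """Index the drum ring one position per cycle; return the occupancy log."""
    positions = ["A", "B", "D"]
    ring = deque(drums)
    log = []
    for _ in range(steps):
        log.append(dict(zip(positions, ring)))
        ring.rotate(1)  # turret rotates all drums to their next position
    return log
```

after one rotation the drum that finished belt assembly at a sits at b for tread application, while a freed drum returns to a; this is the overlap of steps that the claims describe as being carried out at least in part simultaneously.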
claims 1. a method for manufacturing tyres for vehicle wheels, comprising the steps of: a) building in a building station (14) a substantially cylindrical carcass structure (3) comprising at least one carcass ply (10) operatively associated to annular anchoring structures (7) axially spaced apart from each other; b) manufacturing in a finishing station (17) a substantially cylindrical sleeve comprising a tread band (5) applied at a radially outer position with respect to a belt structure (4) comprising at least one belt layer (12) including reinforcing cords substantially parallel to a circumferential development direction of the sleeve, said step b) comprising the steps of: bl) assembling a first belt structure (4) at a first working position (a) on a first auxiliary drum (19) of the finishing station (17); b2) applying a tread band (5) at at least one second working position (b) at a radially outer position with respect to a second belt structure (4) previously assembled on at least one second auxiliary drum (20) of the finishing station (17), said application step being carried out by laying down according to a predetermined path at least one continuous elongated element (27, 28) of green elastomeric material at a radially outer position with respect to said second belt structure (4); b3) positioning the first auxiliary drum (19) supporting the first belt structure (4) at said at least one second working position (b); b4) positioning said at least one second auxiliary drum (20) supporting the substantially cylindrical sleeve thus obtained at a picking position (d) of the finishing station (17); c) transferring said substantially cylindrical sleeve from said picking position of the finishing station (17) at a radially outer position with respect to a carcass structure (3) built in the meantime in the building station (14); wherein said steps from bl) to b4) are repeated cyclically; wherein steps bl) and b2) are carried out at least in part simultaneously with one 
another; and wherein steps b3) and b4) are carried out at least in part simultaneously with one another. 2. a method according to claim 1, wherein said step b) of manufacturing the substantially cylindrical sleeve is carried out by means of the steps of: bl) assembling at a first working position (a) a first belt structure (4) on a first auxiliary drum (19) of the finishing station (17); b2) applying at a second working position (b) at least one first portion of the tread band (5) at a radially outer position with respect to a second belt structure (4) previously assembled on a second auxiliary drum (20) of the finishing station (17); said application step being carried out by laying down according to a predetermined path at least one continuous elongated element (27, 28) of green elastomeric material at a radially outer position with respect to said second belt structure (4); b5) applying at at least one third working position (c) at least one second portion of the tread band (5) at a radially outer position with respect to a third belt structure (4) assembled on at least one third auxiliary drum (40) of the finishing station (17); said application step being carried out by laying down according to a predetermined path at least one continuous elongated element (42) of green elastomeric material at a radially outer position with respect to said third belt structure (4); b6) positioning the first auxiliary drum (19) supporting the first belt structure (4) at said second working position (b); b7) positioning the second auxiliary drum (20) supporting the second belt structure (4) and said at least one portion of the tread band (5) at said third working position (c); b8) positioning said at least one third auxiliary drum (40) supporting the substantially cylindrical sleeve thus obtained at a picking position (d) of the finishing station (17); wherein said steps bl), b2) and from b5) to b8) are repeated cyclically; wherein steps bl), b2) and b5) are carried out at least 
in part simultaneously with one another; and wherein steps from b6) to b8) are carried out at least in part simultaneously with one another. 3. a method according to any one of claims 1 or 2, wherein said steps b) and c) are carried out in a time interval substantially equal to, or smaller than, the time for carrying out said step a) of building the carcass structure (3). 4. a method according to any one of claims 1 or 2, wherein said step bl) comprises the step of i) applying at a radially outer position with respect to the first auxiliary drum (19) one strip-like element (23) of green elastomeric material including at least one reinforcing cord to form axially contiguous circumferential coils, so as to obtain said belt layer (12) including reinforcing cords substantially parallel to the circumferential development direction of the sleeve. 5. a method according to any one of claims from 1 to 4, wherein said step bl) comprises the steps of: ii) applying at a radially outer position with respect to the first auxiliary drum (19) a first belt layer (11a) including first reinforcing cords inclined with respect to the circumferential development direction of the sleeve; and iii) applying at a radially outer position with respect to the first belt layer (11a) a second belt layer (11b) comprising second reinforcing cords inclined along a crossed direction with respect to said first reinforcing cords. 6. a method according to claims 4 and 5, wherein said strip-like element (23) of green elastomeric material is applied at a radially outer position with respect to the second belt layer (11b). 7. a method according to any one of claims from 1 to 6, wherein said step bl) further comprises the step of: iv) applying a further layer (13) of green elastomeric material at a radially outer position with respect to said belt layer (12), said layer (13) comprising a plurality of reinforcing cords. 8.
a method according to any one of claims 1 or 2, wherein said step b2) is carried out at said at least one second working position (b) by laying down at a radially outer position with respect to said second belt structure (4) a first (27) and a second (28) continuous elongated element of green elastomeric material. 9. a method according to claim 8, wherein said step b2) is carried out at said at least one second working position (b) by laying down said first continuous elongated element (27) of green elastomeric material at a radially outer position with respect to said second belt structure (4) along substantially the entire transversal development thereof so as to form a radially inner layer of the tread band (5). 10. a method according to claims 1 and 8, wherein said step b2) is carried out at said at least one second working position (b) by laying down said second continuous elongated element (28) at a radially outer position with respect to said radially inner layer of the tread band (5) along substantially the entire transversal development thereof so as to form a radially outer layer of the tread band (5). 11. a method according to claims 1 and 8, wherein said step b2) is carried out at said at least one second working position (b) by laying down said first continuous elongated element (27) at a radially outer position with respect to at least one portion of said second belt structure (4) so as to form a corresponding portion of the tread band (5). 12. a method according to claim 11, wherein said step b2) is carried out at said at least one second working position (b) by laying down said second continuous elongated element (28) at an axially aligned position with respect to said portion of the tread band (5) formed by said first continuous elongated element (27), so as to form a further portion of the tread band (5). 13. 
a method according to claims 2 and 9, wherein said step b2) is carried out at said at least one second working position (b) by laying down a second continuous elongated element (28) at a radially outer position with respect to at least one portion of said radially inner layer of the tread band (5) so as to form a corresponding portion of a radially outer layer of the tread band (5). 14. a method according to claim 13, wherein said step b5) is carried out by laying down at said at least one third working position (c) at least one third continuous elongated element (42) of green elastomeric material at an axially aligned position with respect to said at least one portion of a radially outer layer of the tread band (5) formed by said second continuous elongated element (28), so as to form a further portion of the radially outer layer of the tread band (5). 15. a method according to claim 2, wherein said step b2) is carried out at said at least one second working position (b) by laying down a first continuous elongated element (27) of green elastomeric material at a radially outer position with respect to at least one portion of said second belt structure (4) so as to form a corresponding portion of a radially inner layer of the tread band (5). 16. a method according to claim 15, wherein said step b2) is carried out at said at least one second working position (b) by laying down a second continuous elongated element (28) at an axially aligned position with respect to said at least one portion of a radially inner layer of the tread band (5) formed by said first continuous elongated element (27), so as to form a further portion of the radially inner layer of the tread band (5). 17. 
a method according to claim 16, wherein said step b5) is carried out by laying down at said at least one third working position (c) at least one third continuous elongated element (42) of green elastomeric material at a radially outer position with respect to the radially inner layer of the tread band (5) along substantially the entire transversal development thereof so as to form a radially outer layer of the tread band (5). 18. a method according to any one of claims from 8 to 17, wherein said first (27) and said second (28) continuous elongated elements are laid down at opposite sides of said at least one second auxiliary drum (20). 19. a method according to any one of claims 1 or 2, wherein said steps b2) and b5) are carried out by delivering said continuous elongated elements (27, 28, 42) from respective delivery members (25, 26, 41) arranged at said at least one second (b) and at said at least one third (c) working position near said at least one second (20) and said at least one third (40) auxiliary drum, simultaneously with winding of the continuous elongated elements (27, 28, 41) on said drums (20, 40). 20. a method according to claim 1, wherein said step b) further comprises the step of b9) applying according to a respective predetermined path a further continuous elongated element of green elastomeric material at said picking position (d) at a radially outer position with respect to said second belt structure (4). 21. a method according to claim 20, wherein said step b9) is carried out by delivering said further continuous elongated element from a respective delivery member arranged at said picking position (d) near said second auxiliary drum (20), simultaneously with winding of the continuous elongated element on said drum (20). 22.
a method according to claim 2, wherein said step b) further comprises the step of blo) applying according to a respective predetermined path a further continuous elongated element of green elastomeric material at said picking position (d) at a radially outer position with respect to said third belt structure (4). 23. a method according to claim 22, wherein said step blo) is carried out by delivering said further continuous elongated element from a respective delivery member arranged at said picking position (d) near one of said auxiliary drums (19, 20, 40), simultaneously with winding of the continuous elongated element on one of said auxiliary drums (19, 20, 40). 24. a method according to any one of claims 19, 21 or 23 wherein the delivery of said continuous elongated elements (27, 28, 42) is carried out by extrusion through said delivery members (25, 26, 41). 25. a method according to any one of claims 19 or 21, wherein said steps b2) or b9) are effected by carrying out, simultaneously with the application of said at least one continuous elongated element (27, 28, 42), the steps of: d) imparting to said at least one second auxiliary drum (20) carrying the second belt structure (4) a rotary motion about a geometric axis thereof, so as to circumferentially distribute said at least one continuous elongated element (27, 28, 42) at a radially outer position with respect to the second belt structure (4); e) carrying out controlled relative displacements between said at least one second auxiliary drum (20) and the delivery member (25, 26, 41) to form with said at least one continuous elongated element (27, 28, 42) a plurality of coils arranged in mutual side by side relationship to define said at least one portion of the tread band (5). 26. 
a method according to any one of claims 19 or 23, wherein said steps b5) or blo) are effected by carrying out, simultaneously with the application of said at least one continuous elongated element the steps of: d') imparting to said at least one third auxiliary drum (40) carrying the third belt structure (4) a rotary motion about a geometric axis thereof, so as to circumferentially distribute said at least one continuous elongated element at a radially outer position with respect to the third belt structure (4); e') carrying out controlled relative displacements between said at least one third auxiliary drum (40) and the delivery member to form with said at least one continuous elongated element a plurality of coils arranged in mutual side by side relationship to define at least one radially outer portion of the tread band (5). 27. a method according to claim 25, wherein said displacements are carried out by moving the second auxiliary drum (20) with respect to said delivery member (25, 26, 41). 28. a method according to claim 26, wherein said displacements are carried out by moving the third auxiliary drum (40) with respect to said delivery member (25, 26, 41). 29. a method according to any one of claims 25 or 26, wherein said steps d), e), d') and e') are carried out by a displacing apparatus (18) active on said at least one second (20) or, respectively, on said at least one second (20) and at least one third (40) auxiliary drum. 30. a method according to any one of claims 1 or 2, wherein said working positions (a, b, c) of the finishing station (17) are angularly offset with one another. 31. a method according to any one of claims 29 or 30, wherein said auxiliary drums (19, 20, 40) are supported by a substantially turret-like displacing apparatus (18) at positions angularly offset with one another and wherein said steps b3) and b4) or from b6) to b8) are carried out by rotating said displacing apparatus (18) about a substantially vertical rotation axis (y-y). 
32. a method according to claim 31, wherein at least one of said auxiliary drums (19, 20, 40) is slidably supported by said displacing apparatus (18) and wherein the method comprises the further step of translating said at least one auxiliary drum (19, 20, 40) towards the rotation axis (y-y) of the displacing apparatus (18) before carrying out said rotation step of said apparatus (18). 33. a method according to any one of claims 1 or 2, wherein the picking position (d) of the cylindrical sleeve substantially coincides with said first working position (a). 34. a method according to any one of claims 1 or 2, further comprising after said step c), the step of shaping said carcass structure (3) according to a substantially toroidal shape, so as to associate the same to said substantially cylindrical sleeve transferred at a radially outer position with respect to the carcass structure (3). 35. a method according to any one of claims 19, 21 or 23 wherein the delivery of at least one of said continuous elongated elements (27, 28) is carried out by delivering at least one semifinished product of green elastomeric material in the form of a continuous strip by means of at least one of said delivery members (25, 26). 36. 
a plant (1) for manufacturing tyres for vehicle wheels comprising: a) a building station (14) for building a substantially cylindrical carcass structure (3) comprising at least one carcass ply (10) operatively associated to annular anchoring structures (7) axially spaced apart from each other; b) a finishing station (17) for manufacturing a substantially cylindrical sleeve comprising a tread band (5) applied at a radially outer position with respect to a belt structure (4) comprising at least one belt layer (12) including reinforcing cords substantially parallel to the circumferential development direction of the sleeve, said finishing station (17) comprising: bl) a first auxiliary drum (19); b2) at least one second auxiliary drum (20); b3) a displacing apparatus (18) adapted to support said auxiliary drums (19, 20) and to position said auxiliary drums (19, 20) at a first working position (a) wherein said belt structure (4) is assembled, at at least one second working position (b) wherein said tread band (5) is applied and at a picking position (d) of said substantially cylindrical sleeve; said first (a) and said at least one second (b) working position being defined in different zones of the finishing station (17); b4) at least one delivery device (22) of a strip-like element (23) of green elastomeric material including at least one reinforcing cord, arranged at said first working position (a) for operatively interacting with one of said auxiliary drums (19, 20); b5) at least one delivery member (25, 26) of a continuous elongated element (27, 28) of green elastomeric material arranged at said at least one second working position (b) for operatively interacting with one of said auxiliary drums (19, 20); c) at least one transfer device (36) of the substantially cylindrical sleeve manufactured in the finishing station (17), adapted to operatively interact with one of said auxiliary drums (19, 20) at said picking position (d) for transferring said substantially 
cylindrical sleeve at a radially outer position with respect to a carcass structure (3) built in the building station (14). 37. a plant (1) according to claim 36, further comprising: b6) at least one third auxiliary drum (40) supported by said displacing apparatus (18); b7) at least one second delivery member (41) of a continuous elongated element (42) of green elastomeric material arranged at at least one third working position (c) for operatively interacting with one of said auxiliary drums (19, 20, 40). 38. a plant (1) according to any one of claims 36 or 37, further comprising at least one delivery device (24) of belt layers arranged at said first working position (a) for operatively interacting with one of said auxiliary drums (19, 20, 40). 39. a plant (1) according to any one of claims 36 or 37, comprising at least one further delivery device of a belt layer (13) comprising a plurality of reinforcing cords, arranged at said first working position (a) for operatively interacting with one of said auxiliary drums (19, 20, 40). 40. a plant (1) according to any one of claims 36 or 37, further comprising at least one third delivery member of a respective third continuous elongated element of green elastomeric material arranged at said picking position (d) for operatively interacting with one of said auxiliary drums (19, 20, 40). 41. a plant (1) according to any one of claims 36, 37 or 40, wherein the delivery members (25, 26, 41) of said continuous elongated elements (27, 28, 42) comprise at least one extruder (29, 30, 43). 42. a plant (1) according to any one of claims 36 or 40, wherein at least one of said delivery members (25, 26) of the continuous elongated elements (27, 28) delivers said continuous elongated element as a semifinished product of green elastomeric material in the form of a continuous strip. 43. 
a plant (1) according to any one of claims 36, 37 or 40, wherein said displacing apparatus (18) comprises at least one drum rotation unit (31, 32, 44) adapted to rotate the auxiliary drums (19, 20, 40) about their geometrical axis. 44. a plant (1) according to any one of claims 36, 37 or 40, wherein said auxiliary drums (19, 20, 40) are slidably supported by said displacing apparatus (18). 45. a plant (1) according to any one of claims 36, 37 or 40, wherein said displacing apparatus (18) comprises at least one drum translating unit (33, 34, 45) adapted to carry out controlled axial movements of said auxiliary drums (19, 20, 40) at said working positions (a, b, c) or at said picking position (d). 46. a plant (1) according to any one of claims 36, 37 or 40, wherein said displacing apparatus (18) is of the substantially turret-like type and is adapted to support said auxiliary drums (19, 20, 40) at positions angularly offset with one another. 47. a plant (1) according to claim 46, further comprising at least one driving unit (35) adapted to rotate said displacing apparatus (18) about a substantially vertical rotation axis (y-y). 48. a plant (1) according to claims 45 and 46, wherein said drum translating unit (33, 34, 35) of the displacing apparatus (18) translates said auxiliary drums (19, 20, 40) between said working positions (a, b, c) or said picking position (d) and a stand-by position defined between said working positions (a, b, c) or picking position (d) and a rotation axis (y-y) of the displacing apparatus (18). 49. a plant (1) according to any one of claims 36, 37 or 40, further comprising at least two delivery members (25, 26) of respective continuous elongated elements (27, 28) of green elastomeric material arranged at said at least one second working position (b) for operatively interacting at opposite sides of one of said auxiliary drums (19, 20, 40). 50. 
a plant (1) according to claim 37, further comprising at least two delivery members of respective continuous elongated elements of green elastomeric material arranged at said at least one third working position (c) for operatively interacting at opposite sides of one of said auxiliary drums (19, 20, 40). 51. a plant (1) according to any one of claims 36 or 37, wherein the picking position (d) of the cylindrical sleeve substantially coincides with said first working position (a). 52. a plant (1) according to any one of claims 36 or 37, further comprising at least one apparatus for shaping said carcass structure (3) according to a substantially toroidal shape so as to associate the substantially cylindrical sleeve comprising the belt structure (4) and the tread band (5) to said carcass structure (3). 53. a plant for making tyres for vehicle wheels, comprising a manufacturing plant (1) according to any one of claims from 36 to 52 and at least one vulcanisation station for vulcanising the tyres manufactured in said manufacturing plant (1).
method and plant for manufacturing tyres for vehicle wheels description background of the invention the present invention relates to a method for manufacturing tyres for vehicle wheels. the invention also pertains to a plant for manufacturing vehicle tyres, which may be employed to carry out the above mentioned manufacturing method, as well as to a plant for making tyres for vehicle wheels. prior art a tyre for vehicle wheels generally comprises a carcass structure including at least one carcass ply having respectively opposite end flaps turned up loop-wise around annular anchoring structures, each of said anchoring structures being usually made up of a substantially circumferential annular insert onto which at least one filling insert is applied, at a radially external position thereof. a belt structure comprising one or more belt layers, having textile or metallic reinforcing cords arranged in radially superposed relationship with each other and with the carcass structure, is associated to the latter. a tread band, made of elastomeric material like other semifinished products which constitute the tyre, is applied to the belt structure at a radially external position thereof. within the framework of the present description and in the following claims, the term "elastomeric material" is used to indicate a composition comprising at least one elastomeric polymer and at least one reinforcing filler. preferably, such composition further comprises additives such as, for example, a cross-linking agent and/or a plasticizer. thanks to the presence of the cross-linking agent, such material can be cross-linked by heating, so as to form the end product.
in addition, respective sidewalls of elastomeric material are also applied to the side surfaces of the carcass structure, each of them extending from one of the side edges of the tread band up to the respective annular anchoring structure at the beads. depending on the embodiment, these sidewalls can exhibit respective radially outer end edges either superposed on the side edges of the tread band, so as to form a design scheme of the type usually referred to as "overlying sidewalls", or interposed between the carcass structure and the side edges of the tread band itself, in accordance with a design scheme of the type referred to as "underlying sidewalls". in most of the conventional processes for tyre manufacture, the carcass structure and the belt structure together with the respective tread band are made separately from each other in respective work stations, to be mutually assembled at a later time. more particularly, the building of the carcass structure is carried out in a building station, and it first contemplates the deposition of the carcass ply or plies on a first drum usually identified as "building drum" to form a substantially cylindrical sleeve. the annular anchoring structures at the beads are fitted or formed on the opposite end flaps of the carcass ply or plies, which in turn are turned up around the annular structures themselves so as to enclose them in a sort of loop. simultaneously, in a finishing station provided with a second drum or auxiliary drum, an outer sleeve, which is substantially cylindrical as well, is manufactured; it comprises the belt layers laid down in radially superposed relationship with each other, and the tread band applied to the belt layers at a radially outer position thereof. the outer sleeve is then picked up from the auxiliary drum to be coupled with the carcass sleeve.
to this end, the outer sleeve is arranged in coaxial relation around the carcass sleeve, and then the carcass ply or plies are shaped into a toroidal conformation by axially moving the beads close to each other and simultaneously admitting fluid under pressure into the carcass sleeve, so as to determine the application of the belt/tread band sleeve to the carcass structure of the tyre at a radially outer position thereof. assembling of the carcass sleeve with the outer sleeve can be carried out on the same drum used for building the carcass sleeve, in which case reference is made to a "unistage manufacturing process". a manufacturing process of this type is described in document us 3,990,931, for example. alternatively, assembling may be carried out on a so-called "shaping drum" onto which the carcass sleeve and outer sleeve are transferred, to manufacture the tyre according to a so-called "two-stage manufacturing process", as described in document ep 0 613 757, for example. in conventional manufacturing methods the tread band is usually made of a continuously-extruded section member that, after being cooled for stabilisation of its geometrical conformation, is stored on suitable benches or reels. the semifinished product in the form of sections or of a continuous strip is then sent to a delivering unit which either picks up the sections or cuts the continuous strip into sections of predetermined length, each section constituting the tread band to be circumferentially applied onto the belt structure of a tyre being manufactured. in order to increase the resistance to radial stresses to which the finished tyre is subjected during use, such as for example the stresses caused by the centrifugal force at high speed, it has been proposed to provide the belt structure with at least one radially outer layer including reinforcing cords substantially parallel to the circumferential development direction of the tyre.
these reinforcing cords, usually called "zero-degree cords", are applied on the underlying belt layers - generally provided with textile or metallic reinforcing cords with crossed orientation - by circumferentially winding thereon, according to coils axially arranged side by side, a strip-like element of green elastomeric material including one or more reinforcing cords substantially parallel to one another. from the production point of view, however, this improvement of the overall mechanical characteristics of the tyre implies a difficult problem, namely that of reconciling the productivity (meaning the number of pieces that can be manufactured in a unit of time) of the carcass structure building station - which is normally high - with the productivity of the finishing station where the substantially cylindrical sleeve comprising the belt layers and the tread band is manufactured. in fact, the productivity of the finishing station is highly affected in this case by the inherent slowness of the coil winding step of the strip-like element including the circumferentially oriented reinforcing cords.
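purely by way of illustration (this sketch is not part of the disclosure, and all dimensions and names in it are assumptions), the slowness of the zero-degree winding step can be made concrete with a rough calculation: covering a belt layer in side-by-side coils of a narrow strip-like element requires roughly one drum revolution per strip width, so the deposited length grows quickly.

```python
import math

def zero_degree_winding(belt_width_mm, strip_width_mm, drum_diameter_mm):
    """Estimate the coil count and total strip length laid down when a
    strip-like element is wound in side-by-side coils over a belt layer
    of the given axial width (all figures illustrative)."""
    coils = math.ceil(belt_width_mm / strip_width_mm)   # one coil per strip width
    circumference_mm = math.pi * drum_diameter_mm       # length of a single coil
    total_length_m = coils * circumference_mm / 1000.0  # metres of strip deposited
    return coils, total_length_m

# hypothetical belt of 200 mm covered by a 10 mm strip on a 600 mm drum
coils, length_m = zero_degree_winding(belt_width_mm=200,
                                      strip_width_mm=10,
                                      drum_diameter_mm=600)
```

with these assumed figures the drum must complete 20 full revolutions and lay down nearly 38 m of strip for a single layer, which is why this step dominates the finishing-station cycle time.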
in order to somehow obviate this drawback, us patent 4,985,100 suggested increasing the productivity of the finishing station by using a building apparatus rotatable about a substantially vertical rotation axis and provided with two drums symmetrically arranged with respect to the rotation axis of the apparatus, and by carrying out the manufacture of the aforementioned substantially cylindrical sleeve by cyclically repeating the steps of: i) applying a radially inner layer of the belt structure on a first drum of the finishing station at a first working position; ii) manufacturing a radially outer layer of the belt structure at a second working position by circumferentially winding, according to coils axially arranged side by side, a strip-like element of elastomeric material including one or more reinforcing cords on a radially inner layer previously made on a second drum of the finishing station; iii) changing the position of the two drums; iv) applying at the first working position a tread band on the radially outer layer of the belt structure including the circumferentially oriented reinforcing cords by winding a section of a given length of a continuous strip of elastomeric material preformed in advance; v) removing the substantially cylindrical sleeve thus obtained from the first working position; wherein the step ii) of manufacturing the radially outer layer of the belt structure including the circumferentially oriented cords is carried out during the remaining steps iv) of manufacturing the tread band, v) of removing the cylindrical sleeve thus obtained, and i) of manufacturing the radially inner layer of the belt structure of a new cylindrical sleeve.
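the throughput benefit of this prior-art scheme can be sketched as a timing calculation (purely illustrative and not part of the disclosure; the step durations are invented assumptions): the slow winding step ii) on one drum overlaps steps iv), v) and i) on the other drum, so the cycle is limited only by the longer of the two parallel branches plus the drum position change iii).

```python
def cycle_time(t_i, t_ii, t_iii, t_iv, t_v):
    """Cycle time of the two-drum scheme described in US 4,985,100:
    step ii) runs in parallel with steps iv), v) and i); only the
    position change iii) is strictly sequential (durations in s,
    purely illustrative)."""
    parallel = max(t_ii, t_iv + t_v + t_i)
    return parallel + t_iii

# invented durations: winding step ii) is by far the slowest
sequential = 30 + 120 + 10 + 40 + 15   # all steps one after the other
overlapped = cycle_time(t_i=30, t_ii=120, t_iii=10, t_iv=40, t_v=15)
```

under these assumed durations the overlapped cycle takes 130 s against 215 s for a strictly sequential one, showing how the scheme hides the winding time behind the other steps without actually shortening it.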
in recent times and in order to further improve the mechanical characteristics and the quality of the tyre, it has been proposed to realise the tread band by winding a continuous elongated element according to coils arranged side by side directly on the belt structure rather than by winding and cutting to size sections of a continuous strip extruded in advance and stored on benches or in reels. from the practical point of view, this can be obtained - as described for example in international patent application wo 2004/041521 in the name of the same applicant - by an assembling process comprising the steps of: i) arranging a belt structure comprising at least one belt layer on an auxiliary drum; ii) applying a tread band on the belt structure by winding thereon at least one continuous elongated element of elastomeric material according to contiguous circumferential coils; iii) picking up the belt structure from the auxiliary drum to transfer the same to a position coaxially centred with respect to the carcass sleeve. such continuous elongated element is obtained in situ and forms a plurality of coils, the orientation and mutual overlapping parameters of which are suitably managed so as to control the variations in thickness to be given to the tread band during the manufacture, based on a predetermined deposition scheme preset on an electronic computer, with a considerable increase of the quality characteristics of the tread band, which in turn positively influence the tyre performance and life. from the production point of view, however, the achievement of these improvements exacerbates the problem of reconciling the productivity of the building station of the carcass structure with the productivity of the finishing station where the substantially cylindrical sleeve comprising the belt layers and the tread band is manufactured, in particular if the belt structure includes a layer of circumferentially oriented reinforcing cords.
the manufacturing processes suggested by us patent 4,985,100 and by international patent application wo 2004/041521, on the other hand, cannot obtain a sleeve wherein the belt structure includes a layer of circumferentially oriented reinforcing cords and wherein the tread band is made by coil winding of a continuous elongated element of elastomeric material in a cycle time compatible with that of the building station of the carcass structure.

problem underlying the invention

the applicant intends to solve the problem of manufacturing a high quality tyre while reconciling the different productivity rates of the building station of the carcass structure and of the finishing station intended to manufacture the substantially cylindrical belt structure/tread band sleeve, also in the event that such sleeve includes a belt structure provided with a layer of zero-degree reinforcing cords and a tread band made by winding coils of at least one continuous elongated element.

summary of the invention

according to the present invention, the applicant realized the possibility of achieving great improvements in terms of production flexibility and quality of the product by carrying out, in the present tyre manufacturing processes which provide for the assembly of semifinished products, a specific sequence of cyclically repeated steps carried out at least in part simultaneously in a finishing station, by supporting the various semifinished products being manufactured on at least two auxiliary drums and operating in at least two different working positions.
more particularly, the present invention relates, according to a first aspect thereof, to a method for manufacturing tyres for vehicle wheels comprising the steps of: a) building in a building station a substantially cylindrical carcass structure comprising at least one carcass ply operatively associated to annular anchoring structures axially spaced apart from each other; b) manufacturing in a finishing station a substantially cylindrical sleeve comprising a tread band applied at a radially outer position with respect to a belt structure comprising at least one belt layer including reinforcing cords substantially parallel to a circumferential development direction of the sleeve, said step b) comprising the steps of: b1) assembling a first belt structure at a first working position on a first auxiliary drum of the finishing station; b2) applying a tread band at at least one second working position at a radially outer position with respect to a second belt structure previously assembled on at least one second auxiliary drum of the finishing station, said application step being carried out by laying down according to a predetermined path at least one continuous elongated element of green elastomeric material at a radially outer position with respect to said second belt structure; b3) positioning the first auxiliary drum supporting the first belt structure at said at least one second working position; b4) positioning said at least one second auxiliary drum supporting the substantially cylindrical sleeve thus obtained at a picking position of the finishing station; c) transferring said substantially cylindrical sleeve from said picking position of the finishing station at a radially outer position with respect to a carcass structure built in the meantime in the building station; wherein said steps from b1) to b4) are repeated cyclically; wherein steps b1) and b2) are carried out at least in part simultaneously with one another; and wherein steps b3) and b4) are
carried out at least in part simultaneously with one another. preferred features of the manufacturing method according to the invention are defined in the attached dependent claims 2-35, the content of which is herein integrally incorporated by reference. in accordance with a further aspect of the invention, the above mentioned method can be carried out by means of a plant for manufacturing tyres for vehicle wheels comprising: a) a building station for building a substantially cylindrical carcass structure comprising at least one carcass ply operatively associated to annular anchoring structures axially spaced apart from each other; b) a finishing station for manufacturing a substantially cylindrical sleeve comprising a tread band applied at a radially outer position with respect to a belt structure comprising at least one belt layer including reinforcing cords substantially parallel to the circumferential development direction of the sleeve, said finishing station comprising: b1) a first auxiliary drum; b2) at least one second auxiliary drum; b3) a displacing apparatus adapted to support said auxiliary drums and to position said auxiliary drums at a first working position wherein said belt structure is assembled, at at least one second working position wherein said tread band is applied and at a picking position of said substantially cylindrical sleeve; said first and said at least one second working position being defined in different zones of the finishing station; b4) at least one delivery device of a strip-like element of green elastomeric material including at least one reinforcing cord, arranged at said first working position for operatively interacting with one of said auxiliary drums; b5) at least one delivery member of a continuous elongated element of green elastomeric material arranged at said at least one second working position for operatively interacting with one of said auxiliary drums; c) at least one transfer device of the substantially
cylindrical sleeve manufactured in the finishing station, adapted to operatively interact with one of said auxiliary drums at said picking position for transferring said substantially cylindrical sleeve at a radially outer position with respect to a carcass structure built in the building station. preferred features of the manufacturing plant according to the invention are defined in the attached dependent claims 37-52, the content of which is herein integrally incorporated by reference. according to a further aspect thereof, the invention relates to a plant for making tyres for vehicle wheels, comprising a manufacturing plant as defined above and at least one vulcanisation station for vulcanising the tyres manufactured in said manufacturing plant. additional features and advantages of the invention will become more clearly apparent from the detailed description of a preferred, but not exclusive, embodiment of a method and of a plant for manufacturing tyres for vehicle wheels, in accordance with the present invention.

brief description of the drawings

such a description will be set out hereinafter with reference to the accompanying drawings, given by way of indication and not of limitation, wherein:
- figure 1 is a schematic top view of a first preferred embodiment of a plant for manufacturing tyres in accordance with the present invention;
- figure 2 is a schematic fragmentary cross-section view of a tyre obtainable in accordance with the method and the plant of the present invention;
- figure 3 is a schematic top view of a second preferred embodiment of a plant for manufacturing tyres in accordance with the present invention.

detailed description of the preferred embodiments

with reference to figure 1, a plant for manufacturing tyres for vehicle wheels, adapted to carry out a manufacturing method in accordance with the present invention, according to a first preferred embodiment, is generally indicated at 1.
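the cyclic sequence of steps b1)-b4) set out above, with two auxiliary drums alternating between the belt-assembly and tread-application positions, can be sketched as a toy trace (purely illustrative and not part of the disclosure; the drum names, position labels and log format are all invented for the example):

```python
def run_finishing_cycles(n_cycles):
    """Toy trace of steps b1)-b4): two auxiliary drums alternate between
    working position 'a' (belt assembly, step b1) and working position 'b'
    (tread application, step b2); the position change b3)/b4) moves both
    drums at once. Names and event log are illustrative assumptions."""
    position = {"drum_1": "a", "drum_2": "b"}
    log = []
    for cycle in range(n_cycles):
        at_a = [d for d, p in position.items() if p == "a"][0]
        at_b = [d for d, p in position.items() if p == "b"][0]
        # b1) and b2) run at least in part simultaneously on the two drums
        log.append((cycle, "b1+b2", at_a, at_b))
        # b3) and b4): swap the drums between the two positions, also
        # at least in part simultaneously
        position[at_a], position[at_b] = "b", "a"
        log.append((cycle, "b3+b4 swap", at_a, at_b))
    return log

trace = run_finishing_cycles(2)
```

each pass through the loop corresponds to one finished belt structure/tread band sleeve: while a fresh belt structure is assembled at position a, the previously assembled one receives its tread band at position b, so neither drum ever sits idle.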
a tyre that can be manufactured by the plant 1 is generally indicated at 2 in figure 2 and can be a tyre intended to equip the wheels of a car or the wheels of a heavy vehicle. the tyre 2 essentially comprises a carcass structure 3 having a substantially toroidal conformation, a belt structure 4 having a substantially cylindrical conformation, circumferentially extending around the carcass structure 3, a tread band 5 applied to the belt structure 4 at a radially outer position thereof, and a pair of sidewalls 6 laterally applied, on opposite sides, to the carcass structure 3 and each extending from a side edge of the tread band 5 up to a radially inner edge of the carcass structure 3. each sidewall 6 essentially comprises a layer of elastomeric material having a suitable thickness and may have a radially outer end tailpiece 6a at least in part covered by the axial end of the tread band 5, as shown in solid line in figure 2, according to a construction scheme of the type usually identified as "underlying sidewalls". alternatively, the radially outer end tailpieces 6a of the sidewalls 6 can be laterally superposed on the corresponding axial ends of the tread band 5, as shown in dashed line in figure 2, to realise a construction scheme of the type usually identified as "overlying sidewalls". the carcass structure 3 comprises a pair of annular anchoring structures 7 integrated in regions usually identified as "beads", each of them being for example made up of a substantially circumferential annular insert 8, usually called "bead core", and carrying an elastomeric filler 9 at a radially outer position thereof. turned up around each of the annular anchoring structures 7 are the end flaps 10a of one or more carcass plies 10 comprising textile or metallic cords extending transversely with respect to the circumferential development of the tyre 2, possibly according to a predetermined inclination between the two annular anchoring structures 7.
the belt structure 4 can in turn comprise one or more belt layers 11a, 11b comprising reinforcing cords made of a suitable material, for example metallic or textile cords. preferably, said reinforcing cords are suitably inclined with respect to the circumferential development of the tyre 2, according to respectively crossed orientations between one belt layer and the other. the belt structure 4 also comprises at least one belt layer 12 at a radially outer position with respect to the belt layers 11a, 11b and including at least one reinforcing cord, preferably a plurality of cords circumferentially wound according to coils axially arranged side by side and usually called "zero-degree cords" in the art. in a preferred embodiment, the belt structure 4 can comprise a belt layer 12 including zero-degree cords substantially extending for the entire transversal development of the belt structure 4; alternatively, the belt structure 4 can comprise a pair of belt layers 12, each including zero-degree cords, arranged near opposite shoulder zones of the tyre 2 and axially extending along a portion of limited width, as schematically shown in figure 2. in heavy-duty tyres, such as tyres for trucks and heavy transport vehicles, the belt structure 4 may also incorporate, at a radially outer position with respect to the belt layer 12, a further layer 13 made of elastomeric material preferably including a plurality of reinforcing cords, usually referred to as "breaker layer" and intended to prevent foreign bodies from entering the underlying belt layers. the tread band 5 may essentially consist of a single elastomeric material or, alternatively, it may comprise portions consisting of respective elastomeric materials having appropriate composition and appropriate mechanical and chemical-physical characteristics.
these portions may be constituted by one or more radially superposed layers having suitable thickness, by suitably shaped sectors arranged according to a predetermined configuration along the axial development of the tread band or by a combination of both. thus, for example, the tread band 5 may include a radially inner layer or base layer, essentially consisting of a first elastomeric material having appropriate composition and mechanical and chemical-physical characteristics, for example adapted to reduce the rolling resistance of the tyre, and a radially outer layer essentially consisting of a second elastomeric material having composition and mechanical and chemical-physical characteristics differing from the first elastomeric material, for example adapted to optimise the grip performance on wet surfaces and the wear resistance of the tyre. the individual components of the carcass structure 3 and of the belt structure 4, such as in particular the annular anchoring structures 7, the carcass plies 10, the belt layers 11a, 11b and the elements of elastomeric material (strip-like elements) including at least one reinforcing cord and intended to form the belt layer 12 and optionally the breaker layer 13, are supplied to the plant 1 in the form of semifinished products made during preceding manufacturing steps, to be suitably assembled with each other according to the steps described hereinafter. with reference to figure 1, a first preferred embodiment of a plant 1 for manufacturing tyres for vehicle wheels according to the invention, for example for manufacturing a tyre 2 of the type illustrated above, shall now be described. in the following description, reference will be made to the various components of the tyre 2 in their state as semifinished products and, as regards the various elastomeric materials used, in their green state, that is, prior to the vulcanisation operations which link the various semifinished products together to give the final tyre 2.
the plant 1 comprises a building station 14 intended to build a substantially cylindrical carcass structure 3 comprising one or more carcass plies 10 operatively associated to the annular anchoring structures 7 axially spaced apart from each other. the building station 14 comprises a primary drum 15, not described in detail as it can be made in any convenient manner, on which the carcass ply or plies 10 are preferably wound; said plies come from a feeding line 16 along which they are cut into sections of appropriate length related to the circumferential extension of the primary drum 15, before being applied thereon to form a so-called substantially cylindrical "carcass sleeve". the building station 14 also comprises a line (not shown) for feeding the sidewalls 6, which line supplies a semifinished product in the form of a continuous strip of elastomeric material from which sections of predetermined length are cut out, said length being related to the circumferential extension of the primary drum 15 and of the tyre 2 to be manufactured. alternatively, the building station 14 can be provided with a further building drum (not shown) on which the assembly of the carcass structure 3 components and possibly also of the sidewalls 6 takes place, and with a transfer device (also not shown) for transferring the assembled carcass sleeve onto the primary drum 15. 
the plant 1 further comprises a finishing station 17 intended to manufacture a substantially cylindrical sleeve comprising: i) the tread band 5 including one or more green elastomeric materials, the tread band being applied at a radially outer position with respect to ii) the belt structure 4 comprising the layer 12 including reinforcing cords substantially parallel to the circumferential development direction of the substantially cylindrical sleeve, which layer 12, in this preferred variant, is applied at a radially outer position with respect to the layers 11a, 11b including reinforcing cords suitably inclined with respect to the circumferential development of the sleeve according to respectively crossed orientations between one belt layer and the other, and optionally the breaker layer 13 which, in this preferred variant, is applied at a radially outer position with respect to the layer 12. the finishing station 17 comprises in turn a displacing apparatus 18 adapted to support a first auxiliary drum 19 and a second auxiliary drum 20 and to position said auxiliary drums 19, 20 at a plurality of working positions at which the operating steps required for manufacturing the above substantially cylindrical sleeve are carried out. more particularly, the displacing apparatus 18 is adapted to position the auxiliary drums 19, 20 at a first working position, indicated with letter a in figure 1, wherein the belt structure 4 is assembled, at least one second working position, indicated with letter b in figure 1, wherein the tread band 5 is applied, and a picking position, indicated with letter d in figure 1, of the substantially cylindrical sleeve manufactured in the finishing station 17. in this preferred embodiment, the picking position d of the substantially cylindrical sleeve substantially coincides with the first working position a.
the first working position a and the second working position b are defined in different zones of the finishing station 17 and, preferably, they are defined at opposite sides of the displacing apparatus 18. in the preferred embodiment illustrated in figure 1, moreover, it is provided that in the picking position d the auxiliary drum 19, 20 positioned therein by the displacing apparatus 18 is arranged according to a relationship of coaxial alignment with the primary drum 15 of the building station 14. the finishing station 17 comprises an apparatus for applying the belt structure 4 on the same auxiliary drum, generally indicated at 21, adapted to operatively interact with the auxiliary drum 19, 20 positioned at the first working position a by the displacing apparatus 18. the applying apparatus 21 comprises in turn at least one delivery device 24 of the belt layers 11a, 11b arranged at the first working position a for operatively interacting with the auxiliary drum 19, 20 arranged at said working position by the displacing apparatus 18. by way of example, the delivery device 24 may comprise, in a way known per se, at least one feeding line 24a, along which the semifinished products in the form of a continuous strip are caused to move forward, said strip being then cut into sections of a length corresponding to the circumferential development of the auxiliary drums 19, 20 simultaneously with the formation of the respective belt layers 11a, 11b on the same drums. the applying apparatus 21 of the finishing station 17 further comprises at least one delivery device 22 of a strip-like element 23 of green elastomeric material including at least one reinforcing cord, preferably a plurality of textile or metallic reinforcing cords, which strip-like element 23 is applied at a radially outer position with respect to the belt layers 11a, 11b to form axially contiguous circumferential coils intended to form the belt layer 12.
to this end, the delivery device 22 is arranged at the first working position a for operatively interacting with the auxiliary drum 19, 20 arranged at said working position by the displacing apparatus 18. in a preferred embodiment, the apparatus 21 further comprises at least one delivery device of a further belt layer preferably including a plurality of reinforcing cords, arranged at the first working position a for operatively interacting with the auxiliary drum 19, 20 arranged at said working position by the displacing apparatus 18 for forming the aforementioned breaker layer 13. the finishing station 17 further comprises at least one delivery member, preferably at least two delivery members 25, 26, of respective continuous elongated elements 27, 28 of green elastomeric material, which delivery members are arranged at the second working position b for operatively interacting with the auxiliary drum 19, 20 arranged at said working position by the displacing apparatus 18. in the preferred embodiment shown in figure 1, the delivery members 25, 26 of the continuous elongated elements 27, 28 are arranged at the second working position b for operatively interacting at opposite sides of the auxiliary drum 19, 20 arranged at said working position by the displacing apparatus 18. the delivery members 25, 26 are adapted to lay down the continuous elongated elements 27, 28 according to contiguous circumferential coils on a belt structure 4 previously assembled on the auxiliary drum 19 or 20 arranged at the second working position b.
more particularly, the delivery members 25, 26 can for example comprise an extruder or, alternatively, an applicator roller or other member adapted to deliver the continuous elongated elements 27, 28 at a radially outer position with respect to the belt structure 4 supported by the auxiliary drum 19 or 20 at the second working position b, simultaneously with winding of the elongated elements themselves on the belt structure 4 as will be better described hereinafter. preferably, each of the delivery members 25, 26 comprises at least one extruder indicated in figure 1 with reference numerals 29, 30. in order to wind the continuous elongated elements delivered by the extruders 29, 30 on the belt structure 4, the displacing apparatus 18 of the preferred embodiment shown in figure 1 comprises at least one drum rotation unit, preferably a pair of rotation units 31, 32, adapted to rotate the auxiliary drums 19, 20 about their geometrical axis. in this way, it is advantageously possible to carry out, in an effective manner, a controlled deposition of the continuous elongated elements 27, 28 at a radially outer position with respect to the belt structure 4. preferably, and according to what is illustrated in figure 1, the displacing apparatus 18 is of the substantially turret-like type and is adapted to support the auxiliary drums 19, 20 at positions angularly offset with each other, for example offset at an angle of about 180°. preferably, the displacing apparatus 18 is further provided with at least one driving unit 35 adapted to rotate the displacing apparatus 18 as a whole about a substantially vertical rotation axis y-y so as to position the auxiliary drums 19, 20 at the above first and second working positions a, b.
preferably, the auxiliary drums 19, 20 and the respective driving units 31, 32 are slidably supported by the displacing apparatus 18 by a supporting carriage, not shown in detail in figure 1, which is in turn slidably mounted on a rotatable supporting platform 39 of the displacing apparatus 18. preferably, each auxiliary drum 19, 20 is in integral translating motion with the corresponding rotation unit 31, 32 along the rotatable supporting platform 39. in a preferred embodiment, the displacing apparatus 18 comprises at least one drum translating unit adapted to carry out controlled axial movements of the drums 19, 20 at the working positions a, b or at the picking position d of the substantially cylindrical sleeve including the belt structure 4 and the tread band 5 manufactured in the finishing station 17. preferably, said drum translating unit causes controlled axial movements not only of the auxiliary drums 19, 20 but also of the relevant rotation units 31, 32. in the preferred embodiment shown in figure 1, the displacing apparatus 18 comprises a pair of drum translating units 33, 34, for example of the type comprising a worm screw adapted to engage with a corresponding nut thread associated to said carriage supporting the auxiliary drums 19, 20. clearly, the drum translating units can comprise actuating mechanisms differing from those indicated above by way of example and selectable by a man skilled in the art as a function of specific application requirements. preferably, the drum translating units 33, 34 of the displacing apparatus 18 move the drums 19, 20 between the working positions a, b or the picking position d and a stand-by position defined between said positions and the rotation axis y-y of the displacing apparatus 18. preferably, said stand-by positions of the auxiliary drums 19, 20 are defined within the outer perimeter of the rotatable supporting platform 39.
preferably, the drum translating units 33, 34 move the drums 19, 20 along a radial direction passing through the rotation axis y-y of the displacing apparatus 18 as illustrated by the double arrows f3, f4 in figure 1. the drum translating units 33, 34 thus allow to achieve the following advantageous technical effects: i) that of properly moving the auxiliary drums 19, 20 with respect to the delivery members 25, 26; ii) that of carrying out a controlled deposition of the continuous elongated elements 27, 28 at a radially outer position with respect to the belt structure 4 according to coils partially arranged side by side and/or partially superposed with each other according to what is required to manufacture a tread band 5 having a high quality level; iii) that of carrying out a predetermined offset of the belt layers delivered by the applying apparatus 21, for example to compensate any design asymmetries of the tyre 2; and iv) that of decreasing the transversal dimensions and the inertia forces during the displacement of the auxiliary drums 19, 20 between the working positions a and b by moving the auxiliary drums 19, 20 close to the rotation axis y-y of the displacing apparatus 18. advantageously, moreover, the drum translating units 33, 34 allow to carry out a controlled deposition of the continuous elongated elements 27, 28 while maintaining stationary the delivery members 25, 26 with a simplification of the mechanical application system of the continuous elongated elements and, thereby, with a reduction of the costs for realising the plant 1.
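the controlled axial advance of the drum per revolution determines the axial pitch of the circumferential coils: the smaller the advance with respect to the width of the elongated element, the greater the overlap of contiguous coils. a minimal sketch of this relation is given below; the function and parameter names (strip width, overlap fraction, band width) are illustrative assumptions introduced here and are not part of the plant 1:

```python
def coil_axial_positions(strip_width_mm, overlap_fraction, band_width_mm):
    """Axial centre positions of circumferential coils laid side by side.

    Illustrative sketch only: the axial pitch between consecutive coils
    shrinks as the overlap of contiguous coils grows, which is one way the
    deposited layer can be built up over the band width.
    """
    pitch = strip_width_mm * (1.0 - overlap_fraction)  # axial advance per drum turn
    positions = []
    x = strip_width_mm / 2.0
    # lay coils until the next one would protrude beyond the band edge
    while x + strip_width_mm / 2.0 <= band_width_mm + 1e-9:
        positions.append(round(x, 3))
        x += pitch
    return positions
```

for example, a 10 mm wide element laid with no overlap over a 40 mm band gives four coils side by side, whereas an overlap of one half over the same band gives seven coils.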
the plant 1 further comprises at least one transfer device 36 of the substantially cylindrical sleeve manufactured in the finishing station 17, adapted to operatively interact with one of the auxiliary drums 19, 20 at the above-identified picking position d - in this case substantially coinciding with the first working position a - for transferring the substantially cylindrical sleeve manufactured in the finishing station 17 at a radially outer position with respect to the carcass structure 3 built in the building station 14. the transfer device 36 preferably has a substantially annular conformation and is operated in a way known per se (not shown) so as to be arranged around the auxiliary drum 19, 20 positioned at the picking position d for picking up the substantially cylindrical sleeve including the belt structure 4 and the tread band 5 manufactured in the finishing station 17 and for transferring said sleeve coaxially to the carcass structure 3 built in the building station 14. in an alternative preferred embodiment, not shown for simplicity, the plant 1 may further comprise a third delivery member of a respective continuous elongated element of green elastomeric material arranged at the picking position d (for example coinciding with the first working position a) of the substantially cylindrical sleeve manufactured in the finishing station 17 for operatively interacting with the auxiliary drum 19, 20 positioned therein by the displacing apparatus 18. in this case, the plant 1 allows to apply the tread band 5 both in the working position b and in the picking position d (for example coinciding with the first working position a) of the substantially cylindrical sleeve, wherever this is required to meet specific application requirements.
the plant 1 further comprises at least one apparatus (not shown, being known per se) for shaping the carcass structure 3 according to a substantially toroidal shape so as to associate the substantially cylindrical sleeve comprising the belt structure 4 and the tread band 5 manufactured in the finishing station 17 to the carcass structure 3. preferably, this shaping apparatus is adapted to operatively interact with the primary drum 15 within the building station 14 so as to carry out, as it will be better understood hereinafter, a so-called unistage manufacturing process. the plant 1 finally comprises a control unit 37 by means of which an operator 38 can program and manage the various operating steps that can be carried out by the same manufacturing plant. with reference to the plant 1 described above, a first preferred embodiment of a method according to the invention for manufacturing tyres for vehicle wheels, for example the tyre 2 described above, will now be described. in particular, the method will be illustrated with reference to steady-state working conditions, as illustrated in figure 1, wherein the auxiliary drum 19 is in the first working position a and does not support any semifinished products, whereas the auxiliary drum 20 is in the second working position b and supports a belt structure 4 assembled on said drum in a previous step of the method. in a first step of the method, a substantially cylindrical carcass structure 3 comprising at least one carcass ply 10 operatively associated to the annular anchoring structures 7 axially spaced apart from each other, is built in the building station 14. in this step, the carcass ply or plies 10 coming from the feeding line 16 along which they are cut into sections of appropriate length, related to the circumferential development of the primary drum 15, before being applied thereto, are wound on the primary drum 15 to form a so-called substantially cylindrical "carcass sleeve".
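the cutting of the semifinished strip into sections whose length is related to the circumferential development of the primary drum 15 admits a simple sketch, assuming for illustration a plain cylindrical drum; the function name and the diameter parameter are assumptions introduced here, not part of the present description:

```python
import math

def section_length_mm(drum_diameter_mm):
    """Length to which a continuous strip is cut so that one section
    matches the circumferential development of a cylindrical drum
    (illustrative assumption: circumference = pi * diameter).
    """
    return math.pi * drum_diameter_mm
```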
afterwards, the annular anchoring structures 7 are fitted onto the end flaps 10a of ply/plies 10 to subsequently carry out the turning-up of the end flaps themselves to cause an engagement of the anchoring structures 7 into the loops thus formed by the turned-up ply/plies 10. the tyre sidewalls 6 may also be applied to the carcass sleeve, which sidewalls come from at least one respective sidewall-feeding line (not shown) supplying a semifinished product in the form of a continuous strip of elastomeric material, from which sections of predetermined length are cut out, said length being related to the circumferential development of the primary drum 15 and of the tyre 2 to be manufactured. the method of the invention provides for the manufacture, in the finishing station 17, of a substantially cylindrical sleeve comprising the tread band 5 applied at a radially outer position with respect to the belt structure 4 including at least one layer 12 including reinforcing cords substantially parallel to the circumferential development direction of the sleeve. the manufacture of this substantially cylindrical sleeve occurs at least in part simultaneously with the assembly of the components of the carcass structure 3 in the form of cylindrical sleeve (or carcass sleeve) on the primary drum 15. more particularly, the manufacture of the substantially cylindrical sleeve including the belt structure 4 and the tread band 5 carried out in the finishing station 17 comprises the operating steps illustrated hereinafter. according to the invention, these steps are carried out at least in part simultaneously. in a first step, a first belt structure 4 is assembled at the first working position a on the first auxiliary drum 19 of the finishing station 17. 
in a preferred embodiment, the assembly step of the first belt structure 4 provides in the first place for carrying out the steps of applying at a radially outer position with respect to the first auxiliary drum 19 the first belt layer 11a comprising respective reinforcing cords inclined with respect to the circumferential development direction of the sleeve and of applying at a radially outer position with respect to the first belt layer 11a the second belt layer 11b comprising reinforcing cords inclined along a crossed direction with respect to said reinforcing cords belonging to the first belt layer 11a. advantageously, these steps are carried out by means of the delivery device 24 of the belt layers which operatively interacts with the auxiliary drum 19 positioned at the first working position a by the displacing apparatus 18 and by the rotation unit 31 which rotates the auxiliary drum 19 about its geometrical axis during the application of the various semifinished products. more specifically, the feeding line 24a of the delivery device 24 delivers semifinished products in the form of a continuous strip, which are then cut into sections of a length corresponding to the circumferential development of the auxiliary drum 19 simultaneously with the formation of the respective belt layers 11a, 11b on the same drum, which is simultaneously rotated by the rotation unit 31. in a preferred embodiment, the assembly step of the first belt structure 4 therefore provides for carrying out the step of applying at a radially outer position with respect to the first auxiliary drum 19 at least one strip-like element 23 of green elastomeric material including the reinforcing cord(s) to form axially contiguous circumferential coils, so as to obtain the belt layer 12 including reinforcing cords substantially parallel to the circumferential development direction of the substantially cylindrical sleeve being manufactured.
preferably, said strip-like element 23 is applied at a radially outer position with respect to the second belt layer 11b substantially along the entire transversal development of the first belt structure 4 or, alternatively, only at opposed axial ends of the underlying belt layers 11a, 11b. advantageously, this step is carried out by the delivery device 22 of the application apparatus 21, which is also arranged at the first working position a for operatively interacting with the auxiliary drum 19 arranged therein by the displacing apparatus 18. in a preferred embodiment, the assembly step of the first belt structure 4 finally provides for carrying out the step of applying the breaker layer 13 of green elastomeric material preferably including a plurality of reinforcing cords preferably inclined with respect to the circumferential development direction of the sleeve, at a radially outer position with respect to the belt layer 12. advantageously, this step is carried out by a further delivery device of a belt layer preferably including a plurality of reinforcing cords (delivery device not shown for simplicity in figure 1), arranged at the first working position a for operatively interacting with the auxiliary drum 19 arranged therein by the displacing apparatus 18. the manufacture of the substantially cylindrical sleeve including the belt structure 4 and the tread band 5 provides for forming a tread band 5 at a radially outer position with respect to a second belt structure 4 assembled on the second auxiliary drum 20 in a previous operating step of the method. according to the invention, this forming step of the tread band 5 on the second auxiliary drum 20 arranged at the second working position b is carried out at least in part simultaneously with the step of assembling the first belt structure 4 on the first auxiliary drum 19 arranged at the first working position a.
more particularly, the method of the invention provides for applying a tread band 5 at the second working position b at a radially outer position with respect to a second belt structure 4 previously assembled on the second auxiliary drum 20 of the finishing station 17 at the first working position a. advantageously, the application step of the tread band 5 is carried out by laying down according to a predetermined path at least one continuous elongated element of green elastomeric material at a radially outer position with respect to the second belt structure 4 assembled on the second auxiliary drum 20. in a preferred embodiment, the application step of the tread band 5 is carried out at the second working position b by laying down said at least two continuous elongated elements 27, 28 of green elastomeric material at a radially outer position with respect to the belt structure 4 assembled on the second auxiliary drum 20. preferably, the continuous elongated elements 27, 28 are laid down at opposite sides of the second auxiliary drum 20 arranged at the second working position b by the displacing apparatus 18. preferably, the continuous elongated elements 27, 28 consist of respective elastomeric materials having different mechanical and/or chemical-physical characteristics so as to impart the desired performance to the tread band 5. advantageously, the application step of the tread band 5 is carried out by the delivery members 25, 26 arranged at the second working position b for operatively interacting with the auxiliary drum 20 arranged therein by the displacing apparatus 18. in an alternative embodiment, at least one of the delivery members 25, 26 can deliver at least one of said continuous elongated elements 27, 28 in the form of a semifinished product made of elastomeric material in the form of a continuous strip, so as to form a portion of the tread band 5, such as a radially inner layer thereof. 
preferably, this strip has a width substantially equal to the transversal development of the tread band 5 and is preferably cut into sections of a length corresponding to the circumferential development of the auxiliary drum 20 simultaneously with the formation of at least one portion of the tread band 5 on the same drum, which is simultaneously rotated by the rotation unit 32. preferably, however, the delivery of the continuous elongated elements 27, 28 is carried out by extrusion through the extruders 29, 30 of the delivery members 25, 26. preferably, the continuous elongated elements 27, 28 delivered by each extruder 29, 30 can advantageously possess a flattened section, so as to modulate the thickness of the elastomeric layer formed by them at a radially outer position with respect to the belt structure 4 by changing the overlapping amount of the contiguous coils and/or the orientation of the profile along a transversal direction of each elongated element 27, 28 coming from the corresponding extruder 29, 30 with respect to the underlying surface. preferably, the continuous elongated elements 27, 28 are laid down according to contiguous circumferential coils axially arranged side by side and/or radially superposed at a radially outer position with respect to the second belt structure 4 supported by the auxiliary drum 20 at the second working position b. in this preferred embodiment, the application step of the tread band 5 is carried out by delivering the continuous elongated elements 27, 28 by means of the delivery members 25, 26 arranged at the second working position b near the second auxiliary drum 20, simultaneously with winding of the continuous elongated elements 27, 28 on said drum. 
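the modulation of the thickness of the elastomeric layer by the overlapping amount of the contiguous coils admits a simple first-order model: if each coil overlaps the previous one by a fraction f of the element width, the same cross-section of material covers only a fraction (1 - f) of that width per turn, so the mean thickness grows as 1 / (1 - f). the sketch below illustrates this; it is an assumption introduced here for illustration, not a relation stated in the present description:

```python
def mean_layer_thickness(strip_thickness_mm, overlap_fraction):
    """Mean thickness of a layer wound from a flattened elongated element.

    Illustrative model only: with an overlap fraction f of contiguous
    coils, the mean deposited thickness rises as 1 / (1 - f).
    """
    if not 0.0 <= overlap_fraction < 1.0:
        raise ValueError("overlap fraction must be in [0, 1)")
    return strip_thickness_mm / (1.0 - overlap_fraction)
```

for example, a 2 mm thick flattened element laid with half-width overlap builds a layer of about 4 mm mean thickness.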
in particular, such winding is accomplished by carrying out, simultaneously with the application of the continuous elongated elements 27, 28, the steps of: - imparting to the second auxiliary drum 20 carrying the second belt structure 4 a rotary motion about a geometric axis thereof, so as to circumferentially distribute the continuous elongated elements 27, 28 at a radially outer position with respect to the second belt structure 4; - carrying out controlled relative displacements between the second auxiliary drum 20 and the delivery members 25, 26 to form with the continuous elongated elements 27, 28 a plurality of coils arranged in mutual side by side relationship to define at least one portion of the tread band 5. in this preferred embodiment, the controlled relative displacements between the second auxiliary drum 20 and the delivery members 25, 26 are preferably carried out by moving the second auxiliary drum 20 with respect to said delivery members. preferably, the continuous elongated elements 27, 28 are delivered by the extruders 29, 30 simultaneously with a controlled rotation movement of the auxiliary drum 20 about its geometrical axis and a controlled translation movement of said drum with respect to the delivery members 25, 26, for example along a direction substantially parallel to said axis. advantageously, this rotation-translation movement of the auxiliary drum 20 is carried out by means of the displacing apparatus 18, in particular thanks to the action of the rotation unit 32 and of the translating unit 34 of such apparatus. in this preferred embodiment of the method of the invention and thanks to the delivery of two continuous elongated elements 27, 28, it is advantageously possible to form, in a very flexible manner from the production point of view, a tread band 5 having structural features capable to achieve the desired performance of the tyre 2. 
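the simultaneous rotary motion of the drum and the controlled translation with respect to the delivery members lay the continuous elongated element down along a substantially helical path. the following sketch samples such a path; the parameters (radius, number of turns, axial pitch, samples per turn) are illustrative assumptions:

```python
import math

def helix_path(radius_mm, turns, pitch_mm, points_per_turn=4):
    """Sample the centre line laid down when the drum rotates while
    translating axially relative to the delivery member (a helix).

    Each sample is (x_axial, y, z) with the drum axis along x; the axial
    advance per revolution equals the pitch. Illustrative sketch only.
    """
    samples = []
    n = int(turns * points_per_turn)
    for i in range(n + 1):
        theta = 2.0 * math.pi * i / points_per_turn
        x = pitch_mm * i / points_per_turn  # axial advance per revolution = pitch
        samples.append((x, radius_mm * math.cos(theta), radius_mm * math.sin(theta)))
    return samples
```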
thus, for example, it is advantageously possible to form - in a preferred embodiment - a tread band 5 including a pair of radially superposed layers, respectively inner and outer, according to a configuration known in the art with the term of "cap-and-base". according to this preferred embodiment, the application step of the tread band 5 is carried out at the second working position b by laying down one of the aforementioned continuous elongated elements, for example the continuous elongated element 27, at a radially outer position with respect to the second belt structure 4 supported by the auxiliary drum 20 along substantially its entire transversal development so as to form a radially inner layer of the tread band 5. afterwards, the application step of the tread band 5 provides for laying down the second continuous elongated element 28 at a radially outer position with respect to the radially inner layer of the tread band 5 thus formed. advantageously, the laying down of the second continuous elongated element 28 is carried out along substantially the entire transversal development of said radially inner layer so as to form a radially outer layer of the tread band 5. in this preferred embodiment, therefore, the laying down of the continuous elongated elements 27, 28 according to contiguous circumferential coils axially arranged side by side and/or radially superposed is carried out in two consecutive steps. in a further preferred alternative embodiment, it is also advantageously possible to form a tread band 5 including two or more axially aligned sectors having specific mechanical characteristics according to a configuration which allows to achieve a plurality of advantageous technical effects, such as for example an improved resistance to the transversal stresses acting on the tread band 5 during use of the tyre 2, or the possibility of keeping the grip performance of the tyre 2 substantially constant as the tread band 5 wears out.
according to this preferred alternative embodiment, the application step of the tread band 5 is carried out at the second working position b by laying down one of said continuous elongated elements, for example the continuous elongated element 27, at a radially outer position with respect to at least one portion of the second belt structure 4 supported by the auxiliary drum 20 so as to form a corresponding portion of the tread band 5. afterwards, the application step of the tread band 5 provides for laying down the second continuous elongated element 28 at an axially aligned position with respect to the above portion formed by the continuous elongated element 27, so as to form a further portion of the tread band 5. in this way it is possible to form a tread band 5 having at least two axially aligned portions or sectors having different mechanical and chemical-physical characteristics. in this preferred alternative embodiment, the laying down of the continuous elongated elements 27, 28 can be carried out both in consecutive steps and at least in part simultaneously. once said steps of assembling the first belt structure 4 on the first auxiliary drum 19 and of applying the tread band 5 at a radially outer position with respect to the second belt structure 4 previously assembled on the second auxiliary drum 20 have been completed, the method of the invention provides for carrying out the steps of positioning the first auxiliary drum 19 supporting the first belt structure 4 at the second working position b and of positioning at the picking position d of the finishing station 17 the second auxiliary drum 20 supporting the substantially cylindrical sleeve including the tread band 5 applied at a radially outer position with respect to the second belt structure 4. as described above, in a preferred embodiment of the invention, the picking position d of the substantially cylindrical sleeve thus manufactured substantially coincides with the first working position a. 
according to the method of the invention, said steps of positioning the auxiliary drums 19 and 20 respectively at the second working position b and at the first working position a, are carried out at least in part simultaneously with each other. in particular, such steps are preferably carried out by means of the displacing apparatus 18. in a preferred embodiment and thanks to the fact that the displacing apparatus 18 is of the substantially turret-like type and supports the auxiliary drums 19, 20 at positions preferably angularly offset with each other, said positioning steps of the auxiliary drums 19 and 20 are carried out by rotating the displacing apparatus 18 about the substantially vertical rotation axis y-y. in particular, such rotary motion is carried out thanks to the driving unit 35. in other words, in this preferred embodiment, the position of the auxiliary drums 19 and 20 is effectively exchanged practically simultaneously by simply rotating the displacing apparatus 18 about the rotation axis y-y, for example according to one of the two directions of rotation, clockwise and counter-clockwise, indicated by the arrows f1 and respectively f2 in figure 1. in a preferred embodiment and thanks to the fact that the auxiliary drums 19, 20 are slidably supported by the displacing apparatus 18, the method of the invention comprises the further step of translating the auxiliary drums 19, 20 towards the rotation axis y-y of the displacing apparatus 18 before carrying out the rotation step of such apparatus. preferably, said step is carried out by the drum translating units 33 and 34 by translating both auxiliary drums 19, 20 and the relevant rotation units 31, 32, which are preferably translationally integral with the same drums.
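the exchange of the positions of the auxiliary drums 19, 20 by rotating the turret-like displacing apparatus 18 can be modelled as adding the mutual angular offset, modulo a full turn, to each drum position. the sketch below assumes drums identified by name and angular positions in degrees; these names and values are illustrative assumptions:

```python
def rotate_turret(drum_positions, offset_deg=180):
    """Angular positions of the drums after rotating the turret by the
    mutual offset (illustrative sketch; with two drums offset by 180
    degrees, a single rotation swaps their working positions).
    """
    return {drum: (angle + offset_deg) % 360
            for drum, angle in drum_positions.items()}
```

for example, rotating a two-drum turret by 180 degrees brings the drum at position a (0 degrees) to position b (180 degrees) and vice versa.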
in this way, it is advantageously possible to decrease both the transversal dimensions and the inertia forces during the movements of the auxiliary drums 19, 20 between the working positions a and b with an increase of the safety features of the plant 1 and with a reduction of the driving force required of the driving unit 35 for rotating the displacing apparatus 18. preferably, the auxiliary drums 19, 20 and the relevant rotation units 31, 32 are translated towards the rotation axis y-y of the displacing apparatus 18 and are arranged at the above stand-by positions defined inside the outer perimeter of the rotatable supporting platform 39 of such apparatus. advantageously, the auxiliary drums 19, 20 of the displacing apparatus 18 are arranged in this case at a safe distance from both the control unit 37 and the operator 38 during the rotation of the displacing apparatus 18, as schematically illustrated in dotted line in figure 1. once these substantially simultaneous positioning steps of the first auxiliary drum 19 at the second working position b and of the second auxiliary drum at the picking position d (in this case coinciding with the first working position a) have been carried out, the finishing station 17 is in an operating condition in which: i) a substantially cylindrical sleeve including the second belt structure 4 and the tread band 5 ready to be removed from the second auxiliary drum 20 is arranged at the picking position d (in this case coinciding with the same working position a), and ii) the first belt structure 4 previously assembled at the first working position a is supported by the first auxiliary drum 19 and is ready to receive a new tread band 5 at the second working position b.
at this point, the method of the invention provides for the step of transferring the substantially cylindrical sleeve from the picking position d of the finishing station 17 at a radially outer position with respect to the carcass structure 3 built in the meantime in the building station 14. advantageously, this transfer step is carried out by the substantially ring-shaped transfer device 36 according to methods known per se in the art. after said transfer step, the finishing station 17 is in an operating condition in which the second auxiliary drum 20 is already arranged at the first working position a and ready to support a new belt structure 4 thanks to the operating interaction with the application apparatus 21 arranged at the working position a. once the operations described above are carried out, therefore, the finishing station 17 is in an operating condition totally similar to the starting condition indicated above, except for the fact that the two auxiliary drums 19, 20 have exchanged their position. at this point, the method of the invention provides for cyclically repeating the steps described above which are adapted to manufacture the substantially cylindrical sleeve including the belt structure 4 and the tread band 5, by assembling a new belt structure 4 on the second auxiliary drum 20 at the first working position a, by applying substantially simultaneously at the second working position b a new tread band 5 at a radially outer position with respect to the belt structure 4 previously assembled on the first auxiliary drum 19, exchanging the place of the two drums at the end of these assembling and application steps, and so on. at the end of each cyclical repetition of said steps, a new substantially cylindrical sleeve including the belt structure 4 and the tread band 5 is obtained, supported at the picking position d (in this case position a) alternatively by one of the two auxiliary drums 19, 20 of the finishing station 17.
such sleeve is then transferred from the picking position d of the finishing station 17 at a radially outer position with respect to a new carcass structure 3 built in the building station 14 according to the method described above. preferably, the steps of manufacturing the substantially cylindrical sleeve including the tread band 5 and the belt structure 4 and of transferring such sleeve from the picking position d of the finishing station 17 are carried out in a time interval substantially equal to, or smaller than, the time for carrying out the step of building the carcass structure 3 in the building station 14. in this way, it is advantageously possible to manufacture and transfer the substantially cylindrical sleeve including the tread band 5 and the belt structure 4 in the cycle time used to build the carcass structure 3 in the building station 14 optimising the process times and increasing the productivity of the manufacturing plant 1. in a particularly preferred embodiment of the invention, the assembly of the substantially cylindrical sleeve including the tread band 5 and the belt structure 4 with the carcass structure 3 not toroidally shaped yet (otherwise called "carcass sleeve") is carried out on the same primary drum 15 of the building station 14 used for building the carcass sleeve, thus integrating a unistage manufacturing process. advantageously, a high quality level of the tyre 2 being manufactured is ensured in this way, thanks to the limited number of operations during the assembly of green semifinished products still in a substantially plastic state. such semifinished products are thus subjected to a correspondingly limited number of potentially deforming stresses, thus advantageously limiting the risk of undesired structural alterations of the green tyre being manufactured.
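the condition stated above, namely that the manufacture and transfer of the substantially cylindrical sleeve be completed within the time for building the carcass structure 3, can be expressed as a simple timing check so that the finishing station never stalls the building station; the function name and the time values in the example are illustrative assumptions:

```python
def sleeve_fits_cycle(sleeve_time_s, transfer_time_s, carcass_time_s):
    """True if the sleeve can be manufactured and transferred within the
    carcass-building cycle time (illustrative timing check only).
    """
    return sleeve_time_s + transfer_time_s <= carcass_time_s
```

for example, a sleeve manufactured in 400 s and transferred in 50 s fits a 480 s carcass-building cycle, whereas 450 s plus 60 s does not.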
within the framework of said unistage manufacturing process, the transfer device 36 having a substantially annular conformation is operated so as to be placed around the auxiliary drum 19 or 20 arranged at the picking position d for picking up the substantially cylindrical sleeve including the belt structure 4 and the tread band 5 from the same drum. in a way known per se, the auxiliary drum 19, 20 disengages said sleeve which is then axially translated by the transfer device 36 to be placed in a coaxially centred position on the primary drum 15 supporting the carcass sleeve. alternatively, the assembly of the carcass sleeve with the tread band 5/belt structure 4 sleeve may be carried out on a so-called shaping drum onto which the carcass sleeve and the tread band 5/belt structure 4 sleeve are transferred, to manufacture the tyre according to a so-called "two-stage manufacturing process". in a preferred embodiment, the method further comprises after said transfer step the step of shaping the substantially cylindrical carcass structure 3 according to a substantially toroidal shape so as to associate the same to the substantially cylindrical sleeve including the tread band 5 and the belt structure 4 transferred at a radially outer position with respect to the carcass structure. preferably, this shaping step is carried out by axially moving the annular anchoring structures 7 close to each other and simultaneously admitting fluid under pressure into the assembly consisting of the carcass structure 3 and of the substantially cylindrical sleeve including the tread band 5 and the belt structure 4, so as to place the carcass ply(ies) 10 in contact against the inner surface of the belt structure 4 held by the transfer device 36.
in this way, a green tyre is manufactured which can be removed from the primary drum 15 or from the shaping drum to be subjected to a usual vulcanisation step carried out in a vulcanisation station (not shown) of a plant for making a tyre (not shown) comprising the manufacturing plant 1 described above. clearly, the method and the apparatus described above allow to manufacture a tyre 2 having a different structure, for example by applying further layers or elements at the first working position a and/or at the second working position b and/or at the picking position d. all this can be obtained by positioning at such positions suitable delivery equipment adapted to operatively interact with the auxiliary drums 19, 20 arranged therein by the displacing apparatus 18. thus, for example, in an alternative embodiment, the method of the invention may provide for the further step of applying at the picking position d (for example coinciding with the first working position a) at a radially outer position with respect to the belt structure 4 supported by the auxiliary drum 19, 20 arranged therein, an additional first or last continuous elongated element of green elastomeric material according to a respective predetermined path, so as to begin or complete the tread band 5 at the picking position d. in this case, it is advantageously possible to form the tread band 5 using three different elastomeric materials delivered by the delivery members 25, 26 arranged at the second working position b and by the delivery member arranged at the picking position d of the finishing station 17. preferably, this application step is carried out according to the methods described above, that is, by delivering such continuous elongated element by means of an extruder of a further delivery member (not shown) arranged at the picking position d near the auxiliary drum 19, 20 arranged therein and by winding the continuous elongated element on said drum.
also in this case, such winding is accomplished by carrying out, simultaneously with the application of the continuous elongated element, the steps of: - imparting to the auxiliary drum 19 or 20 carrying the belt structure 4 a rotary motion about a geometric axis thereof, so as to circumferentially distribute the continuous elongated element at a radially outer position with respect to the belt structure 4; - carrying out controlled relative displacements between the auxiliary drum 19 or 20 and the delivery member to form with the continuous elongated element a plurality of coils arranged in mutual side by side relationship to define at least one portion of the tread band 5. advantageously, this further delivery member may be provided with a respective actuating group (not shown) adapted to move such member to and from the auxiliary drum 19, 20 arranged at the picking position d so as not to interfere with the subsequent picking operations of the substantially cylindrical sleeve including the belt structure 4 and the tread band 5. figure 3 shows a further preferred embodiment of a plant 1 for manufacturing tyres according to the invention. in the following description and in such figure, the elements of the plant 1 structurally or functionally equivalent to those illustrated above with reference to the embodiment shown in figure 1 will be indicated with the same reference numerals and will not be further described. in the embodiment shown in figure 3, the plant 1 further comprises at least one third auxiliary drum 40 supported by the displacing apparatus 18 and at least one delivery member 41 of a further continuous elongated element 42 of green elastomeric material arranged at at least one third working position c for operatively interacting with one of the auxiliary drums 19, 20 or 40. also in this preferred embodiment, the picking position d of the substantially cylindrical sleeve substantially coincides with the first working position a.
advantageously, the plant 1 thus structured allows to form the tread band 5 using at least three different elastomeric materials delivered by the delivery members 25 and 26 arranged at the second working position b and by the delivery member 41 arranged at the third working position c of the finishing station 17. the delivery member 41 preferably comprises at least one extruder 43 in a totally similar way to the delivery members 25, 26. similarly to what has been described above, in order to wind the continuous elongated element 42 delivered by the extruder 43 at a radially outer position with respect to the belt structure 4 supported by the third auxiliary drum 40, the displacing apparatus 18 preferably comprises a drum rotation unit 44 adapted to rotate the auxiliary drum 40 about its geometrical axis. in this way, it is advantageously possible to carry out, in an effective manner, a controlled deposition of the continuous elongated element 42 at a radially outer position with respect to the belt structure 4 supported by the third auxiliary drum 40. also in this preferred embodiment, the displacing apparatus 18 is of the substantially turret-like type and is adapted to support the auxiliary drums 19, 20 and 40 at positions angularly offset with each other; in this case, however, the auxiliary drums are angularly offset by an angle equal to about 120°. similarly to what has been described above, the displacing apparatus 18 comprises one drum translating unit 45 - preferably of a type similar to the translating units 33 and 34 - adapted to determine controlled axial movements of the auxiliary drum 40 at the working positions a, b, c or at the picking position d of the substantially cylindrical sleeve including the belt structure 4 and the tread band 5 manufactured in the finishing station 17. preferably, the drum translating unit 45 causes controlled axial movements not just of the auxiliary drum 40 but also of the relevant rotation unit 44. 
also in this case, the drum translating unit 45 of the displacing apparatus 18 moves the auxiliary drum 40 between the working positions a, b, c or the picking position d and a stand-by position defined between said positions and the rotation axis y-y of the displacing apparatus 18. preferably, the stand-by positions of the auxiliary drums 19, 20 and 40 are defined within the outer perimeter of the rotatable supporting platform 39, schematically indicated with a dashed line in figure 3, which can be of circular type. clearly, the rotatable supporting platform 39 can have any suitable shape different from the circular one, in which case the stand-by positions of the auxiliary drums 19, 20 and 40 are preferably defined within an area - such as that schematically indicated with a dashed line in figure 3 - sufficiently spaced apart from the control unit 37 and from the working position of the operator 38. preferably, also the drum translating unit 45 moves the auxiliary drum 40 along a radial direction passing through the rotation axis y-y of the displacing apparatus 18 as illustrated by the double arrow f5 in figure 3. the drum translating units 33, 34 and 45 thus allow to achieve the advantageous technical effects mentioned above with reference to the previous embodiment of the manufacturing plant 1. in a further preferred embodiment, not shown, the plant 1 can further comprise at least two delivery members of respective continuous elongated elements of green elastomeric material arranged at the third working position c for operatively interacting at opposite sides of the auxiliary drum 19, 20 or 40 arranged therein by the displacing apparatus 18. this further embodiment allows to form the tread band 5 using up to four different elastomeric materials increasing the operating/technological flexibility of the plant and the application possibilities of the method implemented by the same. 
in a further preferred alternative embodiment, not shown for simplicity, the plant 1 may further comprise a delivery member of a respective continuous elongated element of green elastomeric material arranged at the picking position d of the substantially cylindrical sleeve manufactured in the finishing station 17 for operatively interacting with the drum 19, 20 or 40 positioned therein by the displacing apparatus 18. in this case, the plant 1 allows to apply the tread band 5 both at the second and at the third working position b and c, and at the picking position d of the substantially cylindrical sleeve, wherever this is required to meet specific application requirements. this further embodiment allows to form the tread band 5 using up to five different elastomeric materials, further increasing the operating/technological flexibility of the plant and the application possibilities of the method implemented by the same. with reference to the plant 1 described above, a further preferred embodiment of a method according to the invention for manufacturing tyres for vehicle wheels will now be described. in particular, the method will be illustrated with reference to a steady-state working condition, as illustrated in figure 3, wherein the auxiliary drum 19 is at the first working position a and does not support any semifinished product, the second auxiliary drum 20 is at the second working position b and supports a second belt structure 4 assembled on said drum in a previous step of the method and the third auxiliary drum 40 is at the third working position c and supports an assembly of semifinished products comprising a third belt structure 4 and at least one portion of the tread band 5 applied at a radially outer position with respect to the third belt structure 4 in a previous step of the method. 
in this alternative embodiment, the method differs from the embodiment illustrated before essentially as regards the application step of the tread band 5 which can be carried out at the working positions b, c and optionally also at the picking position d (in this preferred case substantially coinciding with the first working position a) of the substantially cylindrical sleeve made in the finishing station 17. the step of building the substantially cylindrical carcass structure 3 (or carcass sleeve) in the building station 14 and the step of assembling the belt structure 4 on the first auxiliary drum 19 arranged at the first working position a, are carried out similarly to the operating steps described above with reference to the previous embodiment implemented by means of the embodiment of the plant 1 illustrated in figure 1. in the further alternative embodiment considered herein, the tread band 5 is applied at the working positions b and c and optionally at the picking position d, at a radially outer position with respect to the second belt structure 4 supported by the second auxiliary drum 20 at the second working position b and, respectively, at a radially outer position with respect to the third belt structure 4 supported by the third auxiliary drum 40 at the third working position c or, optionally, at the picking position d. 
more particularly, in this alternative embodiment the manufacturing method provides for the steps of: - applying at least one first portion of the tread band 5 at the second working position b at a radially outer position with respect to the second belt structure 4 previously assembled on the second auxiliary drum 20 of the finishing station 17; in particular, this application step is carried out by laying down according to a predetermined path at least one continuous elongated element - preferably two continuous elongated elements 27, 28 - of green elastomeric material at a radially outer position with respect to the second belt structure 4; - applying at least one second portion of the tread band 5 at the third working position c at a radially outer position with respect to the third belt structure 4 previously assembled on the third auxiliary drum 40 of the finishing station 17; in particular, this application step is carried out by laying down according to a predetermined path at least one continuous elongated element of green elastomeric material at a radially outer position with respect to the third belt structure 4. in this preferred embodiment of the method of the invention, this last operating step is therefore carried out by delivering the continuous elongated element 42 from the respective delivery member 41 arranged at the third working position c near the third auxiliary drum 40, simultaneously with winding of the continuous elongated element 42 on said drum. the deposition of the continuous elongated elements 27, 28 and 42 is advantageously carried out at said working positions b and c according to the preferred ways described above, that is, by imparting a rotation and a translation motion to the auxiliary drums 20 and 40, so as to carry out controlled relative movements between such drums and the delivery members 25, 26 and 41. 
preferably, these rotating and translating motions of the auxiliary drums 20 and 40 are carried out by the displacing apparatus 18, more in particular by the drum rotation units 32 and 44 and, respectively, by the drum translating units 34 and 45 of such apparatus. in this preferred embodiment of the method of the invention and thanks to the delivery of at least three continuous elongated elements 27, 28 and 42, it is advantageously possible to further increase the flexibility of production of the tread band 5 so as to achieve the desired performance of the tyre 2. thus, for example, it is advantageously possible to form - in a preferred alternative embodiment - a tread band 5 including a pair of radially superposed layers, respectively inner and outer, according to a configuration known in the art with the term of "cap-and-base". according to this preferred embodiment, the application step of the tread band 5 is carried out at the second working position b by laying down one of said continuous elongated elements, for example the continuous elongated element 27, at a radially outer position with respect to the second belt structure 4 supported by the second auxiliary drum 20 along substantially its entire transversal development so as to form a radially inner layer of the tread band 5. afterwards, the application step of the tread band 5 provides for laying down the second continuous elongated element 28 at a radially outer position with respect to at least one portion of the radially inner layer of the tread band 5 thus formed, so as to form a corresponding first portion of a radially outer layer of the tread band 5. in this preferred embodiment, therefore, the laying down of the continuous elongated elements 27, 28 according to contiguous circumferential coils axially arranged side by side and/or radially superposed is carried out in two consecutive steps. 
within the framework of this embodiment, the method thus provides for applying at least one second portion of a radially outer layer of the tread band 5 at the third working position, by laying down at said third working position c the third continuous elongated element 42 at a radially outer position with respect to a remaining portion of the radially inner layer of the tread band 5, more precisely at an axially aligned position with said first portion of the radially outer layer of the tread band 5 formed by the aforementioned second continuous elongated element 28, so as to form said second portion of the radially outer layer of the tread band 5. in this way, it is possible to form a tread band 5 of the "cap-and-base" type provided with a radially outer layer including two or more axially aligned sectors having specific mechanical characteristics according to a configuration which allows to achieve a plurality of advantageous technical effects, such as for example an improved resistance to the transversal stresses acting on the tread band 5 during use of the tyre 2, or the possibility of keeping the grip performance of the tyre 2 substantially constant as the tread band 5 wears out. in a further preferred alternative embodiment it is also advantageously possible to form a tread band 5 provided with a radially inner layer including two or more axially aligned sectors having different mechanical characteristics. according to this preferred alternative embodiment, the application step of the tread band 5 is carried out at the second working position b by laying down one of the above continuous elongated elements, for example the continuous elongated element 27, at a radially outer position with respect to at least one portion of the second belt structure 4 supported by the second auxiliary drum 20, so as to form at least one first portion of a radially inner layer of the tread band 5. 
afterwards, the application step of the tread band 5 provides for laying down the second continuous elongated element 28 - again at the second working position b - at a radially outer position with respect to said second belt structure 4, and more precisely at an axially aligned position with respect to said first portion, so as to form at least one second portion of the radially inner layer of the tread band. in this way, it is possible to form a tread band 5 provided with a radially inner layer having at least two axially aligned portions or sectors having different mechanical and chemical-physical characteristics. in this preferred alternative embodiment, the laying down of the continuous elongated elements 27, 28 can be carried out at the second working position b either in successive steps or at least in part simultaneously. afterwards, the application step of the tread band 5 provides for laying down at the third working position c the third continuous elongated element 42 at a radially outer position with respect to the radially inner layer having a plurality of axially aligned sectors of the tread band 5 by substantially the entire transversal development of such layer, so as to form a radially outer layer of the tread band 5. in a further alternative embodiment, it is possible to form a radially outer layer of the tread band 5 including two or more axially aligned portions or sectors, similarly to what has been described above, by providing at least one further delivery member of a respective continuous elongated element arranged at the third working position c of the finishing station 17 and adapted to operatively interact with the third auxiliary drum 40. 
in this way, it is possible to lay down the continuous elongated element 42 at a radially outer position with respect to at least one portion of the radially inner layer of the tread band 5 supported by the third auxiliary drum 40, so as to form at least one first portion of the radially outer layer of the tread band 5. afterwards, the application step of the tread band 5 provides for laying down this further continuous elongated element, again at the third working position c, at a radially outer position with respect to at least one remaining portion of the radially inner layer of the tread band 5, more precisely at an axially aligned position with respect to said first portion of the radially outer layer of the tread band 5 formed by the third continuous elongated element 42, so as to form at least one second portion of the radially outer layer of the tread band 5. once said steps of assembling the first belt structure 4 on the first auxiliary drum 19 and of applying the tread band 5 at a radially outer position with respect to the second and third belt structures 4 previously assembled on the auxiliary drums 20 and 40 have been completed, the method of the invention provides for carrying out the steps of: - positioning the first auxiliary drum 19 supporting the first belt structure 4 at the second working position b, - positioning the second auxiliary drum 20 supporting the second belt structure 4 and at least one portion of the tread band 5 at the third working position c, and - positioning the third auxiliary drum 40 supporting the substantially cylindrical sleeve, including the tread band 5 applied at a radially outer position with respect to the third belt structure 4, at the picking position d of the finishing station 17. 
as described above, in the preferred embodiment described with reference to the plant 1 of figure 3, the picking position d of the substantially cylindrical sleeve thus manufactured substantially coincides with the first working position a. according to the method of the invention, the aforementioned steps of positioning the auxiliary drums 19, 20 and 40 respectively: at the second working position b, at the third working position c and at the picking position d, are carried out at least in part simultaneously with each other. in particular, such steps are preferably carried out by means of the displacing apparatus 18 by rotating the same about the substantially vertical rotation axis y-y by means of the driving unit 35, similarly to what has been described with reference to the previous embodiments of the method and of the plant 1 according to the invention. in this preferred embodiment, the auxiliary drums 19, 20 and 40 are effectively moved in a substantially simultaneous manner by rotating the displacing apparatus 18 about the rotation axis y-y, preferably according to a single direction, for example the counter clockwise one indicated by arrow f2 in figure 3. in a preferred embodiment and thanks to the fact that the auxiliary drums 19, 20 and 40 are slidably supported by the displacing apparatus 18, the method of the invention comprises also in this case the further step of translating the auxiliary drums 19, 20 and 40 towards the rotation axis y-y of the displacing apparatus 18 before carrying out the rotation step of such apparatus. also in this case, said step is preferably carried out by the drum translating units 33, 34 and 45 by translating both the auxiliary drums 19, 20 and 40 and the relevant rotation units 31, 32 and 44, which are preferably translationally integral with the drums. 
in this way, it is advantageously possible to decrease both the transversal dimensions and the inertia forces during the movements of the auxiliary drums 19, 20 and 40 between the working positions a, b and c with an increase of the safety features of the plant 1 and with a reduction of the driving force required to the driving unit 35 for rotating the displacing apparatus 18. preferably, the auxiliary drums 19, 20 and 40 and the corresponding rotation units 31, 32 and 44 are translated towards the rotation axis y-y of the displacing apparatus 18 and are arranged at the aforementioned stand-by positions defined inside the outer perimeter of the rotatable supporting platform 39 of such apparatus. advantageously, the auxiliary drums 19, 20 and 40 of the displacing apparatus 18 are arranged in this case at a safe distance from both the control unit 37 and from the operator 38 during the rotation of the displacing apparatus 18, as schematically illustrated in dotted line in figure 3. once said substantially simultaneous steps of positioning the first auxiliary drum 19 at the second working position b, the second auxiliary drum 20 at the third working position c and the third auxiliary drum 40 at the picking position d (the first working position a) have been carried out, the finishing station 17 is in an operating condition in which: i) a substantially cylindrical sleeve including the third belt structure 4 and the tread band 5 ready to be removed from the third auxiliary drum 40 is arranged at the picking position d (the same working position a), and is supported by the third auxiliary drum 40; ii) the first belt structure 4 previously assembled at the first working position a is supported by the first auxiliary drum 19 and is ready to receive at least one portion of a new tread band 5 at the second working position b; and iii) a semifinished product including the second belt structure 4 previously assembled at the first working position a and a portion of the tread 
band 5 previously applied at the second working position b, is supported by the second auxiliary drum 20 and is ready to receive the remaining portion (or a further portion) of the tread band 5 at the third working position c. at this point, the method of the invention provides for the step of transferring the substantially cylindrical sleeve supported by the third auxiliary drum 40 from the picking position d of the finishing station 17 at a radially outer position with respect to the carcass structure 3 built in the meantime in the building station 14. advantageously, this transfer step is carried out by the substantially ring-shaped transfer device 36 according to the methods described above. after this transfer step, the finishing station 17 is in an operating condition wherein the third auxiliary drum 40 is already arranged at the first working position a and is ready to support a new belt structure 4 thanks to the operating interaction with the application apparatus 21 arranged at such a working position a. at this point, the method of the invention provides for carrying out at least in part simultaneously the steps of: - assembling a new belt structure 4 at the first working position a on the third auxiliary drum 40, - applying at least one first portion of the tread band 5 at the second working position b at a radially outer position with respect to the first belt structure 4 previously assembled on the first auxiliary drum 19; and - applying at least one second portion of the tread band 5 at the third working position c at a radially outer position with respect to the second belt structure 4 assembled on the auxiliary drum 20. 
once said steps have been carried out, the method of the invention provides for carrying out at least in part simultaneously the steps of positioning the third auxiliary drum 40 supporting the new belt structure 4 at the second working position b, of positioning the first auxiliary drum 19 supporting the first belt structure and at least one first portion of the tread band 5 at the third working position c and of positioning the second auxiliary drum 20 supporting the substantially cylindrical sleeve, including the tread band 5 applied at a radially outer position with respect to the second belt structure 4, at the picking position d of the finishing station 17. at this point, the method of the invention provides for the step of transferring the new substantially cylindrical sleeve just manufactured and supported by the second auxiliary drum 20 from the picking position d of the finishing station 17 at a radially outer position with respect to a new carcass structure 3 built in the meantime in the building station 14. after this transfer step, the finishing station 17 is in an operating condition wherein the second auxiliary drum 20 is already arranged at the first working position a and ready to support a new belt structure 4 thanks to the operating interaction with the application apparatus 21 arranged at such a working position a. at this point, the method of the invention provides for carrying out at least in part simultaneously the steps of: - assembling a new belt structure 4 at the first working position a on the second auxiliary drum 20, - applying at least one first portion of the tread band 5 at the second working position b at a radially outer position with respect to the belt structure 4 previously assembled on the third auxiliary drum 40; and - applying at least one second portion of the tread band 5 at the third working position c at a radially outer position with respect to the first belt structure 4 assembled on the first auxiliary drum 19. 
once the aforementioned steps have been carried out, the method of the invention provides for carrying out at least in part simultaneously the steps of positioning the second auxiliary drum 20 supporting the new belt structure 4 at the second working position b, of positioning the third auxiliary drum 40 supporting the belt structure 4 previously assembled and at least one first portion of the tread band 5 at the third working position c and of positioning the first auxiliary drum 19 supporting the substantially cylindrical sleeve, including the tread band 5 applied at a radially outer position with respect to the first belt structure 4, at the picking position d of the finishing station 17. at this point, the method of the invention provides for the step of transferring the substantially cylindrical sleeve supported by the first auxiliary drum 19 from the picking position d of the finishing station 17 at a radially outer position with respect to a new carcass structure 3 built in the meantime in the building station 14. once the operations described above have been completed, the finishing station 17 returns to the initial operating condition indicated above. at the end of each cyclical repetition of the aforementioned steps of assembling the belt structure 4/applying portions of the tread band 5 and of rotating the displacing apparatus 18, a new substantially cylindrical sleeve including the belt structure 4 and the tread band 5, supported at the picking position d (in this case the working position a) by one of the auxiliary drums 19, 20 and 40 of the finishing station 17, is manufactured. such sleeve is then transferred from the picking position d of the finishing station 17 at a radially outer position with respect to a new carcass structure 3 built in the building station 14 according to the method described above. 
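The cyclical repetition just described (each drum advancing from belt assembly at position a, to tread application at b, to further tread application at c, and back to the picking position d coinciding with a) amounts to a cyclic permutation of the three auxiliary drums at every 120° turret rotation. A minimal sketch of that pipeline, with illustrative drum and station labels not taken verbatim from the text:

```python
from collections import deque

# Illustrative station labels; index 0 is position A (which also serves
# as picking position D), index 1 is position B, index 2 is position C.
STATIONS = ["A/D (belt assembly, picking)", "B (tread, first portion)", "C (tread, second portion)"]

def rotate_turret(drums):
    """One 120-degree rotation of the displacing apparatus: the drum at A
    moves to B, the drum at B to C, and the drum at C back to A (picking).
    `drums` lists the drums currently at stations A, B, C in that order."""
    d = deque(drums)
    d.rotate(1)
    return list(d)

# steady-state condition of figure 3: drum 19 at A, drum 20 at B, drum 40 at C
drums = rotate_turret(["drum 19", "drum 20", "drum 40"])
# after one rotation, drum 40 carries the finished sleeve at the picking position
```

Three consecutive rotations return every drum to its starting station, which matches the text's statement that the finishing station returns to its initial operating condition after each full cycle.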
also in this alternative embodiment, the steps of manufacturing the substantially cylindrical sleeve including the tread band 5 and the belt structure 4 and of transferring such sleeve from the picking position d of the finishing station 17 are preferably carried out in a time interval substantially equal to or smaller than the time for carrying out the step of building the carcass structure 3 in the building station 14. in this way, it is advantageously possible to optimise the process times and correspondingly increase the productivity of the manufacturing plant 1. also in this alternative embodiment, the assembly of the substantially cylindrical sleeve including the tread band 5 and the belt structure 4 with the carcass structure 3 not toroidally shaped yet (otherwise called "carcass sleeve") is preferably carried out on the same primary drum 15 of the building station 14 used for building the carcass sleeve, thus integrating a unistage manufacturing process. in a further alternative embodiment, the method of the invention may provide for the step of applying at the picking position d (for example coinciding with the first working position a) at a radially outer position with respect to the belt structure 4 supported by the auxiliary drum 19, 20 or 40 arranged therein, an additional first or last continuous elongated element of green elastomeric material according to a respective predetermined path, so as to begin or complete the tread band 5 at the picking position d. in this case, it is advantageously possible to form the tread band 5 using four or five different elastomeric materials delivered by the delivery members 25, 26 arranged at the second working position b and by one or two delivery members arranged at the third working position c and by a delivery member arranged at the picking position d of the finishing station 17. 
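The timing constraint stated above (sleeve manufacture plus transfer completed within the time needed to build the carcass structure) means the overall plant cycle is paced by the slower of the two parallel activities. A toy model of that pacing, with illustrative function and parameter names and arbitrary times:

```python
def plant_cycle_time(sleeve_time_s, transfer_time_s, carcass_time_s):
    """Sketch of the overall cycle time when the finishing station
    (sleeve build + transfer) works in parallel with the building station
    (carcass build): the cycle is paced by whichever branch is slower.
    Illustrative only; no times are given in the patent text."""
    finishing_branch = sleeve_time_s + transfer_time_s
    return max(finishing_branch, carcass_time_s)

# when the finishing branch fits inside the carcass time, the carcass paces the cycle
cycle = plant_cycle_time(sleeve_time_s=100, transfer_time_s=20, carcass_time_s=150)
```

Under the preferred condition of the text (finishing branch not exceeding the carcass time), the finishing station adds no time to the cycle, which is the productivity gain being claimed.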
preferably, this application step is carried out according to the methods described above, that is, by delivering such continuous elongated element by an extruder of a further delivery member (not shown) arranged at the picking position d (for example coinciding with the first working position a) near the auxiliary drum 19, 20 or 40 arranged therein and by winding the continuous elongated element on said drum as illustrated above. advantageously, this further delivery member may be provided with a respective actuating group (not shown) adapted to move such member to and from the auxiliary drum arranged at the picking position d (for example coinciding with the first working position a), so as not to interfere with the subsequent picking operations of the substantially cylindrical sleeve including the belt structure 4 and the tread band 5. from repeated tests carried out by the applicant, it has been found that the manufacturing method and apparatus according to the invention, in their possible alternative embodiments, fully achieve the object of manufacturing a high quality tyre reconciling the different productivity rates of the building station of the carcass structure and of the finishing station intended to manufacture the substantially cylindrical sleeve including a belt structure provided with a layer of zero-degree reinforcing cords and a tread band formed by winding coils of at least one continuous elongated element. in addition, it should be noted that the method according to the invention achieves the aforementioned object thanks to a sequence of operating steps that can be carried out by a structurally simple and easy to manage manufacturing plant. advantageously, the manufacturing plant of the invention can be arranged downstream of an existing station for building the carcass structures, thus increasing the productivity of the tyre manufacturing plant which incorporates the same. 
advantageously, moreover, the assembly of the carcass sleeve with the outer belt structure/tread band sleeve can be carried out on the same drum used for building the carcass sleeve, integrating a unistage manufacturing process which makes it possible to maximise the productivity of the manufacturing plant and the quality characteristics of the tyres manufactured by the same. finally, it should be observed that the number of auxiliary drums and of the working positions defined in the finishing station 17 can be higher than three depending upon specific application requirements. in this case, the auxiliary drums will be preferably supported by the displacing apparatus 18 at positions angularly offset with each other by an angle substantially equal to about 360°/n where n is the total number of auxiliary drums. in this case, the plant 1 comprises a suitable number of application apparatuses 21 of the belt layers and/or of delivery members of respective continuous elongated elements arranged at the working positions defined in the finishing station 17 for operatively interacting with the auxiliary drums arranged therein by the displacing apparatus 18.
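The 360°/n angular layout just stated can be computed directly. A minimal sketch (function name is an illustrative assumption), where n = 2 corresponds to the two-drum embodiment of figure 1 and n = 3 to the three-drum embodiment of figure 3:

```python
def drum_angles(n_drums):
    """Angular positions (in degrees) of n auxiliary drums supported by
    the turret-like displacing apparatus, offset from each other by
    360/n as described in the text. Illustrative sketch only."""
    step = 360.0 / n_drums
    return [i * step for i in range(n_drums)]

angles_two = drum_angles(2)    # two-drum layout: drums 180 degrees apart
angles_three = drum_angles(3)  # three-drum layout: drums 120 degrees apart
```

For n = 3 this reproduces the 120° offset between the auxiliary drums 19, 20 and 40 stated earlier for the figure 3 embodiment.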
117-707-753-377-85X
JP
[ "WO", "US" ]
H01L27/146,H01L21/22,H01L21/76,H01L31/10,H04N5/369
2017-11-09T00:00:00
2017
[ "H01", "H04" ]
solid-state image capture device and electronic apparatus
the present technology relates to a solid-state image capture device with which it is possible to suppress degradation of dark characteristics, and to an electronic apparatus. the solid-state image capture device is provided with: a photoelectric conversion portion which performs photoelectric conversion; and a pn junction region disposed on a light incident surface-side of the photoelectric conversion portion and comprising a p-type region and an n-type region. in a vertical cross section, the pn junction region is formed on each of three sides including the side of the light incident surface among four sides surrounding the photoelectric conversion portion. the solid-state image capture device further includes a trench which penetrates a semiconductor substrate in a depth direction and which is formed between the photoelectric conversion portions respectively formed in adjacent pixels, wherein the trench has a side wall also provided with a pn junction region. the present technology may be applied in a back-side irradiated type cmos image sensor, for example.
1 . a solid-state imaging device comprising: a photoelectric converting unit configured to perform photoelectric conversion; and a pn junction region including a p-type region and an n-type region on a side of a light incident surface of the photoelectric converting unit. 2 . the solid-state imaging device according to claim 1 , wherein, on a vertical cross-section, the pn junction region is formed at three sides including a side of the light incident surface among four sides enclosing the photoelectric converting unit. 3 . the solid-state imaging device according to claim 1 , further comprising: a trench that penetrates through a semiconductor substrate in a depth direction and is formed between the photoelectric converting units each formed at adjacent pixels, wherein the pn junction region is also provided on a side wall of the trench. 4 . the solid-state imaging device according to claim 3 , wherein the pn junction region formed on the side wall of the trench and the pn junction region formed on the side of the light incident surface of the photoelectric converting unit are made a continuous region. 5 . the solid-state imaging device according to claim 1 , wherein the photoelectric converting unit is an n-type region, and concentration of n-type impurities in the n-type region of the pn junction region is a same level or higher than concentration of n-type impurities of the photoelectric converting unit. 6 . the solid-state imaging device according to claim 1 , wherein an active region adjacent to the photoelectric converting unit is a p-type region, and concentration of p-type impurities in the p-type region of the pn junction region is higher than concentration of p-type impurities of the active region. 7 . the solid-state imaging device according to claim 1 , wherein concentration of n-type impurities of the n-type region is between 1e15 cm-3 and 1e17 cm-3. 8 . 
the solid-state imaging device according to claim 1 , wherein concentration of p-type impurities of the p-type region is between 1e16 cm-3 and 1e17 cm-3. 9 . the solid-state imaging device according to claim 1 , wherein a plurality of vertical transistor trenches is provided at a transfer transistor, and lengths of the plurality of vertical transistor trenches are different. 10 . the solid-state imaging device according to claim 9 , wherein at least one vertical transistor trench among the plurality of vertical transistor trenches is in contact with the pn junction region. 11 . the solid-state imaging device according to claim 9 , wherein at least one vertical transistor trench among the plurality of vertical transistor trenches is formed to a depth equal to or greater than ½ of the depth of the photoelectric converting unit. 12 . the solid-state imaging device according to claim 1 , wherein the p-type region and the n-type region are solid-phase diffused layers. 13 . the solid-state imaging device according to claim 1 , wherein the pn junction region is formed so as to cover a backside of the photoelectric converting unit except part of the backside. 14 . the solid-state imaging device according to claim 1 , wherein the pn junction region is discontinuously formed on a backside of the photoelectric converting unit. 15 . the solid-state imaging device according to claim 1 , wherein the p-type region and the n-type region are regions formed by solid-phase diffusion being performed at a cavity formed using a silicon on nothing (son) technology. 16 . electronic equipment on which a solid-state imaging device is mounted, wherein the solid-state imaging device includes a photoelectric converting unit configured to perform photoelectric conversion, and a pn junction region including a p-type region and an n-type region on a side of a light incident surface of the photoelectric converting unit.
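the numeric doping relationships stated in claims 5 to 8 above can be summarized in a short sketch (illustrative python; the function and parameter names are ours, not the patent's; concentrations are in cm-3):

```python
# illustrative check of the doping-concentration relationships stated in
# claims 5-8 (concentrations in cm^-3; the names here are ours).

def satisfies_claims(n_pn, p_pn, n_pd, p_active):
    """n_pn/p_pn: n-/p-type regions of the pn junction region,
    n_pd: photoelectric converting unit (n-type),
    p_active: active region (p-type)."""
    return (
        n_pn >= n_pd              # claim 5: same level or higher
        and p_pn > p_active       # claim 6: higher than the active region
        and 1e15 <= n_pn <= 1e17  # claim 7
        and 1e16 <= p_pn <= 1e17  # claim 8
    )

print(satisfies_claims(n_pn=5e16, p_pn=5e16, n_pd=1e16, p_active=1e15))  # True
```

a combination violating any one relationship (for example, an n-type region lighter than the photoelectric converting unit) falls outside these claims.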
technical field

the present technology relates to a solid-state imaging device and electronic equipment, and particularly relates to a solid-state imaging device which improves a saturated charge amount qs of each pixel by forming a p-type solid-phase diffused layer and an n-type solid-phase diffused layer on a side wall of an inter-pixel light shielding wall formed between the respective pixels to create an intense electric field region in which charges are held, and to such electronic equipment.

background art

in related art, for the purpose of improving a saturated charge amount qs of the respective pixels of a solid-state imaging device, a technology is known in which a p-type diffused layer and an n-type diffused layer are formed on a side wall of a trench formed between the respective pixels to create an intense electric field region in which charges are held (see, for example, patent document 1).

citation list

patent document

patent document 1: japanese patent application laid-open no. 2015-162603

summary of the invention

problems to be solved by the invention

however, with the structure disclosed in patent document 1, there is a possibility that pinning on the light incident side of the silicon (si) substrate is weakened and the generated charges flow into the photodiode, which degrades dark characteristics and may generate, for example, white spots or dark currents. the present technology has been made in view of such circumstances, and is directed to suppressing degradation of dark characteristics.

solutions to problems

according to an aspect of the present technology, there is provided a solid-state imaging device including a photoelectric converting unit configured to perform photoelectric conversion, and a pn junction region including a p-type region and an n-type region on a side of a light incident surface of the photoelectric converting unit.
according to an aspect of the present technology, there is provided electronic equipment on which a solid-state imaging device is mounted, in which the solid-state imaging device includes a photoelectric converting unit configured to perform photoelectric conversion, and a pn junction region including a p-type region and an n-type region on a side of a light incident surface of the photoelectric converting unit. in the solid-state imaging device according to an aspect of the present technology, a photoelectric converting unit configured to perform photoelectric conversion, and a pn junction region including a p-type region and an n-type region on a side of a light incident surface of the photoelectric converting unit are included. in electronic equipment according to one aspect of the present technology, the solid-state imaging device is mounted.

effects of the invention

according to the present technology, it is possible to suppress degradation of dark characteristics. note that the effects described here are not necessarily limiting, and the effects may be any of the effects described in the present disclosure.

brief description of drawings

fig. 1 is a view illustrating a configuration example of an imaging device. fig. 2 is a view illustrating a configuration example of an imaging element. fig. 3 is a vertical cross-sectional diagram illustrating a first configuration example of a pixel to which the present technology is applied. fig. 4 is a plan view of the pixel to which the present technology is applied, on a surface side in a first embodiment. fig. 5 is a circuit diagram of the pixel. fig. 6 is a view for explaining a method for manufacturing a periphery of a dti 82 . fig. 7 is a vertical cross-sectional diagram illustrating a second configuration example of the pixel to which the present technology is applied. fig.
8 is a vertical cross-sectional diagram illustrating a third configuration example of the pixel to which the present technology is applied. fig. 9 is a vertical cross-sectional diagram illustrating a fourth configuration example of the pixel to which the present technology is applied. fig. 10 is a vertical cross-sectional diagram illustrating a fifth configuration example of the pixel to which the present technology is applied. fig. 11 is a vertical cross-sectional diagram illustrating a sixth configuration example of the pixel to which the present technology is applied. fig. 12 is a vertical cross-sectional diagram illustrating a seventh configuration example of the pixel to which the present technology is applied. fig. 13 is a vertical cross-sectional diagram illustrating an eighth configuration example of the pixel to which the present technology is applied. fig. 14 is a vertical cross-sectional diagram illustrating a ninth configuration example of the pixel to which the present technology is applied. fig. 15 is a vertical cross-sectional diagram illustrating a tenth configuration example of the pixel to which the present technology is applied. fig. 16 is a vertical cross-sectional diagram and a plan view illustrating an eleventh configuration example of the pixel to which the present technology is applied. fig. 17 is a vertical cross-sectional diagram and a plan view illustrating a twelfth configuration example of the pixel to which the present technology is applied. fig. 18 is a vertical cross-sectional diagram illustrating a thirteenth configuration example of the pixel to which the present technology is applied. fig. 19 is a vertical cross-sectional diagram illustrating a fourteenth configuration example of the pixel to which the present technology is applied. fig. 20 is a vertical cross-sectional diagram illustrating a fifteenth configuration example of the pixel to which the present technology is applied. fig. 
21 is a vertical cross-sectional diagram illustrating a sixteenth configuration example of the pixel to which the present technology is applied. fig. 22 is a view for explaining a shape of a separation prevention region. fig. 23 is a view for explaining a process relating to formation of an n-type region. fig. 24 is a view for explaining the process relating to formation of the n-type region. fig. 25 is a view for explaining another position where the separation prevention region is constituted. fig. 26 is a vertical cross-sectional diagram illustrating a seventeenth configuration example of the pixel to which the present technology is applied. fig. 27 is a view for explaining change in concentration of impurities. fig. 28 is a vertical cross-sectional diagram illustrating an eighteenth configuration example of the pixel to which the present technology is applied. fig. 29 is a plan view corresponding to the ninth configuration example illustrated in fig. 13 . fig. 30 is a vertical cross-sectional diagram illustrating a nineteenth configuration example of the pixel to which the present technology is applied. fig. 31 is a vertical cross-sectional diagram illustrating a twentieth configuration example of the pixel to which the present technology is applied. fig. 32 is a vertical cross-sectional diagram illustrating a twenty first configuration example of the pixel to which the present technology is applied. fig. 33 is a plan view illustrating a configuration example in a case where an fd, or the like, is shared between two pixels. fig. 34 is a view illustrating outline of a configuration example of a laminated solid-state imaging device to which the technology according to the present disclosure can be applied. fig. 35 is a cross-sectional diagram illustrating a first configuration example of a laminated solid-state imaging device 23020 . fig. 36 is a cross-sectional diagram illustrating a second configuration example of a laminated solid-state imaging device 23020 . 
fig. 37 is a cross-sectional diagram illustrating a third configuration example of a laminated solid-state imaging device 23020 . fig. 38 is a cross-sectional diagram illustrating another configuration example of the laminated solid-state imaging device to which the technology according to the present disclosure can be applied. fig. 39 is a block diagram depicting an example of schematic configuration of an in-vivo information acquisition system. fig. 40 is a block diagram depicting an example of schematic configuration of a vehicle control system. fig. 41 is a diagram of assistance in explaining an example of installation positions of an outside-vehicle information detecting section and an imaging section.

mode for carrying out the invention

a best mode for carrying out the present technology (hereinafter, referred to as an embodiment) will be described in detail below with reference to the drawings. because the present technology can be applied to an imaging device, description will be provided here using an example where the present technology is applied to an imaging device. note, however, that application of the present technology is not limited to imaging devices, and the present technology can be applied to electronic equipment in general that uses an imaging device as an image capturing unit (photoelectric converting unit), such as imaging devices including digital still cameras and video cameras, mobile terminal devices such as mobile phones having an imaging function, and copiers using an imaging device as an image reading unit. note that the imaging device may also take a module-like form mounted on electronic equipment, that is, a camera module. fig. 1 is a block diagram illustrating a configuration example of an imaging device which is an example of electronic equipment of the present disclosure. as illustrated in fig.
1 , an imaging device 10 includes an optical system including a lens group 11 , or the like, an imaging element 12 , a dsp circuit 13 which is a camera signal processing unit, a frame memory 14 , a display unit 15 , a recording unit 16 , an operation system 17 , a power supply system 18 , or the like. further, the dsp circuit 13 , the frame memory 14 , the display unit 15 , the recording unit 16 , the operation system 17 and the power supply system 18 are connected to each other via a bus line 19 . a cpu 20 controls the respective units within the imaging device 10 . the lens group 11 captures incident light (image light) from a subject and forms an image on an imaging surface of the imaging element 12 . the imaging element 12 converts the light amount of the incident light whose image is formed on the imaging surface by the lens group 11 into an electric signal in units of pixels and outputs the electric signal as a pixel signal. as this imaging element 12 , an imaging element (image sensor) including the pixels which will be described below can be used. the display unit 15 is formed with a panel-type display unit such as a liquid crystal display unit or an organic electro luminescence (el) display unit, and displays a moving image or a still image captured at the imaging element 12 . the recording unit 16 records the moving image or the still image captured at the imaging element 12 in a recording medium such as a video tape or a digital versatile disk (dvd). the operation system 17 issues operation commands for the various functions provided in the imaging device in response to operation by a user. the power supply system 18 supplies, as appropriate, the various kinds of power that serve as operation power supplies for the dsp circuit 13 , the frame memory 14 , the display unit 15 , the recording unit 16 and the operation system 17 .

<configuration of imaging element>

fig.
2 is a block diagram illustrating a configuration example of the imaging element 12 . the imaging element 12 can be a complementary metal oxide semiconductor (cmos) image sensor. the imaging element 12 includes a pixel array portion 41 , a vertical drive unit 42 , a column processing unit 43 , a horizontal drive unit 44 and a system control unit 45 . the pixel array portion 41 , the vertical drive unit 42 , the column processing unit 43 , the horizontal drive unit 44 and the system control unit 45 are formed on a semiconductor substrate (chip) which is not illustrated. in the pixel array portion 41 , unit pixels (for example, pixels 50 in fig. 3 ) having photoelectric conversion elements in which photo-induced charges of a charge amount in accordance with an incident light amount are generated and accumulated inside are arranged in two dimensions in a matrix. note that, in the following description, there is also a case where the photo-induced charges of the charge amount in accordance with the incident light amount will be simply referred to as “charges”, and the unit pixels will be simply referred to as “pixels”. in the pixel array portion 41 , further, pixel drive lines 46 are formed along a horizontal direction in the drawing (pixel arrangement direction of pixel rows) for each row of the pixel array in a matrix, and vertical signal lines 47 are formed along a vertical direction in the drawing (pixel arrangement direction of pixel columns) for each column. respective one ends of the pixel drive lines 46 are connected to output terminals corresponding to the respective rows of the vertical drive unit 42 . the imaging element 12 further includes a signal processing unit 48 and a data storage unit 49 . 
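the row/column wiring described above can be sketched in a few lines (illustrative python; the function and variable names are ours): every pixel in a row shares one pixel drive line 46 and every pixel in a column shares one vertical signal line 47, so driving one row reads out all of its pixels in parallel, one per column.

```python
# minimal sketch of column-parallel readout: driving one pixel drive line
# (one row) makes each column's value appear on its vertical signal line.

def read_row(frame, row):
    # one value per vertical signal line, i.e. one per column
    return [frame[row][col] for col in range(len(frame[row]))]

frame = [[1, 2, 3],
         [4, 5, 6]]
print(read_row(frame, 1))  # [4, 5, 6]
```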
the signal processing unit 48 and the data storage unit 49 may be external signal processing units provided on a substrate different from the imaging element 12 , for example, a digital signal processor (dsp) or processing by software, or may be mounted on the same substrate as the imaging element 12 . the vertical drive unit 42 is a pixel drive unit which is constituted with a shift register, an address decoder, or the like, and which drives all pixels of the pixel array portion 41 at the same time, drives the pixels in units of row, or the like. while illustration of a specific configuration of the vertical drive unit 42 will be omitted, the vertical drive unit 42 has a read scanning system and a sweep scanning system, or performs batch sweep and batch transmission. the read scanning system sequentially selectively scans the unit pixels of the pixel array portion 41 in units of row to read out signals from the unit pixels. in a case of row driving (rolling shutter operation), sweep scanning is performed on a read row on which read scanning is to be performed by the read scanning system, ahead of the read scanning by a period corresponding to shutter speed. further, in a case of global exposure (global shutter operation), batch sweep is performed ahead of batch transmission by a period corresponding to shutter speed. by this sweep, unnecessary charges are swept out (reset) from the photoelectric conversion elements in the unit pixels of the read row. this sweeping (resetting) of unnecessary charges realizes so-called electronic shutter operation. here, the electronic shutter operation refers to operation of discarding photo-induced charges of the photoelectric conversion elements and starting new exposure (starting accumulation of photo-induced charges). a signal read out by read operation by the read scanning system corresponds to the amount of light incident after the last read operation or the electronic shutter operation.
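the sweep-ahead-of-read relation described above can be sketched as follows (illustrative python; the schedule numbers are hypothetical, not from the patent): each row is swept by the electronic shutter a fixed period before it is read, so every row receives the same exposure time.

```python
# hedged sketch of rolling-shutter timing: the sweep (electronic shutter)
# of a row runs ahead of its read by a period corresponding to shutter
# speed, so the accumulation period is read_time - sweep_time.

def accumulation_period(sweep_time, read_time):
    # exposure period between the electronic-shutter sweep and the read
    assert read_time > sweep_time
    return read_time - sweep_time

exposure = 4
reads = {row: row + 10 for row in range(3)}                # hypothetical read times
sweeps = {row: reads[row] - exposure for row in range(3)}  # sweep runs ahead
assert all(accumulation_period(sweeps[r], reads[r]) == exposure for r in reads)
```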
in a case of row driving, the period from the read timing of the last read operation or the sweep timing of the electronic shutter operation until the read timing of the current read operation becomes the accumulation period (exposure period) of photo-induced charges at the unit pixels. in a case of global exposure, the period from batch sweep until batch transmission becomes the accumulation period (exposure period). a pixel signal output from each unit pixel of the pixel row which is selectively scanned by the vertical drive unit 42 is supplied to the column processing unit 43 through each vertical signal line 47 . the column processing unit 43 performs predetermined signal processing on the pixel signal output from each unit pixel of the selected row through the vertical signal line 47 for each pixel column of the pixel array portion 41 and temporarily holds the pixel signal subjected to the signal processing. specifically, the column processing unit 43 performs at least noise removal processing, for example, correlated double sampling (cds) processing, as the signal processing. through the correlated double sampling by this column processing unit 43 , fixed pattern noise specific to pixels, such as reset noise and threshold variation of the amplifier transistor, is removed. note that it is also possible to provide the column processing unit 43 with, for example, an analog-digital (ad) conversion function in addition to the noise removal processing, and to output the signal level as a digital signal. the horizontal drive unit 44 is constituted with a shift register, an address decoder, or the like, and sequentially selects a unit circuit corresponding to a pixel column of the column processing unit 43 . by selective scanning by this horizontal drive unit 44 , the pixel signals subjected to the signal processing at the column processing unit 43 are sequentially output to the signal processing unit 48 .
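the correlated double sampling described above can be reduced to a one-line operation (illustrative python; names and numeric values are ours): the column circuit samples the reset level and the signal level of each pixel and outputs their difference, so a per-pixel offset such as amplifier threshold variation cancels out.

```python
# minimal correlated double sampling (cds): subtract the sampled reset
# level from the sampled signal level to remove per-pixel offsets.

def cds(reset_level, signal_level):
    return signal_level - reset_level

# a per-pixel offset (e.g. amplifier threshold variation) drops out:
offset = 7.0   # hypothetical fixed offset of one pixel's readout chain
light = 100.0  # hypothetical light-induced signal
assert cds(reset_level=50.0 + offset, signal_level=50.0 + offset + light) == light
```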
the system control unit 45 is constituted with a timing generator, or the like, which generates various kinds of timing signals, and controls drive of the vertical drive unit 42 , the column processing unit 43 , the horizontal drive unit 44 , or the like, on the basis of the various kinds of timing signals generated at the timing generator. the signal processing unit 48 has at least an addition processing function and performs various kinds of signal processing such as addition processing on the pixel signals output from the column processing unit 43 . the data storage unit 49 temporarily stores data necessary for the signal processing at the signal processing unit 48 .

<structure of unit pixel>

next, a specific structure of the unit pixels 50 arranged in a matrix in the pixel array portion 41 will be described. with the pixels 50 which will be described below, it is possible to reduce the possibility that pinning on the light incident side of the silicon (si) substrate (the si substrate 70 in fig. 3 ) is weakened and the generated charges flow into the photodiode (the pd 71 in fig. 3 ), degrading dark characteristics and generating, for example, white spots or dark currents.

configuration example of pixels in first embodiment

fig. 3 is a vertical cross-sectional diagram of a pixel 50 a in a first embodiment of the pixels 50 to which the present technology is applied. fig. 4 is a plan view of the pixel 50 a on a surface side. note that fig. 3 corresponds to the position of a line x-x′ in fig. 4 . while description will be provided using an example in which the pixel 50 described below is a backside irradiation type pixel, the present technology can also be applied to a surface irradiation type pixel. the pixel 50 illustrated in fig. 3 includes a photodiode (pd) 71 which is a photoelectric conversion element of each pixel formed inside the si substrate 70 .
on a light incident side (in the drawing, a lower side and a backside) of the pd 71 , a p-type region 72 is formed, and in a further lower layer of the p-type region 72 , a planarization film 73 is formed. the boundary between this p-type region 72 and the planarization film 73 is set as a backside si interface 75 . a light shielding film 74 is formed on the planarization film 73 . the light shielding film 74 is provided to prevent leakage of light to adjacent pixels, and is formed between adjacent pds 71 . the light shielding film 74 is formed with a metal material such as, for example, tungsten (w). on the planarization film 73 and on the backside of the si substrate 70 , an on-chip lens (ocl) 76 for collecting the incident light at the pd 71 is formed. the ocl 76 can be formed with an inorganic material, and, for example, sin, sio, or sioxny (where 0<x≤1, and 0<y≤1) can be used. while not illustrated in fig. 3 , it is also possible to employ a configuration where a transparent plate such as cover glass or a resin is adhered onto the ocl 76 . further, while not illustrated in fig. 3 , it is also possible to employ a configuration where a color filter layer is formed between the ocl 76 and the planarization film 73 . further, it is also possible to employ a configuration where, in the color filter layer, a plurality of color filters is provided for each pixel, and the colors of the respective color filters are arranged, for example, in accordance with a bayer array. an active region (pwell) 77 is formed on the opposite side (in the drawing, in an upper part, and on a surface side) of the light incident side of the pd 71 . in the active region 77 , an element isolation region (hereinafter, referred to as a shallow trench isolation (sti)) 78 which isolates pixel transistors, or the like, is formed.
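the bayer color-filter arrangement mentioned above can be expressed as a 2×2 repeating tile (illustrative python; the rg/gb tile order shown is one common choice and is our assumption, since the patent does not fix it):

```python
# sketch of a bayer color-filter array: a 2x2 tile of r, g, g, b repeats
# over the pixel array, so each pixel's color follows from (row, col) mod 2.

BAYER = [["R", "G"],
         ["G", "B"]]

def filter_color(row, col):
    return BAYER[row % 2][col % 2]

assert filter_color(0, 0) == "R"
assert filter_color(1, 1) == "B"
assert filter_color(0, 1) == filter_color(1, 0) == "G"  # green appears twice per tile
```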
on the surface side (upper side in the drawing) of the si substrate 70 , in the active region 77 , a wiring layer 79 is formed, and a plurality of transistors is formed on this wiring layer 79 . fig. 3 illustrates an example where a transfer transistor 80 is formed. the transfer transistor (gate) 80 is formed as a vertical transistor. that is, in the transfer transistor (gate) 80 , a vertical transistor trench 81 is opened, and a transfer gate (tg) 80 for reading out charges from the pd 71 is formed in the opening. further, pixel transistors such as an amplifier (amp) transistor, a select (sel) transistor and a reset (rst) transistor are formed on the surface side of the si substrate 70 . arrangement of these transistors will be described with reference to fig. 4 , and operation will be described with reference to the circuit diagram in fig. 5 . a trench is formed between the pixels 50 a. this trench will be referred to as a deep trench isolation (dti) 82 . this dti 82 is formed between adjacent pixels 50 a in a shape which penetrates the si substrate 70 in a depth direction (in the drawing, in a vertical direction, a direction from the surface to the backside). further, the dti 82 also functions as a light shielding wall between pixels so that unnecessary light does not leak to the adjacent pixels 50 a. between the pd 71 and the dti 82 , a p-type solid-phase diffused layer 83 and an n-type solid-phase diffused layer 84 are formed in order from the dti 82 side toward the pd 71 . the p-type solid-phase diffused layer 83 is formed until it contacts the backside si interface 75 of the si substrate 70 along the dti 82 . the n-type solid-phase diffused layer 84 is formed until it contacts the p-type region 72 of the si substrate 70 along the dti 82 .
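the p-type and n-type solid-phase diffused layers along the dti form a pn junction, and its built-in potential can be roughly estimated from the doping levels claimed in claims 7 and 8 (a sketch using the standard abrupt-junction textbook formula, which is our addition and not from the patent; room temperature values are assumed):

```python
import math

# rough built-in potential of an abrupt si pn junction,
# v_bi = (kT/q) * ln(na * nd / ni^2), with kT/q ~ 0.0259 v and
# ni ~ 1.5e10 cm^-3 for silicon at 300 k (assumed values).

def built_in_potential(na, nd, kT_q=0.0259, ni=1.5e10):
    return kT_q * math.log(na * nd / ni**2)

# upper ends of the claimed concentration ranges (claims 7-8)
v = built_in_potential(na=1e17, nd=1e17)
print(round(v, 2))  # around 0.81 v
```

higher doping on both sides raises the built-in potential and narrows the depletion region, which is consistent with the intense electric field region the text attributes to this junction.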
note that, while a solid-phase diffused layer here indicates a p-type layer or an n-type layer formed through impurity doping using the manufacturing method which will be described later, in the present technology the manufacturing method is not limited to solid-phase diffusion, and a p-type layer and an n-type layer formed using another manufacturing method such as ion implantation may respectively be provided between the dti 82 and the pd 71 . further, the pd 71 in the embodiment is constituted as an n-type region, and photoelectric conversion is performed in part or all of the n-type region. while the p-type solid-phase diffused layer 83 is formed until it contacts the backside si interface 75 , the n-type solid-phase diffused layer 84 does not contact the backside si interface 75 , and a gap is provided between the n-type solid-phase diffused layer 84 and the backside si interface 75 . according to such a configuration, the pn junction between the p-type solid-phase diffused layer 83 and the n-type solid-phase diffused layer 84 formed along the dti 82 creates an intense electric field region, so that the charges generated at the pd 71 can be held. in a case where the n-type solid-phase diffused layer 84 is formed until it contacts the backside si interface 75 of the si substrate 70 along the dti 82 , pinning of charges is weakened at the portion where the backside si interface 75 of the si substrate 70 , which is on the light incident surface side, contacts the n-type solid-phase diffused layer 84 , so that there is a possibility that the generated charges flow into the pd 71 and dark characteristics degrade, generating, for example, white spots or dark currents. however, in the pixel 50 a illustrated in fig.
3 , a configuration is employed where the n-type solid-phase diffused layer 84 does not contact the backside si interface 75 of the si substrate 70 , and the n-type solid-phase diffused layer 84 is formed so as to contact the p-type region 72 of the si substrate 70 along the dti 82 . with such a configuration, it is possible to prevent pinning of charges from being weakened, so that it is possible to prevent degradation of dark characteristics as a result of the charges flowing into the pd 71 . further, in the pixel 50 a illustrated in fig. 3 , a side wall film 85 formed with sio2 is formed on an inner wall of the dti 82 , and a filling material 86 formed with polysilicon is embedded inside the side wall film 85 . the pixel 50 a in the first embodiment has a configuration where the p-type region 72 is provided on the backside, and the pd 71 and the n-type solid-phase diffused layer 84 do not exist around the backside si interface 75 . by this means, because pinning is not weakened around the backside si interface 75 , it is possible to prevent degradation of the dark characteristics as a result of the generated charges flowing into the pd 71 . note that, concerning the dti 82 , it is also possible to employ sin in place of the sio2 employed for the side wall film 85 . further, it is also possible to use doped polysilicon in place of the polysilicon employed for the filling material 86 . in a case where doped polysilicon is used as the filler, or in a case where n-type or p-type impurities are doped after the polysilicon is filled, applying a negative bias to this portion strengthens pinning of the side wall of the dti 82 , so that the dark characteristics can be further improved. arrangement of the transistors formed at the pixel 50 a and operation of each transistor will be described with reference to fig. 4 and fig. 5 . fig. 4 is a plan view of nine pixels 50 a of 3×3 arranged in the pixel array portion 41 ( fig.
2 ), seen from the surface side (in fig. 3 , the upper side in the drawing), and fig. 5 is a circuit diagram for explaining the connection relationship of the respective transistors illustrated in fig. 4 . in fig. 4 , one rectangle indicates one pixel 50 a. as illustrated in fig. 4 , the dti 82 is formed so as to enclose the pixel 50 a (the pd 71 included in the pixel 50 a ). further, the transfer transistor (gate) 80 , a floating diffusion (fd) 91 , a reset transistor 92 , an amplifier transistor 93 , and a select transistor 94 are formed on the surface side of the pixel 50 a. the pd 71 generates and accumulates charges (signal charges) in accordance with a received light amount. an anode terminal of the pd 71 is grounded, and a cathode terminal of the pd 71 is connected to the fd 91 via the transfer transistor 80 . when the transfer transistor 80 is turned on by a transfer signal tr, the transfer transistor 80 reads out the charges generated at the pd 71 and transfers the charges to the fd 91 . the fd 91 holds the charges read out from the pd 71 . when the reset transistor 92 is turned on by a reset signal rst, the reset transistor 92 resets the potential of the fd 91 by discharging the charges accumulated in the fd 91 to a drain (constant voltage source vdd). the amplifier transistor 93 outputs a pixel signal in accordance with the potential of the fd 91 . that is, the amplifier transistor 93 forms a source follower circuit together with a load mos (not illustrated) serving as a constant current source connected via the vertical signal line 47 , and a pixel signal indicating a level in accordance with the charges accumulated in the fd 91 is output from the amplifier transistor 93 to the column processing unit 43 ( fig. 2 ) via the select transistor 94 and the vertical signal line 47 .
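the readout sequence described above (reset, transfer, select) can be modeled in a few lines (illustrative python; the class, names and the simple charge model are ours, purely for illustration):

```python
# minimal model of the 4-transistor pixel operation described above:
# the pd accumulates charge, rst clears the fd, tr moves the pd charge
# onto the fd, and sel gates the source-follower output.

class Pixel:
    def __init__(self):
        self.pd = 0.0   # charge accumulated in the photodiode 71
        self.fd = 0.0   # charge held on the floating diffusion 91

    def expose(self, charge):   # photoelectric conversion in the pd
        self.pd += charge

    def reset(self):            # rst: discharge the fd to vdd
        self.fd = 0.0

    def transfer(self):         # tr: move the pd charge onto the fd
        self.fd, self.pd = self.pd, 0.0

    def read(self, selected):   # sel + amp: output follows the fd level
        return self.fd if selected else None

px = Pixel()
px.expose(120.0)
px.reset()
px.transfer()
assert px.read(selected=True) == 120.0 and px.pd == 0.0
```

sampling the fd once after reset and once after transfer, then differencing, is exactly what the correlated double sampling in the column processing unit does.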
the select transistor 94 is turned on when the pixel 50 a is selected by a select signal sel, and outputs the pixel signal of the pixel 50 a to the column processing unit 43 via the vertical signal line 47 . the respective signal lines through which the transfer signal tr, the select signal sel and the reset signal rst are transmitted correspond to the pixel drive lines 46 in fig. 2 . while the pixel 50 a can be constituted as described above, the configuration of the pixel 50 a is not limited to this configuration, and it is also possible to employ other configurations.

<manufacturing method of periphery of dti 82 >

fig. 6 is a view for explaining a manufacturing method of the periphery of the dti 82 . to open the dti 82 in the si substrate 70 , as illustrated in a in fig. 6 , a portion other than the position where the dti 82 is to be formed on the si substrate 70 is covered with a hard mask using sin and sio2, and a groove is opened through dry etching in a vertical direction to a predetermined depth of the si substrate 70 at the portion which is not covered with the hard mask. then, an sio2 film including p (phosphorus), which is an n-type impurity, is formed on the inner side of the opened groove, and heat treatment is performed so that p (phosphorus) is doped (hereinafter, referred to as solid-phase diffused) into the si substrate 70 side from the sio2 film. then, as illustrated in b in fig. 6 , after the sio2 film including p formed on the inner side of the opened groove is removed, heat treatment is performed again so that p (phosphorus) is diffused into the inside of the si substrate 70 , forming the n-type solid-phase diffused layer 84 which is self-aligned with the current groove shape. thereafter, the groove is extended in the depth direction by etching the bottom portion of the groove through dry etching. then, as illustrated in c in fig.
6 , after an sio2 film including b (boron), which is a p-type impurity, is formed on an inner side of the extended groove, heat treatment is performed so that b (boron) is solid-phase diffused into the si substrate 70 side from the sio2 film, whereby the p-type solid-phase diffused layer 83 , which is self-aligned to the shape of the extended groove, is formed. thereafter, the sio2 film including b (boron), formed on the inner wall of the groove, is removed. then, as illustrated in d in fig. 6 , the side wall film 85 formed with sio2 is formed on the inner wall of the open groove, and the groove is filled with polysilicon to form the dti 82 . thereafter, pixel transistors and wirings are formed. thereafter, the si substrate 70 is made thinner from the backside. when the si substrate 70 is made thinner, the bottom portion of the dti 82 including the p-type solid-phase diffused layer 83 is also made thinner at the same time. the si substrate 70 and the bottom portion of the dti 82 are made thinner to a depth which does not reach the n-type solid-phase diffused layer 84 . through the above-described process, an intense electric field region including the n-type solid-phase diffused layer 84 , which does not contact the backside si interface 75 , and the p-type solid-phase diffused layer 83 , which contacts the backside si interface 75 , can be formed adjacent to the pd 71 . second embodiment fig. 7 is a vertical cross-sectional diagram of a pixel 50 b in a second embodiment to which the present technology is applied. the second embodiment is different from the first embodiment in that the dti 82 is formed at an sti 78 , and, because other configurations are similar to those in the first embodiment, the same reference numerals will be assigned to similar portions, and description will be omitted as appropriate.
also in the following description of the pixel 50 , the same reference numerals will be assigned to portions which are the same as portions of the pixel 50 a in the first embodiment, and description thereof will be omitted as appropriate. in the pixel 50 b illustrated in fig. 7 , an sti 78 b which is formed in the active region 77 is formed to a portion where a dti 82 b is to be formed (formed to an end portion of the pixel 50 b ). then, the dti 82 b is formed at a lower portion of the sti 78 b. in other words, the sti 78 b is formed at a portion where the dti 82 b is formed, and the sti 78 b and the dti 82 b are formed so that the sti 78 b contacts the dti 82 b. according to such formation, it is possible to make the pixel 50 b smaller than in a case where the sti 78 b and the dti 82 b are formed at different positions (for example, the pixel 50 a in the first embodiment ( fig. 3 )). further, also with the pixel 50 b in the second embodiment, it is possible to obtain effects similar to those obtained with the pixel 50 a in the first embodiment, that is, the effect of being able to prevent degradation of the dark characteristics. third embodiment fig. 8 is a vertical cross-sectional diagram of a pixel 50 c in a third embodiment to which the present technology is applied. the third embodiment is different from the pixel 50 a and the pixel 50 b in the first and the second embodiments in that a film 101 having a negative fixed charge is formed on a side wall of a dti 82 c, and the inner side of the film 101 is filled with sio2 as a filler 86 c. while, in the pixel 50 a in the first embodiment, the side wall film 85 of sio2 is formed on the side wall of the dti 82 , and the groove is filled with polysilicon, in the pixel 50 c in the third embodiment, the film 101 having a negative fixed charge is formed on the side wall of the dti 82 c, and the inner side of the film 101 is filled with sio2.
the film 101 having a negative fixed charge, formed on the side wall of the dti 82 c, can be formed with, for example, a hafnium oxide (hfo2) film, an aluminum oxide (al2o3) film, a zirconium oxide (zro2) film, a tantalum oxide (ta2o5) film or a titanium oxide (tio2) film. because the above-described types of films have a record of use in a gate insulating film, or the like, of an insulating gate type field effect transistor, and their film formation methods have been established, it is possible to easily form such a film. while examples of the film formation method can include, for example, a chemical vapor deposition method, a sputtering method, an atomic layer deposition method, or the like, the atomic layer deposition method is preferable, because it is possible to form an sio2 layer of approximately 1 nm at the same time while reducing an interface state during film formation. further, other than the above-described materials, examples of the material can include lanthanum oxide (la2o3), praseodymium oxide (pr2o3), cerium oxide (ceo2), neodymium oxide (nd2o3), promethium oxide (pm2o3), samarium oxide (sm2o3), europium oxide (eu2o3), gadolinium oxide (gd2o3), terbium oxide (tb2o3), dysprosium oxide (dy2o3), holmium oxide (ho2o3), erbium oxide (er2o3), thulium oxide (tm2o3), ytterbium oxide (yb2o3), lutetium oxide (lu2o3), yttrium oxide (y2o3), or the like. further, the above-described film 101 having a negative fixed charge can be formed with a hafnium nitride film, an aluminum nitride film, a hafnium oxynitride film or an aluminum oxynitride film. to the above-described film 101 having a negative fixed charge, silicon (si) or nitrogen (n) may be added within a range which does not impair insulation properties. their concentration is determined as appropriate within a range which does not impair the insulation properties of the film.
however, to prevent image defects such as white spots, it is preferable that the above-described additives such as silicon and nitrogen are added on a surface of the above-described film 101 having a negative fixed charge, that is, a surface opposite to the above-described pd 71 side. by silicon (si) or nitrogen (n) being added in this manner, it is possible to improve heat resistance of the film and capability of blocking ion implantation during a process. in the third embodiment, it is possible to strengthen pinning of the trench side wall of the dti 82 . therefore, for example, when the pixel 50 c in the third embodiment is compared with the pixel 50 a in the first embodiment, according to the pixel 50 c, it is possible to more reliably prevent degradation of the dark characteristics. to form the dti 82 in the third embodiment, it is only necessary to remove the filler 86 (polysilicon) and the side wall film 85 (sio2) inside the groove through photoresist and wet etching after the backside is polished until the polysilicon as the filler 86 is exposed from the state illustrated in d in fig. 6 , and then form the film 101 and fill the groove with sio2. note that it is also possible to fill the inside of the groove with w (tungsten), or the like, in place of the sio2 as the filling material. in this case, because light transmission through the dti 82 with respect to incident light from an oblique direction is suppressed, it is possible to reduce color mixture. fourth embodiment fig. 9 is a vertical cross-sectional diagram of a pixel 50 d in a fourth embodiment to which the present technology is applied. the fourth embodiment is different from the pixel 50 a in the first embodiment in that an n-type solid-phase diffused layer 84 d formed along the dti 82 has a concentration gradient in the depth direction of the si substrate 70 , and other configurations are similar to those of the pixel 50 a in the first embodiment.
while the concentration of the n-type impurities of the n-type solid-phase diffused layer 84 of the pixel 50 a in the first embodiment is constant regardless of the depth, the concentration of the n-type impurities of the n-type solid-phase diffused layer 84 d of the pixel 50 d in the fourth embodiment differs depending on the depth. that is, the n-type impurities of an n-type solid-phase diffused layer 84 d - 1 which is closer to the surface side of the n-type solid-phase diffused layer 84 d of the pixel 50 d are formed to have higher concentration, while the n-type impurities of an n-type solid-phase diffused layer 84 d - 2 which is closer to the backside are formed to have lower concentration. in the pixel 50 d in the fourth embodiment, in addition to effects similar to those of the pixel 50 a in the first embodiment, it is also possible to obtain a new effect that charges can be easily read out, because the potential on the backside becomes shallow as a result of the concentration gradient being provided to the n-type solid-phase diffused layer 84 d. to provide a concentration gradient at the n-type solid-phase diffused layer 84 d, for example, because etching damage occurs on the side wall of the groove when the groove of the dti 82 is opened, it is possible to utilize the difference in the solid-phase diffusion doping amount caused by the amount of the damage. note that, instead of providing a concentration gradient at the n-type solid-phase diffused layer 84 d, it is also possible to make the concentration of p-type impurities of a p-type solid-phase diffused layer 83 d which is closer to the surface side lower, so that the concentration of the p-type impurities of the p-type solid-phase diffused layer 83 d which is closer to the backside becomes higher. also in this case, it is possible to obtain effects similar to those obtained in a case where the concentration gradient is provided at the n-type solid-phase diffused layer 84 d.
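The easier readout attributed to the doping gradient can be made plausible with a textbook estimate: between two n-type regions of different donor concentration, the built-in potential difference is roughly (kT/q)·ln(N_high/N_low), which tilts the potential from the backside toward the higher-doped surface side. The concentrations below are illustrative assumptions, not values from this disclosure.

```python
import math

K_BOLTZMANN = 1.380649e-23  # Boltzmann constant [J/K]
Q_E = 1.602176634e-19       # elementary charge [C]

def builtin_potential_difference(n_high_cm3, n_low_cm3, temperature_k=300.0):
    """Approximate potential difference between two regions of donor
    concentration n_high and n_low: (kT/q) * ln(n_high / n_low)."""
    thermal_voltage = K_BOLTZMANN * temperature_k / Q_E  # ~25.9 mV at 300 K
    return thermal_voltage * math.log(n_high_cm3 / n_low_cm3)

# Assumed gradient: 1e18 cm^-3 near the surface side (84d-1) and
# 1e16 cm^-3 near the backside (84d-2) -> roughly 0.12 V of
# drift-assisting potential pushing electrons toward the readout side.
delta_v = builtin_potential_difference(1e18, 1e16)
```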
further, it is also possible to provide respective concentration gradients at both the n-type solid-phase diffused layer 84 d and the p-type solid-phase diffused layer 83 d. fifth embodiment fig. 10 is a vertical cross-sectional diagram of a pixel 50 e in a fifth embodiment to which the present technology is applied. the pixel 50 e in the fifth embodiment is different from that in the first embodiment in that a side wall film 85 e formed with sio2, formed on an inner wall of a dti 82 e, is formed thicker than the side wall film 85 of the pixel 50 a in the first embodiment, and other configurations are similar to those in the first embodiment. because the optical refractive index of sio2 is lower than that of si, incident light which is incident on the si substrate 70 is reflected in accordance with snell's law, and transmission of light to the adjacent pixel 50 is suppressed; however, if the film thickness of the side wall film 85 is thin, snell's law does not completely hold, and there is a possibility that transmitted light increases. because the side wall film 85 e of the pixel 50 e in the fifth embodiment is formed to have a thick film thickness, it is possible to reduce deviation from snell's law, so that reflection of the incident light at the side wall film 85 e increases and transmission to the adjacent pixel 50 e can be reduced. therefore, with the pixel 50 e in the fifth embodiment, in addition to effects similar to those of the pixel 50 a in the first embodiment, it is also possible to obtain an effect that color mixture to the adjacent pixel 50 e due to oblique incident light can be suppressed. sixth embodiment fig. 11 is a vertical cross-sectional diagram of a pixel 50 f in a sixth embodiment to which the present technology is applied.
the pixel 50 f in the sixth embodiment is different from the pixel 50 a in the first embodiment in that a concentration gradient is provided so that the concentration of the p-type impurities at the si substrate 70 becomes higher on the backside than on the surface side by p-type impurities being doped in a region 111 between the pd 71 and the backside si interface 75 , and other configurations are similar to those of the pixel 50 a in the first embodiment. referring to fig. 3 again, there is no concentration gradient at the si substrate 70 in the pixel 50 a in the first embodiment, and the p-type region 72 is formed between the pd 71 and the backside si interface 75 . in the pixel 50 f in the sixth embodiment, a concentration gradient is provided at the si substrate 70 . this concentration gradient is set so that the concentration of the p-type impurities becomes higher on the backside (p-type region 111 side) than on the surface side. according to the pixel 50 f in the sixth embodiment having such a concentration gradient, in addition to effects similar to those of the pixel 50 a in the first embodiment, it is also possible to obtain an effect that charges can be read out more easily than in the pixel 50 a in the first embodiment. seventh embodiment fig. 12 is a vertical cross-sectional diagram of a pixel 50 g in a seventh embodiment to which the present technology is applied. the pixel 50 g in the seventh embodiment is different from the pixel 50 a in that the thickness of the si substrate 70 is greater than that in the pixel 50 a in the first embodiment, and the dti 82 , or the like, is formed more deeply in accordance with the increased thickness of the si substrate 70 . in the pixel 50 g in the seventh embodiment, a si substrate 70 g is formed to have a large thickness. in accordance with the thickness of the si substrate 70 g being greater, the area (volume) of a pd 71 g increases, and a dti 82 g is formed more deeply.
further, in accordance with the dti 82 g being formed more deeply, a p-type solid-phase diffused layer 83 g and an n-type solid-phase diffused layer 84 g are also formed more deeply (widely). as a result of the p-type solid-phase diffused layer 83 g and the n-type solid-phase diffused layer 84 g being wider, the area of the pn junction region including the p-type solid-phase diffused layer 83 g and the n-type solid-phase diffused layer 84 g becomes wider. therefore, with the pixel 50 g in the seventh embodiment, in addition to effects similar to those of the pixel 50 a in the first embodiment, it is also possible to further increase the saturated charge amount qs compared to the pixel 50 a in the first embodiment. eighth embodiment fig. 13 is a vertical cross-sectional diagram of a pixel 50 h in an eighth embodiment to which the present technology is applied. the pixel 50 h in the eighth embodiment is a pixel obtained by extending the length of a si substrate 70 g in the depth direction in a similar manner to the pixel 50 g in the seventh embodiment illustrated in fig. 12 . further, in the pixel 50 h, a p-type region 121 - 1 , an n-type region 122 and a p-type region 121 - 2 are formed through ion implantation on the backside of the pd 71 . because an intense electric field is generated at the pn junction formed with the p-type region 121 - 1 , the n-type region 122 and the p-type region 121 - 2 , it is possible to hold charges there. therefore, with the pixel 50 h in the eighth embodiment, in addition to effects similar to those of the pixel 50 g in the seventh embodiment, it is also possible to further increase the saturated charge amount qs. ninth embodiment fig. 14 is a vertical cross-sectional diagram of a pixel 50 i in a ninth embodiment to which the present technology is applied.
the pixel 50 i in the ninth embodiment is different from the pixel 50 a in the first embodiment in that a mos capacitor 131 and a pixel transistor (not illustrated) are formed on the surface side of the si substrate 70 , and other configurations are similar to those of the pixel 50 a in the first embodiment. normally, even if the saturated charge amount qs of the pd 71 is increased, output is limited by the amplitude limit of the vertical signal line vsl (vertical signal line 47 illustrated in fig. 2 ) unless the conversion efficiency is lowered, so that it is difficult to sufficiently utilize the increased saturated charge amount qs. to lower the conversion efficiency of the pd 71 , it is necessary to add capacity to the fd 91 ( fig. 4 ). therefore, the pixel 50 i in the ninth embodiment has a configuration where the mos capacitor 131 is added as capacity to be added to the fd 91 (the connection is not illustrated in fig. 14 ). with the pixel 50 i in the ninth embodiment, in addition to effects similar to those of the pixel 50 a in the first embodiment, it is possible to lower the conversion efficiency of the pd 71 by adding the mos capacitor 131 to the fd 91 , and it is possible to employ a configuration where the increased saturated charge amount qs can be sufficiently utilized. tenth embodiment fig. 15 is a vertical cross-sectional diagram of a pixel 50 j in a tenth embodiment to which the present technology is applied. the pixel 50 j in the tenth embodiment is different from the pixel 50 a in the first embodiment in that two contacts 152 are formed at a well contact portion 151 formed in the active region 77 , and the contacts 152 are connected to a cu wiring 153 , and other configurations are similar to those of the pixel 50 a in the first embodiment. in this manner, it is also possible to employ a configuration where the well contact portion 151 is provided. note that, while, in fig.
15 , an example has been described where two contacts 152 are formed, two or more contacts 152 may be formed at the well contact portion 151 . according to the pixel 50 j in the tenth embodiment, in addition to effects similar to those of the pixel 50 a in the first embodiment, it is also possible to improve the yield ratio with respect to major defects. eleventh embodiment fig. 16 is a vertical cross-sectional diagram of a pixel 50 k in an eleventh embodiment to which the present technology is applied. the pixel 50 k in the eleventh embodiment is different from the pixel 50 a in the first embodiment in that a vertical transistor trench 81 k is opened at the center of the pixel 50 k, where a transfer transistor (gate) 80 k is formed, and other configurations are similar to those of the pixel 50 a in the first embodiment. the pixel 50 k illustrated in fig. 16 is formed in a state where the transfer transistor (gate) 80 k is located at equal distances from the respective outer peripheries of the pd 71 . therefore, according to the pixel 50 k in the eleventh embodiment, in addition to effects similar to those of the pixel 50 a in the first embodiment, because the transfer transistor (gate) exists at equal distances from the respective outer peripheries of the pd 71 , it is possible to improve transfer of charges. twelfth embodiment fig. 17 is a vertical cross-sectional diagram of a pixel 50 m in a twelfth embodiment to which the present technology is applied. the pixel 50 m in the twelfth embodiment is different from the pixel 50 a in the first embodiment in that a transfer transistor 80 m is formed with two vertical transistor trenches 81 - 1 and 81 - 2 , and other points in the configuration are similar. while the pixel 50 a ( fig.
3 ) in the first embodiment has a configuration where the transfer transistor 80 includes one vertical transistor trench 81 , in the pixel 50 m in the twelfth embodiment, the transfer transistor 80 m is formed with two vertical transistor trenches 81 - 1 and 81 - 2 . in this manner, as a result of a configuration where two vertical transistor trenches 81 - 1 and 81 - 2 are provided being employed, followability of the potential in the region between the two vertical transistor trenches 81 - 1 and 81 - 2 is improved when the potential of the transfer transistor 80 m is changed. therefore, it is possible to increase the modulation factor. as a result, it is possible to improve charge transfer efficiency. further, effects similar to those of the pixel 50 a in the first embodiment can also be obtained. note that, while description has been provided here using an example where the transfer transistor 80 m includes two vertical transistor trenches 81 - 1 and 81 - 2 , two or more vertical transistor trenches 81 may be formed in each pixel region. further, while an example has been described where the two vertical transistor trenches 81 - 1 and 81 - 2 are formed to have the same size (length and diameter), in a case where a plurality of vertical transistor trenches 81 is formed, vertical transistor trenches 81 having different sizes may be formed. for example, it is also possible to form one of the two vertical transistor trenches 81 - 1 and 81 - 2 longer than the other, or to form one of the two vertical transistor trenches 81 - 1 and 81 - 2 to have a greater diameter. thirteenth embodiment fig. 18 is a vertical cross-sectional diagram of a pixel 50 n in a thirteenth embodiment to which the present technology is applied. the pixel 50 n in the thirteenth embodiment is different from the pixel 50 a in the first embodiment in the configuration of the light shielding film 74 , and other configurations are similar.
in the pixel 50 n in the thirteenth embodiment, a light shielding film 74 n - 1 and a light shielding film 74 n - 2 are respectively formed on an upper side and a lower side of the dti 82 n. while, in the pixel 50 a ( fig. 3 ) in the first embodiment, the light shielding film 74 which covers the backside is formed on the backside (lower part in the drawing) of the dti 82 , in the pixel 50 n ( fig. 18 ), the inside of the dti 82 n is filled with a metal material (for example, tungsten) which is the same as that of the light shielding film 74 , and the surface side (upper part in the drawing) of the si substrate 70 is also covered. that is, a configuration is employed where a portion other than the backside (other than the light incident surface) of each pixel region is enclosed with the metal material. however, in a case where the pixel 50 n has a configuration where a portion other than the backside of the pixel 50 n is enclosed with the metal material, an opening portion is provided as appropriate at a necessary portion; for example, the portion of the light shielding film 74 n - 2 at which the transfer transistor 80 n is located is opened, and a terminal for external connection is formed. note that a metal material other than tungsten (w) may be used as the light shielding film 74 , or the like. according to the pixel 50 n in the thirteenth embodiment, because it is possible to prevent incident light from leaking to the adjacent pixel 50 n, it is possible to suppress color mixture. further, it is possible to employ a configuration where light which is incident from the backside and reaches the surface side without being subjected to photoelectric conversion is reflected by the metal material (light shielding film 74 n - 2 ) and is incident on the pd 71 again.
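The sensitivity gain from reflecting unabsorbed light back through the photodiode can be estimated with the Beer-Lambert law: a single pass absorbs 1 − exp(−αd), while an ideal surface-side reflector doubles the optical path, giving up to 1 − exp(−2αd). This is a rough sketch; the absorption coefficient and photodiode depth below are illustrative assumptions, not values from this disclosure.

```python
import math

def absorbed_fraction(alpha_per_um, depth_um, back_reflector=False):
    """Beer-Lambert estimate of the fraction of light absorbed in a
    photodiode of the given depth; with an ideal reflector on the far
    side the optical path length doubles."""
    path_um = 2 * depth_um if back_reflector else depth_um
    return 1.0 - math.exp(-alpha_per_um * path_um)

# Assumed: alpha = 0.2/um (order of magnitude for red light in si)
# and a 3 um deep photodiode.
single_pass = absorbed_fraction(0.2, 3.0)                        # ~0.45
double_pass = absorbed_fraction(0.2, 3.0, back_reflector=True)   # ~0.70
```

The estimate shows why the reflector matters most for weakly absorbed (long-wavelength) light, where a single pass leaves a large unabsorbed fraction.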
therefore, with the pixel 50 n in the thirteenth embodiment, in addition to effects similar to those of the pixel 50 a in the first embodiment, it is also possible to further improve the sensitivity of the pd 71 . fourteenth embodiment fig. 19 is a vertical cross-sectional diagram of a pixel 50 p in a fourteenth embodiment to which the present technology is applied. in the above-described first to thirteenth embodiments, description has been provided using an example where the p-type solid-phase diffused layer 83 and the n-type solid-phase diffused layer 84 (for example, fig. 3 ) are formed along the dti 82 formed on the side wall of the pd 71 . in other words, in the first to the thirteenth embodiments, description has been provided using an example of a configuration where a pn junction region of the p-type solid-phase diffused layer 83 and the n-type solid-phase diffused layer 84 is formed on the side wall of the pd 71 , so that an intense electric field region is generated, and charges generated at the pd 71 are held. further, it is also possible to employ a configuration where a pn junction region is formed also on the backside, and the charges generated at the pd 71 are further held. the pixel 50 p in the fourteenth embodiment illustrated in fig. 19 is different from the pixel 50 a in the first embodiment in that a pn junction region is formed also on the backside, which is the light incident side, and other configurations are similar. in the pixel 50 p illustrated in fig. 19 , a p-type region 72 p is formed on the backside si interface 75 (on the pd 71 side). further, an n-type region 211 is formed on the p-type region 72 p (on the pd 71 side). by these p-type region 72 p and n-type region 211 , it is also possible to employ a configuration where a pn junction region is formed also on the light incident surface side, and an intense electric field region is generated.
in a case where a pn junction region is formed also on the backside in this manner, as illustrated in the pixel 50 p in fig. 19 , it is possible to employ a configuration where three sides among four sides enclosing the pd 71 are enclosed with the pn junction region when seen in a vertical cross-section of the pixel 50 p. in a case where a configuration is employed where three sides among four sides enclosing the pd 71 are enclosed with the pn junction region when seen in a vertical cross-section of the pixel 50 p, as illustrated in fig. 19 , it is also possible to continuously form the n-type solid-phase diffused layer 84 p and the n-type region 211 by making the concentration of the n-type solid-phase diffused layer 84 p formed on the side wall the same as the concentration of the n-type region 211 . further, the n-type region 211 and the p-type region 72 p can be made a solid-phase diffused layer. further, in a case where the n-type solid-phase diffused layer 84 p and the n-type region 211 are continuously formed by making the concentration of the n-type solid-phase diffused layer 84 p the same as the concentration of the n-type region 211 , it is possible to form the n-type solid-phase diffused layer 84 p and the n-type region 211 at the same timing upon formation (upon manufacturing). for example, as will be described later, it is possible to form the n-type solid-phase diffused layer 84 p and the n-type region 211 at the same timing using a manufacturing method using solid-phase diffusion. alternatively, it is also possible to make the concentration of the n-type solid-phase diffused layer 84 p different from the concentration of the n-type region 211 . for example, the n-type solid-phase diffused layer 84 p may be formed to have lower concentration than the concentration of the n-type region 211 . 
in a case where the concentration of the n-type solid-phase diffused layer 84 p is made different from the concentration of the n-type region 211 , it is possible to manufacture the n-type solid-phase diffused layer 84 p and the n-type region 211 in different manufacturing processes. according to a configuration where three sides among four sides enclosing the pd 71 are enclosed with the pn junction region in this manner, in the pixel 50 p in the fourteenth embodiment, an intense electric field region wider than that of the pixel 50 a in the first embodiment is generated, so that it is possible to hold more charges generated at the pd 71 . fifteenth embodiment fig. 20 is a vertical cross-sectional diagram of a pixel 50 q in a fifteenth embodiment to which the present technology is applied. while an example has been described where the above-described pixel 50 p in the fourteenth embodiment illustrated in fig. 19 has a configuration where three sides among four sides enclosing the pd 71 are enclosed with the pn junction region in a vertical cross-section, as in the pixel 50 q in the fifteenth embodiment illustrated in fig. 20 , it is also possible to employ a configuration where a pn junction region is formed at only one side on the backside among the four sides enclosing the pd 71 . in the pixel 50 q illustrated in fig. 20 , a p-type region 72 q and an n-type region 211 q are formed on the backside si interface 75 (pd 71 side), and a pn junction region is formed only on the backside. also in a case of such formation, because an intense electric field region is generated on the backside of the pd 71 , it is possible to employ a configuration where the charges generated at the pd 71 are held. 
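The "intense electric field region" at such a pn junction can be quantified with the standard abrupt-junction relations, V_bi = (kT/q)·ln(N_A·N_D/n_i²) and E_max = sqrt(2q·V_bi·N_A·N_D / (ε_Si·(N_A+N_D))). A minimal sketch under assumed doping levels (not disclosed values) follows.

```python
import math

Q_E = 1.602176634e-19        # elementary charge [C]
KT_Q_300 = 0.025852          # thermal voltage kT/q at 300 K [V]
EPS_SI = 11.7 * 8.854e-12    # permittivity of silicon [F/m]
N_I = 1.5e10 * 1e6           # intrinsic carrier density of si at 300 K [m^-3]

def step_junction(n_a_m3, n_d_m3):
    """Built-in potential and peak depletion field of an abrupt pn junction."""
    v_bi = KT_Q_300 * math.log(n_a_m3 * n_d_m3 / N_I**2)
    n_eff = n_a_m3 * n_d_m3 / (n_a_m3 + n_d_m3)  # series combination of dopings
    e_max = math.sqrt(2 * Q_E * n_eff * v_bi / EPS_SI)
    return v_bi, e_max

# Assumed solid-phase diffused levels of 1e18 cm^-3 on both sides
# (converted to m^-3): v_bi ~ 0.93 V and e_max on the order of 4e7 V/m,
# a depletion field strong enough to hold charge adjacent to the pd.
v_bi, e_max = step_junction(1e18 * 1e6, 1e18 * 1e6)
```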
if a pn junction region is formed on at least one side among the four sides enclosing the pd 71 in a vertical cross-section of the pixel 50 , as illustrated in the fourteenth and fifteenth embodiments, because an intense electric field region is generated at the pn junction region, it is possible to employ a configuration where the charges generated at the pd 71 are held. sixteenth embodiment fig. 21 is a vertical cross-sectional diagram of a pixel 50 r in a sixteenth embodiment to which the present technology is applied. while the pixel 50 r in the sixteenth embodiment illustrated in fig. 21 has a configuration similar to that of the pixel 50 p illustrated in fig. 19 , the pixel 50 r is different from the pixel 50 p in that the pn junction region formed on the backside does not extend all the way to the dti 82 but is interrupted. referring to fig. 21 , while a p-type region 72 r and an n-type region 211 r formed on the backside of the pixel 50 r are formed so as to contact the dti 82 in a left part of the drawing, the p-type region 72 r and the n-type region 211 r are formed so as not to contact the dti 82 in a right part of the drawing, and a separation prevention region 231 is formed between the p-type region 72 r and the n-type region 211 r, and the dti 82 . this separation prevention region 231 is a region provided so that the si substrate 70 is not separated upon manufacturing of the pixel 50 r in a case where the p-type region 72 r and the n-type region 211 r are formed using a manufacturing method which will be described later. when the pixel 50 r is seen from the surface or the backside, the separation prevention region 231 is formed at a position (region) as illustrated in fig. 22 . referring to a in fig. 22 , the dti 82 is formed so as to enclose the pd 71 . this point is similar to other embodiments. the pn junction regions including the p-type region 72 r and the n-type region 211 r, and the separation prevention regions 231 , are formed so as to be alternately arranged.
alternatively, as illustrated in b in fig. 22 , the separation prevention region 231 may be formed such that a separation prevention region 231 - 1 is formed in a vertical direction in the drawing, and a separation prevention region 231 - 2 is formed in a horizontal direction. in other words, the separation prevention region 231 may be formed in a cross shape. in this case, the pn junction region including the p-type region 72 r and the n-type region 211 r is formed so as to be enclosed with the separation prevention region 231 . in this manner, the separation prevention region 231 is a region formed on the backside of the pd 71 . in other words, the pn junction region including the p-type region 72 r and the n-type region 211 r is formed so as to cover the backside of the pd 71 except part of the backside, and the part which is not covered is made the separation prevention region 231 . further, in other words, the pn junction region is discontinuously formed on the backside of the pd 71 . such a separation prevention region 231 is formed, when the pn junction region including the p-type region 72 r and the n-type region 211 r is formed, using a manufacturing method which will be described below. fig. 23 and fig. 24 are views for explaining a process relating to manufacturing of the pixel 50 r illustrated in fig. 21 , particularly, manufacturing of the pn junction region including the p-type region 72 r and the n-type region 211 r. the pn junction region including the p-type region 72 r and the n-type region 211 r of the pixel 50 r can be formed using a silicon on nothing (son) technology. in step s 11 , a plurality of trenches 251 which are perpendicular to the surface of the si substrate 70 is formed at predetermined intervals. while the plurality of trenches 251 is formed within a region which becomes the pixel 50 r, the trench 251 is not formed in a region which becomes the separation prevention region 231 . in part of the step s 11 in fig.
23 , the trench 251 is not formed in a right portion of the portion which is described as a pixel region, and this portion becomes the separation prevention region 231 through the subsequent process. therefore, a reference numeral is also assigned to that portion in step s 11 in fig. 23 to clearly indicate the portion which becomes the separation prevention region 231 . in step s 12 , annealing treatment is performed on the si substrate 70 on which the trenches 251 are formed, using an h2 gas for approximately ten minutes under an environment of approximately 1100° c. (this temperature and period are examples and are not limitative). by this means, as illustrated in step s 12 in fig. 23 , a cavity portion 252 in a horizontal direction is formed in the si substrate 70 . note that a tip of the cavity portion 252 has a slightly rounded shape. in step s 13 , a trench 253 which leads to the cavity portion 252 is opened from the surface of the si substrate 70 . in step s 14 , a p-type region 254 and an n-type region 255 are formed on the respective side surfaces of the cavity portion 252 and the trench 253 by impurity diffusion (solid-phase diffusion) being executed. the p-type region 254 formed around the trench 253 in step s 14 becomes the p-type solid-phase diffused layer 83 of the pixel 50 r illustrated in fig. 21 , and the n-type region 255 becomes the n-type solid-phase diffused layer 84 . further, the p-type region 254 formed around the cavity portion 252 in step s 14 becomes the p-type region 72 r of the pixel 50 r, and the n-type region 255 becomes the n-type region 211 r. further, in a case where the p-type region 254 and the n-type region 255 are formed by solid-phase diffusion treatment being performed on the cavity portion 252 and the trench 253 in this manner, the p-type solid-phase diffused layer 83 , the n-type solid-phase diffused layer 84 , the p-type region 72 r and the n-type region 211 r are formed at the same timing.
therefore, because the p-type solid-phase diffused layer 83 and the p-type region 72 r are formed in the same step, their concentrations become substantially the same level. further, the p-type solid-phase diffused layer 83 and the p-type region 72 r are formed as a continuous p-type region. in a similar manner, because the n-type solid-phase diffused layer 84 and the n-type region 211 r are formed in the same process, their concentrations become substantially the same level. further, the n-type solid-phase diffused layer 84 and the n-type region 211 r are formed as a continuous n-type region. in step s 15 ( fig. 24 ), the cavity portion 252 is filled with a filler 256 from the trench 253 . as the filler 256 , polysilicon can be used. further, the filler 256 used for filling in step s 15 corresponds to the filler 86 ( fig. 21 ) used for filling inside of the dti 82 . by filling with the filler 256 in this manner and by forming the separation prevention region 231 between the cavity portions 252 (leaving the si substrate 70 instead of making a cavity portion), it is possible to improve mechanical strength of the si substrate 70 , so that it is possible to prevent occurrence of deformation and damage of the si substrate 70 upon processing. in a case where the separation prevention region 231 is not formed, that is, in a case where the adjacent cavity portions 252 are connected, there is a possibility that the pixel portion (the pd 71 ) is lifted off and a possibility that the cavity portion 252 is separated into an upper portion and a lower portion; by providing the separation prevention region 231 , it is possible to prevent occurrence of such a situation. in step s 16 , an n-type semiconductor region which becomes the pd 71 is formed by n+ ions being injected to the si substrate 70 , and a p-type semiconductor region which becomes a pwell region 77 is formed by p+ ions being injected.
in step s 17 , the backside of the si substrate 70 is polished and planarized through chemical mechanical polishing (cmp). polishing is performed until the p-type region 254 on an upper side (the pd 71 side) of the cavity portion 252 is exposed. in step s 18 , the ocl 76 , or the like, is laminated on the backside, a transistor is formed on the surface side, and the wiring layer 79 is laminated. the pixel 50 r is manufactured in this manner. in a case where a cavity is formed on the si substrate 70 and the pn junction region is formed through an son process, the pixel 50 r at which the separation prevention region 231 is provided is formed. referring to the pixel 50 r illustrated in fig. 21 again, the separation prevention region 231 is provided in a lower right part in the drawing. in the example illustrated in fig. 21 , this separation prevention region 231 is provided at a position separate from the vertical transistor trench 81 . that is, in the example illustrated in fig. 21 , the vertical transistor trench 81 is formed in an upper left part in the drawing, and the separation prevention region 231 is provided in a lower right part in the drawing. further, while not illustrated, the separation prevention region 231 may be provided at a position closer to the vertical transistor trench 81 . for example, as illustrated in fig. 21 , in a case where the vertical transistor trench 81 is formed in an upper left part in the drawing, the separation prevention region 231 may be provided in a lower left part in the drawing. further, the separation prevention region 231 may be formed near the center as illustrated in fig. 25 instead of being formed on a side surface side of the pixel 50 r. in the pixel 50 r illustrated in fig. 25 , a separation prevention region 231 ′ is formed at the center of the pixel 50 r.
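the flow of steps s 11 to s 18 described above can be summarized schematically. the following python sketch is an illustrative restatement of the ordering of the son process steps; the step labels and the simple checks are illustrative only and are not part of the manufacturing method itself:

```python
# schematic model of the son (silicon on nothing) process of steps s11-s18.
# illustrative restatement of the ordering described in the text, not a
# process recipe; step labels mirror the description.

SON_STEPS = [
    ("s11", "form vertical trenches 251 (none in the separation prevention region 231)"),
    ("s12", "h2 anneal, ~1100 deg c for ~10 min -> horizontal cavity portion 252"),
    ("s13", "open trench 253 leading to the cavity portion 252"),
    ("s14", "solid-phase diffusion: p-type region 254 and n-type region 255 on cavity/trench walls"),
    ("s15", "fill cavity portion 252 and trench 253 with filler 256 (e.g. polysilicon)"),
    ("s16", "implant n+ ions (pd 71) and p+ ions (pwell region 77)"),
    ("s17", "cmp polish backside until the upper p-type region 254 is exposed"),
    ("s18", "laminate ocl 76 on backside; form transistors and wiring layer 79 on surface"),
]

def step_index(name):
    """Return the position of a named step in the process flow."""
    return next(i for i, (label, _) in enumerate(SON_STEPS) if label == name)

# solid-phase diffusion (s14) happens in a single step for both the trench
# walls and the cavity walls, which is why the p-type solid-phase diffused
# layer 83 and the p-type region 72r end up with substantially the same
# concentration (likewise for the n-type counterparts).
assert step_index("s12") < step_index("s13")  # cavity exists before trench 253 is opened
assert step_index("s14") < step_index("s15")  # diffusion happens before filling
```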
seventeenth embodiment fig. 26 is a vertical cross-sectional diagram of a pixel 50 s in a seventeenth embodiment to which the present technology is applied. as described with reference to fig. 19 to fig. 25 , in a case where the pn junction region is formed also on the backside of the pixel 50 so that an intense electric field region is generated also on the backside, it is considered that a charge amount held on the backside increases. therefore, a configuration for efficiently and more reliably transferring charges held on the backside will be illustrated in fig. 26 . the pixel 50 s illustrated in fig. 26 is different from the pixel 50 p in the fourteenth embodiment illustrated in fig. 19 in that two transfer transistor gates 80 (vertical transistor trenches 81 ) are provided, and other portions have similar configurations. in the pixel 50 s, two vertical transistor trenches 81 s - 1 and 81 s - 2 are formed. the vertical transistor trench 81 s - 2 among the two vertical transistor trenches 81 s is formed at a position closer to the center of the pixel 50 s than the vertical transistor trench 81 s - 1 . further, the vertical transistor trench 81 s - 2 among the two vertical transistor trenches 81 s is formed to be longer than the vertical transistor trench 81 s - 1 . in this manner, the vertical transistor trench 81 s located closer to the center of the pixel 50 s can be formed to be longer than the vertical transistor trench 81 s located farther from the center. alternatively, while not illustrated, the vertical transistor trench 81 s located closer to the center of the pixel 50 s may be formed to be shorter than the vertical transistor trench 81 s located farther from the center. further, the vertical transistor trench 81 s formed to be longer (in fig. 
26 , the vertical transistor trench 81 s - 2 ) may be formed so as to contact the pn junction region (an n-type region 211 s and a p-type region 72 s ) formed on the backside, or so as to reach inside of the pn junction region. of course, the present technology can be also applied in a case where the vertical transistor trench 81 s - 2 does not contact the pn junction region (n-type region 211 s ) formed on the backside. the vertical transistor trench 81 s formed to be longer (in fig. 26 , the vertical transistor trench 81 s - 2 ) is formed to a length (depth) equal to or greater than ½ of a thickness of the pd 71 . in other words, the vertical transistor trench 81 s which is formed to be longer is formed to reach at least a position exceeding a central position of the pd 71 . by forming the vertical transistor trenches 81 s having different lengths in this manner, it is possible to optimize each transfer of charges at a shallow portion and a deep portion of the pd 71 . while, in the example illustrated in fig. 26 , a case has been described where the two vertical transistor trenches 81 s are provided, it is also possible to provide two or more (a plurality of) vertical transistor trenches 81 s. in a case where a plurality of vertical transistor trenches 81 s is provided, at least one vertical transistor trench 81 s among the plurality of vertical transistor trenches 81 s may be formed so as to contact the pn junction region formed on the backside. further, a position where the vertical transistor trench 81 s is formed may be a position closer to the dti 82 side than to the central portion as illustrated in fig. 26 or may be a central portion. <concentration of pn junction region on backside> concentration of the pn junction region in a case where the pn junction region is provided on the backside in the pixels 50 p to 50 s described as the fourteenth to the seventeenth embodiments will be described. fig.
27 is a view for explaining change in the respective concentration of the p-type impurities and the n-type impurities of the pn junction region. an upper part of fig. 27 illustrates the pixel 50 p illustrated in fig. 19 . the backside si interface 75 of the pixel 50 p is set as a position a, and a position near a central portion of the pd 71 is set as a position b. change in concentration of impurities from the position a to the position b is illustrated in a lower part of fig. 27 . in a graph of the change of concentration illustrated in the lower part of fig. 27 , a horizontal axis indicates a depth from an si interface (position a) on a light receiving surface side, and a vertical axis indicates concentration of impurities. further, a graph indicated with a solid line indicates concentration of p-type impurities and a graph indicated with a dotted line indicates concentration of n-type impurities. the concentration of p-type impurities is the highest near the position a, that is, near the center of the p-type region 72 p, and becomes rapidly lower as the position is closer (deeper) to the center (position b) of the pd 71 . that is, the concentration of p-type impurities is the highest near the center of the p-type region 72 p, and becomes rapidly lower when the position is away from a position near the center of the p-type region 72 p. the p-type region 72 p is formed to have higher concentration than the concentration of the p-type impurities of a pwell region 77 . further, concentration in a region where the concentration of the p-type impurities is the highest, that is, in this case, the concentration of the central portion of the p-type region 72 p can be, for example, of the order of 1e16 cm-3 to 1e17 cm-3. 
meanwhile, the concentration of the n-type impurities is the highest at a position a little away from the position a, that is, near the center of the n-type region 211 , and becomes gradually lower as the position is closer (deeper) to the center (position b) of the pd 71 . that is, the concentration of the n-type impurities is the highest near the center of the n-type region 211 , becomes gradually lower when the position is away from a portion near the center of the n-type region 211 , and is maintained at fixed concentration until the central portion of the pd 71 . the n-type region 211 is formed to have concentration equal to the concentration of the n-type impurities of the pd 71 or concentration higher than the concentration of the n-type impurities of the pd 71 . further, the concentration in a region where the concentration of the n-type impurities is the highest, that is, in this case, the concentration at the central portion of the n-type region 211 can be, for example, of the order of 1e15 cm-3 to 1e17 cm-3. change in the concentration of n-type impurities becomes the change as illustrated in the graph in fig. 27 also in a case where the position becomes closer to a portion near the center of the pd 71 from the n-type solid-phase diffused layer 84 . that is, the concentration of n-type impurities changes such that the concentration is the highest near the center of the n-type solid-phase diffused layer 84 , becomes gradually lower when the position is away from a portion near the center of the n-type solid-phase diffused layer 84 , and is maintained at fixed concentration until the central portion of the pd 71 . in this manner, the n-type solid-phase diffused layer 84 and the n-type region 211 having high concentration of n-type impurities are provided on the side surface and the backside of the pd 71 . 
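the qualitative depth profiles described for fig. 27 can be mimicked with a toy numerical model. in the python sketch below, the gaussian functional forms, the characteristic depths and the bulk level are illustrative assumptions; only the shape of the curves (p-type peak at the interface with rapid falloff, n-type peak a little deeper with gradual falloff to a fixed level) and the stated orders of magnitude come from the description above:

```python
import math

# toy depth profiles (in cm^-3) for the backside pn junction of fig. 27.
# the gaussian forms, characteristic depths (in um) and bulk level are
# illustrative assumptions; only the qualitative shape and the orders of
# magnitude are taken from the description.

P_PEAK = 5e16   # p-type region 72p peak, within the stated 1e16-1e17 range
N_PEAK = 5e16   # n-type region 211 peak, within the stated 1e15-1e17 range
N_BULK = 1e15   # fixed n-type level maintained toward the pd center (assumed)

def p_conc(depth_um):
    """p-type concentration: highest at the si interface (position a), rapid falloff."""
    return P_PEAK * math.exp(-(depth_um / 0.05) ** 2)   # narrow width -> "rapidly lower"

def n_conc(depth_um):
    """n-type concentration: peak a little deeper, gradual falloff to a fixed bulk level."""
    return N_BULK + (N_PEAK - N_BULK) * math.exp(-((depth_um - 0.1) / 0.3) ** 2)

assert 1e16 <= P_PEAK <= 1e17 and 1e15 <= N_PEAK <= 1e17
# the p-type profile falls off much faster than the n-type profile
assert p_conc(0.2) / p_conc(0.0) < n_conc(0.3) / n_conc(0.1)
# deep in the pd the n-type concentration is maintained at the fixed bulk level
assert abs(n_conc(1.0) - N_BULK) / N_BULK < 0.01
```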
further, in a region adjacent to the n-type solid-phase diffused layer 84 and the n-type region 211 having high concentration of n-type impurities, the p-type solid-phase diffused layer 83 and the p-type region 72 p having high concentration of p-type impurities are provided. therefore, the side surface and the backside of the pd 71 can constitute the pixel 50 where the pn junction is abrupt. according to such a configuration, as described above, it is possible to employ a configuration where an intense electric field region is generated on the side surface and the backside of the pd 71 , so that it is possible to employ a configuration where the charges generated at the pd 71 can be held more easily. eighteenth embodiment fig. 28 is a vertical cross-sectional diagram of a pixel 50 t in an eighteenth embodiment to which the present technology is applied. further, fig. 29 is a plan view of the pixel 50 t including an al pad take-out portion included in the eighteenth embodiment. as the eighteenth embodiment, a configuration including an al pad which connects the pixel 50 and other semiconductor substrates, or the like, will be described. while fig. 28 illustrates an example where an al pad is provided at the pixel 50 a in the first embodiment illustrated in fig. 3 , it is also possible to employ a configuration where an al pad is provided at any pixel 50 among the pixels 50 b to 50 s in the second to the seventeenth embodiments, by combining the eighteenth embodiment. as illustrated in fig. 28 and fig. 29 , the pixel array portion 41 ( fig. 2 ) is formed in a left part of the drawing, and an al pad take-out portion 301 is provided in a right part of the drawing. at the al pad take-out portion 301 , an al pad 302 which becomes a connection terminal between the pixel 50 t and other semiconductor substrates, or the like, is formed on the surface of the substrate (upper part in the drawing). as illustrated in fig.
28 , a solid-phase diffused trench 303 which is formed in a similar manner to the dti 82 in the first embodiment, is formed around each al pad 302 at the al pad take-out portion 301 . by this means, it is possible to electrically insulate each al pad 302 from the pixel array portion 41 and other peripheral circuit portions (not illustrated). note that the solid-phase diffused trench 303 formed at the al pad take-out portion 301 can be utilized as, for example, a mark in photoresist. further, by this means, it is possible to use the solid-phase diffused trench 303 as an alignment mark in the subsequent process. nineteenth embodiment fig. 30 is a vertical cross-sectional diagram of a pixel 50 u in a nineteenth embodiment to which the present technology is applied. as the nineteenth embodiment, a configuration including the pixel 50 and a peripheral circuit portion, or the like, will be described. while fig. 30 illustrates an example where a peripheral circuit is provided at the pixel 50 a in the first embodiment illustrated in fig. 3 , it is also possible to employ a configuration where a peripheral circuit is provided at any pixel 50 among the pixels 50 b to 50 s in the second to the seventeenth embodiments, by combining the nineteenth embodiment. as illustrated in fig. 30 , the pixel array portion 41 ( fig. 2 ) is formed in a left part of the drawing, and a peripheral circuit portion 311 is formed in a right part of the drawing. at the peripheral circuit portion 311 , a solid-phase diffused trench 321 which is formed in a similar manner to the dti 82 in the first embodiment, is formed. a surface side (upper part in the drawing) of a p-type solid-phase diffused layer 83 u formed along the solid-phase diffused trench 321 is electrically connected to a p+ diffused layer 312 formed on the surface of the si substrate 70 . 
further, a backside (lower part in the drawing) of the p-type solid-phase diffused layer 83 u is electrically connected to a pwell region 313 formed near the backside si interface 75 or a hole layer 315 formed with a pinning film near the backside interface of the si substrate 70 . the pwell region 313 is connected to the light shielding film 74 formed with a metal material such as w (tungsten) via the backside contact 314 . by this means, the surface side and the backside of the si substrate 70 are electrically connected, so that the potential is fixed at a potential of the light shielding film 74 . in the nineteenth embodiment, because the p-type solid-phase diffused layer 83 u can also play a role of a pwell region which has been required to connect the surface side and the backside of the si substrate 70 in related art, it is possible to reduce a step for forming a pwell region. twentieth embodiment fig. 31 is a vertical cross-sectional diagram of a pixel 50 v in a twentieth embodiment to which the present technology is applied. as the twentieth embodiment, as with the nineteenth embodiment, a configuration including the pixel 50 and a peripheral circuit portion, or the like, will be described. while fig. 31 illustrates an example where a peripheral circuit is provided at the pixel 50 a in the first embodiment illustrated in fig. 3 , it is also possible to employ a configuration where a peripheral circuit is provided at any pixel 50 among the pixels 50 b to 50 s in the second to the seventeenth embodiments, by combining the twentieth embodiment. in the pixel 50 v in the twentieth embodiment, as in the pixel 50 u in the nineteenth embodiment, as illustrated in fig. 31 , the pixel array portion 41 is formed in a left part of the drawing, and the peripheral circuit portion 331 is provided in a right part of the drawing.
at the peripheral circuit portion 331 , a solid-phase diffused trench 321 v formed in a similar manner to the dti 82 in the first embodiment is formed. a surface side (upper side in the drawing) of the p-type solid-phase diffused layer 83 v formed along the solid-phase diffused trench 321 v is electrically connected to a p+ diffused layer 312 v formed on the surface of the si substrate 70 via the pwell region 332 . the pixel 50 v is different from the pixel 50 u illustrated in fig. 30 in this point. further, a backside (lower side in the drawing) of the p-type solid-phase diffused layer 83 v is electrically connected to a pwell region 313 formed near the backside si interface 75 or a hole layer 315 . the pwell region 313 is connected to the light shielding film 74 formed with a metal material such as w via the backside contact 314 . by this means, the surface side and the backside of the si substrate 70 are electrically connected, so that the potential is fixed at a potential of the light shielding film 74 . in the twentieth embodiment, because the p-type solid-phase diffused layer 83 v can also play a role of a pwell region which has been required to connect the surface side and the backside of the si substrate 70 in related art, it is possible to reduce a step for forming a pwell region. twenty first embodiment fig. 32 is a vertical cross-sectional diagram of a pixel 50 w in a twenty first embodiment to which the present technology is applied. as the twenty first embodiment, as with the nineteenth embodiment, a configuration including the pixel 50 and a peripheral circuit portion, or the like, will be described. while fig. 32 illustrates an example where a peripheral circuit is provided at the pixel 50 a in the first embodiment illustrated in fig.
3 , it is also possible to employ a configuration where a peripheral circuit is provided at any pixel 50 among the pixels 50 b to 50 s in the second to the seventeenth embodiments, by combining the twenty first embodiment. in the pixel 50 w in the twenty first embodiment, as in the pixel 50 u in the nineteenth embodiment, as illustrated in fig. 32 , the pixel array portion 41 is formed in a left part of the drawing, and the peripheral circuit portion 371 is provided in a right part of the drawing. a solid-phase diffused trench 303 is formed at a boundary portion 372 located at a boundary between the pixel array portion 41 and the peripheral circuit portion 371 . therefore, with the pixel 50 w in the twenty-first embodiment, in addition to effects similar to those of the pixel 50 a in the first embodiment being obtained, it is also possible to prevent light emission which can occur at the peripheral circuit portion 371 from intruding to the pixel array portion 41 side by the solid-phase diffused trench 303 w. note that the above-described first to twenty-first embodiments can be combined as appropriate. first modified example while, in the above-described first to twenty-first embodiments, each pixel 50 has the fd 91 ( fig. 4 ) and the pixel transistor (such as, for example, the reset transistor 92 ( fig. 2 )), the fd 91 and the pixel transistor can be shared among a plurality of pixels 50 . fig. 33 illustrates a plan view in a case where the fd 91 and the pixel transistor are shared between two pixels 50 adjacent in a vertical direction. in the example illustrated in fig. 33 , for example, the fd 91 and the pixel transistor are shared between a pixel 50 - 1 located in a lower right part and a pixel 50 - 2 located above the pixel 50 - 1 . an fd 91 ′- 1 of the pixel 50 - 1 , an fd 91 ′- 2 of the pixel 50 - 2 , a conversion efficiency switching transistor 412 , and an amplifier transistor 93 ′- 2 of the pixel 50 - 2 are connected with a wiring 411 - 1 .
further, a mos capacitor 413 of the pixel 50 - 1 and the conversion efficiency switching transistor 412 of the pixel 50 - 2 are connected with a wiring 411 - 2 . according to such a sharing structure, because there is room in an occupation area of each pixel as a result of the number of elements per pixel being reduced, it is possible to provide the conversion efficiency switching transistor 412 and the mos capacitor 413 to be added to the fd 91 ′. the conversion efficiency switching transistor 412 can switch conversion efficiency to high conversion efficiency in use application for improving sensitivity output, and can switch conversion efficiency to low conversion efficiency in use application for improving the saturated charge amount qs. because the mos capacitor 413 added to the fd 91 ′ can increase fd capacity, it is possible to realize low conversion efficiency, so that it is possible to improve the saturated charge amount qs. other modified examples the first to the twenty-first embodiments can be also applied to the pixel 50 which is constituted by, for example, laminating a plurality of substrates as will be described below. <configuration example of laminated solid-state imaging device to which technology according to present disclosure can be applied> fig. 34 is a view illustrating outline of a configuration example of a laminated solid-state imaging device to which the technology according to the present disclosure can be applied. a in fig. 34 illustrates a schematic configuration example of a non-laminated solid-state imaging device. as illustrated in a in fig. 34 , a solid-state imaging device 23010 has one die (semiconductor substrate) 23011 . on this die 23011 , a pixel region 23012 in which pixels are arranged in an array, a control circuit 23013 which performs various kinds of control including driving of pixels, and a logic circuit 23014 for performing signal processing are mounted. b and c in fig.
34 illustrate a schematic configuration example of a laminated solid-state imaging device. as illustrated in b and c in fig. 34 , in the solid-state imaging device 23020 , two dies of a sensor die 23021 and a logic die 23024 are laminated and electrically connected to be constituted as one semiconductor chip. in b in fig. 34 , the pixel region 23012 and the control circuit 23013 are mounted on the sensor die 23021 , and the logic circuit 23014 including a signal processing circuit which performs signal processing is mounted on the logic die 23024 . in c in fig. 34 , the pixel region 23012 is mounted on the sensor die 23021 , and the control circuit 23013 and the logic circuit 23014 are mounted on the logic die 23024 . fig. 35 is a cross-sectional diagram illustrating a first configuration example of a laminated solid-state imaging device 23020 . a photodiode (pd) which constitutes pixels which become the pixel region 23012 , a floating diffusion (fd), a tr (mosfet), and a tr which becomes the control circuit 23013 , or the like, are formed at the sensor die 23021 . further, a wiring layer 23101 having wirings 23110 having a plurality of layers, in the present example, three layers, is formed at the sensor die 23021 . note that (a tr which becomes) the control circuit 23013 can be constituted at the logic die 23024 , not at the sensor die 23021 . at the logic die 23024 , a tr constituting the logic circuit 23014 is formed. further, at the logic die 23024 , a wiring layer 23161 having a wiring 23170 having a plurality of layers, in the present example, three layers, is formed. further, at the logic die 23024 , a connection hole 23171 in which an insulating film 23172 is formed on an inner wall surface is formed, and a connection conductor 23173 to be connected to a wiring 23170 , or the like, is embedded in the connection hole 23171 .
the sensor die 23021 is pasted to the logic die 23024 so that the wiring layers 23101 and 23161 face each other, and thereby the laminated solid-state imaging device 23020 in which the sensor die 23021 and the logic die 23024 are laminated is constituted. a film 23191 such as a protection film is formed on a surface at which the sensor die 23021 is pasted to the logic die 23024 . at the sensor die 23021 , a connection hole 23111 which penetrates through the sensor die 23021 from a backside (side at which light is incident on the pd) (upper side) of the sensor die 23021 and reaches a wiring 23170 in the uppermost layer of the logic die 23024 is formed. further, at the sensor die 23021 , a connection hole 23121 which reaches a wiring 23110 in the first layer from the backside of the sensor die 23021 is formed in the vicinity of the connection hole 23111 . on an inner wall surface of the connection hole 23111 , an insulating film 23112 is formed, and, on an inner wall surface of the connection hole 23121 , an insulating film 23122 is formed. then, connection conductors 23113 and 23123 are respectively embedded into the connection holes 23111 and 23121 . the connection conductor 23113 is electrically connected to the connection conductor 23123 on the backside of the sensor die 23021 , and thereby, the sensor die 23021 is electrically connected to the logic die 23024 via the wiring layer 23101 , the connection hole 23121 , the connection hole 23111 and the wiring layer 23161 . fig. 36 is a cross-sectional diagram illustrating a second configuration example of a laminated solid-state imaging device 23020 . in the second configuration example of the solid-state imaging device 23020 , ((the wiring 23110 ) of the wiring layer 23101 of) the sensor die 23021 is electrically connected to ((the wiring 23170 of) the wiring layer 23161 of) the logic die 23024 by one connection hole 23211 formed at the sensor die 23021 . that is, in fig. 
36 , the connection hole 23211 is formed to penetrate through the sensor die 23021 from the backside of the sensor die 23021 , reach the wiring 23170 in the uppermost layer of the logic die 23024 , and reach the wiring 23110 in the uppermost layer of the sensor die 23021 . on an inner wall surface of the connection hole 23211 , an insulating film 23212 is formed, and a connection conductor 23213 is embedded into the connection hole 23211 . while, in the above-described fig. 35 , the sensor die 23021 is electrically connected to the logic die 23024 by two connection holes 23111 and 23121 , in fig. 36 , the sensor die 23021 is electrically connected to the logic die 23024 by one connection hole 23211 . fig. 37 is a cross-sectional diagram illustrating a third configuration example of a laminated solid-state imaging device 23020 . the solid-state imaging device 23020 in fig. 37 is different from the case of fig. 35 , where the film 23191 such as a protection film is formed on the surface at which the sensor die 23021 is pasted to the logic die 23024 , in that the film 23191 is not formed on that surface. the solid-state imaging device 23020 in fig. 37 is constituted by superimposing the sensor die 23021 and the logic die 23024 so that the wiring 23110 directly contacts the wiring 23170 , and applying heat while applying desired weight, to directly join the wirings 23110 and 23170 . fig. 38 is a cross-sectional diagram illustrating another configuration example of the laminated solid-state imaging device to which the technology according to the present disclosure can be applied. in fig. 38 , a solid-state imaging device 23401 has a three-layer structure in which three dies of a sensor die 23411 , a logic die 23412 and a memory die 23413 are laminated.
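the three inter-die connection schemes described above for fig. 35 to fig. 37 differ only in how the sensor-side wiring 23110 reaches the logic-side wiring 23170 . the python sketch below restates these paths as data; the dictionary layout and key names are illustrative, not part of the configurations themselves:

```python
# minimal sketch of the three sensor-die / logic-die connection schemes
# of figs. 35 to 37. the dictionary layout and key names are illustrative.

SCHEMES = {
    "fig. 35": {  # two connection holes, bridged by conductors on the sensor backside
        "path": ["wiring 23110", "connection hole 23121", "backside conductors 23123/23113",
                 "connection hole 23111", "wiring 23170"],
        "connection_holes": 2,
    },
    "fig. 36": {  # a single connection hole reaches both uppermost wirings
        "path": ["wiring 23110", "connection hole 23211", "wiring 23170"],
        "connection_holes": 1,
    },
    "fig. 37": {  # wirings joined directly under heat and applied weight
        "path": ["wiring 23110", "wiring 23170"],
        "connection_holes": 0,
    },
}

# every scheme electrically connects sensor wiring 23110 to logic wiring 23170
for scheme in SCHEMES.values():
    assert scheme["path"][0] == "wiring 23110"
    assert scheme["path"][-1] == "wiring 23170"
```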
the memory die 23413 includes, for example, a memory circuit which temporarily stores required data in signal processing to be performed at the logic die 23412 . while, in fig. 38 , the logic die 23412 and the memory die 23413 are laminated in this order under the sensor die 23411 , the logic die 23412 and the memory die 23413 may be laminated in reverse order, that is, in order of the memory die 23413 and the logic die 23412 , under the sensor die 23411 . note that, in fig. 38 , at the sensor die 23411 , a pd which becomes a photoelectric converting unit of pixels, and source/drain regions of the pixels tr are formed. a gate electrode is formed around the pd via a gate insulating film, and a pixel tr 23421 and a pixel tr 23422 are formed by the source/drain regions which are paired with the gate electrode. the pixel tr 23421 adjacent to the pd is a transfer tr, and one of a pair of source/drain regions constituting the pixel tr 23421 is an fd. further, at the sensor die 23411 , an interlayer dielectric is formed, and a connection hole is formed at the interlayer dielectric. at the connection hole, a connection conductor 23431 to be connected to the pixel tr 23421 and the pixel tr 23422 is formed. further, at the sensor die 23411 , a wiring layer 23433 having a wiring 23432 having a plurality of layers to be connected to respective connection conductors 23431 is formed. further, an aluminum pad 23434 which becomes an electrode for external connection is formed in the lowermost layer of the wiring layer 23433 of the sensor die 23411 . that is, at the sensor die 23411 , the aluminum pad 23434 is formed at a position closer to a joining surface 23440 to the logic die 23412 than to the wiring 23432 . the aluminum pad 23434 is used as one end of a wiring relating to input and output of signals to and from outside. further, at the sensor die 23411 , a contact 23441 to be used for electrical connection to the logic die 23412 is formed. 
the contact 23441 is connected to a contact 23451 of the logic die 23412 , and is also connected to the aluminum pad 23442 of the sensor die 23411 . then, a pad hole 23443 is formed at the sensor die 23411 so as to reach the aluminum pad 23442 from the backside (upper side) of the sensor die 23411 . the technology according to the present disclosure can be applied to the solid-state imaging device as described above. <application example to in-vivo information acquisition system> a technology (present technology) according to an embodiment of the present disclosure can be applied to various products. for example, the technology according to an embodiment of the present disclosure may be applied to an endoscopic operation system. fig. 39 is a block diagram illustrating an example of a schematic configuration of an in-vivo information acquisition system of a patient using a capsular endoscope, to which the technology (the present technology) according to an embodiment of the present disclosure is applicable. an in-vivo information acquisition system 10001 includes a capsular endoscope 10100 and an external control device 10200 . the capsular endoscope 10100 is swallowed by a patient at the time of examination. the capsular endoscope 10100 has an imaging function and a wireless communication function. the capsular endoscope 10100 sequentially captures internal images (hereinafter also referred to as in-vivo images) of an organ such as a stomach or bowels at a predetermined interval and wirelessly transmits information regarding the in-vivo images in sequence to the external control device 10200 outside of the patient while moving inside the organ by peristaltic movement or the like until spontaneously excreted. the external control device 10200 generally controls an operation of the in-vivo information acquisition system 10001 .
in addition, the external control device 10200 receives the information regarding the in-vivo images transmitted from the capsular endoscope 10100 and generates image data for displaying the in-vivo images on a display device (not illustrated) on the basis of the received information regarding the in-vivo images. in the in-vivo information acquisition system 10001 , an in-vivo image obtained by imaging an in-vivo form of a patient can be obtained at any time in this way from when the capsular endoscope 10100 is swallowed until it is excreted. configurations and functions of the capsular endoscope 10100 and the external control device 10200 will be described in more detail. the capsular endoscope 10100 includes a capsular casing 10101 . the capsular casing 10101 accommodates a light source unit 10111 , an imaging unit 10112 , an image processing unit 10113 , a wireless communication unit 10114 , a power feeding unit 10115 , a power unit 10116 , and a control unit 10117 . the light source unit 10111 is configured of, for example, a light source such as a light-emitting diode (led) and radiates light for an imaging visual field of the imaging unit 10112 . the imaging unit 10112 includes an image sensor and an optical system that includes a plurality of lenses installed in front of the image sensor. reflected light (hereinafter referred to as observation light) of light emitted to a body tissue which is an observation target is condensed by the optical system to be incident on the image sensor. in the imaging unit 10112 , the image sensor photoelectrically converts the incident observation light to generate an image signal corresponding to the observation light. the image signal generated by the imaging unit 10112 is supplied to the image processing unit 10113 .
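The capture path just described (the imaging unit converts observation light into an image signal, the image processing unit packages it, and the wireless communication unit sends it out at each imaging interval) can be sketched as a toy Python model. All class and method names below are illustrative, not taken from the text:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class CapsuleEndoscope:
    """Toy model of the capsule's capture path (illustrative names):
    imaging unit -> image processing unit -> wireless communication unit."""
    transmitted: List[bytes] = field(default_factory=list)

    def capture(self, scene: List[float]) -> List[int]:
        # imaging unit 10112: photoelectrically convert light levels (0..1)
        # into digital counts (0..255)
        return [min(255, int(v * 255)) for v in scene]

    def process(self, signal: List[int]) -> bytes:
        # image processing unit 10113: pack the samples into RAW bytes
        return bytes(signal)

    def transmit(self, raw: bytes) -> None:
        # wireless communication unit 10114: queue the frame for the
        # external control device (modulation itself is not modelled)
        self.transmitted.append(raw)

    def step(self, scene: List[float]) -> None:
        # one predetermined imaging interval: capture, process, transmit
        self.transmit(self.process(self.capture(scene)))

capsule = CapsuleEndoscope()
for frame in ([0.0, 0.5, 1.0], [0.2, 0.2, 0.2]):
    capsule.step(frame)
```

One `step` corresponds to one imaging interval; the real device interleaves this with power management and with control signals received from the external control device.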
the image processing unit 10113 is configured of a processor such as a central processing unit (cpu) or a graphics processing unit (gpu) and performs various kinds of signal processing on the image signal generated by the imaging unit 10112 . the image processing unit 10113 supplies the image signal subjected to the signal processing as raw data to the wireless communication unit 10114 . the wireless communication unit 10114 performs a predetermined process such as a modulation process on the image signal subjected to the signal processing by the image processing unit 10113 and transmits the image signal to the external control device 10200 via an antenna 10114 a. in addition, the wireless communication unit 10114 receives a control signal for driving control of the capsular endoscope 10100 from the external control device 10200 via the antenna 10114 a. the wireless communication unit 10114 supplies the control signal received from the external control device 10200 to the control unit 10117 . the power feeding unit 10115 includes a power reception antenna coil, a power generation circuit that reproduces power from a current generated in the antenna coil, a voltage boosting circuit, or the like. the power feeding unit 10115 generates power using a so-called contactless charging principle. the power unit 10116 is configured of a secondary cell and stores the power generated by the power feeding unit 10115 . in fig. 39 , an arrow or the like indicating a supply source of power from the power unit 10116 is not illustrated to avoid complexity of the drawing. however, the power stored in the power unit 10116 is supplied to the light source unit 10111 , the imaging unit 10112 , the image processing unit 10113 , the wireless communication unit 10114 , and the control unit 10117 , and thus can be used to drive these units. 
the control unit 10117 is configured of a processor such as a cpu and appropriately controls driving of the light source unit 10111 , the imaging unit 10112 , the image processing unit 10113 , the wireless communication unit 10114 , and the power feeding unit 10115 in accordance with control signals transmitted from the external control device 10200 . the external control device 10200 is configured of a processor such as a cpu or a gpu, a microcomputer in which a processor and a storage element such as a memory are mixed, a control substrate, or the like. the external control device 10200 controls an operation of the capsular endoscope 10100 by transmitting a control signal to the control unit 10117 of the capsular endoscope 10100 via an antenna 10200 a. in the capsular endoscope 10100 , for example, a radiation condition of light for an observation target in the light source unit 10111 can be changed in accordance with a control signal from the external control device 10200 . in addition, an imaging condition (for example, a frame rate or an exposure value in the imaging unit 10112 , or the like) can be changed in accordance with a control signal from the external control device 10200 . in addition, content of a process in the image processing unit 10113 or a condition (for example, a transmission interval or the number of transmitted images, or the like) in which an image signal is transmitted by the wireless communication unit 10114 may be changed in accordance with the control signal from the external control device 10200 . in addition, the external control device 10200 performs various kinds of image processing on the image signals transmitted from the capsular endoscope 10100 to generate image data for displaying the captured in-vivo images on the display device. 
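The external control device applies its various kinds of image processing to the received image signal in sequence. A minimal sketch of such a composable stage chain, with two illustrative stages (a 3-tap noise reduction and a nearest-neighbour electronic zoom; the real processes are far more elaborate, and these stage functions are assumptions, not the patent's algorithms):

```python
from typing import Callable, List, Sequence

Stage = Callable[[List[float]], List[float]]

def pipeline(stages: Sequence[Stage]) -> Stage:
    """Compose processing stages left to right, the way the external
    control device applies its successive processes to an image signal."""
    def run(signal: List[float]) -> List[float]:
        for stage in stages:
            signal = stage(signal)
        return signal
    return run

def noise_reduction(signal: List[float]) -> List[float]:
    # illustrative NR stage: 3-tap moving average with edge padding
    padded = [signal[0]] + list(signal) + [signal[-1]]
    return [(padded[i] + padded[i + 1] + padded[i + 2]) / 3
            for i in range(len(signal))]

def electronic_zoom(signal: List[float]) -> List[float]:
    # illustrative expansion stage: 2x nearest-neighbour upsampling
    return [s for s in signal for _ in range(2)]

process = pipeline([noise_reduction, electronic_zoom])
out = process([0.0, 3.0, 0.0])
```

Any number of stages (development, band enhancement, camera-shake correction, and so on) can be slotted into the same chain without changing the driver code.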
as the image processing, for example, various kinds of signal processing such as a development process (demosaic processing), high-quality processing (a band enhancement process, superresolution processing, a noise reduction (nr) process, and/or a camera-shake correction process), and/or an expansion process (electronic zoom processing) can be performed. the external control device 10200 controls driving of the display device to display the captured in-vivo image on the basis of the generated image data. alternatively, the external control device 10200 may cause a recording device (not illustrated) to record the generated image data or may cause a printing device (not illustrated) to print and output the generated image data. an example of the in-vivo information acquisition system to which the technology according to the present disclosure may be applied has been described above. the technology according to the present disclosure may be applied to the imaging unit 10112 among the configurations described above. <application example to mobile body> a technology (present technology) according to an embodiment of the present disclosure can be applied to various products. for example, the technology according to the present disclosure may also be realized as a device mounted in a mobile body of any type such as automobile, electric vehicle, hybrid electric vehicle, motorcycle, bicycle, personal mobility, airplane, drone, ship, or robot. fig. 40 is a block diagram depicting an example of schematic configuration of a vehicle control system as an example of a mobile body control system to which the technology according to an embodiment of the present disclosure can be applied. the vehicle control system 12000 includes a plurality of electronic control units connected to each other via a communication network 12001 . in the example depicted in fig. 
40 , the vehicle control system 12000 includes a driving system control unit 12010 , a body system control unit 12020 , an outside-vehicle information detecting unit 12030 , an in-vehicle information detecting unit 12040 , and an integrated control unit 12050 . in addition, a microcomputer 12051 , a sound/image output unit 12052 , and a vehicle-mounted network interface (i/f) 12053 are illustrated as a functional configuration of the integrated control unit 12050 . the driving system control unit 12010 controls the operation of devices related to the driving system of the vehicle in accordance with various kinds of programs. for example, the driving system control unit 12010 functions as a control device for a driving force generating device for generating the driving force of the vehicle, such as an internal combustion engine, a driving motor, or the like, a driving force transmitting mechanism for transmitting the driving force to wheels, a steering mechanism for adjusting the steering angle of the vehicle, a braking device for generating the braking force of the vehicle, and the like. the body system control unit 12020 controls the operation of various kinds of devices provided to a vehicle body in accordance with various kinds of programs. for example, the body system control unit 12020 functions as a control device for a keyless entry system, a smart key system, a power window device, or various kinds of lamps such as a headlamp, a backup lamp, a brake lamp, a turn signal, a fog lamp, or the like. in this case, radio waves transmitted from a mobile device as an alternative to a key or signals of various kinds of switches can be input to the body system control unit 12020 . the body system control unit 12020 receives these input radio waves or signals, and controls a door lock device, the power window device, the lamps, or the like of the vehicle. 
the outside-vehicle information detecting unit 12030 detects information about the outside of the vehicle including the vehicle control system 12000 . for example, the outside-vehicle information detecting unit 12030 is connected with an imaging unit 12031 . the outside-vehicle information detecting unit 12030 causes the imaging unit 12031 to capture an image of the outside of the vehicle, and receives the captured image. on the basis of the received image, the outside-vehicle information detecting unit 12030 may perform processing of detecting an object such as a human, a vehicle, an obstacle, a sign, a character on a road surface, or the like, or processing of detecting a distance thereto. the imaging unit 12031 is an optical sensor that receives light and outputs an electric signal corresponding to the received light amount. the imaging unit 12031 can output the electric signal as an image, or can output the electric signal as information about a measured distance. in addition, the light received by the imaging unit 12031 may be visible light, or may be invisible light such as infrared rays or the like. the in-vehicle information detecting unit 12040 detects information about the inside of the vehicle. the in-vehicle information detecting unit 12040 is, for example, connected with a driver state detecting unit 12041 that detects the state of a driver. the driver state detecting unit 12041 , for example, includes a camera that images the driver. on the basis of detection information input from the driver state detecting unit 12041 , the in-vehicle information detecting unit 12040 may calculate a degree of fatigue of the driver or a degree of concentration of the driver, or may determine whether the driver is dozing.
the microcomputer 12051 can calculate a control target value for the driving force generating device, the steering mechanism, or the braking device on the basis of the information about the inside or outside of the vehicle which is obtained by the outside-vehicle information detecting unit 12030 or the in-vehicle information detecting unit 12040 , and output a control command to the driving system control unit 12010 . for example, the microcomputer 12051 can perform cooperative control intended to implement functions of an advanced driver assistance system (adas) whose functions include collision avoidance or shock mitigation for the vehicle, following driving based on a following distance, vehicle speed maintaining driving, a warning of collision of the vehicle, a warning of deviation of the vehicle from a lane, or the like. in addition, the microcomputer 12051 can perform cooperative control intended for automatic driving, which makes the vehicle travel autonomously without depending on the operation of the driver, or the like, by controlling the driving force generating device, the steering mechanism, the braking device, or the like on the basis of the information about the surroundings of the vehicle which is obtained by the outside-vehicle information detecting unit 12030 or the in-vehicle information detecting unit 12040 . in addition, the microcomputer 12051 can output a control command to the body system control unit 12020 on the basis of the information about the outside of the vehicle which is obtained by the outside-vehicle information detecting unit 12030 . for example, the microcomputer 12051 can perform cooperative control intended to prevent a glare by controlling the headlamp so as to change from a high beam to a low beam, or the like, in accordance with the position of a preceding vehicle or an oncoming vehicle detected by the outside-vehicle information detecting unit 12030 .
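As a concrete illustration of computing a control target value for following driving based on a following distance, a constant-time-headway rule is a common textbook approach. The patent does not specify the control law; the function name, gains, and minimum gap below are illustrative assumptions:

```python
def control_target_speed(own_speed_mps: float,
                         gap_m: float,
                         lead_speed_mps: float,
                         headway_s: float = 1.5,
                         gain: float = 0.5) -> float:
    """Constant-time-headway following rule (illustrative parameters):
    track the preceding vehicle's speed while nudging the measured gap
    toward own_speed * headway_s, with a small minimum gap."""
    desired_gap_m = max(own_speed_mps * headway_s, 2.0)
    # close (or open) the gap proportionally, on top of matching lead speed
    target = lead_speed_mps + gain * (gap_m - desired_gap_m) / headway_s
    return max(0.0, target)  # never command a negative speed
```

At the desired gap the rule simply matches the lead vehicle's speed; when the gap shrinks the target speed drops, which is the behaviour a microcomputer like 12051 would translate into brake or throttle commands.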
the sound/image output unit 12052 transmits an output signal of at least one of a sound or an image to an output device capable of visually or auditorily notifying an occupant of the vehicle or the outside of the vehicle of information. in the example of fig. 40 , an audio speaker 12061 , a display unit 12062 , and an instrument panel 12063 are illustrated as the output device. the display unit 12062 may, for example, include at least one of an on-board display or a head-up display. fig. 41 is a diagram depicting an example of the installation position of the imaging unit 12031 . in fig. 41 , the vehicle 12100 includes imaging units 12101 , 12102 , 12103 , 12104 , and 12105 as the imaging unit 12031 . the imaging units 12101 , 12102 , 12103 , 12104 , and 12105 are, for example, disposed at positions on a front nose, sideview mirrors, a rear bumper, and a back door of the vehicle 12100 as well as a position on an upper portion of a windshield within the interior of the vehicle. the imaging unit 12101 provided to the front nose and the imaging unit 12105 provided to the upper portion of the windshield within the interior of the vehicle obtain mainly an image of the front of the vehicle 12100 . the imaging units 12102 and 12103 provided to the sideview mirrors obtain mainly an image of the sides of the vehicle 12100 . the imaging unit 12104 provided to the rear bumper or the back door obtains mainly an image of the rear of the vehicle 12100 . the images of the front obtained by the imaging units 12101 and 12105 are used mainly to detect a preceding vehicle, a pedestrian, an obstacle, a signal, a traffic sign, a lane, or the like. incidentally, fig. 41 depicts an example of imaging ranges of the imaging units 12101 to 12104 . an imaging range 12111 represents the imaging range of the imaging unit 12101 provided to the front nose. 
imaging ranges 12112 and 12113 respectively represent the imaging ranges of the imaging units 12102 and 12103 provided to the sideview mirrors. an imaging range 12114 represents the imaging range of the imaging unit 12104 provided to the rear bumper or the back door. a bird's-eye image of the vehicle 12100 as viewed from above is obtained by superimposing image data imaged by the imaging units 12101 to 12104 , for example. at least one of the imaging units 12101 to 12104 may have a function of obtaining distance information. for example, at least one of the imaging units 12101 to 12104 may be a stereo camera constituted of a plurality of imaging elements, or may be an imaging element having pixels for phase difference detection. for example, the microcomputer 12051 can determine a distance to each three-dimensional object within the imaging ranges 12111 to 12114 and a temporal change in the distance (relative speed with respect to the vehicle 12100 ) on the basis of the distance information obtained from the imaging units 12101 to 12104 , and thereby extract, as a preceding vehicle, a nearest three-dimensional object in particular that is present on a traveling path of the vehicle 12100 and which travels in substantially the same direction as the vehicle 12100 at a predetermined speed (for example, equal to or more than 0 km/hour). further, the microcomputer 12051 can set a following distance to be maintained in front of a preceding vehicle in advance, and perform automatic brake control (including following stop control), automatic acceleration control (including following start control), or the like. it is thus possible to perform cooperative control intended for automatic driving that makes the vehicle travel autonomously without depending on the operation of the driver or the like. 
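The extraction of a preceding vehicle from per-object distance information, and the derivation of relative speed from the temporal change in distance, can be sketched as follows. Field names and thresholds are illustrative assumptions, not terms from the text:

```python
from typing import List, Optional

def relative_speed_mps(dist_prev_m: float, dist_now_m: float, dt_s: float) -> float:
    # temporal change in measured distance = speed relative to own vehicle
    # (negative when the gap is closing)
    return (dist_now_m - dist_prev_m) / dt_s

def extract_preceding(objects: List[dict],
                      heading_tol_deg: float = 10.0,
                      min_speed_kmh: float = 0.0) -> Optional[dict]:
    """Nearest on-path object travelling in substantially the same
    direction at at least the threshold speed (fields are illustrative)."""
    candidates = [o for o in objects
                  if o["on_path"]
                  and abs(o["heading_deg"]) <= heading_tol_deg
                  and o["speed_kmh"] >= min_speed_kmh]
    return min(candidates, key=lambda o: o["dist_m"], default=None)

detections = [
    {"on_path": True,  "heading_deg": 2.0, "speed_kmh": 40.0, "dist_m": 35.0},
    {"on_path": True,  "heading_deg": 1.0, "speed_kmh": 42.0, "dist_m": 20.0},
    {"on_path": False, "heading_deg": 0.0, "speed_kmh": 50.0, "dist_m": 10.0},
]
lead = extract_preceding(detections)  # nearest on-path object, at 20 m
```

The off-path object at 10 m is ignored even though it is closest, matching the text's requirement that the preceding vehicle lie on the traveling path.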
for example, the microcomputer 12051 can classify three-dimensional object data on three-dimensional objects into three-dimensional object data of a two-wheeled vehicle, a standard-sized vehicle, a large-sized vehicle, a pedestrian, a utility pole, and other three-dimensional objects on the basis of the distance information obtained from the imaging units 12101 to 12104 , extract the classified three-dimensional object data, and use the extracted three-dimensional object data for automatic avoidance of an obstacle. for example, the microcomputer 12051 identifies obstacles around the vehicle 12100 as obstacles that the driver of the vehicle 12100 can recognize visually and obstacles that are difficult for the driver of the vehicle 12100 to recognize visually. then, the microcomputer 12051 determines a collision risk indicating a risk of collision with each obstacle. in a situation in which the collision risk is equal to or higher than a set value and there is thus a possibility of collision, the microcomputer 12051 outputs a warning to the driver via the audio speaker 12061 or the display unit 12062 , and performs forced deceleration or avoidance steering via the driving system control unit 12010 . the microcomputer 12051 can thereby assist in driving to avoid collision. at least one of the imaging units 12101 to 12104 may be an infrared camera that detects infrared rays. the microcomputer 12051 can, for example, recognize a pedestrian by determining whether or not there is a pedestrian in imaged images of the imaging units 12101 to 12104 . such recognition of a pedestrian is, for example, performed by a procedure of extracting characteristic points in the imaged images of the imaging units 12101 to 12104 as infrared cameras and a procedure of determining whether or not it is the pedestrian by performing pattern matching processing on a series of characteristic points representing the contour of the object. 
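One common way to turn a gap and a closing speed into the collision-risk decision described above is a time-to-collision (TTC) threshold. This is a hedged sketch, not the patent's actual method; the thresholds and action names are illustrative:

```python
def time_to_collision_s(gap_m: float, closing_speed_mps: float) -> float:
    # closing_speed > 0 means the gap is shrinking; otherwise no collision
    return float("inf") if closing_speed_mps <= 0.0 else gap_m / closing_speed_mps

def collision_response(gap_m: float,
                       closing_speed_mps: float,
                       warn_ttc_s: float = 3.0,
                       brake_ttc_s: float = 1.5) -> str:
    """Map TTC to the two actions in the text: warn the driver first, then
    force deceleration when a collision is imminent (thresholds illustrative)."""
    ttc = time_to_collision_s(gap_m, closing_speed_mps)
    if ttc <= brake_ttc_s:
        return "forced_deceleration"
    if ttc <= warn_ttc_s:
        return "warn_driver"
    return "no_action"
```

The two-tier threshold mirrors the text's escalation: a warning via the audio speaker or display first, forced deceleration or avoidance steering only when the risk is at or above the set value.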
when the microcomputer 12051 determines that there is a pedestrian in the imaged images of the imaging units 12101 to 12104 , and thus recognizes the pedestrian, the sound/image output unit 12052 controls the display unit 12062 so that a square contour line for emphasis is displayed so as to be superimposed on the recognized pedestrian. in addition, the sound/image output unit 12052 may also control the display unit 12062 so that an icon or the like representing the pedestrian is displayed at a desired position. the above description describes an example of a vehicle control system to which the technology according to the present disclosure can be applied. the technology according to the present disclosure may be applied to the imaging unit 12031 or the like among the configurations described above. note that the embodiments of the present technology are not limited to the above-described embodiments, and various changes can be made within a scope not deviating from the gist of the present technology. additionally, the present technology may also be configured as below. (1) a solid-state imaging device including: a photoelectric converting unit configured to perform photoelectric conversion; and a pn junction region including a p-type region and an n-type region on a side of a light incident surface of the photoelectric converting unit. (2) the solid-state imaging device according to (1), in which, on a vertical cross-section, the pn junction region is formed at three sides including a side of the light incident surface among four sides enclosing the photoelectric converting unit. (3) the solid-state imaging device according to (1), further including: a trench that penetrates through a semiconductor substrate in a depth direction and is formed between the photoelectric converting units each formed at adjacent pixels, in which the pn junction region is also provided on a side wall of the trench. 
(4) the solid-state imaging device according to (3), in which the pn junction region formed on the side wall of the trench and the pn junction region formed on the side of the light incident surface of the photoelectric converting unit are made a continuous region. (5) the solid-state imaging device according to any one of (1) to (4), in which the photoelectric converting unit is an n-type region, and concentration of n-type impurities in the n-type region of the pn junction region is the same level as or higher than concentration of n-type impurities of the photoelectric converting unit. (6) the solid-state imaging device according to any one of (1) to (5), in which an active region adjacent to the photoelectric converting unit is a p-type region, and concentration of p-type impurities in the p-type region of the pn junction region is higher than concentration of p-type impurities of the active region. (7) the solid-state imaging device according to any one of (1) to (6), in which concentration of n-type impurities of the n-type region is between 1e15 cm-3 and 1e17 cm-3. (8) the solid-state imaging device according to any one of (1) to (7), in which concentration of p-type impurities of the p-type region is between 1e16 cm-3 and 1e17 cm-3. (9) the solid-state imaging device according to any one of (1) to (8), in which a plurality of vertical transistor trenches is provided at a transfer transistor, and lengths of the plurality of vertical transistor trenches are different. (10) the solid-state imaging device according to (9), in which at least one vertical transistor trench among the plurality of vertical transistor trenches is in contact with the pn junction region. (11) the solid-state imaging device according to (9), in which at least one vertical transistor trench among the plurality of vertical transistor trenches is formed to a depth equal to or greater than ½ of the depth of the photoelectric converting unit.
(12) the solid-state imaging device according to any one of (1) to (11), in which the p-type region and the n-type region are solid-phase diffused layers. (13) the solid-state imaging device according to any one of (1) to (12), in which the pn junction region is formed so as to cover a backside of the photoelectric converting unit except part of the backside. (14) the solid-state imaging device according to any one of (1) to (12), in which the pn junction region is discontinuously formed on a backside of the photoelectric converting unit. (15) the solid-state imaging device according to any one of (1) to (14), in which the p-type region and the n-type region are regions formed by solid-phase diffusion being performed at a cavity formed using a silicon on nothing (son) technology. (16) electronic equipment on which a solid-state imaging device is mounted, in which the solid-state imaging device includes a photoelectric converting unit configured to perform photoelectric conversion, and a pn junction region including a p-type region and an n-type region on a side of a light incident surface of the photoelectric converting unit.
reference signs list
10 imaging device
12 imaging element
41 pixel array portion
50 pixel
70 si substrate
71 pd
72 p-type region
74 light shielding film
76 ocl
77 active region
75 backside si interface
78 sti
81 vertical transistor trench
82 dti
83 p-type solid-phase diffused layer
84 n-type solid-phase diffused layer
85 side wall film
86 filling material
101 layer
121 p-type region
122 n-type region
131 mos capacitor
151 well contact portion
152 contact
153 cu wiring
211 n-type region
231 separation prevention region
301 al pad take-out portion
302 al pad
303 solid-phase diffused trench
311 peripheral circuit portion
312 p+ diffused layer
313 pwell region
314 backside contact
315 hole layer
321 peripheral circuit portion
332 pwell region
371 peripheral circuit portion
372 boundary portion
411 fd wiring
412 conversion efficiency switching transistor
413 mos capacitor
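As a numeric aside on clauses (7) and (8): the quoted doping ranges can be plugged into the textbook built-in-potential formula for an abrupt silicon pn junction, V_bi = (kT/q) ln(N_A N_D / n_i^2). This calculation is not part of the patent; it only illustrates the order of magnitude of the junction potential implied by those concentrations:

```python
import math

K_B = 1.380649e-23       # Boltzmann constant, J/K
Q_E = 1.602176634e-19    # elementary charge, C
N_I_SI = 9.65e9          # intrinsic carrier density of Si at 300 K, cm^-3 (approx.)

def built_in_potential_v(n_a_cm3: float, n_d_cm3: float, temp_k: float = 300.0) -> float:
    """Built-in potential of an abrupt silicon pn junction near 300 K:
    V_bi = (kT/q) * ln(N_A * N_D / n_i^2)."""
    v_t = K_B * temp_k / Q_E                 # thermal voltage, ~25.9 mV at 300 K
    return v_t * math.log(n_a_cm3 * n_d_cm3 / N_I_SI**2)

# ends of the doping ranges quoted in clauses (7) and (8):
v_low = built_in_potential_v(n_a_cm3=1e16, n_d_cm3=1e15)    # ~0.66 V
v_high = built_in_potential_v(n_a_cm3=1e17, n_d_cm3=1e17)   # ~0.84 V
```

Across the stated ranges the built-in potential stays in the familiar 0.6 V to 0.9 V band for silicon, which is consistent with the pn junction region acting as a conventional hole-supplying junction at the light incident surface.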
relevant_id: 119-904-419-583-135
earliest_claim_jurisdiction: US
jurisdiction: [ "WO" ]
ipcr_codes_str: B65G47/84
earliest_claim_date: 1996-09-11T00:00:00
earliest_claim_year: 1996
classifications_ipcr_list_first_three_chars_list: [ "B65" ]
transferring bodies between two spaced locations
a method and apparatus (9) for picking up semi-rigid to non-rigid bodies (10) having slippery surfaces and irregular shapes from a bin or conveyor (12) and transferring the bodies one at a time to a reception conveyor (64). pick-up devices (18) subjected to a vacuum move up and down as they travel in a circle to pick up a body (10) in a first location, transport the body to a second location, and deposit the body (10) in a reception conveyor (64).
claims 1. a method of transferring objects from a first location to a second location comprising: locating an object pick-up means above the first location containing at least one object, moving the object pick-up means downwardly toward and in close proximity to the object in the first location, applying a vacuum to said pick-up means to move the object into engagement with the pick-up means and retain the object on said pick-up means, moving the pick-up means and object retained thereon during the application of vacuum to said pick-up means upwardly above said first location, transporting the pick-up means and object retained thereon during application of vacuum to said pick-up means from above said first location to a position above the second location, and releasing the object from the pick-up means whereby the object moves away from the pick-up means to the second location. 2. the method of claim 3 including: randomly arranging a plurality of objects in intermingled and potentially overlapping relative relationship in the first location. 3. the method of claim 1 including: randomly arranging a plurality of objects in intermingled and potentially overlapping relative relationship in the first location. 4. the method of claim 1 including: placing a plurality of objects in a bin residing in the first location. 5. the method of claim 1 including: placing a plurality of objects on a conveyor located in the first position. 6. the method of claim 1 wherein: the pick-up means and object retained thereon are transported in a generally circular path from above the first location to a position above the second location. 7. the method of claim 1 including: moving the object in the second location from the second location to a third location. 8. the method of claim 7 wherein: the object is moved along a path away from the first location at a continuous speed. 9.
the method of claim 1 including: cleaning the pick-up means after the object has been released from the pick-up means. 10. the method of claim 9 wherein: the pick-up means is cleaned by subjecting the pick-up means with flowing liquid dispensed from a nozzle aligned with the pick-up means. 11. the method of claim 1 wherein: the object is released from the pick-up means by subjecting the object retained on the pick-up means with air under pressure whereby the object is placed in the second location. 12. a method of transferring bodies having irregular shapes, semi-rigid to non-rigid structure and slippery surfaces one at a time in a singulating manner from a first means for holding the bodies to a conveyor comprising: randomly arranging a plurality of bodies in intermingled relative relationship in the first means, locating a pick-up means for picking up one body from the first means above the first means and bodies in the first means, moving the pick-up means in a downward direction to close proximity to one body in the first means, applying a vacuum to the pick-up means to move the one body into engagement with the pick-up means and retain the body on the pick-up means, moving the pick-up means and the one body retained thereon during application of vacuum to the pick-up means in an upward direction to a first location above the first means, transporting the pick-up means and the one body retained on the pick-up means during application of vacuum to the pick-up means from the first location to a second location above the conveyor, releasing the one body from the pick-up means to allow the body to fall away from the pick-up means onto the conveyor, and moving the conveyor to carry the one body to a third location remote from the bin. 13.
the method of claim 12 wherein: the pick-up means for picking up one body and the one body retained thereon are transported in a generally circular path from the first location above the first means to the second location above the conveyor. 14. the method of claim 12 including: moving the conveyor at a continuous speed to locate bodies released from the pick-up means in spaced relation along the conveyor. 15. the method of claim 12 including: cleaning the pick-up means for picking up one body after a plurality of bodies have been released from the pick-up means. 16. the method of claim 12 wherein: the pick-up means for picking up one body is cleaned by subjecting the pick-up means with flowing liquid. 17. the method of claim 12 wherein: the one body is released from the pick-up means for picking up one body by subjecting the one body retained on the pick-up means with air under pressure whereby the one body is placed on the conveyor. 18. an apparatus for transferring objects from a first location to a second location comprising: a frame, first means mounted on the frame for holding a plurality of objects, second means for moving objects away from the first means, third means mounted on the frame for picking up objects from the first means and depositing the objects on the second means, said third means including a movable member, means supporting the member on the frame for movement relative to the first and second means, drive means for moving the member, at least one object pick-up mechanism mounted on the member operable to pick up an object in the first means, transport the object to a location above the second means and release the object thereby allowing the object to fall onto the second means. 19. the apparatus of claim 18 wherein: the first means is a bin for holding a plurality of objects. 20. the apparatus of claim 18 wherein: the first means is a conveyor for holding a plurality of objects. 21.
the apparatus of claim 18 wherein: the second means is a conveyor for sequentially receiving objects and carrying the objects to a location remote from the first means. 22. the apparatus of claim 18 wherein: the third means has a support mounted on the frame, said support having a bottom surface and a chamber open to the bottom surface, said member being located in engagement with said bottom surface, an object pick-up device mounted on the member having a lower open end and a passage open to the chamber, and means to apply a vacuum to the chamber whereby flowing air is drawn through the lower open end and passage of the pick-up device, said flowing air and vacuum being operable to move an object into engagement with the pick-up device and retain the object on the pick-up device. 23. the apparatus of claim 22 wherein: the pick-up device comprises a first tube having a passage open to the chamber mounted on the member, a second tube having a passage open to the passage in the first tube located in telescopic relationship relative to the first tube, a receptacle mounted on the second tube, said receptacle having an opening in communication with the passage in the second tube, and means to move the second tube relative to the first tube during movement of the member so that the receptacle moves downwardly in close proximity to an object in the first means to allow the flowing air and vacuum to pick up an object and then moves upward to transport the object to the second means. 24. the apparatus of claim 23 wherein: the means to move the second tube relative to the first tube comprises, a cam track mounted on the frame, and a cam follower engageable with said track mounted on the first tube.
method and apparatus for transferring bodies from a bin or conveyor to reception line field of the invention this invention relates to a method and apparatus for transferring bodies of semi-rigid to non-rigid structure, slippery surface, and irregular shape from a bin or conveyor to a reception line. the apparatus is particularly suitable for transferring fresh raw meat, poultry, and fish located in a bin or conveyor to a reception conveyor. background of the invention food processing facilities commonly have operations in which bodies are delivered to a bulk feed bin or a conveyor in random and overlapping fashion. the bodies are then manually removed from a bin or conveyor one at a time and placed into an individual compartment on an exit conveyor. this is a time consuming and tiring repetitive manual task. the exit conveyor transports the individual bodies to a packaging or weighing operation. there are no prior art body handling machines for automatically picking up, transporting and releasing bodies of the type described herein. the closest prior art is u.s. patent no. 3,589,531 to poviacs (1971). although machines of this type have been quite satisfactory in operation, they are, nevertheless, subject to inherent limitations including being unable to perform consistently without plugging, being unable to withstand a harsh food processing environment, and not being suitable for the sanitary requirements of food processing. other prior art article-feeding machines, u.s. patent no. 3,941,233 to aluola and rueff (1976) and u.s. patent no. 5,381,884 to spatafora and strazzori (1995), need to have the product substantially aligned on the first conveyor or carrier before being transferred, via suction, to a second conveyor. summary of the invention the invention relates to a method and apparatus for transferring bodies of semi-rigid to non-rigid structure, slippery surface, and irregular shape from a bin or conveyor to a reception conveyor. 
the bodies can be arranged in an intermingled and overlapping fashion in the bulk feed bin or conveyor and are removed one at a time by the apparatus. an example of the bodies are food products, such as poultry meat. the apparatus has a frame supported on a floor. a bin or feed conveyor is mounted on the frame for accommodating a plurality of bodies, herein called objects, in a first location. object pick-up devices mounted on a rotatable disk and moving in a circular path are operable to pick up one object from the first location and transfer the object to a second location above a reception conveyor. the object is released from the pick-up device and deposited in the reception conveyor. the reception conveyor moves the object to a remote location for further processing, packaging, weighing and labeling procedures. the preferred embodiment of the apparatus has a frame supporting a base. a disk is rotatably supported on rollers connected to the frame. the disk has a flat surface located adjacent a flat bottom surface of the base. a drive unit mounted on the frame rotates the disk at a selected speed. object pick-up devices are secured to the disk in a circle arrangement adjacent the outer edge of the disk. each object pick-up device has a fixed tube and a movable tube telescoped on the fixed tube. the movable tube is attached to a receptacle having an opening to allow air to flow into the pick-up device. the base has an arcuate groove or manifold open to the disk and aligned with the tubes of the object pick-up devices. a vacuum pump draws air out of the groove and pick-up device during the time that the pick-up devices are aligned with the groove. the air flowing through the opening in the receptacle draws an object from the first location into engagement with the receptacle. the vacuum force established by the vacuum pump retains the object on the receptacle until the vacuum is cut off. 
a rail assembly mounted on the frame surrounds the object pick-up devices and has cam tracks that control the up and down movements of the movable tubes and receptacles of the pick-up devices as they move in a circular path. each pick-up device has a bushing or cam follower engageable with the track so that the movable tube of the pick-up devices follows the route of the tracks. the tracks have a downwardly curved section above the first location which causes the receptacles to be moved down into close proximity with an object in the first location and move up away from the first location as the pick-up device is moved to the second location. the tracks have an upwardly directed section at the second location to move the movable tube of the pick-up device relative to a rod to locate the rod through the opening in the receptacle to mechanically separate the object from the receptacle and deposit the object into a reception conveyor. the reception conveyor has an endless belt and pockets to accommodate the objects. the belt is moved at a speed substantially the same as the circumferential speed of the pick-up device whereby an object is deposited in each pocket of the reception conveyor. the reception conveyor moves the objects to a remote location for further processing, such as packaging, weighing and labeling. the pick-up devices are cleaned with water or a cleaning solution discharged from a nozzle mounted on the base. water under pressure is supplied to the nozzle. the nozzle directs the water into the tubes and through the opening in the receptacles to clean the insides of the pick-up devices. the disk is rotated with the drive unit so that the pick-up devices are sequentially cleaned. 
the objects and advantages of the transfer apparatus and method of my invention include: (a) to provide an automated machine for transferring bodies of semi-rigid to non-rigid structure, slippery surface, and irregular shape from a bin or conveyor to a reception line, and to dispose them on these lines at constant but different spacing; (b) to provide a body transfer apparatus whose production allows for a convenient and extremely rapid and economical construction; (c) to provide a body transfer apparatus that can be built in many different sizes to adapt to bodies of different sizes; (d) to provide a body transfer apparatus that can be completely supported from above; (e) to provide a body transfer apparatus that will not plug and halt operation; (f) to provide a body transfer apparatus that can be easily cleaned and sanitized; (g) to provide a body transfer apparatus that can be easily assembled and disassembled; and (h) to provide a body transfer apparatus that can be easily installed into a present equipment configuration. still further objects and advantages will become apparent from a consideration of the ensuing description and accompanying drawings. 
description of the drawings figure 1 is a perspective view of the apparatus of the invention for transferring bodies of semi-rigid to non-rigid structure from a first location to a second location; figure 2 is an enlarged front elevational view thereof; figure 3 is a top plan view partly sectioned thereof; figure 4 is an enlarged sectional view taken along line 4-4 of figure 3; figure 5 is an enlarged sectional view taken along line 5-5 of figure 3; figure 6 is a sectional view taken along line 6-6 of figure 4; figure 7 is an enlarged sectional view taken along line 7-7 of figure 4; figure 8 is an enlarged sectional view taken along line 8-8 of figure 4; figure 9 is an enlarged side elevational view of the apparatus showing the body pick-up operation; and figure 10 is an enlarged side elevational view of the apparatus showing the body release operation. description of preferred embodiment of invention the object transfer apparatus, indicated generally in figure 1 at 9, operates to pick up an object 10 from a first location, transport the object 10 to a second location and deposit the object in the second location. the object is moved from the second location to a selected third location remote from the apparatus. the object 10 described herein is a food product having an irregular shaped body of semi-rigid to non-rigid structure with a generally slippery outer surface. an example of the product is poultry meat. other types of non-food products and food products can be handled with the transfer apparatus 9. apparatus 9 has an object recirculating conveyor 12 for receiving objects 10 in a first location from an object cutting line of a food processing system. conveyor 12 has a pair of endless belts 12b trained around separate drive rollers 12a operable to move the upper runs of the belts in opposite directions as shown by the arrows 12c. 
a curved deflector 11 adjacent the top runs of belts 12b guides objects to adjacent belts to recirculate objects in the first location. objects 10 are randomly arranged in intermingled and overlapping relative relationship in the first location. objects 10 can be directed into a bin or holding structure which provides the first location for the objects. a conveyor or chute can deliver the objects to the bin. deflector 11 and belts 12b form a bin-like structure for retaining objects in the first location. an overflow pan 13 is associated with conveyor 12 to channel excess objects 10 if overfilling of the conveyor 12 occurs. pan 13 is a u-shaped chute which directs objects away from apparatus 9 to a location where the objects can be recycled in the processing system. apparatus 9 has a stationary frame, indicated generally at 15, comprising a pair of inverted tubular u-shaped frame members 38. each u-shaped member 38 has a pair of upright legs and a generally horizontal cross tube or top member. the cross tubes of the u-shaped members are disposed normal to each other. the lower ends of the legs rest on the floor or support for apparatus 9. an object transfer assembly 30 rotatably mounted on the legs of frame 15 operates to pick up objects 10 from conveyor 12 and transport the objects to an exit conveyor 60. transfer assembly 30 has a cylindrical base 34 suspended from cross tubes 38 with a plurality of vertical bolts or rods 47. base 34 has a flat bottom surface. a rigid disk 32 is positioned in surface engagement with the flat bottom surface of base 34. an upright shaft 40 is fixed to the center of disk 32 and journaled on base 34 and the cross tubes of frame 15. as shown in figure 4, a drive unit 41 is operatively connected to shaft 40 to turn disk 32 about the upright axis of shaft 40. a bolt 42 connects a sleeve secured to shaft 40 to disk 32. disk 32 rides on rollers 50 journaled on studs 48 that are mounted on the legs of frame 15. 
washers 52 connected to inner ends of studs 48 maintain rollers 50 on studs 48. studs 48 are secured to sleeves 53 located about the legs of frame 15. the vertical positions of sleeves 53 on the legs of frame 15 are adjustable to position disk 32 in surface engagement with the flat bottom surface of base 34. a plurality of object pick-up units 18 are mounted on disk 32. the pick-up units 18 are arranged in a circle adjacent the outer circular peripheral edge of disk 32. each pick-up unit 18 has a cylindrical tube 19 surrounded with a bushing 16. the upper end of tube 19 is threaded into a hole in disk 32. perpendicular to the bottom of tube 19 and bisecting its circumference in two equal parts is a support bar 20. bar 20 has a length greater than the outside diameter of tube 19 so that it retains bushing 16 on tube 19. as shown in figure 7, the width of bar 20 is less than the diameter of the passage of tube 19 so that air flows past bar 20 into the passage of tube 19. a plunge rod 22, shown as a solid cylindrical rod, is fixed with its top horizontal center point joined to the midpoint of bar 20. a cylindrical tube 14 is telescoped on bushing 16 for sliding up and down movements. tube 14 has a lower portion that extends below the bottom of tube 19. an object receptacle 24 is secured to the lower end of tube 14. receptacle 24 has a cone shaped bottom surface and a vertical orifice or passage 24a open to the interior of the passage of tube 14 and the bottom of receptacle 24. passage 24a is smaller in diameter at its lower end than at its upper end. the smaller diameter extends upward to about the middle of passage 24a. clamp 25 is a rectangular member that is formed to fit in a groove on the outside of tube 14 and mounted on tube 14. a short shaft 26, shown as a solid rod, is secured to and extends outwardly from clamp 25. a cylindrical bushing 27 is rotatably carried on shaft 26. a stop washer 28 attached to shaft 26 retains bushing 27 on shaft 26. 
clamp 25 has an inwardly directed arm 29 having a vertical hole accommodating a guide post 30. the upper end of post 30 is secured to disk 32. post 30 prevents rotation of clamp 25 on tube 14 and allows vertical up and down movement of clamp 25 and tube 14 relative to base 34. bushing 27 is a cam follower that rides on a rail assembly 54 to control the up and down movements of receptacle 24 as the pick-up unit 18 moves in its circular path. rail assembly 54 comprises circular rods or track members having inwardly located beads. the rods are vertically spaced from each other and serve as tracks or guides for bushing 27. rail assembly 54 is mounted on frame 15 with channel brackets and sleeves 56. set screws secure sleeves 56 to frame 15. the set screws and sleeves 56 permit vertical adjustment of rail assembly 54 thereby adjusting the upper and lower locations of receptacle 24. as shown in figure 9, rail assembly 54 has a downwardly curved section 54a located about the first location containing objects 10 on conveyor 12. as shown in figure 10, rail assembly 54 has a short upwardly curved section 54b above the second location for accommodating objects 10 in exit conveyor 60. returning to figure 2, rail assembly 54 inclines upwardly from section 54a to section 54b to elevate objects from conveyor 12 to a position above reception conveyor 60. as shown in figure 6, base 34 has an arcuate groove 44 or vacuum manifold open to the upper ends of tubes 19. groove 44 has an arcuate extent of about 150 degrees from above the first location to above the second location. a pipe or conduit 36 mounted on base 34 is open to groove 44. as shown in figure 4, conduit 36 is connected with a line 58 to a tank and vacuum pump operable to draw air from groove 44 and pick-up devices 18 that move under groove 44. 
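the cooperation of cam sections 54a and 54b with the roughly 150-degree vacuum groove 44 can be sketched as functions of the disk angle. this is only an illustrative model: the angular positions, travel distance, and cosine ramp shape are assumptions for illustration, not dimensions from the specification; only the 150-degree groove extent comes from the text above.

```python
import math

# Illustrative model of one pick-up unit's cycle as disk 32 rotates.
# Angles are in degrees; 0 deg is taken (by assumption) as the start of
# vacuum groove 44, which the text says spans about 150 degrees from
# above the pick-up (first) location to above the release (second) location.

VACUUM_ARC = 150.0      # arcuate extent of groove 44 (from the text)
PICKUP_ANGLE = 10.0     # assumed center of downward cam section 54a
RELEASE_ANGLE = 140.0   # assumed center of upward cam section 54b
HALF_WIDTH = 15.0       # assumed angular half-width of the curved sections

def vacuum_on(theta):
    """Vacuum acts only while tube 19 is aligned with groove 44."""
    return 0.0 <= theta % 360.0 <= VACUUM_ARC

def receptacle_height(theta, travel=0.05):
    """Displacement of receptacle 24 from its rest level (meters, assumed).

    Negative near section 54a (dip toward an object), positive near
    section 54b (rise so rod 22 ejects the object).  A cosine ramp is
    used purely for smoothness of the sketch.
    """
    t = theta % 360.0
    for center, sign in ((PICKUP_ANGLE, -1.0), (RELEASE_ANGLE, +1.0)):
        d = abs(t - center)
        if d < HALF_WIDTH:
            return sign * travel * 0.5 * (1 + math.cos(math.pi * d / HALF_WIDTH))
    return 0.0
```

at the assumed pick-up angle the vacuum is on and the receptacle is low; past the 150-degree mark the vacuum is cut off, matching the sequence described above.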
the vacuum force and air moving through port 24a of receptacle 24 pick up and hold objects on receptacle 24 as it moves from the pick-up first location to the discharge second location onto conveyor 60. the exit or reception conveyor 60 has a flexible closed loop belt 64 located about rollers 62. belt 64 has side walls 69 and transverse flat members or projections 66 extended between the side walls 69 to provide pockets for accommodating objects. belt 64 is driven at an approximately constant speed equal to the peripheral speed of the pick-up devices 18 so that the pick-up devices 18 deposit one object in each pocket. the conveyor 60 moves the objects to a third location remote from the apparatus for packaging, weighing, labeling procedures. as shown in figures 3 and 6, an air nozzle 72 and a water nozzle 70 are mounted on base 34 over separate holes through the base. the holes are aligned with the passage of tubes 19 as the pick-up devices 18 turn relative to base 34. air nozzle 72 is connected to a clean air supply for applying air under pressure downward toward the pick-up device 18. water nozzle 70 is connected to a clean water supply (not shown) or a cleaning solution supply (not shown) mounted on base 34 in communication with a hole aligned with pick-up devices 18. clean water can also be supplied to groove 44 to clean the inside of base 34 and the pick-up devices. in use of apparatus 9 the drive unit turns disk 32 to move the pick-up devices 18 in a circular path. the vacuum source operates to withdraw air from groove 44 and move air through the pick-up devices 18 that are in communication with the groove 44. vacuum is not supplied to pick-up devices 18 that are not aligned with groove 44. bushings 27 engageable with rail assembly 54 control the up and down movements of pick-up devices 18. when the pick-up devices 18 move toward the first location they travel downward into the first or pick-up location. the pick-up devices 18 also extend downward. 
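the matching of belt speed to the peripheral speed of the pick-up devices, which puts one object in each pocket, is simple arithmetic. a minimal sketch, with the pick-up circle radius, disk rpm, and number of pick-up units chosen purely as assumed example values:

```python
import math

def peripheral_speed(radius_m, rpm):
    """Speed (m/s) of a pick-up device travelling on a circle of the given radius."""
    return 2 * math.pi * radius_m * rpm / 60.0

def pocket_pitch(radius_m, n_pickups):
    """Belt pocket spacing (m) that puts one pocket under each pick-up device."""
    return 2 * math.pi * radius_m / n_pickups

# Assumed example: pick-up circle radius 0.4 m, disk 32 at 15 rpm, 12 pick-up units 18.
belt_speed = peripheral_speed(0.4, 15)   # belt 64 would be driven at ~0.63 m/s
pitch = pocket_pitch(0.4, 12)            # spacing of projections 66 along belt 64
```

driving belt 64 at exactly this peripheral speed keeps each pocket synchronized with one pick-up device during release.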
receptacle 24 is in close relationship to an object 10 on conveyor 12. the receptacle 24 may be moved into direct engagement with the object 10 on conveyor 12. the vacuum force and air moving into receptacle orifice 24a move the object up into holding engagement with the receptacle 24. the vacuum force retains the object on the receptacle 24. the vacuum force is maintained as long as the pick-up devices 18 are aligned with groove 44. the pick-up devices 18 move the objects upwardly away from the first location and transport the objects to a second location above the exit conveyor 60. as shown in figure 10, when pick-up device 18 is above conveyor 60, the vacuum force is cut off and the bushing rides up track section 54b. this causes rod 22 to move through orifice 24a thereby pushing the object off receptacle 24. the object 10 falls onto conveyor 60. air under pressure can be supplied to nozzle 72 to facilitate removal of the object from receptacle 24 and direct the object onto conveyor 60. a friction conveyor belt also can be used to assist in removal of the object from the receptacle 24. the apparatus can be cleaned by supplying water under pressure to nozzle 70. disk 32 is rotated during the time that the water is supplied to nozzle 70. water directed into the insides of pick-up devices 18 flushes foreign material from the pick-up devices 18. object transfer apparatus 9 is used to remove an individual object or product from a large group of intermingled and overlapping objects in a bin or on a conveyor, carry the objects to a location above a compartmentalized exit conveyor and deposit the objects into separate pockets in the exit conveyor. apparatus 9 is economical in construction and use and is easy to clean and maintain. frame 15 supports apparatus 9 above the object transfer operation so as to conserve desirable floor space. furthermore, apparatus 9 is adaptable to current equipment configurations. 
although the description of the object transfer apparatus 9 contains many specificities, these should not be construed as limiting the scope of the invention but as merely providing illustrations of the presently preferred embodiment of this invention. various other embodiments and ramifications are possible within its scope. for example, the transfer device could pick up flexible pouches one at a time from a bin or conveyor of intermingled and overlapping flexible pouches. the invention is defined in the following claims.
120-466-938-333-291
DE
[ "CN", "US", "DE" ]
B25F5/00,B24B23/02,B24B55/00,F16D55/02,F16D59/00,B24B41/04,B23Q11/00,B24B47/26,B23B19/02,B23Q1/26
2013-01-21T00:00:00
2013
[ "B25", "B24", "F16", "B23" ]
power tool braking device
a power tool braking device of a portable power tool includes at least one braking unit (14a; 14b) that is configured to brake a spindle (16a; 16b) of the portable power tool and/or a machining tool (18a) in at least one braking position of the braking unit (14a; 14b). the power tool braking device further includes at least one spindle fixing unit (20a; 20b) that is configured to fix the spindle (16a; 16b) in at least one fixing position of the spindle fixing unit. the braking unit (14a; 14b) and the spindle fixing unit (20a; 20b) are at least partially configured as a single piece.
1. a power tool braking device of a portable power tool, comprising: at least one braking unit configured to brake one or more of a spindle and a machining tool in at least one braking position of the braking unit; and at least one spindle fixing unit configured to fix the spindle in at least one fixing position of the spindle fixing unit, wherein the braking unit and the spindle fixing unit include at least one common activating element configured, in at least one operating state, to activate a transfer of the braking unit into the braking position and to activate a transfer of the spindle fixing unit into the fixing position. 2. the power tool braking device according to claim 1 , wherein the at least one common activating element is connected to a braking element of the braking unit so as to rotate with said braking element. 3. the power tool braking device according to claim 1 , wherein the at least one common activating element has at least one coupling region, the coupling region, in a fitted state, being configured to engage in a recess of a driver element of the braking unit. 4. the power tool braking device according to claim 1 , wherein the braking unit comprises at least one driver element configured to move a braking element of the braking unit relative to the activating element in response to a relative movement of the common activating element and also of the driver element. 5. the power tool braking device according to claim 1 , wherein the braking unit comprises at least one driver element having at least one ramp-shaped braking element movement region. 6. the power tool braking device according to claim 1 , wherein the braking unit comprises at least one driver element configured, in at least one operating state, to move at least one spindle fixing element of the spindle fixing unit relative to the common activating element of the braking unit and of the spindle fixing unit. 7. 
the power tool braking device according to claim 1 , wherein the braking unit comprises at least one driver element having at least one clamping contour configured to clamp at least one spindle fixing element of the spindle fixing unit. 8. the power tool braking device according to claim 1 , wherein the spindle fixing unit comprises at least one spindle fixing element configured as a rolling element. 9. the power tool braking device according to claim 1 , further comprising at least one output unit including at least one output element, wherein the common activating element is arranged on the output element. 10. a portable power tool, comprising: at least one power tool braking device including; at least one braking unit configured to brake one or more of a spindle and a machining tool in at least one braking position of the braking unit; and at least one spindle fixing unit configured to fix the spindle in at least one fixing position of the spindle fixing unit, wherein the braking unit and the spindle fixing unit include at least one common activating element configured, in at least one operating state, to activate a transfer of the braking unit into the braking position and to activate a transfer of the spindle fixing unit into the fixing position. 11. the portable power tool according to claim 10 , wherein the portable power tool is configured as an angle grinder.
this application claims priority under 35 u.s.c. §119 to patent application no. de 10 2013 200 867.8 filed on jan. 21, 2013 in germany, the disclosure of which is incorporated herein by reference in its entirety. background de 195 10 291 c2 already discloses a power tool braking device of a portable power tool, which has a braking unit for braking a spindle and/or a machining tool in a braking position of the braking unit. in addition, the power tool braking device comprises a spindle fixing unit for fixing the spindle in at least one fixing position of the spindle fixing unit. summary the disclosure is based on a power tool braking device of a portable power tool, comprising at least one braking unit for braking a spindle and/or a machining tool in at least one braking position of the braking unit, and comprising at least one spindle fixing unit for fixing the spindle in at least one fixing position of the spindle fixing unit. it is proposed that the braking unit and the spindle fixing unit are at least partially designed as a single piece. the braking unit and the spindle fixing unit preferably form a common assembly. the braking unit is preferably provided in order at least partially to transfer a relative movement between at least one driver element of the braking unit and at least one braking element of the braking unit into a further relative movement between the driver element and the braking element in order to produce a braking force in a braking position of the braking unit. the driver element is preferably provided here in order, in at least one operating state, to move a spindle fixing element of the spindle fixing unit relative to the braking element. the braking unit is preferably designed as a mechanical braking unit. 
the expression “mechanical braking unit” is intended here in particular to define a braking unit which is provided for transferring at least the braking element and/or a counterbraking element, in particular a brake lining, of the braking unit into a braking position and/or into a release position as a consequence of mechanical actuation, in particular as a consequence of a force of a component being exerted on the braking element and/or the counterbraking element by means of direct contact between the component and the braking element and/or the counterbraking element, in particular in a manner decoupled from a magnetic force. “provided” is intended to be understood in particular as meaning specially designed and/or specially equipped. a “braking position” is intended to be understood here as meaning in particular a position of the braking element and/or of the counterbraking element, in which, in at least one operating state, at least one braking force is exerted on a moving component in order to reduce a speed of a moving component, in particular by at least more than 50%, preferably by at least more than 65% and particularly preferably by at least more than 80%, of said moving component within a predetermined period of time. the predetermined period of time here is in particular shorter than 5 s. the term “release position” here is intended in particular to define a position of the braking element and/or of the counterbraking element, in which an action of the braking force on the moving component in order to reduce the speed is at least substantially prevented. 
the mechanical braking unit is preferably provided in order to brake the component from a working speed, in particular to brake the component to a speed which is less than 50% of the working speed, preferably less than 20% of the working speed and particularly preferably to brake the component to a speed of 0 m/s, in particular within a predetermined period of time of greater than 0.1 s, preferably of greater than 0.5 s and particularly preferably of shorter than 3 s. the mechanical braking unit is particularly preferably designed as a friction brake. the braking element here is preferably designed as a brake disk. the brake disk is preferably formed from stainless steel and/or from another material appearing expedient to a person skilled in the art, such as, for example, sintered bronze, steel, nitrided steel, aluminum or another surface-treated steel and/or metal. a brake lining which is arranged on the braking element or on the counterbraking element and with which the counterbraking element or the braking element interacts in order to generate a braking force is preferably designed as a sintered brake lining, as an organic brake lining, as a brake lining made from carbon, as a brake lining made from ceramic or as another brake lining appearing expedient to a person skilled in the art. preferably, for transferring the relative movement between the driver element and the braking element into a further relative movement between the driver element and the braking element, the braking unit preferably comprises a movement conversion unit. 
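the definitions above bound a "braking position" by percentage speed reduction within a time window. a small helper can make the quoted thresholds concrete; the tier labels are my own shorthand for the "preferably"/"particularly preferably" language, not claim wording:

```python
def braking_tier(v0, v1, dt):
    """Classify a speed drop from v0 to v1 (same units) over dt seconds
    against the thresholds quoted in the description: reduction of more
    than 50% / 65% / 80% within a period shorter than 5 s.

    Returns None if the drop does not qualify as a braking position.
    """
    if dt >= 5.0 or v0 <= 0:
        return None
    reduction = (v0 - v1) / v0
    if reduction > 0.80:
        return "particularly preferred (>80%)"
    if reduction > 0.65:
        return "preferred (>65%)"
    if reduction > 0.50:
        return "braking position (>50%)"
    return None
```

for example, braking a spindle from 80 m/s to 10 m/s in 2 s is an 87.5% reduction and meets even the strictest quoted threshold, while a drop to 50 m/s in the same time does not qualify at all.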
a “movement conversion unit” here is intended to be understood in particular as meaning a unit which comprises a mechanism, in particular a ramp, a thread, a cam mechanism, a coupling mechanism or another mechanism appearing expedient to a person skilled in the art, by means of which a type of movement, such as, for example, a translation, can be converted into a different type of movement, such as, for example, a rotation and/or a combining of rotation and translation, and/or a movement of a component in one direction can be converted into a movement of a further component in a further direction. a relative movement, which is designed as rotation, between the driver element and the braking element is preferably converted into a further relative movement, which is designed as translation, between the driver element and the braking element by means of the movement conversion unit. thus, in order to generate a braking force in a braking position of the braking unit, the braking element is preferably moved relative to the driver element by a combination of rotation and translation. by this means, as a consequence of contact of the braking element with a counterbraking element, which is mounted so as to rotate with said braking element, of the braking unit, a braking force is generated as a consequence of friction between the braking element and the counterbraking element. the term “driver element” here is intended in particular to define an element which is provided in order to be moved at the same time during a movement of a further element, in particular in order to be moved at the same time with a time delay at least at the beginning of a movement relative to the further element, and/or which is provided in order to carry along or at the same time to move a further element as a consequence of a connection during a movement. 
the driver element is preferably provided in order to be moved at the same time by the braking element at the beginning of a rotational movement of the braking element with a time delay relative to the movement of the braking element. the driver element therefore preferably has rotational play relative to the braking element, the rotational play permitting a relative movement between the driver element and the braking element about an axis of rotation over an angular range of greater than 1°, preferably greater than 2°, and particularly preferably greater than 4°. the term “spindle fixing unit” is intended here in particular to define a unit which, at least in one fixing position, prevents a movement of a spindle, in particular a rotational movement of the spindle, for fastening a machining tool to and/or on the spindle and/or for release of the machining tool from the spindle, for example when changing a machining tool, in particular except for a play-induced and/or tolerance-induced movement possibility of the spindle. the spindle fixing unit is particularly preferably designed as a “spindle lock unit”. “at least partially as a single piece” is intended here to be understood in particular as meaning that the braking unit and the spindle fixing unit together use at least one element, in particular an element separate to an output element, which is designed as an output gearwheel, of an output unit, to carry out a function of the braking unit to brake the spindle or to carry out a function of the spindle fixing unit to fix the spindle. the braking unit and the spindle fixing unit particularly preferably together form an installation module. the expression “installation module” is intended here in particular to define a construction of a unit, in which a plurality of components are preassembled and the unit is installed as a whole in an overall system, in particular in a portable power tool. 
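the rotational play between the driver element and the braking element, and the conversion of that relative rotation into axial travel by a ramp-type movement conversion unit, can be sketched numerically. the play threshold of 4 degrees is the "particularly preferably" figure quoted above; the ramp slope and the saturation behavior (translation stops growing once the play is taken up and the elements rotate together) are assumptions for illustration:

```python
# Rotational play permitted between driver element and braking element (deg);
# the text quotes "greater than 4 degrees" as the particularly preferred case.
PLAY_DEG = 4.0

# Assumed ramp pitch of the movement conversion unit: axial travel produced
# per degree of relative rotation (m/deg). Purely illustrative.
RAMP_SLOPE_M_PER_DEG = 0.0005

def axial_travel(brake_rotation_deg):
    """Axial displacement of the braking element toward the counterbraking
    element while the driver element lags behind the braking element.

    Assumption: once the play is used up the driver is carried along, so
    relative rotation (and hence translation) saturates at PLAY_DEG.
    """
    relative = min(max(brake_rotation_deg, 0.0), PLAY_DEG)
    return RAMP_SLOPE_M_PER_DEG * relative
```

under these assumptions the braking element is pressed axially against the counterbraking element within the first few degrees of spindle rotation, after which driver and braking element turn together.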
the installation module preferably has at least one fastening element which is provided to releasably connect the installation module to the overall system. the installation module can advantageously be removed from the overall system in particular with fewer than 10 fastening elements, preferably with fewer than 8 fastening elements and particularly preferably with fewer than 5 fastening elements. the fastening elements are particularly preferably designed as screws. however, it is also conceivable for the fastening elements to be designed as different elements appearing expedient to a person skilled in the art, such as, for example, rapid clamping elements, fastening elements actuable without a tool, etc. at least one function of the installation module can preferably be ensured in a state removed from the overall system. the installation module can particularly preferably be removed by an end user. the installation module is therefore designed as an interchangeable unit which can be replaced by a further installation module, such as, for example, in the event of a defect of the installation module or a function extension and/or function change of the overall system. a compact power tool braking device can advantageously be realized by means of the configuration according to the disclosure. construction space, components and outlay on installation can advantageously be saved. in addition, a high level of operating convenience can advantageously be achieved. by means of a configuration as an installation module, the power tool braking device can simply be interchanged by an operator or an expert dealer or a workshop. in addition, the power tool braking device can be fitted on existing portable power tools in the form of an upgrade. variants of a portable power tool with/without a braking unit and/or spindle fixing unit are therefore advantageously usable in a simple manner in a manufacturing process. 
furthermore, it is proposed that the braking unit and the spindle fixing unit comprise at least one common activating element which is provided in order, in at least one operating state, to activate a transfer of the braking unit into the braking position and to activate a transfer of the spindle fixing unit into the fixing position. an “activating element” is intended here to be understood in particular as meaning an element which releases and/or moves at least one element in order to initiate a braking operation or a fixing operation. in this connection, the activating element can release and/or move the element directly or indirectly. the activating element can be designed as a servo motor, as an actuator, as a movement conversion element, as a lever, as a switch, as an extension, etc., wherein the activating element can release and/or move at least one element of the braking unit and at least one element of the spindle fixing unit. by means of the configuration according to the disclosure, it is advantageously possible for a transfer of the braking unit into a braking position and advantageously possible for a transfer of the spindle fixing unit into a fixing position to be realized by means of an individual element. by this means, a particularly reliable activation of the braking unit and of the spindle fixing unit can be made possible. furthermore, it is proposed that the braking unit and the spindle fixing unit comprise at least the common activating element which is connected to a braking element of the braking unit for rotation with said braking element. “connected for rotation with” is intended to be understood in particular as meaning a connection which, averaged over a complete revolution, transmits a power flux with an unchanged torque, an unchanged direction of rotation and/or an unchanged rotational speed. 
in this connection, the activating element can be connected to the braking element for rotation therewith by means of at least one fastening element of the braking unit and/or of the spindle fixing unit, or can be formed integrally with the braking element. “integrally” is intended to be understood in particular as meaning connected at least in an integrally bonded manner, for example by a welding process, an adhesive bonding process, an injection molding process and/or another process appearing expedient to a person skilled in the art, and/or as advantageously meaning formed as a single piece, such as, for example, by means of production from a cast part and/or by means of production in a single- or multi-component injection molding process and advantageously from an individual blank. the braking element is preferably arranged axially spaced apart relative to the activating element, as viewed along an axis of rotation of the activating element and/or of the braking element. it is conceivable in this connection for the braking element to be arranged in an axially captive manner on the activating element by means of the fastening element. “arranged in a captive manner” is intended here to be understood in particular as meaning an arrangement of at least two elements relative to each other, in which the elements are movable relative to each other, in particular are movable at least axially relative to each other, but a removal or separating of the elements is possible only by means of release of a captive-keeping element, such as, for example, the fastening element. by means of the configuration according to the disclosure, a rotational movement can advantageously be used in order to activate the braking unit. furthermore, reliable activation of the braking unit can advantageously be made possible. 
in addition, it is proposed that the braking unit and the spindle fixing unit comprise at least one common activating element which has at least one coupling region which, in a fitted state, engages in a recess of a driver element of the braking unit. the term “driver element” is intended here in particular to define an element which is provided in order to be moved at the same time during a movement of a further element, in particular in order to be moved at the same time with a time delay relative to the further movement at least at the beginning of a movement, and/or which is provided in order to carry along or to move at the same time a further element as a consequence of a connection during a movement. the driver element preferably has rotational play relative to the braking element, said rotational play permitting a relative movement between the driver element and the braking element about an axis of rotation through an angular range of greater than 1°, preferably greater than 2° and particularly preferably greater than 4°. the driver element is particularly preferably connected to an output element, in particular to the spindle, for rotation therewith. the coupling region is preferably designed as an extension. the recess of the driver element preferably has a larger extent than the coupling region, in particular as viewed in a circumferential direction running in a plane extending at least substantially perpendicularly to an axis of rotation of the driver element. the expression “substantially perpendicularly” is intended here in particular to define an alignment of a direction relative to a reference direction, wherein the direction and the reference direction, in particular as viewed in a plane, enclose an angle of 90°, and the angle has a maximum deviation of in particular less than 8°, advantageously less than 5° and particularly advantageously less than 2°. 
by means of the configuration according to the disclosure, a structurally simple interlocking connection can be achieved. in addition, a compact arrangement of components can advantageously be achieved. furthermore, it is proposed that the braking unit comprises at least the driver element which is provided to move the braking element of the braking unit relative to the activating element as a consequence of a relative movement between the common activating element of the braking unit and of the spindle fixing unit and the driver element. for this purpose, the driver element preferably comprises a braking element movement region which is provided in order, as a consequence of a relative movement between the driver element and the activating element, to generate at least one force component in a direction facing away from the driver element in order to move the braking element. the braking element movement region can have various configurations appearing expedient to a person skilled in the art, such as, for example, a configuration as a bolt which is fixedly connected to the driver element, in particular is integrally formed on the driver element, and engages in a radial cam designed as a groove, or a configuration as a ramp which is arranged on the driver element and interacts with a further ramp arranged on the braking element. by means of the configuration according to the disclosure, a relative movement between the driver element and the activating element, in particular a rotation about an axis of rotation of the driver element and/or of the activating element, can advantageously be used for producing a movement of the braking element. a relative movement, caused by the spindle coming to a stop, between the driver element and the activating element in order to activate the braking unit and/or the spindle fixing unit can be achieved in a structurally simple manner. 
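the ramp variant described above converts a relative rotation between the driver element and the braking element into an axial travel of the braking element. a minimal numeric sketch of this relation is given below; it is an illustrative model only, and the mean radius, rotation angle and ramp pitch are hypothetical values, not taken from the disclosure.

```python
import math

def axial_lift(mean_radius_mm, rotation_deg, ramp_angle_deg):
    """Axial displacement of a braking element riding on a ramp of
    constant pitch: arc length travelled at the ramp's mean radius
    multiplied by the tangent of the ramp (pitch) angle.
    Illustrative model only; values are not from the disclosure."""
    arc_mm = mean_radius_mm * math.radians(rotation_deg)
    return arc_mm * math.tan(math.radians(ramp_angle_deg))

# hypothetical geometry: 15 mm mean ramp radius, 4 deg of relative
# rotation (comparable to the rotational play named above), 10 deg pitch
lift_mm = axial_lift(15.0, 4.0, 10.0)
```

with these assumed values the braking element would travel roughly 0.18 mm away from the driver element; in a real design the pitch and the available rotational play would be chosen so that this travel closes the air gap to the counterbraking element.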
furthermore, it is proposed that the braking unit comprises at least the driver element which comprises at least one ramp-shaped braking element movement region. “ramp-shaped” is intended here to be understood in particular as meaning a geometrical shape which has a mathematically defined pitch along a section from a starting point toward an end point such that there is a height difference between the starting point and the end point and/or the starting point is arranged in a plane which runs offset at least substantially parallel to a plane in which the end point is arranged. a plane formed by a sliding surface of the braking element movement region therefore preferably encloses an angle differing from 90° and from an integral multiple of 90° with a plane running at least substantially parallel to a side of the driver element that faces the braking element. “substantially parallel” is intended here as meaning in particular an alignment of a direction relative to a reference direction, in particular in a plane, wherein the direction has a deviation in particular of smaller than 8°, advantageously smaller than 5° and particularly advantageously smaller than 2° in relation to the reference direction. by means of the configuration according to the disclosure, a movement unit, in which an axial movement of the braking element in a direction facing away from the driver element can be achieved in a structurally simple manner as a consequence of a relative movement between the driver element and the braking element, can be achieved cost-effectively. in addition, it is proposed that the braking unit comprises at least the driver element which is provided in order, in at least one operating state, to move at least one spindle fixing element of the spindle fixing unit relative to the common activating element of the braking unit and of the spindle fixing unit. 
the driver element preferably moves the spindle fixing element relative to the activating element as a consequence of a relative movement between the driver element and the activating element. the spindle fixing element in this case preferably has at least one movement component which runs at least substantially perpendicularly to an axis of rotation of the driver element and/or of the activating element, in particular radially with respect to the axis of rotation. the spindle fixing element in this case is preferably clamped between a clamping element of the spindle fixing unit and the driver element in order to fix the spindle in the fixing position. the clamping element is preferably arranged outside the spindle fixing element, as viewed in a direction extending from an axis of rotation of the driver element and/or of the activating element at least substantially perpendicularly to the axis of rotation of the driver element and/or of the activating element. the clamping element preferably surrounds the driver element and/or the spindle fixing element in the circumferential direction. in this case, the clamping element is advantageously designed in the shape of a circular ring, for example in the shape of a hollow cylinder, as a clamping drum, etc. the driver element is preferably part of the braking unit and part of the spindle fixing unit. by means of the configuration according to the disclosure, a structural interrelationship between the braking unit and the spindle fixing unit can advantageously be achieved in order to permit a high level of operating convenience. in addition, a compact construction of the braking unit and of the spindle fixing unit can advantageously be made possible. furthermore, it is proposed that the braking unit comprises at least the driver element which has at least one clamping contour for clamping at least the spindle fixing element of the spindle fixing unit. 
the clamping contour preferably extends in the circumferential direction on an outer circumference of the driver element. however, it is also conceivable for the clamping contour to be arranged on the driver element at a different position appearing expedient to a person skilled in the art. by means of the configuration according to the disclosure, a compact configuration of the power tool braking device can advantageously be achieved. furthermore, it is proposed that the spindle fixing unit comprises at least one spindle fixing element which is designed as a rolling element. a “rolling element” is intended here to be understood in particular as meaning an element which is formed in a rotationally symmetrical manner at least about one axis, in particular an axis of rotation. in particular, the rolling element is provided in order, at least in one operating state, to roll with at least one surface, in particular a lateral area, on a surface of a component as a consequence of a rotational movement about the axis of rotation. the rolling element is preferably designed as a cylinder. however, it is also conceivable for the rolling element to be designed as a ball, as a cone, as a barrel or as another rotational body appearing expedient to a person skilled in the art. a rolling movement of the rolling element can advantageously be used in order to reach a fixing position of the spindle fixing unit. in addition, a low frictional resistance can advantageously be achieved during a transfer of the spindle fixing unit into the fixing position. in addition, it is proposed that the power tool braking device comprises at least one output unit which comprises at least one output element on which a common activating element of the braking unit and of the spindle fixing unit is arranged. 
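the clamping of the spindle fixing element designed as a rolling element, described above, resembles a roller wedged between the clamping contour of the driver element and the surrounding clamping element. a standard textbook self-locking condition for such a roller clamp is sketched below; this is an illustrative model of the clamping principle, not a specification from the disclosure, and the wedge angle and friction coefficient are hypothetical values.

```python
import math

def roller_clamps(wedge_angle_deg, friction_coeff):
    """Self-locking condition for a roller wedged between two surfaces
    (textbook roller-freewheel model, used here only to illustrate the
    clamping principle): the roller jams, and thus fixes the spindle,
    when half the wedge angle does not exceed the friction angle."""
    return math.radians(wedge_angle_deg) / 2.0 <= math.atan(friction_coeff)

# hypothetical values: 8 deg wedge between clamping contour and
# clamping element, friction coefficient 0.1
locks = roller_clamps(8.0, 0.1)
```

with a friction coefficient of 0.1 the friction angle is about 5.7°, so an 8° wedge (4° half-angle per contact) self-locks, while a substantially steeper wedge would let the roller slip instead of clamping.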
the activating element in this case can be connected to the output element for rotation therewith by means of fastening elements of the braking unit and/or of the spindle fixing unit, or can be formed integrally with the output element. the fastening elements here can be designed as rivets, as screws and/or as further fastening elements appearing expedient to a person skilled in the art. the output element is preferably designed as a gearwheel, in particular as a ring gear, of the output unit. the output element is supported in this case with one side on the driver element. with a side facing away from the driver element, the output element can be supported on a securing ring arranged on the spindle or on a driving element designed as a pinion. an “output unit” is intended here to be understood in particular as meaning a unit which is drivable by means of a drive unit of a portable power tool and transmits forces and/or torques generated by the drive unit to a machining tool and/or to a tool-holding fixture of a portable power tool. the output unit is preferably designed as an angular mechanism. an “angular mechanism” is intended here to be understood in particular as meaning a mechanism which, in order to transmit forces and/or torques, has an axis of rotation of an outlet element, which axis of rotation is arranged at an angle relative to an axis of rotation of an inlet element, wherein the axis of rotation of the inlet element and the axis of rotation of the outlet element preferably have a common intersecting point. “arranged at an angle” is intended here to be understood in particular as meaning an arrangement of an axis relative to a further axis, in particular of two intersecting axes, wherein the two axes enclose an angle differing from 180°. the axis of rotation of the inlet element and the axis of rotation of the outlet element preferably enclose an angle of 90° in a fitted state of the output unit designed as an angular mechanism. 
by means of the configuration according to the disclosure, a compact arrangement of the braking unit and of the spindle fixing unit, which can advantageously act on an output element, can be achieved in a structurally simple manner. furthermore, the disclosure is based on a portable power tool with a power tool braking device according to the disclosure. a “portable power tool” is intended here to be understood in particular as meaning a power tool for machining workpieces, which power tool can be transported without a transport machine by an operator. the portable power tool has in particular a mass which is less than 40 kg, preferably less than 10 kg and particularly preferably less than 5 kg. the portable power tool is preferably designed as an angle grinder. however, it is also conceivable for the portable power tool to have a different configuration appearing expedient to a person skilled in the art, such as, for example, a configuration as a circular saw, as a drill, as a hammer drill and/or as a chisel hammer, as a garden implement, etc. by means of the configuration according to the disclosure, a high level of operating convenience for an operator of the portable power tool can advantageously be achieved, since in particular true running can advantageously be ensured by means of a movement of the braking element in the direction of the driver element when the portable power tool is set into operation. the power tool braking device according to the disclosure and/or the power tool according to the disclosure is/are not intended to be restricted here to the use and embodiment described above. in particular the power tool braking device according to the disclosure and/or the power tool according to the disclosure can have a number of individual elements, components and units differing from a number mentioned herein in order to carry out a function described herein. 
brief description of the drawings

further advantages emerge from the description below of the drawings. the drawings illustrate exemplary embodiments of the disclosure. the drawings, the description and the claims contain numerous features in combination. a person skilled in the art will expediently also consider the features individually and put them together to form meaningful further combinations. in the drawings:

fig. 1 shows a portable power tool according to the disclosure with a power tool braking device according to the disclosure in a schematic illustration,

fig. 2 shows a sectional view of the power tool braking device according to the disclosure in a gear housing of the portable power tool according to the disclosure, said gear housing having been removed from a motor housing of the portable power tool according to the disclosure, in a schematic illustration,

fig. 3 shows a view of a detail of the power tool braking device according to the disclosure in the form of an installation module, in a schematic illustration,

fig. 4 shows a view of a detail of the power tool braking device according to the disclosure with a removed output element of an output unit of the power tool braking device according to the disclosure, in a schematic illustration,

fig. 5 shows a view of a detail of a braking element, which is arranged in a bearing flange of the output unit, of a braking unit of the power tool braking device according to the disclosure, in a schematic illustration,

fig. 6 shows a view of a detail of a driver element, which is arranged on the output element, of the braking unit, in a schematic illustration,

fig. 7 shows a view of a detail of a common activating element, which is arranged on the output element, of the braking unit and of a spindle fixing unit of the power tool braking device according to the disclosure, in a schematic illustration,

fig. 8 shows a view of a detail of an alternative power tool braking device according to the disclosure with a removed output element of an output unit of the alternative power tool braking device according to the disclosure, in a schematic illustration, and

fig. 9 shows a view of a detail of a common activating element, which is arranged on the output element, of a braking unit and a spindle fixing unit of the alternative power tool braking device according to the disclosure, in a schematic illustration.

detailed description

fig. 1 shows a portable power tool 12 a which is designed as an angle grinder and has a power tool braking device 10 a . the power tool braking device 10 a is therefore designed as a hand-held power tool braking device. the portable power tool 12 a comprises a protective hood unit 68 a , a power tool housing 70 a and a main handle 72 a . the main handle 72 a extends from a gear housing 74 a of the power tool housing 70 a in a direction which faces away from the gear housing 74 a and runs at least substantially parallel to a main direction of extent 76 a of the portable power tool 12 a as far as a side 78 a of the power tool housing 70 a , on which a cable of the portable power tool 12 a is arranged for supplying power. the main handle 72 a is fixed on a motor housing 80 a of the power tool housing 70 a . it is conceivable here for the main handle 72 a to be connected to the motor housing 80 a via a handle damping unit (not illustrated specifically here). a spindle 16 a of an output unit 64 a of the power tool braking device 10 a ( fig. 2 ) extends out of the gear housing 74 a , with it being possible for a machining tool 18 a for machining a workpiece (not illustrated specifically here) to be fixed on said spindle. the machining tool 18 a is designed as a grinding wheel. however, it is also conceivable for the machining tool 18 a to be designed as a cutoff wheel or polishing wheel. 
the power tool housing 70 a comprises the motor housing 80 a for receiving a drive unit 82 a of the portable power tool 12 a , and the gear housing 74 a for receiving the output unit 64 a and the power tool braking device 10 a . the drive unit 82 a is provided to drive the machining tool 18 a in a rotating manner via the output unit 64 a . furthermore, the machining tool 18 a can be connected to the spindle 16 a for rotation therewith by means of a fastening element (not illustrated specifically here) in order to machine a workpiece. the machining tool 18 a can therefore be driven in a rotating manner during operation of the portable power tool 12 a . the output unit 64 a is connected to the drive unit 82 a in a manner already known to a person skilled in the art via a drive element (not illustrated specifically here) of the drive unit 82 a , which drive element is designed as a pinion and is drivable in a rotating manner. in addition, an additional handle 84 a is arranged on the gear housing 74 a . the additional handle 84 a extends transversely with respect to the main direction of extent 76 a of the portable power tool 12 a. the output unit 64 a furthermore comprises a bearing flange 86 a and a bearing element 88 a , which is arranged in the bearing flange 86 a , for the mounting of the spindle 16 a ( fig. 2 ). the bearing flange 86 a is connectable releasably to the gear housing 74 a by means of fastening elements (not illustrated specifically here) of the output unit 64 a . furthermore, the bearing flange 86 a has a hybrid construction. the bearing flange 86 a is therefore at least partially formed from plastic and partially from a material different from plastic. in this connection, the material different from a plastic can be, for example, aluminum, steel, carbon, an alloy of one of the abovementioned materials or a different material appearing expedient to a person skilled in the art. 
furthermore, the power tool braking device 10 a has a runoff protection unit 134 a ( figs. 3 to 7 ) which is already known to a person skilled in the art and is provided to prevent the machining tool 18 a and/or the fastening element for fastening the machining tool 18 a from running off from the spindle 16 a in a braking mode of the power tool braking device 10 a . the runoff protection unit 134 a here is designed as a groove 126 a which is provided in the spindle 16 a . however, it is also conceivable for the runoff protection unit 134 a to be designed as a receiving flange which is connectable to the spindle 16 a for rotation therewith by means of an interlocking connection and has a function already known to a person skilled in the art. fig. 2 shows a sectional view of the power tool braking device 10 a according to the disclosure in a gear housing 74 a which has been removed from the motor housing 80 a . the power tool braking device 10 a of the portable power tool 12 a comprises at least one braking unit 14 a for braking the spindle 16 a and/or the machining tool 18 a in at least one braking position of the braking unit 14 a . furthermore, the power tool braking device 10 a comprises at least one spindle fixing unit 20 a for fixing the spindle 16 a in at least one fixing position of the spindle fixing unit 20 a . the braking unit 14 a and the spindle fixing unit 20 a are at least partially formed as a single piece. in this connection, the braking unit 14 a and the spindle fixing unit 20 a comprise at least one common activating element 22 a which is provided in order, in at least one operating state, to activate a transfer of the braking unit 14 a into the braking position and to activate a transfer of the spindle fixing unit 20 a into the fixing position. the common activating element 22 a is arranged on an output element 66 a of the output unit 64 a . 
in this connection, the common activating element 22 a is arranged on a side of the output element 66 a that faces away from a toothing 90 a of the output element 66 a , which is in the form of a ring gear ( figs. 3, 6 and 7 ; the toothing 90 a is indicated merely in a partial region). the common activating element 22 a is connected to the output element 66 a for rotation therewith by means of at least one connecting element 92 a of the braking unit 14 a and/or of the spindle fixing unit 20 a . overall, the common activating element 22 a is connected to the output element 66 a for rotation therewith by means of at least three connecting elements 92 a , 94 a , 96 a . the connecting elements 92 a , 94 a , 96 a here are designed as screws. however, it is also conceivable for the braking unit 14 a and/or the spindle fixing unit 20 a to comprise a number of connecting elements 92 a , 94 a , 96 a differing from three for forming a connection of the common activating element 22 a and of the output element 66 a for conjoint rotation. in addition, it is conceivable for the common activating element 22 a to be connected to the output element 66 a for rotation therewith by means of a different type of connection appearing expedient to a person skilled in the art, such as, for example, by means of a frictional and/or by means of an integrally bonded connection, in particular an integral configuration of the common activating element 22 a with the output element 66 a . furthermore, the common activating element 22 a of the braking unit 14 a and of the spindle fixing unit 20 a is connected to a braking element 24 a of the braking unit 14 a for rotation with said braking element. the braking element 24 a here is designed as a brake disk. the braking unit 14 a comprises at least one rotational carry-along element 102 a , 104 a , 106 a which is provided for connecting the braking element 24 a to the common activating element 22 a in an interlocking manner. 
in this connection, the rotational carry-along element 102 a , 104 a , 106 a connects the braking element 24 a to the common activating element 22 a in an interlocking manner in a circumferential direction 100 a running in a plane extending at least substantially perpendicularly to an axis of rotation 98 a of the spindle 16 a . overall, the braking unit 14 a has three rotational carry-along elements 102 a , 104 a , 106 a ( figs. 4 and 5 ) which are provided for connecting the braking element 24 a to the common activating element 22 a in an interlocking manner. however, it is also conceivable for the braking unit 14 a to have a number of rotational carry-along elements 102 a , 104 a , 106 a differing from three for forming an interlocking connection. in addition, the rotational carry-along elements 102 a , 104 a , 106 a are provided for mounting the braking element 24 a in an axially movable manner relative to the common activating element 22 a , as viewed along the axis of rotation 98 a of the spindle 16 a . a possibility of moving the braking element 24 a relative to the common activating element 22 a in two opposite directions, as viewed along the axis of rotation 98 a of the spindle 16 a , is limited by means of the rotational carry-along elements 102 a , 104 a , 106 a . however, it is also conceivable for the braking element 24 a to be acted upon by a spring force by means of a spring element (not illustrated specifically here) in the direction of the common activating element 22 a and for the rotational carry-along elements 102 a , 104 a , 106 a to limit a possibility of moving the braking element 24 a relative to the common activating element 22 a in the direction of the common activating element 22 a , as viewed along the axis of rotation 98 a of the spindle 16 a. the rotational carry-along elements 102 a , 104 a , 106 a are of cylindrical design. 
however, it is also conceivable for the rotational carry-along elements 102 a , 104 a , 106 a to have a different configuration appearing expedient to a person skilled in the art, for example a polygonal configuration. the rotational carry-along elements 102 a , 104 a , 106 a are fastened to the braking element 24 a . the rotational carry-along elements 102 a , 104 a , 106 a here can be formed integrally with the braking element 24 a or can be fastened to the braking element 24 a by means of an interlocking, frictional and/or integrally bonded connection. furthermore, in a fitted state, the rotational carry-along elements 102 a , 104 a , 106 a extend from the braking element 24 a in the direction of the common activating element 22 a at least substantially parallel to the axis of rotation 98 a of the spindle 16 a . furthermore, the rotational carry-along elements 102 a , 104 a , 106 a respectively engage in a rotational carry-along recess 108 a , 110 a , 112 a of the common activating element 22 a for the interlocking connection with the common activating element 22 a . the rotational carry-along recesses 108 a , 110 a , 112 a here are respectively arranged in a coupling region 26 a , 28 a , 30 a of the common activating element 22 a . the common activating element 22 a of the braking unit 14 a and of the spindle fixing unit 20 a therefore has at least the coupling region 26 a , 28 a , 30 a which, in a fitted state, engages in a recess 32 a , 34 a , 36 a of a driver element 38 a of the braking unit 14 a . overall, the activating element 22 a has three coupling regions 26 a , 28 a , 30 a which are arranged distributed uniformly on the activating element in the circumferential direction 100 a running in the plane which extends at least substantially perpendicularly to the axis of rotation 98 a of the spindle 16 a . 
however, it is conceivable for the common activating element 22 a to have a number of coupling regions 26 a , 28 a , 30 a differing from three for forming a connection of the common activating element 22 a and of the braking element 24 a . the coupling regions 26 a , 28 a , 30 a are designed as extensions. the coupling regions 26 a , 28 a , 30 a here extend from the common activating element 22 a at least substantially parallel to the axis of rotation 98 a of the spindle 16 a in the direction of the driver element 38 a and in the direction of the braking element 24 a. the braking unit 14 a is provided for at least partially converting a relative movement between the driver element 38 a and the braking element 24 a into a further relative movement between the driver element 38 a and the braking element 24 a in order to produce a braking force in a braking position of the braking unit 14 a . the driver element 38 a is arranged on the spindle 16 a for rotation therewith. the output element 66 a , which is in the form of a ring gear, is mounted rotatably on the spindle 16 a together with the braking element 24 a and the common activating element 22 a so as to be rotatable about an axis of rotation of the driver element 38 a , which axis of rotation runs coaxially with respect to the axis of rotation 98 a of the spindle 16 a , by an angle of less than 90° relative to the spindle 16 a. the coupling regions 26 a , 28 a , 30 a are provided to limit a relative rotational movement of the common activating element 22 a , and therefore of the braking element 24 a and also of the output element 66 a , to a predetermined angular range about the axis of rotation of the driver element 38 a relative to the driver element 38 a . the coupling regions 26 a , 28 a , 30 a are arranged distributed uniformly on the common activating element 22 a in the circumferential direction 100 a ( fig. 7 ). 
the coupling regions 26 a , 28 a , 30 a each engage in one of the recesses 32 a , 34 a , 36 a of the driver element 38 a . the driver element 38 a overall comprises three recesses 32 a , 34 a , 36 a . however, it is also conceivable for the driver element 38 a to have a number of recesses 32 a , 34 a , 36 a differing from three, in each of which a coupling region 26 a , 28 a , 30 a of the common activating element 22 a engages. the recesses 32 a , 34 a , 36 a of the driver element 38 a are arranged distributed uniformly in the circumferential direction 100 a on the driver element 38 a ( figs. 4 and 6 ). furthermore, the recesses 32 a , 34 a , 36 a of the driver element 38 a have a greater extent in the circumferential direction 100 a than the coupling regions 26 a , 28 a , 30 a . rotational play between the common activating element 22 a and the driver element 38 a is therefore achieved in the circumferential direction 100 a . the rotational play is formed by an angular range by which the common activating element 22 a together with the output element 66 a and the braking element 24 a can be rotated relative to the driver element 38 a . the angular range by which the common activating element 22 a and therefore the output element 66 a and the braking element 24 a are mounted rotatably about the axis of rotation of the driver element 38 a relative to the driver element 38 a is therefore limited by means of interaction of the coupling regions 26 a , 28 a , 30 a and the recesses 32 a , 34 a , 36 a of the driver element 38 a. 
furthermore, on a side facing the driver element 38 a in a fitted state, the braking element 24 a has at least one lifting element 114 a , 116 a , 118 a which is provided for moving the braking element 24 a toward a counterbraking element 120 a of the braking unit 14 a during a rotational movement relative to the driver element 38 a and/or to the spindle 16 a for producing a braking force for braking a rotational movement of the output element 66 a and of the spindle 16 a in a direction running at least substantially parallel to the axis of rotation 98 a of the spindle 16 a and facing away from the driver element 38 a . the counterbraking element 120 a is arranged in the bearing flange 86 a of the output unit 64 a for rotation with said bearing flange. the counterbraking element 120 a here can be formed with a brake lining. overall, the braking element 24 a has three lifting elements 114 a , 116 a , 118 a . however, it is also conceivable for the braking element 24 a to have a number of lifting elements 114 a , 116 a , 118 a differing from three. the lifting elements 114 a , 116 a , 118 a are arranged distributed uniformly in the circumferential direction 100 a on the braking element 24 a . the lifting elements 114 a , 116 a , 118 a here are formed as a single piece with the braking element 24 a . however, it is also conceivable for the lifting elements 114 a , 116 a , 118 a to be formed separately from the braking element 24 a and to be connected fixedly to the braking element 24 a by means of a type of connection appearing expedient to a person skilled in the art, such as, for example, by means of an interlocking and/or frictional type of connection. the lifting elements 114 a , 116 a , 118 a are formed in a ramp-shaped manner. 
the lifting elements 114 a , 116 a , 118 a therefore each have a geometrical configuration which, in a fitted state, has a mathematically defined pitch from the braking element 24 a in the direction of the driver element 38 a , wherein the pitch has a value differing from 0. the driver element 38 a has at least one ramp-shaped braking element movement region 40 a , 42 a , 44 a for interaction with the lifting elements 114 a , 116 a , 118 a of the braking element 24 a for producing a movement of the braking element 24 a relative to the driver element 38 a in the direction of the counterbraking element 120 a as a consequence of a rotational movement of the braking element 24 a relative to the driver element 38 a ( fig. 6 ). the braking unit 14 a therefore comprises at least the driver element 38 a which has at least one ramp-shaped braking element movement region 40 a , 42 a , 44 a . overall, the driver element 38 a has three braking element movement regions 40 a , 42 a , 44 a . however, it is also conceivable for the driver element 38 a to have a number of braking element movement regions 40 a , 42 a , 44 a differing from three. a number of braking element movement regions 40 a , 42 a , 44 a of the driver element 38 a is in particular dependent on a number of lifting elements 114 a , 116 a , 118 a of the braking element 24 a , said lifting elements corresponding to the braking element movement regions 40 a , 42 a , 44 a of the driver element 38 a . the braking element movement regions 40 a , 42 a , 44 a of the driver element 38 a are arranged distributed uniformly in the circumferential direction 100 a on the driver element 38 a . the braking element movement regions 40 a , 42 a , 44 a here are formed as a single piece with the driver element 38 a . 
however, it is also conceivable for the braking element movement regions 40 a , 42 a , 44 a to be formed separately from the driver element 38 a and to be fixedly connected to the driver element 38 a by means of a type of connection appearing expedient to a person skilled in the art, such as, for example, an interlocking and/or frictional type of connection. the braking element movement regions 40 a , 42 a , 44 a have a geometrical configuration which, in a fitted state, has a mathematically defined pitch from the driver element 38 a in the direction of the braking element 24 a , wherein the pitch has a value differing from 0. the lifting elements 114 a , 116 a , 118 a of the braking element 24 a and the braking element movement regions 40 a , 42 a , 44 a together form a movement conversion unit of the braking unit 14 a , which is provided for producing a braking force in a braking position of the braking unit 14 a in order to move the braking element 24 a relative to the driver element 38 a by a combination of a rotation and a translation. the braking unit 14 a therefore comprises at least the driver element 38 a which is provided in order to move a braking element 24 a of the braking unit 14 a relative to the activating element 22 a as a consequence of a rotational movement of the common activating element 22 a of the braking unit 14 a and of the spindle fixing unit 20 a and also of the driver element 38 a. furthermore, the driver element 38 a is provided for moving at least one spindle fixing element 46 a , 48 a , 50 a , 52 a , 54 a , 56 a of the spindle fixing unit 20 a relative to the common activating element 22 a of the braking unit 14 a and of the spindle fixing unit 20 a in at least one operating state. 
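the ramp interaction described above converts a relative rotation between the braking element 24 a and the driver element 38 a into an axial travel toward the counterbraking element 120 a. as a purely illustrative sketch (the description gives no concrete dimensions, so the ramp pitch angle, effective radius and rotation angle below are assumed values), the axial lift of a constant-pitch ramp can be estimated as:

```python
import math

def axial_lift(ramp_angle_deg: float, radius_mm: float, rotation_deg: float) -> float:
    """axial travel of the braking element for a given relative rotation,
    assuming a constant-pitch ramp at the given effective radius
    (illustrative model only, not from the disclosure)."""
    arc_mm = radius_mm * math.radians(rotation_deg)          # sliding distance along the ramp base
    return arc_mm * math.tan(math.radians(ramp_angle_deg))   # rise = run * tan(pitch angle)

# hypothetical values: 15 degree ramp pitch, 20 mm effective radius, 10 degree relative rotation
lift_mm = axial_lift(15.0, 20.0, 10.0)  # roughly 0.94 mm of axial travel
```

the pitch "differing from 0" in the text corresponds to a nonzero ramp angle here; a steeper ramp yields more lift per degree of relative rotation.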
the braking unit 14 a therefore comprises at least the driver element 38 a which is provided for moving at least one spindle fixing element 46 a , 48 a , 50 a , 52 a , 54 a , 56 a of the spindle fixing unit 20 a relative to the common activating element 22 a of the braking unit 14 a and of the spindle fixing unit 20 a in at least one operating state. the spindle fixing element 46 a , 48 a , 50 a , 52 a , 54 a , 56 a here is designed as a rolling element. however, it is also conceivable for the spindle fixing element 46 a , 48 a , 50 a , 52 a , 54 a , 56 a to have another configuration appearing expedient to a person skilled in the art. overall, the spindle fixing unit 20 a has six spindle fixing elements 46 a , 48 a , 50 a , 52 a , 54 a , 56 a which have an analogous configuration. in addition, however, it is conceivable for the spindle fixing unit 20 a to have a number of spindle fixing elements 46 a , 48 a , 50 a , 52 a , 54 a , 56 a differing from six. the spindle fixing elements 46 a , 48 a , 50 a , 52 a , 54 a , 56 a are arranged between the driver element 38 a and a clamping element 122 a of the spindle fixing unit 20 a , as viewed in a direction running at least substantially perpendicularly to the axis of rotation 98 a of the spindle 16 a ( fig. 4 ). the clamping element 122 a here is arranged on the bearing flange 86 a for rotation therewith. the clamping element 122 a is therefore fixed on the bearing flange 86 a so as not to be rotatable relative to the bearing flange 86 a . in this case, the clamping element 122 a can be fixed to the bearing flange 86 a by means of an interlocking, frictional and/or integrally bonded connection. in addition, the clamping element 122 a is of annular design. the clamping element 122 a therefore surrounds the driver element 38 a in the circumferential direction 100 a. 
furthermore, the driver element 38 a comprises at least one clamping contour 58 a , 60 a , 62 a , 128 a , 130 a , 132 a for clamping at least one of the spindle fixing elements 46 a , 48 a , 50 a , 52 a , 54 a , 56 a of the spindle fixing unit 20 a in order to fix the spindle 16 a in at least one operating state or in the fixing position of the spindle fixing unit 20 a . the braking unit 14 a therefore comprises at least the driver element 38 a which has at least one clamping contour 58 a , 60 a , 62 a , 128 a , 130 a , 132 a for clamping at least one spindle fixing element 46 a , 48 a , 50 a , 52 a , 54 a , 56 a of the spindle fixing unit 20 a . overall, the driver element 38 a has six clamping contours 58 a , 60 a , 62 a , 128 a , 130 a , 132 a . however, it is also conceivable for the driver element 38 a to have a number of clamping contours 58 a , 60 a , 62 a , 128 a , 130 a , 132 a differing from six. the clamping contours 58 a , 60 a , 62 a , 128 a , 130 a , 132 a are arranged distributed uniformly in the circumferential direction 100 a on the driver element 38 a . in this case, the clamping contours 58 a , 60 a , 62 a , 128 a , 130 a , 132 a are arranged on an outer circumference of the driver element 38 a , said outer circumference running in the circumferential direction 100 a . the clamping contours 58 a , 60 a , 62 a , 128 a , 130 a , 132 a have a ramp-shaped configuration. the clamping contours 58 a , 60 a , 62 a , 128 a , 130 a , 132 a therefore each have a geometrical configuration which has a mathematically defined pitch in the circumferential direction 100 a , wherein the pitch has a value differing from 0. in the event of an interruption to a power supply of the drive unit 82 a , such as, for example, as a consequence of an actuation of an operating element 124 a ( fig. 
1 ), which is designed as an on/off switch, of the portable power tool 12 a , an armature shaft (not illustrated specifically here) of the drive unit 82 a is braked as a consequence of an action, already known to a person skilled in the art, of forces and/or torques, such as, for example, of frictional forces and magnetic forces of the drive unit 82 a . the output element 66 a which is in the form of a ring gear and meshes with the drive element (not illustrated specifically here) of the drive unit 82 a , said drive element being arranged on the armature shaft for rotation therewith, the spindle 16 a , the machining tool 18 a fastened to the spindle 16 a , the driver element 38 a arranged on the spindle 16 a for rotation therewith and the common activating element 22 a connected to the output element 66 a for rotation therewith, and also the braking element 24 a connected to the common activating element 22 a for rotation therewith oppose a braking of the armature shaft as a consequence of a mass inertia of the components. the output element 66 a , the spindle 16 a , the machining tool 18 a , the driver element 38 a and the common activating element 22 a and also the braking element 24 a endeavor here to rotate further about the axis of rotation 98 a of the spindle 16 a as a consequence of a mass inertia of the components. as a consequence of the drive element of the drive unit 82 a meshing with the output element 66 a and as a consequence of the rotational play between the driver element 38 a arranged on the spindle 16 a for rotation therewith and the common activating element 22 a arranged on the output element 66 a , and also the braking element 24 a , the output element 66 a and the common activating element 22 a and the braking element 24 a are rotated relative to the spindle 16 a and to the driver element 38 a in the event of an interruption to a power supply of the drive unit 82 a . 
the lifting elements 114 a , 116 a , 118 a of the braking element 24 a here slide on the braking element movement regions 40 a , 42 a , 44 a of the driver element 38 a . in addition to a relative rotational movement, the braking element 24 a is moved linearly relative to the driver element 38 a in the direction of the counterbraking element 120 a by means of an interaction of the lifting elements 114 a , 116 a , 118 a of the braking element 24 a and of the braking element movement regions 40 a , 42 a , 44 a of the driver element 38 a . by this means, the braking element 24 a and the counterbraking element 120 a enter into contact, as a result of which, in a braking position of the braking unit 14 a , a braking force is produced for braking a rotational movement of the spindle 16 a , of the machining tool 18 a and of the driver element 38 a . the braking unit 14 a is therefore in a braking position. the braking force is produced by means of friction between the braking element 24 a and the counterbraking element 120 a . in this case, it is conceivable for the braking unit 14 a to comprise a heat sink element formed separately from the bearing flange 86 a , such as, for example a cooling rib, etc., in order to dissipate frictional heat, or for the bearing flange 86 a to be at least partially formed from an advantageous heat-conducting material. in this case, the braking element 24 a can carry out a rotational movement relative to the driver element 38 a until the coupling regions 26 a , 28 a , 30 a strike against edge regions of the recesses 32 a , 34 a , 36 a of the driver element 38 a . 
the lifting elements 114 a , 116 a , 118 a of the braking element 24 a and the braking element movement regions 40 a , 42 a , 44 a of the driver element 38 a slide further on one another until the coupling regions 26 a , 28 a , 30 a strike against the edge regions of the recesses 32 a , 34 a , 36 a of the driver element 38 a as a consequence of a frictional force between the braking element 24 a and the counterbraking element 120 a and therefore move the braking element 24 a further in the direction of the counterbraking element 120 a . as a consequence of the further movement of the braking element 24 a in the direction of the counterbraking element 120 a , an axial force acting from the braking element 24 a on the counterbraking element 120 a in the braking position of the braking unit 14 a is increased. the increased axial force results in an increase in the braking force. the braking unit 14 a therefore has a self-locking function. by this means, the spindle 16 a , the machining tool 18 a and the driver element 38 a and also the armature shaft, the output element 66 a , the common activating element 22 a and the braking element 24 a are braked to a standstill. if, after the spindle 16 a and the machining tool 18 a are at a standstill, the spindle 16 a is rotated about the axis of rotation 98 a of the spindle 16 a in order to change a tool, the spindle fixing unit 20 a is transferred into the fixing position. in this case, the spindle fixing elements 46 a , 48 a , 50 a , 52 a , 54 a , 56 a are moved relative to the common activating element 22 a in a direction running at least substantially perpendicularly to the axis of rotation 98 a of the spindle 16 a and in the circumferential direction 100 a by means of the clamping contours 58 a , 60 a , 62 a , 128 a , 130 a , 132 a of the driver element 38 a . 
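the self-locking function described above can be related to the textbook wedge condition: a ramp drive tends to self-lock when the friction angle exceeds the ramp pitch angle, and the braking torque grows with the axial contact force. the friction coefficient, contact force and effective radius below are hypothetical values, not taken from the description:

```python
import math

def braking_torque_nm(axial_force_n: float, mu: float, r_eff_m: float) -> float:
    """friction torque between braking element and counterbraking element
    for a given axial contact force (simplified single-radius contact model)."""
    return mu * axial_force_n * r_eff_m

def self_locks(mu: float, ramp_angle_deg: float) -> bool:
    """wedge condition: the mechanism tends to self-lock when the
    friction angle atan(mu) exceeds the ramp pitch angle."""
    return math.degrees(math.atan(mu)) > ramp_angle_deg

# hypothetical values: mu = 0.4, 100 N axial force, 25 mm effective radius
torque = braking_torque_nm(100.0, 0.4, 0.025)  # 1.0 N*m
locks = self_locks(0.4, 10.0)                  # atan(0.4) is about 21.8 deg, above a 10 deg ramp
```

this mirrors the mechanism in the text: the friction torque drags the braking element further up the ramps, raising the axial force and hence the braking torque again.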
by this means, the spindle fixing elements 46 a , 48 a , 50 a , 52 a , 54 a , 56 a are clamped between the clamping contours 58 a , 60 a , 62 a , 128 a , 130 a , 132 a of the driver element 38 a and the clamping element 122 a . a rotational movement of the spindle 16 a is therefore prevented. however, it is also conceivable in this case for the spindle fixing elements 46 a , 48 a , 50 a , 52 a , 54 a , 56 a to each be prestressed by means of a force, in particular a spring force, in the direction of a clamping position and, when a transfer of the spindle fixing unit 20 a is activated by means of the common activating element 22 a , to be moved into the clamping position as a consequence of the force. when the portable power tool 12 a is set into operation, at least one of the spindle fixing elements 46 a , 48 a , 50 a , 52 a , 54 a , 56 a is moved by a drive force acting on the output element 66 a as a consequence of a rotation of the common activating element 22 a out of a clamping position between the clamping element 122 a and the clamping contours 58 a , 60 a , 62 a , 128 a , 130 a , 132 a by means of at least one adjustment element 136 a , 138 a , 140 a of the common activating element 22 a . the adjustment element 136 a , 138 a , 140 a is designed as a radial extension. overall, the common activating element 22 a has three adjustment elements 136 a , 138 a , 140 a which have an analogous configuration. the adjustment elements 136 a , 138 a , 140 a are arranged distributed uniformly in the circumferential direction 100 a on the common activating element 22 a . 
at least three spindle fixing elements 46 a , 48 a , 50 a , 52 a , 54 a , 56 a are therefore moved by a drive force acting on the output element 66 a as a consequence of a rotation of the common activating element 22 a out of a clamping position between the clamping element 122 a and the clamping contours 58 a , 60 a , 62 a , 128 a , 130 a , 132 a by means of the adjustment elements 136 a , 138 a , 140 a . as a consequence of a rotational movement of the driver element 38 a , the remaining spindle fixing elements 46 a , 48 a , 50 a , 52 a , 54 a , 56 a are positioned in a position relative to the clamping element 122 a in such a manner that clamping between the clamping element 122 a and the clamping contours 58 a , 60 a , 62 a , 128 a , 130 a , 132 a is avoided. furthermore, when the portable power tool 12 a is set into operation, it is intended to be ensured that the braking element 24 a and the counterbraking element 120 a are reliably disengaged and contact between the braking element 24 a and the counterbraking element 120 a is reliably cancelled. for this purpose, the braking unit 14 a comprises at least one spring element (not illustrated specifically) which acts upon the braking element 24 a with a spring force in the direction of the driver element 38 a . however, it is also conceivable for the power tool braking device 10 a to comprise at least one movement unit which, in at least one operating state, is provided for moving the braking element 24 a in the direction of the driver element 38 a at least in order to produce a force component in the direction of the driver element 38 a , such as, for example, by means of a cam mechanism, by means of ramp elements which have an opposed pitch to the lifting elements 114 a , 116 a , 118 a , etc. furthermore, the power tool braking device 10 a is designed as an installation module 144 a ( fig. 3 ). 
the installation module 144 a comprises four fastening elements (not illustrated specifically here) which are designed as screws. the screws are provided for connecting the installation module 144 a releasably to the gear housing 74 a . when required, an operator can remove the installation module 144 a from the gear housing 74 a . the portable power tool 12 a and the power tool braking device 10 a therefore form a power tool system. the power tool system can comprise a further installation module. the further installation module can comprise, for example, an output unit which is designed as an angular mechanism and is formed in a manner decoupled from a braking unit. the further installation module could be fitted, for example by an operator, as an alternative to the installation module 144 a to the gear housing 74 a . an operator therefore has the possibility of equipping the portable power tool 12 a with the installation module 144 a or with the further installation module with an output unit which is decoupled from a braking unit. for a use situation in which the portable power tool 12 a is intended to be operated in a manner decoupled from the power tool braking device 10 a , the installation module 144 a can be interchanged for the further installation module of the power tool system by an operator. for this purpose, an operator merely removes the installation module 144 a from the gear housing 74 a and fits the further installation module to the gear housing 74 a. figs. 8 and 9 illustrate an alternative exemplary embodiment. components, features and functions substantially remaining the same are basically denoted by the same reference numbers. in order to distinguish between the exemplary embodiments, the letters a and b are added to the reference numbers of the exemplary embodiments. the description below is substantially restricted to the differences over the first exemplary embodiment described in figs. 
1 to 7 , wherein reference can be made to the description of the first exemplary embodiment in figs. 1 to 7 with regard to components, features and functions remaining the same. fig. 8 shows a power tool braking device 10 b which has been removed from a gear housing of a portable power tool (not illustrated specifically here). the power tool braking device 10 b here is arrangeable in a portable power tool (not illustrated specifically here) which has an at least substantially analogous configuration to the portable power tool 12 a described in figs. 1 to 7 . the power tool braking device 10 b of the portable power tool comprises at least one braking unit 14 b for braking a spindle 16 b and/or a machining tool (not illustrated specifically here) in at least one braking position of the braking unit 14 b , and at least one spindle fixing unit 20 b for fixing the spindle 16 b in at least one fixing position of the spindle fixing unit 20 b . the braking unit 14 b and the spindle fixing unit 20 b are at least partially formed as a single piece. in comparison to the power tool braking device 10 a described in figs. 1 to 7 , the power tool braking device 10 b comprises at least one damping unit 146 b for damping torque surges. the damping unit 146 b here comprises at least one damping element 148 b , 150 b , 152 b which is provided in order to damp vibrations in an output unit 64 b of the power tool braking device 10 b . the damping element 148 b , 150 b , 152 b here can be formed from elastomer, from a gel pad with viscous liquid or from another material appearing expedient to a person skilled in the art. the damping unit 146 b comprises a total of three damping elements 148 b , 150 b , 152 b . however, it is also conceivable for the damping unit 146 b to comprise a number of damping elements 148 b , 150 b , 152 b differing from three. 
the damping elements 148 b , 150 b , 152 b here are in each case arranged in a recess 32 b , 34 b , 36 b of a driver element 38 b of the braking unit 14 b . in this case, the damping elements 148 b , 150 b , 152 b are in each case arranged, as viewed in a circumferential direction 100 b , between an edge region of the recesses 32 b , 34 b , 36 b and a coupling region 26 b , 28 b , 30 b , which engages in the respective recess 32 b , 34 b , 36 b , of a common activating element 22 b of the braking unit 14 b and of the spindle fixing unit 20 b . in an alternative configuration (not illustrated here) of the power tool braking device 10 b , two damping elements 148 b , 150 b , 152 b are in each case arranged in a recess 32 b , 34 b , 36 b , wherein in each case one coupling region 26 b , 28 b , 30 b is arranged between the two damping elements 148 b , 150 b , 152 b in the respective recess 32 b , 34 b , 36 b , as viewed in the circumferential direction 100 b . with regard to further features and functions of the power tool braking device 10 b , reference should be made to the power tool braking device 10 a described in figs. 1 to 7 .
homogeneous model of heterogeneous product lifecycle data
a method and system is disclosed for modeling product data related to lifecycle of a product, including an application program interface configured to connect with one or more data sources of different types via one or more computer based product management tools. a digital twin graph is constructed to include a plurality of graphical models of product data with related nodes inter-linked by edges via a linking algorithm. models of the digital twin graph include an ontological model having nodes of ontological information related to the product data, an instance model having instance nodes related to the product data, and a probabilistic model having conditional probability distribution nodes from which causal and predictive reasoning information is generated.
claims what is claimed is: 1 . a system for modeling product data related to lifecycle of a product, comprising: at least one server, comprising: an application program interface configured to connect with one or more data sources of different types via one or more computer based product management tools; and at least one processor configured to: construct a digital twin graph comprising a plurality of graphical models of product data, each model having nodes and edges, each node having a uniquely identifiable label, each edge being directional or bi-directional, the models comprising: an ontological model having nodes of ontological information related to the product data; an instance model having instance nodes related to the product data, each instance node generated in response to receiving new product data; and a probabilistic model having conditional probability distribution nodes from which causal and predictive reasoning information is generated; and execute a linking algorithm to construct edges that inter-link data determined to be related between a pair of models. 2. the system of claim 1 , wherein the instance model includes at least one digital twin unit comprising: a payload with pointer values corresponding to a location of data stored in an external data store; and a characteristic feature extracted from the payload. 3. the system of claim 2, further comprising a distiller algorithm configured to extract the characteristic feature. 4. the system of claim 2, wherein at least one digital twin unit includes product data related to one of engineering-at-work data, computer aided design (cad) data, engineering tool code, or human-product interaction. 5. the system of claim 1 , wherein the ontological information defines a set of concepts, categories, relationships, or a combination thereof, for the product data. 6. 
the system of claim 1 , wherein the linking algorithm inter-links the instance model and the probability model by searching instance nodes and obtaining evidence for a conditional probability distribution node of the probability model. 7. the system of claim 1 , wherein the processor is further configured to generate and record the plurality of models at intervals in a time series to form a temporal evolution of the plurality of models, the system further comprising a database for storing the temporal evolution. 8. the system of claim 1 , wherein the processor is further configured to execute an algorithm that triggers a simulation by a first pdm system and sends the result to a second pdm system, and the transaction is recorded in the digital twin graph. 9. the system of claim 1 , wherein the processor is further configured to execute an algorithm that deploys a pseudocode to a controller based on the topography of the digital twin graph. 10. the system of claim 1 , wherein the processor is further configured to execute algorithms that combine sensor data with simulation data to construct a diagnostic model with parameterized data, generate new control parameters, and generate a service interval schedule based on the diagnostic model. 11. 
a method for modeling product data related to lifecycle of a product, comprising: using an application program interface to connect with one or more data sources of different types via one or more computer based product management tools; constructing a digital twin graph comprising a plurality of graphical models of product data, each model having nodes and edges, each node having a uniquely identifiable label, each edge being directional or bi-directional, the models comprising: an ontological model having nodes of ontological information related to the product data; an instance model having instance nodes related to the product data, each instance node generated in response to receiving new product data; and a probabilistic model having conditional probability distribution nodes from which causal and predictive reasoning information is generated; and executing a linking algorithm to construct edges that inter-link data determined to be related between a pair of models. 12. the method of claim 11 , wherein the instance model includes at least one digital twin unit comprising: a payload with pointer values corresponding to a location of data stored in an external data store; and a characteristic feature extracted from the payload. 13. the method of claim 12, further comprising executing a distiller algorithm to extract the characteristic feature. 14. the method of claim 12, wherein at least one digital twin unit includes product data related to one of engineering-at-work data, computer aided design (cad) data, engineering tool code, or human-product interaction. 15. the method of claim 11 , wherein the ontological information defines a set of concepts, categories, relationships, or a combination thereof, for the product data. 16. the method of claim 11 , wherein the linking algorithm inter-links the instance model and the probability model by searching instance nodes and obtaining evidence for a conditional probability distribution node of the probability model. 
17. the method of claim 11 , further comprising: generating and recording the plurality of models at intervals in a time series to form a temporal evolution of the plurality of models, and storing the temporal evolution in a database. 18. the method of claim 11 , further comprising: executing an algorithm that triggers a simulation by a first pdm system and sends the result to a second pdm system, and recording the transaction in the digital twin graph. 19. the method of claim 11 , further comprising executing an algorithm that deploys a pseudocode to a controller based on the topography of the digital twin graph. 20. the method of claim 11 , further comprising executing algorithms that combine sensor data with simulation data to construct a diagnostic model with parameterized data, generate new control parameters, and generate a service interval schedule based on the diagnostic model.
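claims 2, 3, 12 and 13 describe a digital twin unit whose payload holds pointer values into an external data store and whose characteristic feature is extracted by a distiller algorithm. a minimal sketch of such a unit, with a hypothetical distiller that reduces the resolved payload to a single numeric feature (all names, the in-memory store and the mean-value feature are illustrative assumptions, not taken from the claims):

```python
from dataclasses import dataclass, field
from typing import Callable, Optional

@dataclass
class DigitalTwinUnit:
    # payload: pointer values corresponding to locations of data in an external store
    payload: dict = field(default_factory=dict)
    # characteristic feature, filled in by the distiller
    feature: Optional[float] = None

def distiller(unit: DigitalTwinUnit, resolve: Callable[[str], float]) -> DigitalTwinUnit:
    """hypothetical distiller: resolves the payload pointers and extracts
    one characteristic feature (here simply the mean of the resolved values)."""
    values = [resolve(ptr) for ptr in unit.payload.values()]
    unit.feature = sum(values) / len(values)
    return unit

# usage with a toy in-memory stand-in for the external data store
store = {"store://a": 2.0, "store://b": 4.0}
unit = DigitalTwinUnit(payload={"sensor": "store://a", "sim": "store://b"})
unit = distiller(unit, store.get)  # unit.feature == 3.0
```

keeping only pointers in the payload matches the claims' separation between the graph and the external data store: the raw data stays where it is, and only the distilled feature lives in the unit.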
homogeneous model of hetergeneous product lifecycle data technical field [0001] this application relates to product lifecycle data. more particularly, this application relates to graphical modeling of heterogeneous product lifecycle data. background [0002] there can be a tremendous amount of data generated related to lifecycle of a product (or production system or process), from product conception, to its design, production, and service, until the moment it ceases to exist or to function. in addition to the large volume of the data, the variety and heterogeneous nature of the data continues to expand as more and more sources of data are introduced to keep up with technology and market demands. product data management (pdm) systems have been developed to aggregate product data over its lifecycle. pdm systems provide built-in functionality to find data, to create variants of the data, label the data for classification, and store the data. conventional pdm systems typically handle design and engineering data, but fail to account for operational data generated while products and systems are in use. time-series database systems have been developed for storing operational data. while there is a vast landscape of product data stored in various repositories, the data is fragmented, and linking related data from such incongruous sources in an accurate and useful way with conventional tools is very difficult, if not impossible. moreover, without a reliable model that can establish links, and extract conceptual knowledge from the linked data, there is currently no practical mechanism available to develop inferential information, which is essential for product lifetime management factors, such as failure diagnosis, prediction of failure, and degradation, among others. 
brief description of the drawings [0003] non-limiting and non-exhaustive embodiments of the present disclosure are described with reference to the following figures, wherein like reference numerals refer to like elements throughout the drawings unless otherwise specified. [0004] fig. 1 illustrates an example of a temporally evolved digital twin graph (dtg) model constructed by interlinking an ontology model, an instance model, and a probabilistic graph model of heterogeneous product data. [0005] fig. 2 illustrates an example of interconnections between an ontology model and an instance model of a dtg according to one or more embodiments of the disclosure. [0006] fig. 3 illustrates an example of interconnecting data stores maintained by multiple product data tools in accordance with one or more embodiments of the disclosure. [0007] fig. 4 illustrates an example of a dtg supporting diagnostic analysis in accordance with one or more embodiments of the disclosure. [0008] fig. 5 illustrates an example of a dtg supporting control code deployment in accordance with one or more embodiments of the disclosure. [0009] fig. 6 illustrates an example of a dtg that provides simulation assisted prognostics based on data-driven and simulation-based models in accordance with one or more embodiments of the disclosure. [0010] fig. 7 illustrates an example of a dtg that provides simulation assisted prognostics based on data-driven and simulation-based models in accordance with one or more embodiments of the disclosure. [0011] fig. 8 illustrates an example of a computing environment within which embodiments of the present disclosure may be implemented. detailed description [0012] fig. 1 illustrates an example of a temporally evolved digital twin graph (dtg) constructed by interlinking formalisms that include an ontology model, an instance model, and a probabilistic graph model of heterogeneous product data. 
each model of the dtg may organize information and optimize for search, data streaming, inference, reasoning, and learning. each model may be hosted centrally, distributed, or on the edge. as shown, the dtg 100 may comprise instance model 110, ontology model 120, and probabilistic graph model 130, and may be extended to integrate new formalistic models. the instance model 110 includes instance nodes that represent physical world entities by a one-to-one mapping. the ontology model 120 includes nodes that define a set of concepts and categories, linked by their relationships. the probabilistic graph model 130 includes one or more nodes to implement a highly flexible mechanism for the integration of evidence from multiple sources. the probabilistic graph model 130 may enable probabilistic reasoning and causal exploration of the product data. [0013] an api 145 may provide abstractions to simplify user access to internal structures of the dtg. the api 145 may provide a unified interface for interaction between the dtg 100 and various product data tools, such as tools implementing product data management (pdm). algorithms 140 may execute searches of product data among the different models. for example, knowledge of the ontology model 120 may be extracted to initiate prognostic or diagnostic reasoning, and simulations where additional product data needs to be extracted to support probabilistic modeling. algorithms 140 may also construct and maintain the various links between nodes in the dtg 100. such links may be one-to-one, one-to-many, or many-to-many relationships between the models, allowing model-specific algorithms to combine knowledge and insights globally. the dtg 100 represents a temporal evolution of the models 110, 120 and 130, as shown in the time series 180 of dtg snapshots. the temporal evolution allows access to historical generation and expiration of nodes, edges and links, and may be maintained and tracked in a system memory.
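the temporal evolution described above can be sketched as a labeled graph recorded as a time series of snapshots. this is a minimal illustrative structure, not the patent's implementation; the snapshot encoding and the historical query are assumptions.

```python
import copy

class TemporalDTG:
    """minimal sketch of a digital twin graph whose evolution is kept
    as a time series of snapshots, in the spirit of the dtg snapshots 180."""

    def __init__(self):
        self.nodes = {}      # uid -> {"label": ..., "model": ...}
        self.edges = {}      # uid -> (src_uid, dst_uid, label)
        self.snapshots = []  # list of (timestamp, nodes, edges)

    def add_node(self, uid, label, model):
        self.nodes[uid] = {"label": label, "model": model}

    def add_edge(self, uid, src, dst, label):
        self.edges[uid] = (src, dst, label)

    def snapshot(self, timestamp):
        # deep-copy so later mutations do not rewrite history
        self.snapshots.append(
            (timestamp, copy.deepcopy(self.nodes), copy.deepcopy(self.edges)))

    def nodes_created_between(self, earlier_idx, later_idx):
        """historical query: which nodes appear in the later snapshot
        but not in the earlier one."""
        earlier = set(self.snapshots[earlier_idx][1])
        later = set(self.snapshots[later_idx][1])
        return later - earlier
```

such a store supports the historical-generation queries mentioned above, e.g. asking which nodes or edges were created between two recorded points in time.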
storing the time series snapshots 180 permits the inference of cause-effect relationships in the product data over time, and the prediction of nodes or edges on the dtg 100 based on observation of historical dtg snapshots. in an embodiment, the maintenance may be implemented using blockchain. the dtg 100 may support data-driven and/or model-driven construction of the models 110, 120, 130. [0014] the dtg 100 may be defined as a graph g=(v, e), where v is a set of uniquely identifiable labeled nodes, and e is a set of uniquely identifiable labeled edges. edges may be directed or symmetric (bi-directional). each model 110, 120, 130 may provide a different modeling and functional capability to the digital twin representation of the product data. [0015] as shown in fig. 1, the instance model 110 may include instance nodes inter-linked to one another, some of which are related to a digital twin unit (dt unit). herein, a dt unit is an entity of the dtg that updates very frequently to feed new information to the dtg. each dt unit may be linked to data related to a physical twin, and may be constructed to include a payload with pointers to the location of the data as stored in a data store (e.g., a database located in one or more local or remote servers). the dt unit also includes characteristic feature information extracted from the payload by a distiller. examples of physical twins that generate the data linked to the dt units may include engineering data, tools, real world objects, and interactions thereof. the dtg 100 includes dt unit 161 for engineering data generated by physical twin 151, dt unit 162 related to engineering tool 152 (e.g., cad data, control code, etc.), and dt unit 165 related to product-in-use data such as human interaction 155. instance nodes 117 and 118 are related to a physical twin crane 153 and a physical twin object 154. [0016] instance nodes may be interlinked with edges by an algorithm 140.
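a minimal sketch of such a linking algorithm follows. matching an instance node's type label against ontology concept labels is an illustrative assumption; a production linker could use rules, heuristics, or learning.

```python
def link_instances_to_ontology(instance_nodes, ontology_nodes):
    """establish an 'is a' edge whenever an instance node's type label
    matches an ontology concept label (case-insensitive)."""
    concept_by_label = {node["label"].lower(): uid
                        for uid, node in ontology_nodes.items()}
    links = []
    for uid, node in instance_nodes.items():
        concept_uid = concept_by_label.get(node["type"].lower())
        if concept_uid is not None:
            links.append((uid, "is a", concept_uid))
    return links
```

the returned triples correspond to edge links between the instance model and the ontology model; a one-to-many or many-to-many variant would simply return more than one concept per instance.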
for example, algorithm 141 may establish edge 115 upon recognition of a relationship between instance nodes. algorithms 140 may also establish edge links between instance nodes and ontology nodes, as shown between nodes 114 and 123, and between nodes 111 and 121. algorithms 140 may also construct the dtg by establishing links between ontology nodes 122, 121 and probabilistic graph nodes 132, 131. a further link is shown between nodes 133 and 113, linking the instance model 110 to the probabilistic graph model 130. [0017] fig. 2 illustrates an example of interconnections between an ontology model and an instance model of a dtg according to one or more embodiments of the disclosure. the dtg 200 may contain various inter-linked relationships within the instance model 110, and to the ontology model 120 and the probabilistic graph model 130, which allows access to a greater volume of digital information related to a physical twin, such as vehicle 201. in this example, product data relating to a model of a car, such as a model b, is represented by the temporal dtg 200. one physical twin 201 is represented by digital twin instance node 211, labeled by its vehicle identification number (vin) b11, in the instance model 110. an instance node 214 for model b may be linked to various instance nodes of particular vehicles, such as nodes 211, 212, 213 for vehicles vin b11, vin b22, vin b33 respectively. the ontology model 120 may represent the concepts generally relating to a car, such that each node represents a related concept, and each edge is the linking relationship. for example, car node 221 may be linked to vehicle node 223 by edge 'is a' 261. vehicle node 223 may be linked to transport node 225 by an edge 'provides' 263. transport node 225 may be linked to people node 224 by an edge 'moves' 264, and an edge 'contains' 265 may link car node 221 to people node 224.
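the dt unit described in paragraph [0015] — a payload of pointers into an external data store plus characteristic features produced by a distiller — can be sketched as follows. the store layout and the speed distiller are illustrative assumptions, not the patent's schema.

```python
class DTUnit:
    """a payload holding a pointer into an external data store, plus
    characteristic features extracted from that data by a distiller."""

    def __init__(self, uid, payload):
        self.uid = uid
        self.payload = payload   # e.g. {"store": "timeseries", "key": "vin_b11/speed"}
        self.features = {}

    def distill(self, data_store, distiller):
        raw = data_store[self.payload["key"]]   # dereference the pointer
        self.features = distiller(raw)
        return self.features

def speed_distiller(trace):
    # reduce a raw speed trace to characteristic features, e.g. the
    # average-speed characteristic distilled from a time-series store
    return {"avg_speed": sum(trace) / len(trace), "max_speed": max(trace)}
```

only the small feature dictionary lives in the graph; the bulk data stays in the external store and is reached through the payload pointer.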
there may be several links between the instance model 110 and the ontology model 120, including link 266 between model b node 214 and car node 221, which represents the notion that model b is a car. each instance node may have links to dt units, such as dt unit 214 and dt unit 215 related to product-in-use data generated by different sensors, such as an engine sensor for dt unit 214 and an abs sensor for dt unit 215, related to the instance node 211. for each dt unit, payload information 291 stored in a data store may be extracted by a distiller to produce the characteristic information 292 contained in the dt unit. for example, an average speed characteristic of dt unit 214 for the vehicle of instance node 211 may be distilled from a data store 282 of a time-series based plm tool (e.g., siemens mindsphere) using a pointer stored in the payload 291 of the dt unit 214. [0018] the model instance node 214 may represent a blueprint for all corresponding vehicles of that model. the model instance node 214 may be linked to dt units tied to data stores, such as an aggregated plm data tool. for example, cad dt units 217 may contain characteristic information such as relevant geometric properties (e.g., height, wheelbase distance), and payload information as a link to the actual cad file stored in an aggregated-type plm tool database 281 (e.g., siemens teamcenter or nx). the sysml dt units 216 may contain relevant architectural properties such as subsystem hierarchy. by linking design information, such as cad-related dt units 217 and sysml dt units 216, to the model instance node 214, a vast amount of product information corresponding to a particular vehicle may be accessed. for example, as defined in the system modeling language (sysml) of dt unit 216, the car model was designed to have four tires. each vehicle instance node may include a link to instance nodes for the four tires currently installed on the vehicle.
for example, the physical tire 209 has a corresponding instance node 219, identified by the tire model 99, and linked to the car instance node 211. [0019] fig. 3 illustrates an example of interconnecting data stores maintained by multiple product data tools in accordance with one or more embodiments of the disclosure. in this example, one or more dtg algorithms 140 may perform reasoning tasks among the inter-linked instance model 110, ontology model 120, and probabilistic graph model 130, triggering roundtrip requests related to a data inquiry. the dtg 300 includes a dt unit 314 linked to database 381 of a time-series based plm tool, a dt unit 315 linked to a database 382 of an aggregated based plm tool, and a probabilistic graph model node 331 linked to simulation database 383. using a plm software tool stored at a server 392, a user may submit an inquiry 393 for a particular predictive report via gui 391. the api 145 may route the inquiry to an appropriate algorithm 341 for the aggregated data store 382. the algorithm 341 may first search for all instance nodes in the instance model 110 that satisfy the query constraints to get a historical perspective. the algorithm 341 may link to dt unit 314. the algorithm 341 may find a probabilistic model 331 that relates to the inquiry, but lacks enough evidence, because only two instances of evidence 351 and 352 were found to support the probabilistic graph model 331. the algorithm 341 may then trigger a roundtrip to a simulation tool 394 (e.g., siemens simcenter) to execute a simulation, and forward the results 395 as evidence in the probabilistic graph model 331. algorithm 341 may also initiate a report that sends the result of the probabilistic graph model 331 to the user via the api 145, the product tool 392, and gui 391. [0020] fig. 4 illustrates an example of a dtg supporting diagnostic analysis in accordance with one or more embodiments of the disclosure.
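the roundtrip of paragraph [0019] can be sketched as follows: search matching instance nodes, and if the probabilistic model lacks sufficient evidence, trigger an external simulation and fold its result in as additional evidence. `run_simulation` stands in for a call to an external tool, and aggregating evidence by averaging is an illustrative assumption.

```python
def answer_inquiry(instance_nodes, constraint, evidence, min_evidence,
                   run_simulation):
    """sketch of an inquiry roundtrip: instance search, evidence check,
    and an on-demand simulation when evidence is insufficient."""
    matches = [uid for uid, node in instance_nodes.items() if constraint(node)]
    pooled = list(evidence)
    if len(pooled) < min_evidence:
        pooled.append(run_simulation(matches))   # roundtrip to the tool
    estimate = sum(pooled) / len(pooled)
    return {"instances": matches, "estimate": estimate}
```

the returned dictionary plays the role of the predictive report sent back to the user through the api and the product tool.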
in this example, the dtg 400 is modeling an instance model 110, ontology model 120 and probabilistic graph model 130 related to data for a car experiencing a maintenance alarm. physical twin 402 is a vehicle identified by vin 222 and has an instance node 412. upon detection of an alarm condition by an abs sensor in the vehicle, a time series database 481 receives the information via a wireless telemetry signal. an algorithm 441 may be triggered to create a new dt unit 452 for the abs sensor data as a pointer to the new data in database 481, linked to telemetry instance 416. at a later time, the vehicle is brought to a service station, and a service technician may search the dtg 400 for diagnosing the trouble and making any required repairs. using knowledge from the ontology model 120, an inquiry to the dtg 400 may trigger algorithm 442 to search and locate an existing 3d model, which has an instance node 411 linked to database 482 of an aggregated product data tool, which highlights the abs subsystem for the model of the vehicle. algorithm 442 may further search and locate within the dtg 400 any log files related to an abs alarm, which identifies the dt units 451, 452, indicating there have been two such alarms. the dt units 451, 452 have payloads that point to database modules 491 and 492 in database 481 of a time-series based product data tool. the search of dtg 400 may also locate any simulations performed relating to the abs subsystem for this model vehicle, such as instance node 419 for simulation data of the product simulation tool database 483. within the simulation database 483, there are two relevant simulation animations 493, 494 which show a connection to deflated tires. from this information, the cause of the abs alarm may be diagnosed as deflated tires based on the results of the dtg 400 data model application.
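the diagnostic flow of paragraph [0020] can be sketched as a lookup-or-search: if the ontology already encodes a cause for the symptom, return it directly; otherwise run the expensive search over linked dt units and simulations, then record the diagnosed cause as a new knowledge link so future alarms resolve in fewer steps. the symptom/cause strings and the search callback are illustrative assumptions.

```python
def diagnose(symptom, knowledge_links, search_linked_evidence):
    """return (cause, how_found); memoize newly diagnosed causes as
    ontology knowledge links for reuse on similar vehicles."""
    if symptom in knowledge_links:          # knowledge already encoded
        return knowledge_links[symptom], "ontology link"
    cause = search_linked_evidence(symptom) # full search over dt units,
                                            # log files, and simulations
    knowledge_links[symptom] = cause        # e.g. the new knowledge link
    return cause, "full search"
```

the first call pays for the full search; every later call on the same symptom answers from the recorded ontology link.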
the temporal dtg 400 may be updated with the new information by encoding the ontology node 423 for a tire with a new knowledge link 426 to abs system node 424. any future alarms in other vehicles, such as vehicle 403, may then use the new knowledge via a link between instance model 110 and ontology model 120, such as link 427 to instance node 413. because a new link has been recorded in the ontology model 120 of the dtg 400, the information can be located with fewer steps for such alarms in vehicles of a similar model, related to car model instance node 461, having the benefit of the searching performed for vehicle 402. [0021] fig. 5 illustrates an example of a dtg supporting control code deployment in accordance with one or more embodiments of the disclosure. in this example, a dtg 500 includes an instance model 110, ontology model 120 and probabilistic graph model 130 related to abs controllers for a vehicle. the physical twins include wheels w1, w2, brakes b1, b2, sensors s1, s2, abs controllers 501, 502, brake controller 503, and control logic codes 504, 505, 506. a control program may be written with the assistance of the interconnected data for the control architecture specified in the ontology model 120, and then deployed to the dtg 500. a wheel node 521 has links to an angular velocity (av) sensor node 522 and a brake node 524. an abs controller node 523 has links to the brake node 524, angular velocity sensor node 522 and control program node 525. using the ontology knowledge of the configuration of these control elements, a design engineer, via a gui, may instantiate the ontology by constructing corresponding instance nodes, such as node 511 for wheel w1, node 551 for av sensor s1, node 561 for brake b1, and node 571 for abs controller a1. a gui application may permit the design engineer to copy the instance nodes related to wheel w1 and paste them as instance nodes for wheel w2 (i.e., nodes 512, 552, 562, 572).
at this point, each of the physical twins has a corresponding digital twin node in the instance model 110. pseudocode for control logic code 504 may be written during the engineering phase based on the knowledge of the ontology model 120. using api 145, the instance node 581 for the control logic code 504 may be deployed to the dtg 500 with an edge 585 to instance node 571 for abs controller a1. as an example of how the edge may be constructed, an application tool may prompt the design engineer as to which object the control logic code is written for, and in reply, the node id 'a1' may be entered by typing or by operating a displayed pull-down menu, or the like. in a similar process, the control logic code 505 for the brake controller 503, and the control logic code 506 for the abs controller 502, may be written and deployed to the instance model 110. the control logic code 506 may be copied from control logic code 504, deployed as instance node 582, and then linked to instance node 572 by edge 586. the top-level control code 505 for brake controller 503 may be deployed to the dtg 500 as instance node 583 and linked by edges 587 and 588 to the instance nodes 581 and 582, mirroring the physical twin arrangement of controllers 501, 502 and 503. an embedded system such as an electronic control unit (ecu) may host subgraphs of the instance model 110 of the dtg 500 to provide the context to a controller necessary to perform the control task. for example, an ecu for abs controller 501 may host a subgraph of instance nodes 511, 551, 561, 571 and 581. an ecu for the brake controller 503 may host a subgraph of instance nodes 511, 561, 571, 581, 512, 562, 572, 582 and 583. thus, as demonstrated by the dtg 500, control programs may be modeled in a dtg and deployed to controllers using a portion (i.e., subgraph) of the dtg. [0022] fig.
6 illustrates an example of a dtg that provides simulation-assisted prognostics based on data-driven and simulation-based models in accordance with one or more embodiments of the disclosure. in this example, dtg 600 applies simulations of the instance model 110 combined with sensor data to derive a prognostic analysis for a hybrid car. the instance model includes node 611 for car a, node 612 for car b and node 613 for car c, each linked to instance node 614 for the same car model. an instance node 615 for a hybrid drivetrain battery is linked to the ontology model 120 relating to battery knowledge via battery node 625. for example, the ontology model 120 may include a car node 627 linked to diagnostics node 628 and repair history node 629. a measurement node 626 may be linked to various types of relevant measurement nodes. during a scheduled maintenance service visit for car a, an algorithm 641 may retrieve state of charge (soc) sensor data for cars a, b, c in order to estimate the remaining life of the drivetrain battery in car a. algorithm 641 may create a predictive battery model instance 619 by combining soc sensor data 656 with battery simulation data 658 related to the car model, and linked to the instance node 614 for the car model. a dt unit 659 for the battery model data may be parameterized according to the data for car a. a diagnostic analysis for the hybrid car battery may indicate several options for a service plan based on the results. for example, new parameters 657 may be recommended for battery controllers, represented by instance node 617. a service schedule node 651 may add dt unit 661 data for a maintenance order to return to a service center within a particular service interval. [0023] fig. 7 illustrates an example of a dtg that provides simulation-assisted prognostics based on data-driven and simulation-based models in accordance with one or more embodiments of the disclosure.
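the simulation-assisted battery prognosis of paragraph [0022] can be sketched as a blend of a data-driven decay rate (from successive soc capacity readings) with a simulation-based rate, projected forward to a capacity floor. the linear-decay model, the blending weight, and all parameter names are illustrative assumptions, not the patent's method.

```python
def battery_prognosis(soc_capacity_history, sim_decay_per_visit, weight,
                      soc_floor):
    """blend measured and simulated decay rates, then project how many
    more service intervals remain before capacity reaches soc_floor."""
    # data-driven decay: average capacity drop between successive visits
    drops = [a - b for a, b in zip(soc_capacity_history,
                                   soc_capacity_history[1:])]
    data_decay = sum(drops) / len(drops)
    blended = weight * data_decay + (1 - weight) * sim_decay_per_visit
    remaining = (soc_capacity_history[-1] - soc_floor) / blended
    return blended, remaining
```

the remaining-interval estimate is the kind of parameterized result that could feed a service schedule entry or recommended controller parameters.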
in an embodiment, a dtg 700 implements a probabilistic graphical modeling of manufacturing processes to provide a probabilistic reasoning framework, such as variables implemented in a bayesian network. for this example, reasoning patterns may be developed in a predictive top-down flow or in an evidential bottom-up flow using probabilistic reasoning tied to the quality of a manufactured product of a 3d printer. an instance model 110 may include a lab node 711 and a scanner node 713 for a physical twin laser scanner 701. the probabilistic graph model 130 includes nodes that represent random variables, including printer type 731, material 732, layer thickness 733, manufacture duration 734, design/manufacture grade 735, and quality 736. random variables may or may not be physical entities (e.g., quality), and may provide a straightforward abstraction to any other nodes in the ontology model 120 or instance model 110. the probabilistic values associated with each node may define important properties or attributes of the world. the random variables for each node may be defined as follows:

printer type = {mb, st, 3dw}
material = {pla, abs}
layer thickness = {500, 200, 100}
manufacture duration = {short, medium, long}
design/manufacture grade = {*, **, ***}
quality = {pass, fail}

[0024] probabilistic nodes may be linked to instance nodes for receiving data. for example, a link exists between design/manufacture grade node 735 and scanner node 713, which relates to physical twin scanner 701 that scans the manufactured product for tolerances to design specifications. directed edges between the nodes may represent dependence assertions on random variables. for example, edges 772, 773 represent the dependency of layer thickness 733 on material 732 and printer type 731 nodes. conditional probability distributions (cpds) represent joint distributions.
for example, probability p values may be tabulated for the expression p(material | type of printer) as follows: table 1 [0025] cpds support a data-driven approach to model construction. evidence that populates the cpd may include: no prior knowledge at all, expert knowledge, field data, simulation results, engineer-at-work or product-in-use time series, sensor data, and other sources. for example, evidence 774 for the cpd of table 1 may be provided from production data dt unit 712 linked to lab instance node 711 as shown in fig. 7. in an embodiment, causal reasoning or prediction (i.e., cause - effects) may be derived from the dtg 700 to determine a probability of a pass quality when given the following variable values: mb printer type, pla material, 500 micron layer thickness, which follows the probabilistic graph model 130 from top to bottom. in an embodiment, evidential reasoning or explanation (i.e., effects - reasons) may be derived from the dtg 700 to determine a probability of the printer type being st when given the following variable value: a product produced with "failed" quality, which follows the probabilistic graph model 130 from bottom to top. [0026] fig. 8 illustrates an example of a computing environment within which embodiments of the present disclosure may be implemented. a computing environment 800 includes a computer system 810 that may include a communication mechanism such as a system bus 821 or other communication mechanism for communicating information within the computer system 810. the computer system 810 further includes one or more processors 820 coupled with the system bus 821 for processing the information. [0027] the processors 820 may include one or more central processing units (cpus), graphical processing units (gpus), or any other processor known in the art.
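returning to the reasoning patterns of paragraph [0025], the top-down (causal) and bottom-up (evidential) flows can be sketched on a two-variable slice of the fig. 7 network: p(printer type) and p(quality | printer type). the numeric cpd values below are invented for illustration and do not reproduce table 1.

```python
# hypothetical cpds for a two-variable slice of the network
P_PRINTER = {"mb": 0.5, "st": 0.3, "3dw": 0.2}
P_QUALITY = {
    "mb":  {"pass": 0.9, "fail": 0.1},
    "st":  {"pass": 0.6, "fail": 0.4},
    "3dw": {"pass": 0.8, "fail": 0.2},
}

def causal(printer):
    """top-down prediction: p(quality = 'pass' | printer type)."""
    return P_QUALITY[printer]["pass"]

def evidential(quality):
    """bottom-up explanation via bayes' rule: p(printer type | quality)."""
    joint = {p: P_PRINTER[p] * P_QUALITY[p][quality] for p in P_PRINTER}
    z = sum(joint.values())
    return {p: v / z for p, v in joint.items()}
```

with these invented numbers, `causal("mb")` follows the model from top to bottom, while `evidential("fail")` follows it from bottom to top to rank which printer type most plausibly produced a failed part.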
more generally, a processor as described herein is a device for executing machine- readable instructions stored on a computer readable medium, for performing tasks and may comprise any one or combination of, hardware and firmware. a processor may also comprise memory storing machine-readable instructions executable for performing tasks. a processor acts upon information by manipulating, analyzing, modifying, converting or transmitting information for use by an executable procedure or an information device, and/or by routing the information to an output device. a processor may use or comprise the capabilities of a computer, controller or microprocessor, for example, and be conditioned using executable instructions to perform special purpose functions not performed by a general purpose computer. a processor may include any type of suitable processing unit including, but not limited to, a central processing unit, a microprocessor, a reduced instruction set computer (risc) microprocessor, a complex instruction set computer (cisc) microprocessor, a microcontroller, an application specific integrated circuit (asic), a field- programmable gate array (fpga), a system-on-a-chip (soc), a digital signal processor (dsp), and so forth. further, the processor(s) 820 may have any suitable microarchitecture design that includes any number of constituent components such as, for example, registers, multiplexers, arithmetic logic units, cache controllers for controlling read/write operations to cache memory, branch predictors, or the like. the microarchitecture design of the processor may be capable of supporting any of a variety of instruction sets. a processor may be coupled (electrically and/or as comprising executable components) with any other processor enabling interaction and/or communication there-between. 
a user interface processor or generator is a known element comprising electronic circuitry or software or a combination of both for generating display images or portions thereof. a user interface comprises one or more display images enabling user interaction with a processor or other device. [0028] the system bus 821 may include at least one of a system bus, a memory bus, an address bus, or a message bus, and may permit exchange of information (e.g., data (including computer-executable code), signaling, etc.) between various components of the computer system 810. the system bus 821 may include, without limitation, a memory bus or a memory controller, a peripheral bus, an accelerated graphics port, and so forth. the system bus 821 may be associated with any suitable bus architecture including, without limitation, an industry standard architecture (isa), a micro channel architecture (mca), an enhanced isa (eisa), a video electronics standards association (vesa) architecture, an accelerated graphics port (agp) architecture, a peripheral component interconnects (pci) architecture, a pci-express architecture, a personal computer memory card international association (pcmcia) architecture, a universal serial bus (usb) architecture, and so forth. [0029] continuing with reference to fig. 8, the computer system 810 may also include a system memory 830 coupled to the system bus 821 for storing information and instructions to be executed by processors 820. the system memory 830 may include computer readable storage media in the form of volatile and/or nonvolatile memory, such as read only memory (rom) 831 and/or random access memory (ram) 832. the ram 832 may include other dynamic storage device(s) (e.g., dynamic ram, static ram, and synchronous dram). the rom 831 may include other static storage device(s) (e.g., programmable rom, erasable prom, and electrically erasable prom). 
in addition, the system memory 830 may be used for storing temporary variables or other intermediate information during the execution of instructions by the processors 820. a basic input/output system 833 (bios) containing the basic routines that help to transfer information between elements within computer system 810, such as during start-up, may be stored in the rom 831 . ram 832 may contain data and/or program modules that are immediately accessible to and/or presently being operated on by the processors 820. system memory 830 may additionally include, for example, operating system 834, application programs 835, and other program modules 836. [0030] the operating system 834 may be loaded into the memory 830 and may provide an interface between other application software executing on the computer system 810 and hardware resources of the computer system 810. more specifically, the operating system 834 may include a set of computer-executable instructions for managing hardware resources of the computer system 810 and for providing common services to other application programs (e.g., managing memory allocation among various application programs). in certain example embodiments, the operating system 834 may control execution of one or more of the program modules depicted as being stored in the data storage 840. the operating system 834 may include any operating system now known or which may be developed in the future including, but not limited to, any server operating system, any mainframe operating system, or any other proprietary or non-proprietary operating system. [0031] the computer system 810 may also include a disk/media controller 843 coupled to the system bus 821 to control one or more storage devices for storing information and instructions, such as a magnetic hard disk 841 and/or a removable media drive 842 (e.g., floppy disk drive, compact disc drive, tape drive, flash drive, and/or solid state drive). 
storage devices 840 may be added to the computer system 810 using an appropriate device interface (e.g., a small computer system interface (scsi), integrated device electronics (ide), universal serial bus (usb), or firewire). storage devices 841, 842 may be external to the computer system 810. [0032] the computer system 810 may include a user input interface or gui 861, which may comprise one or more input devices, such as a keyboard, touchscreen, tablet and/or a pointing device, for interacting with a computer user and providing information to the processors 820. a graphical user interface (gui), as used herein, may include a display processor for generating one or more display images, and may enable user interaction with a processor or other device and associated data acquisition and processing functions. the gui also includes an executable procedure or executable application. the executable procedure or executable application conditions the display processor to generate signals representing the gui display images. these signals are supplied to a display device which displays the image for viewing by the user. the processor, under control of an executable procedure or executable application, manipulates the gui display images in response to signals received from the input devices. in this way, the user may interact with the display image using the input devices, enabling user interaction with the processor or other device. [0033] the computer system 810 may perform a portion or all of the processing steps of embodiments of the invention in response to the processors 820 executing one or more sequences of one or more instructions contained in a memory, such as the system memory 830. such instructions may be read into the system memory 830 from another computer readable medium, such as the magnetic hard disk 841 or the removable media drive 842.
the magnetic hard disk 841 may contain one or more data stores and data files used by embodiments of the present invention. the data stores may include, but are not limited to, databases (e.g., relational, object-oriented, etc.), file systems, flat files, distributed data stores in which data is stored on more than one node of a computer network, peer-to-peer network data stores, or the like. the data stores may store various types of data such as, for example, control data, sensor data, or any other data generated in accordance with the embodiments of the disclosure. data store contents and data files may be encrypted to improve security. the processors 820 may also be employed in a multi-processing arrangement to execute the one or more sequences of instructions contained in system memory 830. in alternative embodiments, hard-wired circuitry may be used in place of or in combination with software instructions. thus, embodiments are not limited to any specific combination of hardware circuitry and software. [0034] as stated above, the computer system 810 may include at least one computer readable medium or memory for holding instructions programmed according to embodiments of the invention and for containing data structures, tables, records, or other data described herein. the term "computer readable medium" as used herein refers to any medium that participates in providing instructions to the processors 820 for execution. a computer readable medium may take many forms including, but not limited to, non-transitory, non-volatile media, volatile media, and transmission media. non-limiting examples of non-volatile media include optical disks, solid state drives, magnetic disks, and magneto-optical disks, such as magnetic hard disk 841 or removable media drive 842. non-limiting examples of volatile media include dynamic memory, such as system memory 830.
non-limiting examples of transmission media include coaxial cables, copper wire, and fiber optics, including the wires that make up the system bus 821 . transmission media may also take the form of acoustic or light waves, such as those generated during radio wave and infrared data communications. [0035] computer readable medium instructions for carrying out operations of the present disclosure may be assembler instructions, instruction-set-architecture (isa) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as smalltalk, c++ or the like, and conventional procedural programming languages, such as the "c" programming language or similar programming languages. the computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. in the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (lan) or a wide area network (wan), or the connection may be made to an external computer (for example, through the internet using an internet service provider). in some embodiments, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (fpga), or programmable logic arrays (pla) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present disclosure. 
[0036] aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the disclosure. it will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, may be implemented by computer readable medium instructions. [0037] the computing environment 800 may further include the computer system 810 operating in a networked environment using logical connections to one or more remote computers, such as remote computing device 880. the network interface 870 may enable communication, for example, with other remote devices 880 or systems and/or the storage devices 841 , 842 via the network 871 . remote computing device 880 may be a personal computer (laptop or desktop), a mobile device, a server, a router, a network pc, a peer device or other common network node, and typically includes many or all of the elements described above relative to computer system 810. when used in a networking environment, computer system 810 may include modem 872 for establishing communications over a network 871 , such as the internet. modem 872 may be connected to system bus 821 via user network interface 870, or via another appropriate mechanism. [0038] network 871 may be any network or system generally known in the art, including the internet, an intranet, a local area network (lan), a wide area network (wan), a metropolitan area network (man), a direct connection or series of connections, a cellular telephone network, or any other network or medium capable of facilitating communication between computer system 810 and other computers (e.g., remote computing device 880). the network 871 may be wired, wireless or a combination thereof. 
wired connections may be implemented using ethernet, universal serial bus (usb), rj-6, or any other wired connection generally known in the art. wireless connections may be implemented using wi-fi, wimax, bluetooth, infrared, cellular networks, satellite, or any other wireless connection methodology generally known in the art. additionally, several networks may work alone or in communication with each other to facilitate communication in the network 871. [0039] it should be appreciated that the program modules, applications, computer-executable instructions, code, or the like depicted in fig. 8 as being stored in the system memory 830 are merely illustrative and not exhaustive and that processing described as being supported by any particular module may alternatively be distributed across multiple modules or performed by a different module. in addition, various program module(s), script(s), plug-in(s), application programming interface(s) (api(s)), or any other suitable computer-executable code hosted locally on the computer system 810, the remote device 880, and/or hosted on other computing device(s) accessible via one or more of the network(s) 871, may be provided to support functionality provided by the program modules, applications, or computer-executable code depicted in fig. 8 and/or additional or alternate functionality. further, functionality may be modularized differently such that processing described as being supported collectively by the collection of program modules depicted in fig. 8 may be performed by a fewer or greater number of modules, or functionality described as being supported by any particular module may be supported, at least in part, by another module. 
in addition, program modules that support the functionality described herein may form part of one or more applications executable across any number of systems or devices in accordance with any suitable computing model such as, for example, a client-server model, a peer-to-peer model, and so forth. in addition, any of the functionality described as being supported by any of the program modules depicted in fig. 8 may be implemented, at least partially, in hardware and/or firmware across any number of devices. [0040] it should further be appreciated that the computer system 810 may include alternate and/or additional hardware, software, or firmware components beyond those described or depicted without departing from the scope of the disclosure. more particularly, it should be appreciated that software, firmware, or hardware components depicted as forming part of the computer system 810 are merely illustrative and that some components may not be present or additional components may be provided in various embodiments. while various illustrative program modules have been depicted and described as software modules stored in system memory 830, it should be appreciated that functionality described as being supported by the program modules may be enabled by any combination of hardware, software, and/or firmware. it should further be appreciated that each of the above-mentioned modules may, in various embodiments, represent a logical partitioning of supported functionality. this logical partitioning is depicted for ease of explanation of the functionality and may not be representative of the structure of software, hardware, and/or firmware for implementing the functionality. accordingly, it should be appreciated that functionality described as being provided by a particular module may, in various embodiments, be provided at least in part by one or more other modules. 
further, one or more depicted modules may not be present in certain embodiments, while in other embodiments, additional modules not depicted may be present and may support at least a portion of the described functionality and/or additional functionality. moreover, while certain modules may be depicted and described as sub-modules of another module, in certain embodiments, such modules may be provided as independent modules or as sub-modules of other modules. [0041] although specific embodiments of the disclosure have been described, one of ordinary skill in the art will recognize that numerous other modifications and alternative embodiments are within the scope of the disclosure. for example, any of the functionality and/or processing capabilities described with respect to a particular device or component may be performed by any other device or component. further, while various illustrative implementations and architectures have been described in accordance with embodiments of the disclosure, one of ordinary skill in the art will appreciate that numerous other modifications to the illustrative implementations and architectures described herein are also within the scope of this disclosure. in addition, it should be appreciated that any operation, element, component, data, or the like described herein as being based on another operation, element, component, data, or the like can be additionally based on one or more other operations, elements, components, data, or the like. accordingly, the phrase "based on," or variants thereof, should be interpreted as "based at least in part on." [0042] although embodiments have been described in language specific to structural features and/or methodological acts, it is to be understood that the disclosure is not necessarily limited to the specific features or acts described. rather, the specific features and acts are disclosed as illustrative forms of implementing the embodiments. 
conditional language, such as, among others, "can," "could," "might," or "may," unless specifically stated otherwise, or otherwise understood within the context as used, is generally intended to convey that certain embodiments could include, while other embodiments do not include, certain features, elements, and/or steps. thus, such conditional language is not generally intended to imply that features, elements, and/or steps are in any way required for one or more embodiments or that one or more embodiments necessarily include logic for deciding, with or without user input or prompting, whether these features, elements, and/or steps are included or are to be performed in any particular embodiment. [0043] the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present disclosure. in this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. for example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. it will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions.
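the point above, that two flowchart blocks shown in succession may in fact execute substantially concurrently, can be sketched with a small hypothetical example (the block names are invented for illustration and do not come from the disclosure). each "block" must reach a shared barrier before either can finish, so the run only completes because both blocks are active at the same time:

```python
# Hypothetical sketch: two flowchart "blocks" drawn in succession may
# execute substantially concurrently.  A barrier with two parties forces
# both blocks to be running simultaneously before either may proceed.
import threading

barrier = threading.Barrier(2, timeout=5)
results = []

def block(name):
    barrier.wait()        # both blocks must be running to get past this
    results.append(name)  # completion order is not guaranteed

threads = [threading.Thread(target=block, args=(n,)) for n in ("block_1", "block_2")]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(results))
```

if the two blocks were executed strictly one after the other, the first would time out at the barrier; the successful run is itself evidence of the concurrent execution the text describes.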
122-557-821-623-291
US
[ "US", "JP", "CA", "WO", "EP" ]
C12N15/74,A01K67/00,C07H21/04,C07K14/005,C07K14/46,C07K19/00,C12N7/00,C12N9/24,C12N15/63,C12N15/64,C12N15/90,C12N15/09,C12N1/15,C12N1/19,C12N1/21,C12N5/10,C07K14/025,C07K14/16,C07K14/435,C12N15/62,C12N15/85,C12N15/87,C12P21/02,C12N15/79,A61K31/70,C07K1/00,A61K48/00
2014-02-14T00:00:00
2014
[ "C12", "A01", "C07", "A61" ]
nucleic acid vectors and uses thereof
there are disclosed nucleic acid vectors for use in both gram positive and gram negative bacteria. in embodiments the vectors comprise a prokaryotic expression cassette and in embodiments comprise a eukaryotic expression cassette. in embodiments the vectors encode a hybrid protein comprising a dna binding domain, a cpp domain and a signal sequence.
1. a dna plasmid comprising: a prokaryotic expression cassette for expression in a bifidobacteria sp. bacterium of a hybrid protein comprising at least one dna binding domain, at least one cell penetrating peptide (cpp) domain c-terminal relative to the at least one dna binding domain, and at least one bifidobacterium secretion signal sequence directing secretion of a protein:dna plasmid complex from the bifidobacteria sp. bacterium, wherein the at least one bifidobacterium secretion signal sequence is n-terminal in the hybrid protein relative to the at least one dna binding domain; at least one dna motif recognized by the at least one dna binding domain of the hybrid protein; and a eukaryotic expression cassette which expresses a cargo sequence in a eukaryotic cell. 2. the dna plasmid according to claim 1 , wherein said at least one dna binding domain is one of a zinc finger dna binding domain, a homeobox dna binding domain, a hu dna binding domain, and a merr dna binding domain, or combinations thereof. 3. the dna plasmid according to claim 1 , wherein said cpp comprises a tat domain, a v22 protein of herpes simplex virus, or the protein transduction domain of the antennapedia (antp) protein, or combinations thereof. 4. the dna plasmid according to claim 1 , wherein the at least one secretion signal sequence is alpha-l-arabinosidase signal sequence or alpha amylase signal sequence. 5. the dna plasmid according to claim 1 wherein the cargo sequence is selected from the group consisting of: an oncogene, a tumour suppressor gene, a growth factor gene, a growth factor receptor gene, and a marker protein gene. 6. the dna plasmid according to claim 5 wherein the cargo sequence encodes a fluorescent protein. 7. the dna plasmid according to claim 1 wherein said at least one dna motif has at least 90% sequence identity to seq id no: 41 or 43. 8. 
the dna plasmid according to claim 1 , which further comprises at least one selection marker suitable for selection in a gram-positive bacterium and a gram-negative bacterium. 9. the dna plasmid according to claim 1 , which further comprises one or more than one origin of replication that is functional in a gram-negative bacterium and a gram-positive bacterium. 10. the dna plasmid according to claim 8 , wherein said selection marker confers resistance to at least one of spectinomycin, chloramphenicol, erythromycin, and tetracycline. 11. the dna plasmid according to claim 8 , wherein the at least one selection marker is resistance to an antibiotic effective against both said gram-positive and said gram-negative bacteria. 12. the dna plasmid according to claim 9 , wherein said gram-negative bacterium is e. coli. 13. the dna plasmid of claim 1 , wherein the eukaryotic cell is a mammalian cell.
priority claims this application claims priority under 35 usc § 119(e) of u.s. provisional patent application no. 61/940,274, filed feb. 14, 2014, u.s. provisional patent application no. 61/940,258 filed feb. 14, 2014; and u.s. provisional patent application no. 62/013,852 filed jun. 18, 2014, the specifications of which are hereby incorporated herein by reference wherever permissible by law. reference to submission of a sequence listing as a text file the sequence listing written in file seqtxt_96062-935529.txt, created on apr. 13, 2015, 47,565 bytes, machine format ibm-pc, ms-windows operating system, is hereby incorporated by reference in its entirety for all purposes. background 1. field the subject matter disclosed generally relates to novel shuttle vectors suitable for propagation in both gram positive and gram negative bacteria and comprising a eukaryotic expression cassette. 2. related art a variety of vectors and methods are known in the art for propagating nucleic acid sequences in bacteria and for introducing such sequences into eukaryotic cells. the following publications are of note: salomone, f., et al., “a novel cell-penetrating peptide with membrane disruptive properties for efficient endosomal escape” (2010), journal of controlled release 163 (293-303). stentz, r. et al., “controlled release of protein from viable lactococcus cells” (2010) applied and environmental microbiology 76, 3026-3031. christy, b., and nathans, d. “dna binding site of the growth factor-inducible protein zif268” (1989) 86 proc. nat. acad. sci. 8737-8741. khokhlova, e. v., et al., “ bifidobacterium longum modified recombinant hu protein as vector for nonviral delivery of dna to hek293 human cell culture” (2011) 151 bulletin of experimental biology and medicine 717-721. 
summary in an embodiment there is disclosed a nucleic acid vector comprising: an origin of replication for replication in gram-positive bacteria and gram-negative bacteria, at least one selection marker for selection in both gram-positive and gram-negative bacteria; and at least one expression cassette. in alternative embodiments, the nucleic acid vector may comprise a first origin of replication functional in a gram-negative bacteria and a second origin of replication functional in a gram-positive bacteria. in alternative embodiments, the first origin of replication is functional in e. coli and the second origin of replication is functional in at least one of staphylococcus and bifidobacteria. in alternative embodiments, the first and second origins of replication are comprised within a single bifunctional origin. in alternative embodiments, the nucleic acid vector may further comprise a prokaryotic expression cassette suitable to express a first cargo sequence in the gram-positive bacteria. in alternative embodiments, the first cargo sequence is a hybrid protein comprising at least one dna binding domain, at least one cell penetrating peptide (cpp) domain, and at least one secretion signal sequence. in alternative embodiments, the dna binding domain is one of a zinc finger dna binding domain, a homeobox dna binding domain, and a merr dna binding domain, or combinations thereof. in alternative embodiments, the cpp comprises a tat domain, a v22 protein of herpes simplex virus, or the protein transduction domain of the antennapedia (antp) protein, or combinations thereof. in alternative embodiments the secretion signal sequence is alpha-l-arabinosidase signal sequence, alpha amylase signal sequence or a truncated alpha amylase signal sequence. in alternative embodiments, the nucleic acid vector may further comprise at least one dna motif that binds to the at least one dna binding domain of the hybrid protein. 
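the domain architecture of the hybrid protein described above (a secretion signal, a dna binding domain, and a cell penetrating peptide) can be sketched as a simple n-terminus-to-c-terminus concatenation. this is a hypothetical illustration only: the short peptide strings below are placeholders, not the disclosed seq ids, except that the cpp shown is the well-known hiv-1 tat transduction motif grkkrrqrrr.

```python
# Hypothetical sketch of the hybrid-protein domain order: secretion signal
# at the N-terminus, DNA binding domain in the middle, CPP at the C-terminus.
signal_seq = "MKKLA"            # placeholder secretion signal (invented)
dna_binding_domain = "RKHELK"   # placeholder DNA binding domain (invented)
cpp = "GRKKRRQRRR"              # HIV-1 TAT transduction motif

def build_hybrid(signal, dbd, cpp_domain):
    # N-terminus -> C-terminus: signal, then DBD, then CPP
    return signal + dbd + cpp_domain

hybrid = build_hybrid(signal_seq, dna_binding_domain, cpp)
assert hybrid.startswith(signal_seq) and hybrid.endswith(cpp)
print(hybrid)
```

the assertion mirrors the positional constraint in the claims: the secretion signal is n-terminal relative to the dna binding domain, and the cpp is c-terminal relative to it.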
in alternative embodiments, the nucleic acid vector may further comprise a eukaryotic expression cassette for expressing a second cargo sequence in a eukaryotic cell. in alternative embodiments, the second cargo sequence is selected from the group consisting of: an oncogene, a tumour suppressor, a growth factor, a growth factor receptor, and a marker protein. in alternative embodiments, the second cargo sequence is a fluorescent protein. in alternative embodiments, at least one dna motif has at least 90% sequence identity to seq id no: 41 or 43. in alternative embodiments, the at least one selection marker is resistance to an antibiotic effective against both gram positive and gram negative bacteria. in an embodiment there is disclosed a method for transforming a eukaryotic target cell with a candidate dna sequence, the method comprising the steps of: expressing in a first cell the hybrid protein from the first expression cassette of a nucleic acid vector disclosed herein, to form a complex between the hybrid protein and the nucleic acid vector; contacting the target cell with the formed hybrid protein and nucleic acid vector complex, the candidate dna sequence is the second cargo sequence. in an embodiment there is disclosed a method for transforming a eukaryotic target cell with a candidate dna sequence, the method comprising the steps of: contacting the eukaryotic target cell with a hybrid protein and nucleic acid vector complex formed by the expression of a hybrid protein from the first expression cassette of a nucleic acid vector disclosed herein, in a prokaryotic cell, the candidate dna sequence is the second cargo sequence. 
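the "at least 90% sequence identity" criterion mentioned above can be sketched numerically. real comparisons are normally computed over an optimal alignment (e.g. needleman-wunsch); this simplified sketch assumes two already-aligned, equal-length dna motifs and counts matching positions. the example motifs are invented for illustration and are not seq id no: 41 or 43.

```python
# Simplified percent-identity sketch for two pre-aligned DNA motifs of
# equal length.  Gapped alignment is deliberately out of scope here.
def percent_identity(seq_a, seq_b):
    if len(seq_a) != len(seq_b):
        raise ValueError("sketch assumes pre-aligned sequences of equal length")
    matches = sum(1 for a, b in zip(seq_a, seq_b) if a.upper() == b.upper())
    return 100.0 * matches / len(seq_a)

reference = "TTGACTCTAGAGGGTA"   # hypothetical motif
candidate = "TTGACTCTAGAGGGCA"   # 15 of 16 positions match
identity = percent_identity(reference, candidate)
print(round(identity, 1))
```

under this measure the hypothetical candidate (93.75% identity) would satisfy the 90% threshold recited in the embodiments.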
in an embodiment there is disclosed a method for transforming a prokaryotic target cell with a candidate dna sequence, the method comprising the steps of: providing a nucleic acid vector disclosed herein the candidate dna sequence is the first cargo sequence; binding the nucleic acid vector to a hybrid protein comprising at least one dna binding domain suitable to bind to the nucleic acid vector, at least one cell penetrating peptide (cpp) domain, and at least one secretion signal sequence, to form a complex between the hybrid protein and the nucleic acid vector; contacting the prokaryotic target cell with the complex formed by the hybrid protein and the nucleic acid vector. in an embodiment there is disclosed a method for transforming a prokaryotic target cell with a candidate dna sequence, the method comprising the steps of: contacting the prokaryotic target cell with a hybrid protein and nucleic acid vector complex formed by contacting with a hybrid protein comprising at least one dna binding domain suitable to bind to the nucleic acid vector, at least one cell penetrating peptide (cpp) domain, and at least one secretion signal sequence, a nucleic acid vector disclosed herein the candidate dna sequence is the first cargo sequence. in an embodiment there is disclosed a bacterial cell containing the nucleic acid vector disclosed herein. in an embodiment there is disclosed a eukaryotic cell containing the nucleic acid vector disclosed herein. in an embodiment there is disclosed a kit for transforming a cell with the vector disclosed herein, the kit comprising a quantity of a hybrid protein comprising at least one dna binding domain suitable to bind to the nucleic acid vector, at least one cell penetrating peptide (cpp) domain and at least one secretion signal sequence, and instructions on how to use the kit. in alternative embodiments, the kit may further comprise a quantity of the nucleic acid vector. 
in alternative embodiments, the target prokaryotic cell is a staphylococcus or bifidobacteria. in an embodiment there is disclosed a nucleic acid vector, also referred to as an expression vector, comprising: a first origin of replication functional in a gram-negative bacteria, and a second origin of replication functional in a gram-positive bacteria; a selection marker suitable for selection in both gram-positive and gram-negative bacteria; a prokaryotic expression cassette for expression in the gram-positive bacteria of a hybrid protein comprising at least one dna binding domain, at least one cell penetrating peptide (cpp) domain, and at least one secretion signal sequence; at least one dna motif recognized by the at least one dna binding domain of the hybrid protein; and a eukaryotic expression cassette for expressing a second cargo sequence in a eukaryotic cell. in alternative embodiments, the gram-negative bacteria is e. coli. in alternative embodiments, the gram-positive bacteria is at least one of staphylococcus and bifidobacteria. in alternative embodiments, the selection marker confers resistance to at least one of spectinomycin, chloramphenicol, erythromycin and tetracycline. features and advantages of the subject matter hereof will become more apparent in light of the following detailed description of selected embodiments, as illustrated in the accompanying figures. as will be realized, the subject matter disclosed and claimed is capable of modifications in various respects, all without departing from the scope of the claims. accordingly, the drawings and the description are to be regarded as illustrative in nature, and not as restrictive and the full scope of the subject matter is set forth in the claims. brief description of the drawings fig. 1 shows the structure of a vector according to a first embodiment of the present subject matter and identified as pbra2.0 sht. fig. 2 
shows the structure of a vector according to a second embodiment of the present subject matter and identified as pfrg1.5-sht. fig. 3 shows a gel shift assay demonstrating the binding of sht and szt hybrid proteins with pbra2.0 sht. sht and szt interact with pbra2.0 and result in gel-shift with increasing concentrations of peptide. fig. 4 shows the results of transforming e. coli with pbra2.0-sht vector according to an embodiment of the present subject matter, to confirm the effectiveness of the e. coli (puc) origin of replication. fig. 5 shows the results of transfecting hek-293 and hela cells with pbra2.0 sht vector according to an embodiment of the present subject matter. fig. 6 shows the secretion of pbra2.0 sht from bifidobacterium longum cells hosting the vector. fig. 7 shows digestion products of pbra2.0 sht from cell supernatant of pbra2.0 sht infected bifidobacterium longum cells. fig. 8 shows the visualisation and digestion of concentrate cell supernatants showing secretion of pbra2.0 sht and its characterisation. fig. 9 shows immunoblotting analysis of the szt hybrid protein demonstrating the expression of the szt protein in bifidobacterium. fig. 10 shows the transfection of mammalian cells by protein complexes comprising pbra2.0-sht or pbra2.0-szt, and the expression of the cargo gfp sequences in the mammalian cell lines. therapeutic molecules pbra2.0-sht and pbra2.0-szt complexes can transfect and express gfp in mammalian cell lines. detailed description of embodiments the following sequence listings are presented herein and form a part of this disclosure: seq id no: 1 is the sequence of pbra2.0 sht wherein the eukaryotic expression cassette encodes gfp protein and the prokaryotic expression cassette encodes the sht protein. source: artificial. seq id no: 2. is the sequence of pfrg1.5-sht wherein the eukaryotic expression cassette comprises the green fluorescent protein (gfp) gene. source: artificial. 
seq id no: 3 is the nucleotide sequence encoding the lac i dna binding domain according to embodiments. source: e. coli. seq id no: 4 is the nucleotide sequence encoding the hu dna binding domain according to embodiments. source: bifidobacterium. seq id no: 5 is the nucleotide sequence encoding the mer r dna binding domain according to embodiments. source: bifidobacterium. seq id no: 6 is the nucleotide sequence encoding the zinc finger dna binding domain according to embodiments. source: artificial. seq id no: 7 is the nucleotide sequence encoding the smt hybrid protein source: artificial. seq id no: 8 is the nucleotide sequence encoding the sht hybrid protein. source: artificial. seq. id. no. 9 is the nucleotide sequence encoding the slt hybrid protein. source: artificial. seq id no: 10 is the nucleotide sequence encoding the szt hybrid protein. source: artificial. seq id no: 11 is the amino acid sequence of the trans-activator of transcription (tat) transduction domain [hiv]. seq id no: 12 is the amino acid sequence of the antennapedia (antp) transduction domain source: drosophila melanogaster. seq id no: 13 is the amino acid sequence of the hiv rev transduction domain. source: human hiv virus. seq id no: 14 is the amino acid sequence of the herpes simplex virus vp22 transduction domain. source: human hsv virus. seq id no: 15 is the amino acid sequence of the p-beta mpg (gp41-sv40) transduction domain. source: sv40. seq id no: 16 is the amino acid sequence of the transportan (galanin-mastoparan) transduction domain source: eukaryote, species unknown. seq id no: 17 is the amino acid sequence of the pep-1 (trp-rich motif-sv40) transduction domain. source: sv40 seq id no: 18 is the dna sequence encoding the alpha amylase signal sequence comprising a cleavage site. source: bifidobacterium. seq id no: 19 is the 46 amino acid sequence encoding the alpha amylase signal sequence including a putative cleavage site. source: bifidobacterium. 
seq id no: 20 is the dna sequence encoding the cleaved alpha amylase signal sequence. source: bifidobacterium. seq id no: 21 is the 44 amino acid sequence of the cleaved alpha amylase signal sequence. source: bifidobacterium. seq id no: 22 is the dna sequence encoding the alpha arabinosidase signal sequence. source: bifidobacterium. seq id no: 23 is the amino acid sequence for the alpha arabinosidase signal sequence. source: bifidobacterium. seq id no: 24 is the amino acid sequence of the hybrid protein embodiment designated sht. source: artificial. seq id no: 25 is the amino acid sequence of the hybrid protein embodiment designated slt. source: artificial. seq id no: 26 is the amino acid sequence of the hybrid protein embodiment designated smt. source: artificial. seq id no: 27 is the amino acid sequence of the hybrid protein embodiment designated szt. source: artificial. seq id no: 28 is a dna sequence comprising the pdojhr ori. source: bifidobacterium. seq id no: 29 is a dna sequence comprising the puc/ e. coli ori. source: e. coli. seq id no: 30 is a dna sequence comprising the pb44 ori. source: bifidobacterium. seq id no: 31 is the dna sequence encoding the tat domain. source: human hiv virus. seq id no: 32 is the dna sequence encoding the p-beta mpg (gp41-sv40) transduction domain. source: sv40 virus. seq id no: 33 is the dna sequence encoding the transportan (galanin-mastoparan) transduction domain. source: eukaryote, species unknown. seq id no: 34 is the dna sequence encoding the pep-1 (trp-rich motif-sv40) transduction domain. source: sv40. seq id no: 35 gfp forward primer. source: aequorea victoria. seq id no: 36 gfp reverse primer. source: aequorea victoria. seq id no: 37 spectinomycin resistance forward primer. source: streptomyces spectabilis. seq id no: 38 spectinomycin reverse primer. source: streptomyces spectabilis. seq id no: 39 cmv forward primer. source: cytomegalovirus. seq id no: 40 cmv reverse primer. source: cytomegalovirus. 
seq id no: 41 merr dna recognition site. source: bifidobacterium. seq id no: 42 laci repressor dna binding site. source: e. coli. seq id no: 43 synthetic zinc finger dna recognition site. source: artificial. seq id no: 44 hu dna binding domain. source: bifidobacterium. seq id no: 45 mer r dna binding domain. source: bifidobacterium. seq id no: 46 zn finger dna binding domain. source: artificial. seq id no: 47 lac i dna binding domain. source: e. coli. seq id no: 48 prokaryotic expression cassette from pfrg ( fig. 2 ) comprising hu promoter and terminator. source: artificial/bifidobacteria. seq id no: 49 eukaryotic expression cassette from pfrg ( fig. 2 ) comprising cmv promoter, kozak sequence and tk poly a site and terminator, flanking gfp coding sequences to be expressed. source: artificial. definition of terms: in this disclosure, the word “comprising” is used in a non-limiting sense to mean that items following the word are included, but items not specifically mentioned are not excluded. a reference to an element by the indefinite article “a” does not exclude the possibility that more than one of the elements is present, unless the context clearly requires that there be one and only one of the elements. in this disclosure the recitation of numerical ranges by endpoints includes all numbers subsumed within that range including all whole numbers, all integers and all fractional intermediates (e.g., 1 to 5 includes 1, 1.5, 2, 2.75, 3, 3.80, 4, and 5 etc.). in this disclosure the singular forms “a”, “an”, and “the” include plural referents unless the content clearly dictates otherwise. thus, for example, reference to a composition containing “a compound” includes a mixture of two or more compounds. in this disclosure the term “or” is generally employed in its sense including “and/or” unless the content clearly dictates otherwise. 
in this disclosure, unless otherwise indicated, all numbers expressing quantities or ingredients, measurement of properties and so forth used in the specification and claims are to be understood as being modified in all instances by the term “about”. accordingly, unless indicated to the contrary or necessary in light of the context, the numerical parameters set forth in the disclosure are approximations that can vary depending upon the desired properties sought to be obtained by those skilled in the art utilizing the teachings of the present disclosure and in light of the inaccuracies of measurement and quantification. without limiting the application of the doctrine of equivalents to the scope of the claims, each numerical parameter should at least be construed in light of the number of reported significant digits and by applying ordinary rounding techniques. notwithstanding that the numerical ranges and parameters setting forth the broad scope of the disclosure are approximations, their numerical values set forth in the specific examples are understood broadly only to the extent that this is consistent with the validity of the disclosure and the distinction of the subject matter disclosed and claimed from the prior art. the vectors disclosed herein are able to replicate in both gram negative and gram positive bacteria. in embodiments the gram negative bacteria is e. coli . in embodiments the gram positive bacteria is bifidobacteria . in embodiments the gram positive bacterium is staphylococcus . in embodiments the gram positive bacterium is streptococcus . in embodiments the gram positive bacterium is lactococcus . in embodiments the gram positive bacterium is clostridium or is lactobacillus . 
the full range of possible target bacterial strains will be readily understood by one skilled in the art, and a listing of strains is available at lpsn bacterio.net at http://www.bacterio.net/-alintro.html, the contents of which are hereby incorporated herein where permissible by law. in particular embodiments the bacteria are probiotic bacteria or are gras bacteria. in embodiments where the target gram positive bacterium is bifidobacteria, then illustrative possible strains or species of bifidobacteria, without limitation, include: bifidobacterium, bifidobacterium actinocoloniiforme, bifidobacterium adolescentis, bifidobacterium angulatum, bifidobacterium animalis, bifidobacterium animalis subsp. animalis, bifidobacterium animalis subsp. lactis, bifidobacterium asteroides, bifidobacterium biavatii, bifidobacterium bifidum, bifidobacterium bohemicum, bifidobacterium bombi, bifidobacterium boum, bifidobacterium breve, bifidobacterium callitrichos, bifidobacterium catenulatum, bifidobacterium choerinum, bifidobacterium coryneforme, bifidobacterium vectocuniculi, bifidobacterium denticolens, bifidobacterium dentium, bifidobacterium gallicum, bifidobacterium gallinarum, bifidobacterium globosum, bifidobacterium indicum, bifidobacterium infantis, bifidobacterium inopinatum, bifidobacterium kashiwanohense, bifidobacterium lactis, bifidobacterium longum, bifidobacterium longum subsp. infantis, bifidobacterium longum subsp. longum, bifidobacterium longum subsp. suis, bifidobacterium magnum, bifidobacterium merycicum, bifidobacterium minimum, bifidobacterium mongoliense, bifidobacterium pseudocatenulatum, bifidobacterium pseudolongum, bifidobacterium pseudolongum subsp. globosum, bifidobacterium pseudolongum subsp.
pseudolongum, bifidobacterium psychraerophilum, bifidobacterium pullorum, bifidobacterium reuteri, bifidobacterium ruminantium, bifidobacterium saeculare, bifidobacterium saguini, bifidobacterium scardovii, bifidobacterium stellenboschense, bifidobacterium stercoris, bifidobacterium subtile, bifidobacterium suis, bifidobacterium thermacidophilum, bifidobacterium thermacidophilum subsp. porcinum, bifidobacterium thermacidophilum subsp. thermacidophilum, bifidobacterium thermophilum, bifidobacterium tsurumiense; bifidobacterium longum, bifidobacterium bifidum, and bifidobacterium infantis. where the target bacterium is staphylococcus, then the illustrative possible strains or species of staphylococcus, without limitation, include: s. arlettae; s. agnetis; s. aureus; s. auricularis; s. capitis; s. caprae; s. carnosus; s. caseolyticus; s. chromogenes; s. cohnii; s. condimenti; s. delphini; s. devriesei; s. epidermidis; s. equorum; s. felis; s. fleurettii; s. gallinarum; s. haemolyticus; s. hominis; s. hyicus; s. intermedius; s. kloosii; s. leei; s. lentus; s. lugdunensis; s. lutrae; s. massiliensis; s. microti; s. muscae; s. nepalensis; s. pasteuri; s. pettenkoferi; s. piscifermentans; s. pseudintermedius; s. pseudolugdunensis; s. pulvereri; s. rostri; s. saccharolyticus; s. saprophyticus; s. schleiferi; s. sciuri; s. simiae; s. simulans; s. stepanovicii; s. succinus; s. vitulinus; s. warneri; and s. xylosus. in this disclosure the term “vector” refers to at least one of a plasmid, bacteriophage, cosmid, artificial chromosome, or other nucleic acid vector. in embodiments the vector encodes or is suitable to generate at least one therapeutic agent or comprises at least one therapeutic sequence. vectors suitable for microbiological applications are well known in the art, and are routinely designed and developed for particular purposes.
some non-limiting published examples of vectors that have been used to transform bacterial strains include the following plasmids: pmw211, pbad-dest49, pdonrp4-p1r, pentr-pbad, pentr-dual, pentr-term, pbr322, pdestr4-r3, pbgs18-n9uc8, pbs24ub, pubnuc, pixy154, pbr322dest, pbr322dest-pbad-dual-term, pjim2093, ptg2247, pmec10, pmec46, pmec127, ptx, psk360, pacyc184, pboe93, pbr327, pdw205, pkcl11, pkk2247, pmr60, pou82, pr2172, psk330, psk342, psk355, puhe21-2, pehlya2-sd. see, for example, stritzker et al., intl. j. med. microbiol. vol. 297, pp. 151-162 (2007); grangette et al., infect. immun. vol. 72, pp. 2731-2737 (2004); knudsen and karlstrom, app. and env. microbiol. vol. 57, no. 1, pp. 85-92 (1991); rao et al., pnas vol. 102, no. 34, pp. 11993-11998 (2005), each of which is incorporated herein by reference. those skilled in the art, in light of the teachings of this disclosure, will understand that alternative embodiments of the subject matter claimed herein are possible and will understand how to combine sequences taken from known vectors in order to construct such alternative embodiments. by way of example, existing vectors from the foregoing list may be modified by inserting additional origins of replication, or additional expression cassettes (non-limiting examples of expression cassettes are presented as seq id nos 48 and 49) comprising suitable promoter and termination sequences, or additional dna binding sequences, or coding sequences for the hybrid protein (nhp) described herein. those skilled in the art will understand and adopt the various alternatives known in the art, and will do so using techniques well known in the art. in particular embodiments, suitable origins of replication for use in embodiments include an origin of replication comprised within pdojhr as well as the pb44 and puc (e. coli) origins of replication. these are presented herein as seq id nos 28, 30 and 29, respectively.
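the modular construction just described, combining origins of replication, expression cassettes and binding sequences drawn from known vectors, can be illustrated with a short in-silico assembly sketch. this is an illustration only: the part sequences below are short placeholders invented for the example, not the actual sequences of seq id nos 28, 29, 30, 48 or 49.

```python
# in-silico assembly of a shuttle-vector sequence from modular parts.
# every part sequence is a placeholder, not a seq id no of this disclosure.

def assemble_vector(parts, linker=""):
    """concatenate ordered dna parts (5'->3') into one sequence."""
    seq = linker.join(p.upper() for p in parts)
    if set(seq) - set("ACGT"):
        raise ValueError("non-ACGT character in part sequence")
    return seq

parts = [
    "ATGACCGT",  # placeholder gram-negative origin (e.g. a puc-type ori)
    "GGTACCAA",  # placeholder gram-positive origin (e.g. a pb44-type ori)
    "TTGACAGC",  # placeholder prokaryotic expression cassette
    "CATGGTAC",  # placeholder eukaryotic expression cassette
    "GGATCCGG",  # placeholder dna binding site for the hybrid protein
]
vector = assemble_vector(parts)
print(len(vector))  # 40
```

in practice each part would be the full sequence of the corresponding element and the joined product would be circularized; the sketch shows only the bookkeeping of ordered concatenation.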
in some embodiments adjacent functional components of a vector are joined by linking sequences. in embodiments the vector comprises a eukaryotic expression cassette (a non-limiting example comprising gfp coding sequences is presented as seq id no: 49) containing a marker sequence to confirm both transformation and gene expression in the target eukaryotic cell. it will be understood that in alternative embodiments a range of alternative marker proteins and sequences are possible and for example in selected embodiments and without limitation the marker sequence encodes gfp (green fluorescent protein), rfp (red fluorescent protein), cat (chloramphenicol acetyltransferase), luciferase, gal (beta-galactosidase), or gus (beta-glucuronidase). those skilled in the art will readily understand and use all such marker sequences and reporter genes using standard techniques and materials readily available in the art. vectors according to embodiments comprise one or more prokaryotic expression cassettes. a non-limiting example of a prokaryotic expression cassette suitable for the expression of sequences in gram positive bacteria, and in embodiments in staphylococcus and bifidobacteria is presented as seq id no: 48. in embodiments such a cassette comprises a hybrid protein and embodiments of such hybrid proteins are disclosed herein. it will be understood that in embodiments vectors may comprise at least or only one, two, three or more eukaryotic expression cassettes, or may comprise at least or only one, two, three or more prokaryotic expression cassettes for expression in gram positive bacteria, or may comprise combinations of the foregoing. in this disclosure the term cell penetrating peptide (“cpp”) means a protein that is able to penetrate the cell membrane of a eukaryotic cell. the term cpp includes tat, also referred to as a trans-activator of transcription. 
for greater certainty, but without limitation, reference to tat or any other cpp will be understood to mean the native sequence, and also a full range of sequence variants thereof which are suitable to carry out the desired function of the protein or protein domain in question. by way of example, a range of functional variants of the tat protein are described in f. salomone et al. (2012), “a novel chimeric cell-penetrating peptide with membrane-disruptive properties for efficient endosomal escape”, journal of controlled release 163, 293-303. other non-limiting examples of cpps according to embodiments include the vp22 protein of herpes simplex virus, and the protein transduction domain of the antennapedia (antp) protein, as well as the protein transduction domains presented in table 1, namely tat, rev, antp, vp22, pep-1 and transportan. in embodiments the cpp domain is the domain described in salomone f. et al., “a novel chimeric cell-penetrating peptide with membrane disruptive properties for endosomal escape.” j. control. release. 2012 oct. 2 (epub ahead of print). table 1 shows a non-limiting selection of exemplary cpp domains.
table 1. sequences of exemplary cpp (transduction) domains:
tat (seq id no: 11): ygrkkrrqrrr
antp (seq id no: 12): rqikiwfqnrrmkwkk
rev (seq id no: 13): trqarrnrrrrwrerqf
vp22 (seq id no: 14): naktrrherrrklaier
p-beta mpg (gp41-sv40) (seq id no: 15): galflgflgaagstmgawsqpkkkrkv-cys
transportan (galanin-mastoparan) (seq id no: 16): gwtlnsagyllgkinlkalaalakkil
pep-1 (trp-rich motif-sv40) (seq id no: 17): ketwwetwwtewsqpkkkrrv
in this disclosure the term “dna binding domain” means a protein sequence able to bind reversibly, but tightly or with high affinity, and specifically to a suitable dna sequence. in embodiments a dna binding sequence may be a zinc finger binding domain, and while a wide range of suitable domains and their complementary dna binding sequences will be readily identified by those skilled in the art, a number of illustrative examples are disclosed in u.s. pat. no.
6,007,988, issued on dec. 28, 1999. in particular embodiments hereof, the dna binding motif or domain is, or is derived from, mer r, zinc finger, or a histone-like dna binding protein, or is, or is derived from, the hu protein or a homeobox dna binding protein. it will be understood that the hu protein is generally considered a homeobox-like protein. while many types of dna binding domains will be readily identified by those skilled in the art using available databases, screening methodologies and well known techniques, non-limiting examples of suitable dna binding domains for use in alternative embodiments can be derived from a wide range of dna binding proteins. in embodiments suitable dna binding domains may be of any general type, including but not limited to helix-turn-helix, zinc finger, leucine zipper, winged helix, winged helix-turn-helix, helix-loop-helix, hmg box, wor 3 and rna guided binding domains. illustrative examples of dna binding proteins whose dna binding domains may be utilized in embodiments include histones, histone-like proteins, transcription promoters, transcription repressors, transcriptional regulators, which may be drawn from a wide range of alternate sources and operons. in this disclosure the terms “polypeptide”, “peptide”, “oligopeptide” and “protein” are used interchangeably herein to refer to a polymer of amino acid residues. the terms apply to amino acid polymers in which one or more amino acid residues is an artificial chemical analogue of a corresponding naturally occurring amino acid, or is a completely artificial amino acid with no obvious natural analogue, as well as to naturally occurring amino acid polymers. in embodiments the eukaryotic expression cassettes comprised in the vectors comprise suitable kozak sequences, and the possible variations thereon and positioning thereof will be readily understood by those skilled in the art. one non-limiting example is presented as seq id no: 49.
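a dna binding domain, as discussed above, recognizes a specific complementary dna site; whether a candidate vector carries one or more copies of such a site can be checked with a simple string scan. a minimal sketch with invented placeholder sequences (the hexamer shown is purely illustrative and is not one of the recognition sites of seq id nos 41-43):

```python
# locate every occurrence of a dna recognition site in a vector sequence.
# both sequences below are placeholders invented for this illustration.

def find_sites(vector, site):
    """return 0-based start positions of each occurrence of site."""
    hits, i = [], vector.find(site)
    while i != -1:
        hits.append(i)
        i = vector.find(site, i + 1)
    return hits

vector = "AATTGGATCCGGCCGGATCCTT"
print(find_sites(vector, "GGATCC"))  # [4, 14]
```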
a peptide or peptide fragment is “derived from” a parent peptide or polypeptide if it has an amino acid sequence that is homologous to the amino acid sequence of, or is a conserved fragment from, the parent peptide or polypeptide. it will be understood that such sequences will, in alternative embodiments, comprise natural amino acids or will comprise artificially created amino acids. all of the foregoing will be readily identified by those skilled in the art. in embodiments vectors have sequences which differ from the disclosed examples. it will be understood by those skilled in the art that a range of sequence variations are possible that do not affect or do not prevent the suitability of the vector for the purposes disclosed herein. by way of example and not limitation, those skilled in the art will recognise that certain dna sequences can be varied without materially affecting the function of the vector and others cannot. again by way of illustration and not limitation, those skilled in the art will recognise and adopt sequence modifications known to enhance the function of a selected sequence, and will reject sequence modifications known to diminish such function. with particular reference to protein coding sequences, those skilled in the art will recognise variable and conserved regions and will recognise mutations likely to change the structure and/or function of the relevant protein or polypeptide and those unlikely to do so. in general it will be understood that in particular variant embodiments the nucleic acid sequence of a vector will be 100% identical to one of the examples disclosed, or will be at least about, or about, or less than about 99%, 98%, 97%, 96%, 95%, 94%, 93%, 92%, 91%, 90%, 89%, 88%, 87%, 86%, 85%, 84%, 83%, 82%, 81%, or 80% identical to one of the examples disclosed, and that in embodiments such sequence identity will extend over all or only part of the length of the vector.
it will be understood that in embodiments vectors will comprise alternative origins of replication, promoters, polyadenylation sites and the like. in embodiments vectors will comprise sequences inserted in the expression cassettes, and in embodiments vectors will have no insertions in the expression cassettes. in embodiments the hybrid protein or polypeptide sequence encoded by a sequence comprised in an expression cassette according to an embodiment is 100% identical to one of the examples disclosed, or is at least about, or about, or less than about 99%, 98%, 97%, 96%, 95%, 94%, 93%, 92%, 91%, 90%, 89%, 88%, 87%, 86%, 85%, 84%, 83%, 82%, 81%, or 80% identical to one of the examples disclosed as seq id nos 24, 25, 26, 27, and such sequence identity extends over all or only part of the length of the protein or polypeptide. in embodiments homologies and sequence identities extend over all or only a part of the sequence or sequences of interest. in embodiments homologies and sequence identities are limited to particular functional or sequence domains. in embodiments homologies and sequence identities are continuous and in embodiments are separated by regions of lower sequence identity or homology. construction of vectors: in embodiments vectors and sequences set out herein are synthesized de novo using known techniques and commercially available dna synthesis services. standard techniques for the construction of the vectors of the present invention are well-known to those of ordinary skill in the art and can be found in such references as sambrook et al., molecular cloning: a laboratory manual, 2nd ed., cold spring harbor laboratory press, new york (1989). a variety of strategies are available for ligating fragments of dna, the choice of which depends on the nature of the termini of the dna fragments, and which choices can be readily made by the skilled artisan.
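the sequence identity thresholds recited above can be made concrete with a simple calculation. the sketch below assumes the two sequences have already been aligned to equal length (a real comparison would first align them, e.g. by needleman-wunsch); the ten-residue sequences are invented for the example.

```python
# percent identity between two pre-aligned sequences of equal length.

def percent_identity(a, b):
    if len(a) != len(b):
        raise ValueError("sequences must be pre-aligned to equal length")
    matches = sum(1 for x, y in zip(a, b) if x == y)
    return 100.0 * matches / len(a)

ref = "MKTAYIAKQR"
var = "MKTAYVAKQR"  # one substitution in ten residues
print(percent_identity(ref, var))  # 90.0
```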
in embodiments the sequences of vectors are determined using suitable dna, rna and protein sequence manipulation software, and the desired sequence is synthesized using suitable synthesis methods, a wide variety of which are readily available and will be immediately understood and implemented by those skilled in the art. as described herein, an aspect of the present disclosure concerns isolated nucleic acids and methods of use of isolated nucleic acids. plasmid preparations: plasmid preparations and replication means are well known in the art. see, for example, u.s. pat. nos. 4,273,875 and 4,567,146, incorporated herein in their entirety. some embodiments of the present invention include providing a portion of genetic material of a target microorganism and inserting the portion of genetic material of a target microorganism into a plasmid for use as an internal control plasmid. nucleic acids used as a template for amplification are isolated from cells according to standard methodologies (sambrook et al., 1989). the nucleic acid may be genomic dna or fractionated or whole cell rna. where rna is used, it may be desired to convert the rna to complementary dna (cdna). pairs of primers that selectively hybridize to nucleic acids corresponding to specific sequences are contacted with the isolated nucleic acid under conditions that permit selective hybridization. once hybridized, the nucleic acid primer complex is contacted with one or more enzymes that facilitate template-dependent nucleic acid synthesis. multiple rounds of amplification, also referred to as “cycles,” are conducted until a sufficient amount of amplification product is produced. next, the amplification product is detected. in certain applications, the detection may be performed by visual means.
alternatively, the detection may involve indirect identification of the product via chemiluminescence, radioactive scintigraphy of incorporated radiolabel or fluorescent label, or even via a system using electrical or thermal impulse signals (affymax technology; bellus, 1994). primers: the term primer, as defined herein, is meant to encompass any nucleic acid that is capable of priming the synthesis of a nascent nucleic acid in a template-dependent process. typically, primers are oligonucleotides from ten to twenty base pairs in length, but longer sequences may be employed. primers may be provided in double-stranded or single-stranded form, although the single-stranded form is preferred. specific primers used to amplify portions of vectors and nucleotide sequences according to embodiments are presented as seq id nos 35-40. those skilled in the art will readily select alternative suitable primers for particular requirements. template dependent amplification methods: a number of template dependent processes are available to amplify the sequences present in a given template sample. one of the best known amplification methods is the polymerase chain reaction (referred to as pcr), which is described in detail in u.s. pat. nos. 4,683,195, 4,683,202 and 4,800,159, and in innis et al., 1990, each of which is incorporated herein by reference in its entirety. a reverse transcriptase pcr amplification procedure may be performed in order to quantify the amount of mrna amplified. methods of reverse transcribing rna into cdna are well known and described in sambrook et al., 1989. alternative methods for reverse transcription utilize thermostable dna polymerases. these methods are described in wo 90/07641 filed dec. 21, 1990. polymerase chain reaction methodologies are well known in the art. other amplification methods are known in the art besides pcr such as lcr (ligase chain reaction), disclosed in european application no.
320 308, incorporated herein by reference in its entirety. in another embodiment, qbeta replicase may also be used as still another amplification method in the present invention. in this method, a replicative sequence of rna which has a region complementary to that of a target is added to a sample in the presence of an rna polymerase. the polymerase will copy the replicative sequence which may then be detected. nucleic acid synthesis: those skilled in the art will readily recognize a range of methods and apparatuses for synthesizing desired nucleic acid sequences. by way of example and not of limitation, in a series of embodiments plasmids were synthesized in silico by geneart®, life technologies™. while the scope of methods for making embodiments includes any suitable methods (for example, polymerase chain reaction, i.e., pcr, and nucleic acid sequence based amplification, i.e., nasba) for amplifying at least a portion of the microorganism's genetic material, as one example, the present disclosure describes embodiments with reference to the pcr technique. amplification of a genetic material, e.g., dna, is well known in the art. see, for example, u.s. pat. nos. 4,683,202 and 4,994,370, which are incorporated herein by reference in their entirety. by knowing the nucleotide sequences of desired genetic material or target nucleic acid sequence, specific primer sequences can be designed. in one embodiment of the present invention, the primer is about, but not limited to, 5 to 50 nucleotides long, or about 10 to 40 nucleotides long, or about 10 to 30 nucleotides long. suitable primer sequences can be readily synthesized by one skilled in the art or are readily available from third party providers such as brl (new england biolabs™), etc. other reagents, such as dna polymerases and nucleotides, that are necessary for a nucleic acid sequence amplification such as pcr are also commercially available.
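primer design as described above usually involves taking the reverse complement of one strand and estimating a melting temperature; the wallace rule, tm = 2(a+t) + 4(g+c), is a common rough estimate for short oligos. a minimal sketch with a hypothetical template sequence (not one of the primers of seq id nos 35-40):

```python
# reverse complement and wallace-rule tm estimate for short primers.
# the template sequence is hypothetical, invented for this illustration.

COMP = str.maketrans("ACGT", "TGCA")

def reverse_complement(seq):
    """reverse complement of a dna sequence, returned 5'->3'."""
    return seq.upper().translate(COMP)[::-1]

def wallace_tm(primer):
    """wallace rule: tm = 2*(a+t) + 4*(g+c), in degrees c."""
    p = primer.upper()
    return 2 * (p.count("A") + p.count("T")) + 4 * (p.count("G") + p.count("C"))

template = "ATGCGTACCTGA"
print(reverse_complement(template))        # TCAGGTACGCAT
print(wallace_tm("ATGCGTACCTGAATGCGTAC"))  # 60
```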
separation methods: following amplification, it may be desirable to separate the amplification product from the template and the excess primer for the purpose of determining whether specific amplification has occurred. in one embodiment, amplification products are separated by agarose, agarose-acrylamide or polyacrylamide gel electrophoresis using standard methods. see sambrook et al., 1989. alternatively, chromatographic techniques may be employed to effect separation. there are many kinds of chromatography which may be used in the present invention: adsorption, partition, ion-exchange and molecular sieve, and many specialized techniques for using them including column, paper, thin-layer and gas chromatography (freifelder, 1982). identification methods: amplification products must be visualized in order to confirm amplification of the desired sequences. one typical visualization method involves staining of a gel with ethidium bromide and visualization under uv light. alternatively, if the amplification products are integrally labeled with radio- or fluorometrically-labeled nucleotides, the amplification products may then be exposed to x-ray film or visualized under the appropriate stimulating spectra, following separation. in one embodiment, visualization is achieved indirectly. following separation of amplification products, a labeled nucleic acid probe is brought into contact with the amplified marker sequence. the probe preferably is conjugated to a chromophore but may be radiolabeled. in another embodiment, the probe is conjugated to a binding partner, such as an antibody or biotin, where the other member of the binding pair carries a detectable moiety. in one embodiment, detection is by southern blotting and hybridization with a labeled probe. the techniques involved in southern blotting are well known to those of skill in the art and may be found in many standard books on molecular protocols. see sambrook et al., 1989.
briefly, amplification products are separated by gel electrophoresis. the gel is then contacted with a membrane, such as nitrocellulose, permitting transfer of the nucleic acid and non-covalent binding. subsequently, the membrane is incubated with a chromophore-conjugated probe that is capable of hybridizing with a target amplification product. detection is by exposure of the membrane to x-ray film or ion-emitting detection devices. in general, prokaryotes used for cloning dna sequences in constructing the vectors useful in the invention include, for example, gram negative bacteria such as e. coli and e. coli strain k12, as well as bifidobacteria and staphylococcus. other microbial strains which may be used include p. aeruginosa strain pao1, the e. coli b strain, and bifidobacteria. these examples are illustrative rather than limiting. in particular embodiments steps in the construction of vectors may include cloning and propagation in suitable e. coli strains or in suitable bifidobacterium strains. in general, plasmid vectors containing promoters and control sequences which are derived from species compatible with the host cell are used with these hosts. the vector ordinarily carries a replication site as well as one or more marker sequences which are capable of providing phenotypic selection in transformed cells. for example, a pbbr1 replicon region which is useful in many gram negative bacterial strains, or any other replicon region that is of use in a broad range of gram negative host bacteria, can be used in the present invention. the term “recombinant polypeptide”, “recombinant protein” or “fusion protein” or “hybrid protein” or like terms is used herein to refer to polypeptides that have been artificially designed and which comprise at least two polypeptide sequences that are not found as contiguous polypeptide sequences in their initial natural environment, or to refer to polypeptides which have been expressed from a recombinant polynucleotide.
in particular embodiments, fusion proteins contain joining sequences to join protein domains that are not normally associated. in broad concept, in embodiments, fusion proteins comprise at least one sequence selected from the group consisting of: a dna binding domain, a secretion signal sequence, and a trans-activator of transcription that is functional in eukaryotic cells. in further embodiments fusion proteins comprise a cpp domain and in embodiments the cpp domain comprises a tat domain. the hybrid proteins comprising signal sequence, transduction domain and dna binding domain, are also referred to herein as simply “hybrid proteins” or as “nhps” depending on the context. it will be understood that the different domains comprised in hybrid proteins according to embodiments may be present in any arrangement and any combination and any numbers of copies consistent with their function. thus in embodiments signal sequences may be internal or may be terminal signal sequences and there may be multiple copies of one or more of each of the domains. in embodiments vectors may comprise one, two, three, four or more dna sequences suitable for binding by the one or more dna binding domains comprised in a hybrid protein according to embodiments. in embodiments vectors comprise a prokaryotic expression cassette for expressing an nhp in the gram positive bacterium and a eukaryotic expression cassette for expressing a candidate gene or sequence in a eukaryotic cell. it will be understood that in alternative embodiments the prokaryotic expression cassette will contain other proteins desired to be expressed in the target gram positive bacterium, instead of one of the hybrid proteins disclosed herein. thus in embodiments, vectors may be used to express a sequence or gene of interest in a prokaryotic cell. in this disclosure the terms sht, szt, slt, smt are acronyms indicating currently preferred forms of the nhp. 
in the foregoing acronym descriptors, z indicates the presence of at least one zinc finger dna binding domain, an example of the dna sequence coding for which is presented as seq id no: 6 and the amino acid sequence of which is presented as seq id no: 46. h means the presence of at least one hu dna binding domain, the amino acid sequence of which is presented as seq id no: 44. t means a cpp or protein transduction domain, examples of which are presented as seq id nos 11 through 17. s means a suitable signal sequence, examples of which are presented as seq id nos 18 through 23. m means the presence of at least one mer r dna binding domain, an example of the dna sequence coding for which is presented as seq id no: 5 and the protein sequence of which is presented as seq id no: 45. l represents a lac-i dna binding domain, which is presented herein as seq id no: 47, it being understood that the inventors have found that hybrid proteins comprising the lac-i dna binding domain are not effective for the purposes hereof and are not embodiments of the subject matter claimed herein. in embodiments presented, the signal sequence is an alpha-l-arabinosidase signal sequence (seq id nos 22, 23), but the alpha amylase signal sequence (seq id nos 18, 19) or a truncated version thereof (seq id nos 20, 21) are non-limiting alternatives. in embodiments the transduction domain t is tat (seq id no: 11), but a number of non-limiting possible alternatives are presented herein. sequences of exemplary sht, smt and szt hybrid proteins are presented as seq id nos 24, 26 and 27. it will be understood that variant signal domains, cpp domains and dna binding domains may be comprised in the hybrid proteins.
thus by way of example and not limitation, the s or signal sequence domain may be any suitable signal sequence, non-limiting examples being the alpha-l-arabinosidase signal sequence, the alpha amylase signal sequence, and truncated versions of these and other signal sequences. the foregoing sequences, or their corresponding dna sequences, are presented as seq id nos 18 through 23. possible cpp domains (identified as “t” in the abbreviations for the hybrid proteins according to embodiments) include tat, antp, rev, vp22, p-beta mpg (gp41-sv40), transportan (galanin-mastoparan) and pep-1 (trp-rich motif-sv40), or the biologically effective portions thereof. amino acid sequences of the foregoing are presented as seq id nos 11-17. it will be understood that in embodiments the foregoing hybrid protein constructions will be comprised in the prokaryotic gene expression cassette of the vector, and the foregoing prokaryotic expression cassette will be designed to be functional in a desired bacterial strain. in particular embodiments such strain is a gram positive bacterium and in embodiments is bifidobacterium. in embodiments it will be staphylococcus. it will likewise be understood that in embodiments the desired dna for transformation into a target cell comprises a mammalian gene expression cassette for expression of a candidate sequence in the target cell. in embodiments the mammalian cell is a human cell. the term “expression cassette” is used herein to describe a dna sequence context comprising one or more locations into which a selected sequence may be inserted so as to be expressible as rna. in general an expression cassette will comprise a promoter, a transcription start site and a transcription termination site. a sequence inserted at an appropriate location, between the transcription start and termination sites, will then be expressed when the cassette is introduced into a suitable host cell.
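the acronym scheme described above (s, t, h, m and z denoting signal, transduction and dna binding domains) amounts to concatenating domain sequences in the stated order. the sketch below illustrates only the naming convention: apart from the tat motif (seq id no: 11), the domain sequences are short placeholders invented for the example, not the actual domains of this disclosure.

```python
# assemble a hybrid (nhp-style) protein sequence from named domains.
# apart from tat, the domain sequences are placeholders for illustration.

DOMAINS = {
    "s": "MKKLLAVAVA",   # placeholder secretion signal
    "t": "YGRKKRRQRRR",  # tat cpp motif (seq id no: 11)
    "h": "MNKTELIDAI",   # placeholder hu dna binding fragment
    "m": "MEKNLENLTI",   # placeholder mer r dna binding fragment
    "z": "MERPYACPVE",   # placeholder zinc finger fragment
}

def build_hybrid(acronym):
    """map an acronym such as 'sht' or 'smt' to a concatenated sequence."""
    return "".join(DOMAINS[c] for c in acronym.lower())

sht = build_hybrid("sht")
print(len(sht))  # 31: 10 + 10 + 11 residues
```

the same call with "smt" or "szt" yields the other preferred arrangements; a real construct would of course be expressed from the prokaryotic cassette, not assembled as a string.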
in embodiments the flanking sequences will be chosen to function in the chosen cell. those skilled in the art will thus insert suitable sequences to be expressed in appropriate expression cassettes, so that sequences desired to be expressed in a prokaryotic cell will be in a suitable sequence context and those desired to be expressed in eukaryotic cells will likewise be in a suitable sequence context. those skilled in the art will readily adjust the sequences, structure and other aspects of the cassette to suit particular purposes or to function in particular bacterial strains. in one series of embodiments the coding sequences for a hybrid protein according to embodiments are bounded by a hu promoter and transcription initiation site and a hu transcription termination site. in another embodiment (pfrg1.5, fig. 1 , seq id no: 2) the promoter is a ribosomal rna promoter found in bifidobacteria, and the terminator is a ribosomal rna terminator found in bifidobacteria. in embodiments the eukaryotic expression cassette comprises a suitable kozak sequence. in this disclosure the terms “secretion signal”, “secretion sequence”, “secretion signal sequence”, “signal sequence” and the like refer to a protein motif that is effective to cause secretion of the protein across a cell membrane. in embodiments a secretion sequence is, or is derived from, the alpha-l-arabinosidase signal sequence, or is an alpha-amylase signal sequence or a truncated alpha amylase signal sequence. protein sequences for the foregoing are presented as seq id nos 23, 19 and 21 respectively, and the encoding dna sequences are presented as seq id nos 22, 18 and 20 respectively. those skilled in the art will readily identify and implement suitable signal sequences which are useable in alternative embodiments.
one listing of possible signal sequences is to be found in the signal sequence database at <www.signalpeptide.de> and in other resources such as spdb, the signal peptide resource at <http://proline.bic.nus.edu.sg>. while thousands of suitable signal sequences will be identified by those skilled in the art, using available databases, screening methodologies and well known techniques, non-limiting examples of sources for suitable signal sequences for use in alternative embodiments can optionally be derived from a range of secreted enzymes and other proteins, examples including carbohydrases such as amylases, sucrases and galactosidases, as well as monosaccharide transferases, lipases, phospholipases, reductases, oxidases, peptidases, transferases, methylases, ethylases, cellulases, ligninases, secreted signalling proteins, toxins and all manner of other secreted proteins. construction of suitable vectors containing the desired coding and control sequences can be achieved employing standard ligation techniques. isolated plasmids or dna fragments are cleaved, tailored, and religated in the form desired to form the plasmids required. in embodiments the vectors are synthesized de novo using suitable nucleic acid synthesis procedures as explained elsewhere herein. a range of alternative promoters, polyadenylation signals, dna binding domains, cell penetrating peptide domains, signal sequences and the like will be readily identified by those skilled in the art using conventional methods and resources. for analysis to confirm correct sequences in plasmids constructed, the ligation mixtures may be used to transform a bacterial strain such as e.
coli k12 and successful transformants selected by antibiotic resistance using suitable selection markers. in embodiments the selection markers confer resistance to antibiotics such as tetracycline, ampicillin, spectinomycin, penicillin, kanamycin, gentamycin, zeomycin, methicillin, hygromycin b and others, all of which will be immediately recognized and used by those skilled in the art. plasmids from the transformants are prepared and analyzed by restriction digestion and/or sequencing. in alternative embodiments bacteria may be selected using suitable auxotrophic selection markers, non-limiting examples of which include the leu2 and ura3 selection markers whose nature and use are well understood by those skilled in the art. in particular embodiments the selection marker confers resistance to antibiotics effective against both gram positive and gram negative bacteria and in embodiments this is spectinomycin, tetracycline, chloramphenicol or erythromycin. in alternative embodiments the vector comprises multiple selection markers to permit selection using different antibiotics in gram positive and gram negative bacteria. host cells can be transformed with nucleic acid vectors of this invention and cultured in conventional nutrient media modified as is appropriate for inducing promoters, selecting transformants or amplifying genes. the culture conditions, such as temperature, ph and the like, will be apparent to the ordinarily skilled artisan. "transformation" refers to the taking up of a vector or of a desired dna sequence by a host cell whether or not any coding sequences are in fact expressed. numerous transformation methods are known to the ordinarily skilled artisan, for example treatment with ca salts and electroporation. successful transformation is generally recognized when any indication of the operation of the vector or stable propagation of the introduced dna occurs within the host cell.
it will be understood that the vectors and hybrid proteins disclosed herein are of particular value where standard or commonly used transformation procedures are not reliable. as used interchangeably herein, the terms “nucleic acid molecule(s)”, “oligonucleotide(s)”, and “polynucleotide(s)” and “nucleic acids” and the like include rna or dna (either single or double stranded, coding, complementary or antisense), or rna/dna hybrid sequences of more than one nucleotide in either single chain or duplex form (although each of the above species may be particularly specified), as is consistent or necessary in context. the term “nucleotide” is used herein as an adjective to describe molecules comprising rna, dna, or rna/dna hybrid sequences of any length in single-stranded or duplex form. more precisely, the expression “nucleotide sequence” encompasses the nucleic material itself and is thus not restricted to the sequence information (i.e. the succession of letters chosen among the four base letters) that biochemically characterizes a specific dna or rna molecule. the term “nucleotide” is also used herein as a noun to refer to individual nucleotides or varieties of nucleotides, meaning a molecule, or individual unit in a larger nucleic acid molecule, comprising a purine or pyrimidine, a ribose or deoxyribose sugar moiety, and a phosphate group, or phosphodiester linkage in the case of nucleotides within an oligonucleotide or polynucleotide. the term “upstream” is used herein to refer to a location which is toward the 5′ end of the polynucleotide from a specific reference point. 
the terms "base paired" and "watson & crick base paired" are used interchangeably herein to refer to nucleotides which can be hydrogen bonded to one another by virtue of their sequence identities in a manner like that found in double-helical dna with thymine or uracil residues linked to adenine residues by two hydrogen bonds and cytosine and guanine residues linked by three hydrogen bonds (see stryer, 1995, which disclosure is hereby incorporated by reference in its entirety). the terms "complementary" or "complement thereof" are used herein to refer to the sequences of polynucleotides which are capable of forming watson & crick base pairing with another specified polynucleotide throughout the entirety of the complementary region. for the purpose of the present invention, a first polynucleotide is deemed to be complementary to a second polynucleotide when each base in the first polynucleotide is paired with its complementary base. complementary bases are, generally, a and t (or a and u), or c and g. "complement" is used herein as a synonym for "complementary polynucleotide", "complementary nucleic acid" and "complementary nucleotide sequence". these terms are applied to pairs of polynucleotides based solely upon their sequences and not on any particular set of conditions under which the two polynucleotides would actually bind. unless otherwise stated, all complementary polynucleotides are fully complementary over the whole length of the considered polynucleotide. digestion of dna refers to catalytic cleavage of the dna with a restriction enzyme that acts only at certain sequences in the dna. the various restriction enzymes used herein are commercially available and their reaction conditions, cofactors and other requirements are used as would be known to the ordinarily skilled artisan. for analytical purposes, typically 1 μg of plasmid or dna fragment is used with about 2 units of enzyme in about 20 μl of buffer solution.
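the watson & crick complementarity rule defined above (a with t or u, c with g, pairing over the full length of the region) can be captured in a few lines of code. the sketch below is illustrative only and forms no part of the disclosure:

```python
# Minimal sketch of the complementarity definition given above:
# A pairs with T, C pairs with G; a polynucleotide is "fully
# complementary" when every base pairs with its counterpart.

COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def reverse_complement(seq: str) -> str:
    """Return the reverse complement of a DNA sequence (5'->3')."""
    return "".join(COMPLEMENT[b] for b in reversed(seq.upper()))

def is_fully_complementary(a: str, b: str) -> bool:
    """True when b is the full-length Watson & Crick complement of a."""
    return len(a) == len(b) and reverse_complement(a) == b.upper()
```

for rna, u would simply replace t in the lookup table; production work would use an established library such as biopython rather than this hand-rolled helper.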
for the purpose of isolating dna fragments for plasmid construction, typically 5 to 50 μg of dna are digested with 20 to 250 units of enzyme in a larger volume. appropriate buffers and substrate amounts for particular restriction enzymes are specified by the manufacturer. incubation times of about 1 hour at 37° c. are ordinarily used, but may vary in accordance with the supplier's instructions. after digestion the reaction is electrophoresed directly on a polyacrylamide gel to isolate the desired fragment. recovery or isolation of a given fragment of dna from a restriction digest means separation of the digest on polyacrylamide or agarose gel by electrophoresis, identification of the fragment of interest by comparison of its mobility versus that of marker dna fragments of known molecular weight, removal of the gel section containing the desired fragment, and separation of the gel from dna. this procedure is known generally (lawn, r. et al., nucleic acids res. 9: 6103 6114 [1981], and goeddel, d. et al., nucleic acids res. 8: 4057 [1980]). dephosphorylation refers to the removal of the terminal 5′ phosphates by treatment with bacterial alkaline phosphatase (bap). this procedure prevents the two restriction cleaved ends of a dna fragment from “circularizing” or forming a closed loop that would impede insertion of another dna fragment at the restriction site. procedures and reagents for dephosphorylation are conventional (maniatis, t. et al., molecular cloning, 133 134 cold spring harbor, [1982]). reactions using bap are carried out in 50 mm tris at 68° c. to suppress the activity of any exonucleases which may be present in the enzyme preparations. reactions are run for 1 hour. following the reaction the dna fragment is gel purified. ligation refers to the process of forming phosphodiester bonds between two double stranded nucleic acid fragments (maniatis, t. et al., id. at 146). 
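the quantities recited above, about 2 units of enzyme per 1 μg of dna for an analytical digest, and approximately equimolar amounts of the fragments to be ligated, amount to simple bench arithmetic. the sketch below is illustrative only; the 2-fold excess factor is an assumption for completeness of cutting, not a figure from this disclosure:

```python
# Hedged sketch of the digest and ligation arithmetic described above.

UNITS_PER_UG = 2.0  # analytical digest: ~2 units of enzyme per 1 ug DNA

def digest_units(dna_ug: float, excess: float = 2.0) -> float:
    """Enzyme units for a preparative digest, scaled from the
    analytical ratio with an arbitrary excess factor (assumption)."""
    return dna_ug * UNITS_PER_UG * excess

def equimolar_insert_ng(vector_ng: float, vector_bp: int,
                        insert_bp: int) -> float:
    """Nanograms of insert equimolar with vector_ng of vector:
    moles of a linear fragment scale as mass / length."""
    return vector_ng * insert_bp / vector_bp
```

for example, a 20 μg preparative digest at the analytical ratio with a 2-fold excess calls for 80 units of enzyme, within the 20 to 250 unit range given above.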
unless otherwise provided, ligation may be accomplished using known buffers and conditions with 10 units of t4 dna ligase ("ligase") per 0.5 μg of approximately equimolar amounts of the dna fragments to be ligated. filling or blunting refers to the procedures by which the single stranded end in the cohesive terminus of a restriction enzyme-cleaved nucleic acid is converted to a double strand. this eliminates the cohesive terminus and forms a blunt end. this process is a versatile tool for converting a restriction cut end that may be cohesive with the ends created by only one or a few other restriction enzymes into a terminus compatible with any blunt-cutting restriction endonuclease or other filled cohesive terminus. in one embodiment, blunting is accomplished by incubating around 2 to 20 μg of the target dna in 10 mm mgcl 2 , 1 mm dithiothreitol, 50 mm nacl, 10 mm tris (ph 7.5) buffer at about 37° c. in the presence of 8 units of the klenow fragment of dna polymerase i and 250 μm of each of the four deoxynucleoside triphosphates. the incubation generally is terminated after 30 min, followed by phenol and chloroform extraction and ethanol precipitation. the terms "polypeptide" and "protein", used interchangeably herein, refer to a polymer of amino acids without regard to the length of the polymer; thus, peptides, oligopeptides, and proteins are included within the definition of polypeptide. this term also does not specify or exclude chemical or post-expression modifications of the polypeptides of the invention, although chemical or post-expression modifications of these polypeptides may be included or excluded as specific embodiments. therefore, for example, modifications to polypeptides that include the covalent attachment of glycosyl groups, acetyl groups, phosphate groups, lipid groups and the like are expressly encompassed by the term polypeptide.
further, polypeptides with these modifications may be specified as individual species to be included in or excluded from the present invention. the natural or other chemical modifications, such as those listed in the examples above, can occur anywhere in a polypeptide, including the peptide backbone, the amino acid side-chains and the amino or carboxyl termini. it will be appreciated that the same type of modification may be present in the same or varying degrees at several sites in a given polypeptide. also, a given polypeptide may contain many types of modifications. polypeptides may be branched, for example, as a result of ubiquitination, and they may be cyclic, with or without branching. modifications include acetylation, acylation, adp-ribosylation, amidation, covalent attachment of flavin, covalent attachment of a heme moiety, covalent attachment of a nucleotide or nucleotide derivative, covalent attachment of a lipid or lipid derivative, covalent attachment of phosphotidylinositol, cross-linking, cyclization, disulfide bond formation, demethylation, formation of covalent cross-links, formation of cystine, formation of pyroglutamate, formylation, gamma-carboxylation, glycosylation, gpi anchor formation, hydroxylation, iodination, methylation, myristoylation, oxidation, pegylation, proteolytic processing, phosphorylation, prenylation, racemization, selenoylation, sulfation, transfer-rna mediated addition of amino acids to proteins such as arginylation, and ubiquitination (see, for instance, creighton (1993); seifter et al. (1990); rattan et al. (1992)).
also included within the definition are polypeptides which contain one or more analogs of an amino acid (including, for example, non-naturally occurring amino acids, amino acids which only occur naturally in an unrelated biological system, modified amino acids from mammalian systems, etc.), polypeptides with substituted linkages, as well as other modifications known in the art, both naturally occurring and non-naturally occurring. as used herein, the term "operably linked" refers to a linkage of polynucleotide elements in a functional relationship. a sequence which is "operably linked" to a regulatory sequence such as a promoter means that said regulatory element is in the correct location and orientation in relation to the nucleic acid to control rna polymerase initiation and expression of the nucleic acid of interest. for instance, a promoter or enhancer is operably linked to a coding sequence if it affects the transcription of the coding sequence. in this disclosure the term "shuttle vector" means a dna vector which is able to replicate in both gram positive and gram negative bacteria and comprises a eukaryotic expression cassette suitable to express a sequence or gene of interest when introduced to a eukaryotic cell and a prokaryotic expression cassette able to express a gene of interest in a prokaryotic cell. in embodiments the first and second strains of host bacteria are from different genera, in embodiments they are from different species, and in embodiments they are from different subspecies. in embodiments the gram negative bacterium is an e. coli and in embodiments the gram positive bacterium is lactococcus, lactobacillus, bifidobacterium , or staphylococcus . it will be understood that a wide variety of combinations of first and second strains of host bacteria are possible in alternative embodiments and the foregoing exemplary combination of e. coli with other strains is in no way limiting. in embodiments shuttle vectors are plasmids.
thus the nucleic acid vectors disclosed herein are shuttle vectors able to replicate in both gram positive and gram negative bacteria of suitable types. a "promoter" refers to a dna sequence recognized by the synthetic machinery of the cell, or introduced synthetic machinery, required to initiate the specific transcription of a gene. the phrase "under transcriptional control" means that the promoter is in the correct location and orientation in relation to the nucleic acid to control rna polymerase initiation and expression of the gene. the particular promoter employed to control the expression of a nucleic acid sequence of interest is not believed to be important, so long as it is capable of directing the expression of the nucleic acid in the targeted cell. in embodiments the target eukaryotic cell is a mammalian cell. in embodiments the mammalian cell is a human cell and in embodiments is a cancer cell. in embodiments the cancer cell is a lung cancer cell, a colon (colorectal) cancer cell, a kidney cancer cell, or an ovarian cancer cell. in particular embodiments the cell is hek-293 (human embryonic kidney); ht29 or caco2 (both human adenocarcinoma); hela; or ll2 (lung carcinoma). in embodiments the cells are cultured cells. where a cdna insert is employed, one will typically include a polyadenylation signal to effect proper polyadenylation of the gene transcript. the nature of the polyadenylation signal is not believed to be crucial to the successful practice of the invention, and any such sequence may be employed. also contemplated as an element of the expression construct is a terminator. these elements can serve to enhance message levels and to minimize read through from the construct into other sequences. kits: in some embodiments, there are disclosed kits comprising the vectors disclosed herein. in embodiments the kits comprise a quantity of vector dna. embodiments of exemplary nucleic acid vectors and their sequences are presented as figs.
1 and 2 and seq id nos 1 and 2. in embodiments kits comprise a quantity of nhp. in embodiments the kits comprise instructions. the container means of the kits will generally include at least one vial, test tube, flask, bottle, syringe or other container means, into which the vector, cells and/or primers may be placed, and preferably, suitably aliquoted. where an additional component is provided, the kit will also generally contain additional containers into which this component may be placed. the kits of the present invention will also typically include a means for containing the probes, primers, and any other reagent containers in close confinement for commercial sale. in embodiments a kit will include injection or blow-molded plastic containers into which the desired vials are retained and in embodiments a kit will include instructions regarding the use of the materials comprised in the kit. in this disclosure “gene of interest”, “sequence of interest”, “candidate gene” or “cargo gene” and like terms, refer to a sequence or gene that it is desired to transform into a target cell or that it is desired to express in a target cell. in embodiments hereof such sequences and genes are incorporated into vectors capable of expression of such sequence in particular target cells. in this disclosure the term “target cell” means a cell into which it is desired to introduce a candidate or cargo gene or sequence. in embodiments a target cell is a eukaryotic cell and in embodiments is a mammalian cell. in embodiments a target cell is a prokaryotic cell. in embodiments, vectors contain sequences necessary for efficient transcription and translation of specific genes or sequences encoding specific mrna or sirna sequences, in a target probiotic cell and may thus comprise transcription initiation and termination sites, enhancers and the like. similarly any expressed rna may comprise suitable translation start sites, ribosome binding sites, and the like. 
such sites and elements will be readily identified by those skilled in the art. more generally the term "expression cassette" is used herein to describe a dna sequence context comprising one or more locations into which a selected sequence may be inserted so as to be expressible when present in a suitable cell type. in general an expression cassette will comprise a promoter, transcription start site, transcription termination site and other necessary or desirable sequences. in embodiments the expression cassette comprises a multiple cloning site suitable to permit the convenient insertion thereinto of a dna sequence having compatible ends, and is able to be expressed as a translatable rna by suitable cell types. in embodiments, rather than having a multiple cloning site, each expression cassette has restriction cut sites flanking the 5′ and 3′ ends of the promoter, gene of interest and terminator. it will be understood that in order for a protein to be expressed the insertion of the dna sequence must be in the correct reading frame. a sequence inserted at an appropriate location, between the transcription start and termination sites, will then be expressed when the cassette is introduced into a suitable host cell and in embodiments such host cell is a bifidobacterial cell. it will be understood that where a chosen nucleotide sequence is said to be inserted into an expression cassette, or where it is said that an expression cassette comprises or includes a chosen nucleotide sequence, then such chosen nucleotide sequence will be inserted into such expression cassette in a form suitable for transcription and/or translation to generate a biologically active form of any protein or oligopeptide encoded thereby or will be expressible in the form of a suitable rna species. those skilled in the art will readily adjust the sequences, structure and other aspects of the cassette to suit particular purposes or to function in particular bacterial strains.
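as noted above, an inserted coding sequence must lie in the correct reading frame to be expressed. a naive computational sanity check for a candidate insert can be sketched as follows (illustrative only; in practice constructs are verified by sequencing):

```python
# Illustrative sketch: check that a candidate coding insert is a
# clean open reading frame -- starts with ATG, has a length that is
# a multiple of 3, ends in a stop codon, and has no internal stops.

STOPS = {"TAA", "TAG", "TGA"}

def is_clean_orf(seq: str) -> bool:
    """True when seq is an in-frame ORF with a single terminal stop."""
    seq = seq.upper()
    if len(seq) % 3 != 0 or not seq.startswith("ATG"):
        return False
    codons = [seq[i:i + 3] for i in range(0, len(seq), 3)]
    return codons[-1] in STOPS and not any(c in STOPS for c in codons[:-1])
```

this check only validates the frame of the insert itself; whether the insert is in frame with flanking vector sequences depends on the cassette design.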
for clarity, this disclosure refers to both first and second expression cassettes, non-limiting examples of which are presented as seq id nos 48 and 49. in embodiments the first expression cassette is a prokaryotic expression cassette, a non-limiting example being presented as seq id no: 48, and serves to express a desired fusion protein according to embodiments, and the second expression cassette is a eukaryotic expression cassette and in embodiments serves to express a candidate dna in a target cell. thus in embodiments fusion proteins are encoded by a prokaryotic gene expression cassette and a cargo gene or sequence of interest is comprised in a eukaryotic or mammalian gene expression cassette. in embodiments the construction of suitable vectors containing the desired coding and control sequences is achieved using standard ligation techniques. isolated plasmids or dna fragments are cleaved, tailored, and religated in the form desired to form the plasmids required. in alternative embodiments vectors, inserts, plasmids and any other nucleic acid sequences of interest are synthesized de novo using standard nucleic acid synthesis techniques. in a series of embodiments plasmids were synthesized in silico by geneart®, life technologies™, (gene art/life technologies™, im gewerbepark b35, regensburg, 93059 germany). examples of other commercial nucleic acid synthesis providers are: dna2.0, 1140 o'brian drive suite a, menlo park calif., 94025, whose website is to be found at www.dna20.com; and genewiz, 115 corporate boulevard, south plainfield, n.j. 07080, with a website at www.genewiz.com. without limitation, the direct synthesis of desired sequences of nucleic acids and amino acids will be readily achieved by those skilled in the art using a range of known techniques.
exemplary references describing relevant synthetic methods are described in the following publications, the content of which is incorporated herein in its entirety to the full extent permissible by law: khorana h g, agarwal k l, büchi h et al. (december 1972). “studies on polynucleotides. 103. total synthesis of the structural gene for an alanine transfer ribonucleic acid from yeast”. j. mol. biol. 72 (2): 209-217; itakura k, hirose t, crea r et al. (december 1977). “expression in escherichia coli of a chemically synthesized gene for the hormone somatostatin”. science 198 (4321): 1056-1063; edge m d, green a r, heathcliffe g r et al. (august 1981). “total synthesis of a human leukocyte interferon gene”. nature 292 (5825): 756-62; “difficult to express proteins”. sixth annual pegs summit . cambridge healthtech institute. 2010; liszewski, kathy (1 may 2010). “new tools facilitate protein expression”. genetic engineering & biotechnology news . bioprocessing 30 (9) (mary ann liebert). pp. 1, 40-41; welch m, govindarajan m, ness j e, villalobos a, gurney a, minshull j, gustafsson c (2009). kudla, grzegorz, ed. “design parameters to control synthetic gene expression in escherichia coli”. plos one 4 (9): e7002; “protein expression”. dna2.0. retrieved 11 may 2010; fuhrmann m, oertel w, hegemann p (august 1999). “a synthetic gene coding for the green fluorescent protein (gfp) is a versatile reporter in chlamydomonas reinhardtii ”. plant j. 19 (3): 353-61; mandecki w, bolling t j (august 1988). “fokl method of gene synthesis”. gene 68(1): 101-7; stemmer w p, crameri a, ha k d, brennan t m, heyneker h l (october 1995). “single-step assembly of a gene and entire plasmid from large numbers of oligodeoxyribonucleotides”. gene 164 (1): 49-53; gao x, yo p, keith a, ragan t j, harris t k (november 2003). “thermodynamically balanced inside-out (tbio) pcr-based gene synthesis: a novel method of primer design for high-fidelity assembly of longer gene sequences”. nucleic acids res. 
31(22): e143; young l, dong q (2004). “two-step total gene synthesis method”. nucleic acids res. 32 (7): e59; hillson n h, rosengarten r d, keasling j d (2012). “j5 dna assembly design automation software”. acs synthetic biology 1 (1): 14-21; hoover d m, lubkowski j (may 2002). “dnaworks: an automated method for designing oligonucleotides for pcr-based gene synthesis”. nucleic acids res. 30(10): e43; villalobos a, ness j e, gustafsson c, minshull j, govindarajan s (2006). “gene designer: a synthetic biology tool for constructing artificial dna segments”. bmc bioinformatics 7: 285; tian j, gong h, sheng n et al. (december 2004). “accurate multiplex gene synthesis from programmable dna microchips”. nature 432 (7020): 1050-4. similarly those skilled in the art will immediately recognise and use a range of suitable software applications and online resources for the prediction and design of desired nucleotide and protein sequences including hybrid sequences, plasmid and other vector sequences, coding and other sequences. by way of example and not limitation, a variety of online dna databases referred to elsewhere herein contain sequences of a variety of signal peptides, dna binding domains and their recognition sequences, selection markers, resistance genes, auxotrophic mutants, promoters, and origins of replication. thus for example: one listing of possible signal sequences is to be found in the signal sequence database at <http://www.signalpeptide.de/index.php?m=listspdbat> and further information is to be found at the main website at http://www.signalpeptide.de/. possible transduction sequences are described herein and these and other similar sequences will be readily identified using online resources. a range of sequence analysis and manipulation software is readily available and used by those skilled in the art. 
by way of example and not limitation, suitable software for the manipulation and analysis of nucleotide and/or protein sequences includes snapgene, pdraw32, dnastar, blast, cs-blast, fasta, mb-dna analysis software, dnadynamo, plasma dna, sequencher; suitable motif prediction and analysis software includes but is not limited to fmm, pms, emotif, phi-blast, phyloscan, and an exemplary library of protein motifs is i-sites. links to all of the foregoing are to be found on the wikipedia web page at http://en.wikipedia.org/wiki/list_of_sequence_alignment_software which was accessed on jan. 19, 2015. more generally, extensive databases comprising dna, rna, and protein sequences include but are not limited to genbank, refseq, tpa, pdb and the ncbi database at http://www.ncbi.nlm.nih.gov. these and other suitable tools and resources will be readily identified and used by those skilled in the art. description of embodiments: embodiments of the invention are hereafter described with general reference to figs. 1 through 10 and seq id nos 1 through 49, all of which form a part of and are incorporated in this disclosure. first embodiment: in a first general aspect of the embodiment there is disclosed a nucleic acid vector comprising: a first origin of replication for replication in gram-positive bacteria and a second origin of replication for replication in gram-negative bacteria; at least one selection marker for selection in both gram-positive and gram-negative bacteria; a first gene expression cassette functional in gram positive bacteria; and a second gene expression cassette functional in a mammalian cell, and in embodiments this is a human cell. in embodiments the vector comprises a single selection marker effective in both gram positive and gram negative bacteria. a first illustrative embodiment of a vector according to the first embodiment is shown in fig. 1 and the corresponding vector sequence is presented as seq id no: 1.
the design of suitable expression cassettes will be readily understood by one skilled in the art. in the embodiment pbra2.0 sht the first expression cassette for expression in a gram positive bacterium comprises an hu promoter and terminator flanking the nhp encoding sequences as will be seen in fig. 1 . in the embodiment pbra2.0 sht the second expression cassette for expression in a eukaryotic cell, which in embodiments is a mammalian cell and in embodiments is a human cell, comprises the cmv (cytomegalovirus) promoter, a kozak sequence and a thymidine kinase polyadenylation/termination sequence, the foregoing flanking an insertion sequence for the insertion of a cargo sequence. this will be seen in seq id no: 1 wherein a gfp protein is inserted into the second expression cassette. in particular variants of the first embodiment, a first origin of replication is functional in e. coli and second origin of replication is functional in at least one of staphylococcus and bifidobacteria . in further variants a single bifunctional origin of replication is functional in e. coli and in at least one of staphylococcus and bifidobacteria. in the embodiment pbra2.0 sht the vector comprises a puc origin of replication for replication in e. coli and a pb44 origin of replication for replication in bifidobacterium. it will be understood that suitable origins of replication for use in a gram negative bacterium, namely e. coli , include the puc origin of replication presented as seq id no: 29 and a wide range of other gram negative origins, which will be readily identified and adopted by those skilled in the art. suitable origins of replication for bifidobacterium and other gram positive strains of bacteria include pb44 ori presented as seq id no: 30, and the pdojhr ori presented as seq id no: 28, within which the origin is comprised. dna-bs denotes a nucleotide binding site (seq. id. no. 43) for a zinc finger dna binding protein (seq id no: 46). 
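for illustration only, the feature classes required of a shuttle vector of the kind shown in fig. 1 (two origins of replication, a selection marker, and prokaryotic and eukaryotic expression cassettes) can be expressed as a simple checklist. the feature names below are placeholders, not the actual annotations of pbra2.0 sht:

```python
# Illustrative sketch: verify that an annotated vector carries every
# feature class required of the shuttle vectors described above.
# Feature names are hypothetical placeholders.

REQUIRED = {
    "gram_negative_ori",     # e.g. a puc-type origin for e. coli
    "gram_positive_ori",     # e.g. a pb44-type origin for bifidobacterium
    "selection_marker",      # e.g. spectinomycin resistance
    "prokaryotic_cassette",  # expresses the hybrid protein
    "eukaryotic_cassette",   # expresses the cargo gene
}

def missing_features(annotations):
    """Return the required feature classes absent from a vector's
    annotation set."""
    return REQUIRED - set(annotations)
```

a fully annotated vector yields an empty set; anything returned flags a design element still to be added.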
it will therefore be understood that in alternative embodiments the first or gram negative origin of replication will comprise a sequence having at least 90, 91, 92, 93, 94, 95, 96, 97, 98, 99 or 100% sequence identity over at least 20, 40, 60, 80 or more contiguous nucleotides to the puc ( e. coli ) ori presented as seq id no: 29. it will likewise be understood that in alternative embodiments the second or gram positive origin of replication will comprise a sequence having at least 90, 91, 92, 93, 94, 95, 96, 97, 98, 99 or 100% sequence identity over at least 20, 40, 60, 80 or more contiguous nucleotides to the pb44 or pdojhr origins of replication whose sequences are presented as seq id nos 30 and 28 respectively. in embodiments the selection marker is effective in both gram positive and gram negative bacteria. in embodiments the selection marker is resistance to spectinomycin. in embodiments the selection marker comprised in the vector is resistance to tetracycline. in embodiments the selection marker comprised in the vector is resistance to chloramphenicol. in alternative embodiments the vector comprises separate selection markers for gram positive and gram negative bacteria. accordingly, in embodiments the vector comprises a selection marker or a combination of selection markers selected from resistance to ampicillin, penicillin, tetracycline, chloramphenicol, streptomycin, quinolone, fluoroquinolone, gentamycin, neomycin, kanamycin and spectinomycin. those skilled in the art will readily identify additional selection markers and the gene sequences responsible therefor. listings of possible selection markers that can be adopted in variant embodiments are to be found in stryer, sambrook et al. 1989, and maniatis, the contents of which are all incorporated herein by reference wherever permissible by law. in the illustrated embodiments the first or gram negative origin of replication is functional in e. coli .
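the "percent identity over a run of contiguous nucleotides" criterion above can be sketched as a naive ungapped scan. this is illustrative only; a practical comparison would use an alignment tool such as blast:

```python
# Illustrative sketch: highest percent identity between two sequences
# over any aligned window of `window` contiguous nucleotides.
# Naive, ungapped, same-offset scan -- not a full alignment.

def best_window_identity(a: str, b: str, window: int = 20) -> float:
    """Return the best percent identity over a contiguous window."""
    a, b = a.upper(), b.upper()
    best = 0.0
    n = min(len(a), len(b))
    for i in range(0, n - window + 1):
        matches = sum(a[i + j] == b[i + j] for j in range(window))
        best = max(best, 100.0 * matches / window)
    return best
```

under the criterion above, a candidate origin scoring, say, at least 90% over a 20-nucleotide window against seq id no: 29 would fall within the recited range.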
in the illustrated embodiments the second or gram positive origin of replication is functional in at least one of bifidobacterium and staphylococcus . in alternative embodiments the origin is functional in a bacterium selected from the group consisting of bifidobacteria, lactococcus, clostridium, staphylococcus and streptococcus. in the first embodiment the first gene expression cassette encodes a hybrid protein comprising at least one dna binding domain, at least one cell penetrating peptide (cpp) domain, and at least one secretion signal sequence. alternative embodiments of the nhp are possible and examples are disclosed below. in embodiments the nucleic acid vectors according to embodiments further comprise at least one dna motif that binds to the at least one dna binding domain of said hybrid protein. it will be understood that in embodiments the said dna binding domain is sequence specific and that in other embodiments, such as embodiments wherein the nhp comprises an hu domain, the binding is not sequence specific and in such embodiments the vector does not need to comprise a complementary binding motif. second embodiment: a second illustrative embodiment of a plasmid according to the subject matter hereof is presented in fig. 2 and a sequence for an embodiment of the vector is presented as seq id no: 2. this second embodiment is designated pfr1.5 sht, reflecting the identity of the inserted hybrid protein. the vector comprises a gram-positive origin of replication, namely the pdojhr origin of replication (2) whose sequence is comprised within seq id no: 28, and a gram negative origin of replication, namely the puc ori (1) presented as seq id no: 29. the vector comprises prokaryotic and eukaryotic expression cassettes. the prokaryotic expression cassette comprises a ribosomal rna promoter (8) and terminator (10) flanking coding sequences (9) for the hybrid protein sht, defined elsewhere herein.
the mammalian gene expression cassette comprises a cmv promoter (4) and a tk poly a site and terminator (6). in this case the inserted gene is a gfp reporter sequence (5). in the example the plasmid comprises dna sequences (11) suitable for binding to the sequence comprised in the sht hybrid protein. the selectable marker in this plasmid is a spectinomycin resistance gene (3) but alternative selection markers, including those disclosed herein, will be readily identified and adopted by those skilled in the art. it will be understood that alternative forms of hybrid protein, alternative cargo sequences, and alternative protein binding motifs can readily be incorporated in variants of the illustrated embodiment. likewise it will be understood that a wide range of well-known gram negative origins of replication will be selected amongst by those skilled in the art. in a further aspect of the first embodiment there are disclosed hybrid proteins or fusion proteins. the hybrid proteins comprise at least one dna binding domain, at least one cell penetrating peptide (cpp) domain, and at least one secretion signal sequence. in embodiments there is disclosed a hybrid protein as described below or as otherwise disclosed herein. in embodiments the hybrid protein comprises: at least one signal sequence; at least one dna binding domain; and at least one cell penetrating peptide (cpp) domain. it will be understood that in embodiments the protein comprises only one copy of a signal sequence domain, cpp domain and dna binding domain but that in alternative embodiments any one, two or three of such domains may be present in more than one copy, or may comprise one, two, three or more different domains. thus in embodiments, by way of example and not limitation, a protein may comprise multiple copies of the same dna binding domain, or may contain copies of two, three or more different dna binding domains. 
similarly in embodiments the protein may comprise multiple copies of a cpp domain which copies may be the same or may be different. similarly in embodiments the protein may comprise multiple signal sequences. in embodiments the at least one cpp domain is or comprises a tat domain, a vp22 domain, an antp domain or a rev domain. in alternative embodiments the cpp domain is or comprises a p-beta mpg (gp41-sv40) domain, a transportan (galanin mastoparan) domain, or a pep-1 (trp rich motif—sv40) domain. the sequences of such domains are presented herein as seq id nos 11 through 17. in embodiments the at least one secretion signal sequence is selected from the group consisting of an alpha amylase signal sequence and an alpha arabinosidase signal sequence, or a truncated form of such signal sequences. the sequences of these exemplary signal sequences and their corresponding dna sequences are presented as seq id nos 18-23. in embodiments the at least one dna binding domain is or comprises at least one sequence specific dna binding domain. in embodiments the at least one dna binding domain comprises at least one domain selected from the group consisting of: a zinc finger dna binding domain, a homeobox dna binding domain, a merr dna binding domain, and a hu dna binding domain. dna sequences encoding a selected dna binding domain are presented as seq id nos 3 through 6. sequences of variant hybrid proteins comprising the foregoing domains are presented as seq id nos 24 through 27 and corresponding dna sequences are presented as seq id nos 7 through 10. in embodiments the hybrid protein comprises an alpha arabinosidase signal sequence (seq id no: 23), a tat domain (seq id no: 11) and a hu dna binding domain (seq id no: 4). 
in a further series of embodiments there is disclosed a method for transforming a target cell with a desired dna, the method comprising the step of contacting the target cell with a protein-dna complex comprising the desired dna and the hybrid protein according to embodiments. in embodiments the method further comprises the step of contacting said dna with said hybrid protein to form said protein-dna complex. in embodiments the target cell is a eukaryotic cell and in embodiments is a prokaryotic cell and in embodiments is a gram positive prokaryotic cell. in embodiments of the method the dna is a plasmid and in embodiments the plasmid comprises an expression cassette for expressing the hybrid protein in a gram positive bacterium. in embodiments the plasmid comprises a eukaryotic expression cassette and in embodiments the eukaryotic expression cassette comprises inserted thereinto a dna sequence for expression in a target eukaryotic cell. in embodiments the plasmid is a nucleic acid vector according to other embodiments disclosed herein. in other embodiments of the method the at least one dna binding domain is a sequence specific dna binding domain and the plasmid comprises at least one nucleotide sequence for binding to said at least one sequence specific dna binding domain. in embodiments of the method the at least one cpp sequence has at least 90% amino acid sequence identity over 10 contiguous amino acids to seq id no: 11, 12, 13, 14, 15, 16, or 17. in embodiments of the method said at least one signal sequence has at least 90%, 91%, 92%, 93%, 94%, 95%, 96%, 97%, 98%, 99% or 100% amino acid sequence identity over at least 10, 15, 20 or more contiguous amino acids to seq id no: 19, 21, or 23. 
in embodiments of the method said at least one dna binding domain has at least 90%, 91%, 92%, 93%, 94%, 95%, 96%, 97%, 98%, 99% or 100% amino acid sequence identity over at least 10, 15, 20 or more contiguous amino acids to the amino acid sequence encoded by seq id nos 3, 4, 5 or 6. in embodiments of the method the cell to be transformed is a gram positive bacterium and in embodiments is a eukaryotic cell which in embodiments is a mammalian cell and in embodiments is a human cell. in related embodiments there is disclosed a cell transformed with an exogenous dna using the hybrid protein according to embodiments wherein the cell is a gram positive bacterium or is a mammalian cell. in embodiments there is also disclosed the use of the hybrid protein according to embodiments hereof, to transform a gram positive bacterium or a mammalian cell. in embodiments the cell is a human cell. examples of cells transformed using embodiments are disclosed elsewhere herein. in embodiments the cell to be transformed using the hybrid protein is a staphylococcus, streptococcus, bifidobacterium, lactococcus, lactobacillus, clostridium or any other bacterial type disclosed herein. in embodiments the cell to be transformed is a mammalian cell and in embodiments is a bifidobacterium or a staphylococcus. in embodiments there is disclosed a bacterial cell able to synthesize the hybrid protein according to embodiments. in embodiments the cell comprises a plasmid encoding the hybrid protein in a suitable expression context and in embodiments the cell is a bifidobacterium. in embodiments the hybrid protein is synthesised by a bacterial cell and in alternative embodiments the hybrid protein is synthesized in silico using methods well known in the art and as further indicated herein. 
kit embodiments in a further series of embodiments there are disclosed kits for transforming a target cell with a desired dna, the kit comprising a quantity of the hybrid protein according to embodiments, and instructions to: form a complex of the said hybrid protein with the desired dna; and contact the said complex with said target cell. in embodiments the kits comprise suitable media or buffers and solutions for said contacting or contain recipes or components for said media or buffers. in embodiments the hybrid protein is encoded by a vector according to embodiments and is synthesized by a suitable host gram positive bacterium using a suitable expression cassette in the vector. in alternative embodiments the nhp is artificially synthesized using solid phase peptide synthesis according to known techniques. in alternative embodiments the nhp is synthesized by a separate bacterium. in embodiments the purified or enriched nhp is added directly to a solution comprising a vector desired to be transformed into a target prokaryotic or eukaryotic cell, and the mixture containing the dna/protein complex is contacted with the target cell. in embodiments such contacting of the protein and dna occurs under suitable conditions for the formation of the complex to occur. while those skilled in the art will readily determine and optimize the conditions for the binding of particular combinations of hybrid protein and dna using existing resources and the common general knowledge in the art, specific conditions used in non-limiting examples are presented in the examples section hereof. in embodiments of the nhp the dna binding domain is one of a hu dna binding domain, a zinc finger dna binding domain, a homeobox dna binding domain and a merr dna binding domain, or combinations thereof. exemplary dna sequences encoding suitable binding domains are presented as seq id nos 3, 4, 5, 6. 
in a range of alternative embodiments of the embodiments presented herein suitable dna binding domains may be of any general type, including but not limited to helix-turn-helix, zinc finger, leucine zipper, winged helix, winged helix turn helix, helix loop helix, hmg box, wor 3 and rna guided binding domains. illustrative examples of dna binding proteins whose dna binding domains may be utilized in embodiments include histones, histone like proteins, transcription promoters, transcription repressors, transcriptional regulators, which may be drawn from a wide range of alternate sources and operons. in embodiments the dna binding domain is a hu dna binding domain from the bacterial hu dna binding protein. the sequence of the hu binding domain is presented at seq id no: 44 and its encoding dna as seq id no: 4. in embodiments the cpp comprises a tat domain (seq id no: 11), a vp22 protein of herpes simplex virus (seq id no: 14), or the protein transduction domain of the antennapedia (antp) protein (seq id no: 12), or combinations thereof. those skilled in the art will recognise that additional protein transduction domains can be used in alternative variants of the embodiment. in one series of alternative embodiments the cpp domain comprises a rev domain. details of the foregoing and alternative possible transduction domains are described in sugita et al., "comparative study on transduction and toxicity of protein transduction domains" br j pharmacol. march 2008; 153(6): 1143-1152. fig. 11 is a table showing the protein sequences of the four foregoing protein transduction domains and is taken from sugita et al. in embodiments the secretion signal sequence is alpha-l-arabinosidase (seq id no: 23), or a full length (seq id no: 19) or truncated (seq id no: 21) alpha-amylase signal sequence. the dna sequences encoding the foregoing are presented as seq id nos 22, 18 and 20 respectively. 
thus, in embodiments, the nhp comprises an alpha-l-arabinosidase signal sequence (seq id no: 23), a zinc finger dna binding domain (seq id no: 46), and a tat domain (seq id no: 11). thus, in embodiments, the nhp comprises an alpha-l-arabinosidase signal sequence (seq id no: 23), a zinc finger dna binding domain (seq id no: 46), and a vp22 domain (seq id no: 14). thus, in embodiments, the nhp comprises an alpha-l-arabinosidase signal sequence (seq id no: 23), a zinc finger dna binding domain (seq id no: 46), and the protein transduction domain of the antp protein (seq id no: 12). thus, in embodiments, the nhp comprises an alpha-l-arabinosidase signal sequence (seq id no: 23), a zinc finger dna binding domain (seq id no: 46). thus, in embodiments, the nhp comprises an alpha-l-arabinosidase signal sequence (seq id no: 23), a hu dna binding domain (seq id no: 44), and a tat domain (seq id no: 11). thus, in embodiments, the nhp comprises an alpha-l-arabinosidase signal sequence (seq id no: 23), a hu dna binding domain (seq id no: 44), and a vp22 domain (seq id no: 14). thus, in embodiments, the nhp comprises an alpha-l-arabinosidase signal sequence (seq id no: 23), a hu dna binding domain (seq id no: 44), and the protein transduction domain of the antp protein (seq id no: 12). thus, in embodiments, the nhp comprises an alpha-l-arabinosidase signal sequence (seq id no: 23), a hu dna binding domain (seq id no: 44). a) variants based on alpha-l-arabinosidase thus, in embodiments, the nhp comprises an alpha-l-arabinosidase signal sequence (seq id no: 23), a mer r dna binding domain (seq id no: 41), and a tat domain (seq id no: 11). thus, in embodiments, the nhp comprises an alpha-l-arabinosidase signal sequence (seq id no: 23), a mer r dna binding domain (seq id no: 41), and a vp22 domain (seq id no: 14). 
thus, in embodiments, the nhp comprises an alpha-l-arabinosidase signal sequence, a mer r dna binding domain (seq id no: 41), and the protein transduction domain of the antp protein (seq id no: 12). thus, in embodiments, the nhp comprises an alpha-l-arabinosidase signal sequence, a mer r dna binding domain (seq id no: 41). thus, in embodiments, the nhp comprises an alpha-l-arabinosidase signal sequence (seq id no: 23), a hu dna binding domain (seq id no: 44), and a tat domain (seq id no: 11). thus, in embodiments, the nhp comprises an alpha-l-arabinosidase signal sequence, a hu dna binding domain (seq id no: 44), and a vp22 domain (seq id no: 14). thus, in embodiments, the nhp comprises an alpha-l-arabinosidase signal sequence, a hu dna binding domain (seq id no: 44), and the protein transduction domain of the antp protein (seq id no: 12). thus, in embodiments, the nhp comprises an alpha-l-arabinosidase signal sequence, a hu dna binding domain (seq id no: 44). b) variants based on alpha amylase signal sequence thus, in embodiments, the nhp comprises an alpha-amylase signal sequence (seq id nos 19, 21), a zinc finger dna binding domain (seq id no: 46), and a tat domain (seq id no: 11). thus, in embodiments, the nhp comprises an alpha-amylase signal sequence (seq id nos 19, 21), a zinc finger dna binding domain (seq id no: 46), and a vp22 domain (seq id no: 14). thus, in embodiments, the nhp comprises an alpha-amylase signal sequence (seq id nos 19, 21), a zinc finger dna binding domain (seq id no: 46), and the protein transduction domain of the antp protein (seq id no: 12). thus, in embodiments, the nhp comprises an alpha-amylase signal sequence (seq id nos 19, 21), a zinc finger dna binding domain (seq id no: 46). thus, in embodiments, the nhp comprises an alpha-amylase signal sequence (seq id nos 19, 21), a homeobox dna binding domain and a tat domain (seq id no: 11). 
thus, in embodiments, the nhp comprises an alpha-amylase signal sequence (seq id nos 19, 21), a homeobox dna binding domain and a vp22 domain (seq id no: 14). thus, in embodiments, the nhp comprises an alpha-amylase signal sequence (seq id nos 19, 21), a homeobox dna binding domain, and the protein transduction domain of the antp protein (seq id no: 12). thus, in embodiments, the nhp comprises an alpha-amylase signal sequence (seq id nos 19, 21), a homeobox dna binding domain. thus, in embodiments, the nhp comprises an alpha-amylase signal sequence (seq id nos 19, 21), a mer r dna binding domain (seq id no: 45), and a tat domain (seq id no: 11). thus, in embodiments, the nhp comprises an alpha-amylase signal sequence (seq id nos 19, 21), a mer r dna binding domain (seq id no: 45), and a vp22 domain (seq id no: 14). thus, in embodiments, the nhp comprises an alpha-amylase signal sequence, a mer r dna binding domain (seq id no: 45), and the protein transduction domain of the antp protein (seq id no: 12). thus, in embodiments, the nhp comprises an alpha-amylase signal sequence, a mer r dna binding domain (seq id no: 45). thus, in embodiments, the nhp comprises an alpha-amylase signal sequence (seq id nos 19, 21), a hu dna binding domain (seq id no: 44), and a tat domain (seq id no: 11). thus, in embodiments, the nhp comprises an alpha-amylase signal sequence, a hu dna binding domain (seq id no: 44), and a vp22 domain (seq id no: 14). thus, in embodiments, the nhp comprises an alpha-amylase signal sequence (seq id nos 19, 21), a hu dna binding domain (seq id no: 44), and the protein transduction domain of the antp protein (seq id no: 12). thus, in embodiments, the nhp comprises an alpha-amylase signal sequence (seq id nos 19, 21), a hu dna binding domain (seq id no: 44), and a cpp domain, non-limiting examples of which are presented as seq id nos 11 through 17. in embodiments each component domain of the nhp is substantially full length. 
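the exhaustive variant listings above enumerate combinations of a signal sequence, a dna binding domain and an optional cpp domain. a minimal sketch of that combinatorial space, using the domain labels from this disclosure (the variable names and label strings are illustrative, not part of the disclosure):

```python
from itertools import product

# domain labels mirroring the text; None marks variants with no CPP domain
signals = ["alpha-L-arabinosidase", "alpha-amylase (full)", "alpha-amylase (truncated)"]
dbds = ["zinc finger", "homeobox", "MerR", "HU"]
cpps = ["TAT", "VP22", "Antp", None]

variants = [(s, d, c) for s, d, c in product(signals, dbds, cpps)]
print(len(variants))  # 3 signals * 4 DBDs * 4 CPP options = 48 combinations
```

the listing in the text walks through a subset of these 48 combinations; the sketch simply shows that every variant is a point in the same signal x dbd x cpp product space.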
in embodiments each of the component domains of the nhp is a functional but partial domain. thus in embodiments there are disclosed the foregoing nhps, as well as nucleic acid sequences encoding such nhps and vectors comprising the nucleic acid sequences encoding the nhps. further in embodiments the encoding sequences are operatively linked to and transcribable by a host bacterium as part of a prokaryotic expression cassette. in a series of embodiments of the first embodiment, the nucleic acid vectors comprise a cargo or candidate or exogenous gene or sequence inserted into the eukaryotic expression cassette of the vector. the cargo gene may be any gene or nucleic acid sequence. in particular examples the gene is a marker gene such as gfp or is a tumor suppressor. it will be understood that the range of possible sequences that may be inserted into the eukaryotic expression cassette is not limited in any way. one skilled in the art will readily understand the adaptation and insertion of suitable sequences for expression in vectors according to this disclosure. in embodiments the cargo or candidate sequence is a fluorescent or marker protein. in embodiments the vector comprises at least one dna motif for binding of a suitable dna binding domain of a protein or of a hybrid protein according to an embodiment, which motif has at least 90%, 91%, 92%, 93%, 94%, 95%, 96%, 97%, 98%, 99% or 100% sequence identity to seq id no: 41 or 43 over at least about 10, 15, 20, or more contiguous nucleotides. in embodiments the selection marker is effective in both gram positive and gram negative bacteria. in embodiments the selection marker is resistance to spectinomycin. in embodiments the selection marker comprised in the vector is resistance to tetracycline. in embodiments the selection marker comprised in the vector is resistance to chloramphenicol. in embodiments the selection marker is resistance to any antibiotic effective against both gram negative and gram positive bacteria. 
third embodiment in a third series of embodiments there are disclosed methods and compositions for transforming cells with dna and expressing an exogenous or candidate or cargo dna sequence in a target eukaryotic cell. in embodiments the target cell is a mammalian cell. in embodiments the target cell is a cancer cell. in embodiments the cancer cell is a lung or colon carcinoma cell and in embodiments is hek-293 (human embryonic kidney); ht29 or caco-2 (both human adenocarcinoma); hela; or ll2 (lung carcinoma). thus in one variant of the embodiment there is disclosed a method for transforming a eukaryotic target cell with a candidate dna sequence, the method comprising the steps of: a) expressing in a first cell the hybrid protein from the first expression cassette of a nucleic acid vector according to the present invention, to form a complex between said hybrid protein and said nucleic acid vector; and b) contacting the target cell with the formed hybrid protein and nucleic acid vector complex, wherein said candidate dna sequence is comprised in the second expression cassette. in a further aspect of the embodiment there is disclosed a method for transforming a eukaryotic target cell with a candidate dna sequence, the method comprising the step of: contacting the eukaryotic target cell with a nucleic acid vector complex formed by binding a vector according to the present invention with a hybrid protein comprising a signal sequence, a cpp sequence and a dna binding sequence suitable to bind the vector. in a further aspect of the embodiment there is disclosed a method for transforming a prokaryotic target cell with a candidate dna sequence, the method comprising the step of: contacting the prokaryotic target cell with a nucleic acid vector complex formed by binding a vector according to the present invention with a hybrid protein comprising a signal sequence, a cpp sequence and a dna binding sequence suitable to bind the vector. 
in particular embodiments the hybrid protein-dna complex is suitable to transform bifidobacteria, e. coli and staphylococcus. fourth series of embodiments in alternative embodiments there are disclosed cells containing the nucleic acid vector according to any of the other embodiments. in embodiments the cell is a prokaryote, a eukaryote, a mammalian cell, a human cell, a cancer cell, a probiotic bacterium, a gram negative bacterium or a gram positive bacterium. in embodiments the cell is a bacterial cell, or is a gram positive or gram negative cell. in embodiments the cell is a human kidney cell, a human adenocarcinoma cell or a human lung carcinoma cell. in embodiments the cell is an hek-293 cell (human embryonic kidney); an ht29 or caco2 cell (both human adenocarcinoma); or a hela or ll2 (lung carcinoma) cell. in embodiments a bacterial cell is e. coli, bifidobacteria, lactococcus, clostridium, staphylococcus or streptococcus. fifth series of embodiments in embodiments there are disclosed kits comprising vectors according to embodiments, or nhps according to embodiments, or a combination of vectors and nhps according to embodiments. in embodiments there is disclosed a kit for transforming a cell with the vector according to claim 1 , the kit comprising a quantity of the vector and a quantity of a hybrid protein comprising a dna binding domain, a signal sequence domain and a cpp domain. in embodiments the kit further comprises a quantity of the nucleic acid vector. in use the fusion protein according to an embodiment is used to transform a target prokaryotic cell or a target eukaryotic cell with a desired dna. in embodiments the desired dna is a plasmid or other vector. in embodiments the desired dna comprises an expression cassette suitable to be expressed in the target prokaryotic cell. 
in embodiments said at least one recognition nucleotide sequence has at least 90%, 91%, 92%, 93%, 94%, 95%, 96%, 97%, 98%, 99% or more sequence identity to seq id no: 41 which is the merr dna motif or to seq id no: 43 which is the zinc finger dna motif. in embodiments the fusion protein comprises, sequentially: signal sequence-hu dna binding domain-tat domain; or signal sequence-mer r dna binding domain-tat domain; or signal sequence-zinc finger dna binding domain-tat domain. the nucleotide sequences encoding non-limiting examples of hybrid proteins according to embodiments are presented as seq id nos 7 through 10, and amino acid sequences are presented as seq id nos 24 through 27. in embodiments said at least one recognition nucleotide sequence has at least about 99%, 98%, 97%, 96%, 95%, 94%, 93%, 92%, 91% or 90% sequence identity to seq id no: 41 or 43 over a sequence of at least 10, 15, or 20 contiguous nucleotides. example 1 preparation of nhp nhp preparation: an nhp library was designed and created. this library was used to test hypotheses regarding the dna binding, transfection and transformation capabilities of nhps. the library consisted of: h (hu. 
seq id no: 44), m (mer r seq id no: 45), l (lac i seq id no: 47), z (zn finger seq id no: 46), and of combinations of a cpp domain and dna binding domain ht (hu-tat combining seq id nos 44 and 11 in sequence), th (tat hu combining seq id nos 11 and 44 in sequence), lt (lac tat combining seq id nos 47 and 11 in sequence), tl (tat-lac i combining seq id nos 11 and 47 in sequence), tm (tat-mer r combining seq id nos 11 and 45 in sequence), mt (mer r-tat combining seq id nos 45 and 11 in sequence), tz (tat-zn finger combining seq id nos 11 and 46 in sequence), zt (zn finger tat combining seq id nos 46 and 11 in sequence), sht (arabinosidase signal, hu, tat; seq id no: 24), ths (tat-hu-arabinosidase signal) and szt (arabinosidase signal sequence, zn finger, tat; seq id no: 27). all proteins were chemically synthesized by solid phase peptide synthesis by lifetein™ llc. the nhp library was used to characterize plasmid dna binding abilities. it was demonstrated that all nhps except those comprising a lac-i dna binding protein domain bound to plasmid dna. the different hybrid proteins displayed significant differences in binding affinities. fig. 3 depicts the ability of sht and szt (seq id nos 24 and 27 respectively) to bind to a plasmid dna (pbra2.0 fig. 1 and seq id no: 1). fig. 3 shows that as concentrations of sht (seq id no: 24) and szt (seq id no: 27) increase, plasmid migration on a gel electrophoresis assay is retarded. the nhp library was used to characterize plasmid dna transfection into various mammalian cell lines including hek-293, hela, caco-2, ll2 and ht-29. various plasmids encoding the green fluorescent protein (gfp) gene under control of the cytomegalovirus promoter were bound to nhps from the library. the nhp-bound plasmids were then incubated with the various cell lines, and examined under fluorescent microscopy to detect gfp expression. 
as expected, dna binding domains alone did not result in plasmid transfection; however ht, lt, mt, zt, sht (seq id no: 24) and szt (seq id no: 27) did result in plasmid transfection, with zt, sht and szt having the best transfection efficiencies. reverse orientation nhps such as tl, tz, ts, tm and ths result in very limited plasmid transfection. this suggests that the tat domain and dna binding domains are necessary for plasmid transfection, specifically in the orientation with the tat domain at the carboxy-terminus. in addition, these findings also suggest that the secretion signal domain, s, improves the transfection efficiency. fig. 10 depicts sht and szt mediated transfection of the pbra2.0 sht plasmid (seq id no: 1) in hek-293 and hela cell lines. all other cell lines tested demonstrated consistent results. a portion of the nhp library was used to examine the ability to transform bacteria with plasmid dna. the nhp library included ht, zt, th, tz, sht and szt. nhp-mediated transformation assays were tested on bifidobacteria longum, staphylococcus aureus and escherichia coli . it was demonstrated that only sht (seq id no: 24) and szt (seq id no: 27) were able to transform bifidobacteria longum , whereas ht, zt, th and tz were unable to, suggesting that the secretion signal is necessary for this transformation ability. this experiment was then repeated with staphylococcus aureus and escherichia coli , confirming that sht and szt have cross-species transformation abilities. compared to electrotransformation, the traditional transformation method for gram-positive bacteria, the use of sht has been demonstrated to be superior, resulting in a higher rate of transformants. as sht and szt transform diverse bacterial species, it is believed that other bacterial species, both gram-positive and gram-negative, can be effectively transformed via this method. 
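the library codes used in example 1 encode the n- to c-terminal order of the constituent domains by their seq id nos. a minimal sketch of that naming scheme, assembled from the text above (the mapping itself is illustrative, not a disclosed data structure):

```python
# each code maps to the ordered SEQ ID NOs of its domains, N- to C-terminus:
# 23 = arabinosidase signal, 44 = HU, 45 = MerR, 46 = Zn finger, 47 = LacI, 11 = TAT
library = {
    "H":   [44], "M": [45], "L": [47], "Z": [46],
    "HT":  [44, 11], "TH": [11, 44],
    "LT":  [47, 11], "TL": [11, 47],
    "TM":  [11, 45], "MT": [45, 11],
    "TZ":  [11, 46], "ZT": [46, 11],
    "SHT": [23, 44, 11],  # signal, HU, TAT (SEQ ID NO: 24)
    "THS": [11, 44, 23],
    "SZT": [23, 46, 11],  # signal, Zn finger, TAT (SEQ ID NO: 27)
}

# the two constructs reported above to transform B. longum are exactly
# those that lead with the secretion signal (SEQ ID NO: 23):
transformers = [code for code, order in library.items() if order[0] == 23]
print(sorted(transformers))  # ['SHT', 'SZT']
```

the sketch makes the structural point of the example explicit: transformation capability tracked with an n-terminal secretion signal followed by a dna binding domain and a c-terminal tat domain.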
this includes species difficult to transform, similar to bifidobacteria longum , which genera include lactococcus, clostridium, bacillus and streptococcus. nhp-mediated gene delivery to both bacterial as well as mammalian target cells may therefore function through a cell-mediated internalization process. it is believed that the process requires the tat domain and is enhanced with the secretion signal domain. for bacterial cell transformation, it is believed that a species specific secretion signal plays a significant role in transformation efficiencies, based on the reduced transformation efficiencies observed in staphylococcus aureus and escherichia coli assays compared to bifidobacteria longum. example 2 plasmid preparation a dna plasmid was designed and created to use bifidobacteria longum as a vector for nhp-mediated gene delivery to mammalian cells ( figs. 1 and 2 ). the vector comprises 6 components that encode information allowing the bifidobacteria to carry out its designed function. the genetic components encoded include: (a) a mammalian expression cassette; (b) a prokaryotic expression cassette, encoding a novel hybrid protein; (c) an origin of replication in e. coli ; (d) an origin of replication in bifidobacteria longum ; (e) a selectable marker; (f) specific dna binding sites for the novel hybrid protein. a more detailed explanation of the structure of the two vectors is presented above in the descriptions of embodiments. briefly, in the plasmid pbra2.0 sht ( fig. 1 and seq id no: 1) the first expression cassette for expression in a gram positive bacterium comprises an hu promoter and terminator flanking the nhp encoding sequences as will be seen in fig. 1 . in pbra2.0 sht the second expression cassette, for expression in a human cell, comprises the cmv (cytomegalovirus) promoter, a kozak sequence and a thymidine kinase polyadenylation/termination sequence, the foregoing flanking an insertion sequence for the insertion of a cargo sequence. 
this will be seen in seq id no: 1 wherein a gfp protein is inserted into the second expression cassette. in the embodiment pbra2.0 sht the vector comprises a puc origin of replication for replication in e. coli and a pb44 origin of replication for replication in bifidobacterium . dna-bs denotes a nucleotide binding site (seq id no: 43) for a zinc finger dna binding protein (seq id no: 46). plasmid pfrg1.5 sht is shown in fig. 2 and seq id no: 2 and the numbers refer to the numbering of features in fig. 2 . the vector comprises a gram-positive origin of replication, namely the pdojhr origin of replication (2) whose sequence is comprised within seq id no: 28 and a gram negative origin of replication, namely the puc ori (1) presented as seq id no: 29. the prokaryotic expression cassette comprises a ribosomal rna promoter (8) and terminator (10) flanking coding sequences (9) for the hybrid protein sht, defined elsewhere herein. the mammalian gene expression cassette comprises a cmv promoter (4) and a tk poly a site and terminator (6). in this case the inserted gene is a gfp reporter sequence (5). the plasmid comprises dna sequences (11) suitable for binding by the sequence comprised in the sht hybrid protein. the selectable marker in the pfrg1.5 plasmid is a spectinomycin resistance gene (3). once the vector is inserted into the bifidobacteria , components (b), (d) and (f) are designed to activate due to interactions with the bifidobacteria cell machinery. as a result, the novel hybrid protein is expressed, binding to a specific binding site on the vector, and results in the entire vector-novel hybrid protein complex being secreted out of the bifidobacteria and delivered into mammalian cells. once in the mammalian cell, the mammalian expression cassette component activates due to interactions with the cell machinery and it expresses the therapeutic gene to carry out a desired function. 
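the six components (a) through (f) described above can be modelled as a simple record. this is an illustrative sketch only; the class and field names are hypothetical and the pbra2.0 sht field values simply restate the text:

```python
from dataclasses import dataclass, field

@dataclass
class ShuttleVector:
    """Illustrative model of the six components listed in Example 2."""
    mammalian_cassette: str      # (a) promoter + cargo + polyA/terminator
    prokaryotic_cassette: str    # (b) encodes the hybrid protein (NHP)
    gram_negative_ori: str       # (c) replication in E. coli
    gram_positive_ori: str       # (d) replication in B. longum
    selectable_marker: str       # (e) antibiotic resistance
    nhp_binding_sites: list = field(default_factory=list)  # (f)

pbra = ShuttleVector(
    mammalian_cassette="CMV promoter / Kozak / GFP cargo / TK polyA",
    prokaryotic_cassette="HU promoter / SHT coding sequence / terminator",
    gram_negative_ori="pUC",
    gram_positive_ori="pB44",
    selectable_marker="spectinomycin",
    nhp_binding_sites=["zinc finger motif (SEQ ID NO: 43)"],
)
print(pbra.gram_positive_ori)  # pB44
```

components (c) and (e) in this record correspond to the construction and validation features the text goes on to describe as not therapeutically relevant.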
components (c) and (e) are not therapeutically relevant but are required during the construction and validation of the technology. the function of each component was tested experimentally after having the vector synthesized by geneart inc. a mammalian gene expression cassette the ability of the vector to express the mammalian gene expression cassette was examined in various mammalian cell culture lines including ll2, hela, hek-293, ht-29 and caco-2 cells. a mammalian expression cassette that contained the green fluorescent protein gene was used, allowing the use of fluorescent microscopy to detect proper function of the genetic component. fig. 5 depicts pbra2.0 sht (seq id no: 1) transfection in hela and hek-293 cell lines, and resulting mammalian expression cassette function. a prokaryotic gene expression cassette the ability of the vector to express the prokaryotic gene expression cassette in bifidobacteria longum was examined. to do this, immunoblot analysis on cell lysate of bifidobacteria longum was performed. fig. 9 depicts the immunoblot, with a band being detected at the predicted nhp size of 14 kda in lane 9. an origin of replication in e. coli and selectable marker to test this component, e. coli cells were transformed with pbra2.0 sht (seq id no: 1). after propagating them, plasmids were purified, followed by gel electrophoresis to confirm positive transformants and the function of the origin of replication. fig. 4 depicts the gel electrophoresis of pbra2.0 sht (seq id no: 1, fig. 1 ) purified from e. coli and confirms positive transformants. after transformation the culture was propagated in spectinomycin containing culture broth. propagation in this broth suggests the plasmid is being replicated and the selectable marker functions. an origin of replication in bifidobacteria longum and selectable marker to test this component, bifidobacteria cells were transformed with pbra2.0 sht (seq id no: 1, fig. 1 ) purified from e. coli . 
after transformation the culture was propagated in spectinomycin-containing culture broth. propagation in this broth suggests the plasmid is being replicated and the selectable marker functions.

a specific dna binding site for nhps

to test this function, purified pbra2.0 sht (seq id no: 1, fig. 1 ) was used and bound with sht and szt (seq id nos 24 and 27) proteins. fig. 3 depicts this binding assay, with pbra2.0 sht migration being retarded with increasing nhp concentrations. additionally, sht- and szt-mediated pbra2.0 sht delivery to various cell lines was performed. the positive result is the expression of the mammalian gene expression cassette encoding gfp. fig. 5 depicts gfp-positive cells post nhp-mediated pbra2.0 sht (seq id no: 1, fig. 1 ) transfection.

collective vector function:

after validating the components of the vector independently, their collective function as a gene delivery system was examined. to do so, nhp-bound pbra2.0 sht (seq id no: 1, fig. 1 ) was screened for in the supernatant of bifidobacteria longum cultures. the first step was to detect pbra2.0 sht (seq id no: 1, fig. 1 ) plasmid dna via pcr ( fig. 6 ). after rigorous centrifugation, supernatants of wild-type bifidobacteria , ptm13 positive transformants and pbra2.0 sht positive transformants were collected. ptm13 is the same vector as pbra2.0 sht without the prokaryotic gene expression cassette encoding nhp; as a result we anticipated no plasmid secretion in ptm13 transformants. after the pcr, positive controls confirmed the validity of the pcr reaction, whereas negative controls, including wild-type and ptm13 positive transformants, did not have template amplification. only pbra2.0 sht positive transformants provided amplified template from isolated supernatant. second, pbra2.0 sht (seq id no: 1, fig.
1 ) was isolated from the supernatant, using plasmid purification protocols on the collected supernatant that included modified qiagen™ prep kits, as well as traditional phenol-chloroform purification protocols ( figs. 7 and 8 ). the figures depict an isolated vector corresponding to the size of pbra2.0 sht (seq id no: 1, fig. 1 ), which upon restriction digest analysis was confirmed to be pbra2.0 sht. it is believed that increasing the expression of nhp and the copy number of the vector will provide sufficient nhp-bound vector to detect nhp-mediated gene delivery in mammalian cells. to do this, a vector having enhanced versions of component (b) and component (d), such as pfrg1.5 (seq id no: 2, fig. 2 ), can be used. it is also believed that adapting the components to be specific for another species is relatively straightforward, as other gene expression cassettes, origins of replication, and selectable markers can be identified through genomic analysis. based on the current designs, it is believed that the vector can be specifically designed and tested to work in species similar to bifidobacteria such as lactococcus, clostridium, staphylococcus, streptococcus and bacillus.

example 3

culture techniques, serial passages, plating and storage of bifidobacterial cells

bifidobacterium longum cells were subcultured in screw-capped anaerobic tubes containing 7-7.5 ml rcb (reinforced clostridial broth). after 18-24 hours of incubation, the cells were plated on rcba (reinforced clostridial agar) and incubated for 24-48 hours at 37° c. under anaerobic conditions. a single colony was picked and again subcultured in screw-capped anaerobic tubes containing 7-7.5 ml rcb. this procedure of 2 serial subcultures or passages (10% v/v) was used as a starter culture for all the experiments. after the cells reached an od 600 nm of 1.4-1.8, cell pellets were collected and frozen at −80° c. for future use.
also, for a glycerol stock, 2 ml of active culture was added to 0.5-1 ml of 50% sterile glycerol in a cryo vial and stored at −80° c. to obtain an active bacterial cell culture, the cells were constantly passaged every 2 days in sterile rcb tubes.

example 4

electrotransformation in bifidobacterium longum

preparation of cells for electroporation

bifidobacterium longum overnight culture (10% v/v) was inoculated in fresh reinforced clostridial broth (rcb) in 10 ml screw-capped anaerobic tubes and incubated at 37° c. under anaerobic conditions. this culture was serially passaged in a sterile rcb tube again under anaerobic conditions at 37° c. until an od 600 nm of 0.6-0.7 was reached. once the cells reached mid-exponential phase, the cells were first chilled on ice and centrifuged at 4,500 rpm for 15 min at 4° c. in a 15 ml falcon tube. after centrifugation, the cell pellet was resuspended and washed twice with ice-cold 0.5 m sucrose in eppendorf tubes. afterwards, the cells were resuspended in 1/250 of the original culture volume (10 ml), which is 300 μl, of ice-cold electroporation citrate buffer consisting of 1 mm ammonium citrate+0.5 m sucrose (ph 6), and kept at 4° c. for 2.5 hours.

electroporation protocol

plasmid dna (400, 600, 800 or 1000 ng; 800 ng worked best) was mixed with 80 μl of cell suspension in a precooled bio-rad™ gene pulser™ disposable cuvette with an interelectrode distance of 0.2 cm. a bio-rad™ gene pulser™ was used to deliver a high-voltage electric pulse of 2000 v along with 25 μf capacitance and 200 ω resistance. following electroporation, the transformant mixture was transferred to eppendorf tubes containing 900-920 μl of sterile rcb containing 0.5 m sucrose and the cells were incubated at 37° c. for 3-4 h inside the anaerobic glove bag. this procedure is required for cell recovery and the expression of the antibiotic resistance marker.
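the pulse settings above (2000 v across a 0.2 cm cuvette gap, 25 μf, 200 ω) imply a nominal field strength and an exponential-decay time constant. the short sketch below works them out; it is an illustrative calculation of ours, not part of the patented protocol, and the variable names are assumptions.

```python
# illustrative calculation from the pulse settings quoted in the protocol;
# variable names are ours, not the patent's.
voltage_v = 2000.0       # pulse voltage (v)
gap_cm = 0.2             # interelectrode distance of the cuvette (cm)
resistance_ohm = 200.0   # pulse controller resistance setting
capacitance_uf = 25.0    # capacitor setting (microfarads)

# nominal field strength seen by the cells, in kv/cm
field_kv_per_cm = voltage_v / gap_cm / 1000.0   # roughly 10 kv/cm
# rc time constant of the exponential-decay pulse, in ms
tau_ms = resistance_ohm * capacitance_uf / 1000.0  # roughly 5 ms

print(field_kv_per_cm, tau_ms)
```

these two derived figures (about 10 kv/cm and about 5 ms) are typical of high-field bacterial electroporation settings.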
after cell recovery, the cells were subcultured in sterile rcb containing 25, 50, and 100 μg/ml of ampicillin and spectinomycin antibiotics respectively and incubated at 37° c. for 2-4 days.

selection of transformants

for selection, rcbs and rcbas (reinforced clostridial medium with spectinomycin, for vector ptm13) were used for selection of transformants. the concentrations used for selection of positive transformants were 50 μg/ml, 75 μg/ml, 100 μg/ml, 125 μg/ml and 150 μg/ml spectinomycin in tubes and agar plates. for preparing liquid selective media, the above-mentioned concentrations of spectinomycin were added to sterile rcb tubes inside the anaerobic glove bag aseptically. for selection of transformants after electroporation, 50-100 μl of transformant mixture was plated using the spread plate technique onto sterile rcbas agar plates pre-spread with appropriate concentrations of spectinomycin and incubated for 48-72 h at 37° c. inside the glove bag. control tubes and agar plates containing the same concentrations of spectinomycin were used with wildtype bifidobacterium longum cells (10% v/v for liquid broth and 50-100 μl of culture for agar plates). after 48-72 hours, or sometimes longer, the positive transformant colonies were picked using a sterile loop inside the glove bag and subcultured in tubes containing selective rcbs with 125 μg/ml spectinomycin. no growth was observed on the control rcbas plates with the same concentrations of spectinomycin even after a week.

screening positive transformants: plasmid isolation

harvest the cells (6-7 ml culture) at 5,000 rpm at 4° c. for 15 min after overnight growth from single colonies from the selective agar plates in 15 ml falcon tubes. wash the pellets with 1 ml of pbs once to remove excess end products in the medium. resuspend pellets with 570 μl of solution a {6.7% sucrose, 50 mm tris-hcl, 1 mm edta (ph 8.0)} and prewarm at 37° c. for 5 min.
cell lysis was performed by the addition of 145 μl of solution b {25 mm tris-hcl (ph 8.0), 20 mg/ml lysozyme} and incubation at 37° c. for 40-45 min. further, 72 μl of solution c {0.25 mm edta, 50 mm tris-hcl (ph 8.0)} and 42 μl of solution d {20% sds, 50 mm tris-hcl, 20 mm edta (ph 8.0)} were added, mixed gently and incubated for 10 min at 37° c. after incubation, the tubes were mixed briefly by inverting for a few seconds and the genomic dna was denatured by adding 42 μl of 3 m naoh and gently mixing for 10 min. the solution was neutralized by adding 75 μl of 2 m tris-hcl (ph 7.0) for 3 min and the dna was precipitated using 107 μl of 5 m nacl for 1 min by gentle mixing. the precipitated genomic dna was removed by centrifugation at 14,000 rpm for 10-15 min at 4° c. the supernatant was added to a new eppendorf tube containing 700 μl of phenol:chloroform:isoamyl alcohol mixture (25:24:1) and mixed vigorously. following that, the mixture was centrifuged at 14,000 rpm for 15-20 min at room temperature. after centrifugation, the upper aqueous phase was transferred to a new tube by careful aspiration or gentle pipetting and treated with 600 μl of isopropanol, and the dna was precipitated for 30 min on ice. then, the tubes were centrifuged at 14,000 rpm for 15-20 min at 4° c. after centrifugation, the supernatant was discarded and the pellets were washed with ice-cold (−20° c.) 70% ethanol. after centrifugation at 14,000 rpm for 10 min, the supernatant was discarded and the pellets were air-dried for 5-10 min. the plasmid dna pellets were resuspended in 35 μl of nuclease-free water or tris-hcl buffer. 1.5 μl of rnase was added and incubated at 37° c. for 15-30 min.

restriction digestion and agarose gel electrophoresis

17 μl of plasmid dna (550-600 ng) was added to a mixture containing 2 μl of ecor1, 4 μl of reaction buffer, and 17 μl of water. the digestion mixture was incubated at 37° c. for 2-2.5 h. then, the reaction was heat-inactivated at 65° c. for 10-15 min.
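as a consistency check on the digest recipe above, the following sketch totals the reaction volume and the resulting dna concentration. it is illustrative arithmetic of ours, not part of the patented protocol; only the volumes and the 550-600 ng input are taken from the text.

```python
# illustrative check of the ecor1 digest recipe given in the protocol.
components_ul = {"plasmid dna": 17, "ecor1": 2, "reaction buffer": 4, "water": 17}
dna_ng = (550 + 600) / 2          # midpoint of the stated 550-600 ng input

total_ul = sum(components_ul.values())   # total reaction volume in microliters
dna_ng_per_ul = dna_ng / total_ul        # dna concentration in the digest

print(total_ul, dna_ng_per_ul)
```

the recipe therefore runs in a 40 μl reaction at roughly 14 ng/μl of substrate dna.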
the samples, including transformant (cut, uncut) plasmid dna, control: wildtype grown in the absence of antibiotic (cut, uncut), control: wildtype grown in the presence of antibiotic (cut, uncut), vector dna (ptm13) cut/uncut, and dna ladders, were loaded on a 1.5% agarose gel and electrophoresed to see the appropriate band patterns.

other plasmid isolation protocols

buffers needed: tes (25 mm tris-cl, 10 mm edta, 50 mm sucrose); koac: 3 m k+, 5 m acetate (3 m potassium acetate, 2 m acetic acid; glacial acetic acid is 17 m); pbs, phosphate buffered saline ph 7.0. 1 ml of fresh culture grown anaerobically at 37° c. to mid-log phase od (˜8-10 h) was used for plasmid isolation. centrifuge 20 min at 4,000 rpm at 4° c. discard the supernatant, and wash in 5 ml of pbs buffer (ph 7.0). centrifuge again and discard the supernatant. add 300-400 μl tes containing 6 mg/ml lysozyme and 40 μg/ml mutanolysin and resuspend the pellet in a 2 ml eppendorf tube. incubate at 37° c. for 60 minutes. with o'sullivan's method, 200 μl of 25% sucrose with 30 mg/ml lysozyme was used as resuspension buffer and incubated for 15-20 min at 37° c. meanwhile, 0.2 n naoh and 2% sds were freshly made to perform lysis. add 600 μl sds/naoh mix to each tube. incubate on ice for 5 min. with o'sullivan's method, lysis was also tried with 400 μl of lysis buffer (3% sds and 0.2 n naoh), mixed by inversion and incubated at room temperature for 5-7 min. add 500 μl koac (3 m k+, 5 m acetate). incubate on ice for 2 min. shake vigorously and centrifuge at 17,200 g/12,000 rpm, 4° c., for 15 min. with o'sullivan's method, 300 μl of ice-cold 3 m sodium acetate (ph 4.8) was used, mixed by inverting the tube and centrifuged at 13,500 rpm for 15 min at 4° c. remove the supernatant (˜1 ml) into a fresh 2 ml tube and discard the pellet and remaining supernatant. add ½ volume (500 μl) of isopropanol. centrifuge at 17,200 g/12,000 rpm (room temperature) for 10 min. discard the supernatant.
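the koac buffer note above ("3 m k+, 5 m acetate", made from 3 m potassium acetate plus 2 m acetic acid) can be verified with a little ion bookkeeping. the sketch below is an illustrative check of ours, not part of the protocol.

```python
# illustrative ion bookkeeping for the koac neutralization buffer:
# potassium acetate contributes both k+ and acetate; acetic acid
# contributes acetate only.
koac_m = 3.0   # potassium acetate concentration (m)
hoac_m = 2.0   # acetic acid concentration (m)

total_k_m = koac_m                 # only koac supplies k+
total_acetate_m = koac_m + hoac_m  # both species supply acetate

print(total_k_m, total_acetate_m)
```

this reproduces the stated "3 m k+, 5 m acetate" composition.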
with o'sullivan's method, the supernatant was collected in a new eppendorf, mixed with 650 μl of room-temperature isopropanol and centrifuged at 13,500 rpm for 10 min at 4° c. wash the pellet with 1 ml 70% ethanol. air dry for 2-5 min on the bench. resuspend in 500 μl te buffer. add 500 μl 5 m licl. incubate on ice for 5 min. centrifuge at 17,200 g/12,000 rpm for 10 min. pour the supernatant into an eppendorf. add 1 ml isopropanol. incubate on the bench for 10 min. centrifuge at 17,200 g/12,000 rpm for 10 min. discard the supernatant. wash the pellets with 100 μl 70% ethanol. resuspend in 375 μl te buffer. add 5 μl 1 mg/ml rnase a. incubate at 37° c. for 30 min. the same protocol was also performed without the licl step; after the 70% ethanol wash, the air-dried pellets were resuspended in te and subjected to rnase a treatment. with o'sullivan's method, the dna pellets were resuspended in 500 μl of sterile nuclease-free water after the isopropanol step and then subjected to phenol-chloroform purification. add 700 μl phenol:chloroform:isoamyl alcohol. vortex until thoroughly mixed. centrifuge at top speed of the microfuge for 2 min. pipette the aqueous phase (the top one) into a new eppendorf. then repeat the procedure twice with the same volume of chloroform:isoamyl alcohol to remove any phenol. add 750 μl absolute ethanol and 125 μl 3 m sodium acetate. put at −80° c. for 30 min or −20° c. overnight. with o'sullivan's method, the upper phase was mixed with 1 ml of ethanol (−20° c.) and centrifuged at 13,500 rpm for 15 min at 4° c. centrifuge at 13,600 g/12,000 rpm, 4° c., for >15 min. discard the supernatant. wash the pellet with ˜100 μl 70% ethanol. resuspend in 200 μl tris buffer per 40 ml of original cells. with o'sullivan's method, the dna pellets were washed with 70% ethanol, air-dried and resuspended in 50 μl of te containing rnase a.

example 5

pbra2.0 sht

the structure of pbra2.0 sht is shown in fig. 1 and as seq id no: 1.
the nucleotide sequence of pbra2.0 sht is presented as seq id no: 1. the vector is a plasmid and, as indicated in fig. 1 , it comprises a spectinomycin resistance gene smr, an e. coli puc origin of replication and a bifidobacterium longum origin of replication derived from pb44. the eukaryotic expression cassette comprises a cmv (cytomegalovirus) promoter, and a thymidine kinase terminator and polyadenylation site from the hsv virus. a candidate or cargo sequence (designated as “goi”) can be inserted in the correct reading frame for expression through operation of the cmv promoter. the vector also comprises an expression cassette for expression of a cargo fusion protein or nhp. as will be seen in the figure, the prokaryotic expression cassette is framed by the hu promoter hup and terminator hut. the desired nhp open reading frame is inserted between the two in the appropriate reading frame and under the control of the hu promoter. the hut sequence is followed by the dna-bs sequence, which is a dna motif bound by a zinc finger dna binding domain. it will be understood that in different variants the protein chosen may have the sequence sht, szt, smt or slt (seq id nos 24, 25, 26, 27). in the examples presented here the nhp has the sht structure, the coding nucleotide sequence for which is shown as seq id no: 24, and its corresponding dna sequence as seq id no: 8. in this embodiment the signal sequence is alpha arabinosidase (seq id nos 22, 23), the dna binding domain is the hu domain, the dna sequence for which is presented as seq id no: 4, and the cpp domain is the tat domain (seq id no: 11). the pbra 2.0 plasmid sequence was assembled from individual dna sequences and synthesized commercially by lifetein™ lcc.

example 6

pfrg1.5

the structure of pfrg1.5 sht is shown in fig.
2 and its nucleotide sequence is shown as seq id no: 2. the vector comprises a gram-positive origin of replication, namely the pdojhr origin of replication whose sequence is presented as a part of seq id no: 28, and a gram-negative origin of replication, namely the puc ori ( e. coli ori) presented as seq id no: 29. the vector comprises prokaryotic and eukaryotic expression cassettes. the prokaryotic expression cassette comprises a ribosomal rna promoter and terminator flanking coding sequences for the hybrid protein szt, defined elsewhere herein. the mammalian gene expression cassette comprises a cmv promoter and a tk poly a site and terminator. in this case the inserted gene is a gfp sequence. in the example the plasmid comprises dna sequences suitable for binding a zinc finger consensus sequence. the selectable marker in this plasmid is a spectinomycin resistance gene. pfrg1.5 was obtained commercially and was synthesized using standard methods.

example 7

validation of vector

in the example a pbra2.0 sht plasmid containing the gfp marker was transformed into bifidobacterial cells. the bifidobacterial cells were confirmed as hosting the plasmid, and expressing the encoded nhp protein. secretion of the pbra2.0 plasmid from the bifidobacterial cells was confirmed directly.

pbra2.0 sht e. coli transformation assay

to validate the function of the e. coli ori, pbra2.0 sht was purified from transformed e. coli dh5α chemically competent cells. positive transformants were selected for on agar plates containing 100 μg/ml of spectinomycin. cells were subjected to plasmid purification using the qiagen qiaprep spin miniprep kit following the manufacturer's protocols and subjected to restriction enzyme digest using ecor1 and visualized on a 0.8% agarose gel. fig. 4 shows the results of running samples of the resulting dna digests on an agarose gel. lanes are as follows: 1. 1 kb ladder, 2. chemically synthesized vector, 3.
pbra2.0 sht positive transformant a, 4. pbra2.0 sht positive transformant b.

pbra2.0 sht transfection assay, validation of the eukaryotic orf

pbra2.0 sht (seq id no: 1, fig. 1 ) was subjected to transfection for validation of the eukaryotic open reading frame on pbra2.0 sht. pbra2.0 sht was transfected into human cell lines using lipofectamine™ (life technologies™) following the manufacturer's protocols. cells were visualized using direct fluorescence microscopy for gfp expression. the results are shown in fig. 5 : panel 1, hek-293 cells: a. control; b. pbra2.0 sht; c. pbra2.0 sht+lipofectamine™. panel 2, hela cells: a. control; b. pbra2.0 sht; c. pbra2.0 sht+lipofectamine™. a comparison of the experimental panels 1c and 2c of fig. 5 with the controls demonstrates the expression of gfp from the eukaryotic expression cassette in these cells.

validation of secretion of vector

pcr screening of ptm13 and pbra2.0 supernatants, validation of pbra2.0 sht secretion

cultures were grown to od 1.8, where 1 ml of supernatant for wild-type, ptm13 (a plasmid not containing the sht sequence) and pbra2.0 sht cultures was subjected to plasmid purification using the qiagen™ qiaprep™ spin miniprep kit following the manufacturer's protocols. the resulting eluates were subjected to pcr using ptm13/pbra2.0 sht specific primers to validate plasmid secretion from pbra2.0 sht transformants. a series of primer pairs usable to identify the presence of the plasmid are presented as seq id nos 35 and 36 (primer pair to amplify gfp sequences); seq id nos 37 and 38 (primer pair to amplify spectinomycin resistance gene sequences); and seq id nos 39 and 40 (primer pair to amplify cmv sequences). as a further confirmation of the results, the amplicons were sequenced to confirm sequence identity by the center for molecular medicine and therapeutics at the dna sequencing core facility in vancouver b.c. results are shown in fig. 6 .
1: 1 kb ladder; 2: 100 bp ladder; 3: pcr negative control-gfp primers, no template; 4: pcr negative control-specr primers, no template; 5: pcr negative control-cmv primers, no template; 6: pcr positive control-gfp, pbra-sht plasmid dna; 7: pcr positive control-specr, pbra-sht plasmid dna; 8: pcr positive control-cmv, pbra-sht plasmid dna; 9: pcr positive control-gfp, ptm13c plasmid dna; 10: pcr positive control-specr, ptm13c plasmid dna; 11: pcr positive control-cmv, ptm13c plasmid dna; 12: control b. longum p. dna-gfp; 13: control b. longum p. dna-specr; 14: control b. longum p. dna-cmv; 15: ptm13-transformant supernatant-p. dna-gfp; 16: ptm13-transformant supernatant-p. dna-specr; 17: ptm13-transformant supernatant-p. dna-cmv; 18: pbra-sht transformant supernatant-p. dna-gfp; 19: pbra-sht transformant supernatant-p. dna-specr; 20: pbra-sht transformant supernatant-p. dna-cmv. thus the results of this experiment demonstrate that the pbra vector is secreted by the bacteria, whereas the ptm13 plasmid is not.

visualization and digestion of pbra2.0 sht supernatant, validation of pbra2.0 sht secretion and characterization

pbra2.0 sht culture was grown to od 1.8, where 1 ml of supernatant of the culture was subjected to plasmid purification using the qiagen™ qiaprep™ spin miniprep kit following the manufacturer's protocols. the eluate was subjected to restriction enzyme digest and visualized on a 0.8% agarose gel to validate pbra2.0 sht secretion from positive transformants. the results are shown in fig. 7 . panel a: 1. 1 kb ladder; 2. pbra2.0 sht; 3. pbra2.0 sht from pdna miniprep kit. panel b: 1. 1 kb ladder; 2. pbra2.0 sht digested with ecor1; 3. pbra2.0 sht from pdna miniprep kit digested with ecor1. the results of this experiment more fully confirm the secretion of pbra2.0 sht from the bacterial cells.
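the interpretation of the supernatant lanes follows a simple rule: a culture's supernatant yields amplicons only when the strain carries a plasmid that also encodes the secretion-mediating nhp. a hypothetical sketch of that screening logic (sample names and field names are ours, not the patent's):

```python
# illustrative model of the secretion screen described above: expected pcr
# outcome per supernatant sample. a supernatant is pcr-positive only if the
# strain carries the plasmid and the plasmid encodes the nhp for export.
samples = {
    "wild-type":                {"has_plasmid": False, "encodes_nhp": False},
    "ptm13 transformant":       {"has_plasmid": True,  "encodes_nhp": False},
    "pbra2.0 sht transformant": {"has_plasmid": True,  "encodes_nhp": True},
}

def supernatant_pcr_positive(name):
    s = samples[name]
    return s["has_plasmid"] and s["encodes_nhp"]

for name in samples:
    print(name, supernatant_pcr_positive(name))
```

under this model only the pbra2.0 sht transformant is expected to amplify, which matches the gel interpretation above.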
visualizations and digestions of concentrated supernatants, validation of pbra2.0 sht secretion and characterization

wild-type, ptm13, pbra2.0-sht (pbra comprising sht, seq id no: 24, in the prokaryotic expression cassette) and pbra2.0-szt (comprising instead the szt protein, seq id no: 27) cultures were grown to an od of 1.8. 7 ml of supernatant were concentrated under each of the aforesaid conditions using 3000 mwco concentrators and samples were subjected to phenol-chloroform plasmid dna extraction. the results were subjected to restriction enzyme digest using ecor1 and visualized on a 0.8% agarose gel to validate dna secretion in pbra2.0 sht transformants. results are shown in fig. 8 . panel a (phenol-chloroform samples): 1. 1 kb ladder; 2. 100 bp ladder; 3. blank; 4. wild-type; 5. ptm13 transformants; 6. blank; 7. pbra2.0-sht; 8. pbra2.0-szt. panel b (phenol-chloroform samples digested with ecor1): 1. 1 kb ladder; 2. wild-type; 3. blank; 4. ptm13 transformants; 5. blank; 6. pbra2.0-sht transformants; 7. blank; 8. pbra2.0-szt transformants.

example 8

western blot analysis of szt, sht; validation of bifidobacterium specific orf

wild-type, ptm13, pbra2.0-sht and pbra2.0-szt transformed cultures were grown to an od of 1.8. samples were loaded onto a 0.8% polyacrylamide gel, subjected to electrophoresis, and transferred to a membrane for immunoblot analysis using anti-tat antibodies to detect the expression of our hybrid proteins sht and szt (seq id no: 27), to validate the functionality of the bifidobacterium specific promoter and terminator. samples were prepped from freshly grown cells washed three times in sterile pbs. cells were pelleted and subjected to sonication in deionized water with three pulses of 20 seconds with 30 second intervals in an ice bath. samples were quantified via bradford assay and aliquots of each sample were mixed with 4×sds loading dye, boiled at 95° c. for 10 minutes and quenched on ice.
these results suggest that the promoter and terminator functionality of the hu design on pbra2.0 is sufficient for protein expression. results are shown in fig. 9 . 1. precision plus protein™ prestained standards (bio-rad); 2. wild-type; 3. wild-type; 4. ptm13; 5. ptm13; 6. pbra2.0-sht; 7. pbra2.0-sht; 8. pbra2.0-szt; 9. pbra2.0-szt.

sht and szt interact with pbra2.0 sht and result in a gel-shift with increasing concentrations of peptide

pbra2.0 sht was incubated with sht or szt (seq id no: 27) in binding buffer at room temperature for 15 minutes and subjected to page. results are shown in fig. 3 . 1. 1 kb ladder; 2. 100 bp ladder; 3. 500 ng of pbra2.0 sht; 4. 500 ng of pbra2.0 sht+2 ng of sht; 5. 500 ng of pbra2.0 sht+5 ng of sht; 6. 500 ng of pbra2.0 sht+10 ng of sht; 7. 500 ng of pbra2.0 sht+20 ng of sht; 8. 500 ng of pbra2.0 sht+30 ng of sht; 9. 500 ng of pbra2.0 sht+40 ng of sht; 10. 500 ng of pbra2.0 sht+50 ng of sht; 11. blank; 12. 500 ng of pbra2.0 sht; 13. 500 ng of pbra2.0 sht+2 ng of szt; 14. 500 ng of pbra2.0 sht+5 ng of szt; 15. 500 ng of pbra2.0 sht+10 ng of szt; 16. 500 ng of pbra2.0 sht+20 ng of szt; 17. 500 ng of pbra2.0 sht+30 ng of szt; 18. 500 ng of pbra2.0 sht+40 ng of szt; 19. 500 ng of pbra2.0 sht+50 ng of szt; 20. blank.

example 10

complexes of pbra2.0 sht dna and sht protein and of pbra2.0 sht dna and szt protein can transfect and express gfp in mammalian cell lines

sht and szt (seq id no: 27) were subjected to a dna binding assay followed by a transfection assay to validate the cell-penetrating functional domain of each peptide. pbra2.0 sht was transfected into human cell lines using lipofectamine (life technologies) following the manufacturer's protocols as a positive control. cells were visualized using direct fluorescence microscopy for gfp expression. the results are shown in fig. 10 . panel 1 (hek-293 cells): a. control; b. pbra2.0 sht; c. pbra2.0 sht+lipofectamine; d. pbra2.0-sht complex. panel 2 (hela cells): a. control; b. pbra2.0 sht; c.
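the titration lanes above hold the dna amount fixed at 500 ng while stepping the peptide from 2 to 50 ng. the following sketch, an illustration of ours rather than part of the assay, lists the corresponding peptide:dna mass ratios across the series.

```python
# illustrative peptide:dna mass ratios for the gel-shift titration lanes:
# 500 ng of pbra2.0 sht per lane, with increasing amounts of sht (or szt).
dna_ng = 500
peptide_ng = [2, 5, 10, 20, 30, 40, 50]

ratios = [p / dna_ng for p in peptide_ng]  # mass ratio per titration lane
print(ratios)
```

even the highest lane (50 ng peptide) is only a 1:10 peptide:dna mass ratio, which is consistent with the gradual retardation seen across the ladder rather than an abrupt shift.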
pbra2.0 sht+lipofectamine; d. pbra2.0-szt complex.

example 11

transformation of cells using hybrid protein

a hybrid protein comprising the arabinosidase signal, hu dna binding domain and tat transduction sequence was bound to pbra2.0 sht plasmid dna in binding buffer (150 mm nacl, 50 mm tris, ph 7.2). unless otherwise set out below, procedures were the same as set out for example 1 above. the protocol for the hybrid protein-pbra2.0 binding assay was as follows: 1. aliquot 1 ng of sht and 150 ng of pbra2.0 into a 1.5 ml eppendorf tube containing 50 μl of nhp binding buffer (50 mm nacl, 50 mm tris, ph 7.2). 2. repeat step one, while increasing the concentration of sht independently to 10, 20, 30, 50, 100, 500 and 1000 ng, to produce a laddering effect and determine concentration-dependent novel hybrid protein-pbra2.0 binding. 3. leave at room temperature for 30 minutes and visualize by 0.8% agarose gel electrophoresis. protocol for the hybrid protein-pbra2.0 transformation assay: 1. aliquot 5 μg of sht and 400 ng of pbra2.0 into a 1.5 ml eppendorf tube containing 50 μl of nhp binding buffer (50 mm nacl, 50 mm tris, ph 7.2) and leave at room temperature for 30 minutes. 2. resuspend 150 μl of 5×10^8 cells/ml into the reaction mixture (step 1). 3. under anaerobic conditions, gently rock the tube containing the protein-dna complexes for 30 minutes. 4. add 800 μl of rcb for cell recovery under anaerobic conditions; incubate at 37° c. for 4 hours. 5. plate 80 μl of the reaction mixture on rca-spectinomycin (200 μg/ml) resistance plates and incubate for 3 days at 37° c. under anaerobic conditions. 6. screen for positive transformants.

sequence listings

the following sequence listings are incorporated herein and form an integral part of this disclosure.
seq id no: 1: pbra2.0-shtccaggcccgtggaggcgaggaagacggacggcgacggcaagggccattggacgagcgtggcggggtatggcgaggtgttcacgaccacggagctgttcgacgtgacggccgcgcgtgaccacttcgacggcaccgtggaggccggggaatgccgtttctgcgcgtttgacgcgcgcaaccgcgaacatcatgcgcggaacgccggaaggttgttctagcggccgtgtccgcgcctctggggcggttgcgcctgccatggagatctggggccgagtcggccgcgggcttcgagggaggcgacgagagcacatcgcccgcctcaggcgacgagagcacatcgcccgcctcggtcggccgcgaggcgacgagagcacatcgcccgggccgagtcggccgcgggcttcgagggaggtgggcgcggcggccatgaagtggcttgacaagcataatcttgtctgattcgtctattttcatacccccttcggggaaatagatgtgaaaacccttataaaacgcgggttttcgcagaaacatgcgctagtatcattgatgacaacatggactaagcaaaagtgcttgtcccctgacccaagaaggatgctttctcgagatgaccctgaccggcaccctgcgcaaagcgtttgcgaccaccctggcggcggcgatgctgattggcaccctggcgggctgcagcagcgcggcatacaacaagtctgacctcgtttcgaagatcgcccagaagtccaacctgaccaaggctcaggccgaggctgctgttaacgccttccaggatgtgttcgtcgaggctatgaagtccggcgaaggcctgaagctcaccggcctgttctccgctgagcgcgtcaagcgcccggctcgcaccggccgcaacccgcgcactggcgagcagattgacattccggcttcctacggcgttcgtatctccgctggctccctgctgaagaaggccgtcaccgagtatggacggaagaagcgcaggcagcgacggcgatgatctagacttctgctcgtagcgattacttcgagcattactgacgacaaagaccccgaccgagatggtcggggtctttttgttgtggtgctgtgacgtgttgtccaaccgtattattccggactagttcagcgaagcttcgacgagagcacatcgcccgcctcaggcgacgagagcacatcgcccgcctcggtcggccgcgaggcgacgagagcgcctcgaagcttttttttttttggggcggctttttttttttaagctttatctgctccctgcttgtgtgttggaggtcgctgagtagtgcgcgagcaaaatttaagctacaacaaggcaaggcttgaccgacaattgcatgaagaatctgcttaatttaagctacaacaaggcaaggcttgaccgacaattgcatgaagaatctgctaggtactggcttaactatgcggcatcagagcagattgtactgagagtgcaccatatgcggtgtgaaataccgcacagaatattattgaagcatttatcagggttattgtctcatgagcggatacatatttgaatgtatttagaaagcggccgcgttaggcgttttcgcgatgtacgggccagatatacgcgttgacattgattattgactagttattaatagtaatcaattacggggtcattagttcatagcccatatatggagttccgcgttacataacttacggtaaatggcccgcctggctgaccgcccaacgacccccgcccattgacgtcaataatgacgtatgttcccatagtaacgccaatagggactttccattgacgtcaatgggtggagtatttacggtaaactgcccacttggcagtacatcaagtgtatcatatgccaagtacgccccctattgacgtcaatgacggtaaatggcccgcctggcattatgcccagtacatgaccttatgggactttcctacttggcagtacat
ctacgtattagtcatcgctattaccatggtgatgcggttttggcagtacatcaatgggcgtggatagcggtttgactcacggggatttccaagtctccaccccattgacgtcaatgggagtttgttttggcaccaaaatcaacgggactttccaaaatgtcgtaacaactccgccccattgacgcaaatgggcggtaggcgtgtacggtgggaggtctatataagcagagctcgtttagtgaaccgtcagatcgcctggagacgccatccacgctgttttgacctccatagaagacaccgggaccgatccagcctccggactctagaggatcgaagaattcgccaccatggtgagcaagggcgaggagctgttcaccggggtggtgcccatcctggtcgagctggacggcgacgtaaacggccacaagttcagcgtgtccggcgagggcgagggcgatgccacctacggcaagctgaccctgaagttcatctgcaccaccggcaagctgcccgtgccctggcccaccctcgtgaccaccctgacctacggcgtgcagtgcttcagccgctaccccgaccacatgaagcagcacgacttcttcaagtccgccatgcccgaaggctacgtccaggagcgcaccatcttcttcaaggacgacggcaactacaagacccgcgccgaggtgaagttcgagggcgacaccctggtgaaccgcatcgagctgaagggcatcgacttcaaggaggacggcaacatcctggggcacaagctggagtacaactacaacagccacaacgtctatatcatggccgacaagcagaagaacggcatcaaggtgaacttcaagatccgccacaacatcgaggacggcagcgtgcagctcgccgaccactaccagcagaacacccccatcggcgacggccccgtgctgctgcccgacaaccactacctgagcacccagtccgccctgagcaaagaccccaacgagaagcgcgatcacatggtcctgctggagttcgtgaccgccgccgggatcactctcggcatggacgagctgtacaagtaggaattcttcgatccctaccggttagtaatgagtttaaacgggggaggctaactgaaacacggaaggagacaataccggaaggaacccgcgctatgacggcaataaaaagacagaataaaacgcacgggtgttgggtcgtttgttcataaacgcggggttcggtcccagggctggcactctgtcgataccccaccgagaccccattggggccaatacgcccgcgtttcttccttttccccaccccaccccccaagttcgggtgaaggcccagggctcgcagccaacgtcggggcggcaggccctgccatagcgcggccgctcgtaaagtctggaaacgcggaagtcagcgccctgcaccattatgttccggatctgcatcgcaggatgctgctggctaccctgtggaacacctacatctgtattaacgaagcgctggcattgaccctgagtgatttttctctggtcccgccgcatccataccgccagttgtttaccctcacaacgttccagtaaccgggcatgttcatcatcagtaacccgtatcgtgagcatgggatccatcatgcctcctctagaccagccaggacagaaatgcctcgacttcgctgctacccaaggttgccgggtgacgcacaccgtggaaacggatgaaggcacgaacccagtggacataagcctgttcggttcgtaagctgtaatgcaagtagcgtatgcgctcacgcaactggtccagaaccttgaccgaacgcagcggtggtaacggcgcagtggcggttttcatggcttgttatgactgtttttttggggtacagtctatgcctcgggcatccaagcagcaagcgcgttacgccgtgggtcgatgtttgatgttatggagcagcaacgatgttacgcagcagggcagtcgccctaaaacaaagttaaacatcatga
gggaagcggtgatcgccgaagtatcgactcaactatcagaggtagttggcgtcatcgagcgccatctcgaaccgacgttgctggccgtacatttgtacggctccgcagtggatggcggcctgaagccacacagtgatattgatttgctggttacggtgaccgtaaggcttgatgaaacaacgcggcgagctttgatcaacgaccttttggaaacttcggcttcccctggagagagcgagattctccgcgctgtagaagtcaccattgttgtgcacgacgacatcattccgtggcgttatccagctaagcgcgaactgcaatttggagaatggcagcgcaatgacattcttgcaggtatcttcgagccagccacgatcgacattgatctggctatcttgctgacaaaagcaagagaacatagcgttgccttggtaggtccagcggcggaggaactctttgatccggttcctgaacaggatctatttgaggcgctaaatgaaaccttaacgctatggaactcgccgcccgactgggctggcgatgagcgaaatgtagtgcttacgttgtcccgcatttggtacagcgcagtaaccggcaaaatcgcgccgaaggatgtcgctgccgactgggcaatggagcgcctgccggcccagtatcagcccgtcatacttgaagctagacaggcttatcttggacaagaagaagatcgcttggcctcgcgcgcagatcagttggaagaatttgtccactacgtgaaaggcgagatcaccaaggtagtcggcaaataaccctcgagccacccatgaccaaaatcccttaacgtgagttacgcgtcgttccactgagcgtcagaccccgtagaaaagatcaaaggatcttcttgagatcctttttttctgcgcgtaatctgctgcttgcaaacaaaaaaaccaccgctaccagcgggatccatgcagctgcgtaaggagaaaataccgcatcaggcgctcttccgcttcctcgctcactgactcgctgcgctcggtcgttcggctgcggcgagcggtatcagctcactcaaaggcggtaatacggttatccacagaacgtacgatgtgagcaaaaggccagcaaaaggccagggaccgtaaaaaggccgcgttgctggcgtttttccataggctccgcccccctgacgagcatcacaaaaatcgacgctcaagtcagaggtggcgaaacccgacaggactataaagataccaggcgtttccccctggaagctccctcgtgcgctctcctgttccgaccctgccgcttaccggatacctgtccgcctttctcccttcgggaagcgtggcgctttctcatagctcacgctgtaggtatctcagttcggtgtaggtcgttcgctccaagctgggctgtgtgcacgaaccccccgttcagcccgaccgctgcgccttatccggtaactatcgtcttgagtccaacccggtaagacacgacttatcgccactggcagcagccactggtaacaggattagcagagcgaggtatgtaggcggtgctacagagttcttgaagtggtggcctaactacggctacactagaagaacagtatttggtatctgcgctctgctgaagccagttaccttcggaaaaagagttggtagctcttgatccggcaaacaaaccaccgctggtagcggtggtttttttgtttgcaagcagcagattacgcgcagaaaaaaaggatctcaagaagatcctttgatcttttctacgggcgtacgtcttcctttttcatcccggagacggtcacagcttgtctgtaagcggatgccgggagcagacaagcccgtcagggcgcgtcagcgggtgttggcgggtgtcggggcgcagccatgacccagtcacgtagcgatagcggagtgtatactggcttaactatgcggcatcagagcagattgtactgagagtgcaccatatgcggtgtgaaataccgcacagaatattattga
agcatttatcagggttattgtctcatgagcggatacatatttgaatgtatttagaaaaataaacaaataggggttccgcgcacatttccccgaaaagtgccacctgacgtctaagaaaccattattatcatgacattaacctataaaaataggcgtatcacgaggccctttcgtcttcaagaaagatctccatgggagaaccacgggccggacggataccagccgccctcatacgagccggtcaaccccgaacgcaggaccgcccagacgccttccgatggcctgatctgacgtccgaaaaaaggcgccgtgcgccctttttaaatcttttaaaatctttttacattcttttaggccctccgcagccctactctcccaacgggtttcggacggtacttagtacaaaaggggagcgaacttagtacaaaaggggagcgaacttagtacaaaaggggagcgaacttagtacaaaaggggagcgaactagtaaataaataaactttgtactagtatggagtcatgtccaatgagatcgtgaagttcagcaaccagttcaacaacgtcgcgctgaagaagttcgacgccgtgcacctggacgtgctcatggcgatcgcctcaagggtgagggagaagggcacggccacggtggagttctcgttcgaggagctgcgcggcctcatgcgattgaggaagaacctgaccaacaagcagctggcagacaagatcgtgcagacgaacgcgcgcctgctggcgctgaactacatgttcgaggattcgggcaagatcatccagttcgcgctgttcacgaagttcgtcaccgacccgcaggaggcgactctcgcggttggggtcaacgaggagttcgctttcctgctcaacgacctgaccagccagttcacgcgcttcgagctggccgagttcgccgacctcaagagcaagtacgccaaggagttctacaggcgcgccaagcagtaccgcagctcgggaatctggaagatcagccgcgatgagttctgccgactgcttggcgtatccgattccacggcaaaatccaccgccaacctgaacagggtcgtgctgaagacgatcgccgaagagtgtgggcctctccttggcctgaagatcgagcgccagtacgtgaaacgcaggctgtcgggcttcgtgttcacgttcgcccgcgagacccctccggtgatcgacgseq id no: 2: pfrg1.5 
shtacgcgctggagatgttcaacgagtagatcgccacggcgacctccttccacgcgtgcgggcacggggattctcaaggggccggcccgaggccccttgagcccgccgggaggcgcccccggcagggcgggaatccaaagggcggagccctgtggccctccccgggcaggggcgggatcgtcaagggcggagcccttggccccctcgggagagcgcactgacacaatgctacctccggtagcattaagtgcgccctccgccatgcggagggacgggccgcgaccggatatgcggggaacgtccacgacgcgtcttccgtgtcctccgtcctgccttgtgccgtcgattataatcctcggataacggacgatgattgaaccgattggaggaaacgagatgccgaagagtttcgcgcagcagatcgaggacgacgagaacaagatcaagcgcatccgggagcaccagcgcatggtcagggccaaacaagccaaacaggagcgcaacgcccgcaccaaaaggctcgtggagaccggagcgatagtggaaaaagcgcacggcggcgcgtacgacgacgaaggacggcagacgttctcggatggtctgaacggcatcatctcggtctacgacccgtcccgcggcggcaacgtggacatgagagtcatcgacgtaatcgaccggcgcatccccagattgccaaggtccgaaaccacaacaggcacggcagcggcggcatcgcgaaccgtgcaagccaccgcaccacagccagcccacgcccaaccgcaaagcttcacgcccaacccccagcgcatcgaacaccggacaggagcacagcagccggatcggtgggcctagccgggacggcgcagccctcgctccgctgggctgaaatatctgacttggttcgtaacatatttccgacatgtcggaaatatgtcgtatcatcgttgctatgagtactgaacttcgtgagcagtgggagcagctttacctgccgctgcgcccgctgtgcacgaacgacttcatcgagggcgtgtaccggcagcccagggcgaaggcgctggagggctaccggtacatcgaggcgaaccccaagaccgtcagcgacctgctcgtggtcgacatcgacgacgcgaacgcccgcgcgatggccctatgggagcatgaggggatgctgcccaacgtgatcgtggagaacccgagaaacggccacgcgcacgccgtgtgggcgctggccgccccgttcagtcgcaccgaatacgcgcggcgcaagccgctcgccctggccgcggcggtcacggaggggctgcgccgctcgtgcgacggggacaagggatattcggggctgatgaccaagaacccgctgcacgaggcgtggaacagcgaactggtcaccgatcacctgtacgggctggacgaactgcgcgaggcgctggaggagtccggggacatgccgccggcctcgtggaagcgcacgaaacggcgcaacgtggtcggattgggccgcaactgccatctgttcgagacggcgcgcacatgggcgtacagggaggtgcgccaccactggggagacagcgaggggctgcggctggccatctcggccgaggcgcacgaaatgaacgccgaactgttccccgagccactggcatggtcggaggtcgagcagatcgccaagagcatccacaagtggatcgtcacccaaagccgcatgtggcgcgacggccccaccgtctacgaggcgacattcatcaccgtccaaagcgcccgagggaaaaagtcaggcaaggccagacgagaaacattcgaactattcgcgagcgaggagatgacgtaatggttattcaaacgtttttggggcggcttttgggcccatgtgagcaaaaggccagcaaaaggccagggaccgtaaaaaggccgcgttgctggcgtttttccataggctccgcccccctgacgagcatcacaaaaatcgacgctcaagtcagaggtggcgaaacccgaca
ggactataaagataccaggcgtttccccctggaagctccctcgtgcgctctcctgttccgaccctgccgcttaccggatacctgtccgcctttctcccttcgggaagcgtggcgctttctcatagctcacgctgtaggtatctcagttcggtgtaggtcgttcgctccaagctgggctgtgtgcacgaaccccccgttcagcccgaccgctgcgccttatccggtaactatcgtcttgagtccaacccggtaagacacgacttatcgccactggcagcagccactggtaacaggattagcagagcgaggtatgtaggcggtgctacagagttcttgaagtggtggcctaactacggctacactagaagaacagtatttggtatctgcgctctgctgaagccagttaccttcggaaaaagagttggtagctcttgatccggcaaacaaaccaccgctggtagcggtggtttttttgtttgcaagcagcagattacgcgcagaaaaaaaggatctcaagaagatcctttgatcttttctacgggtttttttggggcggctttgaattcttttttttggggcggctttttttttatgcgctcacgcaactggtccagaaccttgaccgaacgcagcggtggtaacggcgcagtggcggttttcatggcttgttatgactgtttttttggggtacagtctatgcctcgggcatccaagcagcaagcgcgttacgccgtgggtcgatgtttgatgttatggagcagcaacgatgttacgcagcagggcagtcgccctaaaacaaagttaaacatcatgagggaagcggtgatcgccgaagtatcgactcaactatcagaggtagttggcgtcatcgagcgccatctcgaaccgacgttgctggccgtacatttgtacggctccgcagtggatggcggcctgaagccacacagtgatattgatttgctggttacggtgaccgtaaggcttgatgaaacaacgcggcgagctttgatcaacgaccttttggaaacttcggcttcccctggagagagcgagattctccgcgctgtagaagtcaccattgttgtgcacgacgacatcattccgtggcgttatccagctaagcgcgaactgcaatttggagaatggcagcgcaatgacattcttgcaggtatcttcgagccagccacgatcgacattgatctggctatcttgctgacaaaagcaagagaacatagcgttgccttggtaggtccagcggcggaggaactctttgatccggttcctgaacaggatctatttgaggcgctaaatgaaaccttaacgctatggaactcgccgcccgactgggctggcgatgagcgaaatgtagtgcttacgttgtcccgcatttggtacagcgcagtaaccggcaaaatcgcgccgaaggatgtcgctgccgactgggcaatggagcgcctgccggcccagtatcagcccgtcatacttgaagctagacaggcttatcttggacaagaagaagatcgcttggcctcgcgcgcagatcagttggaagaatttgtccactacgtgaaaggcgagatcaccaaggtagtcggcaaataaggtaccgttaggcgttttcgcgatgtacgggccagatatacgcgttgacattgattattgactagttattaatagtaatcaattacggggtcattagttcatagcccatatatggagttccgcgttacataacttacggtaaatggcccgcctggctgaccgcccaacgacccccgcccattgacgtcaataatgacgtatgttcccatagtaacgccaatagggactttccattgacgtcaatgggtggagtatttacggtaaactgcccacttggcagtacatcaagtgtatcatatgccaagtacgccccctattgacgtcaatgacggtaaatggcccgcctggcattatgcccagtacatgaccttatgggactttcctacttggcagtacat
ctacgtattagtcatcgctattaccatggtgatgcggttttggcagtacatcaatgggcgtggatagcggtttgactcacggggatttccaagtctccaccccattgacgtcaatgggagtttgttttggcaccaaaatcaacgggactttccaaaatgtcgtaacaactccgccccattgacgcaaatgggcggtaggcgtgtacggtgggaggtctatataagcagagctcgtttagtgaaccgtcagatcgcctggagacgccatccacgctgttttgacctccatagaagacaccgggaccgatccagcctccggactctagaggatcgaagctagccaccatggtgagcaagggcgaggagctgttcaccggggtggtgcccatcctggtcgagctggacggcgacgtaaacggccacaagttcagcgtgtccggcgagggcgagggcgatgccacctacggcaagctgaccctgaagttcatctgcaccaccggcaagctgcccgtgccctggcccaccctcgtgaccaccctgacctacggcgtgcagtgcttcagccgctaccccgaccacatgaagcagcacgacttcttcaagtccgccatgcccgaaggctacgtccaggagcgcaccatcttcttcaaggacgacggcaactacaagacccgcgccgaggtgaagttcgagggcgacaccctggtgaaccgcatcgagctgaagggcatcgacttcaaggaggacggcaacatcctggggcacaagctggagtacaactacaacagccacaacgtctatatcatggccgacaagcagaagaacggcatcaaggtgaacttcaagatccgccacaacatcgaggacggcagcgtgcagctcgccgaccactaccagcagaacacccccatcggcgacggccccgtgctgctgcccgacaaccactacctgagcacccagtccgccctgagcaaagaccccaacgagaagcgcgatcacatggtcctgctggagttcgtgaccgccgccgggatcactctcggcatggacgagctgtacaagtagatcttcgatccctaccggttagtaatgagtttaaacgggggaggctaactgaaacacggaaggagacaataccggaaggaacccgcgctatgacggcaataaaaagacagaataaaacgcacgggtgttgggtcgtttgttcataaacgcggggttcggtcccagggctggcactctgtcgataccccaccgagtccccattggggccaatacgcccgcgtttcttccttttccccaccccaccccccaagttcgggtgaaggcccagggctcgcagccaacgtcggggcggcaggccctgccatagctttttggggcggcttttctcgagtccttccttaaggacgtgctcgtcaatttttgttttgagagtcatctattcggatgcttttcatgaagttttttatgaccctgaccggcaccctgcgcaaggccttcgccaccaccctggccgccgccatgctgatcggcaccctggccggctgctcctccgccgcatacaacaagtctgacctcgtttcgaagatcgcccagaagtccaacctgaccaaggctcaggccgaggctgctgttaacgccttccaggatgtgttcgtcgaggctatgaagtccggcgaaggcctgaagctcaccggcctgttctccgctgagcgcgtcaagcgcccggctcgcaccggccgcaacccgcgcactggcgagcagattgacattccggcttcctacggcgttcgtatctccgctggctccctgctgaagaaggccgtcaccgagtatggacggaagaagcgcaggcagcgacggcgatgaggaagaagcgcaggcagcgacggcgatgagggtttgcgcttgcgtcgtggagggagcggaacgccgaaaaaggatccseq id no: 3: lac i dna binding 
domainaaatatgtaacgttatacgatgtcgcagagtatgccggtgtctctcatcagaccgtttcccgcgtggtgaaccaggccagccacgtttctgcgaaaacgcgggaaaaagtggaagcggcgatggcggagctgaattacattcccaaccgcgtggcacaacaactggcgggcaaacagtcgttgctgattseq id no: 4: hu dna binding domaingcatacaacaagtctgacctcgtttcgaagatcgcccagaagtccaacctgaccaaggctcaggccgaggctgctgttaacgccttccaggatgtgttcgtcgaggctatgaagtccggcgaaggcctgaagctcaccggcctgttctccgctgagcgcgtcaagcgcccggctcgcaccggccgcaacccgcgcactggcgagcagattgacattccggcttcctacggcgttcgtatctccgctggctccctgctgaagaaggccgtcaccgagseq id no: 5: mer r dna binding domaingaaaacaatttggagaacctgaccattggcgttttcgccaggacggccggggtcaatgtggagaccatccggttctatcagcgcaagggcttgctcccggaaccggacaagccttacggcagcattcgccgctatggcgagacggatgtaacgcgggtgcgcttcgtgaaatcagcccagcggttgggcttcagcctggatgagatcgccgagctgctgcggctggaggatggcacccattgcgaggaagccagcagcctggccgagcacaagctcaaggacgtgcgcgagaggatggctgacctggcgcgcatggaggccgtgctgtctgatttggtgtgcgcctgccatgcgcgaagggggaacgtttcctgcccgctgatcgcgtcactacagggtggagcaagcttggcaggttcggctatgcctseq id no: 6: zinc finger dna binding domain.gaaaaactgcgcaacggcagcggcgatccgggcaaaaaaaaacagcatgcgtgcccggaatgcggcaaaagctttagccagagcagcgatctgcagcgccatcagcgcacccataccggcgaaaaaccgtataaatgcccggaatgcggcaaaagctttagccgcagcgatgaactgcagcgccatcagcgcacccataccggcgaaaaaccgtataaatgcccggaatgcggcaaaagctttagccgcagcgatcatctgagccgccatcagcgcacccatcagaacaaaaaaseq id no: 7: coding sequence for smt proteinatgaccctgaccggcaccctgcgcaaagcgtttgcgaccaccctggcggcggcgatgctgattggcaccctggcgggctgcagcagcgcggaaaacaatttggagaacctgaccattggcgttttcgccaggacggccggggtcaatgtggagaccatccggttctatcagcgcaagggcttgctcccggaaccggacaagccttacggcagcattcgccgctatggcgagacggatgtaacgcgggtgcgcttcgtgaaatcagcccagcggttgggcttcagcctggatgagatcgccgagctgctgcggctggaggatggcacccattgcgaggaagccagcagcctggccgagcacaagctcaaggacgtgcgcgagaggatggctgacctggcgcgcatggaggccgtgctgtctgatttggtgtgcgcctgccatgcgcgaagggggaacgtttcctgcccgctgatcgcgtcactacagggtggagcaagcttggcaggttcggctatgccttatggacggaagaagcgcaggcagcgacggcgatgaseq id no: 8: coding sequence for sht 
proteinatgaccctgaccggcaccctgcgcaaagcgtttgcgaccaccctggcggcggcgatgctgattggcaccctggcgggctgcagcagcgcggcatacaacaagtctgacctcgtttcgaagatcgcccagaagtccaacctgaccaaggctcaggccgaggctgctgttaacgccttccaggatgtgttcgtcgaggctatgaagtccggcgaaggcctgaagctcaccggcctgttctccgctgagcgcgtcaagcgcccggctcgcaccggccgcaacccgcgcactggcgagcagattgacattccggcttcctacggcgttcgtatctccgctggctccctgctgaagaaggccgtcaccgagtatggacggaagaagcgcaggcagcgacggcgatgaseq id no: 9: coding sequence for sltatgaccctgaccggcaccctgcgcaaagcgtttgcgaccaccctggcggcggcgatgctgattggcaccctggcgggctgcagcagcgcgaaatatgtaacgttatacgatgtcgcagagtatgccggtgtctctcatcagaccgtttcccgcgtggtgaaccaggccagccacgtttctgcgaaaacgcgggaaaaagtggaagcggcgatggcggagctgaattacattcccaaccgcgtggcacaacaactggcgggcaaacagtcgttgctgatttatggacggaagaagcgcaggcagcgacggcgatgaseq id no: 10: coding sequence for sztatgaccctgaccggcaccctgcgcaaagcgtttgcgaccaccctggcggcggcgatgctgattggcaccctggcgggctgcagcagcgcggaaaaactgcgcaacggcagcggcgatccgggcaaaaaaaaacagcatgcgtgcccggaatgcggcaaaagctttagccagagcagcgatctgcagcgccatcagcgcacccataccggcgaaaaaccgtataaatgcccggaatgcggcaaaagctttagccgcagcgatgaactgcagcgccatcagcgcacccataccggcgaaaaaccgtataaatgcccggaatgcggcaaaagctttagccgcagcgatcatctgagccgccatcagcgcacccatcagaacaaaaaatatggacggaagaagcgcaggcagcgacggcgatga|[mm1]seq id no: 18: alpha-amylase sequence with extra aminoacids that permit cleavageatgaaacatcggaaacccgcaccggcctggcataggctggggctgaagattagcaagaaagtggtggtcggcatcaccgccgcggcgaccgccttcggcggactggcaatcgccagcaccgcagcacaggccagcaccseq id no: 19: 46 amino acid alpha amylase signal peptide withputative cleavage sitemkhrkpapawhrlglkiskkvvvgitaaatafgglaiastaaqastseq id no: 20: cleaved alpha amylase signal peptideatgaaacatcggaaacccgcaccggcctggcataggctggggctgaagattagcaagaaagtggtggtcggcatcaccgccgcggcgaccgccttcggcggactggcaatcgccagcaccgcagcacaggccseq id no: 21: 44 amino acid predicted cleaved alpha amylasesignal peptide (no cleavage signal)mkhrkpapawhrlglkiskkvvvgitaaatafgglaiastaaqaseq id no: 22: arabinosidase signal peptide coding 
sequence.accctgaccggcaccctgcgcaaagcgtttgcgaccaccctggcggcggcgatgctgattggcaccctggcgggctgcagcagcgcgseq id no: 23: arabinosidase signal peptidetltgtlrkafattlaaamligtlagcssaseq id no: 24: sht hybrid protein (133 amino acids)mtltgtlrkafattlaaamligtlagcssaaynksdlvskiaqksnltkaqaeaavnafqdvfveamksgeglkltglfsaervkrpartgrnprtgeqidipasygvrisagsllkkavteygrkkrrqrrrseq id no: 25: slt hybrid protein (104 amino acids)mtltgtlrkafattlaaamligtlagcssakyvtlydvaeyagvshqtvsrvvnqashvsaktrekveaamaelnyipnrvaqqlagkqslliygrkkrrqrrrseq id no: 26: smt hybrid protein (184 amino acids)mtltgtlrkafattlaaamligtlagcssaennlenltigvfartagvnvetirfyqrkgllpepdkpygsirrygetdvtrvrfvksaqrlgfsldeiaellrledgthceeasslaehklkdvrermadlarmeavlsdlvcacharrgnvscpliaslqggaslagsampygrkkrrqrrrseq id no: 27: szt hybrid protein (139 amino acids)mtltgtlrkafattlaaamligtlagcssaeklrngsgdpgkkkqhacpecgksfsqssdlqrhqrthtgekpykcpecgksfsrsdelqrhqrthtgekpykcpecgksfsrsdhlsrhqrthqnkkygrkkrrqrrrseq id no: 28: sequence comprising pdohjr oriacgcgctggagatgttcaacgagtagatcgccacggcgacctccttccacgcgtgcgggcacggggattctcaaggggccggcccgaggccccttgagcccgccgggaggcgcccccggcagggcgggaatccaaagggcggagccctgtggccctccccgggcaggggcgggatcgtcaagggcggagcccttggccccctcgggagagcgcactgacacaatgctacctccggtagcattaagtgcgccctccgccatgcggagggacgggccgcgaccggatatgcggggaacgtccacgacgcgtcttccgtgtcctccgtcctgccttgtgccgtcgattataatcctcggataacggacgatgattgaaccgattggaggaaacgagatgccgaagagtttcgcgcagcagatcgaggacgacgagaacaagatcaagcgcatccgggagcaccagcgcatggtcagggccaaacaagccaaacaggagcgcaacgcccgcaccaaaaggctcgtggagaccggagcgatagtggaaaaagcgcacggcggcgcgtacgacgacgaaggacggcagacgttctcggatggtctgaacggcatcatctcggtctacgacccgtcccgcggcggcaacgtggacatgagagtcatcgacgtaatcgaccggcgcatccccagattgccaaggtccgaaaccacaacaggcacggcagcggcggcatcgcgaaccgtgcaagccaccgcaccacagccagcccacgcccaaccgcaaagcttcacgcccaacccccagcgcatcgaacaccggacaggagcacagcagccggatcggtgggcctagccgggacggcgcagccctcgctccgctgggctgaaatatctgacttggttcgtaacatatttccgacatgtcggaaatatgtcgtatcatcgttgctatgagtactgaacttcgtgagcagtgggagcagctttacctgccgctgcgcccgctgtgcacgaacgacttcatcga
gggcgtgtaccggcagcccagggcgaaggcgctggagggctaccggtacatcgaggcgaaccccaagaccgtcagcgacctgctcgtggtcgacatcgacgacgcgaacgcccgcgcgatggccctatgggagcatgaggggatgctgcccaacgtgatcgtggagaacccgagaaacggccacgcgcacgccgtgtgggcgctggccgccccgttcagtcgcaccgaatacgcgcggcgcaagccgctcgccctggccgcggcggtcacggaggggctgcgccgctcgtgcgacggggacaagggatattcggggctgatgaccaagaacccgctgcacgaggcgtggaacagcgaactggtcaccgatcacctgtacgggctggacgaactgcgcgaggcgctggaggagtccggggacatgccgccggcctcgtggaagcgcacgaaacggcgcaacgtggtcggattgggccgcaactgccatctgttcgagacggcgcgcacatgggcgtacagggaggtgcgccaccactggggagacagcgaggggctgcggctggccatctcggccgaggcgcacgaaatgaacgccgaactgttccccgagccactggcatggtcggaggtcgagcagatcgccaagagcatccacaagtggatcgtcacccaaagccgcatgtggcgcgacggccccaccgtctacgaggcgacattcatcaccgtccaaagcgcccgagggaaaaagtcaggcaaggccagacgagaaacattcgaactattcgcgagcgaggagatgacgtaaseq id no: 29: e. coli /puc ori.atgtgagcaaaaggccagcaaaaggccagggaccgtaaaaaggccgcgttgctggcgtttttccataggctccgcccccctgacgagcatcacaaaaatcgacgctcaagtcagaggtggcgaaacccgacaggactataaagataccaggcgtttccccctggaagctccctcgtgcgctctcctgttccgaccctgccgcttaccggatacctgtccgcctttctcccttcgggaagcgtggcgctttctcatagctcacgctgtaggtatctcagttcggtgtaggtcgttcgctccaagctgggctgtgtgcacgaaccccccgttcagcccgaccgctgcgccttatccggtaactatcgtcttgagtccaacccggtaagacacgacttatcgccactggcagcagccactggtaacaggattagcagagcgaggtatgtaggcggtgctacagagttcttgaagtggtggcctaactacggctacactagaagaacagtatttggtatctgcgctctgctgaagccagttaccttcggaaaaagagttggtagctcttgatccggcaaacaaaccaccgctggtagcggtggtttttttgtttgcaagcagcagattacgcgcagaaaaaaaggatctcaagaagatcctttgatcttttctacgggtttttttggggcggctttseq id no: 30: pb44 
oriccctcatacgagccggtcaaccccgaacgcaggaccgcccagacgccttccgatggcctgatctgacgtccgaaaaaaggcgccgtgcgccctttttaaatcttttaaaatctttttacattcttttaggccctccgcagccctactctcccaacgggtttcggacggtacttagtacaaaaggggagcgaacttagtacaaaaggggagcgaacttagtacaaaaggggagcgaacttagtacaaaaggggagcgaactagtaaataaataaactttgtactagtatggagtcatgtccaatgagatcgtgaagttcagcaaccagttcaacaacgtcgcgctgaagaagttcgacgccgtgcacctggacgtgctcatggcgatcgcctcaagggtgagggagaagggcacggccacggtggagttctcgttcgaggagctgcgcggcctcatgcgattgaggaagaacctgaccaacaagcagctggcagacaagatcgtgcagacgaacgcgcgcctgctggcgctgaactacatgttcgaggattcgggcaagatcatccagttcgcgctgttcacgaagttcgtcaccgacccgcaggaggcgactctcgcggttggggtcaacgaggagttcgctttcctgctcaacgacctgaccagccagttcacgcgcttcgagctggccgagttcgccgacctcaagagcaagtacgccaaggagttctacaggcgcgccaagcagtaccgcagctcgggaatctggaagatcagccgcgatgagttctgccgactgcttggcgtatccgattccacggcaaaatccaccgccaacctgaacagggtcgtgctgaagacgatcgccgaagagtgtgggcctctccttggcctgaagatcgagcgccagtacgtgaaacgcaggctgtcgggcttcgtgttcacgttcgcccgcgagacccctccggtgatcgacgccaggcccgtggaggcgaggaagacggacggcgacggcaagggccattggacgagcgtggcggggtatggcgaggtgttcacgaccacggagctgttcgacgtgacggccgcgcgtgaccacttcgacggcaccgtggaggccggggaatgccgtttctgcgcgtttgacgcgcgcaaccgcgaacatcatgcgcggaacgccggaaggttgttctagcggccgtgtccgcgcctctggggcggttgcgcctgccatggagatctseq id no: 31: tattatggacggaagaagcgcaggcagcgacggcgaseq id no: 32: p-beta mpg (gp41-sv40)ggcgcgctgtttctgggctttctgggcgcggcgggcagcaccatgggcgcgtggagccagccgaaaaaaaaacgcaaagtgseq id no: 33: transportan (galanin-mastoparan)ggctggaccctgaacagcgcgggctatctgctgggcaaaattaacctgaaagcgctggcggcgctggcgaaaaaaattctgseq id no: 34: pep-1 (trp-rich motif-sv40)aaagaaacctggtgggaaacctggtggaccgaatggagccagccgaaaaaaaaacgccgcgtgseq id no: 35: gfp forward primergaattcgccaccatggtgagcaagggseq id no: 36 gfp reverse primer (compliment):tggacgagctgtacaagtaggaattcseq id no: 37: spectinomycin forward primer:atgcgctcacgcaactggtccagaaccttseq id no: 38: spectinomycin reverse primer (compliment):ttatttgccgactaccttggtgatctcgcseq id no: 39 cmv forward 
primer:gttaggcgttttcgcgatgtacgggccagataseq id no: 40: cmv reverse primer (compliment):gtggcgaattcttcgatcctctagagtccggagseq id no: 41: merr dna recognition siteaagcttgtcgacctgcagccttggcgcttgactccgtacatgagtacggaagtaaggttacgctatccttggctgcagaagcttseq id no: 42: laci repressor dna binding siteaagcttcagtgagcgcaacgcaattaatgtgagttagctcactcattaggcaccccaggctttacactttatgcttccggctcgtatgttgtgtggaattgtgagcgctcacaaaagcttseq id no: 43: synthetic zinc finger dna recognition siteaagcttttttttttttggggcggctttttttttttaagcttseq id no: 44: hu dna binding domainaynksdlvskiaqksnltkaqaeaavnafqdvfveamksgeglkltglfsaervkrpartgrnprtgeqidipasygvrisagsllkkavteseq id no: 45: mer r dna binding domainmennlenltigvfartagvnvetirfyqrkgllpepdkpygsirrygetdvtrvrfvksaqrlgfsldeiaellrledgthceeasslaehklkdvrermadlarmeavlsdlvcacharrgnvscpliaslqggaslagsampseq id no: 46: zn finger dna binding domainmeklrngsgdpgkkkqhacpecgksfsqssdlqrhqrthtgekpykcpecgksfsrsdelqrhqrthtgekpykcpecgksfsrsdhlsrhqrthqnkkseq id no: 47: lac i dna binding domainmkyvtlydvaeyagvshqtvsrvvnqashvsaktrekveaamaelnyipnrvaqqlagkqslliseq id no: 48: prokaryotic expression cassette comprisinghu promoter and terminatortccttccttaaggacgtgctcgtcaatttttgttttgagagtcatctattcggatgcttttcatgaagttttttggaagaagcgcaggcagcgacggcgatgagggtttgcgcttgcgtcgseq id no: 49: eukaryotic expression cassette comprisingcmv promoter, kozak sequence and tk poly a site and terminator,flanking gfp coding sequences to be 
expressedggttaggcgttttcgcgatgtacgggccagatatacgcgttgacattgattattgactagttattaatagtaatcaattacggggtcattagttcatagcccatatatggagttccgcgttacataacttacggtaaatggcccgcctggctgaccgcccaacgacccccgcccattgacgtcaataatgacgtatgttcccatagtaacgccaatagggactttccattgacgtcaatgggtggagtatttacggtaaactgcccacttggcagtacatcaagtgtatcatatgccaagtacgccccctattgacgtcaatgacggtaaatggcccgcctggcattatgcccagtacatgaccttatgggactttcctacttggcagtacatctacgtattagtcatcgctattaccatggtgatgcggttttggcagtacatcaatgggcgtggatagcggtttgactcacggggatttccaagtctccaccccattgacgtcaatgggagtttgttttggcaccaaaatcaacgggactttccaaaatgtcgtaacaactccgccccattgacgcaaatgggcggtaggcgtgtacggtgggaggtctatataagcagagctcgtttagtgaaccgtcagatcgcctggagacgccatccacgctgttttgacctccatagaagacaccgggaccgatccagcctccggactctagaggatcgaagctagccaccatggtgagcaagggcgaggagctgttcaccggggtggtgcccatcctggtcgagctggacggcgacgtaaacggccacaagttcagcgtgtccggcgagggcgagggcgatgccacctacggcaagctgaccctgaagttcatctgcaccaccggcaagctgcccgtgccctggcccaccctcgtgaccaccctgacctacggcgtgcagtgcttcagccgctaccccgaccacatgaagcagcacgacttcttcaagtccgccatgcccgaaggctacgtccaggagcgcaccatcttcttcaaggacgacggcaactacaagacccgcgccgaggtgaagttcgagggcgacaccctggtgaaccgcatcgagctgaagggcatcgacttcaaggaggacggcaacatcctggggcacaagctggagtacaactacaacagccacaacgtctatatcatggccgacaagcagaagaacggcatcaaggtgaacttcaagatccgccacaacatcgaggacggcagcgtgcagctcgccgaccactaccagcagaacacccccatcggcgacggccccgtgctgctgcccgacaaccactacctgagcacccagtccgccctgagcaaagaccccaacgagaagcgcgatcacatggtcctgctggagttcgtgaccgccgccgggatcactctcggcatggacgagctgtacaagtagatcttcgatccctaccggttagtaatgagtttaaacgggggaggctaactgaaacacggaaggagacaataccggaaggaacccgcgctatgacggcaataaaaagacagaataaaacgcacgggtgttgggtcgtttgttcataaacgcggggttcggtcccagggctggcactctgtcgataccccaccgagtccccattggggccaatacgcccgcgtttcttccttttccccaccccaccccccaagttcgggtgaaggcccagggctcgcagccaacgtcggggcggcaggccctgccatagctttttggggcggctttt the embodiments and examples presented herein are illustrative of the general nature of the subject matter claimed and are not limiting. 
it will be understood by those skilled in the art how these embodiments can be readily modified and/or adapted for various applications and in various ways without departing from the spirit and scope of the subject matter claimed. phrases, words and terms employed herein are illustrative and are not limiting. where permissible by law, all references cited herein are incorporated by reference in their entirety. it will be appreciated that any aspects of the different embodiments disclosed herein may be combined in a range of possible alternative embodiments, and alternative combinations of features, all of which varied combinations of features are to be understood to form a part of the subject matter claimed. particular embodiments may alternatively comprise or consist of or exclude any one or more of the elements disclosed.
relevant id: 124-446-443-985-72X; earliest claim jurisdiction: KR; jurisdiction: EP; ipc codes: G11C13/00, G11C16/04; earliest claim date: 2019-12-04 (year 2019)
nonvolatile memory device
a nonvolatile memory device includes: a memory cell area including a cell structure and a common source plate. the memory cell area is mounted on a peripheral circuit area including a buried area covered by the memory cell area and an exposed area uncovered by the memory cell area. a first peripheral circuit (pc) via extends from the exposed area, and a common source (cs) via extends from the common source plate, wherein the first pc via and the cs via are connected by a cs wire disposed outside the cell structure and providing a bias voltage to the common source plate.
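the bias path summarized above — exposed area, pc via, cs wire, cs via, common source plate — can be sketched as a simple connectivity check. this is an illustrative model only; the node names below are placeholder labels chosen for this sketch, not elements of any actual netlist or layout database.

```python
# Hypothetical connectivity graph for the bias path of the abstract:
# a pc via rising from the exposed area is joined by a cs wire (routed
# outside the cell structure) to a cs via landing on the common source plate.
connections = {
    "exposed_area": ["first_pc_via"],
    "first_pc_via": ["cs_wire"],
    "cs_wire":      ["first_cs_via"],
    "first_cs_via": ["common_source_plate"],
}

def bias_path(start, goal):
    """Depth-first search returning one path from start to goal, or None."""
    stack = [(start, [start])]
    while stack:
        node, path = stack.pop()
        if node == goal:
            return path
        for nxt in connections.get(node, []):
            if nxt not in path:
                stack.append((nxt, path + [nxt]))
    return None

path = bias_path("exposed_area", "common_source_plate")
```

running the search confirms a single bias route exists from the peripheral circuit side to the plate, and that no return path exists, since the sketch's edges are directed upward from the exposed area.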
1. a nonvolatile memory device comprising: a memory cell area including at least one cell structure and a common source plate; a peripheral circuit area on which the memory cell area is mounted, including a buried area covered by the memory cell area and a first exposed area uncovered by the memory cell area; a first peripheral circuit, pc, via extending from the first exposed area; and a first common source, cs, via extending from the common source plate, wherein the first pc via and the first cs via are connected by a first wire disposed outside the at least one cell structure that provides a bias voltage to the common source plate.

2. the nonvolatile memory device of claim 1, wherein the buried area is centrally disposed on the peripheral circuit area, and the peripheral circuit area further includes a second exposed area uncovered by the memory cell area, wherein the first exposed area and the second exposed area are disposed on opposing sides of the buried area.

3. the nonvolatile memory device of claim 2, wherein the peripheral circuit area further includes a third exposed area uncovered by the memory cell area, wherein the third exposed area extends in a first direction, and the first exposed area and the second exposed area extend in a second direction perpendicular to the first direction.

4. the nonvolatile memory device of claim 3, wherein the third exposed area includes circuits configured to communicate with an external device.

5. the nonvolatile memory device of any preceding claim, further comprising: a first wiring area above the first exposed area; and a plurality of first wires, including the first wire, respectively extending from a plurality of first pc vias, including the first pc via, to connect a plurality of first memory cell, mc, vias extending from the at least one cell structure.

6. the nonvolatile memory device of claim 5, further comprising: a second wiring area above the second exposed area; and a plurality of second wires respectively extending from a plurality of second pc vias to connect a plurality of second mc vias extending from the at least one cell structure.

7. the nonvolatile memory device of any preceding claim, wherein the common source plate is disposed between the at least one cell structure and the buried area.

8. the nonvolatile memory device of any preceding claim, wherein the at least one cell structure comprises multiple cell structures spaced apart on the common source plate and separated by respective word line cuts.

9. a nonvolatile memory device, optionally according to any preceding claim, comprising: a memory cell area including a cell structure and a common source plate, and mounted on a peripheral circuit area including a buried area covered by the memory cell area and an exposed area uncovered by the memory cell area; a first peripheral circuit, pc, via extending from the exposed area; and a common source, cs, via extending from the common source plate, wherein the first pc via and the cs via are connected by a cs wire disposed outside the cell structure and providing a bias voltage to the common source plate.

10. the nonvolatile memory device of claim 9, further comprising: a wiring area above the exposed area; and a plurality of wires, including the cs wire, respectively extending from pc vias extending from the exposed area to connect memory cell, mc, vias extending from the cell structure.

11. the nonvolatile memory device of claim 10, wherein the cell structure comprises an alternating, vertically stacked arrangement of conductive layers and insulating layers.

12. the nonvolatile memory device of claim 11, wherein the alternating, vertically stacked arrangement of conductive layers and insulating layers is arranged in a stair-stepped structure.

13. the nonvolatile memory device of claim 10, wherein the cell structure comprises an alternating, vertically stacked arrangement of conductive layers and insulating layers that form a stair-stepped structure, the conductive layers include a third layer disposed on a second layer including an exposed portion of the second layer uncovered by the third layer, and the second layer is disposed on a first layer including an exposed portion of the first layer uncovered by the second layer, and the plurality of wires further includes a first wire extending from a first pc via to connect a first mc via extending from the exposed portion of the first layer, and a second wire extending from a second pc via to connect a second mc via extending from the exposed portion of the second layer.

14. a nonvolatile memory device, optionally according to any preceding claim, comprising: a memory cell array including a plurality of cell strings disposed on a common source plate; and a peripheral circuit having an upper surface on which the memory cell array is mounted, wherein the peripheral circuit includes: a first row decoder having a first upper surface uncovered by the memory cell array, and configured to bias the plurality of cell strings through first wires extending upward from the first upper surface; a second row decoder having a second upper surface uncovered by the memory cell array, and configured to bias the plurality of cell strings through second wires extending upward from the second upper surface; first common source switches having a third upper surface uncovered by the memory cell array, and configured to bias the common source plate through third wires extending upward from the third upper surface; and second common source switches having a fourth upper surface uncovered by the memory cell array, and configured to bias the common source plate through fourth wires extending upward from the fourth upper surface.

15. the nonvolatile memory device of claim 14, further comprising: a buffer having a fifth upper surface uncovered by the memory cell array, and configured to communicate with an external device.
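the four peripheral blocks recited in claim 14 — two row decoders biasing the cell strings and two sets of common source switches biasing the common source plate, each through wires rising from its own uncovered upper surface — can be tabulated in a minimal bookkeeping sketch. the grouping and every name below are assumptions made for illustration, not the claimed implementation.

```python
# Hypothetical table of the claim-14 peripheral blocks: each block has an
# upper surface uncovered by the memory cell array and biases its target
# through wires extending upward from that surface.
peripheral_blocks = {
    "first_row_decoder":  {"surface": "first",  "wires": "first_wires",  "biases": "cell_strings"},
    "second_row_decoder": {"surface": "second", "wires": "second_wires", "biases": "cell_strings"},
    "first_cs_switches":  {"surface": "third",  "wires": "third_wires",  "biases": "common_source_plate"},
    "second_cs_switches": {"surface": "fourth", "wires": "fourth_wires", "biases": "common_source_plate"},
}

def blocks_biasing(target):
    """Names of the peripheral blocks that bias the given target."""
    return sorted(name for name, blk in peripheral_blocks.items()
                  if blk["biases"] == target)

cs_drivers = blocks_biasing("common_source_plate")
```

the query shows the common source plate is driven from two separate blocks, mirroring how claim 14 places the third and fourth wire groups on distinct uncovered surfaces.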
technical field

the present disclosure relates to semiconductor memory devices, and more particularly, to nonvolatile memory devices. embodiments may provide improved reliability.

background

common types of nonvolatile memory include: read only memory (rom), programmable rom (prom), electrically programmable rom (eprom), electrically erasable and programmable rom (eeprom), flash memory, phase-change random access memory (pram), magnetic ram (mram), resistive ram (rram), and ferroelectric ram (fram). as semiconductor manufacturing technologies have developed, continuous efforts have been made to fabricate three-dimensional (3d) memory devices. compared with two-dimensional memory devices, 3d memory devices provide many more memory cells per unit of lateral chip area. however, 3d nonvolatile memory devices are more difficult to design and fabricate, and often suffer from unique challenges in suppressing noise.

summary

according to a first aspect of the present invention, there is provided a nonvolatile memory device according to claim 1. according to a second aspect of the present invention, there is provided a nonvolatile memory device according to claim 9. according to a third aspect of the present invention, there is provided a nonvolatile memory device according to claim 14. preferred embodiments are defined in the dependent claims. a nonvolatile memory device of any one of the first to third aspects may be or comprise a nonvolatile memory device of any one or more of the other of the first to third aspects.

brief description of the drawings

the above and other objects and features of the inventive concept will become apparent by describing in detail example embodiments thereof with reference to the accompanying drawings. figs. 1, 2 and 3 are respective, perspective diagrams illustrating a nonvolatile memory device according to embodiments of the inventive concept. fig.
4 is a perspective diagram further illustrating cell structure 220 of fig. 3. figs. 5, 6 and 7 are respective cross-sectional views further illustrating the peripheral circuit area and the memory cell area of figs. 1, 2, 3 and 4. fig. 8 is a circuit diagram further illustrating the channel area of fig. 7. fig. 9 is a block diagram illustrating a nonvolatile memory device according to an embodiment of the inventive concept. fig. 10 further illustrates components of a first row decoder and a second row decoder corresponding to one memory block. fig. 11 is a block diagram illustrating a nonvolatile memory device according to another embodiment. fig. 12 further illustrates components of a first row decoder block and a second row decoder block corresponding to one memory block.

detailed description

embodiments of the inventive concept provide nonvolatile memory devices providing improved reliability. according to an example embodiment, a nonvolatile memory device includes: a memory cell area including at least one cell structure and a common source plate, a peripheral circuit area on which the memory cell area is mounted, including a buried area covered by the memory cell area and a first exposed area uncovered by the memory cell area, a first peripheral circuit (pc) via extending from the first exposed area, and a first common source (cs) via extending from the common source plate, wherein the first pc via and the first cs via are connected by a first wire disposed outside the at least one cell structure that provides a bias voltage to the common source plate.
according to an example embodiment, a nonvolatile memory device includes: a memory cell area including a cell structure and a common source plate, and mounted on a peripheral circuit area including a buried area covered by the memory cell area and an exposed area uncovered by the memory cell area, a first peripheral circuit (pc) via extending from the exposed area, and a common source (cs) via extending from the common source plate, wherein the first pc via and the cs via are connected by a cs wire disposed outside the cell structure and providing a bias voltage to the common source plate. according to an example embodiment, a nonvolatile memory device includes: a memory cell array including a plurality of cell strings disposed on a common source plate, and a peripheral circuit having an upper surface on which the memory cell array is mounted. the peripheral circuit includes: a first row decoder having a first upper surface uncovered by the memory cell array, and configured to bias the plurality of cell strings through first wires extending upward from the first upper surface, a second row decoder having a second upper surface uncovered by the memory cell array, and configured to bias the plurality of cell strings through second wires extending upward from the second upper surface, first common source switches having a third upper surface uncovered by the memory cell array, and configured to bias the common source plate through third wires extending upward from the third upper surface, and second common source switches having a fourth upper surface uncovered by the memory cell array, and configured to bias the common source plate through fourth wires extending upward from the fourth upper surface. embodiments of the inventive concept will be described in some additional detail with reference to the accompanying drawings in which like reference numbers and labels are used to denote like or similar elements and features. figure (fig.)
1 is a perspective diagram illustrating a physical structure of a nonvolatile memory device 10 according to an embodiment of the inventive concept. referring to fig. 1 , the nonvolatile memory device 10 may generally include a peripheral circuit area 100 and a memory cell area 200. here, both the peripheral circuit area 100 and the memory cell area 200 may have respective, lateral plate structures, wherein the memory cell area 200 is vertically disposed on (or over) the peripheral circuit area 100. in this regard and throughout the following description, certain geometric terms (or descriptors) will be used to more clearly teach the making and use of embodiments of the inventive concept. those skilled in the art will recognize that such geometric terms are relative and arbitrary in nature. for example, the term "horizontal" (or "lateral") may refer to an arrangement or orientation of elements in relation to a first direction (or an x direction) and a second direction (or a y direction), whereas the term "vertical" (or "columnar") may refer to an arrangement or orientation of elements in a third direction (or z direction) substantially perpendicular to a horizontal plane. further in this regard and again assuming an arbitrary geometric orientation, certain element(s) may be described as being (or having) "upward/downward", "upper/lower", "top/bottom", "over/under", "above/beneath", "beside", "around", "facing" (or "opposing"), etc., in relation to other element(s). here again, those skilled in the art will understand that such description is relative in nature, and usually drawn to one or more illustrated embodiments in order to clearly teach certain example configurations. it is further noted that any reference herein to "first" may mean "sole" or "first of a plurality". hence, with regard to the illustrated embodiment of fig.
1 , the peripheral circuit area 100 may be disposed substantially in a first horizontal plane and the memory cell area 200 may be disposed substantially in a second horizontal plane over the first horizontal plane. that is, the memory cell area 200 may be vertically mounted on an upper surface of the peripheral circuit area 100. here, the term "mount" ("mounted" or "mounting") is used to generally denote a mechanical and/or electrical connection between two or more, vertically disposed elements. with regard to the illustrated embodiment of fig. 1 , circuits in the memory cell area 200 are mounted on circuits in the peripheral circuit area 100 to enable communication and interoperation between the respective circuits. fig. 2 is an expanded, perspective diagram further illustrating the mounting of the memory cell area 200 on the peripheral circuit area 100 of fig. 1 , wherein the memory cell area 200 occupies a smaller lateral area than the peripheral circuit area 100. thus, as can be seen in figs. 1 and 2 , the memory cell area 200 is centrally mounted on the peripheral circuit area 100 to define one or more buried portion(s) of the peripheral circuit area 100, and one or more exposed portion(s) of the peripheral circuit area 100. accordingly, a centrally disposed, buried portion 110 of the peripheral circuit area 100 may be at least partially (and laterally) surrounded by at least one exposed portion of the peripheral circuit area 100. specific to the illustrated embodiment of fig. 2 , the buried portion 110 is partially surrounded by a first exposed area 120, a second exposed area 130 and a third exposed area 140. here, the first exposed area 120 and the second exposed area 130 extend in the second direction, and the third exposed area 140 extends in the first direction. in this regard, the terms "buried" and "exposed" refer to differing states of an upper surface of the peripheral circuit area 100 once the memory cell area 200 has been mounted thereon.
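the covered/uncovered split of the peripheral upper surface can be pictured with a small arithmetic sketch. the dimensions below are purely illustrative assumptions and do not come from the specification:

```python
# sketch of the buried/exposed upper-surface split when a smaller memory
# cell area is centrally mounted on a larger peripheral circuit area.
# all dimensions here are hypothetical, not values from the specification.

def surface_split(peri_x, peri_y, cell_x, cell_y):
    """return (buried_area, exposed_area) of the peripheral upper surface.

    the memory cell area (cell_x x cell_y) covers the centrally disposed
    buried portion; the remainder of the peripheral upper surface
    (peri_x x peri_y) stays exposed for wiring areas and pads.
    """
    assert cell_x <= peri_x and cell_y <= peri_y
    buried = cell_x * cell_y
    exposed = peri_x * peri_y - buried
    return buried, exposed

buried, exposed = surface_split(peri_x=10.0, peri_y=8.0, cell_x=7.0, cell_y=6.0)
```

with these example dimensions, the centrally mounted memory cell area buries 42 of the 80 units of upper surface, leaving 38 units exposed for wiring areas and pads.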
hence, an upper surface of the buried portion 110 may be covered by the mounted memory cell area 200, and respective upper surfaces of the exposed portions may be uncovered by the mounted memory cell area 200. the memory cell area 200 may include a plurality of cell transistors. the plurality of cell transistors may be used as memory cells capable of storing data during the execution of write (or program) operations and retrieving stored data during the execution of read operations; dummy cells capable of storing dummy data; and/or selection transistors used to select (or not select) between the memory cells and/or the dummy memory cells during the execution of read/write operations. a first wiring area 310 may include first wires connecting the memory cell area 200 (or any particular component of the memory cell area 200) and the peripheral circuit area 100 (or any particular component of the peripheral circuit area 100) and may be provided over the first exposed area 120. a second wiring area 320 may include second wires connecting the memory cell area 200 and the peripheral circuit area 100 and may be provided over the second exposed area 130. of note in this regard, the first wires and second wires may be used to variously connect components of the memory cell area 200 (e.g., a common source plate) with components of the peripheral circuit area 100 outside of (or external to) a cell structure disposed in the memory cell area 200. that is, various first and second wires need not be run through the cell structure to connect components. in this context, it should be noted that the term "wire" as used herein broadly denotes a great variety of conductive elements having various shapes, sizes and electrically conductive properties. hence, the term "wire" should be broadly construed to read on any electrically conductive element(s) capable of communicating an electrical signal from one element (e.g., a first via) to another element (e.g., a second via).
that is, circuitry and components in the memory cell area 200 may be at least partially, electrically connected with circuitry and components in the peripheral circuit area 100 through the first wires of the first wiring area 310 and the second wires of the second wiring area 320 disposed on opposing sides of the memory cell area 200. the buried area 110 may include various circuits used to control the operation of the nonvolatile memory device 10. the first exposed area 120 may include circuits used to control the operation of certain cell transistors of the memory cell area 200 through the first wires, and the second exposed area 130 may include circuits used to control the operation of certain cell transistors of the memory cell area 200 through the second wires. in contrast, the third exposed area 140 may include circuits capable of communicating (e.g., exchanging signals) with an external device, and may also include various wires and pads used to physically interface with the external device. fig. 3 is a perspective diagram illustrating one possible structure for the memory cell area 200 of fig. 2 . referring to figs. 1 , 2 , and 3 , the memory cell area 200 may include a common source plate 210 extending in a horizontal plane defined by the first direction and the second direction, and a plurality of cell structures 220 disposed on the common source plate 210, and spaced from each other in the second direction. the common source plate 210 may be provided in common connection with respective circuitry in each one of the plurality of cell structures 220. for example, the common source plate 210 may be used to transfer one or more voltage(s) to the plurality of cell structures 220. in this regard, the common source plate 210 may include silicon doped with p-type impurities and/or n-type impurities. a space physically separating the plurality of cell structures 220 may be referred to as a word line cut (or wl cut).
the plurality of cell structures 220 may have the same structure. each of the plurality of cell structures 220 may include cell strings horizontally arranged along the first direction and the second direction, and each of the cell strings may include cell transistors vertically stacked in the third direction. the plurality of cell structures 220 may be variously configured to provide memory block(s). that is, each memory block may be provided by one or more cell structures 220. for example, a memory block may include a set of commonly managed cell transistors, so that during the execution of read/write operations, various voltages may be simultaneously applied to cell transistors belonging to the same memory block. fig. 4 is a perspective diagram illustrating one possible structure for one cell structure 220 of fig. 3 . referring collectively to figs. 1 , 2 , 3 , and 4 , a stair-stepped cell structure 220 may be disposed on the common source plate 210, wherein the stair-stepped cell structure 220 includes a plurality of vertically stacked layers. here, each of the successively stacked layers ("stacked layers") includes an upper surface having a buried portion and laterally opposing, exposed portions resulting from the fact that each successively stacked layer occupies a smaller lateral area than the immediately underlying layer. the illustrated example of fig. 4 shows the cell structure 220 including eleven (11) layers, but this is just one possible configuration contemplated by embodiments of the inventive concept. the cell structure 220 may include a channel area 230. the channel area 230 may be centrally disposed in the cell structure 220 (e.g., in the first direction). the channel area 230 may extend upward to be commonly included in the vertically stacked layers of the cell structure 220. that is, in the channel area 230, cell transistors may be implemented in each of the stacked layers.
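the hierarchy just described (cell structures holding a horizontal grid of cell strings, each string holding vertically stacked cell transistors) can be sketched as a minimal data structure. the grid size is hypothetical; the eleven-layer count follows the fig. 4 example:

```python
# minimal data-structure sketch of the cell hierarchy described above:
# a cell structure holds a 2-d grid of cell strings, and each string
# holds cell transistors stacked vertically. counts are illustrative.

class CellString:
    def __init__(self, num_layers):
        # one cell transistor per vertically stacked conductive layer
        self.transistors = ["cell_transistor"] * num_layers

class CellStructure:
    def __init__(self, strings_x, strings_y, num_layers):
        # strings arranged along the first (x) and second (y) directions
        self.strings = [[CellString(num_layers) for _ in range(strings_y)]
                        for _ in range(strings_x)]

# e.g. an eleven-layer structure (as in fig. 4) with a 2x2 grid of strings
structure = CellStructure(strings_x=2, strings_y=2, num_layers=11)
```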
each of the stacked layers of the cell structure 220 may be respectively connected with the peripheral circuit area 100 using one or more wires. for example, referring to figs. 2 and 4 , one or more of the first wires of the first wiring area 310 and/or one or more of the second wires of the second wiring area 320 may be used to variously connect the peripheral circuit area 100 to one or more of the stacked layers of the cell structure 220. to prevent fig. 4 from becoming unnecessarily complicated, only certain second wires are illustrated. here, the second wires 243 may respectively connect memory cell (mc) vias 241 (i.e., vias disposed in the memory cell area 200) with peripheral circuit (pc) vias 242 (i.e., vias disposed in the peripheral circuit area 100). the mc vias 241 may extend vertically upward from (or through) one or more of the stacked layers, and the pc vias 242 may extend vertically upward through the second wiring area 320. the mc vias 241, pc vias 242, and the second wires 243 may be variously formed from one or more conductive materials (e.g., one or more metal(s) or metal layer(s)). in this context, phrases like "extend vertically upward", "extend from", "extending from", etc. denote relationships in which a via may be electrically connectable as it is disposed in (or as it passes through) a material layer. hence, a via may significantly extend above an upper surface of the material layer, or it may only minimally extend from (or be exposed in) a material layer in order to be connected with another conductive element. the second wires (and/or the first wires, not shown) may further include at least one wire connected to the common source plate 210. hereafter, certain first and/or second wires connecting the common source plate 210 will be referred to as cs wire(s) and certain pc vias and/or mc vias will be referred to as cs via(s). in fig.
4 , at least one cs via 251 may extend vertically upward from the common source plate 210 and be connected to a pc via 252 through a cs wire 253. here again, the conductive material(s) forming the cs wire 253 and the cs via 251 may include one or more metal(s). it is possible to electrically bias the common source plate 210 using one or more cs via(s) (e.g., through hole via(s)) that penetrate the cell structure 220 (e.g., the channel area 230). however, one or more cs via(s) that do not penetrate the cell structure 220 may be used in certain embodiments of the inventive concept. in such cases, wherein a cs via does not penetrate the cell structure 220, the level of difficulty associated with the fabrication of nonvolatile memory devices according to embodiments of the inventive concept may be reduced. in this regard, cs via(s) may occupy space not occupied by cell transistors. accordingly, the placement and number of cs vias need not reduce the integration density of the constituent cell transistors. as a further result of the foregoing, additional cs vias may be incorporated in nonvolatile memory devices according to embodiments of the inventive concept without fear of adversely influencing the integration density of the cell transistors. accordingly, an increased number of cs vias provides an enhanced voltage capacity for biasing the common source plate 210. when a ground voltage is applied to the common source plate 210 and a current flows from the channel area 230 to the common source plate 210, a phenomenon may arise in which the voltage of the common source plate 210 varies from ground due to electrical noise. however, in certain embodiments of the inventive concept, the common source plate 210 may be used to apply a relatively stable bias voltage without noise becoming a problem. in certain embodiments of the inventive concept, the common source plate 210 may be used to apply a high voltage during the execution of an erase operation.
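one way to picture why additional cs vias enhance the biasing of the common source plate 210 is to model each via as a parallel resistance: more vias lower the effective resistance and hence the rc time constant for charging the plate to a target bias. the resistance and capacitance values below are purely illustrative assumptions, not values from the specification:

```python
# hedged sketch: modeling each cs via as a parallel resistance. adding
# vias lowers the effective resistance seen by the common source plate,
# which shortens the rc time constant when charging it to a bias voltage.
# resistance and capacitance values are illustrative, not from the spec.

def rc_time_constant(num_vias, via_resistance, plate_capacitance):
    # n identical vias in parallel: r_eff = r / n
    effective_r = via_resistance / num_vias
    return effective_r * plate_capacitance

tau_4 = rc_time_constant(num_vias=4, via_resistance=100.0, plate_capacitance=1e-9)
tau_8 = rc_time_constant(num_vias=8, via_resistance=100.0, plate_capacitance=1e-9)
```

doubling the via count halves the effective resistance, so the eight-via case charges the plate twice as fast under this simple model.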
in such cases, wherein the bias voltage capacity of the common source plate 210 increases, the time required to charge the bias voltage of the common source plate 210 to a desired level (e.g., a high voltage level) may be reduced. fig. 5 is a cross-sectional diagram further illustrating in one example the peripheral circuit area 100 and the memory cell area 200 of a nonvolatile memory device according to an embodiment of the inventive concept. here, the cross-sectional view of the cell structure 220 is taken along the first direction, and further illustrates wires connecting an uppermost layer of the cell structure 220 from among the first wires and the second wires. referring to figs. 1 , 2 , 3 , 4 , and 5 , the peripheral circuit area 100 may include an active area 150, as well as a first connection element 160 and a second connection element 170 disposed on the active area 150. in certain embodiments, the active area 150 may be part of a semiconductor substrate, the first connection element 160 may be a first transistor connected with a first pc via 242, and the second connection element 170 may be a second transistor connected with a second pc via 242. the first transistor 160 may include a gate 161, an insulating layer 162, a first junction 163, and a second junction 164. the second transistor 170 may include a gate 171, an insulating layer 172, a first junction 173, and a second junction 174. the first junction 163 of the first transistor 160 may be connected with a third pc via 151. the third pc via 151 may be connected with another component in the peripheral circuit area 100 through a metal wire 152. the second junction 164 of the first transistor 160 may be connected with the pc via 242. the first junction 173 of the second transistor 170 may be connected to a fourth pc via 153. the fourth pc via 153 may be connected with another component in the peripheral circuit area 100 through a metal wire 154.
the second junction 174 of the second transistor 170 may be connected with the pc via 242. only components directly connected with a pc via 242 from among the many components typically populating the peripheral circuit area 100 are shown in fig. 5 . however, those skilled in the art will recognize that additional components not illustrated in fig. 5 may be present in the peripheral circuit area 100. the memory cell area 200 may include the common source plate 210 and the cell structure 220 vertically stacked on the common source plate 210. the cell structure 220 may have a structure in which insulating layers 221 and conductive layers 222 are sequentially and vertically stacked on the common source plate 210. vertical channels 260 may downwardly penetrate the cell structure 220 in the channel area 230. the vertical channels 260 may be used to form cell transistors (e.g., including memory cells, dummy memory cells, and selection transistors) vertically stacked together within the cell structure 220. the cell structure 220 may have a stair-stepped structure in which each successively stacked layer occupies a smaller horizontal footprint (e.g., has a smaller horizontal length in the first direction) than the immediately underlying layer. certain stacked layers may be information storage layer(s) including a silicon oxide layer, a silicon nitride layer, and/or a silicon oxide layer, disposed between the cell structure 220 and the vertical channels 260. the conductive layers 222 of the cell structure 220 may extend along the first direction and may be electrically connected to cell transistors. the conductive layers 222 of the cell structure 220 may be biased through one or more mc vias. that is, cell transistors in each of the conductive layers 222 may be commonly biased through one or more mc via(s). each of the pc vias 242 may extend upwardly from the second junction 164 or 174 of the first transistor 160 or the second transistor 170.
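the stair-stepped structure described above, in which each successively stacked layer has a smaller horizontal length in the first direction than the layer beneath it, can be sketched numerically. the base length and step size are hypothetical:

```python
# sketch of the stair-stepped footprint: each successively stacked layer
# is shorter in the first direction than the one beneath it, leaving a
# step on which an mc via can land. lengths here are illustrative only.

def layer_lengths(base_length, step, num_layers):
    """return the horizontal length of each stacked layer, bottom to top."""
    return [base_length - i * step for i in range(num_layers)]

# eleven layers, as in the fig. 4 example
lengths = layer_lengths(base_length=12.0, step=1.0, num_layers=11)
```

each layer exposes a `step`-wide shelf of the layer below, which is where a via can contact that layer without penetrating the layers above it.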
in certain embodiments of the inventive concept, "unoccupied portion(s)" of the peripheral circuit area 100 and/or unoccupied portion(s) of the memory cell area 200 (i.e., portions not including active elements) may be filled with an insulating material. the pc vias 242 may penetrate upwardly into the insulating material of unoccupied portions of the peripheral circuit area 100 and the memory cell area 200. when the nonvolatile memory device 10 of fig. 1 operates, a high voltage (e.g., 10 v or higher) may be applied to the conductive layers 222. the first transistor 160 and the second transistor 170 may be respectively implemented as high-voltage transistors readily adapted to withstand high voltage. the first transistor 160 and the second transistor 170 may be referred to as "pass transistors" with respect to a high-voltage transfer. for example, twenty-four (24) pass transistors may be used to transfer voltages to eleven (11) conductive material layers. that is, 2(n+1) pass transistors may be used to transfer voltages to 'n' conductive material layers. to efficiently design pass transistors, the pass transistors may be implemented as an array disposed in the peripheral circuit area 100. the number of pass transistors necessary for one cell structure 220 may vary depending on the structure of the cell structure 220, and the connection relationship(s) between corresponding wires, vias and other elements. in the illustrated example of fig. 5 , the vertical channels 260 are shown in cross section, as are the mc vias 241 and pc vias 242 connected with the uppermost layer of the cell structure 220. however, in the case where a location of the vertical channels 260 is not aligned with the cross section corresponding to the mc vias 241 and the pc vias 242 connected with the uppermost layer of the cell structure 220, the vertical channels 260 may not be viewed, or only a part of the vertical channels 260 may be viewed. in the illustrated examples of figs.
4 and 5 , four (4) vertical channels 260 are shown in the channel area 230. however, this number of vertical channels is merely one possible example. likewise, the cell structure 220 illustrated in fig. 5 includes eleven (11) layers, but this number is merely a selected example. to prevent drawings from being unnecessarily complicated, components associated with the vertical channels 260 are not illustrated in fig. 5 . however, certain components typically associated with the vertical channels 260 will be described in some additional detail with reference to fig. 7 . in the illustrated example of fig. 5 , first and second wires 243 are shown connected with the uppermost layer of the cell structure 220. however, those skilled in the art will recognize that similar connections may exist between various first and second wires 243 and other layers of the cell structure 220, as well as between the first and second wires 243 and various mc vias 241 and pc vias 242. fig. 6 is another cross-sectional view (analogous to fig. 5 ) of the peripheral circuit area 100 and the memory cell area 200 of certain embodiments of the inventive concept. here, the cross-sectional view of the cell structure 220 is taken along the first direction with regard to wires connecting the common source plate 210 from among the first wires and/or the second wires. referring to figs. 1 , 2 , 3 , 4 , and 6 , the peripheral circuit area 100 may include the active area 150, as well as a third connection element 180 and a fourth connection element 190 disposed in the active area 150. the active area 150 may be a semiconductor substrate or a portion of a semiconductor substrate. the third and fourth connection elements 180 and 190 may be respective (third and fourth) transistors connected with the pc vias 252 through a first wire or a second wire. the third transistor 180 may include a gate 181, an insulating layer 182, a first junction 183, and a second junction 184.
the fourth transistor 190 may include a gate 191, an insulating layer 192, a first junction 193, and a second junction 194. the first junction 183 of the third transistor 180 may be connected to a third pc via 155. the third pc via 155 may be connected with another component in the peripheral circuit area 100 through a metal wire 156. the second junction 184 of the third transistor 180 may be connected with the pc via 252. the first junction 193 of the fourth transistor 190 may be connected with a fourth pc via 157. the fourth pc via 157 may be connected with another component in the peripheral circuit area 100 through a metal wire 158. the second junction 194 of the fourth transistor 190 may be connected with the pc via 252. here, only components directly connected with the pc vias 252 among components in the peripheral circuit area 100 are illustrated in fig. 6 . however, additional components not illustrated in fig. 6 may be present in the peripheral circuit area 100. the memory cell area 200 may include the common source plate 210 and the cell structure 220 on the common source plate 210. the cell structure 220 may have the same structure as described with reference to fig. 5 . each of the pc vias 252 may extend vertically upward from the second junction 184 or 194 of the third transistor 180 or the fourth transistor 190 through respective unoccupied portions of the peripheral circuit area 100 and/or the memory cell area 200 filled with an insulating material. the third transistor 180 and the fourth transistor 190 may be implemented together with pass transistors and may be included in an array of pass transistors disposed in the peripheral circuit area 100. the third transistor 180 and the fourth transistor 190 may be implemented as high-voltage transistors or low-voltage transistors.
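the pass-transistor sizing rule stated earlier (twenty-four pass transistors for eleven conductive layers, i.e., 2(n+1) pass transistors for 'n' layers) reduces to a one-line helper:

```python
# the 2(n+1) pass-transistor count stated in the description: for 'n'
# conductive material layers, 2 * (n + 1) pass transistors are used.

def pass_transistor_count(num_conductive_layers):
    return 2 * (num_conductive_layers + 1)
```

for the eleven-layer example of fig. 4 this yields the twenty-four pass transistors mentioned in the text; as the description notes, the actual count may vary with the structure of the cell structure 220 and its connection relationships.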
in an embodiment, the array of pass transistors including the third transistor 180 and/or the fourth transistor 190 may be variously disposed in the first exposed area 120 and/or the second exposed area 130. as described with reference to figs. 5 and 6 , the memory cell area 200 may be implemented over the peripheral circuit area 100, which controls the memory cell area 200. this structure may be called a "cell over peri (cop)" structure. fig. 7 is a perspective cross-sectional view of a portion of the channel area 230. referring to figs. 1 , 2 , 3 , 4 , and 7 , the channel area 230 of the cell structure 220 is disposed on an upper surface of the common source plate 210. in the channel area 230, the insulating layers 221 and the conductive layers 222 are alternatingly and sequentially stacked on the common source plate 210. here, the insulating layers 221 may include silicon oxide or silicon nitride. adjacent vertical channels 260 are spaced apart from one another in the second direction and penetrate downward through the insulating layers 221 and the conductive layers 222 of the channel area 230. in an embodiment, the vertical channels 260 contact the common source plate 210 through the vertical stack of alternating insulating layers 221 and conductive layers 222. each of the vertical channels 260 may include an inner material 261, a channel layer 262, and a first insulating layer 263. the inner material 261 may include an insulating material or an air gap. the channel layer 262 may include a p-type semiconductor material or an intrinsic semiconductor material. the first insulating layer 263 may include one or more insulating layers (e.g., different insulating layers) such as a silicon oxide layer, a silicon nitride layer, and an aluminum oxide layer.
the second insulating layers 223 are provided on upper surfaces (e.g., surfaces facing the third direction) of the conductive layers 222, lower surfaces (e.g., surfaces facing away from the third direction) of the conductive layers 222, and surfaces (e.g., surfaces facing the second direction and surfaces facing away from the second direction) of the conductive layers 222 adjacent to the channel layer 262. in each of the vertical channels 260, the first insulating layer 263 and the second insulating layer 223 may be coupled adjacent to each other to form an information storage layer. for example, the first insulating layer 263 and the second insulating layer 223 may include oxide-nitride-oxide (ono) or oxide-nitride-aluminum (ona). the first insulating layer 263 and the second insulating layer 223 may form a tunneling insulating layer, a charge trap layer, and a blocking insulating layer. the conductive layers 222 may include first to eleventh conductive layers 222_1 to 222_11. the conductive layers 222 may include a metallic conductive material. drains 264 are provided on the vertical channels 260. in an embodiment, the drains 264 may include an n-type semiconductor material (e.g., silicon). in an embodiment, the drains 264 may be in contact with upper surfaces of the channel layers 262 of the vertical channels 260. bit lines bl extend in the second direction, are spaced apart in the first direction, and are disposed on the drains 264. the bit lines bl are connected with the drains 264. in an embodiment, the drains 264 and the bit lines bl may be connected through contact plugs. the bit lines bl may include a metallic conductive material. the vertical channels 260 form cell strings cs (refer to fig. 8 ) together with the first and second insulating layers 263 and 223 and the conductive layers 222.
each of the vertical channels 260 forms one cell string together with the first insulating layers 263, the second insulating layers 223, and adjacent conductive layers 222. each of the first to eleventh conductive layers 222_1 to 222_11 may form a plurality of cell transistors, which belong to one layer, together with the first insulating layers 263, the second insulating layers 223, and the channel layers 262 adjacent thereto. each of the first to eleventh conductive layers 222_1 to 222_11 may form a wire connected in common with cell transistors. as the first to eleventh conductive layers 222_1 to 222_11 are vertically stacked in the third direction, cell transistors in each cell string may be stacked along the third direction. the first to eleventh conductive layers 222_1 to 222_11 may extend in the first direction and be arranged in a stair-stepped structure. the bit lines bl may extend in the second direction and may be connected with the plurality of cell structures 220. the bit lines bl may be electrically connected with components of the peripheral circuit area 100 through various vias (e.g., through hole vias) that penetrate the channel area 230 or the cell structure 220. thus, the cell structure 220 may be provided as a 3d memory array. the 3d memory array may be monolithically formed in one or more physical levels of a circuit associated with operations of the common source plate 210 and the cell transistors. here, the term "monolithic" means that layers of each level of the 3d array are directly deposited on the layers of each underlying level of the 3d array. in an embodiment of the inventive concept, the 3d array includes vertical cell strings cs (or nand strings) that are vertically oriented such that at least one memory cell is located over another memory cell. the at least one memory cell may include a charge trap layer.
in this regard, the following documents, which further describe various configurations of 3d memory arrays, are incorporated by reference: u.s. patent nos. 7,679,133 ; 8,553,466 ; 8,654,587 ; 8,559,235 ; and published u.s. patent application pub. no. 2011/0233648 . fig. 8 is a circuit diagram further illustrating the channel area 230 of fig. 7 . referring to figs. 7 and 8 , four (4) vertical channels 260 are assumed to form four (4) vertical cell strings. cell transistors corresponding to the first conductive layer 222_1 may be used as ground selection transistors gst. the first conductive layer 222_1 may be a ground selection line gsl connected in common with the ground selection transistors gst. cell transistors corresponding to the second to ninth conductive layers 222_2 to 222_9 may be used as memory cells mc. the second to ninth conductive layers 222_2 to 222_9 may be used as first to eighth word lines wl1 to wl8. each of the first to eighth word lines wl1 to wl8 may be connected in common with the memory cells mc of the corresponding layer. cell transistors corresponding to the tenth and eleventh conductive layers 222_10 and 222_11 may be used as string selection transistors sst. the tenth and eleventh conductive layers 222_10 and 222_11 may be used as string selection lines ssl11, ssl12, ssl21, and ssl22. the string selection lines ssl11 and ssl21 may correspond to the tenth conductive layers 222_10, respectively. as described with reference to fig. 4 , the string selection lines ssl11 and ssl21 may be biased through different wires of the first wires and different wires of the second wires. the string selection lines ssl12 and ssl22 may correspond to the eleventh conductive layers 222_11, respectively. as described with reference to fig. 4 , the string selection lines ssl12 and ssl22 may be biased through different wires of the first wires and different wires of the second wires.
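the mapping of fig. 8 between conductive-layer indices and line roles (layer 1 as the ground selection line gsl, layers 2 to 9 as word lines wl1 to wl8, layers 10 and 11 as string selection lines) can be sketched as:

```python
# sketch of the layer-to-line mapping of fig. 8 for the eleven-layer
# example: layer 1 -> gsl, layers 2..9 -> wl1..wl8, layers 10..11 -> ssl.

def line_role(layer_index):
    """map a conductive-layer index (1-based, 1..11) to its line role."""
    if layer_index == 1:
        return "gsl"
    if 2 <= layer_index <= 9:
        return f"wl{layer_index - 1}"
    if 10 <= layer_index <= 11:
        return "ssl"
    raise ValueError("layer index out of range for the 11-layer example")

roles = [line_role(i) for i in range(1, 12)]
```

note that fig. 8 further splits each ssl layer into separate lines (ssl11/ssl21 and ssl12/ssl22) biased through different first and second wires; the sketch above collapses that split into a single "ssl" role.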
the common source plate 210 may be used as a common source line csl connected in common with the cell strings. the common source plate 210 may be biased by a positive high voltage in an erase operation, and may be biased by a ground voltage, or by a positive or negative voltage having a level similar to the ground voltage, in a read operation. fig. 9 is a block diagram illustrating a nonvolatile memory device 400 according to an embodiment of the inventive concept. referring to figs. 1 , 2 , 3 , and 9 , the nonvolatile memory device 400 includes a memory cell array 410, a first row decoder block 420, a second row decoder block 430, a page buffer block 440, a data input and output block 450, a buffer block 460, and a control logic block 470. the memory cell array 410 includes a plurality of memory blocks blk1 to blkz on the common source plate 210. each of the memory blocks blk1 to blkz includes one cell structure 220. each cell structure 220 includes a plurality of memory cells. the memory cell array 410 may be implemented with the memory cell area 200. each of the memory blocks blk1 to blkz may be connected with the first row decoder block 420 and the second row decoder block 430 through at least one or more ground selection lines gsl, word lines wl, and at least one or more string selection lines ssl. some of the word lines wl may be used as dummy word lines. each of the memory blocks blk1 to blkz may be connected with the page buffer block 440 through a plurality of bit lines bl. the plurality of memory blocks blk1 to blkz may be connected in common with the plurality of bit lines bl. the first row decoder block 420 is connected to the memory cell array 410 through the ground selection lines gsl, the word lines wl, and the string selection lines ssl. the first row decoder block 420 operates under control of the control logic block 470.
the second row decoder block 430 is connected with the memory cell array 410 through the ground selection lines gsl, the word lines wl, and the string selection lines ssl. the second row decoder block 430 operates under control of the control logic block 470. each of the first and second row decoder blocks 420 and 430 may decode a row address ra received from the buffer block 460 and may control voltages to be applied to the string selection lines ssl, the word lines wl, and the ground selection lines gsl based on the decoded row address. the first row decoder block 420 may include first common source line switches 423. the first common source line switches 423 may bias voltages to the common source plate 210. the second row decoder block 430 may include second common source line switches 433. the second common source line switches 433 may bias voltages to the common source plate 210. the first common source line switches 423 may correspond to one of the third transistor 180 and the fourth transistor 190 described with reference to fig. 6 , and the second common source line switches 433 may correspond to the other of the third transistor 180 and the fourth transistor 190 described with reference to fig. 6 . the page buffer block 440 is connected with the memory cell array 410 through the plurality of bit lines bl. the page buffer block 440 is connected with the data input and output block 450 through a plurality of data lines dl. the page buffer block 440 operates under control of the control logic block 470. during a write operation, the page buffer block 440 may store data to be written to memory cells. the page buffer block 440 may apply voltages to the plurality of bit lines bl based on the stored data. during a read operation or during a verify read operation performed as part of a write operation or an erase operation, the page buffer block 440 may sense voltages of the bit lines bl and may store the sensing result. 
the data input and output block 450 is connected with the page buffer block 440 through the plurality of data lines dl. the data input and output block 450 may receive a column address ca from the buffer block 460. the data input and output block 450 may output data read by the page buffer block 440 to the buffer block 460 depending on the column address ca. the data input and output block 450 may provide data received from the buffer block 460 to the page buffer block 440, based on the column address ca. the buffer block 460 may receive a command cmd and an address addr from an external device through a first channel ch1 and may exchange data "data" with the external device. the buffer block 460 may operate under control of the control logic block 470. the buffer block 460 may transmit the command cmd to the control logic block 470. the buffer block 460 may transmit the row address ra of the address addr to the row decoder block 420 and may transmit the column address ca of the address addr to the data input and output block 450. the buffer block 460 may exchange the data "data" with the data input and output block 450. the control logic block 470 may receive a control signal ctrl from the external device through a second channel ch2. the control logic block 470 may allow the buffer block 460 to route the command cmd, the address addr, and the data "data". the control logic block 470 may decode the command cmd received from the buffer block 460 and may control the nonvolatile memory device 400 depending on the decoded command. in an embodiment, the first row decoder block 420, the second row decoder block 430, the page buffer block 440, the data input and output block 450, the buffer block 460, and the control logic block 470 may be implemented in the peripheral circuit area 100. the first row decoder block 420 or at least a part of the first row decoder block 420 may be implemented in the first exposed area 120. 
the second row decoder block 430 or at least a part of the second row decoder block 430 may be implemented in the second exposed area 130. the buffer block 460 or at least a part of the buffer block 460 may be implemented in the third exposed area 140. the control logic block 470 may include a row voltage driver 471 and a common source line (csl) driver 472. the row voltage driver 471 may generate various voltages to be applied to the string selection lines ssl, the word lines wl, and the ground selection lines gsl and may provide the generated voltages to the first row decoder block 420 and the second row decoder block 430. the common source line driver 472 may generate various common source line voltages vcsl to be applied to the common source plate 210 and may provide the generated common source line voltages vcsl to first common source line switches 423 and second common source line switches 433. fig. 10 is a block diagram further illustrating the first row decoder block 420 and the second row decoder block 430 of fig. 9 in relation to a memory block blki. referring to figs. 1 , 2 , 3 , 8 , 9 , and 10 , the first row decoder block 420 may include a transistor array 421, a block decoder 424, and a decoder 425. the transistor array 421 may include a plurality of transistors. transistors, which are connected with the ground selection line gsl, the first to eighth word lines wl1 to wl8, and the string selection lines ssl11, ssl12, ssl21, and ssl22, from among the plurality of transistors may be pass transistors 422. the pass transistors 422 may be turned on/off simultaneously under control of the block decoder 424. each of the pass transistors 422 may transfer a voltage output from the decoder 425 to the memory block blki through a corresponding line. a transistor(s), which transfers a voltage to the common source plate 210, from among the plurality of transistors is the common source line switch 423. 
the common source line switch 423 may be turned on/off under control of the control logic block 470. the common source line switch 423 may apply the common source line voltages vcsl received from the common source line driver 472 of the control logic block 470 to the common source plate 210. the block decoder 424 may receive a block address, which indicates the memory block blki, of the row address ra from the buffer block 460. when the block address indicates the memory block blki, the block decoder 424 may turn on the pass transistors 422. when the block address does not indicate the memory block blki, the block decoder 424 may turn off the pass transistors 422. the decoder 425 may receive a ground selection line voltage, a word line selection voltage, word line non-selection voltages, string selection voltages, and string non-selection voltages from the row voltage driver 471 of the control logic block 470. also, the decoder 425 may receive the remaining address of the row address ra other than the block address from the buffer block 460. the decoder 425 may apply the ground selection line voltage to the pass transistors 422 connected with the ground selection line gsl. the decoder 425 may apply a selection voltage to the pass transistor 422 connected with a word line indicated by the remaining address from among the word lines wl1 to wl8 and may apply non-selection voltages to the pass transistors 422 connected with the remaining word lines. the decoder 425 may apply the string selection voltages to the pass transistors 422 connected with string selection lines indicated by the remaining address from among the string selection lines ssl11, ssl12, ssl21, and ssl22. the decoder 425 may apply the string non-selection voltages to the pass transistors 422 connected with string selection lines not indicated by the remaining address from among the string selection lines ssl11, ssl12, ssl21, and ssl22. 
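The block-decoder and pass-transistor behavior described above can be sketched as a simplified functional model: when the block address matches, the pass transistors are enabled and each line receives the voltage chosen by the decoder. This is only a sketch, not the patented circuit, and every voltage value below is a placeholder assumption.

```python
def drive_block(block_addr: int, target_block: int, selected_wl: int,
                v_sel: float = 0.0, v_unsel: float = 6.0,
                v_gsl: float = 4.0, num_wl: int = 8):
    # If the block address does not indicate this memory block, the
    # block decoder keeps the pass transistors off and no line is driven.
    if block_addr != target_block:
        return None
    # Otherwise all pass transistors are turned on: the ground selection
    # line gets its voltage, the selected word line gets the selection
    # voltage, and the remaining word lines get non-selection voltages.
    lines = {"GSL": v_gsl}
    for i in range(1, num_wl + 1):
        lines[f"WL{i}"] = v_sel if i == selected_wl else v_unsel
    return lines
```

Usage: `drive_block(3, 3, 2)` returns a mapping with `WL2` at the selection voltage and the other word lines at the non-selection voltage, while `drive_block(1, 2, 3)` returns `None` because the block is not selected.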
the second row decoder block 430 may include a transistor array 431, a block decoder 434, and a decoder 435. the transistor array 431 may include pass transistors 432 and the common source line switch 433. a structure and an operation of the second row decoder block 430 may be the same as the structure and the operation of the first row decoder block 420. the first row decoder block 420 and the second row decoder block 430 may be disposed in the first exposed area 120 and the second exposed area 130 , respectively. for another example, at least a portion of the first row decoder block 420 and the second row decoder block 430, for example, the transistor arrays 421 and 431 may be disposed in the first exposed area 120 and the second exposed area 130 , respectively. the common source line driver 472 that applies the common source line voltages vcsl to the common source plate 210 through the common source line switches 423 and 433 may be disposed in the buried area 110. as the common source line switches 423 and 433 are densely disposed together with the pass transistors 422 and 432, the common source line switches 423 and 433 may be described as a portion of the first row decoder block 420 and the second row decoder block 430. however, the common source line switches 423 and 433 may be understood as components independent of the first row decoder block 420 and the second row decoder block 430. fig. 11 is a block diagram illustrating a nonvolatile memory device 500 according to another embodiment of the inventive concept. referring to figs. 1 , 2 , 3 , and 11 , the nonvolatile memory device 500 includes a memory cell array 510, a first row decoder block 520, a second row decoder block 530, a page buffer block 540, a data input and output block 550, a buffer block 560, and a control logic block 570. the first row decoder block 520 may include common source line switches 523 and a common source line driver 526. 
the second row decoder block 530 may include common source line switches 533 and a common source line driver 536. the control logic block 570 may include a row voltage driver 571. a structure and an operation of the nonvolatile memory device 500 may be the same as the structure and the operation of the nonvolatile memory device 400 of fig. 9 , except that the common source line drivers 526 and 536 are respectively disposed in the first and second row decoder blocks 520 and 530. fig. 12 illustrates some of the components of the first row decoder block 520 and the second row decoder block 530 corresponding to one memory block blki. referring to figs. 1 , 2 , 3 , 8 , 11 , and 12 , the first row decoder block 520 may include a transistor array 521, a block decoder 524, a decoder 525, and a common source line driver 526. the transistor array 521 may include pass transistors 522 and the common source line switch 523. a structure and an operation of the first row decoder block 520 may be the same as the structure and the operation of the first row decoder block 420 described with reference to fig. 10 , except that the common source line driver 526 is added. the second row decoder block 530 may include a transistor array 531, a block decoder 534, a decoder 535, and the common source line driver 536. the transistor array 531 may include pass transistors 532 and the common source line switch 533. a structure and an operation of the second row decoder block 530 may be the same as the structure and the operation of the first row decoder block 520. the first row decoder block 520 and the second row decoder block 530 may be disposed in the first exposed area 120 and the second exposed area 130 , respectively. in particular, the transistor arrays 521 and 531 and the common source line drivers 526 and 536 may be disposed in the first exposed area 120 and the second exposed area 130 , respectively. 
in the above embodiments, elements of the illustrated embodiments are described using differentiating terms, such as "first", "second", etc. those skilled in the art will recognize that such terms are used for descriptive clarity and not for specific enumeration. in the above embodiments, certain elements are described as "blocks." such blocks may be variously implemented as hardware devices (e.g., integrated circuits, application specific ics (asics), field programmable gate arrays (fpga), and complex programmable logic devices (cpld)), firmware driven by hardware devices, software (e.g., application(s)), and/or combinations of hardware and software. various blocks may include circuits implemented with semiconductor elements in an integrated circuit or circuits registered as intellectual property (ip). according to embodiments of the inventive concept, a common source plate functioning as a common source line may be biased using certain vias disposed outside a channel area. as such, it is possible to prevent the voltages being transmitted by the common source plate from adversely affecting signal performance in the channel area. accordingly, nonvolatile memory devices according to the inventive concept provide improved reliability. some arrangements are defined in the clauses e9 - e20 below: e9. a nonvolatile memory device comprising: a memory cell area including a cell structure and a common source plate, and mounted on a peripheral circuit area including a buried area covered by the memory cell area and an exposed area uncovered by the memory cell area; a first peripheral circuit (pc) via extending from the exposed area; and a common source (cs) via extending from the common source plate, wherein the first pc via and the cs via are connected by a cs wire disposed outside the cell structure and providing a bias voltage to the common source plate. e10. 
the nonvolatile memory device of e9, further comprising: a connection element disposed in an active portion of the peripheral circuit area including a first junction connected to a second pc via that connects a high voltage and a second junction that connects the first pc via. e11. the nonvolatile memory device of e9 or e10, further comprising: a wiring area above the exposed area; and a plurality of wires, including the cs wire, respectively extending from pc vias extending from the exposed area to connect memory cell (mc) vias extending from the cell structure. e12. the nonvolatile memory device of any one of e9 to e11, wherein the cell structure comprises an alternating, vertically stacked arrangement of conductive layers and insulating layers. e13. the nonvolatile memory device of any one of e9 to e12, wherein the alternating, vertically stacked arrangement of conductive layers and insulating layers is arranged in a stair-stepped structure. e14. the nonvolatile memory device of any one of e9 to e13, wherein the cell structure comprises an alternating, vertically stacked arrangement of conductive layers and insulating layers that form a stair-stepped structure, the conductive layers include a third layer disposed on a second layer including an exposed portion of the second layer uncovered by the third layer, and the second layer is disposed on a first layer including an exposed portion of the first layer uncovered by the second layer, and the plurality of wires further includes a first wire extending from a first pc via to connect a first mc via extending from the exposed portion of the first layer, and a second wire extending from a second pc via to connect a second mc via extending from the exposed portion of the second layer. e15. 
a nonvolatile memory device comprising: a memory cell array including a plurality of cell strings disposed on a common source plate; and a peripheral circuit having an upper surface on which the memory cell array is mounted, wherein the peripheral circuit includes: a first row decoder having a first upper surface uncovered by the memory cell array, and configured to bias the plurality of cell strings through first wires extending upward from the first upper surface; a second row decoder having a second upper surface uncovered by the memory cell array, and configured to bias the plurality of cell strings through second wires extending upward from the second upper surface; first common source switches having a third upper surface uncovered by the memory cell array, and configured to bias the common source plate through third wires extending upward from the third upper surface; and second common source switches having a fourth upper surface uncovered by the memory cell array, and configured to bias the common source plate through fourth wires extending upward from the fourth upper surface. e16. the nonvolatile memory device of e15, further comprising: a page buffer having a fifth upper surface covered by the memory cell array and connected to the plurality of cell strings through vias penetrating the memory cell array. e17. the nonvolatile memory device of e15 or e16, further comprising: control logic having a fifth upper surface covered by the memory cell array, and configured to control the first row decoder, the second row decoder, the first common source switches, and the second common source switches. e18. the nonvolatile memory device of any one of e15 to e17, further comprising: a buffer having a fifth upper surface uncovered by the memory cell array, and configured to communicate with an external device. e19. 
the nonvolatile memory device of any one of e15 to e18, wherein the memory cell array is a three-dimensional memory cell array including a plurality of vertical channels extending downwardly through a cell structure including the plurality of cell strings. e20. the nonvolatile memory device of any one of e15 to e19, wherein each one of the plurality of vertical channels connects a vertical stack of cell transistors including at least one of memory cell transistors, dummy memory cell transistors, and selection transistors. while the inventive concept has been described with reference to example embodiments thereof, it will be apparent to those of ordinary skill in the art that various changes and modifications may be made thereto without departing from the scope of the inventive concept as set forth in the following claims.
125-285-483-518-706
US
[ "US" ]
C08K3/34,C08L75/00
2009-12-09T00:00:00
2009
[ "C08" ]
method of making an atomizing agent
a method of making an atomizing agent is described hereinafter. firstly, make a nanometer silicon dioxide gel by means of a sol-gel method. next, mix a proper quantity of nanometer silicon dioxide gel and organic solvent together to form a nanometer silicon dioxide solution. lastly, add a proper quantity of water-based polyurethane resin or de-ionized water into the nanometer silicon dioxide solution so as to obtain the atomizing agent by means of being stirred for 1 hour and then aging for 24 hours under a room temperature.
1 . a method of making an atomizing agent, comprising the steps of: making a nanometer silicon dioxide gel by means of a sol-gel method; mixing the nanometer silicon dioxide gel and an organic solvent together to form a nanometer silicon dioxide solution; and adding water-based polyurethane resin or de-ionized water into the nanometer silicon dioxide solution so as to obtain the atomizing agent by means of being stirred for 1 hour and then aging for 24 hours under a room temperature. 2 . the method as claimed in claim 1 , wherein a particle diameter of the nanometer silicon dioxide gel is 8-10 nm. 3 . the method as claimed in claim 1 , wherein the weight ratio of the nanometer silicon dioxide gel to the organic solvent is 1:1. 4 . the method as claimed in claim 1 , wherein the organic solvent is any one of or a mixture of an ethanol solvent and an isopropanol solvent. 5 . the method as claimed in claim 1 , wherein the weight ratio of polyurethane resin in the water-based polyurethane resin to the nanometer silicon dioxide solution is 5-25:100. 6 . the method as claimed in claim 5 , wherein the polyurethane resin in the water-based polyurethane resin has a 10% concentration. 7 . the method as claimed in claim 1 , wherein the weight ratio of the de-ionized water to the nanometer silicon dioxide solution is 5-32.5:100.
background of the invention 1. field of the invention the present invention generally relates to a chemical agent, and more particularly to a method of making an atomizing agent. 2. the related art at present, lots of products are made of engineering plastic materials. there are different types of surface treatment technologies that are used in order to enhance the surfaces of the plastic products. there are two methods for producing the plastic product with atomized surfaces. one conventional method is executed by means of regulating process temperature and molds during molding the plastic product to form the atomized surfaces on the plastic product. however, it results in a complicated process and a low productivity. another conventional method is executed by means of incorporating atomizing agents into the plastic materials in the manufacturing processes to finally achieve the plastic product with the atomized surfaces, wherein the atomizing agent is mainly made of silica powder having a 5˜15 um diameter. however, a greater quantity of atomizing agents is embedded in the plastic product without any effects, thus resulting in materials waste and increasing manufacture cost. furthermore, an excess of silica powder in the atomizing agent will cause the difficulty of forming the atomized surfaces on the plastic product, and the atomized surfaces formed by the atomizing agent further have a high surface glossiness. summary of the invention an object of the present invention is to provide a method of making an atomizing agent. the method is described hereinafter. firstly, make a nanometer silicon dioxide gel by means of a sol-gel method. next, mix a proper quantity of nanometer silicon dioxide gel and organic solvent together to form a nanometer silicon dioxide solution. 
lastly, add a proper quantity of water-based polyurethane resin or de-ionized water into the nanometer silicon dioxide solution so as to obtain the atomizing agent by means of being stirred for 1 hour and then aging for 24 hours under a room temperature. as described above, the above-mentioned method according to the present invention can make an atomizing agent which can well atomize a surface of a substrate to form an atomized film with a lower surface glossiness. therefore, it can achieve a simple process, a high productivity and a low manufacture cost. detailed description of the preferred embodiment a method of making an atomizing agent according to the present invention is described hereinafter. firstly, a nanometer silicon dioxide gel with an about 8-10 nm diameter is made by means of a sol-gel method. next, a proper quantity of nanometer silicon dioxide gel and organic solvent are mixed together to form a nanometer silicon dioxide solution, wherein the weight ratio of the nanometer silicon dioxide gel to the organic solvent is 1:1, and the organic solvent may be any one of or a mixture of an ethanol solvent and an isopropanol solvent. lastly, under a room temperature, add a proper quantity of water-based polyurethane resin or de-ionized water into the nanometer silicon dioxide solution so as to obtain the atomizing agent by means of being stirred for 1 hour and then aging for 24 hours, wherein the weight ratio of the polyurethane resin in the water-based polyurethane resin to the nanometer silicon dioxide solution is 5-25:100, the weight ratio of the de-ionized water to the nanometer silicon dioxide solution is 5-32.5:100, and the polyurethane resin in the water-based polyurethane resin has a 10% concentration. a method of atomizing a surface of a substrate with the foregoing atomizing agent is described hereinafter. firstly, the atomizing agent is coated onto the surface of the substrate evenly by way of spraying, dipping or roll-to-rolling. 
then the substrate coated with the atomizing agent is subjected to a heating environment of 50-100 degrees centigrade for 10-30 minutes so as to form an atomized film on the surface of the substrate. one non-limiting embodiment is introduced for describing the above-mentioned method of making the atomizing agent in detail. in this non-limiting embodiment, 10% concentration of water-based polyurethane resin is added into the nanometer silicon dioxide solution so as to obtain the atomizing agent by means of being stirred for 1 hour and then aging for 24 hours under a room temperature, wherein the weight ratio of the polyurethane resin in the water-based polyurethane resin to the nanometer silicon dioxide solution is 1:10, namely slowly adding 1000 g 10% concentration of water-based polyurethane resin into 1000 g nanometer silicon dioxide solution during stirring the nanometer silicon dioxide solution. then the substrate is processed by the above-mentioned method for forming the atomized film on the surface thereof, wherein the substrate is made of plastic materials. next, a surface glossiness test is performed on the atomized film of the substrate using a gloss meter with a 60° measuring angle. as a result, the measured value is equal to 9.3. however, the surface glossiness of an atomized surface of a plastic product in the prior art is measured by the gloss meter with the 60° measuring angle to equal 13.3. another non-limiting embodiment is introduced for describing the above-mentioned method of making the atomizing agent in detail again. 
in this non-limiting embodiment, a proper quantity of de-ionized water is added into the nanometer silicon dioxide solution so as to obtain the atomizing agent by means of being stirred for 1 hour and then aging for 24 hours under a room temperature, wherein the weight ratio of the de-ionized water to the nanometer silicon dioxide solution is 1:10, namely slowly adding 100 g de-ionized water into 1000 g nanometer silicon dioxide solution during stirring the nanometer silicon dioxide solution. then the substrate is processed by the above-mentioned method for forming the atomized film on the surface thereof, wherein the substrate is made of plastic materials. next, a surface glossiness test is performed on the atomized film of the substrate using a gloss meter with a 60° measuring angle. as a result, the measured value is equal to 10.1. however, the surface glossiness of the atomized surface of the plastic product in the prior art is measured by the gloss meter with the 60° measuring angle to equal 13.3. as described above, the atomizing agent made by the above-mentioned method of the present invention can well atomize the surface of the substrate to form the atomized film with a lower surface glossiness. therefore, it can achieve a simple process, a high productivity and a low manufacture cost.
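The weight-ratio arithmetic used in the two embodiments above can be checked with a short helper. The function names are illustrative; the default ratios and the 10% polyurethane concentration come from the embodiments described in the text.

```python
def resin_to_add(solution_g: float, pu_to_solution: float = 0.10,
                 pu_concentration: float = 0.10) -> float:
    # Grams of water-based polyurethane resin (at the given polyurethane
    # concentration) to add so that the weight ratio of polyurethane to
    # nanometer silicon dioxide solution matches the target ratio.
    return solution_g * pu_to_solution / pu_concentration

def water_to_add(solution_g: float, water_to_solution: float = 0.10) -> float:
    # Grams of de-ionized water needed for the given water : solution ratio.
    return solution_g * water_to_solution
```

For 1000 g of solution at the 1:10 ratio, `resin_to_add(1000)` gives 1000 g of 10% resin (containing 100 g of polyurethane), matching the first embodiment, and `water_to_add(1000)` gives 100 g, matching the second.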
126-921-602-219-76X
US
[ "US" ]
G06K7/00,G06K7/10,G06K17/00,G07G1/00,G08B13/24
1994-03-16T00:00:00
1994
[ "G06", "G07", "G08" ]
time division multiplexed batch mode item identification system
apparatus and method are disclosed for communicating between a central location and a plurality of identification tags or labels located in a container or in a space without separately passing each tag or labeled product through a read station. the disclosure includes use of hashing to reduce the amount of time needed to read the possible tags in the space. reading is accomplished using radio communication with a combination of broadcast and time division multiplex architectures.
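The reader-side read cycle summarized in the abstract (broadcast a hashing base number, let each tag hash itself into a time slot, acknowledge slots holding exactly one transmission, and repeat with a fresh base until no collisions remain) can be sketched as follows. The patent does not specify the hashing algorithm, slot count, or round limit; SHA-256, 32 slots, and the bounded round count here are illustrative assumptions.

```python
import hashlib
import random

def read_all_tags(tag_ids, num_slots=32, max_rounds=64, seed=0):
    rng = random.Random(seed)
    pending = set(tag_ids)   # tags whose identification is not yet acknowledged
    acknowledged = []
    for _ in range(max_rounds):
        if not pending:
            break
        base = rng.getrandbits(32)           # broadcast hashing base number
        slots = {}
        for tag in pending:                  # each tag hashes into a slot
            digest = hashlib.sha256(f"{base}:{tag}".encode()).digest()
            slot = int.from_bytes(digest[:4], "big") % num_slots
            slots.setdefault(slot, []).append(tag)
        for occupants in slots.values():
            if len(occupants) == 1:          # clear signal: exactly one tag
                acknowledged.append(occupants[0])
                pending.discard(occupants[0])
            # two or more occupants collided; those tags retry next round
    return acknowledged
```

Each tag is read exactly once even though all tags share one channel, which is the point of combining a broadcast downlink with time-division-multiplexed tag responses.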
1. a method of reading a plurality of identification tags in a read volume, each tag having a chip, each chip having data storage, logic and communication circuits, said storage containing tag identification information, comprising the steps of: 1) broadcasting a start command to all of said tags, said start command indicating to said tags that their tag identification information needs to be transmitted; 2) broadcasting a hashing base number to all of said tags from which each tag, to which it has been indicated that their tag identification information needs to be transmitted, uses the hashing base number, said tag identification information and a hashing algorithm to determine in which of a plurality of time slots to transmit said tag identification information; 3) receiving null and tag identification information transmitted by said tags in the plurality of time slots; 4) detecting a collision state when a plurality of tags transmit tag identification information in a single time slot; 5) detecting a clear signal state when tag identification information from only one tag is received in a time slot; 6) transmitting clear signal acknowledgment information to those tags which transmitted tag identification information in a time slot in which no other tag transmitted, said acknowledgment information indicating to said those tags that their tag identification information need not be retransmitted; 7) repeating steps 2, 3, 4, 5, and 6, using a newly generated hashing base number as the hashing base number for each iteration, until a collision state is not detected; and 8) processing the received tag identification information. 2. the method of claim 1 wherein step 8 includes the steps of: 1) using the tag identification information to identify the product to which the related tag was attached by accessing storage to retrieve product information and price information; 2) tabulating total purchases; 3) accepting payment; and 4) adjusting inventory records. 3. 
the method of claim 2 wherein said storage is physically located on said related tag and step 1 of claim 2 further comprises the steps of: 1) transmitting a read storage command to said related tag; and 2) receiving from said tag, product information and price related information of the product to which said related tag is attached. 4. the method of claim 1 wherein step 6 further comprises the steps of: 1) transmitting a receive acknowledgement command to all of said tags, said receive acknowledgement command defining the start of a new set of time slots; and 2) transmitting tag identification information to a tag for which a clear signal was detected in a unique time slot of said new set of time slots, said unique time slot being the time slot in which later communication may be conducted with said tag. 5. the method of claim 4 wherein said processing step 8 of claim 1 further comprises: 1) broadcasting a group read product data command to all of said tags; 2) receiving product information and price information from said tag in said unique time slot described in step 2 of claim 4; 3) receiving product information and price information from others of said tags in other time slots of said new set of time slots; 4) tabulating total purchases; 5) accepting payment; and 6) adjusting inventory records. 6. the method of conveying data from a tag, said tag having data storage, logic and communication circuits, said storage containing tag identification information, comprising the steps of: 1) receiving a start command; 2) receiving a hashing base number; 3) hashing said hashing base number with said tag's tag identification information to identify a time slot; 4) transmitting said tag identification information in said identified time slot; and 5) repeating steps 2, 3 and 4 until a clear signal acknowledgement containing said tag identification information is received. 7. 
the method of claim 6 further comprising the steps of: 1) receiving a read storage command; 2) transmitting from said tag, product information and price information of the product to which said tag is attached. 8. apparatus for reading a plurality of identification tags, each tag having data storage, logic and communication circuits, said apparatus comprising: communication means for transmitting signals to and receiving signals from said plurality of tags; processor means connected to said communication means and to a storage means; said processor executing programs stored in said storage means and processing data including data stored in said storage means; said programs including a broadcast start command means for broadcasting a start command to all of said tags; said programs including first programmed means for broadcasting a hashing base number to all of said tags; said programs including second programmed means for receiving null and tag data in a plurality of time slots; said programs including third programmed means for detecting a collision state when a plurality of tags transmit tag data in a single time slot; said programs including fourth programmed means for detecting a clear signal state when tag data from only one tag is received in a time slot; said programs including fifth programmed means for transmitting clear signal acknowledgement information to those tags which transmitted tag data in a time slot in which no other tag transmitted, said acknowledgement information indicating to said those tags that their tag data need not be retransmitted; said programs including a programmed control means for reactivating said second, third, fourth and fifth programmed means until a collision state is not detected; said programs including programmed transaction processing means for processing said received tag data. 9. 
the apparatus of claim 8 wherein said programmed transaction means further comprises: storage access means responsive to said tag data to identify the product to which the related tag was attached by accessing storage to retrieve product information and price information; summing means for tabulating total purchases; means for accepting payment; and means for updating inventory records.

10. the apparatus of claim 9 wherein said storage is physically located on said related tag and said storage access means further comprises: means for controlling said communication means to transmit a read storage command to said related tag; and means for controlling said communication means to receive from said tag, product information and price information of the product to which said related tag is attached.

11. the apparatus of claim 8 wherein said fifth programmed means further comprises: means for controlling said communication means to transmit a receive acknowledgement command to all of said tags, said receive acknowledgement command defining the start of a new set of time slots; and means for controlling said communication means to transmit tag identification information to a tag for which a clear signal was detected in a unique time slot of said new set of time slots, said unique time slot being the time slot in which later communication may be conducted with said tag.

12.
the apparatus of claim 11 wherein said programmed transaction processing means further comprises: programmed means for controlling said communication means to broadcast a read product data command to all of said tags; programmed means for controlling said communication means to receive product information and price information from said tag in said unique time slot; programmed means for controlling said communication means to receive product information and price information from others of said tags in other time slots of said new set of time slots; programmed means for controlling said processor means to tabulate total purchases; programmed means for controlling said processor means to accept payment; and programmed means for controlling said processor means to adjust inventory records.

13. an identification tag comprising: data storage; logic connected to said data storage; communication circuits connected to said logic; said logic including first means for responding to a start command; said logic including second means for responding to a hashing base number; said logic including third means for calculating a time slot using said hashing base number, a hashing algorithm and data from said storage means; said logic including fourth means for controlling said communication circuits to transmit data in said time slot; said logic including fifth means for receiving an acknowledgement, said acknowledgement indicating to said tag that its data need not be retransmitted; and said logic including control means for reactivating said second, third, fourth and fifth means until an acknowledgement is received.

14. the apparatus of claim 13 wherein said data storage means includes storage for product information and price information, said control means being responsive to a read command to control said communication means to transmit said information.

15.
the apparatus of claim 13 wherein said fifth means further comprises: means for controlling said communication means to receive a receive acknowledgement command, said receive acknowledgement command defining the start of a new set of time slots; and said logic further comprises sixth means responsive to said acknowledgement command to receive data in a unique time slot of said new set of time slots, said unique time slot being the time slot in which later communication may be conducted with said tag.

16. the method of claim 1 wherein in step 6 said acknowledgement information contains tag identification information for those tags which transmitted tag identification information in a time slot in which no other tag transmitted.

17. the apparatus of claim 8 wherein said tag data includes tag identification information and said clear signal information transmitted by said fifth programmed means includes tag identification data for those tags which transmitted tag data in a time slot in which no other tag transmitted.

18. the method of claim 1 wherein in step 1 the start command provides identification of the hashing algorithm that is used to determine the time slot in which to transmit.

19.
a method of reading a plurality of identification tags at a point of sale checkout counter, each tag associated with an item to be purchased, each tag having a chip, each chip having data storage, logic and communication circuits, said storage containing tag identification information, comprising the steps of: 1) broadcasting a start command and a hashing base number to all of said tags, said start command indicating to said tags that their tag identification information needs to be transmitted; 2) hashing, in each tag to which it has been indicated that their tag identification information is to be transmitted, using the hashing base number and the tag identification information to determine which of a plurality of time slots to transmit said tag identification information; 3) receiving null and tag identification information transmitted by said tags in the plurality of time slots; 4) detecting a collision state when a plurality of tags transmit tag identification information in a single time slot; 5) detecting a clear signal state when tag identification information from only one tag is received in a time slot; 6) transmitting clear signal acknowledgment information to those tags which transmitted tag identification information in a time slot in which no other tag transmitted, said acknowledgment information indicating to said those tags that their tag identification information need not be transmitted; 7) repeating steps 2, 3, 4, 5, and 6 with a newly generated hashing base number as the hashing base number in each iteration, until a collision state is not detected; and 8) processing the received tag identification information.

20. the method of claim 19 wherein said newly generated hashing base number is generated from said broadcast hashing base number in each of the tags.
background of the invention

1. field of the invention

this invention relates to reading a plurality of identification tags or labels in a container or space without separately passing each tag through a read station. this invention has particularly useful application in conjunction with cash registers and cash recorders, particularly those which determine the price by scanning or in some way interrogating a coded label on products such as consumer goods and which maintain inventory records. this invention also relates to reading such information using invisible radiant energy in the form of a radio transponder that is realized with integrated circuits fabricated on a monolithic semiconductor chip.

2. prior art

it is known in the art of food, clothing, and consumer goods distribution to provide a point of sale or checkout station near the exit of a store in order to allow the identification tags or labels on each of the items of merchandise selected by customers to be read, prices tallied, payment made and inventory counts adjusted. at such checkout stations, each item must be individually removed from a shopping cart or basket and moved past a reader in order to read the stock keeping unit information in the form of the universal product code (upc) from the label. u.s. pat. no. 4,862,160 teaches an inventory data acquisition system where each item of inventory has a tag containing a small passive resonant transponder in the form of a printed circuit. a computerized transceiver mounted on a wheeled cart is moved through the aisles, and the transceiver generates signals causing tags which resonate at a unique pair of frequencies to re-radiate simultaneously a third frequency to which the receiver portion of the transceiver on the cart is tuned. the amplitude of the third frequency detected by the receiver portion is a function of distance and the number of tagged products present on the shelf or rack.
with the system of this reference, the accuracy of the number of items is thereby compromised by distance and antenna pattern. furthermore, the transceiver will have to generate all possible combinations of frequencies in order to interrogate all possible items of inventory such as would be required when the inventory is removed from the shelf by a customer and placed in a shopping basket. unlike inventory data collection, the items in a shopping basket cannot be predicted at a point of sale. because a large retail enterprise may have millions of different items for sale with millions of different stock keeping units, it is not practical to serially interrogate each possible stock keeping unit to determine whether it is present. u.s. pat. no. 3,832,530 relates to apparatus for identifying articles such as a suitcase or a mailbag. the objects are moved one at a time through an electromagnetic field. circuits on the label are powered by the electromagnetic field to change the states of a chain of flip flops on the label in a predetermined fashion thereby absorbing electromagnetic energy in a predetermined pattern. the pattern of electromagnetic energy absorption is sensed and then decoded. the chain of flip flops may be set for different codes. the teachings of this reference do not handle multiple items to be sensed in the read region at the same time and do not allow one or more of a plurality of items to be individually addressed and requested to store data. further, this prior art technique requires a magnetic shielded box which is expensive and inconvenient. u.s. pat. no. 4,471,345 describes a tag and portal system for monitoring the whereabouts of, for example, people wearing the tags. up to six tags may be simultaneously interrogated as their holders pass through a doorway. the tags respond to interrogation signals generated by the portal and their response occurs after a pseudorandom delay. 
the pseudorandom delay is used to avoid data collisions by the six responding tags.

summary of the invention

an advantage of the invention described herein is that the apparatus and method of the invention will read the information in a large number of identification tags that are in close proximity to one another, in an efficient manner. another advantage is that the tags do not require a line-of-sight communication link to the reader. a further advantage is that the tags do not need to be presented to the reader individually, so handling of the items to enable reading is essentially eliminated. in the preferred embodiment, prior knowledge of the possible identities of the tags to be read is not required. for example, the identification tags on each bottle of spice in a shipping crate can be interrogated individually using this invention. likewise, all the items in a shopping cart can be identified and individually interrogated without removing them from the cart. other applications, including reading tags on wildlife in a feeding area, reading debit cards, and reading detailed product data, as well as further applications which have not been identified, are possible. these and other advantages of the invention are accomplished using the tags and reading devices which communicate with each other using radiation. the communication protocol architecture includes a combination of broadcast and time division multiplexing that utilizes a hashing algorithm to select the transmission time slot. once the reader learns the identity of the tags in the read volume, it can address communications to the individual tags. each tag contains non-volatile memory. data can be added or modified by a reader at any time during the life of the item the tag is attached to, making the tags a form of a data base. tags are powered by a battery or by radiation from the reader or by any other convenient means. in the preferred embodiment, tags are manufactured in the form of semiconductor chips.
large numbers of identification tags will be manufactured on an automated assembly line, and the tags are later personalized by storing unique identification data in each tag. initial personalization will be done at either the manufacturing site or at a site where they are attached to the item to be identified. in one application of the invention, the reader requests responses from only a fraction of the tags in the read volume. this is done by a command from the reader that requires certain fields in the tags' data base to contain particular data in order for the tag to respond to the reader. the reader, for example, could request a response from only the black ski gloves that are present. the data fields contained in the response would also be controlled by the reader; they may or may not contain the fields used to qualify for responding. multiple items with the same universal product code could be individually identified using a serial number for each item. in this embodiment, multiple identical (except for serial numbers) packages of bread will be identified by different stock keeping units comprising the upc and serial numbers.

brief description of the drawings

fig. 1 is a perspective view of a shopping cart, the contents of which are being read in accordance with the invention;
fig. 2 is a block diagram of the circuits of the reader of fig. 1;
fig. 3 is a perspective view of a product identification chip;
fig. 4 is a timing chart showing the signals being read from a plurality of tags;
fig. 5 is a diagram showing how the tags may be programmed with product identity;
fig. 6 is a flow diagram of the operation of a reader; and
fig. 7 is a flow diagram of the operation of a tag.

description of the preferred embodiment

features of the invention are accomplished by the reader first broadcasting a set of parameters to all the tags in the read volume. the broadcast initiates a series of time slots with which the reader and the tags get synchronized.
each tag uses the broadcast parameters, their unique identity, and/or some or all the data they contain to calculate a time slot in which it will communicate with the reader. the parameters transmitted from the reader to the tags can be, but are not limited to, a hashing base number (which is the same as the number of time slots), a data field selector, a hashing algorithm identifier, and a command. the individual tag's time slot selection calculation is done based on a hashing algorithm. the length and number of time slots are either predefined at the time of tag manufacture or are defined in the information transmitted from the reader. the time slot sequence starts at the end of the reader's broadcast. the number of time slots is selected to be greater than the number of tags anticipated to be in the read volume at one time. the reader transmits an individual acknowledgement (ack) to all the tags who successfully communicated with the reader. the ack conditions the tag in a manner that removes it from participation in subsequent read operations until reenablement. in the event that more than one tag chooses the same time slot, a collision occurs and no tag's communication successfully reaches the reader. no ack will be transmitted from the reader to the colliding tags. another read cycle will be initiated by the reader with another hashing base number used by the tags, resulting in the tags using different time slots. this process is repeated until all the tags' communication is successful. (there are no more collisions and a read cycle reads no new tags.) the key elements for operating on identifying data are the data fields, data encoding, the hashing algorithm, and the hashing parameters. for maximum flexibility of application to various business operations, maintenance of multiple data fields and selection of the field to be used as the identifier is desirable. 
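The time-slot selection described above can be sketched in a few lines. This is an illustrative sketch, not the patent's implementation: the dictionary layout of a tag's data fields and the plain modulo used as the hashing algorithm are assumptions, since the patent lets the reader pick both the data field and the hashing algorithm.

```python
# Sketch of tag-side time-slot selection: the reader broadcasts a field
# selector and a hashing base number (equal to the number of time slots),
# and each tag hashes the selected field to pick its transmission slot.

def select_slot(tag_fields: dict, field_selector: str, hashing_base: int) -> int:
    """Hash the selected data field into one of `hashing_base` time slots."""
    value = tag_fields[field_selector]   # e.g. the item serial number
    return value % hashing_base          # remainder (hashed number) = slot index

tag = {"upc": 123456789, "serial": 982451653}
print(select_slot(tag, "serial", 30000))
```

Two tags collide only when their selected fields leave the same remainder, which is what the repeated read cycles below are designed to resolve.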
while the identification of an item type and item serial number is the focus of the sales checkout operation used throughout this document for illustration, other fields of data may be transmitted. for example, data such as a purchase order number and purchase order item number might be the fields of interest at a receiving dock. during transit, way bill numbers might be selected for controlling responses. efficient hashing operation during read cycles requires matching the algorithm to the encoding of information and selecting the proper randomizing divisor for the sample universe. an algorithm designed for a binary field does not necessarily efficiently use available bandwidth when applied to an ascii encoded field, for example. control of the hashing base number allows trade-offs among read cycle times, the number of potentially stocked items in the store, and the number of items in a read batch. therefore, the preferred embodiment allows for flexibility in these areas. fig. 1 shows a cart 11 packed with four different bags full of items. each item has a tag 10 affixed to it. these tags are read by reader 21 via radio communication using antennas 23, 24, 25, 26, 27 and 29. the antennas provide a read volume within which the cart is placed for reading. by reading the entire cart full of items, the need to remove each item from the cart and pass it over a scanner or through a read volume has been eliminated. referring now to fig. 2, a block diagram showing the electronic circuits of reader 21 is set forth. a processor 31 has input/output connections 33 which are connected to printers and coin changers, etc. making up a point of sale station. the processor 31 has a memory 35 to store information and programs. a transceiver 37 is also connected to processor 31 to transmit and receive radio frequency signals through antenna 39. antenna 39 comprises the individual antennas 23, 25, 27, 29 and antennas 24 and 26 above and below the cart shown in fig. 1.
the reader does not require the rf rectifier or battery represented by the power block. the block diagram of the tag is the same as fig. 2, but the tag has no i/o lines. referring now to fig. 3, a perspective view of an example of a tag in the form of a monolithic identification chip 10 is shown. the chip 10 has a processor 41 which is connected to a memory 43. memory 43 is either a ferroelectric, a flash, or an eeprom memory. a power supply and transceiver 45 is connected to both the memory 43 and the processor 41 to provide power to both of those circuits as well as to send and receive information from and to the processor 41. an antenna 47 is connected to transceiver 45. the antenna receives energy via radio waves from antenna 39 of the reader in fig. 2, providing an alternative to having a battery power the processor, memory and transceiver electronic circuits, as well as receiving and transmitting information in a predetermined band. each chip 10 shown in fig. 3 transmits and receives using radio frequencies with a range determined by the application, but typically 6-12 feet for check-out at a point of sale. fig. 5 shows a programmer 51 which transmits signals through antenna 53 to individual tags 55 and 57 which are sequentially passed near antenna 53 on a conveyor 59 to receive their initial identity and product data. because tags 57 have not yet been programmed, they cannot use the time division multiplex feature of the invention at this stage but must communicate one at a time. in this embodiment, the tags have already been attached to the item. programmer 51 has a processor similar to the processor shown as 31 in fig. 2, a memory similar to memory 35, a transceiver similar to transceiver 37, and the antenna 53 is similar to antenna 39 but may be a single antenna as shown in fig. 5.
as each tag moves past the programmer 51, it is provided with the upc or other data of the type of item and other data such as run number, date code, and serial number of, for example, a container of cinnamon spice. the programmer then loads each tag 55, 57, etc. with the same upc which identifies the item type. in addition to the code number, the programmer 51 loads each tag with a different or unique serial number. the upc and the serial number are loaded into memory 43 shown in fig. 3 after having been received by antenna 47 and been processed through transceiver 45 and processor 41. since the type of item passing the programmer is known, the loading of the tags 10 is not complex and is done in accordance with the techniques of the prior art known in radio communication and computing.

method of operation of the invention

referring now to figs. 4, 6, and 7, the method of reading a plurality of tags will be described. each tag is powered by a battery or by radio frequency signals transmitted from the antenna 39 of fig. 2, received by antenna 47 of fig. 3 and converted in transceiver power circuits 45 into the direct current voltages needed by each processor 41 and memory 43 in each tag in cart 11. the reader starts to interrogate the tags in the read volume by first transmitting a set of operational parameters called group tag initialization data (gtid). the gtid consists of the hashing algorithm selector, the hashing base number (which matches the number of time slots), the data field selector, and a command. the commands control such things as whether the tag should respond based on one field of data but actually transmit data from a different field. a command can also cause tags that had previously responded to a reader to respond to a new read cycle and/or to reset any status conditions it is storing. the hashing algorithm causes the tags to group themselves by time slots. the tags with the same time slot (hashed number) transmit simultaneously to the reader.
memory 43 of each chip contains several hashing algorithms in computer program form. in extended embodiments of the invention, it is possible to define a command to be used by the reader to enable it to load a new hashing algorithm into any tags that can receive the reader's signal. in the example of this embodiment, the selected algorithm hashes a field in the tag's memory 43. in this embodiment, the tag's identification serial number is divided by a divisor (the hashing base number) to produce a remainder (the hashed number) which corresponds to the communication time slot in which the tag will transmit. note that the hashing base number equals the number of time slots used. hashing algorithms are well known for their ability to distribute data over a defined range of numerical values. if the original data is truly random, then the hashed numbers will be evenly distributed over the range of numbers defined by the hashing base number. however, data is frequently not truly random, such as data stored in ascii format. in order to distribute nonrandom data smoothly, the hashing algorithm must match the nature of the data; thus the necessity to vary the hashing algorithm used. since the entire population of tags existing locally (in a store, for example) far exceeds the hashing base number (number of time slots), some items will wind up with the same hashed number. for example, a store with 3 million items could be using a hashing base number of 30,000. this would result in each hashed number being used by 100 items. only a small subset of the entire population of tags (a filled shopping cart, for example) will be in the read volume at one time. the probability of two tags in the read volume transmitting in the same time slot (having the same hashed number) is low. this probability goes up with the number of tags being interrogated at once.
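The claim that same-slot transmissions are rare can be checked with a back-of-the-envelope calculation: the probability that k tags in the read volume all hash to distinct slots out of n is the classic birthday-problem product, assuming the hashed numbers are uniformly distributed as the text describes. This is an editor's illustration, not a figure from the patent.

```python
# Probability that every one of `num_tags` tags lands in its own time slot
# when slots are chosen uniformly at random from `num_slots` possibilities.

def p_all_clear(num_tags: int, num_slots: int) -> float:
    p = 1.0
    for i in range(num_tags):
        p *= (num_slots - i) / num_slots   # i-th tag must avoid i taken slots
    return p

# a filled cart of 100 tags against the 30,000 slots of the example above
print(round(p_all_clear(100, 30000), 3))
```

Even for a large cart the first cycle resolves the great majority of tags cleanly, which is why only a few follow-up cycles are needed.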
since the probability of a randomly chosen item using a particular hashed number is so low, a large number of tags can be read at one time and the system does not have to deal with excessive cases of tags using the same time slot, which would result in collisions. if a tag's communication is successful in reaching the reader, the reader will then know to address an ack to that tag. when the tag receives the ack from the reader, the tag then knows that its message was received by the reader and that the reader is aware of the tag's presence. the tag will not transmit again until it is specifically requested to do so by the reader. if two tags use the same time slot to transmit, the collision will cause the data to be garbled, and the reader will detect the collision by the absence of a clear transmission. when the reader does not receive a clear transmission it does not send an ack. if a tag does not receive an ack to its transmission, it will retransmit on a succeeding read cycle. the reader will repeat read cycles until no collisions are detected and no new successful communications from tags are received. the invention contemplates any of several methods for transmitting the acknowledgements. three example methods are: ack at the end of each time slot, use the next time slot cycle as an ack cycle, or transmit a group ack at the end of the time slot cycle. the number of time slots required to transmit the acks is much smaller than the total number required to receive the identities of the tags; therefore, it requires less time to transmit acks. the group ack would not be in the format of a time slot cycle, but would be a single packet containing the ids of the tags successfully heard from. before a tag retransmits after a collision on the next time slot cycle, it either receives a new hashing base number or it calculates a new hashed number based on a hashing base number that is 1 less than the hashing base previously used.
this is enough to avoid a large percentage of hashed number collisions. alternatively, the reader can download a new hashing base number into the unread tags. the reader knows which previous hashing numbers resulted in collisions and will use an intelligent algorithm to set the new hashing base number. by the time the second read time slot cycle comes around, most of the tags will have been read and the unread tags will have chosen new, probably non-colliding time slots. the probability of two unread tags choosing the same hashed number again is small. the number of tags read in the second cycle will be smaller than in the first, so the odds of a collision will be drastically smaller. if, despite the low odds, two tags' messages collide during the second read time slot cycle, a third cycle is initiated. of course new hashed numbers will be calculated as described above. this cycle can be repeated, but the probability of another cycle being required drops drastically with each cycle. the number of time slots in a cycle affects how fast the reader must respond to the individual time slots. even if a time slot is allowed for all 30,000 upc label types in a typical store, it would mean that each time slot would be 33 microseconds long if the total cycle time is 1 second. it is not unreasonable to allow a 10 second cycle time, thus allowing 330 microseconds per time slot. with inexpensive processors having instruction times of around 1 microsecond, it is reasonable to have a microprocessor/specialized hardware or multiple microprocessor design perform satisfactorily. the timing chart of fig. 4 shows a number of time slots, one through twenty-eight and beyond, on the x axis. signal level is shown in the y direction. time slots one, three, four and five have no signal in them, indicating null information.
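The repeated read cycles with a decremented hashing base can be exercised in a toy simulation: tags hash their serial numbers into slots, the reader acks every slot holding exactly one tag, and the unread tags retry with the hashing base reduced by one each cycle. The serial numbers, slot count, and the modulo hash are illustrative assumptions.

```python
# Toy simulation of repeated read cycles: slots with exactly one occupant
# are clear signals (the tag is acked and drops out); slots with more than
# one occupant are collisions, and those tags retry on the next cycle.
import random

def read_all(serials, base):
    """Return the number of read cycles until every tag is acknowledged."""
    unread, cycles = set(serials), 0
    while unread:
        cycles += 1
        slots = {}
        for s in unread:
            slots.setdefault(s % base, []).append(s)
        for occupants in slots.values():
            if len(occupants) == 1:          # clear signal: reader acks tag
                unread.discard(occupants[0])
            # len(occupants) > 1 is a collision: no ack, retry next cycle
        base -= 1                            # new hashing base for retries
    return cycles

random.seed(1)
serials = random.sample(range(10**9), 200)
print(read_all(serials, 4096))
```

With a base comfortably larger than the number of tags in the read volume, almost all tags clear in the first cycle and the loop terminates after only a handful of cycles.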
null information means that there are no tags in the shopping cart corresponding to universal product code and serial numbers that would result in a hashed number designating these time slots. time slots two and six do show signals being transmitted from one or more tags to the reader. in time slots two, eleven, fourteen, seventeen, twenty-one, and twenty-four, a clean signal is shown representing the identification information from only one tag. the signals in time slots six, twelve, twenty-three and twenty-eight, on the other hand, show a collision indicating that more than one tag is transmitting in these time slots which will be detected as collisions. attention is now directed to fig. 6 where a flow diagram is set forth showing the method of operation of the reader as it is controlled by programs stored therein. operation of the reader begins at start block 61 which may be initiated by a checkout person pressing a start button or it may be initiated when the shopping cart is moved into the read volume shown in fig. 1. during the start operation, programmed instructions from storage 35 are executed in processor 31 to control the communications modem 37 to transmit a gtid command to any and all tags within the read volume of fig. 1. broadcast transmission of the gtid command performs multiple functions. first, it provides a radio frequency signal that can be received by the antenna 47 of all tags and converted into power within the chips to bring the tags into operation. in addition, the gtid command, when received by each tag acts to initialize the tag and put all tags into synchronization with the reader. this operation is shown in fig. 6 at block 63. the gtid contains the hashing base number, which is used by the tag in combination with the specified data fields of the tag. 
this may include the universal product code and serial number of the item to which the chip is attached to generate a remainder (hashed number) which identifies the time slot in which the tag transmits its identity to the reader. the execution of the program in the processor of the reader remains synchronized with such communication and, therefore, keeps track of which time slot is being received. such synchronous operation appears in fig. 6 as decision block 65. at decision block 65, the program determines whether the current time slot is the last time slot which could be created using the previously broadcast hashing base number. so long as the last time slot has not passed, the antenna 39 and modem 37 of the reader will attempt to receive tag data in the then current time slot as represented in the timing chart of fig. 4 and by block 67 in fig. 6. signal discrimination program instructions from storage 35 execute in processor 31 to discriminate between null, clear and collision signals. such discrimination is represented in the flow diagram of fig. 6 by decision blocks 69 and 73. if more than one response has been received, the response will be garbled and the output from decision block 69 sets the collision detected state 71 shown in fig. 6. once a collision has been detected in a time slot, the program flow returns to decision block 65 to await the next time slot. if more than one response has not been received, the possibility exists that there has been a null response or a clear response from a single tag. this decision is detected at block 73 where a yes output leads directly back to time slot decision block 65 because a null indicates that no tag has transmitted in the time slot. a no output from block 73 indicates that a clear signal has been received from only one tag which will be acknowledged at block 75.
the transmission of an acknowledgement to a clearly responding tag can be within the same time slot wherein the tag is transmitting or it can be at the end of all time slots. if the ack is transmitted within the same time slot, it will be presumed that the acknowledgement is transmitted immediately in the latter portion of the same time slot. the acknowledgement is transmitted specifically to a tag identified by its tag universal product code or other specified field, and serial number, and the acknowledgement message could also include a new slot number. the new slot number is the slot in which future transmissions from the tag to the reader will be expected by the reader. after detecting responses and acknowledging clear responses, the program flow returns to decision block 65 and the loop is repeated until the last time slot has been received. the yes output from decision block 65 leads to block 66 which, if group acks are used, broadcasts a group ack to all the tags successfully heard from. the group ack packet would have a broadcast address and send the addresses of the tags which successfully communicated as data. the occurrence of any collisions during the previous series of time slots is determined by block 77. if no collisions occurred, that means that all tags in the cart have clearly transmitted their identity to the reader one tag at a time, each in a single time slot, and the program flow can continue to blocks 79, 81, 83 and 85 to look up the price for each of the items, total the purchase amount, accept payment for the transaction and adjust any inventory records to reflect sale of the quantity and type of each item contained in the cart. the tag identification information may be used to identify the product to which the related tag was attached by accessing storage to retrieve product information and price information. this storage may be part of the reader or stored in the identification tag itself.
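The transaction steps of fig. 6 (price lookup, tallying, inventory adjustment) amount to table lookups once every tag has been read. A minimal sketch follows; the upcs, product names, prices and inventory counts are hypothetical, and a real reader would resolve them from its own storage or from the tags themselves.

```python
# Sketch of the reader's transaction processing after a successful read:
# resolve each upc against a price table, tally the total, and decrement
# inventory for each item sold.

price_table = {"016000-123456": ("cinnamon spice", 2.49),
               "016000-654321": ("bread", 1.89)}
inventory = {"016000-123456": 40, "016000-654321": 12}

def process_cart(read_upcs):
    total = 0.0
    for upc in read_upcs:
        name, price = price_table[upc]   # product/price lookup
        total += price                   # tally the purchase amount
        inventory[upc] -= 1              # adjust inventory records
    return round(total, 2)

print(process_cart(["016000-123456", "016000-654321", "016000-654321"]))
```

Note that identical items appear once per physical unit because every tag carries its own serial number, so the tally naturally counts duplicates.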
in the event that a collision was detected, the collision means that there still remain items in the cart whose label tags have not yet been able to clearly transmit their identity to the reader. in this case, the flow of fig. 6 follows the yes path to block 87 where a new hashing base number is broadcast to all tags. those tags which have already received an acknowledge transmission do not act on the new hashing base number. only those tags which have had no acknowledgement and whose programs are therefore in a state indicating that they have not yet clearly transmitted, act on the new hashing base number to combine it with the specified fields to define a new time slot. after new time slots have been defined, the remaining tags are still in synchronism with the reader and at block 65 the cycle repeats in order to read the remaining tags. this process continues until, as previously described, all tags have been read as indicated by the detection of no collisions. as previously mentioned with respect to block 75 of fig. 6, there are a number of ways that the acknowledgement can be transmitted to the tag. in addition to transmitting in the same time slot as previously described, acknowledgements can be made at the end of each sequence of time slots by sending a group acknowledge command followed by the identity of each tag whose signal was clearly received alone in a time slot. this is done in block 66. in this way all tags whose signal was clearly received will be provided with information to be used by the tag to store a state causing that tag not to retransmit during the next sequence of time slots based on the new hashing base number. referring now to fig. 7, the program flow of the operation of each tag 10 is set forth in flow diagram format. each tag receives power at its antenna 47 or from a battery. this state is represented in fig. 7 at block 101. the tag is synchronized with the reader at block 101 by recognition of the gtid command at block 103. 
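the rebroadcast cycle described above, in which colliding tags wait for a new hashing base and reshuffle into fresh slots until every tag has been heard alone, can be simulated with a minimal sketch. the slot assignment via python's built-in hash, the slot count, and the round limit are illustrative assumptions, not details taken from the patent.

```python
import random

def read_all_tags(tag_ids, num_slots=64, max_rounds=100, seed=1):
    # simulate repeated read cycles: each round the reader broadcasts a
    # new hashing base, the still-unacknowledged tags hash into slots,
    # and only a tag alone in its slot is heard clearly and acknowledged
    rng = random.Random(seed)
    pending = set(tag_ids)
    heard = []
    for _ in range(max_rounds):
        if not pending:
            break
        base = rng.getrandbits(32)  # new hashing base for this round
        slots = {}
        for tag in pending:
            slots.setdefault(hash((base, tag)) % num_slots, []).append(tag)
        for occupants in slots.values():
            if len(occupants) == 1:  # clear signal: acknowledge the tag
                heard.append(occupants[0])
                pending.discard(occupants[0])
            # two or more occupants means a collision; those tags stay
            # pending and act on the next hashing base
    return heard
```

with 64 slots, a cartful of a few dozen tags is normally resolved in only a handful of rounds, since each fresh base splits the previously colliding tags apart.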
the hashing base number is used at block 105 when the program stored in storage 43 executes in processor 41 to calculate the number of the time slot wherein this tag will transmit response information to the reader. after calculating its own time slot, the tag will count time slots until its time slot occurs as set forth graphically in the timing chart of fig. 4. when the time slot of this particular tag occurs, this tag sends its data by broadcast transmission to the reader as depicted in block 107 of fig. 7. if the tag receives an acknowledgement from the reader, the tag enters a standby state and no longer has to calculate further time slots and retransmit its identification data because the program in effect knows that the reader has received its data, clearly alone, in a time slot. since the reader 21 is now aware of the identity of the tag, it can later address a command to the tag at block 111 to receive from the tag any other data at 113 which is stored therein. the data may include price, product expiration date or other information relevant to the item to which this tag is attached. if at decision block 109, the program of the tag determines that no acknowledgement has been received, the tag goes into a wait state until it receives from the reader another hashing base number at block 103. alternately, the tag can calculate a new hashing base number. thereafter, this tag repeats this cycle of operations at blocks 103, 105, 107 and 109 until it has received an acknowledgement. 
while the invention has been described with respect to a preferred embodiment including the application of the invention in a store having a checkout station and having a plurality of items in stock to which tags have been attached, it will be recognized that various changes in application and detailed implementation may be made without departing from the spirit and scope of the invention which is to allow acquisition of information from a plurality of sources without the need to pass each through a reader in sequence. for example, the invention may find utility in the monitoring of wildlife to which tags have been applied. wildlife within a feeding area thereafter may be identified from time to time, so that the feeding or other habits of the wildlife can be learned without interfering in their environment in order to collect the data. future tags' data fields could be made large enough for passenger information, patient information or instruction manuals. examples of variations in the detailed implementation are the inclusion of a battery on each tag so that power need not be received or the use of functional logic on each tag instead of a sequential processor and stored program as described in the preferred embodiment. tags may be implemented in multiple components or a single chip. another variation could be the use of a method that indirectly detects collisions by the absence of a clear signal reaching the reader from a tag. either no transmission from any tag or colliding transmissions from multiple tags could prevent a clear signal from reaching the reader. collision resolution could be done simply by repeating read cycles until no clear message is received several consecutive times, changing the hashing base each read cycle. with a relatively small number of extra read cycles, the probability of multiple tags repeatedly colliding in the same time slot would be extremely small. this would effectively eliminate the risk of not reading a tag. 
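the claim that a few extra read cycles make a missed tag vanishingly unlikely can be made concrete: with an independent hashing base each cycle, two given tags land in the same slot with probability 1/n per cycle, so colliding in every one of k cycles has probability (1/n)**k. the slot count below is illustrative only.

```python
def repeat_collision_probability(num_slots: int, cycles: int) -> float:
    # probability that two given tags hash into the same slot in every
    # one of `cycles` read cycles, assuming an independent hashing base
    # (and hence an independent, uniform slot choice) each cycle
    return (1.0 / num_slots) ** cycles

# with 64 slots, five extra cycles leave under a one-in-a-billion chance
# that the same pair of tags collides every single time
p = repeat_collision_probability(num_slots=64, cycles=5)
```

this is why simply repeating read cycles with fresh hashing bases, as suggested above, effectively eliminates the risk of not reading a tag.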
accordingly, it will be recognized that these and other modifications may be made without departing from the scope of the invention as measured by the following claims.
127-299-800-831-094
US
[ "US" ]
B29C35/08,B29C65/14
1998-01-16T00:00:00
1998
[ "B29" ]
method of using infrared radiation for assembling a first component with a second component
a method of assembling a first component for assembly with a second component involves a heating device which includes an enclosure having a cavity for inserting a first component. an array of infrared energy generators is disposed within the enclosure. at least a portion of the first component is inserted into the cavity, exposed to infrared energy and thereby heated to a temperature wherein the portion of the first component is sufficiently softened and/or expanded for assembly with a second component.
1. a method of assembling a first component for assembly with a second component comprising the steps of: a. providing a heating device comprising an enclosure defining a cavity for inserting a first component, an array of infrared energy generators disposed within said enclosure and arranged so that at least a portion of said first component inserted into said cavity is exposable to infrared energy generated by said array; b. inserting at least a portion of said first component into said cavity to expose said portion of said first component to infrared energy generated by said infrared energy generators so that said portion of said first component is heated to a temperature wherein said portion of said first component is sufficiently at least one of softened and expanded for assembly with a second component; c. removing said portion of said first component from said cavity; and d. assembling said first component with said second component. 2. a method in accordance with claim 1 wherein said first component comprises an elastic material. 3. a method in accordance with claim 1 wherein said first component comprises a polymer boot. 4. a method in accordance with claim 3 wherein said second component comprises a cv joint. 5. a method in accordance with claim 1 wherein said infrared energy generators comprise incandescent lamps. 6. a method in accordance with claim 1 wherein said array surrounds said cavity so that an outer surface of said portion of said first component is exposed to infrared energy. 7. a method in accordance with claim 6 further comprising another array disposed within said cavity so that an inner surface of said portion of said first component is exposed to infrared energy. 8. a method in accordance with claim 1 further comprising the additional step of controlling the amount of infrared energy generated by said array. 9. 
a method in accordance with claim 8 wherein said additional step of controlling the amount of infrared energy is accomplished by controlling means comprising a temperature sensor. 10. a method in accordance with claim 9 wherein said temperature sensor comprises an optical pyrometer. 11. a method in accordance with claim 1 further comprising the additional step of removing residual heat from said heating device. 12. a method in accordance with claim 11 wherein said additional step of removing residual heat is accomplished by cooling means comprising a fan. 13. a method in accordance with claim 11 wherein said additional step of removing residual heat is accomplished by cooling means which are thermostatically controlled.
field of the invention the present invention relates to devices and methods of heating a specific portion of a first component in order to facilitate assembly with another component, and more particularly to such devices and methods wherein infrared energy is generated and used to heat the first component prior to assembly. background of the invention in various types of industry, components are assembled by pressing together wherein a first component having a cavity is assembled with a second component which slides into the cavity of the first component. one of the components is generally characterized by a usually known degree of elasticity, and is stretched or compressed to a usually known degree during assembly, and remains in a stretched or compressed condition to grip or force against the other component in order to maintain assembly with the other component. for example, in the automotive industry, tubular elastic (usually polymer) boots are assembled over metal components by manually forcing the elastic component over the metal component, usually with the aid of a lubricant. other methods include heating in a convection heater or via direct contact with a hot fluid. disadvantages thereof include time required for the operation, inefficient heating of components which results in waste heat generated into the work environment, worker fatigue, and costs involved in the use of a lubricant. of particular interest are methods employed in the assembly of polymer boots onto automotive constant-velocity (cv) joints. presently, statistics indicate that at least 17,000 boots are assembled by hand onto cv joint components daily using lubricants and force. 
objects of the invention accordingly, objects of the present invention include the provision of devices and methods of assembling a first component with a second component while minimizing time requirement, minimizing waste heat generated into the work environment, minimizing worker fatigue, and minimizing or eliminating the need for use of a lubricant. further and other objects of the present invention will become apparent from the description contained herein. summary of the invention in accordance with one aspect of the present invention, the foregoing and other objects are achieved by a device for heating a component including an enclosure defining a cavity for inserting a component; and an array of infrared energy generators disposed within the enclosure and arranged so that at least a portion of a component inserted into the cavity is exposable to infrared energy generated by the array. in accordance with another aspect of the present invention, a method of assembling a first component for assembly with a second component includes the steps of: providing a heating device including an enclosure defining a cavity for inserting a first component, an array of infrared energy generators disposed within the enclosure and arranged so that at least a portion of the first component inserted into the cavity is exposable to infrared energy generated by the array; inserting at least a portion of the first component into the cavity to expose the portion of the first component to infrared energy generated by the infrared energy generators so that the portion of the first component is heated to a temperature wherein the portion of the first component is sufficiently softened and/or expanded for assembly with a second component; removing the portion of the first component from the cavity; and assembling the first component with the second component. brief description of the drawing in the drawing: fig. 
1 is an oblique cutaway view of a partially disassembled boot heater in accordance with an embodiment of the present invention. fig. 2 is an oblique view of a boot heater in accordance with an embodiment of the present invention. fig. 3 is a schematic top view of a boot heater in accordance with an embodiment of the present invention. detailed description of the invention the present invention is applicable to boots and other components such as seals, dust covers, gaskets, handles, fasteners, mechanism components, and the like. such components can be comprised of any material exhibiting elasticity and which also is softenable and/or expandable via infrared heating thereof. in accordance with a particular embodiment of the present invention, a device and method are described wherein elastic components, for example, polymer boots, are exposed to and efficiently heated by infrared energy. the device directs energy generated thereby efficiently to the component, with minimal heat transferred to the work environment. the device is "cold-walled"--the enclosure containing the infrared chamber is not substantially heated. moreover, only the portion of the part that needs to be heated will be heated. referring to figs. 1 and 2, an embodiment of the present invention is described which is suitable for assembling polymer boots onto cv joints. a boot heater 10 is structurally supported by an enclosure 12 which comprises top 14, sides 16, and bottom 18. an array of incandescent lamps 20, preferably of the tungsten halogen type, is mounted within the enclosure 12 and supported thereby. each lamp 20 is supported by a pair of conventional respective upper and lower lamp supports 22, 22' and electrically connected by conventional respective upper and lower electrical connections 24, 24'. 
lamp supports 22, 22' are respectively arranged on and supported by upper and lower support plates 30, 30' which are fastened to the enclosure 12 by respective upper and lower standoff supports 32, 32'. the array of lamps 20 are surrounded by a reflector 40 which can be of straight (shown), elliptical, parabolic, or any other suitable cross section to reflect infrared energy as desired. the reflector 40 is supported by upper and lower support plates 30, 30'. the back of the reflector 40 is preferably covered with insulation 42. the top 14 defines an opening 90 for inserting a boot 95 into the heater where the boot 95 is surrounded by the array of lamps 20. the top 14 or the lower support plate 30' acts as a stop to allow insertion of the boot 95 only to a preselected depth into the enclosure 12. a tubular shield 92 can optionally be located around the opening 90 and down to the upper support plate 30 to cover the upper electrical connections 24. cooling means such as an exhaust fan 50 and/or vent openings (not illustrated) direct air through the enclosure 12 to remove any residual heat that is not absorbed by the boot 95. the exhaust fan 50 is preferably mounted on the bottom 18 to provide balanced airflow through the enclosure 12. the exhaust fan 50 can be controlled by a thermostat 46. cooling means can optionally comprise a static or flowing liquid or any other conventional cooling method and/or device which is suitable for cooling a device as described herein. controlled power is supplied to the lamps via a controlled power supply 60 in order to control the amount of infrared energy generated thereby. conventional wiring 62 is used to electrically connect the electrical connections 24, 24' of the lamps 20 to the controlled power supply 60 and thence to a power source 64. the controlled power supply 60 can comprise a manual controller. 
the controlled power supply 60 can comprise electrical and/or electronic control systems, for example, an optical pyrometer temperature sensor 44 in operative relationship with the heater and coupled with a silicon-controlled-rectifier (scr)/controller system located within the controlled power supply 60. moreover, the controlled power supply 60 can comprise a mechanical, electrical and/or electronic time-delay control system. the controlled power supply 60 can be fully automated via conventional automation technology, as can the entire device and the process of heating and assembling components. electrical connection 66 to the exhaust fan 50 can bypass the controlled power supply 60 and be connected directly to the power source 64. example i a boot heater was constructed as described hereinabove. a polymer boot was inserted into the boot heater so that a portion of the boot that is generally assembled onto a cv joint was within the cavity and exposable to the array of lamps therein, the remaining portion of the boot remaining outside of the boot heater. the boot heater was energized for 12 seconds to expose an outer surface of the portion of the boot to infrared energy. the boot was immediately removed and the temperature of the exposed portion thereof was measured at 260° f., a temperature at which the polymer material thereof was suitably softened/expanded for assembly onto a respective cv joint. methods of heating cv joint boots and other components for assembly can vary within the scope of the present invention. for example, voltage to the infrared energy generator and time of exposure to the infrared energy can be varied in order to heat components to various temperatures. some experimentation is usually desirable in order to optimize voltage and exposure time for a particular component. moreover, the number of lamps in the array can be modified to increase or decrease the amount of infrared energy generated. for example, the boot heater in fig. 
1 can be modified as shown schematically in fig. 3 to have a second array of lamps 21 within the cavity so that when a boot is inserted therein, the second array of lamps 21 will be inside the boot, and will heat the boot from the inside by exposing an inner surface thereof to infrared energy. an advantage of such an arrangement is that the boot will be heated to the desired temperature faster, and with the result of a more even heating of inside and outside surfaces of the boot. moreover, the device can be modified to accept components of various sizes and/or shapes by removing the top 14 and shield 92 and replacing the same with others of desired size/shape. while there has been shown and described what are at present considered the preferred embodiments of the invention, it will be obvious to those skilled in the art that various changes and modifications can be made therein without departing from the scope of the inventions defined by the appended claims.
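the pyrometer/scr arrangement described for the controlled power supply can be sketched as a simple on/off control decision. the 260° f target comes from example i, while the hysteresis band and the bang-bang control law below are assumptions, since the text does not specify how the controller modulates lamp power.

```python
def lamps_on(prev_on: bool, pyrometer_f: float,
             target_f: float = 260.0, band_f: float = 5.0) -> bool:
    # bang-bang control with a small hysteresis band: heat until the
    # pyrometer reads the target, and do not switch back on until the
    # reading falls below the target minus the band, avoiding rapid
    # cycling of the lamps around the setpoint
    if pyrometer_f >= target_f:
        return False   # at or above target: lamps off
    if pyrometer_f < target_f - band_f:
        return True    # well below target: lamps on
    return prev_on     # inside the band: hold the current state

# heating from well below target: the lamps stay on
state = lamps_on(True, 254.0)
```

a time-delay control, also mentioned in the text, would instead simply energize the lamps for a fixed, experimentally optimized exposure time such as the 12 seconds of example i.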
129-005-197-613-012
US
[ "US", "WO" ]
G06F3/033,G06F3/048
1997-04-09T00:00:00
1997
[ "G06" ]
system and method for producing a drag-and-drop object from a popup menu item
a method and apparatus are provided in conjunction with a graphical user interface (gui) for producing drag-and-drop objects (317b) from one or more choice-items (317a) presented in a choice-listing menu of a source application program (314). the so-produced drag-and-drop object is dropped onto a receiving area (312c) and acquires new functionality based on the source of the choice-item and on the nature of the receiving area. a filename presented as a choice-item in a history portion of a file menu for example may be converted into a drag-and-drop object and dropped on an open desktop area. the dropped object (319) may be then activated to automatically launch the source application program and to automatically open the file identified by the original choice-item.
1. a machine-implemented method for using choice-items presented in a choice-listing menu for functions other than choice-item selection or choice-item activation, said method comprising the steps of: (a) detecting selection of a presented choice-item; (b) identifying object specifications of the selected choice-item; (c) detecting special keying on the selected choice-item, where the special keying indicates a request for producing a drag-and-drop object from the selected choice-item; and (d) using the identified object specifications of the selected choice-item that was specially keyed on to generate a drag-and-drop object. 2. a machine-implemented method according to claim 1 wherein the implementing machine has an operating system (os) that detects events and said step (a) of detecting selection includes: (a.1) subclassing an application program to cause the subclassed application program to request an os callback to a menu monitoring routine each time the os detects an event involving the subclassed program; (a.2) determining within the menu monitoring routine whether the os-detected event is a movement-invoked selection of a next choice-item. 3. a machine-implemented method according to claim 2 further comprising between said step (b) of identifying and said step (c) of detecting the step of: (b.1) copying the identified object specifications of the selected choice-item into a buffer that is later accessible to the menu monitoring routine at the time of said step (d) of using the identified object specifications. 4. a machine-implemented method according to claim 2 wherein said implementing machine comprises a movable mouse having left and right depressible buttons and further wherein said step (c) of detecting special keying includes: (c.1) determining within the menu monitoring routine whether the os-detected event indicates completion of a mouse right-button depression and a mouse dragging movement over an area associated with the selected choice-item. 5. 
a machine-implemented method according to claim 2 wherein said operating system (os) produces drag-and-drop objects in response to passed requests for production of such drag-and-drop objects and in further response to passed object specifications defining attributes of the to-be-produced drag-and-drop objects, and wherein said step (d) of using includes: (d.1) passing the identified object specifications of the specially keyed on choice-item to the os; and (d.2) transmitting to the os a request to produce a drag-and-drop object based on the passed object specifications. 6. a machine-implemented method according to claim 2 further comprising between said step (c) of detecting special keying and said step (d) of using, the step of: (c.1) closing the choice-listing menu in which the specially keyed on choice-item was presented. 7. a machine-implemented method according to claim 6 wherein the choice-listing menu in which the specially keyed on choice-item was presented is a submenu of one or more other opened choice-listing menus and wherein between said step (c) of detecting special keying and said step (d) of using, the method further comprises the steps of: (c.2) closing the one or more other opened choice-listing menus. 8. a machine-implemented method according to claim 1 wherein said implementing machine comprises a graphically-oriented user input device having means by which the user can request movement of a displayed graphical object and further wherein said step (c) of detecting special keying includes: (c.1) detecting a user request presented by way of said graphically-oriented user input device for movement of the selected choice-item; and (c.2) detecting a simultaneous user request presented by simultaneous depression of a pre-assigned button for a special drag of the selected choice-item, said special drag indicating a request for initiating a drag-and-drop operation. 9. 
a machine-implemented method according to claim 1 further comprising the steps of: (e) allowing the generated drag-and-drop object to be dropped on a user-selected receiving area; and (f) producing in the user-selected receiving area, a dropped object having properties defined by an inter-application transfer to the receiving area of said object specifications. 10. a machine-implemented method for use with a plurality of choice-items presented in one or more choice-listing menus, wherein choice-items can be normally keyed on to invoke respective normal functions associated with the choice-items, said method comprising the steps of: (a) detecting special keying on a selected choice-item that is displayed in a source area within an opened choice-listing menu, where the special keying indicates a request for dragging the selected choice-item away from the opened choice-listing menu; and (b) generating a drag-and-drop object from the selected choice-item that was specially keyed on. 11. a machine-implemented method for use with a plurality of choice-items presented in one or more choice-listing menus, said method comprising the steps of: (a) detecting special keying on a selected choice-item that is displayed in a source area within an opened choice-listing menu, where the special keying indicates a request for dragging the selected choice-item away from the opened choice-listing menu; (b) generating a drag-and-drop object from the selected choice-item that was specially keyed on; (c) allowing the generated drag-and-drop object to be dropped on a user-selected receiving area; and (d) producing in the user-selected receiving area, a dropped object having properties defined by the user-selected receiving area and by the source area. 12. 
a machine-implemented method according to claim 11 wherein: (a.1) the selected choice-item that was specially keyed on can be a historical filename of a data file previously worked on under an associated application program; (c.1) the user-selected receiving area can be an open desktop area; and (d.1) when the user-selected receiving area is such an open desktop area, and the selected choice-item that was specially keyed on is such a historical filename, said producing step produces a shortcut object on the open desktop area for automatically launching the associated application program and simultaneously opening the named historical data file from within the launched application program. 13. a machine-implemented method according to claim 11 wherein: (a.1) the selected choice-item that was specially keyed on can be a net_favorite filename representation of a file that is frequently downloaded by way of a network; (c.1) the user-selected receiving area can be an open desktop area; and (d.1) when the user-selected receiving area is such an open desktop area, and the selected choice-item that was specially keyed on is such net_favorite filename representation, said producing step produces a shortcut object on the open desktop area for automatically launching a notification program that, when invoked, tests an external version of the file identified by said net_favorite filename for changes relative to a local copy that has been previously downloaded, and if there are differences, fetches the latest version from over the network. 14. 
a machine-implemented method according to claim 11 wherein: (a.1) the selected choice-item that was specially keyed on can be a network sitename displayed in a sitename recording menu within a network browsing program; (c.1) the user-selected receiving area can be an open desktop area; and (d.1) when the user-selected receiving area is such an open desktop area, and the selected choice-item that was specially keyed on is such a network sitename, said producing step produces a shortcut object on the open desktop area for automatically launching the network browsing program and automatically seeking the network site identified by the sitename. 15. a machine-implemented method according to claim 11 wherein: (a.1) the selected choice-item that was specially keyed on can be an e-mail address displayed in an address recording menu within an electronic-mail program; (c.1) the user-selected receiving area can be an open desktop area; and (d.1) when the user-selected receiving area is such an open desktop area, and the selected choice-item that was specially keyed on is such an e-mail address, said producing step produces a shortcut object on the open desktop area for automatically launching the electronic-mail program and automatically addressing an e-mail message to the addressee identified by the e-mail address. 16. a machine-implemented method according to claim 11 wherein: (a.1) the selected choice-item that was specially keyed on can be a function choice-item displayed in a functions listing menu within an application program; (c.1) the user-selected receiving area can be an open desktop area; and (d.1) when the user-selected receiving area is such an open desktop area, and the selected choice-item that was specially keyed on is such a function choice-item, said producing step produces a function object on the open desktop area for invoking the identified function from the desktop. 17. 
a machine-implemented method according to claim 11 wherein: (a.1) the selected choice-item that was specially keyed on can be a tool choice-item displayed in a tools listing menu within an application program; (c.1) the user-selected receiving area can be a toolbar area; and (d.1) when the user-selected receiving area is such a toolbar area, and the selected choice-item that was specially keyed on is such a tool choice-item, said producing step produces a tool object in the toolbar area for invoking the corresponding tool function from the toolbar. 18. a machine-implemented method according to claim 11 wherein: (a.1) the selected choice-item that was specially keyed on can be a first choice-item displayed in a first listing menu within a first application program; (c.1) the user-selected receiving area can be an executable object that when invoked launches a second application program; and (d.1) when the user-selected receiving area is such an executable object, and the selected choice-item that was specially keyed on is such a first choice-item, said producing step transfers object specifications of the first choice-item to the second application program. 19. a machine-implemented method according to claim 11 wherein: (a.1) the selected choice-item that was specially keyed on can identify a file having a filename with a file-categorizing extension; (c.1) the user-selected receiving area can discriminate between received object specifications having different file-categorizing extensions; and (d.1) when the user-selected receiving area is such a discriminator among different file-categorizing extensions, and the selected choice-item that was specially keyed on is such an extension categorized choice-item, said producing step produces different results in response to different file-categorizing extensions. 20. 
an instructing device adapted for passing execution instructions to an instructable machine, wherein the instructable machine uses choice-items presented in one or more choice-listing menus, wherein choice-items can be normally keyed on to invoke respective normal functions associated with the choice-items, said execution instructions including instructions for causing the machine to perform the steps of: (a) detecting special keying on a selected choice-item that is displayed in a source area within an opened choice-listing menu, where the special keying indicates a request for dragging the selected choice-item away from the opened choice-listing menu; and (b) generating a drag-and-drop object from the selected choice-item that was specially keyed on. 21. a computer system comprising: (a) display means for displaying a plurality of graphical items including a graphical desktop layer, one or more overlying, graphical windows and a user-movable cursor, wherein at least one of the windows occasionally contains a choice-listing menu presenting a plurality of selectable choice-items; (b) a graphically-oriented user input device for manipulating one or more of said graphical items; and (c) graphical user interface means for detecting and processing graphically oriented manipulations and producing corresponding results, said graphical user interface means including: (c.1) hot key detecting means for detecting special keying by a user on a selected choice-item that is displayed in a source area within an opened choice-listing menu, where the special keying indicates a request for dragging the selected choice-item away from the opened choice-listing menu; and (c.2) object generating means for generating a drag-and-drop object from the selected choice-item that was specially keyed on. 22. 
a machine-implemented method according to claim 10 wherein the normal functions associated with normal keying on the choice-items include at least one of the functions in the group consisting of: open file, save file, save file as, rename file, close file, unfurl file history, and exit file.

23. a machine-implemented method according to claim 10 wherein the normal functions associated with normal keying on the choice-items include at least one of the functions in the group consisting of: edit undo, edit copy, edit cut, and edit paste.

24. a machine-implemented method according to claim 10 wherein the normal functions associated with normal keying on the choice-items include at least an unfurl file history function.

25. a machine-implemented method according to claim 10 wherein the normal functions associated with normal keying on the choice-items include at least one of the functions in the group consisting of: spell-checking, grammar-checking, clip-art fetch, thesaurus-substitute, magnifying glass, and macro-playback.

26. a machine-implemented method according to claim 10 wherein the normal functions associated with normal keying on the choice-items include at least one of the functions in the group consisting of: macro record and macro playback.

27. a machine-implemented method according to claim 10 wherein the normal functions associated with normal keying on the choice-items include at least one of the functions in the group consisting of: visiting a favorite or bookmarked internet site; performing an ftp file-download; and performing a newsgroup exchange.

28. a machine-implemented method according to claim 10 wherein the normal functions associated with normal keying on the choice-items include at least listing e-mail aliases or addresses.

29. a machine-implemented method according to claim 10 wherein the normal functions associated with normal keying on the choice-items include at least listing specially-encoded data files.

30.
a machine-implemented method according to claim 29 wherein the listed, specially-encoded data files include at least one of: html files, dot-txt files, dot-wav files, dot-bmp files, dot-gif files, dot-pcx files, and files with other picture defining formats.

31. a machine-implemented method for use with a plurality of choice-items presented in one or more, popped choice-listing menus, wherein said popped choice-listing menus are each temporarily presented in a display area and retracted after use, and wherein such use can include a normal keying on one of the presented choice-items for invoking a respective normal function associated with the normally keyed-on choice-item, said method comprising the steps of: (a) detecting special keying on a presented choice-item, where the special keying indicates a request for dragging the specially-keyed-on choice-item away from the corresponding choice-listing menu; and (b) generating a drag-and-drop object from the selected choice-item that was specially keyed on.

32. a machine-implemented method according to claim 31 wherein the normal functions associated with normal keying on the choice-items include at least one of the functions in the group consisting of: open file, save file, save file as, rename file, close file, unfurl file history, and exit file.

33. a machine-implemented method according to claim 31 wherein the normal functions associated with normal keying on the choice-items include at least one of the functions in the group consisting of: edit undo, edit copy, edit cut, and edit paste.

34. a machine-implemented method according to claim 31 wherein the normal functions associated with normal keying on the choice-items include at least an unfurl file history function.

35.
a machine-implemented method according to claim 31 wherein the normal functions associated with normal keying on the choice-items include at least one of the functions in the group consisting of: spell-checking, grammar-checking, clip-art fetch, thesaurus-substitute, magnifying glass, and macro-playback.

36. a machine-implemented method according to claim 31 wherein the normal functions associated with normal keying on the choice-items include at least one of the functions in the group consisting of: macro record and macro playback.

37. a machine-implemented method according to claim 31 wherein the normal functions associated with normal keying on the choice-items include at least one of the functions in the group consisting of: visiting a favorite or bookmarked internet site; performing an ftp file-download; and performing a newsgroup exchange.

38. a machine-implemented method according to claim 31 wherein the normal functions associated with normal keying on the choice-items include at least listing e-mail aliases or addresses.

39. a machine-implemented method according to claim 31 wherein the normal functions associated with normal keying on the choice-items include at least listing specially-encoded data files.

40. a machine-implemented method according to claim 39 wherein the listed, specially-encoded data files include at least one of: html files, dot-txt files, dot-wav files, dot-bmp files, dot-gif files, dot-pcx files, and files with other picture defining formats.

41.
a manufactured instructing signal for conveying instructions to an instructable machine, wherein the instructable machine displays choice-items presented in one or more choice-listing menus, wherein choice-items can be normally keyed on to invoke respective normal functions associated with the choice-items, said conveyed instructions including instructions for causing the machine to perform the steps of: (a) detecting special keying on a selected choice-item that is displayed in a source area within an opened choice-listing menu, where the special keying indicates a request for dragging the selected choice-item away from the opened choice-listing menu; and (b) generating a drag-and-drop object from the selected choice-item that was specially keyed on.

42. a manufactured instructing signal for conveying instructions to an instructable machine, wherein the instructable machine provides a plurality of choice-items presented in one or more, popped choice-listing menus, wherein said popped choice-listing menus are each temporarily presented in a display area and retracted after use, and wherein such use can include a normal keying on one of the presented choice-items for invoking a respective normal function associated with the normally keyed-on choice-item, said conveyed instructions including instructions for causing the machine to perform the steps of: (a) detecting special keying on a presented choice-item, where the special keying indicates a request for dragging the specially-keyed-on choice-item away from the corresponding choice-listing menu; and (b) generating a drag-and-drop object from the selected choice-item that was specially keyed on.

43.
a machine-implemented method for use with a plurality of choice-items presented in one or more choice-listing menus of a first application program, said method comprising the steps of: (a) detecting special keying on a selected choice-item that is displayed in a source area within an opened choice-listing menu of the first application program, where the special keying indicates a request for dragging the selected choice-item away from the opened choice-listing menu; (b) generating a drag-and-drop object from the selected choice-item that was specially keyed on; (c) allowing the generated drag-and-drop object to be dropped on a receiving area of a second application program; and (d) producing in the receiving area of the second application program, a dropped object having properties defined by the receiving area.

44. a machine-implemented method according to claim 43 wherein the second application program is a gui desktop.

45. a machine-implemented method according to claim 43 wherein the selected choice-item that was specially keyed on represents a favorite or bookmarked internet site.

46. a machine-implemented method according to claim 43 wherein the selected choice-item that was specially keyed on represents an e-mail address.

47. a machine-implemented method according to claim 43 wherein the selected choice-item that was specially keyed on represents a file and the receiving area of the second application program is a `favorite addressee` object which automatically copies the file dropped thereon into or automatically attaches the file dropped thereon to an e-mail letter and which automatically then sends the letter to a corresponding `favorite addressee`.

48. a machine-implemented method according to claim 43 wherein the generated drag-and-drop object represents an object which opens a pre-identified-file after launching an associated application program.

49.
a machine-implemented method according to claim 48 wherein the object which opens said pre-identified-file is tucked into a desktop start_up folder so that the function of said object which opens will be automatically executed when an associated desktop starts up.

50. a machine-implemented method according to claim 43 wherein the generated drag-and-drop object represents an object which performs a function of a shrunken window.
background

1. field of the invention

the invention is generally directed to graphical user interfaces (gui's) on computer systems. the invention is more specifically directed to a subclassing method that enables users to convert an on-screen listed item of a popup menu into a drag-and-drop object.

2. description of the related art

graphical user interfaces (gui's) have found widespread acceptance among computer users. instead of having to remember a list of arcane command names, users can now intuitively click on (e.g., with the aid of a mouse button), or otherwise activate, graphical areas on the screen and thereby open previously-hidden lists of next-action choices. a further click or other activation on a selected one of the choice-items initiates a pre-programmed function associated with the chosen item.

such menu-driven choice-making usually takes the form of the well-known popup menu, or scrolldown menu, or context menu. any one or more of these choice-supplying mechanisms or their like is referred to herein as a `choice-listing menu`.

to invoke a choice-listing menu (e.g., a scrolldown menu), the user typically moves an on-screen cursor or another pointing indicia over part of a menu-dropping bar or over another such menu-producing area. the user then clicks on a mouse button to initiate the unscrolling or other presentation and/or activation of the choice-listing menu (e.g., a scroll-down menu). of course, means other than mouse clicking may be used for initiating the presentation and/or activation of a choice-listing menu. for example, the user may instead press a predefined combination of `hot` keys on the keyboard to cause a desired menu to emerge onto the screen.

as its name implies, the choice-listing menu displays a set of choice-items to the user. the user is expected to select one of these choice-items. the represented function, if selected and activated, is thereafter executed.
one or more of the displayed choice-items in a first choice-listing menu can be a source area that unfurls yet another choice-listing menu from the first menu. the unfurled second menu can itself have further submenus.

most users of the microsoft windows 3.1™ or microsoft windows95™ operating systems are familiar with the common menu-dropping bar that appears at the top of an active window that is opened under the win16 or win32 system kernel. the window's top bar (menu bar) typically contains areas marked as file, edit, view, help and so forth.

if the user clicks on the file area, a scrolldown menu comes down from that area to present file-related choice-items such as open, save, save as, rename, close, exit. selecting the open choice-item implies that an open file function will be performed and a to-be-designated file will be opened. selecting the save choice-item implies that a save file function will be performed and a designated file (usually the currently open file) will be saved. selecting the save as choice-item implies that a save file as_new_name function will be performed and a designated file (usually the currently open file) will be saved under a new name. the remaining file-related choice-items provide further file-related functions in accordance with their respective names.

if the user clicks on the edit area, a different scrolldown menu comes down from that area to present edit-related choice-items such as undo, copy, cut, paste, and so forth. the copy function for example copies selected data to a clipboard area that is shared by application programs.

once one of the presented choice-items is selected and invoked from an activated choice-listing menu, a corresponding, pre-programmed function is executed (such as open file, save file, copy, etc.). a menu-invoked function can include the unfurling of a further menu that has further choice-items.
for each of the presented choice-items in a given menu, the user is given only the following choices: either select one of the choice-items in the displayed menu or do not select any item and instead close (rescroll) the choice-listing menu. it may be advantageous to enable users to perform additional functions with the choice-items of choice-listing menus other than to either select one of the presented choice-items or not select any choice-item.

summary of the invention

in accordance with the invention, users are empowered to generate one or more drag-and-drop objects from a corresponding one or more choice-items within a presented choice-listing menu. the thus-generated objects may be dragged away from their points of creation and dropped into new areas of the screen. at the point of drop, each so-generated and dragged item acquires a new functionality, generally one other than being a choice-item in a choice-listing menu.

in one embodiment of the invention, the choice-items within a choice-listing menu include one or more historical file names identifying data files recently worked on from within the corresponding application program. each such historical file name, as listed in the application's choice-listing menu, may be converted into a draggable object (with or without deletion of the original choice-item). the draggable object may then be dragged away from its point of origin and dropped onto another screen area. the dropped item then becomes a separately-activatable object within the screen area where it was dropped. this separately-activatable object typically takes the form of a newly-created icon which may be used, for example, as a `shortcut` for automatically launching an associated application program and simultaneously opening the named historical data file from within that application program.
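the historical-filename conversion described above can be modeled abstractly. the following is a minimal python sketch (all names are hypothetical and it is not the actual win32/ole mechanism) of turning a file-history choice-item into a shortcut-like record that knows which application to launch and which file to open:

```python
def make_shortcut(choice_item):
    """build a desktop-shortcut record from a history-menu choice-item.

    `choice_item` is a dict holding the file's pathname and the name of
    the application program whose file-history menu listed it.
    (illustrative field names, not an os record layout.)
    """
    return {
        # icon title defaults to the last path component
        "title": choice_item["pathname"].rsplit("\\", 1)[-1],
        # application program to launch when the icon is activated
        "launch": choice_item["application"],
        # file to open from within the launched application
        "open": choice_item["pathname"],
    }

# a history-menu entry as it might appear inside a word processor
item = {"pathname": "c:\\docs\\my_file2", "application": "word_process"}
shortcut = make_shortcut(item)
assert shortcut["title"] == "my_file2"
assert shortcut["launch"] == "word_process"
```

double-clicking the resulting icon would, per the text above, launch the recorded application and open the recorded file; here that behavior is only represented by the stored fields.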
the user is thus given an intuitive and convenient way of creating shortcuts to routinely worked-on files and routinely-used file-processing application programs.

in order to uniformly provide each application program with this same ability to convert a choice-item from a displayed choice-listing menu into a drag-and-drop object, an application program subclassing trick is used. at the time each application program is launched, a popup menu monitoring subroutine is attached (subclassed on) to that application program. the popup menu monitoring subroutine is activated each time an event is detected where the event may involve use of a choice-listing menu within the subclassed application program.

if a predefined `hot` sequence of user-input events is carried out, such as a special keying composed of clicking on the right mouse button and simultaneously dragging, the popup menu monitoring subroutine temporarily takes over control to create a drag-and-drop object from the choice-item that had been last selected and then hot keyed on (specially keyed on). at the time of selection, the popup menu monitoring subroutine saves the object specifications of the selected choice-item. at the time of hot keying, the popup menu monitoring subroutine closes the currently unfurled choice-listing menu and any additional further menus in the active window. the closing of the currently unfurled one or more choice-listing menus allows another choice-listing menu to be unfurled during a subsequent drag-and-drop object operation without conflicting with the first unfurled choice-listing menu or menus. after closing the first unfurled choice-listing menu(s), the popup menu monitoring subroutine instructs the operating system to create a drag-and-drop object using the last saved object specifications of the last selected choice-item.
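the save-on-select, close-menus-on-hot-key sequence just described can be sketched as a small state machine. this is a toy python model only (the real subroutine is subclassed onto applications through os hooks; every name below is made up for illustration):

```python
class MenuMonitor:
    """toy model of the popup-menu monitoring subroutine: it remembers
    the specifications of the last-selected choice-item and, on special
    keying, first closes the unfurled menus and then emits a
    drag-and-drop object built from the saved specifications."""

    def __init__(self):
        self.saved_spec = None   # specs saved at time of selection
        self.open_menus = []     # currently unfurled choice-listing menus

    def on_select(self, spec):
        # at the time of selection: save the choice-item's specifications
        self.saved_spec = dict(spec)

    def on_special_key(self):
        # nothing was selected first, so there is nothing to convert
        if self.saved_spec is None:
            return None
        # close the currently unfurled menu and any further submenus
        self.open_menus.clear()
        # ask the (simulated) os to create a drag-and-drop object
        return {"drag_object": True, **self.saved_spec}

mon = MenuMonitor()
mon.open_menus = ["file", "history"]
mon.on_select({"name": "my_file2", "path": "c:\\docs\\my_file2"})
obj = mon.on_special_key()
assert obj["name"] == "my_file2" and mon.open_menus == []
```

the ordering matters for the reason the text gives: menus are closed before object creation so a later drag-and-drop operation can unfurl a fresh menu without conflict.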
at the time of drop, the area receiving the dragged-and-dropped object completes the assignment of properties to the dragged-and-dropped object based on the source specifications and further based on the target location into which the dragged-and-dropped object has been dropped.

other aspects of the invention will become apparent from the below detailed description.

brief description of the drawings

the below detailed description makes reference to the accompanying drawings, in which:

fig. 1 is a perspective view of a computer system in accordance with the invention;

fig. 2 is a block diagram of interconnected components within a computer system in accordance with the invention;

fig. 3 illustrates a drag-and-drop operation in accordance with the invention; and

figs. 4a and 4b provide a flow chart of a method for creating draggable objects from choice-items of choice-listing menus of various application programs.

fig. 5 illustrates a method for locating the window handle of the choice-listing menu that is the source of the drag-and-drop object and closing that choice-listing menu.

fig. 6 illustrates further operations that may be carried out on a gui desktop after converting choice-items into drag-and-drop objects.

figs. 2a, 3a, 6a-b provide legend information.

detailed description

fig. 1 illustrates a perspective view of an overall computer system 100 in accordance with the invention. the computer system includes a display monitor 110, a computer housing 120, a keyboard 130 and a mouse 140. the illustrated user input and output devices 110, 130 and 140 are merely by way of example. other visual output devices and user input devices may, of course, be used in addition to or in place of the illustrated devices. mouse 140 for example can be replaced by or supplemented with other graphically-oriented user input devices such as trackballs, touch pads, joysticks, and so forth.
display monitor 110 includes a display screen 111 that displays a number of graphical items including a desktop layer 112 and an overlying, opened application window 114. in this particular example the opened application window 114 contains a running word processing program having the fictional name, word process. the actual word processing program could be microsoft word™, or corel wordperfect™, or lotus ami pro™, or any one of a host of other commercially available word processing programs. the application window 114 could alternatively have contained a spreadsheet program (e.g., microsoft excel™), a picture-drawing program (e.g., adobe illustrator™), an internet browser program (e.g., microsoft explorer™), an electronic mailing program (e.g., qualcomm eudora™) or any other such application program.

application window 114 is shown to contain a top menu bar 115 having menu-dropping areas such as file, edit, view, format, etc. a user-movable cursor 118 is displayed on screen 111 in the form of an arrowhead and made movable over the other displayed items in response to user activation of the mouse 140 or of another such input device. the arrowhead shaped cursor 118 is shown in fig. 1 to have been brought over the file portion of the top menu bar 115. the left mouse button has been clicked and this caused a pop-down menu 116 to unscroll from the activated file region.

the unscrolled pop-down menu 116 lists a number of options that may be performed by the application program with respect to the file category. these exemplary options include: open (to open an otherwise identified file), save (to save an identified file), close (to close an identified file that has been previously opened), and history . . . , where the latter choice-item unfurls a second menu (not shown in fig. 1, but see fig. 3) that names a set of previously worked-on files.
the named historical files may be reopened from within the running application by clicking on one of their names after unfurling the file history . . . submenu (317 of fig. 3). the file exit choice-item of pop-down menu 116 (fig. 1) may be clicked on when the user wishes to exit the application program (e.g., word process) and close all related files.

as understood by users of gui's, the cursor 118 moves in coordination with movements of the mouse 140 or in coordination with movements of another appropriate user input device such as a trackball or a joystick. cursor 118 may be brought down along the choice-items of pop-down menu 116 for selecting a specific one of the listed choice-items.

in the past, a user was generally limited to selecting and invoking one of the choice-items listed on the pop-down menu or not selecting any choice-item and instead closing the menu 116 (e.g., by hitting the esc key). the present invention provides additional options. as will be detailed below, one or more of the menu choice-items may be copied or torn-away as a draggable object and dropped into a new location where the dragged-and-dropped object takes on new functionality. this means that application program functions that were previously accessed by opening the application program, and thereafter navigating through a series of menus and submenus, can now be converted into more permanently visible, object icons or tool icons that can be accessed and invoked more easily than before. a variety of new desktop functions can be created by the user as desired.

as a first example, assume there is a particular `local` data file that the user routinely keeps on a local hard disk of his or her machine. assume the user routinely uses that local data file from within a particular application (e.g., word_process or spreadsheet).
the local data file may have a long pathname such as, c:\user\me\wordproc\projectx\my_file.123 which the user does not want to type out in full each time. to avoid such lengthy typing, the user can instead (1) launch the word process application program, (2) point to and unfurl the file menu, (3) select and unfurl the history . . . submenu, and then finally (4) select and activate the my_file.123 choice-item (assuming it is still listed as part of the history . . . and has not been replaced by names of more-recently worked on files). but even this is a somewhat convoluted process.

in accordance with the invention, the user unfurls the history . . . submenu once, points to the desired my_file.123 choice-item (a historical data filename) and then depresses the right mouse button, drags the selected choice-item and drops it on an open desktop area. the dragged-on choice-item is automatically converted into a desktop `shortcut` object that remains on the desktop after being dropped there even after the application program (e.g., word process) is closed and exited from. a later double-clicking on the created `shortcut` icon automatically re-launches the application program (e.g., word process) and thereafter automatically opens the corresponding local data file (c:\ . . . \my_file.123).

as a second example, suppose there is a particular `external` download file that can be, and periodically is changed by other users. suppose the local user has a notification program that, when invoked, compares the latest version indicators of the `external` download file against those of a local copy that has been previously downloaded by the local user.
if the version indicators are different, the notification program fetches the latest version from over the internet or from over some other network (e.g., a company intra-net) and locally substitutes the fetched file in place of the earlier version local file. an example of such a notification program is norton notify™ available from symantec corp. of california.

suppose that the downloaded latest-version file is to be used by the local user with a particular application program. one possibility is that the `external` download file contains constantly changing financial information and the application program is a spreadsheet or like program that analyzes the most recent information. the difference here relative to the first given example is that the downloaded file changes quickly at its source site and the user wants to download the most recent version of that quickly changing file each time.

in accordance with the invention, the `net_favorites` or `net_bookmarks` or other such filename representation of the downloaded file, as it appears in a choice-listing menu of the application program, is converted into a dragged-and-dropped desktop icon. the "properties" of the dragged icon are modified to automatically invoke the notification program while the parameter passed to the notification program is the associated filename of the so-generated icon itself. when the so-generated, `net_favorite` icon is thereafter double-clicked on, the open-file action automatically launches the notification program, which then automatically obtains the latest version of the desired external file either locally if available or from over the network if the latest version has changed.
as an extension to the second example, the user may further, for convenience's sake, create a second desktop object by tearing away a file_open or a file_history menu choice-item from the application program (e.g., spreadsheet program) that is to next use the latest-version obtained by the notification program. when this second desktop object is double-clicked on, it will automatically pick up the latest-version file and open it under an automatically launched application program (e.g., the spreadsheet program).

for even more automation, one can tuck both of the executable objects into a windows start_up folder in the recited order, namely: (a) the notification program launching object; and (b) the menu-torn-from object which opens the named-file after launching the associated application program (e.g., spreadsheet or word_process). these executable objects may then be automatically executed in the recited order when the user's desktop starts up and begins executing the executable objects within the start_up folder. when start_up completes, the user will find him/herself looking at the last opened application program (e.g., spreadsheet or word_process) with the latest version of the desired internet (or other network based) file having been fetched, saved locally, and opened.

as a third example of how the choice-item drag-and-convert-to-object feature of the invention can be used, assume there is a particular internet or intranet web site that is a routinely-visited favorite of the user. the name of that internet/intranet `favorite` web site can be torn out from a `favorites` or `bookmarks` or like visit-history recording menu within the browser program and converted into a browser launching icon that automatically seeks out that `favorites` web site (e.g., a favorite search engine site).
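the in-order execution of the two start_up objects can be modeled with a trivial sketch (hypothetical names; a windows start_up folder actually holds shortcut files, not python callables):

```python
def run_startup_folder(startup_folder):
    """execute the startup folder's objects in the recited order.
    each object is represented here as a zero-argument callable."""
    results = []
    for obj in startup_folder:      # order in the folder is preserved
        results.append(obj())
    return results

# (a) the notification-program launching object runs first,
# (b) then the menu-torn object that opens the fetched file
folder = [
    lambda: "latest version fetched",
    lambda: "my_file.123 opened in spreadsheet",
]
results = run_startup_folder(folder)
assert results[0] == "latest version fetched"
```

the point of the ordering, per the text above, is that the fetch completes before the application-opening object runs, so the application opens the freshly fetched version.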
by way of yet a fourth example of how the choice-item drag-and-convert-to-object feature of the invention can be used, assume there is a particular e-mail addressee to whom the user routinely sends electronic mail (via the internet or over another network). the name of that favorite e-mail addressee can be torn out from an `aliases` menu in the electronic-mail program and converted into an e-mail launching icon that automatically fills in the e-mail `to` address field after launching the e-mailing program and invoking the send e-mail function.

before delving into all these and further options, a few system basics are reviewed.

referring to fig. 2, interconnected components of computer 100 are shown schematically. computer 100 includes a central processing unit (cpu) 150 or other data processing means, and a system memory 160 for storing immediately-executable instructions and immediately-accessible data for the cpu 150. system memory 160 typically takes the form of dram (dynamic random access memory) and cache sram (static random access memory). other forms of such high-speed memory may also be used. a system bus 155 operatively interconnects the cpu 150 and system memory 160.

computer system 100 may further include nonvolatile mass storage means 170 such as a hard disk drive, a floppy drive, a cd-rom drive, a re-writable optical drive, or the like that is operatively coupled to the system bus 155 for transferring instructions and/or data over bus 155. instructions for execution by the cpu 150 may be introduced into system 100 by way of computer-readable media 175 such as a floppy diskette or a cd-rom optical platter or other like instructing devices adapted for operatively instructing the cpu 150 (or an equivalent instructable machine). the computer-readable media 175 may define a device for coupling to, and causing system 100 to perform operations in accordance with the present invention as further described herein.
system 100 may further include input/output (i/o) means 180 for providing interfacing between system bus 155 and peripheral devices such as display 110, keyboard 130 and mouse 140. the i/o means 180 may further provide interfacing to a communications network 190 such as an ethernet network, a scsi network, a telephone network, a cable system, or the like. instructions for execution by the cpu 150 may be introduced into system 100 by way of data transferred over communications network 190. communications network 190 may therefore define a means for coupling to, and causing system 100 to perform operations in accordance with the present invention.

system memory 160 holds executing portions of the operating system (os) and of any then-executing parts of application programs. the application programs generally communicate with the operating system by way of an api (application program interface). system memory 160 may include memory means for causing system 100 to perform various operations in accordance with the present invention.

with gui-type operating systems (os's) such as microsoft windows 3.1™ or microsoft windows95™, or microsoft windows nt™ 4.0, the os often temporarily stores data object specifications of executable or other software objects that are currently `open` and immediately executable or otherwise accessible to the cpu 150. although not specifically shown in fig. 2, parts of system memory 160 can be dynamically allocated for storing the data object specifications of open objects. the so-allocated memory space may be de-allocated when the corresponding object closes. the de-allocated memory space can then be overwritten with new information as demanded by system operations and actions of third party application programs.
for `desktop` objects, the corresponding data object specifications typically include the following: (1) a full pathname pointer to the object so that the object can be fetched and executed; (2) information about associated application programs, (3) information about the size, shape and location of the on-screen icon that represents the object, and (4) information about various user-set preferences that control user-programmable properties of the object. the corresponding data object specifications can of course include additional or alternative items of information of like nature. transfer of objects and specifications between application programs is typically handled in accordance with an inter-application protocol such as microsoft ole (object linking and embedding) or dde (dynamic data exchange).

referring to fig. 3, a simple choice-item drag-and-convert-to-object operation in accordance with the invention is now described in more detail. region 311 is a screen area containing the graphical metaphor for a desktop 312. within desktop 312, one or more application windows such as 314 may be opened. in this example, the opened window is running a word processing program (word process). it could just as easily have been a spreadsheet or a net-browser. a first scroll-down menu 316 has been unfurled from the file region of the window's menu bar 315. window 314 includes an exit-button (x in a square) 313 that may be clicked to quickly close the window and exit the application program.

in the example of fig. 3, the user has pointed to and has mouse-wise left-clicked on the history . . . side menu to reveal a historical listing 317 of names of files recently worked on. the user has then moved the cursor (318, shown in a later position of the operation) to select (e.g., highlight) the my_file2 choice-item 317a.
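the four kinds of specification information enumerated above can be gathered into one record. the following python dataclass is only an illustrative stand-in (field names are invented; it is not the os's actual record layout nor the ole/dde wire format):

```python
from dataclasses import dataclass, field

@dataclass
class ObjectSpec:
    """toy record for a `desktop` object's data object specifications,
    mirroring items (1)-(4) in the text above."""
    pathname: str                                         # (1) full pathname pointer
    associated_apps: list = field(default_factory=list)   # (2) associated programs
    icon: dict = field(default_factory=dict)              # (3) icon size/shape/location
    preferences: dict = field(default_factory=dict)       # (4) user-set preferences

spec = ObjectSpec(pathname="c:\\docs\\my_file2",
                  associated_apps=["word_process"])
assert spec.pathname.endswith("my_file2")
assert spec.preferences == {}   # no user-set preferences yet
```

a record like this is what the monitoring subroutine would save at selection time and what an inter-application protocol would carry during the drag.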
thereafter the user has right-clicked on the my_file2 choice-item 317a and dragged a generated copy 317b of that choice-item across the screen toward region 312c. cursor 318 is shown in the state where the cursor is riding along with the now being-dragged object 317b. the user will continue to depress the right mouse button and drag icon 317b until it comes into a user-chosen, empty desktop region 312c. at that point, the user releases the right mouse button and thereby drops the dragged object 317b over new region 312c. the dropped item 317b is then converted by the receiving desktop area 312c into a new user-activatable object 319. the properties of the new object 319 are initially established by the inter-application object-transfer protocol then in force for drag-and-drop operations. the inter-application object-transfer protocol can be the microsoft ole standard or dde or any other such protocol. in most instances, the user can modify a "preferences" portion within the properties of the new object 319 after it has been dropped into position. the new object 319 may be represented by a graphical icon or other indicia as appropriate. in the illustrated example, the new object 319 is represented by a 3-dimensional appearing, word-processor launching icon having the title my_file2 as shown in screen region 312c. a user can revise the title portion of object 319 to whatever title is desired using the rename facilities of the operating system. after object 319 is created from the corresponding choice-item (e.g., my_file2) and is optionally renamed, the user may double-click on the new object 319 or otherwise activate it. at that point in time, a version of the same application program (word process) will be launched and the file named my_file2 will be automatically opened from within the launched application. in this way, the user is provided with a shortcut to opening the desired file within the desired application program. 
the user avoids having to separately open the application program and then traverse through a series of menus 316 and submenus 317 to reach the desired goal state. figs. 4a and 4b show a flow chart for a method 400 in accordance with the invention for converting menu choice-items into drag-and-drop objects. at step 410 a choice-item converting program 400 in accordance with the invention is automatically launched along with other system-invoked processes during start up of the computer 100. the automatic launching can be realized for example by including a pointer to the choice-item converting program 400 in a windows start_up folder that is used during start up by microsoft windows 3.1™, microsoft windows 95™, microsoft windows nt™ 4.0, or a like windows-based operating system. program 400 can also be initiated by the user rather than automatically at system start up. at first entry step 415, the choice-item converting program 400 registers a request for an os call back to a second entry point 420 each time the os receives a broadcast message indicating that a new application program has been launched. the request registration of step 415 may be carried out with an events hooking procedure. events hooking within gui environments such as the windows 3.1™ and windows 95™ environments is a procedure well known to those skilled in the art and will therefore not be detailed here. reference is made to the microsoft application development guidelines and to the various works of matt pietrek, such as `windows 95 system programming secrets`, idg books, regarding such event hooking procedures. after step 415, the choice-item converting program 400 makes a first return (return-1) as indicated at 416 and returns control to the operating system (os). dashed line 417 represents a time period during which the operating system (os) or other programs are in control. 
at an event-driven time point 419, the os transfers control back to re-entry point 420 of the choice-item converting program 400. this happens because a `new application launched` event report has been received by the os and the os has executed the call back as requested at step 415. at step 425 a `procedure subclassing` operation is performed to attach a popup menu monitoring subroutine (having entry point 430) to the newly launched application program. such `procedure subclassing` is known to those skilled in the art and will therefore not be detailed here. reference is made to the microsoft application development guidelines and to the various works of matt pietrek such as `windows95 system programming secrets`, idg books, regarding such procedure subclassing operations. the subclassing operation 425 in essence registers a further request for an os call back to a third entry point 430 each time the os receives a broadcast message indicating that a windows event has occurred involving the subclassed application program. such windows events include a wide variety of activities within the subject window (e.g., window 314 of fig. 3) such as: a mouse-move within the area of the window (314), a mouse-button click within the area of the window, the opening of a menu-type subwindow (316) within the area of the window, a request for closing of the window and exiting from the running application (e.g., because window exit button 313 has been clicked on), and so forth. after step 425, the choice-item converting program 400 makes a second return (return-2) as indicated at 426 and relinquishes control back to the operating system (os). dashed line 427 represents a time period during which the operating system (os) or other programs are in control. at an event-driven time point 429, the os returns control to re-entry point 430 of the choice-item converting program 400. 
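the two-stage hooking arrangement of steps 415 and 425 can be summarized in a platform-independent sketch: an app-launch hook is registered once, and each newly launched application is then `subclassed` so that its window events are routed through a monitoring callback. the class and method names here are illustrative assumptions, not the actual win32 calls:

```python
# simulation of the hook/subclass flow of steps 415 and 425.
class EventBus:
    def __init__(self):
        self.launch_hooks = []   # step 415: callbacks for `new app launched`
        self.subclassed = {}     # step 425: app id -> window-event monitor

    def hook_app_launch(self, cb):          # register interest in launches
        self.launch_hooks.append(cb)

    def launch_app(self, app_id):           # os broadcasts a launch event
        for cb in self.launch_hooks:
            cb(app_id)

    def subclass(self, app_id, monitor):    # attach monitoring subroutine
        self.subclassed[app_id] = monitor

    def window_event(self, app_id, event):  # route window events to monitor
        mon = self.subclassed.get(app_id)
        return mon(event) if mon else None

bus = EventBus()
seen = []
# step 415: on every launch, subclass the new app with a monitor (entry 430)
bus.hook_app_launch(lambda app: bus.subclass(app, lambda ev: seen.append((app, ev))))
bus.launch_app("word process")
bus.window_event("word process", "choice-item selected")
```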
this happens because a windows event has been detected for the window in which the subclassed application program is running. many of the possible window events are of no interest to the popup menu monitoring subroutine (which subroutine has entry point 430) and control is quickly returned to the os if this is found to be so. one window event that is of interest to the popup menu monitoring subroutine is an event that calls for either a closing of the subclassed application program or a shutdown of the operating system (os). one possibility is that the user has clicked on a file exit choice-item and has thereby requested a complete closing of the application program. another possibility is that the user has clicked on the window exit button 313. yet another possibility is that a system error has occurred and the error handling software has invoked a system shutdown. at step 440, a first test is conducted to determine if the callback-invoking event indicates that the subclassed application program of that window is closing or that the os is shutting down. if the answer is yes, control passes to step 470. in step 470, the closing application program is returned to a nonsubclassed state. this allows the application program to later relaunch cleanly even if for some reason the choice-item converting program 400 is not invoked prior to the launch. in a following step 472 a test is made as to whether the event is merely an exit from the application program (because of either a user request or because of some other reason) or whether the event is to continue with a system shutdown (again either because of a user request or because of some other reason). if the system is not shutting down (and the response to test 472 is no), control passes to step 474 where the application is allowed to complete its normal closing. a return (return-4) is made at step 475 to return control back to the os. step 475 is shown returning control to wait-state 417. 
the choice-item converting program 400 will restart at step 420 the next time a new application program is launched. if the system is shutting down (and the response is yes to test 472), control passes to step 476 where the hook request of step 415 is undone. at step 477, the shutdown of the system is allowed to continue. the choice-item converting program 400 terminates and does not restart until step 410 is again entered at some future time, such as when the system is brought back up. automatic re-start of the choice-item converting program 400 may not occur if changes are made to the start_up folder that remove it from the automatic start up list. besides program closing and system shutdown, another of the window events that is of interest to the popup menu monitoring subroutine is an event that causes `selection` of a next choice-item in a choice-listing menu. selection is not the same as invocation of a menu choice-item. selection generally occurs before invocation. typically the user is moving the mouse arrow (118) down a choice-listing menu, highlighting each successive choice-item in sequence. the highlighting constitutes a selection, but a mouse button has not yet been clicked to invoke the choice-item. selection of a choice-item can also be carried out with the cursor movement keys of the keyboard. at the time of choice-item selection, the os generates and/or points to a data buffer containing the object specifications of the selected choice-item. the object specifications can be owner-drawn, meaning that the running application program has generated them, or the object specifications can be system generated, meaning that the os has generated them. in either case the full object specifications are available at the time of choice-item selection. they may not always be available at other times. for example, a third party application program may arbitrarily choose at a later time to move or erase the object specifications. 
given this, if in step 445 the popup menu monitoring subroutine determines that the event was a move-invoked selection of a next choice-item, the popup menu monitoring subroutine (430) identifies and makes a full copy for itself of the object specifications of the selected choice-item at step 446 each time such a new selection is made. the copied object specifications are placed into a private memory buffer of the popup menu monitoring subroutine. if memory space has not been allocated previously for this buffer, it is so allocated (statically or persistently) the first time the popup menu monitoring subroutine 430 executes step 446. thereafter, with each subsequent selection of a choice-item, the object specifications of the selected item overwrite the object specifications of a previously selected choice-item. this is done so that application-drawn object specifications are assuredly preserved even if a third party application program arbitrarily chooses at a later time to move or erase the object specifications. after step 446, a return (return-4) is executed at step 447 to return control back to the os. step 447 is shown returning control to wait-state 427. if the next window event is another choice-item selection, step 446 will be performed again to capture the full object specifications of the latest selected choice-item. if the next window event is an invocation by a left mouse click on the selected choice-item, control passes from step 429, through the `no` outgoing path of step 440, through the `no` outgoing path 448 of step 445, and through the `no` outgoing path of next-described step 450 (fig. 4b), to quickly exit (return-5 back to os) at step 452. the os then handles the left-click invocation of the selected choice-item (e.g., file save or file close) in the usual manner. 
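the private-buffer behavior of step 446 (copy on every selection, overwrite the previous copy, clear after use as in step 466) can be sketched in platform-independent form; the class and method names are illustrative, not the actual implementation:

```python
# sketch of the step 446 private buffer: the monitor always holds a
# full private copy of the most recently selected choice-item's
# specifications, so they survive even if the application program
# later moves or erases its own copy.
class SelectionBuffer:
    def __init__(self):
        self._spec = None            # allocated lazily on first selection

    def on_select(self, spec):       # step 446: full copy, overwriting
        self._spec = dict(spec)      # the previously selected item's specs

    def take(self):                  # hand specs to the drag-and-drop
        spec, self._spec = self._spec, None   # step 466: clear for re-use
        return spec

buf = SelectionBuffer()
buf.on_select({"name": "my_file1"})
buf.on_select({"name": "my_file2"})  # overwrites the earlier selection
```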
if however, the next window event indicates completion of a `hot` user input (special keying) such as a mouse right-button depress followed by a begin of drag, control passes from step 429, through the `no` outgoing path of step 440, and through the `no` outgoing path 448 of step 445 to the hot input detecting test of step 450. a yes answer to the hot input detecting test of step 450 is taken as user request for conversion of the last selected choice-item into a drag-and-drop object. at subsequently executed step 460, the process id of the application program is initially obtained and stored. this is done to support a later-described step 550 of one embodiment wherein the process id is used. next, the handle of the currently-open choice-listing menu is located and the os is instructed to close that menu and to close all higher level choice-listing menus from which the first menu springs. in the case of fig. 3 for example, submenu 317 would close and be erased from the screen and then higher level menu 316 would close and roll back up into its file source area in menu bar 315. a method for locating the windows handle of the currently-open, lowest level, choice-listing menu is described below with reference to fig. 5. at following step 462, after the popup menus have closed, a drag-and-drop object such as shown at 317b of fig. 3 is created. this is done by passing to the operating system (os) the object specifications of the last selected choice-item as they were saved in the private buffer of the popup menu monitoring subroutine (430) during the last pass through step 446. the operating system (os) is asked to begin a drag-and-drop operation using the passed object specifications. in fig. 3, this step results in the on-screen drawing of a drag-and-drop object such as shown at 317b. the actual size and shape of the drawn object 317b will vary in accordance with the passed object specifications and how the os responds to them. 
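the branch structure of steps 440, 445, 450 and 460-462 reduces to a small event dispatch: closing events unwind the subclassing, selections are buffered, and a right-button drag (the `hot` input) first closes the open menus and then begins a drag-and-drop using the buffered specifications. the event shapes and state keys below are illustrative assumptions:

```python
# condensed, platform-independent sketch of the dispatch in fig. 4a/4b.
def dispatch(event, state):
    if event["type"] in ("app_close", "os_shutdown"):   # step 440 branch
        state["subclassed"] = False                     # step 470: un-subclass
        return "closed"
    if event["type"] == "select":                       # steps 445/446
        state["buffer"] = dict(event["spec"])           # private full copy
        return "buffered"
    if event["type"] == "right_drag_begin":             # step 450: hot input
        state["menus_open"] = 0                         # step 460: close menus
        state["dragging"] = state.pop("buffer", None)   # step 462: begin drag
        return "dragging"
    return "ignored"             # e.g., a left-click invoke is left to the os

state = {"subclassed": True, "menus_open": 2}
dispatch({"type": "select", "spec": {"name": "my_file2"}}, state)
result = dispatch({"type": "right_drag_begin"}, state)
```

note that the menus are closed before the drag begins, mirroring the ordering requirement discussed below for step 460.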
the user will continue to drag the generated object (e.g., 317b) until it reaches a user-chosen drop area, such as open desktop area 312c of fig. 3. the user then releases the mouse button and thereby drops the dragged object onto the receiving area (e.g., 312c). the dropping of the drag-and-drop object 317b is indicated by step 464 of fig. 4b. in step 464, the receiving area (or target area, e.g., 312c) can respond to the drop operation in a variety of ways. one way is by changing the appearance of the dropped object. this possibility is indicated in fig. 3 by the difference in appearance between the in-drag object 317b and the ultimately dropped object 319. transfer of object specifications from the dragged object (e.g., 317b) to the receiving area (e.g., 312c) can be by way of ole's uniform data transfer protocol (udt) or by way of like transfer protocols of the then in-use inter-application communication processes (e.g., dde). when the dragging completes, the target area (e.g., 312c) may optionally respond to the dropping of the dragged object 317b by presenting a new choice-listing menu to the user. this action is not shown in fig. 3. however, step 464 of fig. 4b shows that the typical choice-listing menu at this stage will have choice-items such as copy, move, and create_shortcut. the user selects and invokes one of these choice-items and the target area (e.g., 312c) responds accordingly. one important thing to note is that if step 460 (closing of the original menus) had not been previously carried out, there would be a conflict of open menus on the screen and the drag-and-drop operation of step 462 would fail in cases where the target area (e.g., 312c) tries to open its own choice-listing menu (e.g., copy, move, create_shortcut). step 460 assures that the drag-and-drop operation will complete without conflict between multiple open menus. 
user selection of the copy choice-item at the target area will generally result in leaving the source choice-item data intact in memory and creating a copy of the data in memory as a separate entity. user selection of the move choice-item at the target area will generally result in deletion of the source choice-item data and in the creation of the `moved` version as a substitute entity having a new address. user selection of the create_shortcut choice-item at the target area will generally result in leaving the source choice-item data intact and in the creation of a dropped object (e.g., 319) as a separate entity that automatically launches the associated application program when double-clicked on. the above examples are not intended to limit the actions of the target area (e.g., 312c) in response to the dropping of the dragged object (e.g., 317b). properties of the dropped object (e.g., 319) can vary in accordance with the specific source menu, the application program that produces the source menu, the specific choice-item (317a) that serves as the source of the drag-and-drop object (317b), the specific target area that receives the dragged-and-dropped object, the inter-application drag-and-drop transfer-protocol being used, and various user-set preferences. the target area that receives the dragged-and-dropped object can be an open desktop area or a second gui object or a shortcut window associated with any particular application that supports the inter-application drag-and-drop transfer-protocol (e.g., ole or dde). at step 466, the private buffer memory space that stored the object specifications for the drag-and-drop operation is cleared and thereby made ready for re-use with next-occurring menu selections. after step 466, a return (return-6) is executed at step 480 to return control back to the os. step 480 is shown returning control to wait-state 427. 
if a subsequent window event is another choice-item selection, step 446 will be performed again to capture the full object specifications of the latest selected choice-item. if a following window event is a hot user input such as dragging with the right mouse button depressed, steps 450 and 460 through 480 will be performed again to generate another dragged-and-dropped object. any desired and practical number of conversions of choice-items into dragged-and-dropped objects may be carried out using either a same source window or different source windows, as desired. after a dragged-and-dropped object is created in accordance with the invention from a choice-item, that object can be later deleted if desired in the same manner that other desktop objects can be deleted. for example, the created object may be dragged into the trash bin (or recycling bin, not shown) of the desktop so that it will be deleted when a garbage collection operation occurs. if a user later needs the same object again, the user can recreate it by again selecting the choice-item in an unfurled menu and hot keying on the selected choice-item so as to produce a drag-and-drop object from the selected choice-item. referring to fig. 5, one method 500 for closing currently open popup menus (as called for in step 460) is shown. method 500 applies to the microsoft windows 3.1™ and microsoft windows 95™ operating systems. similar methods may be used for other os's. the close popup menus subroutine 500 is entered at step 510. at step 515, a pointer is set to point to a first window handle in the os's list of window handles. a popup menu (choice-listing menu) is treated as a special kind of window by the os. the aim here is to locate and close all the open choice-listing menus of the currently subclassed application program. at step 520, a test is conducted to see if the end of the window handles list (eol, end of list) has been reached. the last entry in the list is an invalid one or an eol marker. 
if such an end of list has been reached, an exit (return on `done state`) is taken by way of step 525. at step 530, a test is conducted to see if certain style bits associated with popup menus are set. in the microsoft windows 3.1™ and microsoft windows 95™ operating systems these are the ws_ex_toolwindow extended style bit (which indicates a `floatable` window) and the ws_ex_topmost extended style bit (which indicates the window is shown in the foreground rather than being in the background). testing for these bits quickly rules out many possibilities that would not qualify under one of the subsequent tests (540 and 550). because so many nonqualifying possibilities can be ruled out with such a simple bit-masking test, step 530 is preferably carried out in the qualifying test chain before steps 540 and 550. if the answer is no in test step 530, the current window handle is clearly not for a popup menu and path 517 is followed to step 518. step 518 increments the current window pointer to point to the next window handle and transfers control to step 520. if the answer is yes to the test performed at step 530, a further filtering test is performed at step 540. not all windows that pass muster under test 530 are popup menus. in step 540 a test is conducted to see if the class name corresponds to that of a popup menu. this test is often more time consuming than the bit-masking test of step 530 and is therefore preferably carried out after step 530 so as not to waste system time on window handles that can be filtered out by the faster test 530. in the microsoft windows 3.1™ and microsoft windows 95™ operating systems the so-indicative class name for popup menus is designated as "#32768". if the answer is no, the current window handle is not for a popup menu and path 517 is followed to step 518. not all popup menus are to be closed, however. 
if the answer is yes to the test performed at step 540, a next filtering test is performed at step 550 to pick out only those popup menus belonging to the subclassed application program now having one of its choice-items converted into a drag-and-drop object. in step 550 this is done by testing to see if the process identifier of the current window matches the process identifier of the subclassed application program that is having its choice-item converted. if the answer is no, the current popup menu does not belong to the current application program and path 517 is followed to step 518. on the other hand, if the answer has been yes to each of tests 530, 540 and 550, the next step performed is step 560. in step 560 one or more messages are sent to the current window (the current popup menu) to cause it to close. one way of doing this is by indicating depression and release of the escape key as shown by way of example in fig. 5. the subclassed application program that is now having one of its choice-items converted into a drag-and-drop object can have multiple levels of submenus open. after step 560 is performed, control is passed to step 518 via path 517. the loop is repeated until there are no more windows open that qualify under the tests performed by steps 530, 540, 550 and 520. at the end of the window handles list, an exit (or return out of subroutine) is taken by step 525. the specific actions to be performed when a choice-item of a subclassed application program is converted into a drag-and-drop object depend on the factors listed above, mainly, the point of origin of the drag-and-drop object and the point of deposit of the object. a first set of actions in accordance with the invention convert a `function` choice-item into a drag-and-drop object and deposit the dragged object onto an open desktop area. the term, `function` choice-item refers to choice-items that when invoked, cause a particular program to execute independent of a specific data file. 
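the qualifying-test chain of fig. 5 can be sketched as a simple filter over the window-handle list: the cheap style-bit mask (step 530) runs first, then the class-name check (step 540), then the process-id match (step 550), and only windows passing all three are sent a close message (step 560). the style-bit constants below mirror the real win32 values; the window records themselves are mocked for illustration:

```python
# sketch of the close-popup-menus filter chain of fig. 5 (method 500).
WS_EX_TOOLWINDOW = 0x00000080   # `floatable` window
WS_EX_TOPMOST = 0x00000008      # shown in the foreground
POPUP_CLASS = "#32768"          # class name of windows popup menus

def close_popup_menus(windows, app_pid):
    closed = []
    for w in windows:                                  # walk the handle list
        style = w["ex_style"]
        if not (style & WS_EX_TOOLWINDOW and style & WS_EX_TOPMOST):
            continue                                   # step 530: fast rule-out
        if w["class_name"] != POPUP_CLASS:
            continue                                   # step 540: class filter
        if w["pid"] != app_pid:
            continue                                   # step 550: wrong program
        closed.append(w["handle"])                     # step 560: send close
    return closed

windows = [
    {"handle": 1, "ex_style": WS_EX_TOOLWINDOW | WS_EX_TOPMOST,
     "class_name": "#32768", "pid": 42},               # qualifies
    {"handle": 2, "ex_style": WS_EX_TOPMOST,
     "class_name": "#32768", "pid": 42},               # fails step 530
    {"handle": 3, "ex_style": WS_EX_TOOLWINDOW | WS_EX_TOPMOST,
     "class_name": "edit", "pid": 42},                 # fails step 540
    {"handle": 4, "ex_style": WS_EX_TOOLWINDOW | WS_EX_TOPMOST,
     "class_name": "#32768", "pid": 7},                # fails step 550
]
```

ordering the cheapest test first is the point of step 530: most handles are eliminated before the costlier class-name and process-id lookups are ever made.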
examples include choice-items having names such as open, save_as, mail, cut, copy, and paste. for some of these, the user is expected to supply further information in order for the function to complete. in the case of file open for example, the user has to indicate what specific file is to be opened. in the case of file save_as, the user is expected to indicate the name of the new file into which the current data is to be saved. in the case of mail, the user is expected to indicate the name or names of the entities to whom e-mail is to be sent. there are a number of advantages to using an immediately-executable `function` object located on the open desktop instead of using a choice-item hidden in a yet-to-be unscrolled choice-listing menu. these advantages are particularly important for those specific functions that a user finds him/herself using with greater regularity than most other menu-borne functions. first, the user avoids extra steps such as having to navigate through a series of menus and submenus. the user clicks directly on the dragged-and-dropped object to have the function immediately performed. second, the functional capability of the dragged-and-dropped object is constantly being advertised to the user on the desktop rather than being hidden as a choice-item in an unscrolled menu. given this, the user may be better able to recall that the particular function is available. third, the position and size of the dragged-and-dropped object remains constant on the open desktop irrespective of where the application program window is moved or how the application program window is sized. this consistency may be advantageous in certain situations such as when users are periodically `minimizing` application program windows and placing the shrunken versions on the taskbar even though it is desirable to perform certain program-internal functions. (the menu bar, e.g. 
115 of a minimized window is not visible and hence cannot be immediately used for unfurling a choice-listing menu.) a second set of actions in accordance with the invention convert a `data file` choice-item into a drag-and-drop object and deposit the dragged object onto an open desktop area. the term, `data file` choice-item refers to choice-items that when invoked, cause a particular data file to open under a specific application program. examples include the historical worked-on files discussed above. the advantages of using an immediately-executable `data file` object located on the open desktop instead of using a choice-item hidden in a yet-to-be unscrolled choice-listing menu are similar to those described for function objects, namely, (a) the user avoids extra steps such as opening the application program and navigating through a series of menus and submenus to get to the desired choice-item, (b) the availability of the dragged-and-dropped `file data` object is constantly being advertised to the user rather than being hidden as a choice-item in an unscrolled menu, (c) the position and size of the dragged-and-dropped object remains constant on the open desktop, and (d) even if the name of the specific data file rolls past the end of the history . . . tracking portion in the application's choice-listing menu, it is nonetheless preserved by the created `data file` object. the application program that is launched when double-clicking on a `data file` object created in accordance with the invention is not limited to word processors. as explained above, all other forms of application programs that use data files may be launched including spreadsheet programs, financial tracking programs, modeling programs, picture-drawing programs, and so forth. a third set of actions in accordance with the invention convert a `net download` choice-item into a drag-and-drop object and deposit the dragged object onto an open desktop area. 
the term, `net download` choice-item refers to choice-items that when invoked, cause data to be downloaded over a communications network from a particular source site to nonvolatile storage on the user's computer under the supervision of a specific network using program. one example of this is discussed above and referenced as an `external` download file which is named in a `favorites` choice-listing menu of an internet or intra-net browsing program. the download can be by ftp (file transfer protocol) or http (hypertext transfer protocol) or any other download protocol. fig. 6 illustrates some of the above points. like reference numerals in the `600` number series are used in fig. 6 where possible to represent like reference elements of fig. 3 that were numbered in the `300` series. as such, in the embodiment 600 of fig. 6 it is understood that: 612 is a gui desktop, 614 is an opened window containing a first application program such as a word processing program, 615 is a menu-dropping bar, 616 is a first dropped menu containing file history information, 617 is a second dropped menu (dropped at a different time) that contains choice-items representing editing functions such as copy (to clipboard), 619a is a first dragged-and-dropped object that was created in accordance with the invention by right-dragging the my_file4 choice-item from menu 616, and 619b is a second dragged-and-dropped object that was created in accordance with the invention by right-dragging the copy choice-item from menu 617. note that for dragged-and-dropped objects 619a and 619b the name of the originating application program (e.g., word process) is indicated on the front face of the rectangular icon. other means can of course be used for indicating the originating application program, such as changing the shape of the representative icon. 
note further that for object 619b, the user has renamed the marquee title to read copy to clip so as to more clearly indicate that whatever happens to be selected in window 614 will be copied to the system clipboard. fig. 6 shows some additional features not previously discussed with respect to fig. 3. desktop 612 includes a `task bar` 630. this task bar 630 generally contains one or more task buttons such as 631, 632 and 633. each task button may be used to expand a previously shrunken window. clicking on task button 631 for example brings a start_up window out onto the desktop 612. window 614 is shown to contain, in addition to a close-window button 643 (x in a square), a window maximizing button 642 (square in a square), and a window minimizing button 641 (flattened edge in a square). clicking on the window minimizing button 641 causes window 614 to shrink in size until its internal contents are no longer visible and it further causes the shrunken window to move to a button position inside of task bar 630. this new position can be task button 632 for example. the shrinking of window 614 and its movement into the form of task button 632 is represented by dashed lines 634. of importance, when window 614 is minimized, its minimal representation 632 does not provide the user with immediate access to the menu-dropping bar 615. in the past, if the user wanted to invoke the copy function of bar 615 after window 614 had been minimized, the user generally had to click on task button 632 so as to return window 614 to its original size, then click on the edit portion of menu-dropping bar 615, and then select and activate the copy choice-item. however, with the dragged-and-dropped copy object 619b now present on the desktop 612, a user can simply click on the copy_to_clip object 619b, even if window 614 is still minimized. 
whatever was selected inside shrunken window 614 will be copied to the system clipboard for retrieval by some other application program (e.g., an electronic mailing program). for completeness, and as will be understood by gui users, the maximizing button 642 of window 614 causes that window 614 to expand to fill the display area to the maximum extent possible. the menu-dropping bar 615 of the word process window is shown to include a tools area. clicking on the tools area causes a choice-listing menu (not shown) to unfurl and display as choice-items a variety of tool functions. the tool functions can include spell-checking, grammar-checking, clip-art fetch, and so forth. sometimes, an application program provides a `tool bar` capability such as shown at 620. the tool bar 620 contains a set of tool icons (e.g., magnifying glass, hammer, etc.) each representing a specific tool function in nonliteral form. the user is allowed to click on a tool icon and invoke the corresponding tool function in place of locating the tool function in a choice-listing menu and activating the same function through the latter path. most application programs unfortunately provide only a limited number of tools within the tool bar 620 and no simple way of changing the tool set contained in the tool bar 620. in accordance with the invention, however, tool choice-items from a tools choice-listing menu can be converted to drag-and-drop objects and dragged into a tool bar 620 to add a corresponding new tool to the tool bar 620. in the illustrated example, 618 is a `macro` portion of an unfurled tools menu. one of the macro choice-items is playback. the corresponding playback function plays back a specified macro. a macro is formed by recording a sequence of actions within an application program for later playback within that application program. 
although not shown, it is to be understood that a macro playback choice-item oftentimes unfurls to display a submenu of macro functions, one of which is to be selected and activated for playback. the present invention contemplates the conversion of choice-items from such a submenu into drag-and-drop objects that are moved to the tool bar 620. alternatively, as shown in fig. 6, the macro playback choice-item may itself be converted (by right-dragging) into a drag-and-drop object that is deposited into the tool bar 620 for thereby adding a new tool icon. in the illustrated example, the macro playback choice-item is so-converted and used to produce the icon with two reels and tape extended between them. dashed lines 624 represent the conversion of the macro playback choice-item into a drag-and-drop object and the dropping of that object into the tool bar 620. the conversion of the drag-and-drop object into an application tool can be mediated either by the application program being originally configured to so respond to the os's data transfer protocol (e.g., ole or dde) or by the application program having been subclassed to so carry out the conversion of the dragged-and-dropped object into the corresponding tool in response to the drop operation. fig. 6 shows a second window 674 as having been opened on the desktop 612. in this example, a network-using program such as an internet browser (fictitiously named, net browser) is running within second window 674. the menu-dropping bar 675 of second window 674 includes a favorites area which may be clicked on to unfurl a favorites choice-listing menu 677. within the favorites choice-listing menu 677 are a plurality of choice-items representing favorite network sites of the user. by way of example they are fictitiously named in fig. 6 as the abcsite, the defsite and the xyzsite. each could be a web site on the internet or a site on another kind of network.
dashed lines 654 indicate that the xyzsite choice-item has been right-dragged onto the open desktop in accordance with the invention and thereby converted into executable object 679a. the marquee title of object 679a has been renamed by the user to read as xyz_site_visit so that its function is more clearly understood than the original title of the source choice-item, xyzsite. double clicking on this executable xyz_site_visit object 679a invokes an automatic launching of the associated application program (net browser) and an execution of the function underlying the favorites xyzsite choice-item. in other words, it automatically invokes a visit to the xyzsite on the associated network. the menu-dropping bar 675 of second window 674 further includes a file area which may be clicked on to unfurl a file-functions choice-listing menu 676. file-functions of menu 676 are directed to files that are operated on by the net browser program. dashed lines 655 indicate that the save_as choice-item of menu 676 has been right-dragged onto the open desktop in accordance with the invention and has been thereby converted into executable object 679b. the marquee title of object 679b has been renamed by the user to read as save_visit_as so that its function is more clearly understood than the original title of the source choice-item, save_as (by 655). double clicking on this executable save_visit_as object 679b invokes the corresponding file-function of the associated application program (net browser). in other words, it automatically causes whatever data is open within second window 674 to be saved under a user-provided file name. fig. 6 indicates that there was a third window (not shown) previously opened on the desktop 612 and running a spreadsheet program (fictitiously named, spreadsheet). this third window has been minimized into third task button 633.
before minimization, however, a file-history choice-item (not shown) of the third window was converted in accordance with the invention into a drag-and-drop object that was dropped at 689. the so-converted file-history choice-item (not shown) of the third window choice-item references the same file as is saved by object 679b so that the functions of objects 679b and 689 can be executed in succession to respectively save data from a visited favorite site (xyzsite) to disk under a given name and to thereafter launch the spreadsheet program and open the just-saved disk file for further processing by the spreadsheet program. in the illustrated example, the user has renamed the marquee title of object 689 to read as process xyz_site_file so that the function of object 689 is more clearly understood than the original title of the source choice-item, history . . . \xyz_site_file (not shown). dashed lines 684 indicate that the history . . . \xyz_site_file choice-item (not shown) of the now-shrunken spreadsheet window had been right-dragged onto the open desktop in accordance with the invention and had been thereby converted into executable object 689, and that thereafter the third window had been minimized into the form of third task button 633. in view of the above, it is seen that a fourth set of actions in accordance with the invention convert a `net_site visit` choice-item (in 677) into a drag-and-drop object and deposit the dragged object (679a, xyz_site_visit) onto an open desktop area. the term, `net_site visit` choice-item refers to choice-items that when invoked, cause a particular network site such as an internet web site to be visited over a communications network and the contents of the site to be made viewable on the user's computer under the supervision of a specific network browsing program (e.g., net browser of window 674).
one example of such a choice-item is that which comes within a `favorites` or `bookmarks` choice-listing menu 677 of an internet or intra-net browsing program. double-clicking on the dragged-and-dropped object 679a that was created from the `favorites` or `bookmarks` choice-listing menu causes the browser to be launched and the favorite site to be automatically visited. a fifth set of actions in accordance with the invention convert an `e-mail addressee` choice-item into a drag-and-drop object and deposit the dragged object onto an open desktop area. the term, `e-mail addressee` choice-item refers to choice-items that when invoked, cause a particular e-mail address to be inserted into a `send_to` field of an e-mailing program. double-clicking on the dragged-and-dropped object created from such an `e-mail addressee` choice-item causes the corresponding electronic mailing program to be launched and the e-mail address of the specified addressee to be automatically copied into the `send_to` field of an e-mail letter under composition. the conversion of the dragged-and-dropped object into such an on-the-desktop, application-launching and initiating tool can be mediated either by the application program having been originally configured to, at the time of the choice-item right-drag operation, so generate the corresponding instructions in accordance with the os's data transfer protocol (e.g., ole or dde) or by the application program having been subclassed to so produce the conversion instructions for converting the dragged-and-dropped object into the corresponding on-the-desktop tool in response to the right-drag operation. a sixth set of actions in accordance with the invention convert a function choice-item (e.g., in 618) into a drag-and-drop object and deposit (624) the dragged object onto a toolbar area (620) of an opened window so as to add the functionality of the dragged-and-dropped object to the tool bar.
note that this is different from the dropping of a dragged object onto an open desktop area. the conversion of the drag-and-drop object into an application toolbar item can be mediated either by the application program being originally configured to so respond to the os's data transfer protocol (e.g., ole or dde) or by the application program having been subclassed to so carry out the conversion of the dragged-and-dropped object into the corresponding tool in response to the drop operation. one example of such tool creation, as shown in fig. 6 at 624, involves the `macro` function found in many application programs. a given macro is formed by recording a sequence of actions within an application program and storing these actions under a particular macro filename for later playback within the same application program. often the user has to navigate through multiple menus and submenus to reach the desired `macro playback` function. in accordance with the invention, however, the choice-item that represents the desired `macro playback` function is converted into a drag-and-drop object. the created drag-and-drop object is then dragged-and-dropped over a tool bar area. the object-receiving tool bar area converts the received object into an in-bar tool (such as the tape-and-reels shown in toolbar 620). clicking on the in-bar tool invokes the desired `macro playback` function. in addition to, or besides the `macro playback` function, many application programs have further `tool` functions that are hidden as choice-items in low level submenus. examples include the `spell-checking`, `grammar-checking` and `thesaurus-substitute` functions. such tool functions may be found in a word processor or in a host of other kinds of application programs including even picture-drawing programs and spreadsheet generating programs. in accordance with the invention, the choice-item that represents a desired `tool` function is converted into a drag-and-drop object. 
the created drag-and-drop object is then dragged-and-dropped over a tool bar area. the object-receiving tool bar area then converts the received object into an in-bar tool. clicking on the in-bar tool invokes the newly-added `tool` function. in this way, the user can bring a favorite tool of the user to the forefront for quicker access and activation. another example of tool creation involves the `favorites` or `bookmarks` choice-listing menu found in web browsers and like net-using application programs. the same net-using application programs often include a tool bar 680. in accordance with the invention, the choice-item that represents a desired net-site address (e.g., xyzsite) is converted into a drag-and-drop object. instead of dropping onto an open desktop area, the created drag-and-drop object is dragged-and-dropped over a tool bar area (680). the object-receiving tool bar area converts the received object into an in-bar tool. clicking on the so-created in-bar tool causes the net-using application program to access the corresponding `favorite` site (xyzsite). such an access can be in the form of a web-site visit or an ftp file-download or a newsgroup exchange, as appropriate. in this way, the user can bring a favorite network-using activity of the user to the forefront as an activatable tool that provides more convenient access to the favorite network-using activity. a seventh set of actions in accordance with the invention convert a function choice-item of a first application program (e.g., word process) into a drag-and-drop object and deposit the dragged object onto another object representing a second application program (e.g., e_mail or spreadsheet). the receiving application program absorbs the dropped object and reacts according to its specific programming.
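the in-bar tool creation described above can be sketched in python; the class and attribute names (ToolBar, ChoiceItemDrop) are illustrative assumptions, not part of any actual gui framework:

```python
# Hypothetical sketch: a tool bar that converts a dropped choice-item
# descriptor into an in-bar tool whose click invokes the underlying function.

class ChoiceItemDrop:
    """Drag-and-drop object produced by right-dragging a choice-item."""
    def __init__(self, title, action):
        self.title = title    # e.g. "macro playback"
        self.action = action  # callable run when the in-bar tool is clicked

class ToolBar:
    def __init__(self):
        self.tools = []  # in-bar tools, in display order

    def on_drop(self, dropped):
        # The object-receiving tool bar area converts the received
        # object into an in-bar tool.
        self.tools.append({"label": dropped.title, "run": dropped.action})

    def click(self, index):
        # Clicking the in-bar tool invokes the newly-added function.
        return self.tools[index]["run"]()

bar = ToolBar()
bar.on_drop(ChoiceItemDrop("macro playback", lambda: "playing macro"))
result = bar.click(0)
```

in a real implementation the drop handler would be installed by subclassing the tool bar's window procedure or by registering a drop target through the os's data transfer protocol, as the text notes.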
note that the dragging-and-dropping of a first choice-item-sourced object onto a second object (which second object represents another application) is different than dropping a dragged object onto an open desktop area or onto an in-application tool bar. the inter-application transference and the by-transit conversion of the dragged-and-dropped object into an appropriate object within the second application window can be mediated either by the first and second application programs having been originally configured to so coordinate with one another using the os's data transfer protocol (e.g., ole or dde) or by one or both of the first and second application programs having been subclassed to so carry out the conversion of the dragged-and-dropped object into the corresponding object in response to the right-drag and drop operation. one example of such dropping of a first choice-item-sourced object into a second object involves e-mailing programs. in accordance with the invention, a historical file-name choice-item is right-dragged from the file menu or a history submenu thereof and the resulting object (e.g., 619a) is dropped onto a second object (second icon, e.g. 690) representing an e-mailing program. this operation is represented by dashed path 691 in fig. 6. the drop causes the e-mailing program 690 to start up and to automatically generate a new e-mail letter containing the contents (text or other) of the data file (e.g., my_file4) associated with the dropped file object (e.g., 619a). the user thereby bypasses the conventional procedure of manually launching the e-mailing program 690 by double-clicking on its icon and thereafter using a file open function of the e-mailing program to locate and fetch the desired material (e.g., that which is stored in my_file4). in a variation of the above, the dropped first object becomes an `attachment` to the generated e-mail letter rather than part of the body of the letter.
the determination of whether mail-insertion or mail-attachment should be carried out, can be made based on the `extension` parameters of the right-dragged object. for example, if the filename of the dragged object ends with a dot-txt (or ".txt") extension, it can be assumed that the contents of the named file are encoded in ascii or an os accepted format, and as such it is safe to embed the contents directly into the e-mail message as an insertion. on the other hand, if the filename of the dragged object ends with a dot-doc extension (or ".doc", or ".wpd", where the last named extension indicates a word processing document encoded according to the wordperfect.tm. format of corel corp.), it can be assumed that the contents of the named file are encoded in a nongeneric format, and as such should be `attached` as a separate file to the e-mail message rather than being directly embedded as part of the contents of the e-mail message. this discriminative sensitivity to the extension portion of a filename can be further applied to others of the right-drag and drop operations described herein. for example, the response of the receiving area of a {right-drag}-and-drop operation in accordance with the invention may respond differently if the filename extension categorizes the corresponding file as one such as `.exe` or `.com` or `.bat` or `.dll` which indicates an executable file; or if the filename extension instead types the file as one such as `.bmp` or `.gif` which indicates a graphic image file; or if the filename extension alternatively categorizes the corresponding file as one such as `.ini` which indicates a user preference file; or if the filename extension designates the file as one such as `.txt` or `.doc` which indicates a file containing text encoded according to a particular format. 
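the extension-sensitive insert-versus-attach decision described above can be sketched as follows; the extension lists are illustrative assumptions:

```python
# Sketch: decide from the dropped file's extension whether its contents
# are embedded in the e-mail body ("insert") or attached as a separate
# file ("attach").

def mail_action(filename):
    name = filename.lower()
    if name.endswith(".txt"):
        return "insert"            # generic text: embed in the message body
    if name.endswith((".doc", ".wpd")):
        return "attach"            # nongeneric encoding: attach separately
    return "attach"                # unknown format: default to the safer choice
```

the same table-driven test generalizes to the other extension categories the text lists (executables, graphic images, preference files).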
like many net browser programs, a number of e-mail programs may provide a `favorites` or `bookmarks` or `address book` choice-listing menu that lists addresses of persons/entities to whom the user most frequently sends e-mail. (the favorite e-mail recipients may be alternatively listed in an e-mail `aliases` or `addresses` menu.) an eighth set of actions in accordance with the invention enable the user to right-drag (or otherwise convert) a `favorites` or like choice-item out of an e-mail program and to drop it on the open desktop so as to create a `favorite addressee` object 694 such as the one entitled to_bill in fig. 6. path 692 represents such a creation of `favorite addressee` object 694. next, a filename choice-item (e.g., my_file4) can be dragged out of a second application program (e.g., word process) and dropped on the `favorite addressee` object 694. the contents of the corresponding file (e.g., my_file4) are then automatically copied into or attached to an e-mail letter and the letter is automatically sent to the `favorite addressee` (e.g., to_bill). as with the above described other processes involving the {right-drag}-and-drop operation in accordance with the invention, the inter-application transference and the by-transit conversion of the dragged-and-dropped object into an appropriate object within the receiving area can be mediated either by the first and second application programs having been originally configured to so coordinate with one another using the os's data transfer protocol (e.g., ole or dde) or by one or both of the first and second application programs having been subclassed to so carry out the conversion of the dragged-and-dropped object into the corresponding object in response to the {right-drag}-and-drop operation. the reference, "inside ole", 2nd edition, by kraig brockschmidt, ms press is cited here as a sample description of how ole may be used for inter-application transference of objects.
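a minimal sketch of this `favorite addressee` flow, with hypothetical names and without any real mail transport:

```python
# Hypothetical sketch: dropping a file object onto a "favorite addressee"
# object composes a letter and places it in an outbox for sending.

def make_addressee(address):
    outbox = []
    def on_drop(file_name, file_contents):
        # Copy the dropped file's contents into a letter and "send" it.
        letter = {"to": address, "body": file_contents, "source": file_name}
        outbox.append(letter)
        return letter
    on_drop.outbox = outbox
    return on_drop

to_bill = make_addressee("bill@example.com")   # fictitious address
letter = to_bill("my_file4", "quarterly numbers...")
```

in practice the drop would arrive through the os's data transfer protocol rather than a direct function call, and the insert-versus-attach choice would apply to the dropped file's contents.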
a ninth set of actions in accordance with the invention enable the user to convert a `favorites` choice-item originating in an e-mail program into an internet chat-site address. to do this, an addressee-identifying choice-item is right-dragged out of the e-mail program's window (690 after double-clicking thereon) and dropped onto a net browser icon (674 before it is opened as a window by double-clicking on the net browser icon). in response, the receiving net browser icon automatically opens, locates an internet chat-group address for the person/entity identified by the dropped e-mail address object, and establishes a chat-session with the so-identified person/entity. as a variation on the above, instead of establishing a chat session, the dropped e-mail address object could alternatively establish an over-network video conferencing session with the so-identified person/entity. as yet another variation on the above, instead of establishing a chat session, the dropped e-mail address object could alternatively cause the net browser 674 to automatically visit the world wide web (www) page of the so-identified person/entity. a tenth set of actions in accordance with the invention enable the user to intelligibly view specially-formatted files by using the choice-item converting process of the invention. for example, a first choice-listing menu may list a set of files that are encoded according to html format (hypertext markup language). a corresponding html viewing program is generally needed to intelligibly view the graphics within the html file in accordance with the intents of its author. if a desired html file is listed as a choice-item in a choice-listing menu, the user is allowed to right-drag (or otherwise convert) it into a drag-and-drop object and to drop the object on the icon of a net browser (674) or on the icon of another html viewing application program. the receiving icon automatically launches its program and opens the identified html file for viewing.
similar functions can be performed with other specially-encoded data files. suppose for example that a text file (e.g., having a dot-txt extension) is listed as a choice-item in a menu and the user wants to listen to the message contained in the text file rather than reading it. the user can be allowed to right-drag (or otherwise convert) the text file choice-item so as to produce a corresponding drag-and-drop object and to drop the object on the icon of a text-to-speech converting program. this action causes the receiving icon to automatically launch its text-to-speech converting program, to open the identified text file (.txt), and to automatically begin outputting the contained message as speech through an appropriate speech output device. suppose as a further example that a music file (e.g., having a dot-wav extension) is listed as a choice-item in a menu and the user wants to listen to the music (or other audio information) contained in the so-identified music file. the user can be allowed to right-drag (or otherwise convert) the music file choice-item so as to create a corresponding drag-and-drop object and to drop the object on the icon of an audio-reproducing program. this action causes the receiving icon to automatically launch its audio-reproducing program, to open the identified music file (.wav), and to automatically begin outputting the contained audio information as sound through an appropriate sound output device. suppose as yet a further example that a picture file (e.g., having a dot-bmp extension or a dot-gif extension or a dot-pcx extension or some other picture defining format) is listed as a choice-item in a menu and the user wants to view the picture contained in the so-identified picture file. the user can be allowed to right-drag (or otherwise convert) the picture file choice-item so as to create a corresponding drag-and-drop object and to drop the object on the icon of a picture-viewing program.
this action causes the receiving icon to automatically launch its picture-viewing program, to open the identified picture file (e.g., my_picture.bmp), and to automatically begin outputting the contained graphic information as a picture through an appropriate graphics output device (e.g., the computer's monitor 110). the above disclosure is to be taken as illustrative of the invention, not as limiting its scope or spirit. numerous modifications and variations will become apparent to those skilled in the art after studying the above disclosure. it is to be understood for example that the {right-drag} operations described above for specially keying on the choice-item that is to be converted into a drag-and-drop object can be substituted for by other hot keying sequences such as depressing a right or middle trackball button while simultaneously rolling the trackball so as to indicate a special drag operation. a control key (ctrl or alt or shift) can alternatively be depressed on the keyboard while a mouse or trackball or joystick is moved so as to define the hot keying sequence that initiates conversion of the desired choice-item into a drag-and-drop object. given the above disclosure of general concepts and specific embodiments, the scope of protection sought is to be defined by the claims appended hereto.
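the per-file-type launching pattern running through the tenth set of actions above can be sketched as a table-driven dispatch; the handler names are illustrative, not actual programs:

```python
# Sketch: the drop target picks a handler program from the dropped
# file's extension (html viewer, text-to-speech, audio, pictures).
HANDLERS = {
    ".html": "net browser",
    ".txt": "text-to-speech converter",
    ".wav": "audio reproducer",
    ".bmp": "picture viewer",
    ".gif": "picture viewer",
}

def launch_for(filename):
    dot = filename.rfind(".")
    ext = filename[dot:].lower() if dot != -1 else ""
    program = HANDLERS.get(ext)
    if program is None:
        return "no handler registered"
    # A real system would launch the program and open the file here.
    return "launch " + program + " with " + filename
```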
129-879-239-458-290
KR
[ "CN", "KR", "US" ]
H04S7/00,G06T19/00,H04S5/00,H04R3/00,H04R5/033,H04R5/04
2017-03-20T00:00:00
2017
[ "H04", "G06" ]
system and program for implementing augmented reality three-dimensional sound reflecting real-life sound
provided is an augmented reality sound implementation system for executing an augmented reality sound implementation method. the system includes: a first computing device of a first user; and a first sound device which is worn by the first user so that the first user can receive a three-dimensional augmented reality sound, is connected to the first computing device in a wired or wireless manner, and includes a sound recording function. the method includes: a step in which the first sound device acquires real-life sound information which indicates a real-life sound, and transmits the real-life sound information to the first computing device; a step in which the first computing device acquires a first virtual sound which indicates a sound generated from a virtual reality game executed in the first computing device; a step in which the first computing device generates a three-dimensional augmented reality sound on the basis of the real-life sound information and the first virtual sound; and a step in which the first computing device provides the three-dimensional augmented reality sound to the first user through the first sound device.
1 . an augmented reality sound implementation system for performing a method for an augmented reality sound, the system comprising: a first computing device of a first user; and a first sound device which is worn by the first user such that the first user can receive a three-dimensional augmented reality sound, is connected to the first computing device in a wired or wireless manner, and includes a sound recording function, wherein the method comprises: obtaining, by the first sound device, realistic sound information, which indicates a realistic sound, to transmit the realistic sound information to the first computing device; obtaining, by the first computing device, a first virtual sound which indicates a sound generated from a virtual reality game executed by the first computing device; generating, by the first computing device, a three-dimensional augmented reality sound based on the realistic sound information and the first virtual sound; and providing, by the first computing device, the three-dimensional augmented reality sound to the first user through the first sound device. 2 . the system of claim 1 , wherein the method further comprises: obtaining, by the first computing device, direction feature information indicating a location where the realistic sound is generated; and further considering, by the first computing device, the direction feature information to generate the three-dimensional augmented reality sound. 3 . the system of claim 2 , wherein the first computing device determines whether the first user and a second user spaced apart from the first user are closer than a predetermined distance, and obtains the direction feature information of the realistic sound based on the realistic sound information when the first user and the second user are closer than the predetermined distance, and wherein the realistic sound information is in a binaural type measured by using a plurality of microphones in the first sound device. 4 .
the system of claim 2 , wherein the first computing device determines whether the first user and a second user spaced apart from the first user are closer than a predetermined distance, and obtains the direction feature information of the realistic sound based on location information of the first user and the second user when the first user and the second user are not closer than the predetermined distance. 5 . the system of claim 1 , wherein, when a location difference that a relative location in a reality space of the first user and a second user spaced apart from the first user does not correspond to a relative location of an avatar in a virtual space of the first user and the second user occurs, the first computing device generates the three-dimensional augmented reality sound based on the location difference. 6 . the system of claim 5 , wherein the location difference is a case where the second user and the avatar of the second user are divided, as a case where the second user utilizes a skill to the first user. 7 . the system of claim 5 , wherein the location difference is a case where movement of the avatar of the second user is greater or shorter than movement of the second user, as a case where the second user utilizes a skill to the first user. 8 . the system of claim 5 , wherein the three-dimensional augmented reality sound is generated through blending a second virtual sound, which is generated to correspond to a location of the avatar, and the realistic sound. 9 . a computer-readable medium recording a program for performing an augmented reality sound implementing method performed by the augmented reality sound implementing system described in claim 1 . 10 . an application for a terminal device stored in a medium to perform a method for the augmented reality sound performed by the augmented reality sound implementing system coupled to the computing device being hardware, which is described in claim 1 .
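the distance test recited in claims 3 and 4 can be sketched as follows; the threshold value and the coordinate math are assumptions for illustration, not values given in the claims:

```python
import math

# Sketch: within a predetermined distance, direction feature information
# comes from the binaural (multi-microphone) recording; beyond it, from
# the location information of the two users.

def direction_source(pos_first, pos_second, threshold=2.0):
    if math.dist(pos_first, pos_second) < threshold:
        return "binaural"
    return "location"

def bearing(pos_first, pos_second):
    # Direction (radians) from the first user to the second, used in
    # the "location" case.
    return math.atan2(pos_second[1] - pos_first[1],
                      pos_second[0] - pos_first[0])
```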
cross-reference to related applications the present application is a continuation of international patent application no. pct/kr2018/003189 filed mar. 19, 2018, which is based upon and claims the benefit of priority to korean patent application nos. 10-2017-0034398 filed mar. 20, 2017, 10-2017-0102892 filed aug. 14, 2017 and 10-2017-0115842 filed sep. 11, 2017. the disclosures of the above-listed applications are hereby incorporated by reference herein in their entirety. background embodiments of the inventive concept described herein relate to a system and a program for implementing three-dimensional augmented reality sound based on realistic sound. the augmented reality refers to, but is not limited to, a computer graphic technology that displays one image obtained by mixing a real-world image that a user sees and a virtual image, and thus also refers to the real-virtual image blending technology used in mixed reality. the augmented reality may be obtained by composing images of virtual objects or information and specific objects of real world images. the three-dimensional sound refers to a technology that provides three-dimensional sound such that the user can feel the sense of presence. in the virtual reality field, a three-dimensional sound is implemented by providing a sound depending on the path transmitted from the sound generating location to the user by using the vector value of the virtual reality image. however, it is difficult to use a method for implementing three-dimensional augmented reality sound based on realistic sound, in that the direction of the realistic sound cannot be grasped in advance and must be grasped in real time in augmented reality. for example, the location of another user may not be intuitively predicted because sound is ringing when a plurality of users are placed inside the building in the virtual reality. 
accordingly, a method and a program capable of implementing the three-dimensional sound based on real-time realistic sound are required in augmented reality fields. summary the inventive concept provides a system and a program for implementing a three-dimensional augmented reality sound based on realistic sound. the technical objects of the inventive concept are not limited to the above-mentioned ones, and the other unmentioned technical objects will become apparent to those skilled in the art from the following description. in accordance with an aspect of the inventive concept, there is provided an augmented reality sound implementation system for performing a method for an augmented reality sound, the system comprises a first computing device of a first user; and a first sound device which is worn by the first user such that the first user can receive a three-dimensional augmented reality sound, is connected to the first computing device in a wired or wireless manner, and includes a sound recording function, wherein the method comprises obtaining, by the first sound device, realistic sound information, which indicates a realistic sound, to transmit the realistic sound information to the first computing device; obtaining, by the first computing device, a first virtual sound which indicates a sound generated from a virtual reality game executed by the first computing device; generating, by the first computing device, a three-dimensional augmented reality sound based on the realistic sound information and the first virtual sound; and providing, by the first computing device, the three-dimensional augmented reality sound to the first user through the first sound device. the other detailed items of the inventive concept are described and illustrated in the specification and the drawings. 
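the generating step of the method summarized above — combining the realistic sound information with the first virtual sound — can be sketched as a per-sample mix; the equal-weight gain and the [-1, 1] clipping range are assumptions for illustration:

```python
# Sketch: blend one frame of realistic sound with one frame of virtual
# sound into a single augmented-reality frame.

def blend(realistic, virtual, virtual_gain=0.5):
    mixed = []
    for r, v in zip(realistic, virtual):
        s = (1.0 - virtual_gain) * r + virtual_gain * v
        mixed.append(max(-1.0, min(1.0, s)))  # clip to nominal [-1, 1]
    return mixed

out = blend([0.2, -0.4, 0.0], [0.2, 0.0, 1.0])
```

a real implementation would apply the direction feature information here as well, e.g. by panning or filtering the virtual sound before mixing.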
brief description of the figures the above and other objects and features will become apparent from the following description with reference to the following figures, wherein like reference numerals refer to like parts throughout the various figures unless otherwise specified, and wherein: fig. 1 is a conceptual diagram for describing a method for implementing augmented reality sound; fig. 2 is a block diagram illustrating a device for implementing augmented reality sound; fig. 3 is a flowchart illustrating a first embodiment of a method for implementing augmented reality sound; and fig. 4 is a flowchart illustrating a second embodiment of a method for implementing augmented reality sound. detailed description the above and other aspects, features and advantages of the invention will become apparent from the following description of the following embodiments given in conjunction with the accompanying drawings. however, the inventive concept is not limited to the embodiments disclosed below, but may be implemented in various forms. the embodiments of the inventive concept are provided to make the disclosure of the inventive concept complete and fully inform those skilled in the art to which the inventive concept pertains of the scope of the inventive concept. the terms used herein are provided to describe the embodiments but not to limit the inventive concept. in the specification, the singular forms include plural forms unless particularly mentioned. the terms “comprises” and/or “comprising” used herein do not exclude the presence or addition of one or more other elements, in addition to the aforementioned elements. throughout the specification, the same reference numerals denote the same elements, and “and/or” includes the respective elements and all combinations of the elements. although “first”, “second” and the like are used to describe various elements, the elements are not limited by the terms.
the terms are used simply to distinguish one element from other elements. accordingly, it is apparent that a first element mentioned in the following may be a second element without departing from the spirit of the inventive concept. unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by those skilled in the art to which the inventive concept pertains. it will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the specification and relevant art and should not be interpreted in an idealized or overly formal sense unless expressly so defined herein. hereinafter, exemplary embodiments of the inventive concept will be described in detail with reference to the accompanying drawings. according to an embodiment of the present disclosure, a method for implementing three-dimensional augmented reality sound based on realistic sound may be implemented by a computing device 200 . the method for implementing augmented reality sound may be implemented with an application, may be stored in the computing device 200 , and may be performed by the computing device 200 . for example, the computing device 200 may be provided as, but not limited to, a mobile device such as a smartphone, a tablet pc, or the like and only needs to be equipped with a camera, to output sound, and to process and store data. that is, the computing device 200 may be provided as a wearable device, which is equipped with a camera and outputs sound, such as glasses, a band, or the like. an arbitrary computing device 200 not illustrated here may also be provided. fig. 1 is a conceptual diagram for describing a method for implementing augmented reality sound. referring to fig. 
1 , the plurality of users 10 and 20 carry sound devices 100 - 1 and 100 - 2 and computing devices 200 - 1 and 200 - 2 and experience augmented reality content. in an embodiment, only the two users 10 and 20 are illustrated. however, an embodiment is not limited thereto. for example, the method for implementing augmented reality sound may be substantially identically applied to an environment in which there are two or more users. for example, a sound device 100 may be provided in the form of a headphone, a headset, an earphone, or the like. the sound device 100 may include a speaker so as to output sound; in addition, the sound device 100 may include a microphone so as to obtain and record the surrounding sound. the sound device 100 may be provided in the binaural type for the purpose of enhancing the sense of presence. the sound including direction feature information may be obtained by recording the left sound and the right sound separately, using a binaural effect. in some embodiments, the sound output device and the sound recording device of the sound device 100 may be provided as separate devices. the sound device 100 may obtain realistic sound information generated by the users 10 and 20 . alternatively, the sound device 100 may obtain the realistic sound information generated at a periphery of the users 10 and 20 . that is, a sound source may be placed at a location where the realistic sound is generated. the sound source may not be limited to the sound generated by the plurality of users 10 and 20 . herein, the realistic sound information may indicate actual sound information generated in real life. for example, when the second user 20 makes a sound to the first user 10 while playing an augmented reality game, the first sound device 100 - 1 of the first user 10 may obtain the realistic sound information (sound) generated from the second user 20 . herein, the second user 20 may be a user located at a place spaced apart from the first user 10 . 
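the binaural acquisition described above, in which the left sound and the right sound are recorded separately, can yield a coarse direction feature. a minimal sketch, assuming the channels are plain lists of samples and using only the interaural level difference; the function name, energy-ratio formulation, and tanh squashing are illustrative assumptions, not the disclosed implementation:

```python
import math

def estimate_direction_ild(left, right, eps=1e-12):
    """Estimate a coarse left/right direction feature from binaural
    recordings using the interaural level difference (ILD).
    Returns a value in [-1, 1]: -1 = fully left, +1 = fully right.
    (Illustrative sketch; not the patented method.)"""
    energy_l = sum(s * s for s in left)
    energy_r = sum(s * s for s in right)
    # Log-energy ratio in dB, squashed into [-1, 1] with tanh.
    ild_db = 10.0 * math.log10((energy_r + eps) / (energy_l + eps))
    return math.tanh(ild_db / 20.0)
```

a fuller implementation would also use the interaural time difference and operate on short frames, but the level ratio alone already distinguishes a source on the left from one on the right.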
the first sound device 100 - 1 of the first user 10 may also obtain direction feature information of the realistic sound generated by the second user 20 together. the first computing device 200 - 1 of the first user 10 may synthesize the realistic sound information of the first user 10 and first virtual sound information indicating sound (e.g., background sound, effect sound, or the like) generated in a virtual reality game, based on the direction feature information of the realistic sound obtained from the first sound device 100 - 1 to generate three-dimensional augmented reality sound for the first user 10 . when the sound device 100 does not support the binaural type of sound or when a distance between the first user 10 and the second user 20 is longer than a predetermined distance, the first computing device 200 - 1 may obtain the direction feature information of the realistic sound based on information about the relative location of the first user 10 and the second user 20 . the plurality of computing devices 200 - 1 and 200 - 2 or a server may obtain the locations of the first user 10 and the second user 20 and may compare the locations with each other to generate relative location information. for example, a well-known positioning system including a gps system may be used to obtain the locations of the plurality of users 10 and 20 . the plurality of computing devices 200 - 1 and 200 - 2 or the server may obtain three-dimensional locations of the first user 10 and the second user 20 and may compare the three-dimensional locations with each other to generate relative three-dimensional location information. for example, as illustrated in fig. 1 , the relative location information indicating that the second user 20 is located in a direction of 8 o'clock, at a distance of 50 m, and at a low altitude of 5 m with respect to the first user 10 may be generated. herein, the second user 20 may generate the realistic sound. 
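the relative three-dimensional location information described above (e.g., 8 o'clock, 50 m away, 5 m lower) can be sketched as follows, assuming positions already converted from gps fixes into a local east/north/up frame in metres; the coordinate convention and function name are hypothetical:

```python
import math

def relative_location(listener, source):
    """Relative position of `source` with respect to `listener`.
    Positions are (east_m, north_m, up_m) tuples in a local frame
    (hypothetical convention; a real system would convert GPS fixes).
    Returns (clock_direction 1-12, horizontal_distance_m, altitude_diff_m)."""
    de = source[0] - listener[0]
    dn = source[1] - listener[1]
    du = source[2] - listener[2]
    distance = math.hypot(de, dn)
    # Bearing clockwise from north, mapped onto a 12-hour clock face.
    bearing = math.degrees(math.atan2(de, dn)) % 360.0
    clock = round(bearing / 30.0) % 12 or 12
    return clock, distance, du
```

with the example from the description, a source 43.3 m west, 25 m south, and 5 m below the listener comes out as roughly 8 o'clock, 50 m, and -5 m.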
in addition, the direction feature information of the realistic sound obtained by the first user 10 is determined based on the relative location information. the three-dimensional augmented reality sound for the first user 10 may be implemented by synthesizing the realistic sound information obtained from the second user 20 by the first user 10 , the direction feature information of the realistic sound, and the first virtual sound information. elements such as the amplitude, phase, frequency, or the like of the realistic sound may be adjusted depending on the determined direction feature information of the realistic sound. the method for implementing augmented reality sound according to an embodiment of the present disclosure may use the binaural type of the sound device 100 or the relative location information of the plurality of users 10 and 20 , and thus may implement the three-dimensional augmented reality sound based on the real-time realistic sound. according to an embodiment, the above-described binaural type of the sound device 100 and the relative location information of the first user 10 and the second user 20 may be used together. fig. 2 is a block diagram illustrating a device for implementing augmented reality sound. referring to fig. 2 , the sound device 100 may include at least one control unit 110 , a storage unit 120 , an input unit 130 , an output unit 140 , a transceiver unit 150 , and a gps unit 160 . each of the components included in the sound device 100 may be connected by a bus so as to communicate with one another. the control unit 110 may execute a program command stored in the storage unit 120 . the control unit 110 may indicate a central processing unit (cpu), a graphic processing unit (gpu), or a dedicated processor that performs methods according to an embodiment of the present disclosure. the storage unit 120 may be implemented with at least one of a volatile storage medium and a nonvolatile storage medium. 
for example, the storage unit 120 may be implemented with at least one of a read only memory (rom) and a random access memory (ram). the input unit 130 may be a recording device capable of recognizing and recording a voice. for example, the input unit 130 may be a microphone, or the like. the output unit 140 may be an output device capable of outputting a voice. the output device may include a speaker, or the like. the transceiver unit 150 may be connected to the computing device 200 or a server so as to perform communication. the gps unit 160 may track the location of the sound device 100 . the computing device 200 may include at least one control unit 210 , a storage unit 220 , an input unit 230 , an output unit 240 , a transceiver unit 250 , a gps unit 260 , a camera unit 270 , and the like. each of the components included in the computing device 200 may be connected by a bus so as to communicate with one another. the output unit 240 may be an output device capable of outputting a screen. the output device may include a display, or the like. the control unit 210 may execute a program command stored in the storage unit 220 . the control unit 210 may indicate a cpu, a gpu, or a dedicated processor that performs methods according to an embodiment of the present disclosure. the storage unit 220 may be implemented with at least one of a volatile storage medium and a nonvolatile storage medium. for example, the storage unit 220 may be implemented with at least one of a rom and a ram. the transceiver unit 250 may be connected to the other computing device 200 , the sound device 100 , or the server so as to perform communication. the gps unit 260 may track the location of the computing device 200 . the camera unit 270 may obtain a reality image. in some embodiments, the method for implementing augmented reality sound may be implemented by linking the computing device to another computing device or a server. fig. 
3 is a flowchart illustrating a first embodiment of a method for implementing augmented reality sound. referring to fig. 3 , the first sound device 100 - 1 of the first user 10 may obtain realistic sound information. herein, the realistic sound information may be the realistic sound generated from the second user 20 or the realistic sound generated at the first user 10 . the first sound device 100 - 1 may transmit realistic sound information to the first computing device 200 - 1 of the first user 10 . in operation s 300 , the first computing device 200 - 1 may obtain the realistic sound information from the first sound device 100 - 1 . in operation s 310 , the first computing device 200 - 1 may determine whether another user (e.g., the second user 20 ) is present at a distance close to the first user 10 . the close distance may be a predetermined distance. when the first user 10 and the second user 20 are closer to each other than the predetermined distance, in operation s 320 , the first computing device 200 - 1 may obtain direction feature information of the realistic sound, based on the realistic sound information. herein, the realistic sound information may be the binaural type of sound information measured by the plurality of input units 130 of the first sound device 100 - 1 . in operation s 321 , the first computing device 200 - 1 may obtain first virtual sound information indicating sound generated in a virtual reality game. in operation s 322 , the first computing device 200 - 1 may generate three-dimensional augmented reality sound based on at least one of the realistic sound information, the direction feature information, or the first virtual sound information. in particular, the first computing device 200 - 1 may generate the three-dimensional augmented reality sound by blending at least one of the realistic sound information, the direction feature information, or the first virtual sound information. 
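the blending in operation s 322 can be sketched as a sample-wise mix of the (direction-processed) realistic signal and the virtual game signal; the gain parameters, zero-padding, and hard limiting below are illustrative assumptions rather than the disclosed implementation:

```python
def blend_augmented_sound(realistic, virtual,
                          realistic_gain=1.0, virtual_gain=1.0):
    """Blend a realistic signal with a virtual game signal into one
    augmented stream, padding the shorter signal with silence and
    hard-limiting the sum to [-1, 1]. Gains are illustrative parameters."""
    n = max(len(realistic), len(virtual))
    r = realistic + [0.0] * (n - len(realistic))
    v = virtual + [0.0] * (n - len(virtual))
    return [max(-1.0, min(1.0, realistic_gain * a + virtual_gain * b))
            for a, b in zip(r, v)]
```

in practice each input would first be shaped by the direction feature information (e.g., per-channel gains), after which the same additive mix applies per channel.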
for example, when the first user 10 has obtained the sound source of the first verse of the national anthem saying that “until the east sea's waters and baekdu mountain are dry and worn away, god protects and helps us. may our nation be eternal” from the north side of the first user 10 , the first computing device 200 - 1 may generate the three-dimensional augmented reality sound by blending the direction feature information, the realistic sound information, and the first virtual sound information indicating sound generated in the virtual reality game such that the first user 10 can hear the corresponding sound source as if the corresponding sound source had originated from the north. in operation s 323 , the first computing device 200 - 1 may provide the three-dimensional augmented reality sound to the first user 10 through the first sound device 100 - 1 . when the first user 10 and the second user 20 are not closer to each other than a predetermined distance, in operation s 330 , the second computing device 200 - 2 may obtain location information of the first user 10 and the second user 20 . in operation s 331 , the first computing device 200 - 1 may obtain direction feature information of the realistic sound, based on location information of the first user 10 and the second user 20 . in operation s 332 , the first computing device 200 - 1 may obtain first virtual sound information indicating sound generated in a virtual reality game. in operation s 333 , the first computing device 200 - 1 may generate three-dimensional augmented reality sound based on at least one of the realistic sound information, the direction feature information, or the first virtual sound information. in particular, the first computing device 200 - 1 may generate the three-dimensional augmented reality sound by blending at least two of the realistic sound information, the direction feature information, or the first virtual sound information. 
for example, when the sound source of the first verse of the national anthem saying that “until the east sea's waters and baekdu mountain are dry and worn away, god protects and helps us. may our nation be eternal” is generated from the second user 20 , the first computing device 200 - 1 may generate the three-dimensional augmented reality sound in consideration of the direction feature information such that the first user 10 located at the right side of the second user 20 can hear the corresponding sound source as if the corresponding sound source had originated from the left side. in operation s 334 , the first computing device 200 - 1 may provide the three-dimensional augmented reality sound to the first user 10 through the first sound device 100 - 1 . fig. 4 is a flowchart illustrating a second embodiment of a method for implementing augmented reality sound. referring to fig. 4 , the first sound device 100 - 1 of the first user 10 may obtain realistic sound information. herein, the realistic sound information may be the realistic sound generated from the second user 20 or the realistic sound generated at the first user 10 . the first sound device 100 - 1 may transmit the realistic sound information to the first computing device 200 - 1 of the first user 10 . in operation s 300 , the first computing device 200 - 1 may obtain the realistic sound information from the first sound device 100 - 1 . in operation s 301 , the first computing device 200 - 1 may determine whether a location difference occurs, in which the relative location of the plurality of users 10 and 20 in a reality space does not correspond to the relative location of avatars of the plurality of users 10 and 20 in a virtual space of an augmented reality game. the location difference may be the case where the second user 20 utilizes a skill to the first user 10 and may be the case where the second user 20 and the avatar of the second user 20 are divided. 
the detailed example of the case where the avatar is divided may be as follows. the second user 20 may utilize the skill to the first user 10 . in this case, after being divided, the avatar of the second user 20 may move to the avatar of the first user 10 and then may utilize the skill. in addition, the location difference may be the case where the second user 20 utilizes the skill and may be the case where the avatar of the second user 20 teleports. generally, teleportation may be referred to as “teleport” in a game. the teleportation (or teleport) may mean that anyone moves to any space momentarily. usually, the teleportation may be used when anyone moves to very distant places. for example, while being located at the east side of the avatar of the first user 10 , the avatar of the second user 20 teleports and then is located at the west side of the avatar of the first user 10 . in this case, the first computing device 200 - 1 may generate three-dimensional augmented reality sound in consideration of the difference between the location of the avatar of the second user 20 and the location of the second user 20 . furthermore, the location difference may be the case where the second user 20 utilizes the skill to the first user 10 and may be the case where the movement of the avatar of the second user 20 is larger or smaller than the movement of the second user 20 . for example, the case where the movement of the avatar of the second user 20 is larger or smaller than the movement of the second user 20 may be the case where the second user 20 is moving faster because the second user 20 utilizes the skill. in this case, the first computing device 200 - 1 may consider the sound generated while the avatar of the second user 20 moves rapidly. when the location difference occurs, in operation s 302 , the first computing device 200 - 1 may obtain location information of the first user 10 and the second user 20 . 
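the location-difference cases above (division, teleportation, or skill-accelerated movement of the avatar) amount to choosing whether the second user's real location or the avatar's virtual location should drive the three-dimensional sound. a minimal sketch, with a hypothetical tolerance threshold and function name:

```python
def sound_source_position(real_pos, avatar_pos, tolerance=1.0):
    """Choose the position from which the second user's sound should
    appear to originate. When the avatar's virtual location diverges
    from the user's real location by more than `tolerance` metres
    (e.g., after a teleport or skill), the avatar location drives the
    3D sound; otherwise the real location does. Illustrative sketch."""
    diff = sum((a - b) ** 2 for a, b in zip(real_pos, avatar_pos)) ** 0.5
    return avatar_pos if diff > tolerance else real_pos
```

for the teleport example in the text, an avatar that jumps from the east side to the west side of the listener would exceed any small tolerance, so the sound would be rendered from the avatar's new westward position.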
in operation s 303 , the first computing device 200 - 1 may generate the three-dimensional augmented reality sound based on the location difference. when the location difference occurs, the first computing device 200 - 1 may generate the three-dimensional augmented reality sound by blending the realistic sound and the second virtual sound generated to correspond to the locations of avatars of the plurality of users 10 and 20 . when the first user 10 or the second user 20 utilizes the skill, the first computing device 200 - 1 may perform sound blending so as to fit the first-person situation or the third-person situation. for example, when the location of the avatar of the first user 10 or the second user 20 is changed because the first user 10 or the second user 20 utilizes the skill while playing a game in the first-person situation, the first computing device 200 - 1 may generate virtual sound so as to fit the third-person situation and then may blend the realistic sound and the generated virtual sound. in operation s 304 , the first computing device 200 - 1 may provide the three-dimensional augmented reality sound to the first user through the first sound device 100 - 1 . when the location difference does not occur, in operation s 310 , the first computing device 200 - 1 may determine whether another user (e.g., the second user 20 ) is present at a distance close to the first user 10 . the close distance may be a predetermined distance. when the first user 10 and the second user 20 are closer to each other than the predetermined distance, in operation s 320 , the first computing device 200 - 1 may obtain direction feature information of the realistic sound, based on the realistic sound information. herein, the realistic sound information may be the binaural type of sound information measured by the plurality of input units 130 of the first sound device 100 - 1 . 
in operation s 321 , the first computing device 200 - 1 may obtain first virtual sound information indicating sound generated in a virtual reality game. in operation s 322 , the first computing device 200 - 1 may generate three-dimensional augmented reality sound based on at least one of the realistic sound information, the direction feature information, or the first virtual sound information. in particular, the first computing device 200 - 1 may generate the three-dimensional augmented reality sound by blending at least one of the realistic sound information, the direction feature information, or the first virtual sound information. for example, when the first user 10 has obtained the sound source of the first verse of the national anthem saying that “until the east sea's waters and baekdu mountain are dry and worn away, god protects and helps us. may our nation be eternal” from the north side of the first user 10 , the first computing device 200 - 1 may generate the three-dimensional augmented reality sound by blending the direction feature information, the realistic sound information, and the first virtual sound information indicating sound generated in the virtual reality game such that the first user 10 can hear the corresponding sound source as if the corresponding sound source had originated from the north. in operation s 323 , the first computing device 200 - 1 may provide the three-dimensional augmented reality sound to the first user 10 through the first sound device 100 - 1 . when the first user 10 and the second user 20 are not closer to each other than a predetermined distance, in operation s 330 , the second computing device 200 - 2 may obtain location information of the first user 10 and the second user 20 . in operation s 331 , the first computing device 200 - 1 may obtain the direction feature information of the realistic sound, based on the location information of the first user 10 and the second user 20 . 
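the direction feature information obtained in operation s 331 can be applied to the realistic sound with a simple constant-power stereo panning law, so that a source on the listener's left is reproduced mainly in the left channel; the azimuth convention and gain law below are illustrative assumptions, not the disclosed method:

```python
import math

def pan_by_direction(samples, azimuth_deg):
    """Apply constant-power stereo panning to a mono signal.
    azimuth_deg: 0 = front, -90 = fully left, +90 = fully right
    (hypothetical convention). Returns (left_channel, right_channel)."""
    az = max(-90.0, min(90.0, azimuth_deg))
    theta = math.radians((az + 90.0) / 2.0)  # maps [-90, 90] to [0, pi/2]
    gain_l = math.cos(theta)
    gain_r = math.sin(theta)
    left = [gain_l * s for s in samples]
    right = [gain_r * s for s in samples]
    return left, right
```

constant-power panning keeps the perceived loudness roughly constant as the source moves, since gain_l² + gain_r² = 1 for every azimuth.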
in operation s 332 , the first computing device 200 - 1 may obtain the first virtual sound information indicating sound generated in a virtual reality game. in operation s 333 , the first computing device 200 - 1 may generate three-dimensional augmented reality sound based on at least one of the realistic sound information, the direction feature information, or the first virtual sound information. in particular, the first computing device 200 - 1 may generate the three-dimensional augmented reality sound by blending at least two of the realistic sound information, the direction feature information, or the first virtual sound information. for example, when the sound source of the first verse of the national anthem saying that “until the east sea's waters and baekdu mountain are dry and worn away, god protects and helps us. may our nation be eternal” is generated from the second user 20 , the first computing device 200 - 1 may generate the three-dimensional augmented reality sound in consideration of the direction feature information such that the first user 10 located at the right side of the second user 20 can hear the corresponding sound source as if the corresponding sound source had originated from the left side. in operation s 334 , the first computing device 200 - 1 may provide the three-dimensional augmented reality sound to the first user 10 through the first sound device 100 - 1 . above, a method for implementing augmented reality sound is described. however, it will be understood by those skilled in the art that the present disclosure is not limited to implementation of augmented reality sound but may also be substantially identically performed on the implementation of a mixed reality sound including an augmented virtual reality obtained by mixing a reality image with a virtual world image. in some embodiments, the above-discussed method of fig. 3 and fig. 
4 , according to this disclosure, is implemented in the form of a program readable through a variety of computer means and recorded in any non-transitory computer-readable medium. here, this medium, in some embodiments, contains, alone or in combination, program instructions, data files, data structures, and the like. these program instructions recorded in the medium are, in some embodiments, specially designed and constructed for this disclosure or known to persons in the field of computer software. for example, the medium includes hardware devices specially configured to store and execute program instructions, including magnetic media such as a hard disk, a floppy disk and a magnetic tape, optical media such as cd-rom (compact disk read only memory) and dvd (digital video disk), magneto-optical media such as floptical disk, rom, ram (random access memory), and flash memory. program instructions include, in some embodiments, machine language codes made by a compiler and high-level language codes executable in a computer using an interpreter or the like. these hardware devices are, in some embodiments, configured to operate as one or more software modules to perform the operations of this disclosure, and vice versa. a computer program (also known as a program, software, software application, script, or code) for the above-discussed method of fig. 3 and fig. 4 according to this disclosure is, in some embodiments, written in a programming language, including compiled or interpreted languages, or declarative or procedural languages. a computer program includes, in some embodiments, a unit suitable for use in a computing environment, including as a stand-alone program, a module, a component, or a subroutine. a computer program, in some embodiments, does or does not correspond to a file in a file system. 
a program is, in some embodiments, stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language document), in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub programs, or portions of code). a computer program is, in some embodiments, deployed to be executed on one or more computer processors located locally at one site or distributed across multiple remote sites and interconnected by a communication network. according to the disclosed embodiment, since the three-dimensional augmented reality sound is provided in the proper manner of a binaural scheme or a positioning scheme depending on the distance between users using an augmented reality game, it is possible to implement the three-dimensional augmented reality sound by reflecting realistic sound and virtual sound more realistically in real time. furthermore, when a difference between the location of a user using an augmented reality game and the location or movement of the user's avatar occurs, the three-dimensional augmented reality sound may be implemented in consideration of the difference. although the exemplary embodiments of the inventive concept have been described with reference to the accompanying drawings, it will be understood by those skilled in the art to which the inventive concept pertains that the inventive concept can be carried out in other detailed forms without changing the technical spirits and essential features thereof. therefore, the above-described embodiments are exemplary in all aspects, and should be construed not to be restrictive.
130-725-470-219-448
US
[ "US" ]
E21B47/00,E21B47/09,E21B33/13
2007-04-02T00:00:00
2007
[ "E21" ]
use of micro-electro-mechanical systems (mems) in well treatments
a method of servicing a wellbore, comprising placing a plurality of micro-electro-mechanical system (mems) sensors in at least a portion of a sealant composition, placing the sealant composition in an annular space formed between a casing and the wellbore wall, and monitoring, via the mems sensors, the sealant composition and/or the annular space for a presence of gas, water, or both. a method of servicing a wellbore, comprising placing a plurality of micro-electro-mechanical system (mems) sensors in a wellbore composition, placing the wellbore composition in the wellbore, and monitoring, via the mems sensors, the wellbore and/or the surrounding formation for movement.
1 . a method of servicing a wellbore, comprising: placing a plurality of micro-electro-mechanical system (mems) sensors in at least a portion of a sealant composition; placing the sealant composition in an annular space formed between a casing and the wellbore wall; and monitoring, via the mems sensors, the sealant composition and/or the annular space for a presence of gas, water, or both. 2 . the method of claim 1 , wherein the sealant composition is a cement slurry and wherein the monitoring is carried out prior to setting of the cement slurry. 3 . the method of claim 2 , further comprising signaling an operator upon detection of gas and/or water. 4 . the method of claim 2 , further comprising providing a location in the wellbore corresponding to a detection of gas and/or water. 5 . the method of claim 3 , further comprising applying pressure to the well upon detection of gas and/or water. 6 . the method of claim 3 , further comprising activating at least one device to prevent flow out of the well upon detection of gas and/or water. 7 . the method of claim 2 , wherein the cement slurry is pumped down the annulus in a reverse cementing service. 8 . the method of claim 2 , wherein the cement slurry is pumped down the casing and up the annulus in a conventional cementing service. 9 . the method of claim 1 , wherein the sealant composition is a cement slurry and wherein the monitoring is carried out after setting of the cement slurry. 10 . the method of claim 9 , wherein the monitoring is carried out by running an interrogator tool into the wellbore at one or more service intervals over the operating life of the well. 11 . the method of claim 9 , further comprising providing a location in the wellbore corresponding to a detection of gas and/or water. 12 . the method of claim 11 , further comprising assessing the integrity of the casing and/or the cement proximate the location where gas and/or water is detected. 13 . 
the method of claim 12 , further comprising performing a remedial action on the casing and/or the cement proximate the location where gas and/or water is detected. 14 . the method of claim 13 , wherein the remedial action comprises placing additional sealant composition proximate the location where gas and/or water is detected. 15 . the method of claim 13 , wherein the remedial action comprises replacing and/or reinforcing the casing proximate the location where gas and/or water is detected. 16 . the method of claim 9 , further comprising upon detection of gas and/or water, adjusting an operating condition of the well. 17 . the method of claim 16 , wherein the operating condition comprises temperature, pressure, production rate, length of service interval, or any combination thereof. 18 . the method of claim 16 , wherein adjusting the operating condition extends an expected service life of the wellbore. 19 . a method of servicing a wellbore, comprising: placing a plurality of micro-electro-mechanical system (mems) sensors in a wellbore composition; placing the wellbore composition in the wellbore; and monitoring, via the mems sensors, the wellbore and/or the surrounding formation for movement. 20 . the method of claim 19 , wherein the mems sensors are in a sealant composition placed within an annular casing space in the wellbore and wherein the movement comprises a relative movement between the sealant composition and the adjacent casing and/or wellbore wall. 21 . the method of claim 19 , wherein at least a portion of the wellbore composition comprising the mems flows into the surrounding formation and wherein the movement comprises a movement in the formation. 22 . the method of claim 21 , further comprising upon detection of the movement in the formation, adjusting an operating condition of the well. 23 . the method of claim 22 , wherein the operating condition comprises a production rate of the wellbore. 24 . 
the method of claim 23 , wherein adjusting the production rate extends an expected service life of the wellbore. 25 . the method of claim 1 , wherein the gas comprises carbon dioxide, hydrogen sulfide, or combinations thereof. 26 . the method of claim 12 , wherein a corrosive gas is detected. 27 . the method of claim 12 , wherein the integrity of the casing and/or cement is compromised via corrosion and further comprising performing a remedial action on the casing and/or the cement proximate the location where corrosion is present. 28 . the method of claim 1 , wherein the wellbore is associated with a carbon dioxide injection system and wherein the monitoring detects an undesirable leak or loss of zonal isolation in the wellbore. 29 . the method of claim 28 , further comprising performing a remedial action on the casing and/or the cement proximate a location where the leak or loss of zonal isolation is detected. 30 . the method of claim 29 , further comprising placing carbon dioxide into the wellbore and surrounding formation to sequester the carbon dioxide.
cross-reference to related applications this is a continuation-in-part application of u.s. patent application ser. no. 12/618,067 filed on nov. 13, 2009, published as u.s. patent application publication no. 2010/0051266 a1, which is a continuation-in-part application of u.s. patent application ser. no. 11/695,329, now u.s. pat. no. 7,712,527, both entitled “use of micro-electro-mechanical systems (mems) in well treatments,” each of which is hereby incorporated by reference herein in its entirety. background of the invention 1. field of the invention this disclosure relates to the field of drilling, completing, servicing, and treating a subterranean well such as a hydrocarbon recovery well. in particular, the present disclosure relates to systems and methods for detecting and/or monitoring the position and/or condition of a wellbore, the surrounding formation, and/or wellbore compositions, for example wellbore sealants such as cement, using mems-based data sensors. still more particularly, the present disclosure describes systems and methods of monitoring the integrity and performance of the wellbore, the surrounding formation and/or the wellbore compositions from drilling/completion through the life of the well using mems-based data sensors. 2. background of the invention natural resources such as gas, oil, and water residing in a subterranean formation or zone are usually recovered by drilling a wellbore into the subterranean formation while circulating a drilling fluid in the wellbore. after terminating the circulation of the drilling fluid, a string of pipe (e.g., casing) is run in the wellbore. the drilling fluid is then usually circulated downward through the interior of the pipe and upward through the annulus, which is located between the exterior of the pipe and the walls of the wellbore. 
next, primary cementing is typically performed whereby a cement slurry is placed in the annulus and permitted to set into a hard mass (i.e., sheath) to thereby attach the string of pipe to the walls of the wellbore and seal the annulus. subsequent secondary cementing operations may also be performed. one example of a secondary cementing operation is squeeze cementing whereby a cement slurry is employed to plug and seal off undesirable flow passages in the cement sheath and/or the casing. non-cementitious sealants are also utilized in preparing a wellbore. for example, polymer, resin, or latex-based sealants may be desirable for placement behind casing. to enhance the life of the well and minimize costs, sealant slurries are chosen based on calculated stresses and characteristics of the formation to be serviced. suitable sealants are selected based on the conditions that are expected to be encountered during the sealant service life. once a sealant is chosen, it is desirable to monitor and/or evaluate the health of the sealant so that timely maintenance can be performed and the service life maximized. the integrity of sealant can be adversely affected by conditions in the well. for example, cracks in cement may allow water influx while acid conditions may degrade cement. the initial strength and the service life of cement can be significantly affected by the water content and the slurry formulation. water content, slurry formulation and temperature are the primary drivers for the hydration of cement slurries. thus, it is desirable to measure one or more sealant parameters (e.g., moisture content, temperature, ph and ion concentration) in order to monitor sealant integrity. active, embeddable sensors can involve drawbacks that make them undesirable for use in a wellbore environment. for example, low-powered (e.g., nanowatt) electronic moisture sensors are available, but have inherent limitations when embedded within cement. 
the highly alkaline environment can damage their electronics, and they are sensitive to electromagnetic noise. additionally, power must be provided from an internal battery to activate the sensor and transmit data, which increases sensor size and decreases useful life of the sensor. accordingly, an ongoing need exists for improved methods of monitoring wellbore sealant condition from placement through the service lifetime of the sealant. likewise, in performing wellbore servicing operations, an ongoing need exists for improvements related to monitoring and/or detecting a condition and/or location of a wellbore, formation, wellbore servicing tool, wellbore servicing fluid, or combinations thereof. such needs may be met by the novel and inventive systems and methods for use of mems sensors down hole in accordance with the various embodiments described herein. brief summary disclosed herein is a method of servicing a wellbore, comprising placing a plurality of micro-electro-mechanical system (mems) sensors in at least a portion of a sealant composition, placing the sealant composition in an annular space formed between a casing and the wellbore wall, and monitoring, via the mems sensors, the sealant composition and/or the annular space for a presence of gas, water, or both. further disclosed herein is a method of servicing a wellbore, comprising placing a plurality of micro-electro-mechanical system (mems) sensors in a wellbore composition, placing the wellbore composition in the wellbore, and monitoring, via the mems sensors, the wellbore and/or the surrounding formation for movement. the foregoing has outlined rather broadly the features and technical advantages of the present disclosure in order that the detailed description that follows may be better understood. additional features and advantages of the apparatus and method will be described hereinafter that form the subject of the claims of this disclosure.
it should be appreciated by those skilled in the art that the conception and the specific embodiments disclosed may be readily utilized as a basis for modifying or designing other structures for carrying out the same purposes of the present disclosure. it should also be realized by those skilled in the art that such equivalent constructions do not depart from the spirit and scope of the apparatus and method as set forth in the appended claims. brief description of the drawings for a detailed description of the embodiments of the apparatus and methods of the present disclosure, reference will now be made to the accompanying drawings in which: fig. 1 is a flowchart illustrating an embodiment of a method in accordance with the present disclosure. fig. 2 is a schematic view of a typical onshore oil or gas drilling rig and wellbore. fig. 3 is a flowchart detailing a method for determining when a reverse cementing operation is complete and for subsequent optional activation of a downhole tool. fig. 4 is a flowchart of a method for selecting between a group of sealant compositions according to one embodiment of the present disclosure. figs. 5 , 6 , 7 , 8 , 9 , 10 are schematic views of embodiments of a wellbore parameter sensing system. figs. 11 and 12 are flowcharts of methods for servicing a wellbore. fig. 13 is a schematic cross-sectional view of an embodiment of a casing. figs. 14 and 15 are schematic views of further embodiments of a wellbore parameter sensing system. fig. 16 is a flowchart of a method for servicing a wellbore. fig. 17 is a schematic view of a portion of a wellbore. figs. 18 a to 18 c are schematic cross-sectional views at different elevations of the wellbore of fig. 17 . fig. 19 is a schematic view of a portion of a wellbore. figs. 20 a to 20 e are schematic cross-sectional views at different elevations of the wellbore of fig. 19 . fig. 21 is a flowchart of a method for servicing a wellbore. figs.
22 a to 22 c are schematic views of a further embodiment of a wellbore parameter sensing system. figs. 23 a to 23 c are schematic views of a further embodiment of a wellbore parameter sensing system. figs. 23 d to 23 f are flowcharts of methods for servicing a wellbore. figs. 24 a to 24 c are schematic views of embodiments of a wellbore parameter sensing system. fig. 24 d is a flowchart of a method for servicing a wellbore. fig. 25 is a schematic view of a further embodiment of a wellbore parameter sensing system. figs. 26 a to 26 c are schematic cross-sectional views at different elevations of the wellbore of fig. 25 . fig. 26 d is a flowchart of a method for servicing a wellbore. figs. 27 a , 28 a , 29 a , 30 a , and 31 are schematic views of embodiments of a wellbore parameter sensing system. figs. 27 b , 28 b , 29 b , and 30 b are flowcharts of methods for servicing a wellbore. figs. 32 and 35 are schematic views of embodiments of a downhole interrogation/communication unit. figs. 33 and 34 are schematic views of embodiments of a downhole power generator. detailed description disclosed herein are methods for detecting and/or monitoring the position and/or condition of a wellbore, a formation, a wellbore service tool, and/or wellbore compositions, for example wellbore sealants such as cement, using mems-based data sensors. still more particularly, the present disclosure describes methods of monitoring the integrity and performance of wellbore compositions over the life of the well using mems-based data sensors. performance may be indicated by changes, for example, in various parameters, including, but not limited to, moisture content, temperature, ph, and various ion concentrations (e.g., sodium, chloride, and potassium ions) of the cement. in embodiments, the methods comprise the use of embeddable data sensors capable of detecting parameters in a wellbore composition, for example a sealant such as cement.
in embodiments, the methods provide for evaluation of sealant during mixing, placement, and/or curing of the sealant within the wellbore. in another embodiment, the method is used for sealant evaluation from placement and curing throughout its useful service life, and where applicable to a period of deterioration and repair. in embodiments, the methods of this disclosure may be used to prolong the service life of the sealant, lower costs, and enhance creation of improved methods of remediation. additionally, methods are disclosed for determining the location of sealant within a wellbore, such as for determining the location of a cement slurry during primary cementing of a wellbore as discussed further hereinbelow. additional embodiments and methods for employing mems-based data sensors in a wellbore are described herein. the methods disclosed herein comprise the use of various wellbore compositions, including sealants and other wellbore servicing fluids. as used herein, “wellbore composition” includes any composition that may be prepared or otherwise provided at the surface and placed down the wellbore, typically by pumping. as used herein, a “sealant” refers to a fluid used to secure components within a wellbore or to plug or seal a void space within the wellbore. sealants, and in particular cement slurries and non-cementitious compositions, are used as wellbore compositions in several embodiments described herein, and it is to be understood that the methods described herein are applicable for use with other wellbore compositions. as used herein, “servicing fluid” refers to a fluid used to drill, complete, work over, fracture, repair, treat, or in any way prepare or service a wellbore for the recovery of materials residing in a subterranean formation penetrated by the wellbore. 
examples of servicing fluids include, but are not limited to, cement slurries, non-cementitious sealants, drilling fluids or muds, spacer fluids, fracturing fluids or completion fluids, all of which are well known in the art. while fluid is generally understood to encompass material in a pumpable state, reference to a wellbore servicing fluid that is settable or curable (e.g., a sealant such as cement) includes, unless otherwise noted, the fluid in a pumpable and/or set state, as would be understood in the context of a given wellbore servicing operation. generally, wellbore servicing fluid and wellbore composition may be used interchangeably unless otherwise noted. the servicing fluid is for use in a wellbore that penetrates a subterranean formation. it is to be understood that “subterranean formation” encompasses both areas below exposed earth and areas below earth covered by water such as ocean or fresh water. the wellbore may be a substantially vertical wellbore and/or may contain one or more lateral wellbores, for example as produced via directional drilling. as used herein, components are referred to as being “integrated” if they are formed on a common support structure placed in packaging of relatively small size, or otherwise assembled in close proximity to one another. discussion of an embodiment of the method of the present disclosure will now be made with reference to the flowchart of fig. 1 , which includes methods of placing mems sensors in a wellbore and gathering data. at block 100 , data sensors are selected based on the parameter(s) or other conditions to be determined or sensed within the wellbore. at block 102 , a quantity of data sensors is mixed with a wellbore composition, for example a sealant slurry. in embodiments, data sensors are added to a sealant by any methods known to those of skill in the art. 
for example, the sensors may be mixed with a dry material, mixed with one or more liquid components (e.g., water or a non-aqueous fluid), or combinations thereof. the mixing may occur onsite, for example addition of the sensors into a bulk mixer such as a cement slurry mixer. the sensors may be added directly to the mixer, may be added to one or more component streams and subsequently fed to the mixer, may be added downstream of the mixer, or combinations thereof. in embodiments, data sensors are added after a blending unit and slurry pump, for example, through a lateral by-pass. the sensors may be metered in and mixed at the well site, or may be pre-mixed into the composition (or one or more components thereof) and subsequently transported to the well site. for example, the sensors may be dry mixed with dry cement and transported to the well site where a cement slurry is formed comprising the sensors. alternatively or additionally, the sensors may be pre-mixed with one or more liquid components (e.g., mix water) and transported to the well site where a cement slurry is formed comprising the sensors. the properties of the wellbore composition or components thereof may be such that the sensors distributed or dispersed therein do not substantially settle during transport or placement. the wellbore composition, e.g., sealant slurry, is then pumped downhole at block 104 , whereby the sensors are positioned within the wellbore. for example, the sensors may extend along all or a portion of the length of the wellbore adjacent the casing. the sealant slurry may be placed downhole as part of a primary cementing, secondary cementing, or other sealant operation as described in more detail herein.
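as a rough illustration of the metering step described above, the sensor count for a slurry batch and the mass fraction the sensors contribute can be estimated as follows. this is a hedged sketch only: the dosing rate, per-sensor mass, and slurry density used are hypothetical values chosen for illustration, not figures from this disclosure.

```python
def sensor_dosage(slurry_volume_bbl, sensors_per_bbl,
                  sensor_mass_g=1e-3, slurry_density_lb_per_gal=16.4):
    """Estimate the number of MEMS sensors to meter into a slurry batch
    and the mass fraction they add. All parameter values here are
    illustrative assumptions, not values from the disclosure."""
    gal_per_bbl = 42.0
    grams_per_lb = 453.592
    count = int(slurry_volume_bbl * sensors_per_bbl)
    slurry_mass_g = (slurry_volume_bbl * gal_per_bbl
                     * slurry_density_lb_per_gal * grams_per_lb)
    mass_fraction = (count * sensor_mass_g) / slurry_mass_g
    return count, mass_fraction
```

a negligible mass fraction is consistent with the statement that the dispersed sensors should not materially change the composition's properties or settle out during transport and placement.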
at block 106 , a data interrogation tool (also referred to as a data interrogator tool, data interrogator, interrogator, interrogation/communication tool or unit, or the like) is positioned in an operable location to gather data from the sensors, for example lowered or otherwise placed within the wellbore proximate the sensors. in various embodiments, one or more data interrogators may be placed downhole (e.g., in a wellbore) prior to, concurrent with, and/or subsequent to placement in the wellbore of a wellbore composition comprising mems sensors. at block 108 , the data interrogation tool interrogates the data sensors (e.g., by sending out an rf signal) while the data interrogation tool traverses all or a portion of the wellbore containing the sensors. the data sensors are activated to record and/or transmit data at block 110 via the signal from the data interrogation tool. at block 112 , the data interrogation tool communicates the data to one or more computer components (e.g., memory and/or microprocessor) that may be located within the tool, at the surface, or both. the data may be used locally or remotely from the tool to calculate the location of each data sensor and correlate the measured parameter(s) to such locations to evaluate sealant performance. accordingly, the data interrogation tool comprises mems sensor interrogation functionality, communication functionality (e.g., transceiver functionality), or both. data gathering, as shown in blocks 106 to 112 of fig. 1 , may be carried out at the time of initial placement in the well of the wellbore composition comprising mems sensors, for example during drilling (e.g., drilling fluid comprising mems sensors) or during cementing (e.g., cement slurry comprising mems sensors) as described in more detail below. additionally or alternatively, data gathering may be carried out at one or more times subsequent to the initial placement in the well of the wellbore composition comprising mems sensors. 
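blocks 106 to 112 can be sketched as a traverse-and-ping loop in which each sensor response is tagged with the depth at which it was received, so that measured parameters can later be correlated to location. the `ping` callback below stands in for whatever rf interrogation the tool actually implements; its shape is an assumption for illustration only.

```python
def interrogate_wellbore(depths_ft, ping):
    """Traverse the listed depth stations, interrogate the MEMS sensors
    in range at each station, and tag every response with the station
    depth (a sketch of blocks 106-112; `ping` is a hypothetical stand-in
    for the tool's RF interrogation of sensors in range)."""
    log = []
    for depth in depths_ft:
        for sensor_id, params in ping(depth):
            record = {"depth_ft": depth, "sensor_id": sensor_id}
            record.update(params)
            log.append(record)
    return log
```

the resulting log is the kind of depth-correlated data set the text describes handing off to computer components in the tool or at the surface.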
for example, data gathering may be carried out at the time of initial placement in the well of the wellbore composition comprising mems sensors or shortly thereafter to provide a baseline data set. as the well is operated for recovery of natural resources over a period of time, data gathering may be performed additional times, for example at regular maintenance intervals such as every 1 year, 5 years, or 10 years. the data recovered during subsequent monitoring intervals can be compared to the baseline data as well as any other data obtained from previous monitoring intervals, and such comparisons may indicate the overall condition of the wellbore. for example, changes in one or more sensed parameters may indicate one or more problems in the wellbore. alternatively, consistency or uniformity in sensed parameters may indicate no substantive problems in the wellbore. the data may comprise any combination of parameters sensed by the mems sensors as present in the wellbore, including but not limited to temperature, pressure, ion concentration, stress, strain, gas concentration, etc. in an embodiment, data regarding performance of a sealant composition includes cement slurry properties such as density, rate of strength development, thickening time, fluid loss, and hydration properties; plasticity parameters; compressive strength; shrinkage and expansion characteristics; mechanical properties such as young's modulus and poisson's ratio; tensile strength; resistance to ambient conditions downhole such as temperature and chemicals present; or any combination thereof, and such data may be evaluated to determine long term performance of the sealant composition (e.g., detect an occurrence of radial cracks, shear failure, and/or de-bonding within the set sealant composition) in accordance with embodiments set forth in k. ravi and h. 
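the baseline-versus-interval comparison described above can be reduced to a simple screening rule: flag any parameter whose change from the baseline survey exceeds a per-parameter tolerance. the tolerances in the example are arbitrary placeholders; the disclosure says only that such comparisons may indicate the overall condition of the wellbore.

```python
def compare_to_baseline(baseline, current, tolerance):
    """Return the parameters whose drift from the baseline survey
    exceeds its tolerance, mapped to the signed change. Tolerance
    values are illustrative assumptions, not specification limits."""
    flags = {}
    for name, base_value in baseline.items():
        delta = current[name] - base_value
        if abs(delta) > tolerance.get(name, float("inf")):
            flags[name] = delta
    return flags
```

an empty result corresponds to the "consistency or uniformity in sensed parameters" case noted in the text; a non-empty one is a prompt for closer inspection.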
xenakis, “cementing process optimized to achieve zonal isolation,” presented at petrotech-2007 conference, new delhi, india, which is incorporated herein by reference in its entirety. in an embodiment, data (e.g., sealant parameters) from a plurality of monitoring intervals is plotted over a period of time, and a resultant graph is provided showing an operating or trend line for the sensed parameters. atypical changes in the graph as indicated for example by a sharp change in slope or a step change on the graph may provide an indication of one or more present problems or the potential for a future problem. accordingly, remedial and/or preventive treatments or services may be applied to the wellbore to address present or potential problems. in embodiments, the mems sensors are contained within a sealant composition placed substantially within the annular space between a casing and the wellbore wall. that is, substantially all of the mems sensors are located within or in close proximity to the annular space. in an embodiment, the wellbore servicing fluid comprising the mems sensors (and thus likewise the mems sensors) does not substantially penetrate, migrate, or travel into the formation from the wellbore. in an alternative embodiment, substantially all of the mems sensors are located within, adjacent to, or in close proximity to the wellbore, for example less than or equal to about 1 foot, 3 feet, 5 feet, or 10 feet from the wellbore. such adjacent or close proximity positioning of the mems sensors with respect to the wellbore is in contrast to placing mems sensors in a fluid that is pumped into the formation in large volumes and substantially penetrates, migrates, or travels into or through the formation, for example as occurs with a fracturing fluid or a flooding fluid. 
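one way to mechanize the "sharp change in slope or a step change" screen described above is to look for the largest jump between consecutive surveys in a single parameter series. this is a minimal sketch; the use of one series and a largest-jump criterion are assumptions, not a method stated in the disclosure.

```python
def largest_step(values):
    """Return (index, size) of the largest jump between consecutive
    survey values -- a crude proxy for the 'step change' on the trend
    line that the text says may signal a present or future problem."""
    steps = [abs(b - a) for a, b in zip(values, values[1:])]
    i = max(range(len(steps)), key=steps.__getitem__)
    return i + 1, steps[i]
```

in the test below, a hypothetical ph series drops sharply between the third and fourth surveys, and the function points at that interval.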
thus, in embodiments, the mems sensors are placed proximate or adjacent to the wellbore (in contrast to the formation at large), and provide information relevant to the wellbore itself and compositions (e.g., sealants) used therein (again in contrast to the formation or a producing zone at large). in alternative embodiments, the mems sensors are distributed from the wellbore into the surrounding formation (e.g., additionally or alternatively non-proximate or non-adjacent to the wellbore), for example as a component of a fracturing fluid or a flooding fluid described in more detail herein. in embodiments, the sealant is any wellbore sealant known in the art. examples of sealants include cementitious and non-cementitious sealants both of which are well known in the art. in embodiments, non-cementitious sealants comprise resin based systems, latex based systems, or combinations thereof. in embodiments, the sealant comprises a cement slurry with styrene-butadiene latex (e.g., as disclosed in u.s. pat. no. 5,588,488 incorporated by reference herein in its entirety). sealants may be utilized in setting expandable casing, which is further described hereinbelow. in other embodiments, the sealant is a cement utilized for primary or secondary wellbore cementing operations, as discussed further hereinbelow. in embodiments, the sealant is cementitious and comprises a hydraulic cement that sets and hardens by reaction with water. examples of hydraulic cements include but are not limited to portland cements (e.g., classes a, b, c, g, and h portland cements), pozzolana cements, gypsum cements, phosphate cements, high alumina content cements, silica cements, high alkalinity cements, shale cements, acid/base cements, magnesia cements, fly ash cement, zeolite cement systems, cement kiln dust cement systems, slag cements, micro-fine cement, metakaolin, and combinations thereof. examples of sealants are disclosed in u.s. pat. nos. 
6,457,524; 7,077,203; and 7,174,962, each of which is incorporated herein by reference in its entirety. in an embodiment, the sealant comprises a sorel cement composition, which typically comprises magnesium oxide and a chloride or phosphate salt which together form for example magnesium oxychloride. examples of magnesium oxychloride sealants are disclosed in u.s. pat. nos. 6,664,215 and 7,044,222, each of which is incorporated herein by reference in its entirety. the wellbore composition (e.g., sealant) may include a sufficient amount of water to form a pumpable slurry. the water may be fresh water or salt water (e.g., an unsaturated aqueous salt solution or a saturated aqueous salt solution such as brine or seawater). in embodiments, the cement slurry may be a lightweight cement slurry containing foam (e.g., foamed cement) and/or hollow beads/microspheres. in an embodiment, the mems sensors are incorporated into or attached to all or a portion of the hollow microspheres. thus, the mems sensors may be dispersed within the cement along with the microspheres. examples of sealants containing microspheres are disclosed in u.s. pat. nos. 4,234,344; 6,457,524; and 7,174,962, each of which is incorporated herein by reference in its entirety. in an embodiment, the mems sensors are incorporated into a foamed cement such as those described in more detail in u.s. pat. nos. 6,063,738; 6,367,550; 6,547,871; and 7,174,962, each of which is incorporated by reference herein in its entirety. in some embodiments, additives may be included in the cement composition for improving or changing the properties thereof. examples of such additives include but are not limited to accelerators, set retarders, defoamers, fluid loss agents, weighting materials, dispersants, density-reducing agents, formation conditioning agents, lost circulation materials, thixotropic agents, suspension aids, or combinations thereof. 
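the density effect of blending hollow microspheres or other lightweight additives into a base slurry, as in the lightweight cements mentioned above, follows the ordinary mass-balance mixing rule. the sketch below uses that standard rule; the component masses and densities in the test are illustrative numbers, not values from the cited patents.

```python
def blended_density(components):
    """Density of a blend given (mass, density) pairs for each
    component, e.g. base cement slurry plus hollow microspheres:
    rho = total mass / total volume. Units must be consistent
    (e.g. lb and lb/gal)."""
    total_mass = sum(m for m, _ in components)
    total_volume = sum(m / rho for m, rho in components)
    return total_mass / total_volume
```
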
other mechanical property modifying additives, for example, fibers, polymers, resins, latexes, and the like can be added to further modify the mechanical properties. these additives may be included singularly or in combination. methods for introducing these additives and their effective amounts are known to one of ordinary skill in the art. in embodiments, the mems sensors are contained within a wellbore composition that forms a filtercake on the face of the formation when placed downhole. for example, various types of drilling fluids, also known as muds or drill-in fluids have been used in well drilling, such as water-based fluids, oil-based fluids (e.g., mineral oil, hydrocarbons, synthetic oils, esters, etc.), gaseous fluids, or a combination thereof. drilling fluids typically contain suspended solids. drilling fluids may form a thin, slick filter cake on the formation face that provides for successful drilling of the wellbore and helps prevent loss of fluid to the subterranean formation. in an embodiment, at least a portion of the mems remain associated with the filtercake (e.g., disposed therein) and may provide information as to a condition (e.g., thickness) and/or location of the filtercake. additionally or in the alternative at least a portion of the mems remain associated with drilling fluid and may provide information as to a condition and/or location of the drilling fluid. in embodiments, the mems sensors are contained within a wellbore composition that when placed downhole under suitable conditions induces fractures within the subterranean formation. hydrocarbon-producing wells often are stimulated by hydraulic fracturing operations, wherein a fracturing fluid may be introduced into a portion of a subterranean formation penetrated by a wellbore at a hydraulic pressure sufficient to create, enhance, and/or extend at least one fracture therein. stimulating or treating the wellbore in such ways increases hydrocarbon production from the well. 
in some embodiments, the mems sensors may be contained within a wellbore composition that when placed downhole enters and/or resides within one or more fractures within the subterranean formation. in such embodiments, the mems sensors provide information as to the location and/or condition of the fluid and/or fracture during and/or after treatment. in an embodiment, at least a portion of the mems remain associated with a fracturing fluid and may provide information as to the condition and/or location of the fluid. fracturing fluids often contain proppants that are deposited within the formation upon placement of the fracturing fluid therein, and in an embodiment a fracturing fluid contains one or more proppants and one or more mems. in an embodiment, at least a portion of the mems remain associated with the proppants deposited within the formation (e.g., a proppant bed) and may provide information as to the condition (e.g., thickness, density, settling, stratification, integrity, etc.) and/or location of the proppants. additionally or in the alternative at least a portion of the mems remain associated with a fracture (e.g., adhere to and/or retained by a surface of a fracture) and may provide information as to the condition (e.g., length, volume, etc.) and/or location of the fracture. for example, the mems sensors may provide information useful for ascertaining the fracture complexity. in embodiments, the mems sensors are contained in a wellbore composition (e.g., gravel pack fluid) which is employed in a gravel packing treatment, and the mems may provide information as to the condition and/or location of the wellbore composition during and/or after the gravel packing treatment. gravel packing treatments are used, inter alia, to reduce the migration of unconsolidated formation particulates into the wellbore. 
in gravel packing operations, particulates, referred to as gravel, are carried to a wellbore in a subterranean producing zone by a servicing fluid known as carrier fluid. that is, the particulates are suspended in a carrier fluid, which may be viscosified, and the carrier fluid is pumped into a wellbore in which the gravel pack is to be placed. as the particulates are placed in the zone, the carrier fluid leaks off into the subterranean zone and/or is returned to the surface. the resultant gravel pack acts as a filter to separate formation solids from produced fluids while permitting the produced fluids to flow into and through the wellbore. when installing the gravel pack, the gravel is carried to the formation in the form of a slurry by mixing the gravel with a viscosified carrier fluid. such gravel packs may be used to stabilize a formation while causing minimal impairment to well productivity. the gravel, inter alia, acts to prevent the particulates from occluding the screen or migrating with the produced fluids, and the screen, inter alia, acts to prevent the gravel from entering the wellbore. in an embodiment, the wellbore servicing composition (e.g., gravel pack fluid) comprises a carrier fluid, gravel and one or more mems. in an embodiment, at least a portion of the mems remain associated with the gravel deposited within the wellbore and/or formation (e.g., a gravel pack/bed) and may provide information as to the condition (e.g., thickness, density, settling, stratification, integrity, etc.) and/or location of the gravel pack/bed. in various embodiments, the mems may provide information as to a location, flow path/profile, volume, density, temperature, pressure, or a combination thereof of a sealant composition, a drilling fluid, a fracturing fluid, a gravel pack fluid, or other wellbore servicing fluid in real time such that the effectiveness of such service may be monitored and/or adjusted during performance of the service to improve the result of same. 
accordingly, the mems may aid in the initial performance of the wellbore service additionally or alternatively to providing a means for monitoring a wellbore condition or performance of the service over a period of time (e.g., over a servicing interval and/or over the life of the well). for example, the one or more mems sensors may be used in monitoring a gas or a liquid produced from the subterranean formation. mems present in the wellbore and/or formation may be used to provide information as to the condition (e.g., temperature, pressure, flow rate, composition, etc.) and/or location of a gas or liquid produced from the subterranean formation. in an embodiment, the mems provide information regarding the composition of a produced gas or liquid. for example, the mems may be used to monitor an amount of water produced in a hydrocarbon producing well (e.g., amount of water present in hydrocarbon gas or liquid), an amount of undesirable components or contaminants in a produced gas or liquid (e.g., sulfur, carbon dioxide, hydrogen sulfide, etc. present in hydrocarbon gas or liquid), or a combination thereof. in embodiments, the data sensors added to the wellbore composition, e.g., sealant slurry, etc., are passive sensors that do not require continuous power from a battery or an external source in order to transmit real-time data. in embodiments, the data sensors are micro-electromechanical systems (mems) comprising one or more (and typically a plurality of) mems devices, referred to herein as mems sensors. mems devices are well known, e.g., a semiconductor device with mechanical features on the micrometer scale. mems embody the integration of mechanical elements, sensors, actuators, and electronics on a common substrate. in embodiments, the substrate comprises silicon. mems elements include mechanical elements which are movable by an input energy (electrical energy or other type of energy). 
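a produced-fluid screen of the kind described above (water content, h2s, co2 and the like in the produced gas or liquid) might look as follows. the species names and limit values in the example are arbitrary placeholders chosen for illustration, not specification limits from the disclosure.

```python
def screen_produced_fluid(readings, limits):
    """Return, sorted by name, the monitored species whose reading
    exceeds its limit; species with no limit entry are ignored.
    Readings would come from MEMS sensors in the wellbore/formation."""
    return sorted(name for name, value in readings.items()
                  if value > limits.get(name, float("inf")))
```
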
using mems, a sensor may be designed to emit a detectable signal based on a number of physical phenomena, including thermal, biological, optical, chemical, and magnetic effects or stimulation. mems devices are minute in size, have low power requirements, are relatively inexpensive and are rugged, and thus are well suited for use in wellbore servicing operations. in embodiments, the mems sensors added to a wellbore servicing fluid may be active sensors, for example powered by an internal battery that is rechargeable or otherwise powered and/or recharged by other downhole power sources such as heat capture/transfer and/or fluid flow, as described in more detail herein. in embodiments, the data sensors comprise an active material connected to (e.g., mounted within or mounted on the surface of) an enclosure, the active material being responsive to a wellbore parameter, and the active material being operably connected to (e.g., in physical contact with, surrounding, or coating) a capacitive mems element. in various embodiments, the mems sensors sense one or more parameters within the wellbore. in an embodiment, the parameter is temperature. alternatively, the parameter is ph. alternatively, the parameter is moisture content. still alternatively, the parameter may be ion concentration (e.g., chloride, sodium, and/or potassium ions). the mems sensors may also sense well cement characteristic data such as stress, strain, or combinations thereof. in embodiments, the mems sensors of the present disclosure may comprise active materials that respond to two or more measurands. in such a way, two or more parameters may be monitored. in addition or in the alternative, a mems sensor incorporated within one or more of the wellbore compositions disclosed herein may provide information that allows a condition (e.g., thickness, density, volume, settling, stratification, etc.) and/or location of the composition within the subterranean formation to be detected.
suitable active materials, such as dielectric materials, that respond in a predictable and stable manner to changes in parameters over a long period may be identified according to methods well known in the art, for example see, e.g., ong, zeng and grimes, “a wireless, passive carbon nanotube-based gas sensor,” ieee sensors journal, 2, 2, (2002) 82-88; ong, grimes, robbins and singh, “design and application of a wireless, passive, resonant-circuit environmental monitoring sensor,” sensors and actuators a, 93 (2001) 33-43, each of which is incorporated by reference herein in its entirety. mems sensors suitable for the methods of the present disclosure that respond to various wellbore parameters are disclosed in u.s. pat. no. 7,038,470 b1, which is incorporated herein by reference in its entirety. in embodiments, the mems sensors are coupled with radio frequency identification devices (rfids) and can thus detect and transmit parameters and/or well cement characteristic data for monitoring the cement during its service life. rfids combine a microchip with an antenna (the rfid chip and the antenna are collectively referred to as the “transponder” or the “tag”). the antenna provides the rfid chip with power when exposed to a narrow band, high frequency electromagnetic field from a transceiver. a dipole antenna or a coil, depending on the operating frequency, connected to the rfid chip, powers the transponder when current is induced in the antenna by an rf signal from the transceiver's antenna. such a device can return a unique identification “id” number by modulating and re-radiating the radio frequency (rf) wave. passive rf tags are gaining widespread use due to their low cost, indefinite life, simplicity, efficiency, and ability to identify parts at a distance without contact (tether-free information transmission ability). these robust and tiny tags are attractive from an environmental standpoint as they require no battery.
the mems sensor and rfid tag are preferably integrated into a single component (e.g., chip or substrate), or may alternatively be separate components operably coupled to each other. in an embodiment, an integrated, passive mems/rfid sensor contains a data sensing component, an optional memory, and an rfid antenna, whereby excitation energy is received and powers up the sensor, thereby sensing a present condition and/or accessing one or more stored sensed conditions from memory and transmitting same via the rfid antenna. in embodiments, mems sensors having different rfid tags, i.e., antennas that respond to rf waves of different frequencies and power the rfid chip in response to exposure to rf waves of different frequencies, may be added to different wellbore compositions. within the united states, commonly used operating bands for rfid systems center on one of the three government assigned frequencies: 125 khz, 13.56 mhz or 2.45 ghz. a fourth frequency, 27.125 mhz, has also been assigned. when the 2.45 ghz carrier frequency is used, the range of an rfid chip can be many meters. while this is useful for remote sensing, there may be multiple transponders within the rf field. in order to prevent these devices from interacting and garbling the data, anti-collision schemes are used, as are known in the art. in embodiments, the data sensors are integrated with local tracking hardware to transmit their position as they flow within a wellbore composition such as a sealant slurry. the data sensors may form a network using wireless links to neighboring data sensors and have location and positioning capability through, for example, local positioning algorithms as are known in the art. the sensors may organize themselves into a network by listening to one another, therefore allowing communication of signals from the farthest sensors towards the sensors closest to the interrogator to allow uninterrupted transmission and capture of data. 
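the multi-hop relay scheme just described, in which sensors listen to one another and pass readings toward the sensors closest to the interrogator, can be illustrated with a short connectivity check. this is a sketch only: the function name, the breadth-first traversal, and the single fixed communication range are illustrative assumptions, not the disclosed implementation.

```python
from collections import deque

def can_relay(sensor_depths, comm_range, interrogator_depth):
    """Return True if every sensor's reading can hop, sensor to sensor,
    to a sensor within direct range of the interrogator.

    sensor_depths: positions (m) of sensors along the wellbore
    comm_range: maximum sensor-to-sensor link distance (m)
    interrogator_depth: position (m) of the interrogator tool
    """
    nodes = list(sensor_depths)
    # seed the search with sensors the interrogator can read directly
    queue = deque(i for i, d in enumerate(nodes)
                  if abs(d - interrogator_depth) <= comm_range)
    reachable = set(queue)
    while queue:
        i = queue.popleft()
        for j, d in enumerate(nodes):
            if j not in reachable and abs(nodes[i] - d) <= comm_range:
                reachable.add(j)
                queue.append(j)
    return len(reachable) == len(nodes)
```

with sensors at 0, 1, 2 and 3 m and a 1.5 m link range, an interrogator at the surface reads the whole chain without traversing it; remove the middle sensors and the far end goes dark, which is why loading density matters for networking.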
in such embodiments, the interrogator tool may not need to traverse the entire section of the wellbore containing mems sensors in order to read data gathered by such sensors. for example, the interrogator tool may only need to be lowered about half-way along the vertical length of the wellbore containing mems sensors. alternatively, the interrogator tool may be lowered vertically within the wellbore to a location adjacent to a horizontal arm of a well, whereby mems sensors located in the horizontal arm may be read without the need for the interrogator tool to traverse the horizontal arm. alternatively, the interrogator tool may be used at or near the surface and read the data gathered by the sensors distributed along all or a portion of the wellbore. for example, sensors located a distance away from the interrogator (e.g., at an opposite end of a length of casing or tubing) may communicate via a network formed by the sensors as described previously. generally, a communication distance between mems sensors varies with a size and/or mass of the mems sensors. however, an ability to suspend the mems sensors in a wellbore composition and keep the mems sensors suspended in the wellbore composition for a long period of time, which may be important for measuring various parameters of a wellbore composition throughout a volume of the wellbore composition, generally varies inversely with the size of the mems sensors. therefore, sensor communication distance requirements may have to be adjusted in view of sensor suspendability requirements. in addition, a communication frequency of a mems sensor generally varies with the size and/or mass of the mems sensor. in embodiments, the mems sensors are ultra-small, e.g., 3 mm², such that they are pumpable in a sealant slurry. in embodiments, the mems device is approximately 0.01 mm² to 1 mm², alternatively 1 mm² to 3 mm², alternatively 3 mm² to 5 mm², or alternatively 5 mm² to 10 mm².
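the tradeoff noted above, where larger sensors communicate farther but are harder to keep suspended, can be given rough numbers with stokes' law for a settling sphere. this is a screening estimate under assumptions not in the text: a spherical sensor, a newtonian carrier fluid, and low reynolds number; real cement slurries are non-newtonian, so treat the result as an order-of-magnitude check only.

```python
def stokes_settling_velocity(diameter_m, rho_sensor, rho_fluid, viscosity_pa_s):
    """Terminal settling velocity (m/s) of a small sphere in a viscous fluid.

    Positive means the sensor sinks; negative means it floats.
    Densities in kg/m^3, viscosity in Pa*s.
    """
    g = 9.81  # m/s^2
    return g * diameter_m ** 2 * (rho_sensor - rho_fluid) / (18.0 * viscosity_pa_s)
```

because the velocity scales with diameter squared, a 1 mm sensor settles about 100 times faster than a 100 µm sensor of the same density, which is why suspendability pushes toward smaller sensors even as communication distance pushes the other way.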
in embodiments, the data sensors are capable of providing data throughout the cement service life. in embodiments, the data sensors are capable of providing data for up to 100 years. in an embodiment, the wellbore composition comprises an amount of mems effective to measure one or more desired parameters. in various embodiments, the wellbore composition comprises an effective amount of mems such that sensed readings may be obtained at intervals of about 1 foot, alternatively about 6 inches, or alternatively about 1 inch, along the portion of the wellbore containing the mems. in an embodiment, the mems sensors may be present in the wellbore composition in an amount of from about 0.001 to about 10 weight percent. alternatively, the mems may be present in the wellbore composition in an amount of from about 0.01 to about 5 weight percent. in embodiments, the sensors may have dimensions (e.g., diameters or other dimensions) that range from nanoscale, e.g., about 1 to 1000 nm (e.g., nems), to a micrometer range, e.g., about 1 to 1000 μm (e.g., mems), or alternatively any size from about 1 nm to about 1 mm. in embodiments, the mems sensors may be present in the wellbore composition in an amount of from about 5 volume percent to about 30 volume percent. in various embodiments, the size and/or amount of sensors present in a wellbore composition (e.g., the sensor loading or concentration) may be selected such that the resultant wellbore servicing composition is readily pumpable without damaging the sensors and/or without having the sensors undesirably settle out (e.g., screen out) in the pumping equipment (e.g., pumps, conduits, tanks, etc.) and/or upon placement in the wellbore. also, the concentration/loading of the sensors within the wellbore servicing fluid may be selected to provide a sufficient average distance between sensors to allow for networking of the sensors (e.g., daisy-chaining) in embodiments using such networks, as described in more detail herein. 
for example, such distance may be a percentage of the average communication distance for a given sensor type. by way of example, a given sensor having a 2 inch communication range in a given wellbore composition should be loaded into the wellbore composition in an amount such that the average distance between sensors is less than 2 inches (e.g., less than 1.9, 1.8, 1.7, 1.6, 1.5, 1.4, 1.3, 1.2, 1.1, 1.0, etc. inches). the size and amount of the sensors may be selected so that they are stable (neither floating nor sinking) in the well treating fluid. the size of the sensor may range from nanometers to microns. in some embodiments, the sensors may be nanoelectromechanical systems (nems), mems, or combinations thereof. unless otherwise indicated herein, it should be understood that any suitable micro and/or nano sized sensors or combinations thereof may be employed. the embodiments disclosed herein should not otherwise be limited by the specific type of micro and/or nano sensor employed unless otherwise indicated or prescribed by the functional requirements thereof, and specifically nems may be used in addition to or in lieu of mems sensors in the various embodiments disclosed herein. in embodiments, the mems sensors comprise passive (remain unpowered when not being interrogated) sensors energized by energy radiated from a data interrogation tool. the data interrogation tool may comprise an energy transceiver sending energy (e.g., radio waves) to and receiving signals from the mems sensors, and a processor processing the received signals. the data interrogation tool may further comprise a memory component, a communications component, or both. the memory component may store raw and/or processed data received from the mems sensors, and the communications component may transmit raw data to the processor and/or transmit processed data to another receiver, for example located at the surface.
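the loading criterion above, that the average sensor-to-sensor distance stay below the communication range, can be estimated directly from the volumetric loading. the cubic-lattice spacing estimate and the spherical-sensor approximation are illustrative choices of the author of this sketch, not from the disclosure.

```python
import math

def mean_sensor_spacing(sensor_diameter_m, volume_fraction):
    """Estimate the average center-to-center spacing (m) between sensors
    dispersed in a fluid at the given volume fraction (0-1).

    Models each sensor as a sphere and the dispersion as a cubic lattice:
    spacing = (number density)^(-1/3).
    """
    v_sensor = math.pi / 6.0 * sensor_diameter_m ** 3   # sphere volume
    number_density = volume_fraction / v_sensor          # sensors per m^3
    return number_density ** (-1.0 / 3.0)

def loading_supports_network(sensor_diameter_m, volume_fraction, comm_range_m):
    """True if the estimated mean spacing is within the communication range."""
    return mean_sensor_spacing(sensor_diameter_m, volume_fraction) < comm_range_m
```

at 5 volume percent, 1 mm sensors sit roughly 2 mm apart on average, comfortably inside the 2 inch (about 5 cm) communication range used as the example in the text.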
the tool components (e.g., transceiver, processor, memory component, and communications component) are coupled together and in signal communication with each other. in an embodiment, one or more of the data interrogator components may be integrated into a tool or unit that is temporarily or permanently placed downhole (e.g., a downhole module), for example prior to, concurrent with, and/or subsequent to placement of the mems sensors in the wellbore. in an embodiment, a removable downhole module comprises a transceiver and a memory component, and the downhole module is placed into the wellbore, reads data from the mems sensors, stores the data in the memory component, is removed from the wellbore, and the raw data is accessed. alternatively, the removable downhole module may have a processor to process and store data in the memory component, which is subsequently accessed at the surface when the tool is removed from the wellbore. alternatively, the removable downhole module may have a communications component to transmit raw data to a processor and/or transmit processed data to another receiver, for example located at the surface. the communications component may communicate via wired or wireless communications. for example, the downhole component may communicate with a component or other node on the surface via a network of mems sensors, or cable or other communications/telemetry device such as a radio frequency, electromagnetic telemetry device or an acoustic telemetry device. the removable downhole component may be intermittently positioned downhole via any suitable conveyance, for example wire-line, coiled tubing, straight tubing, gravity, pumping, etc., to monitor conditions at various times during the life of the well. in embodiments, the data interrogation tool comprises a permanent or semi-permanent downhole component that remains downhole for extended periods of time. 
for example, a semi-permanent downhole module may be retrieved and data downloaded once every few months or years. alternatively, a permanent downhole module may remain in the well throughout the service life of the well. in an embodiment, a permanent or semi-permanent downhole module comprises a transceiver and a memory component, and the downhole module is placed into the wellbore, reads data from the mems sensors, optionally stores the data in the memory component, and transmits the read and optionally stored data to the surface. alternatively, the permanent or semi-permanent downhole module may have a processor to process sensed data into processed data, which may be stored in memory and/or transmitted to the surface. the permanent or semi-permanent downhole module may have a communications component to transmit raw data to a processor and/or transmit processed data to another receiver, for example located at the surface. the communications component may communicate via wired or wireless communications. for example, the downhole component may communicate with a component or other node on the surface via a network of mems sensors, or a cable or other communications/telemetry device such as a radio frequency, electromagnetic telemetry device or an acoustic telemetry device. in embodiments, the data interrogation tool comprises an rf energy source incorporated into its internal circuitry and the data sensors are passively energized using an rf antenna, which picks up energy from the rf energy source. in an embodiment, the data interrogation tool is integrated with an rf transceiver. in embodiments, the mems sensors (e.g., mems/rfid sensors) are empowered and interrogated by the rf transceiver from a distance, for example a distance of greater than 10 m, or alternatively from the surface or from an adjacent offset well.
in an embodiment, the data interrogation tool traverses within a casing in the well and reads mems sensors located in a wellbore servicing fluid or composition, for example a sealant (e.g., cement) sheath surrounding the casing, located in the annular space between the casing and the wellbore wall. in embodiments, the interrogator senses the mems sensors when in close proximity with the sensors, typically via traversing a removable downhole component along a length of the wellbore comprising the mems sensors. in an embodiment, close proximity comprises a radial distance from a point within the casing to a planar point within an annular space between the casing and the wellbore. in embodiments, close proximity comprises a distance of 0.1 m to 1 m. alternatively, close proximity comprises a distance of 1 m to 5 m. alternatively, close proximity comprises a distance of from 5 m to 10 m. in embodiments, the transceiver interrogates the sensor with rf energy at 125 khz and close proximity comprises 0.1 m to 5 m. alternatively, the transceiver interrogates the sensor with rf energy at 13.5 mhz and close proximity comprises 0.05 m to 0.5 m. alternatively, the transceiver interrogates the sensor with rf energy at 915 mhz and close proximity comprises 0.03 m to 0.1 m. alternatively, the transceiver interrogates the sensor with rf energy at 2.4 ghz and close proximity comprises 0.01 m to 0.05 m. in embodiments, the mems sensors are incorporated into wellbore cement and used to collect data during and/or after cementing the wellbore. the data interrogation tool may be positioned downhole prior to and/or during cementing, for example integrated into a component such as casing, casing attachment, plug, cement shoe, or expanding device. alternatively, the data interrogation tool is positioned downhole upon completion of cementing, for example conveyed downhole via wireline.
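the frequency-dependent "close proximity" bands enumerated above lend themselves to a small lookup table, which a job-planning script might use to check whether a planned interrogator standoff is workable at a given interrogation frequency. the dictionary below mirrors the ranges quoted in the text; the helper function name is an invention for illustration.

```python
# "close proximity" read-range bands from the text, keyed by
# interrogation frequency in Hz: (minimum distance m, maximum distance m)
CLOSE_PROXIMITY_M = {
    125e3:  (0.1,  5.0),
    13.5e6: (0.05, 0.5),
    915e6:  (0.03, 0.1),
    2.4e9:  (0.01, 0.05),
}

def in_read_range(frequency_hz, standoff_m):
    """True if the planned standoff falls inside the quoted band
    for the given interrogation frequency."""
    lo, hi = CLOSE_PROXIMITY_M[frequency_hz]
    return lo <= standoff_m <= hi
```

the table makes the qualitative trend explicit: lower frequencies reach farther, so a 125 khz interrogator tolerates meters of standoff while a 2.4 ghz interrogator must pass within a few centimeters of the sensors.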
the cementing methods disclosed herein may optionally comprise the step of foaming the cement composition using a gas such as nitrogen or air. the foamed cement compositions may comprise a foaming surfactant and optionally a foaming stabilizer. the mems sensors may be incorporated into a sealant composition and placed downhole, for example during primary cementing (e.g., conventional or reverse circulation cementing), secondary cementing (e.g., squeeze cementing), or other sealing operation (e.g., behind an expandable casing). in primary cementing, cement is positioned in a wellbore to isolate an adjacent portion of the subterranean formation and provide support to an adjacent conduit (e.g., casing). the cement forms a barrier that prevents fluids (e.g., water or hydrocarbons) in the subterranean formation from migrating into adjacent zones or other subterranean formations. in embodiments, the wellbore in which the cement is positioned belongs to a horizontal or multilateral wellbore configuration. it is to be understood that a multilateral wellbore configuration includes at least two principal wellbores connected by one or more ancillary wellbores. fig. 2 , which shows a typical onshore oil or gas drilling rig and wellbore, will be used to clarify the methods of the present disclosure, with the understanding that the present disclosure is likewise applicable to offshore rigs and wellbores. rig 12 is centered over a subterranean oil or gas formation 14 located below the earth's surface 16 . rig 12 includes a work deck 32 that supports a derrick 34 . derrick 34 supports a hoisting apparatus 36 for raising and lowering pipe strings such as casing 20 . pump 30 is capable of pumping a variety of wellbore compositions (e.g., drilling fluid or cement) into the well and includes a pressure measurement device that provides a pressure reading at the pump discharge. wellbore 18 has been drilled through the various earth strata, including formation 14 . 
upon completion of wellbore drilling, casing 20 is often placed in the wellbore 18 to facilitate the production of oil and gas from the formation 14 . casing 20 is a string of pipes that extends down wellbore 18 , through which oil and gas will eventually be extracted. a cement or casing shoe 22 is typically attached to the end of the casing string when the casing string is run into the wellbore. casing shoe 22 guides casing 20 toward the center of the hole and minimizes problems associated with hitting rock ledges or washouts in wellbore 18 as the casing string is lowered into the well. casing shoe 22 may be a guide shoe or a float shoe, and typically comprises a tapered, often bullet-nosed piece of equipment found on the bottom of casing string 20 . casing shoe 22 may be a float shoe fitted with an open bottom and a valve that serves to prevent reverse flow, or u-tubing, of cement slurry from annulus 26 into casing 20 as casing 20 is run into wellbore 18 . the region between casing 20 and the wall of wellbore 18 is known as the casing annulus 26 . to fill up casing annulus 26 and secure casing 20 in place, casing 20 is usually “cemented” in wellbore 18 , which is referred to as “primary cementing.” a data interrogation tool 40 is shown in the wellbore 18 . in an embodiment, the method of this disclosure is used for monitoring primary cement during and/or subsequent to a conventional primary cementing operation. in this conventional primary cementing embodiment, mems sensors are mixed into a cement slurry, block 102 of fig. 1 , and the cement slurry is then pumped down the inside of casing 20 , block 104 of fig. 1 . as the slurry reaches the bottom of casing 20 , it flows out of casing 20 and into casing annulus 26 between casing 20 and the wall of wellbore 18 . as cement slurry flows up annulus 26 , it displaces any fluid in the wellbore.
to ensure no cement remains inside casing 20 , devices called “wipers” may be pumped by a wellbore servicing fluid (e.g., drilling mud) through casing 20 behind the cement. as described in more detail herein, the wellbore servicing fluids such as the cement slurry and/or wiper conveyance fluid (e.g., drilling mud) may contain mems sensors which aid in detection and/or positioning of the wellbore servicing fluid and/or a mechanical component such as a wiper plug, casing shoe, etc. the wiper contacts the inside surface of casing 20 and pushes any remaining cement out of casing 20 . when cement slurry reaches the earth's surface 16 , and annulus 26 is filled with slurry, pumping is terminated and the cement is allowed to set. the mems sensors of the present disclosure may also be used to determine one or more parameters during placement and/or curing of the cement slurry. the mems sensors of the present disclosure may also be used to determine completion of the primary cementing operation, as further discussed herein below. referring back to fig. 1 , during cementing, or subsequent to the setting of cement, a data interrogation tool may be positioned in wellbore 18 , as at block 106 of fig. 1 . for example, the wiper may be equipped with a data interrogation tool and may read data from the mems while being pumped downhole and transmit same to the surface. alternatively, an interrogator tool may be run into the wellbore following completion of cementing a segment of casing, for example as part of the drill string during resumed drilling operations. alternatively, the interrogator tool may be run downhole via a wireline or other conveyance. the data interrogation tool may then be signaled to interrogate the sensors (block 108 of fig. 1 ) whereby the sensors are activated to record and/or transmit data, block 110 of fig. 1 .
the data interrogation tool communicates the data to a processor 112 whereby data sensor (and likewise cement slurry) position and cement integrity may be determined via analyzing sensed parameters for changes, trends, expected values, etc. for example, such data may reveal conditions that may be adverse to cement curing. the sensors may provide a temperature profile over the length of the cement sheath, with a uniform temperature profile likewise indicating a uniform cure (e.g., produced via heat of hydration of the cement during curing) or a change in temperature might indicate the influx of formation fluid (e.g., presence of water and/or hydrocarbons) that may degrade the cement during the transition from slurry to set cement. alternatively, such data may indicate a zone of reduced, minimal, or missing sensors, which would indicate a loss of cement corresponding to the area (e.g., a loss/void zone or water influx/washout). such methods may be available with various cement techniques described herein such as conventional or reverse primary cementing. due to the high pressure at which the cement is pumped during conventional primary cementing (pump down the casing and up the annulus), fluid from the cement slurry may leak off into existing low pressure zones traversed by the wellbore. this may adversely affect the cement, and incur undesirable expense for remedial cementing operations (e.g., squeeze cementing as discussed hereinbelow) to position the cement in the annulus. such leak off may be detected via the present disclosure as described previously. additionally, conventional circulating cementing may be time-consuming, and therefore relatively expensive, because cement is pumped all the way down casing 20 and back up annulus 26 . one method of avoiding problems associated with conventional primary cementing is to employ reverse circulation primary cementing. 
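the profile analysis described above, where a uniform temperature trace along the cement sheath indicates a uniform cure and deviations flag fluid influx or a loss/void zone, can be sketched as a simple median-deviation check. the 2 °c threshold, the function name, and the use of the median as the baseline are arbitrary illustrative choices, not the disclosed processing method.

```python
def flag_anomalous_zones(depths_m, temps_c, tolerance_c=2.0):
    """Return depths whose temperature deviates from the profile median
    by more than tolerance_c.

    A flat profile suggests a uniform cure (heat of hydration); an
    outlier may indicate formation-fluid influx or missing cement.
    """
    ordered = sorted(temps_c)
    n = len(ordered)
    median = (ordered[n // 2] if n % 2 else
              0.5 * (ordered[n // 2 - 1] + ordered[n // 2]))
    return [d for d, t in zip(depths_m, temps_c)
            if abs(t - median) > tolerance_c]
```

a hot spot at one depth in an otherwise flat trace would be flagged for closer inspection, e.g., as a candidate water influx or washout zone; a zone returning no sensor readings at all would be handled separately, as missing data rather than an outlier.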
reverse circulation cementing is a term of art used to describe a method where a cement slurry is pumped down casing annulus 26 instead of into casing 20 . the cement slurry displaces any fluid as it is pumped down annulus 26 . fluid in the annulus is forced down annulus 26 , into casing 20 (along with any fluid in the casing), and then back up to earth's surface 16 . when reverse circulation cementing, casing shoe 22 comprises a valve that is adjusted to allow flow into casing 20 and then sealed after the cementing operation is complete. once slurry is pumped to the bottom of casing 20 and fills annulus 26 , pumping is terminated and the cement is allowed to set in annulus 26 . examples of reverse cementing applications are disclosed in u.s. pat. nos. 6,920,929 and 6,244,342, each of which is incorporated herein by reference in its entirety. in embodiments of the present disclosure, sealant slurries comprising mems data sensors are pumped down the annulus in reverse circulation applications, a data interrogator is located within the wellbore (e.g., integrated into the casing shoe) and sealant performance is monitored as described with respect to the conventional primary sealing method disclosed hereinabove. additionally, the data sensors of the present disclosure may also be used to determine completion of a reverse circulation operation, as further discussed hereinbelow. secondary cementing within a wellbore may be carried out subsequent to primary cementing operations. a common example of secondary cementing is squeeze cementing wherein a sealant such as a cement composition is forced under pressure into one or more permeable zones within the wellbore to seal such zones. examples of such permeable zones include fissures, cracks, fractures, streaks, flow channels, voids, high permeability streaks, annular voids, or combinations thereof. 
the permeable zones may be present in the cement column residing in the annulus, a wall of the conduit in the wellbore, a microannulus between the cement column and the subterranean formation, and/or a microannulus between the cement column and the conduit. the sealant (e.g., secondary cement composition) sets within the permeable zones, thereby forming a hard mass to plug those zones and prevent fluid from passing therethrough (i.e., prevents communication of fluids between the wellbore and the formation via the permeable zone). various procedures that may be followed to use a sealant composition in a wellbore are described in u.s. pat. no. 5,346,012, which is incorporated by reference herein in its entirety. in various embodiments, a sealant composition comprising mems sensors is used to repair holes, channels, voids, and microannuli in casing, cement sheath, gravel packs, and the like as described in u.s. pat. nos. 5,121,795; 5,123,487; and 5,127,473, each of which is incorporated by reference herein in its entirety. in embodiments, the method of the present disclosure may be employed in a secondary cementing operation. in these embodiments, data sensors are mixed with a sealant composition (e.g., a secondary cement slurry) at block 102 of fig. 1 , and during or subsequent to positioning and hardening of the cement, the sensors are interrogated to monitor the performance of the secondary cement in an analogous manner to the incorporation and monitoring of the data sensors in primary cementing methods disclosed hereinabove. for example, the mems sensors may be used to verify the location of the secondary sealant, one or more properties of the secondary sealant, that the secondary sealant is functioning properly and/or to monitor its long-term integrity.
in embodiments, the methods of the present disclosure are utilized for monitoring cementitious sealants (e.g., hydraulic cement), non-cementitious (e.g., polymer, latex or resin systems), or combinations thereof, which may be used in primary, secondary, or other sealing applications. for example, expandable tubulars such as pipe, pipe string, casing, liner, or the like are often sealed in a subterranean formation. the expandable tubular (e.g., casing) is placed in the wellbore, a sealing composition is placed into the wellbore, the expandable tubular is expanded, and the sealing composition is allowed to set in the wellbore. for example, after expandable casing is placed downhole, a mandrel may be run through the casing to expand the casing diametrically, with expansions up to 25% possible. the expandable tubular may be placed in the wellbore before or after placing the sealing composition in the wellbore. the expandable tubular may be expanded before, during, or after the set of the sealing composition. when the tubular is expanded during or after the set of the sealing composition, resilient compositions will remain competent due to their elasticity and compressibility. additional tubulars may be used to extend the wellbore into the subterranean formation below the first tubular as is known to those of skill in the art. sealant compositions and methods of using the compositions with expandable tubulars are disclosed in u.s. pat. nos. 6,722,433 and 7,040,404 and u.s. pat. pub. no. 2004/0167248, each of which is incorporated by reference herein in its entirety. in expandable tubular embodiments, the sealants may comprise compressible hydraulic cement compositions and/or non-cementitious compositions. compressible hydraulic cement compositions have been developed which remain competent (continue to support and seal the pipe) when compressed, and such compositions may comprise mems sensors. 
the sealant composition is placed in the annulus between the wellbore and the pipe or pipe string, the sealant is allowed to harden into an impermeable mass, and thereafter, the expandable pipe or pipe string is expanded whereby the hardened sealant composition is compressed. in embodiments, the compressible foamed sealant composition comprises a hydraulic cement, a rubber latex, a rubber latex stabilizer, a gas and a mixture of foaming and foam stabilizing surfactants. suitable hydraulic cements include, but are not limited to, portland cement and calcium aluminate cement. often, non-cementitious resilient sealants with comparable strength to cement, but greater elasticity and compressibility, are required for cementing expandable casing. in embodiments, these sealants comprise polymeric sealing compositions, and such compositions may comprise mems sensors. in an embodiment, the sealant composition comprises a polymer and a metal containing compound. in embodiments, the polymer comprises copolymers, terpolymers, and interpolymers. the metal-containing compounds may comprise zinc, tin, iron, selenium, magnesium, chromium, or cadmium. the compounds may be in the form of an oxide, carboxylic acid salt, a complex with dithiocarbamate ligand, or a complex with mercaptobenzothiazole ligand. in embodiments, the sealant comprises a mixture of latex, dithiocarbamate, zinc oxide, and sulfur. in embodiments, the methods of the present disclosure comprise adding data sensors to a sealant to be used behind expandable casing to monitor the integrity of the sealant upon expansion of the casing and during the service life of the sealant. in this embodiment, the sensors may comprise mems sensors capable of measuring, for example, moisture and/or temperature change. if the sealant develops cracks, water influx may thus be detected via moisture and/or temperature indication.
in an embodiment, the mems sensors are added to one or more wellbore servicing compositions used or placed downhole in drilling or completing a monodiameter wellbore as disclosed in u.s. pat. no. 7,066,284 and u.s. pat. pub. no. 2005/0241855, each of which is incorporated by reference herein in its entirety. in an embodiment, the mems sensors are included in a chemical casing composition used in a monodiameter wellbore. in another embodiment, the mems sensors are included in compositions (e.g., sealants) used to place expandable casing or tubulars in a monodiameter wellbore. examples of chemical casings are disclosed in u.s. pat. nos. 6,702,044; 6,823,940; and 6,848,519, each of which is incorporated herein by reference in its entirety. in one embodiment, the mems sensors are used to gather data, e.g., sealant data, and monitor the long-term integrity of the wellbore composition, e.g., sealant composition, placed in a wellbore, for example a wellbore for the recovery of natural resources such as water or hydrocarbons or an injection well for disposal or storage. in an embodiment, data/information gathered and/or derived from mems sensors in a downhole wellbore composition, e.g., sealant composition, comprises at least a portion of the input to and/or output from one or more calculators, simulations, or models used to predict, select, and/or monitor the performance of wellbore compositions, e.g., sealant compositions, over the life of a well. such models and simulators may be used to select a wellbore composition, e.g., sealant composition, comprising mems for use in a wellbore. after placement in the wellbore, the mems sensors may provide data that can be used to refine, recalibrate, or correct the models and simulators.
furthermore, the mems sensors can be used to monitor and record the downhole conditions that the composition, e.g., sealant, is subjected to, and composition, e.g., sealant, performance may be correlated to such long term data to provide an indication of problems or the potential for problems in the same or different wellbores. in various embodiments, data gathered from mems sensors is used to select a wellbore composition, e.g., sealant composition, or otherwise evaluate or monitor such sealants, as disclosed in u.s. pat. nos. 6,697,738; 6,922,637; and 7,133,778, each of which is incorporated by reference herein in its entirety. in an embodiment, the compositions and methodologies of this disclosure are employed in an operating environment that generally comprises a wellbore that penetrates a subterranean formation for the purpose of recovering hydrocarbons, storing hydrocarbons, injection of carbon dioxide, storage of carbon dioxide, disposal of carbon dioxide, and the like, and the mems located downhole (e.g., within the wellbore and/or surrounding formation) may provide information as to a condition and/or location of the composition and/or the subterranean formation. for example, the mems may provide information as to a location, flow path/profile, volume, density, temperature, pressure, or a combination thereof of a hydrocarbon (e.g., natural gas stored in a salt dome) or carbon dioxide placed in a subterranean formation such that effectiveness of the placement may be monitored and evaluated, for example detecting leaks, determining remaining storage capacity in the formation, etc. 
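the refinement loop just described (predict downhole conditions with a model, measure them via mems sensors, then correct the model) can be illustrated with a minimal sketch. the linear geothermal model, the least-squares correction, and all names and values below are illustrative assumptions, not part of this disclosure:

```python
# Hypothetical sketch: correcting a simple predictive model of downhole
# temperature using measurements reported by MEMS sensors in the sealant.
# The model form, names, and numbers are illustrative assumptions only.

def predict_temperature(depth_m, surface_temp_c=20.0, gradient_c_per_m=0.03):
    """Naive geothermal model: temperature rises linearly with depth."""
    return surface_temp_c + gradient_c_per_m * depth_m

def recalibrate_gradient(measurements, surface_temp_c=20.0):
    """Least-squares fit of the geothermal gradient from (depth_m, temp_c)
    pairs reported by MEMS sensors, used to correct the model."""
    num = sum(d * (t - surface_temp_c) for d, t in measurements)
    den = sum(d * d for d, _ in measurements)
    return num / den

# MEMS sensors report slightly hotter conditions than the default model
# assumed, so the fitted gradient replaces the default for future predictions.
mems_data = [(1000.0, 55.0), (2000.0, 90.0), (3000.0, 125.0)]
corrected = recalibrate_gradient(mems_data)
print(round(corrected, 3))  # -> 0.035
```

the same pattern (predict, measure, refit) applies to any modeled parameter the sensors can report, e.g., pressure or ion concentration.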
in some embodiments, the compositions of this disclosure are employed in an enhanced oil recovery operation wherein a wellbore that penetrates a subterranean formation may be subjected to the injection of gases (e.g., carbon dioxide) so as to improve hydrocarbon recovery from said wellbore, and the mems may provide information as to a condition and/or location of the composition and/or the subterranean formation. for example, the mems may provide information as to a location, flow path/profile, volume, density, temperature, pressure, or a combination thereof of carbon dioxide used in a carbon dioxide flooding enhanced oil recovery operation in real time such that the effectiveness of such operation may be monitored and/or adjusted in real time during performance of the operation to improve the result of same. referring to fig. 4 , a method 200 for selecting a sealant (e.g., a cementing composition) for sealing a subterranean zone penetrated by a wellbore according to the present embodiment basically comprises determining a group of effective compositions from a group of compositions given estimated conditions experienced during the life of the well, and estimating the risk parameters for each of the group of effective compositions. in an alternative embodiment, actual measured conditions experienced during the life of the well, in addition to or in lieu of the estimated conditions, may be used. such actual measured conditions may be obtained for example via sealant compositions comprising mems sensors as described herein. effectiveness considerations include concerns that the sealant composition be stable under downhole conditions of pressure and temperature, resist downhole chemicals, and possess the mechanical properties to withstand stresses from various downhole operations to provide zonal isolation for the life of the well. in step 212 , well input data for a particular well is determined and/or specified. 
well input data includes routinely measurable or calculable parameters inherent in a well, including vertical depth of the well, overburden gradient, pore pressure, maximum and minimum horizontal stresses, hole size, casing outer diameter, casing inner diameter, density of drilling fluid, desired density of sealant slurry for pumping, density of completion fluid, and top of sealant. as will be discussed in greater detail with reference to step 214 , the well can be computer modeled. in modeling, the stress state in the well at the end of drilling, and before the sealant slurry is pumped into the annular space, affects the stress state for the interface boundary between the rock and the sealant composition. thus, the stress state in the rock with the drilling fluid is evaluated, and properties of the rock such as young's modulus, poisson's ratio, and yield parameters are used to analyze the rock stress state. these terms and their methods of determination are well known to those skilled in the art. it is understood that well input data will vary between individual wells. in an alternative embodiment, well input data includes data that is obtained via sealant compositions comprising mems sensors as described herein. in step 214 , the well events applicable to the well are determined and/or specified. for example, cement hydration (setting) is a well event. other well events include pressure testing, well completions, hydraulic fracturing, hydrocarbon production, fluid injection, perforation, subsequent drilling, formation movement as a result of producing hydrocarbons at high rates from unconsolidated formation, and tectonic movement after the sealant composition has been pumped in place. 
well events include those events that are certain to happen during the life of the well, such as cement hydration, and those events that are readily predicted to occur during the life of the well, given a particular well's location, rock type, and other factors well known in the art. in an embodiment, well events and data associated therewith may be obtained via sealant compositions comprising mems sensors as described herein. each well event is associated with a certain type of stress, for example, cement hydration is associated with shrinkage, pressure testing is associated with pressure, well completions, hydraulic fracturing, and hydrocarbon production are associated with pressure and temperature, fluid injection is associated with temperature, formation movement is associated with load, and perforation and subsequent drilling are associated with dynamic load. as can be appreciated, each type of stress can be characterized by an equation for the stress state (collectively “well event stress states”), as described in more detail in u.s. pat. no. 7,133,778 which is incorporated herein by reference in its entirety. in step 216 , the well input data, the well event stress states, and the sealant data are used to determine the effect of well events on the integrity of the sealant sheath during the life of the well for each of the sealant compositions. the sealant compositions that would be effective for sealing the subterranean zone, and the remaining capacity of each relative to its elastic limit, are determined. in an alternative embodiment, the estimated effects over the life of the well are compared to and/or corrected in comparison to corresponding actual data gathered over the life of the well via sealant compositions comprising mems sensors as described herein. step 216 concludes by determining which sealant compositions would be effective in maintaining the integrity of the resulting cement sheath for the life of the well.
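the effectiveness screening of step 216 , together with the risk and cost selection of steps 218 - 220 , can be sketched as follows. the data structures, the risk measure (fraction of stress capacity consumed by the most severe event), and the threshold value are illustrative assumptions only, not the disclosed method itself:

```python
# Illustrative sketch of the selection workflow of steps 212-220: screen
# candidate sealant compositions against the well event stress states,
# score a risk parameter for each effective composition, then pick the
# lowest-cost composition with acceptable risk. All values are assumed.

def select_sealant(candidates, well_events, max_risk=0.2):
    effective = []
    for c in candidates:
        # step 216: a composition is effective only if its rated stress
        # capacity covers every well event stress state
        if all(c["capacity"] >= ev["stress"] for ev in well_events):
            # step 218: risk parameter as the fraction of capacity
            # consumed by the most severe event
            risk = max(ev["stress"] for ev in well_events) / c["capacity"]
            effective.append((c, risk))
    # step 220: among acceptable-risk compositions, choose the cheapest
    acceptable = [(c, r) for c, r in effective if r <= max_risk]
    if not acceptable:
        return None
    return min(acceptable, key=lambda cr: cr[0]["cost"])[0]

events = [{"name": "hydration", "stress": 10.0},
          {"name": "pressure test", "stress": 18.0}]
candidates = [{"name": "A", "capacity": 100.0, "cost": 50.0},
              {"name": "B", "capacity": 120.0, "cost": 80.0},
              {"name": "C", "capacity": 20.0, "cost": 10.0}]
print(select_sealant(candidates, events)["name"])  # -> A
```

composition c is effective but carries too much risk, so the cheaper of the two low-risk compositions is selected, mirroring the cost-benefit analysis described above.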
in step 218 , parameters for risk of sealant failure for the effective sealant compositions are determined. for example, even though a sealant composition is deemed effective, one sealant composition may be more effective than another. in one embodiment, the risk parameters are calculated as percentages of sealant competency during the determination of effectiveness in step 216 . in an alternative embodiment, the risk parameters are compared to and/or corrected in comparison to actual data gathered over the life of the well via sealant compositions comprising mems sensors as described herein. step 218 provides data that allows a user to perform a cost benefit analysis. due to the high cost of remedial operations, it is important that an effective sealant composition is selected for the conditions anticipated to be experienced during the life of the well. it is understood that each of the sealant compositions has a readily calculable monetary cost. under certain conditions, several sealant compositions may be equally efficacious, yet one may have the added virtue of being less expensive. thus, the less expensive composition should be selected to minimize costs. more commonly, one sealant composition will be more efficacious, but also more expensive. accordingly, in step 220 , an effective sealant composition with acceptable risk parameters is selected given the desired cost. furthermore, the overall results of steps 200 - 220 can be compared to actual data that is obtained via sealant compositions comprising mems sensors as described herein, and such data may be used to modify and/or correct the inputs and/or outputs to the various steps 200 - 220 to improve the accuracy of same. as discussed above and with reference to fig. 2 , wipers are often utilized during conventional primary cementing to force cement slurry out of the casing.
the wiper plug also serves another purpose: typically, the end of a cementing operation is signaled when the wiper plug contacts a restriction (e.g., casing shoe) inside the casing 20 at the bottom of the string. when the plug contacts the restriction, a sudden pressure increase at pump 30 is registered. in this way, it can be determined when the cement has been displaced from the casing 20 and fluid flow returning to the surface via casing annulus 26 stops. in reverse circulation cementing, it is also necessary to correctly determine when cement slurry completely fills the annulus 26 . continuing to pump cement into annulus 26 after cement has reached the far end of annulus 26 forces cement into the far end of casing 20 , which could incur lost time if cement must be drilled out to continue drilling operations. the methods disclosed herein may be utilized to determine when cement slurry has been appropriately positioned downhole. furthermore, as discussed hereinbelow, the methods of the present disclosure may additionally comprise using a mems sensor to actuate a valve or other mechanical means to close and prevent cement from entering the casing upon determination of completion of a cementing operation. the way in which the method of the present disclosure may be used to signal when cement is appropriately positioned within annulus 26 will now be described within the context of a reverse circulation cementing operation. fig. 3 is a flowchart of a method for determining completion of a cementing operation and optionally further actuating a downhole tool upon completion (or to initiate completion) of the cementing operation. this description will reference the flowchart of fig. 3 , as well as the wellbore depiction of fig. 2 . at block 130 , a data interrogation tool as described hereinabove is positioned at the far end of casing 20 . 
in an embodiment, the data interrogation tool is incorporated with or adjacent to a casing shoe positioned at the bottom end of the casing and in communication with operators at the surface. at block 132 , mems sensors are added to a fluid (e.g., cement slurry, spacer fluid, displacement fluid, etc.) to be pumped into annulus 26 . at block 134 , cement slurry is pumped into annulus 26 . in an embodiment, mems sensors may be placed in substantially all of the cement slurry pumped into the wellbore. in an alternative embodiment, mems sensors may be placed in a leading plug or otherwise placed in an initial portion of the cement to indicate a leading edge of the cement slurry. in an embodiment, mems sensors are placed in leading and trailing plugs to signal the beginning and end of the cement slurry. while cement is continuously pumped into annulus 26 , at decision 136 , the data interrogation tool is attempting to detect whether the data sensors are in communicative (e.g., close) proximity with the data interrogation tool. as long as no data sensors are detected, the pumping of additional cement into the annulus continues. when the data interrogation tool detects the sensors at block 138 indicating that the leading edge of the cement has reached the bottom of the casing, the interrogator sends a signal to terminate pumping. the cement in the annulus is allowed to set and form a substantially impermeable mass which physically supports and positions the casing in the wellbore and bonds the casing to the walls of the wellbore in block 148 . if the fluid of block 132 is the cement slurry, mems-based data sensors are incorporated within the set cement, and parameters of the cement (e.g., temperature, pressure, ion concentration, stress, strain, etc.) can be monitored during placement and for the duration of the service life of the cement according to methods disclosed hereinabove.
alternatively, the data sensors may be added to an interface fluid (e.g., spacer fluid or other fluid plug) introduced into the annulus prior to and/or after introduction of cement slurry into the annulus. the method just described for determination of the completion of a primary wellbore cementing operation may further comprise the activation of a downhole tool. for example, at block 130 , a valve or other tool may be operably associated with a data interrogation tool at the far end of the casing. this valve may be contained within float shoe 22 , for example, as disclosed hereinabove. again, float shoe 22 may contain an integral data interrogation tool, or may otherwise be coupled to a data interrogation tool. for example, the data interrogation tool may be positioned between casing 20 and float shoe 22 . following the method previously described and blocks 132 to 136 , pumping continues as the data interrogation tool detects the presence or absence of data sensors in close proximity to the interrogator tool (dependent upon the specific cementing method being employed, e.g., reverse circulation, and the positioning of the sensors within the cement flow). upon detection of a determinative presence or absence of sensors in close proximity indicating the termination of the cement slurry, the data interrogation tool sends a signal to actuate the tool (e.g., valve) at block 140 . at block 142 , the valve closes, sealing the casing and preventing cement from entering the portion of casing string above the valve in a reverse cementing operation. at block 144 , the closing of the valve at block 142 causes an increase in back pressure that is detected at the hydraulic pump 30 . at block 146 , pumping is discontinued, and cement is allowed to set in the annulus at block 148 .
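the pump-and-detect loop of blocks 132 - 146 (fig. 3 ) can be sketched as a simple simulation. the annulus and batch volumes, and the assumption that detection occurs once the tagged leading edge has traversed the annulus, are illustrative only:

```python
# Minimal sketch of the reverse-circulation completion loop: pump cement
# into the annulus until the interrogator at the casing shoe detects MEMS
# sensors in the leading edge of the slurry, then actuate the valve and
# stop the pump. The simulated volumes are illustrative assumptions.

def cement_job(annulus_volume, batch_volume, sensors_in_lead_edge=True):
    """Returns (volume pumped, valve_closed) once the interrogator detects
    the tagged leading edge reaching the shoe."""
    pumped = 0.0
    valve_closed = False
    while True:
        pumped += batch_volume            # block 134: pump a batch
        # decision 136 / block 138: interrogator sees sensors once the
        # tagged leading edge of the slurry has traversed the annulus
        detected = sensors_in_lead_edge and pumped >= annulus_volume
        if detected:
            valve_closed = True           # blocks 140-142: actuate valve
            break                         # blocks 144-146: back pressure
                                          # rises and pumping stops
    return pumped, valve_closed

pumped, closed = cement_job(annulus_volume=100.0, batch_volume=7.5)
print(pumped, closed)  # -> 105.0 True
```

in practice the detection event, not a volume estimate, terminates the job, which is the point of tagging the slurry with mems sensors.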
in embodiments wherein data sensors have been incorporated throughout the cement, parameters of the cement (and thus cement integrity) can additionally be monitored during placement and for the duration of the service life of the cement according to methods disclosed hereinabove. in embodiments, systems for sensing, communicating and evaluating wellbore parameters may include the wellbore 18 ; the casing 20 or other workstring, toolstring, production string, tubular, coiled tubing, wireline, or any other physical structure or conveyance extending downhole from the surface; mems sensors 52 that may be placed into the wellbore 18 and/or surrounding formation 14 , for example, via a wellbore servicing fluid; and a device or plurality of devices for interrogating the mems sensors 52 to gather/collect data generated by the mems sensors 52 , for transmitting the data from the mems sensors 52 to the earth's surface 16 , for receiving communications and/or data to the earth's surface, for processing the data, or any combination thereof, referred to collectively herein as data interrogation/communication units or in some instances as a data interrogator or data interrogation tool. unless otherwise specified, it is understood that such devices as disclosed in the various embodiments herein will have mems sensor interrogation functionality, communication functionality (e.g., transceiver functionality), or both, as will be apparent from the particular embodiments and associated context disclosed herein. the wellbore servicing fluid comprising the mems sensors 52 may comprise a drilling fluid, a spacer fluid, a sealant, a fracturing fluid, a gravel pack fluid, a completion fluid, or any other fluid placed downhole.
in addition, the mems sensors 52 may be configured to measure physical parameters such as temperature, stress and strain, as well as chemical parameters such as co2 concentration, h2s concentration, ch4 concentration, moisture content, ph, na+ concentration, k+ concentration, and cl− concentration. various embodiments described herein are directed to interrogation/communication units that are dispersed or distributed at intervals along a length of the casing 20 and form a communication network for transmitting and/or receiving communications to/from a location downhole and the surface, with the further understanding that the interrogation/communication units may be otherwise physically supported by a workstring, toolstring, production string, tubular, coiled tubing, wireline, or any other physical structure or conveyance extending downhole from the surface. referring to fig. 5 , a schematic view of a wellbore parameter sensing system 300 is illustrated. the wellbore parameter sensing system 300 may comprise the wellbore 18 , inside which the casing 20 is positioned. in an embodiment, the wellbore parameter sensing system 300 may comprise one or more (e.g., a plurality) of data interrogation/communication units 310 , which may be situated on the casing 20 and spaced at regular or irregular intervals along the casing 20 . in embodiments, the data interrogation/communication units 310 may be situated on or in casing collars that couple casing joints together. for example, the interrogation/communication units 310 may be located in side pocket mandrels or other spaces/voids within the casing collar or casing joint. in addition, the data interrogation/communication units 310 may be situated in an interior of the casing 20 , on an exterior of the casing 20 , or both.
in an embodiment, the data interrogation/communication units 310 a may be coupled to one another by an electrical cable 320 , which may run along an entire length of the casing 20 up to the earth's surface (where they may connect to other components such as a processor 330 and a power source 340 ), and are configured to transmit data between the data interrogation/communication units 310 and/or the earth's surface (e.g., the processor 330 ), supply power from the power source 340 to the data interrogation/communication units 310 , or both. in alternative embodiments, all or a portion of the interrogation/communication units 310 b communicate wirelessly with one another. in an embodiment, the data interrogation/communication units 310 may be configured as regional data interrogation/communication units 310 , which may be spaced apart about every 5 m to 15 m along the length of the casing 20 , alternatively about every 8 m to 12 m along the length of the casing 20 , alternatively about every 10 m along the length of the casing 20 . each regional data interrogation/communication unit 310 may be configured to interrogate, and receive data from, the mems sensors 52 in a vicinity of the regional data interrogation/communication unit 310 . 
the vicinity of the regional data interrogation/communication unit 310 may be defined as an approximately cylindrical region extending upward from the regional data interrogation/communication unit 310 , up to half a distance from the regional data interrogation/communication unit 310 in question to a regional data interrogation/communication unit 310 immediately uphole from the regional data interrogation/communication unit 310 in question, and extending downward from the regional data interrogation/communication unit 310 , up to half a distance from the regional data interrogation/communication unit 310 in question to a regional data interrogation/communication unit 310 immediately downhole from the regional data interrogation/communication unit 310 in question. the approximately cylindrical region may also extend outward from a centerline of the casing 20 , past an outer wall of the casing 20 , past a wall of the wellbore 18 , and about 0.05 m to 0.15 m, alternatively about 0.08 m to 0.12 m, alternatively about 0.1 m, into a formation through which the wellbore 18 passes. all or a portion of the regional data interrogation/communication units 310 may communicate with each other via wired communications (e.g., units 310 a ), wireless communications (e.g., 310 b ), or both. in an embodiment, each mems sensor 52 situated in the casing 20 and/or in the annulus and/or in the formation, as well as in the vicinity of the regional data interrogation/communication unit 310 , may transmit data regarding one or more parameters sensed by the mems sensor 52 directly to the regional data interrogation/communication unit 310 in response to being interrogated by the regional data interrogation/communication unit 310 . 
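the "vicinity" geometry just described (each regional unit owns a cylinder extending halfway to its uphole and downhole neighbours along the casing, and radially about 0.1 m into the formation) can be sketched as follows. the depth values and the wellbore radius are illustrative assumptions:

```python
# Sketch of the vicinity region of a regional data interrogation/
# communication unit. Units are identified by measured depth along the
# casing; the wellbore radius is an assumed illustrative value.

def vicinity_bounds(unit_depths, i):
    """Axial interval (uphole, downhole) owned by the i-th regional unit,
    given sorted measured depths of the units along the casing. End units
    own everything beyond the last neighbour."""
    d = unit_depths[i]
    up = d - (d - unit_depths[i - 1]) / 2 if i > 0 else float("-inf")
    down = (d + (unit_depths[i + 1] - d) / 2
            if i < len(unit_depths) - 1 else float("inf"))
    return up, down

def in_vicinity(sensor_depth, sensor_radius, unit_depths, i,
                wellbore_radius=0.1, formation_reach=0.1):
    """True if a sensor lies in the i-th unit's cylinder: within the axial
    half-distance bounds and no more than ~0.1 m into the formation."""
    up, down = vicinity_bounds(unit_depths, i)
    radial_ok = sensor_radius <= wellbore_radius + formation_reach
    return up <= sensor_depth <= down and radial_ok

units = [100.0, 110.0, 120.0]              # regional units every 10 m
print(vicinity_bounds(units, 1))           # -> (105.0, 115.0)
print(in_vicinity(112.0, 0.15, units, 1))  # -> True
```

with 10 m spacing each unit owns a 10 m tall cylinder centered on itself, consistent with the half-distance rule described above.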
in an embodiment, the mems sensors 52 in the vicinity of the regional data interrogation/communication unit 310 may form regional networks of mems sensors 52 (and in some embodiments, with regional networks of mems sensors generally corresponding to and communicating with one or more similarly designated regional data interrogation/communication units 310 ) and transmit mems sensor data inwards and/or outwards and/or upwards and/or downwards through the casing 20 and/or through the annulus 26 , to the regional data interrogation/communication unit 310 via the regional networks of mems sensors 52 . the double arrows 312 , 314 signify transmission of sensor data via regional networks of mems sensors 52 , and the single arrows 316 , 318 signify transmission of sensor data directly from one or more mems sensors to the regional data interrogation/communication units 310 . in an embodiment, the mems sensors 52 (including a network of mems sensors) may be passive sensors, i.e., may be powered, for example, by bursts of electromagnetic radiation from the regional data interrogation/communication units 310 . in an embodiment, the mems sensors 52 (including a network of mems sensors) may be active sensors, i.e., powered by a battery or batteries situated in or on the sensor 52 . in an embodiment, batteries of the mems sensors 52 may be inductively rechargeable by the regional data interrogation/communication units 310 . referring to fig. 6 , a schematic view of a further embodiment of a wellbore parameter sensing system 400 is illustrated. the wellbore parameter sensing system 400 may comprise the wellbore 18 , inside which the casing 20 is situated. in an embodiment, the wellbore parameter sensing system 400 further comprises a processor 410 configured to receive and process sensor data from mems sensors 52 , which are situated in the wellbore 18 and are configured to measure at least one parameter inside the wellbore 18 . 
the embodiment of wellbore parameter sensing system 400 differs from that of wellbore parameter sensing system 300 illustrated in fig. 5 , in that the wellbore sensing system 400 does not comprise any data interrogation/communication units (or comprises very few, for example one at the end of a casing string such as in a cement shoe and/or a few spaced at lengthy intervals in comparison to fig. 5 ) for interrogating, and receiving sensor data from, the mems sensors 52 . instead, the mems sensors 52 , which, in an embodiment, are powered by batteries (or otherwise are powered by a downhole power source such as ambient conditions, e.g., temperature, fluid flow, etc.) situated in the sensors 52 , are configured to form a global data transmission network of mems sensors 52 (e.g., a “daisy-chain” network) extending along the entire length of the wellbore 18 . accordingly, sensor data generated by mems sensors 52 at all elevations of the wellbore 18 may be transmitted to neighboring mems sensors 52 and uphole along the entire length of the wellbore 18 to the processor 410 . double arrows 412 , 414 denote transmission of sensor data between neighboring mems sensors 52 . single arrows 416 , 418 denote transmission of sensor data up the wellbore 18 via the global network of mems sensors 52 , and single arrows 420 , 422 denote transmission of sensor data from the annulus 26 and the interior of the casing 20 to the exterior of the wellbore 18 , for example to a processor 410 or other data capture, storage, or transmission equipment. in an embodiment, the mems sensors 52 are contained in a wellbore servicing fluid placed in the wellbore 18 and are present in the wellbore servicing fluid at a mems sensor loading sufficient for reliable transmission of mems sensor data from the interior of the wellbore 18 to the processor 410 . referring to fig. 7 , a schematic view of an embodiment of a wellbore parameter sensing system 500 is illustrated.
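the "daisy-chain" relay of the fig. 6 system described above, where each sensor can only reach neighbours within a short radio range and a reading hops uphole until it reaches the surface processor, can be sketched as follows. the sensor depths and hop range are illustrative assumptions, and the sketch also shows why a sufficient sensor loading matters, since a gap wider than the hop range breaks the chain:

```python
# Sketch of multi-hop relay through a daisy-chain network of MEMS sensors.
# Depths are in meters below the surface processor at depth 0; the hop
# (radio) range is an assumed illustrative value.

def hops_to_surface(sensor_depths, start_depth, hop_range):
    """Count relay hops from a sensor at start_depth up to the surface
    processor at depth 0, relaying to the shallowest reachable sensor on
    each hop. Returns None if sparse sensor loading breaks the chain."""
    depth, hops = start_depth, 0
    while True:
        if depth <= hop_range:      # final hop reaches the processor
            return hops + 1
        # neighbours uphole and within radio range of the current sensor
        reachable = [d for d in sensor_depths
                     if depth - hop_range <= d < depth]
        if not reachable:
            return None             # loading too sparse: chain broken
        depth = min(reachable)      # relay to the shallowest neighbour
        hops += 1

# sensors every 2 m down to 10 m, radio range 3 m
depths = [2.0, 4.0, 6.0, 8.0, 10.0]
print(hops_to_surface(depths, 10.0, 3.0))  # -> 5
```

a lone sensor at 9 m cannot relay a 10 m reading to the surface with a 3 m range, illustrating the minimum loading requirement noted above.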
the wellbore parameter sensing system 500 may comprise the wellbore 18 , inside which the casing 20 is situated. in an embodiment, the wellbore parameter sensing system 500 may comprise one or more data interrogation/communication units 510 a and/or 510 b , which may be situated on the casing 20 . in embodiments, the data interrogation/communication unit 510 may be situated on or in a casing collar that couples casing joints together, at the end of a casing string such as a casing shoe, or any other suitable support location along a mechanical conveyance extending from the surface into the wellbore. in addition, the data interrogation/communication unit 510 may be situated in an interior of the casing 20 , on an exterior of the casing 20 , or both. in an embodiment, the data interrogation/communication unit 510 may be situated part way, e.g., about midway, between a downhole end of the wellbore 18 and an uphole end of the wellbore 18 . in an embodiment, the data interrogation/communication unit 510 a may be powered by a power source 540 , which is situated at an exterior of the wellbore 18 and is connected to the data interrogation/communication unit 510 a by an electrical cable 520 . the electrical cable 520 may be situated in the annulus 26 in close proximity to, or in contact with, an outer wall of the casing 20 and run along at least a portion of the length of the casing 20 . in an embodiment, the data interrogation/communication unit, e.g., unit 510 b , is powered and/or communicates wirelessly. in an embodiment, the wellbore parameter sensing system 500 may further comprise a processor 530 , which is connected to the data interrogation/communication unit 510 a via the electrical cable 520 and is configured to receive mems sensor data from the data interrogation/communication unit 510 a and process the mems sensor data. 
in an embodiment, the wellbore parameter sensing system 500 may further comprise a processor 530 , which is wirelessly connected to the data interrogation/communication unit 510 b and is configured to receive mems sensor data from the data interrogation/communication unit 510 b and process the mems sensor data. in an embodiment, the mems sensors 52 may be passive sensors, i.e., may be powered, for example, by bursts of electromagnetic radiation from the data interrogation/communication unit 510 . in an embodiment, the mems sensors 52 may be active sensors, i.e., powered by a battery or batteries situated in or on the sensor 52 or by other downhole power sources. in an embodiment, batteries of the mems sensors 52 may be inductively rechargeable. in an embodiment, mems sensors 52 may be placed inside the wellbore 18 via a wellbore servicing fluid. the mems sensors 52 are configured to measure at least one wellbore parameter and transmit sensor data regarding the at least one wellbore parameter to the data interrogation/communication unit 510 . as in the case of the embodiment of the wellbore parameter sensing system 400 illustrated in fig. 6 , the mems sensors 52 may transmit mems sensor data to neighboring mems sensors 52 , thereby forming data transmission networks of mems sensors for the purpose of transmitting mems sensor data from mems sensors 52 situated away from the data interrogation/communication unit 510 to the data interrogation/communication unit 510 . however, in contrast to the embodiment of the wellbore parameter sensing system 400 illustrated in fig. 6 , the mems sensors 52 in the embodiment of the wellbore parameter sensing system 500 illustrated in fig. 7 may, in some instances, not have to transmit mems sensor data along the entire length of the wellbore 18 , but rather only along a portion of the length of the wellbore 18 , for example to reach a given primary or regional data interrogation/communication unit. 
horizontal double arrows 512 , 514 denote transmission of sensor data between mems sensors 52 situated in the annulus 26 and inside the casing 20 , downwardly oriented single arrows 516 , 518 denote transmission of sensor data downhole to the data interrogation/communication unit 510 , and upwardly oriented single arrows 522 , 524 denote transmission of sensor data uphole to the data interrogation/communication unit 510 . referring to fig. 8 , a schematic view of an embodiment of a wellbore parameter sensing system 600 is illustrated. the wellbore parameter sensing system 600 may comprise the wellbore 18 , inside which the casing 20 is situated. in an embodiment, the wellbore parameter sensing system 600 may further comprise a plurality of regional communication units 610 , which may be situated on the casing 20 and spaced at regular or irregular intervals along the casing, e.g., about every 5 m to 15 m along the length of the casing 20 , alternatively about every 8 m to 12 m along the length of the casing 20 , alternatively about every 10 m along the length of the casing 20 . in embodiments, the regional communication units 610 may be situated on or in casing collars that couple casing joints together. in addition, the regional communication units 610 may be situated in an interior of the casing 20 , on an exterior of the casing 20 , or both. in an embodiment, the wellbore parameter sensing system 600 may further comprise a tool (e.g., a data interrogator 620 or other data collection and/or power-providing device), which may be lowered down into the wellbore 18 on a wireline 622 , as well as a processor 630 or other data storage or communication device, which is connected to the data interrogator 620 . 
in an embodiment, each regional communication unit 610 may be configured to interrogate and/or receive data from mems sensors 52 situated in the annulus 26 in the vicinity of the regional communication unit 610 , whereby the vicinity of the regional communication unit 610 is defined as in the above discussion of the wellbore parameter sensing system 300 illustrated in fig. 5 . the mems sensors 52 may be configured to transmit mems sensor data to neighboring mems sensors 52 , as denoted by double arrows 632 , as well as to transmit mems sensor data to the regional communication units 610 in their respective vicinities, as denoted by single arrows 634 . in an embodiment, the mems sensors 52 may be passive sensors that are powered by bursts of electromagnetic radiation from the regional communication units 610 . in a further embodiment, the mems sensors 52 may be active sensors that are powered by batteries situated in or on the mems sensors 52 or by other downhole power sources. in contrast with the embodiment of the wellbore parameter sensing system 300 illustrated in fig. 5 , the regional communication units 610 in the present embodiment of the wellbore parameter sensing system 600 are neither wired to one another, nor wired to the processor 630 or other surface equipment. accordingly, in an embodiment, the regional communication units 610 may be powered by batteries, which enable the regional communication units 610 to interrogate the mems sensors 52 in their respective vicinities and/or receive mems sensor data from the mems sensors 52 in their respective vicinities. the batteries of the regional communication units 610 may be inductively rechargeable by the data interrogator 620 or may be rechargeable by other downhole power sources.
in addition, as set forth above, the data interrogator 620 may be lowered into the wellbore 18 for the purpose of interrogating regional communication units 610 and receiving the mems sensor data stored in the regional communication units 610 . furthermore, the data interrogator 620 may be configured to transmit the mems sensor data to the processor 630 , which processes the mems sensor data. in an embodiment, a fluid containing mems is contained within the wellbore casing (for example, as shown in figs. 5 , 6 , 7 , and 10 ), and the data interrogator 620 is conveyed through such fluid and into communicative proximity with the regional communication units 610 . in various embodiments, the data interrogator 620 may communicate with, power up, and/or gather data directly from the various mems sensors distributed within the annulus 26 and/or the casing 20 , and such direct interaction with the mems sensors may be in addition to or in lieu of communication with one or more of the regional communication units 610 . for example, if a given regional communication unit 610 experiences an operational failure, the data interrogator 620 may directly communicate with the mems within the given region experiencing the failure, and thereby serve as a backup (or secondary/verification) data collection option.

referring to fig. 9 , a schematic view of an embodiment of a wellbore parameter sensing system 700 is illustrated. as in earlier-described embodiments, the wellbore parameter sensing system 700 comprises the wellbore 18 and the casing 20 that is situated inside the wellbore 18 . in addition, as in the case of other embodiments illustrated in the figures (e.g., figs.
5 and 8 ), the wellbore parameter sensing system 700 comprises a plurality of regional communication units 710 , which may be situated on the casing 20 and spaced at regular or irregular intervals along the casing, e.g., about every 5 m to 15 m along the length of the casing 20 , alternatively about every 8 m to 12 m along the length of the casing 20 , alternatively about every 10 m along the length of the casing 20 . in embodiments, the regional communication units 710 may be situated on or in casing collars that couple casing joints together. in addition, the regional communication units 710 may be situated in an interior of the casing 20 , on an exterior of the casing 20 , or both, or may be otherwise located and supported as described in various embodiments herein. in contrast to the embodiment of the wellbore parameter sensing system 300 illustrated in fig. 5 , in an embodiment, the wellbore parameter sensing system 700 further comprises one or more primary (or master) communication units 720 . the regional communication units 710 a and the primary communication unit 720 a may be coupled to one another by a data line 730 , which allows sensor data obtained by the regional communication units 710 a from mems sensors 52 situated in the annulus 26 to be transmitted from the regional communication units 710 a to the primary communication unit 720 a , as indicated by directional arrows 732 . in an embodiment, the mems sensors 52 may sense at least one wellbore parameter and transmit data regarding the at least one wellbore parameter to the regional communication units 710 b , either via neighboring mems sensors 52 as denoted by double arrow 734 , or directly to the regional communication units 710 as denoted by single arrows 736 . 
the regional communication units 710 b may communicate wirelessly with the primary or master communication unit 720 b , which may in turn communicate wirelessly with equipment located at the surface (or via telemetry such as casing signal telemetry) and/or other regional communication units 710 a and/or other primary or master communication units 720 a. in embodiments, the primary or master communication units 720 gather information from the mems sensors and transmit (e.g., wirelessly, via wire, via telemetry such as casing signal telemetry, etc.) such information to equipment (e.g., processor 750 ) located at the surface. in an embodiment, the wellbore parameter sensing system 700 further comprises, additionally or alternatively, a data interrogator 740 , which may be lowered into the wellbore 18 via a wireline 742 , as well as a processor 750 , which is connected to the data interrogator 740 . in an embodiment, the data interrogator 740 is suspended adjacent to the primary communication unit 720 , interrogates the primary communication unit 720 , receives mems sensor data collected by all of the regional communication units 710 and transmits the mems sensor data to the processor 750 for processing. the data interrogator 740 may provide other functions, for example as described with reference to data interrogator 620 of fig. 8 . in various embodiments, the data interrogator 740 (and likewise the data interrogator 620 ) may communicate directly or indirectly with any one or more of the mems sensors (e.g., sensors 52 ), local or regional data interrogation/communication units (e.g., units 310 , 510 , 610 , 710 ), primary or master communication units (e.g., units 720 ), or any combination thereof.

referring to fig. 10 , a schematic view of an embodiment of a wellbore parameter sensing system 800 is illustrated.
as in earlier-described embodiments, the wellbore parameter sensing system 800 comprises the wellbore 18 and the casing 20 that is situated inside the wellbore 18 . in addition, as in the case of other embodiments shown in figs. 5-9 , the wellbore parameter sensing system 800 comprises a plurality of local, regional, and/or primary/master communication units 810 , which may be situated on the casing 20 and spaced at regular or irregular intervals along the casing 20 , e.g., about every 5 m to 15 m along the length of the casing 20 , alternatively about every 8 m to 12 m along the length of the casing 20 , alternatively about every 10 m along the length of the casing 20 . in embodiments, the communication units 810 may be situated on or in casing collars that couple casing joints together. in addition, the communication units 810 may be situated in an interior of the casing 20 , on an exterior of the casing 20 , or both, or may be otherwise located and supported as described in various embodiments herein. in an embodiment, mems sensors 52 , which are present in a wellbore servicing fluid that has been placed in the wellbore 18 , may sense at least one wellbore parameter and transmit data regarding the at least one wellbore parameter to the local, regional, and/or primary/master communication units 810 , either via neighboring mems sensors 52 as denoted by double arrows 812 , 814 , or directly to the communication units 810 as denoted by single arrows 816 , 818 . in an embodiment, the wellbore parameter sensing system 800 may further comprise a data interrogator 820 , which is connected to a processor 830 and is configured to interrogate each of the communication units 810 for mems sensor data via a ground penetrating signal 822 and to transmit the mems sensor data to the processor 830 for processing. in a further embodiment, one or more of the communication units 810 may be coupled together by a data line (e.g., wired communications). 
in this embodiment, the mems sensor data collected from the mems sensors 52 by the regional communication units 810 may be transmitted via the data line to, for example, the regional communication unit 810 situated furthest uphole. in this case, only one regional communication unit 810 is interrogated by the surface-located data interrogator 820 . in addition, since the regional communication unit 810 receiving all of the mems sensor data is situated uphole from the remainder of the regional communication units 810 , an energy and/or parameter (intensity, strength, wavelength, amplitude, frequency, etc.) of the ground penetrating signal 822 may be reduced. in other embodiments, a data interrogator (such as unit 620 or 740 ) may be used in addition to or in lieu of the surface unit 820 , for example to serve as a back-up in the event of operational difficulties associated with surface unit 820 and/or to provide or serve as a relay between surface unit 820 and one or more units downhole such as a regional unit 810 located at an upper end of a string of interrogator units.

for the sake of clarity, it should be understood that like components as described in any of figs. 5-10 may be combined and/or substituted to yield additional embodiments, and the functionality of such components in such additional embodiments will be apparent based upon the description of figs. 5-10 and the various components therein. for example, in various embodiments disclosed herein (including but not limited to the embodiments of figs.
5-10 ), the local, regional, and/or primary/master communication/data interrogation units (e.g., units 310 , 510 , 610 , 620 , 710 , 740 , and/or 810 ) may communicate with one another and/or equipment located at the surface via signals passed using a common structural support as the transmission medium (e.g., casing, tubular, production tubing, drill string, etc.), for example by encoding a signal using telemetry technology such as an electrical/mechanical transducer. in various embodiments disclosed herein (including but not limited to the embodiments of figs. 5-10 ), the local, regional, and/or primary/master communication/data interrogation units (e.g., units 310 , 510 , 610 , 620 , 710 , 740 , and/or 810 ) may communicate with one another and/or equipment located at the surface via signals passed using a network formed by the mems sensors (e.g., a daisy-chain network) distributed along the wellbore, for example in the annular space 26 (e.g., in a cement) and/or in a wellbore servicing fluid inside casing 20 . in various embodiments disclosed herein (including but not limited to the embodiments of figs. 5-10 ), the local, regional, and/or primary/master communication/data interrogation units (e.g., units 310 , 510 , 610 , 620 , 710 , 740 , and/or 810 ) may communicate with one another and/or equipment located at the surface via signals passed using a ground penetrating signal produced at the surface, for example being powered up by such a ground-penetrating signal and transmitting a return signal back to the surface via a reflected signal and/or a daisy-chain network of mems sensors and/or wired communications and/or telemetry transmitted along a mechanical conveyance/medium. 
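the sensor-to-sensor ("daisy-chain") relaying described in the embodiments above can be sketched as a simple simulation. this is an illustrative model only — the sensor ids, depths, and wireless hop range below are assumed values for illustration, not parameters taken from the disclosure.

```python
# Illustrative sketch (not the disclosed implementation) of hop-by-hop,
# "daisy-chain" relaying of MEMS sensor data toward the shallowest
# (furthest-uphole) sensor in a string. All values are assumptions.

def hops_to_uphole(depths, hop_range):
    """Count the sensor-to-sensor hops needed for each sensor's data to
    reach the shallowest sensor.

    depths:    dict mapping sensor id -> depth in m (smaller = more uphole)
    hop_range: maximum depth difference (m) that one wireless hop can span
    Returns a dict mapping sensor id -> hop count, or None when no chain
    of in-range neighbors reaches the shallowest sensor.
    """
    order = sorted(depths, key=depths.get)  # shallowest sensor first
    hops = {}
    for sid in order:
        count, current = 0, sid
        while current != order[0]:
            # sensors uphole of `current` and within one hop of it
            reachable = [s for s in order
                         if depths[s] < depths[current]
                         and depths[current] - depths[s] <= hop_range]
            if not reachable:
                count = None  # no uphole neighbor in range: chain broken
                break
            # relay to the nearest uphole neighbor (deepest reachable sensor)
            current = max(reachable, key=depths.get)
            count += 1
        hops[sid] = count
    return hops

# three sensors spaced 8 m apart with an assumed 10 m hop range
print(hops_to_uphole({"s1": 100.0, "s2": 108.0, "s3": 116.0}, 10.0))
# -> {'s1': 0, 's2': 1, 's3': 2}
```

with a shorter hop range (e.g., 5 m for the same spacing) the chain breaks and the deeper sensors report None, which corresponds to the failure cases for which the disclosure provides back-up paths such as a lowered data interrogator.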
in some embodiments, one or more of), the local, regional, and/or primary/master communication/data interrogation units (e.g., units 310 , 510 , 610 , 620 , 710 , 740 , and/or 810 ) may serve as a relay or broker of signals/messages containing information/data across a network formed by the units and/or mems sensors. referring to fig. 11 , a method 900 of servicing a wellbore is described. at block 910 , a plurality of mems sensors is placed in a wellbore servicing fluid. at block 920 , the wellbore servicing fluid is placed in a wellbore. at block 930 , data is obtained from the mems sensors, using a plurality of data interrogation units spaced along a length of the wellbore. at block 940 , the data obtained from the mems sensors is processed. referring to fig. 12 , a further method 1000 of servicing a wellbore is described. at block 1010 , a plurality of mems sensors is placed in a wellbore servicing fluid. at block 1020 , the wellbore servicing fluid is placed in a wellbore. at block 1030 , a network consisting of the mems sensors is formed. at block 1040 , data obtained by the mems sensors is transferred from an interior of the wellbore to an exterior of the wellbore via the network consisting of the mems sensors. any of the embodiments set forth in the figures described herein, for example, without limitation, figs. 5-10 , may be used in carrying out the methods as set forth in figs. 11 and 12 . in some embodiments, a conduit (e.g., casing 20 or other tubular such as a production tubing, drill string, workstring, or other mechanical conveyance, etc.) in the wellbore 18 may be used as a data transmission medium, or at least as a housing for a data transmission medium, for transmitting mems sensor data from the mems sensors 52 and/or interrogation/communication units situated in the wellbore 18 to an exterior of the wellbore (e.g., earth's surface 16 ). 
again, it is to be understood that in various embodiments referencing the casing, other physical supports may be used as a data transmission medium such as a workstring, toolstring, production string, tubular, coiled tubing, wireline, jointed pipe, or any other physical structure or conveyance extending downhole from the surface.

referring to fig. 13 , a schematic cross-sectional view of an embodiment of the casing 1120 is illustrated. the casing 1120 may comprise a groove, cavity, or hollow 1122 , which runs longitudinally along an outer surface 1124 of the casing, along at least a portion of a length of the casing 1120 . the groove 1122 may be open or may be enclosed, for example with an exterior cover applied over the groove and attached to the casing (e.g., welded) or may be enclosed as an integral portion of the casing body/structure (e.g., a bore running the length of each casing segment). in an embodiment, at least one cable 1130 may be embedded or housed in the groove 1122 and run longitudinally along a length of the groove 1122 . the cable 1130 may be insulated (e.g., electrically insulated) from the casing 1120 by insulation 1132 . the cable 1130 may be a wire, fiber optic, or other physical medium capable of transmitting signals. in an embodiment, a plurality of cables 1130 may be situated in groove 1122 , for example, one or more insulated electrical lines configured to power pieces of equipment situated in the wellbore 18 and/or one or more data lines configured to carry data signals between downhole devices and an exterior of the wellbore 18 . in various embodiments, the cable 1130 may be any suitable electrical, signal, and/or data communication line, and is not limited to metallic conductors such as copper wires but also includes fiber optic cables and the like. fig.
14 illustrates an embodiment of a wellbore parameter sensing system 1100 , comprising the wellbore 18 inside which a wellbore servicing fluid loaded with mems sensors 52 is situated; the casing 1120 having a groove 1122 ; a plurality of data interrogation/communication units 1140 situated on the casing 1120 and spaced along a length of the casing 1120 ; a processing unit 1150 situated at an exterior of the wellbore 18 ; and a power supply 1160 situated at the exterior of the wellbore 18 . in embodiments, the data interrogation/communication units 1140 may be situated on or in casing collars that couple casing joints together. in addition or alternatively, the data interrogation/communication units 1140 may be situated in an interior of the casing 1120 , on an exterior of the casing 1120 , or both. in an embodiment, the data interrogation/communication units 1140 a may be connected to the cable(s) and/or data line(s) 1130 via through-holes 1134 in the insulation 1132 and/or the casing (e.g., outer surface 1124 ). the data interrogation/communication units 1140 a may be connected to the power supply 1160 via cables 1130 , as well as to the processor 1150 via data line(s) 1133 . the data interrogation/communication units 1140 a commonly connected to one or more cables 1130 and/or data lines 1133 may function (e.g., collect and communicate mems sensor data) in accordance with any of the embodiments disclosed herein having wired connections/communications, including but not limited to figs. 5 , 7 , and 9 . furthermore, the wellbore parameter sensing system 1100 may further comprise one or more data interrogation/communication units 1140 b in wireless communication and may function (e.g., collect and communicate mems sensor data) in accordance with any of the embodiments disclosed herein having wireless connections/communications, including but not limited to figs. 5 , 7 , 8 , 9 , and 10 .
by way of non-limiting example, the mems sensors 52 present in a wellbore servicing fluid situated in an interior of the casing 1120 and/or in the annulus 26 measure at least one wellbore parameter. the data interrogation/communication units 1140 in a vicinity of the mems sensors 52 interrogate the sensors 52 at regular intervals and receive data from the sensors 52 regarding the at least one wellbore parameter. the data interrogation/communication units 1140 then transmit the sensor data to the processor 1150 , which processes the sensor data. in an embodiment, the mems sensors 52 may be passive sensors, i.e., may be powered, for example, by bursts of electromagnetic radiation from the regional data interrogation/communication units 1140 . in a further embodiment, the mems sensors 52 may be active sensors, i.e., powered by a battery or batteries situated in or on the sensors 52 or other downhole power source. in an embodiment, batteries of the mems sensors 52 may be inductively rechargeable by the regional data interrogation/communication units 1140 . in a further embodiment, the casing 1120 may be used as a conductor for powering the data interrogation/communication units 1140 , or as a data line for transmitting mems sensor data from the data interrogation/communication units 1140 to the processor 1150 . fig. 15 illustrates an embodiment of a wellbore parameter sensing system 1200 , comprising the wellbore 18 inside which a wellbore servicing fluid loaded with mems sensors 52 is situated; the casing 20 ; a plurality of data interrogation/communication units 1210 situated on the casing 20 and spaced along a length of the casing 20 ; and a processing unit 1220 situated at an exterior of the wellbore 18 . in embodiments, the data interrogation/communication units 1210 may be situated on or in casing collars that couple casing joints together. 
in addition or alternatively, the data interrogation/communication units 1210 may be situated in an interior of the casing 20 , on an exterior of the casing 20 , or both. in embodiments, the data interrogation/communication units 1210 may each comprise an acoustic transmitter, which is configured to convert mems sensor data received by the data interrogation/communication units 1210 from the mems sensors 52 into acoustic signals that take the form of acoustic vibrations in the casing 20 , which may be referred to as acoustic telemetry embodiments. in embodiments, the acoustic transmitters may operate, for example, on a piezoelectric or magnetostrictive principle and may produce axial compression waves, torsional waves, radial compression waves or transverse waves that propagate along the casing 20 in an uphole direction denoted by arrows 1212 . a discussion of acoustic transmitters as part of an acoustic telemetry system is given in u.s. patent application publication no. 2010/0039898 and u.s. pat. nos. 3,930,220; 4,156,229; 4,298,970; and 4,390,975, each of which is hereby incorporated by reference in its entirety. in addition, the data interrogation/communication units 1210 may be powered as described herein in various embodiments, for example by internal batteries that may be inductively rechargeable by a recharging unit run into the wellbore 18 on a wireline or by other downhole power sources. in embodiments, the wellbore parameter sensing system 1200 further comprises at least one acoustic receiver 1230 , which is situated at or near an uphole end of the casing 20 , receives acoustic signals generated and transmitted by the acoustic transmitters, converts the acoustic signals into electrical signals and transmits the electrical signals to the processing unit 1220 . arrows 1232 denote the reception of acoustic signals by acoustic receiver 1230 . 
in an embodiment, the acoustic receiver 1230 may be powered by an electrical line running from the processing unit 1220 to the acoustic receiver 1230 . in embodiments, the wellbore parameter sensing system 1200 further comprises a repeater 1240 situated on the casing 20 . the repeater 1240 may be configured to receive acoustic signals from the data interrogation/communication units 1210 situated downhole from the repeater 1240 , as indicated by arrows 1242 . in addition, the repeater 1240 may be configured to retransmit, to the acoustic receiver 1230 , acoustic signals regarding the data received by these downhole data interrogation/communication units 1210 from mems sensors 52 . arrows 1244 denote the retransmission of acoustic signals by repeater 1240 . in further embodiments, the wellbore parameter sensing system 1200 may comprise multiple repeaters 1240 spaced along the casing 20 . in various embodiments, the data interrogation/communication units 1210 and/or the repeaters 1240 may contain suitable equipment to encode a data signal into the casing 20 (e.g., electrical/mechanical transducing circuitry and equipment). in operation, in an embodiment, the mems sensors 52 situated in the interior of the casing 20 and/or in the annulus 26 may measure at least one wellbore parameter and then transmit data regarding the at least one wellbore parameter to the data interrogation/communication units 1210 in their respective vicinities in accordance with the various embodiments disclosed herein, including but not limited to figs. 5-12 . the acoustic transmitters in the data interrogation/communication units 1210 may convert the mems sensor data into acoustic signals that propagate up the casing 20 . the repeater or repeaters 1240 may receive acoustic signals from the data interrogation/communication units 1210 downhole from the respective repeater 1240 and retransmit acoustic signals further up the casing 20 .
at or near an uphole end of the casing 20 , the acoustic receiver 1230 may receive the acoustic signals propagated up the casing 20 , convert the acoustic signals into electrical signals and transmit the electrical signals to the processing unit 1220 . the processing unit 1220 then processes the electrical signals. in various embodiments, the acoustic telemetry embodiments and associated equipment may be combined with a network formed by the mems sensors and/or data interrogation/communication units (e.g., a point-to-point or “daisy-chain” network comprising mems sensors) to provide back-up or redundant wireless communication network functionality for conveying mems data from downhole to the surface. of course, such wireless communications and networks could be further combined with various wired embodiments disclosed herein for further operational advantages.

referring to fig. 16 , a method 1300 of servicing a wellbore is described. at block 1310 , a plurality of mems sensors is placed in a wellbore servicing fluid. at block 1320 , the wellbore servicing fluid is placed in a wellbore. at block 1330 , data is obtained from the mems sensors, using a plurality of data interrogation units spaced along a length of the wellbore. at block 1340 , the data is telemetrically transmitted from an interior of the wellbore to an exterior of the wellbore, using a casing situated in the wellbore (e.g., via acoustic telemetry). at block 1350 , the data obtained from the mems sensors is processed.

referring to fig. 17 , a schematic longitudinal sectional view of a portion of the wellbore 18 is illustrated.
as is apparent from the figure, the wellbore 18 includes at least one washed-out region 42 at which material has broken off or eroded from a wall of the wellbore 18 (or the wellbore has intersected a naturally occurring void space within the formation, e.g., a lost circulation zone), as well as at least one constricted region 44 , for example caused by particulate inflow from the formation into the wellbore, a partial wellbore collapse, a ledge, or a build-up of filter cake. in an embodiment, a wellbore servicing fluid containing mems sensors may be pumped down the annulus 26 at a fluid flow rate and up the interior flow bore of casing 20 so as to establish a circulation loop. however, in a further embodiment, wellbore servicing fluid containing mems sensors may be pumped down the interior flow bore of casing 20 and up the annulus 26 . in further regard to fig. 17 , a mems sensor loading of the wellbore servicing fluid may be approximately constant throughout the fluid. in an embodiment, as the wellbore servicing fluid is pumped down the annulus 26 and up the casing 20 , positions and velocities of the mems sensors may be determined along the entire length of the wellbore 18 using data interrogation/communication units 150 . in some embodiments, the various data interrogation/communication units otherwise shown or described herein may be used to detect the mems sensors, determine the velocities thereof and otherwise communicate, store, and/or transfer data (e.g., form various networks), and any suitable configuration or layout of data interrogation/communication units as described herein may be employed to determine velocities, flow rates, concentrations, etc. of mems sensors, including but not limited to the embodiments of figs. 5-16 . for example, any of the data interrogator embodiments shown in figs. 5-16 may be used in combination with the data interrogation units of figs. 17 and 19 .
given the fluid flow rate of the wellbore servicing fluid and an expected clearance between the casing 20 and the wellbore 18 in, for example, regions 46 , 48 , 50 in which the wellbore 18 is not enlarged or constricted, an approximate expected fluid velocity through these regions 46 , 48 and 50 may be calculated. furthermore, since the mems sensors are distributed throughout the wellbore servicing fluid and are carried along with the wellbore servicing fluid as the wellbore servicing fluid moves down the annulus 26 , the velocities of the mems sensors in a downhole direction will at least approximately correspond to the calculated fluid velocity for regions 46 , 48 and 50 of the wellbore 18 . accordingly, if, in a region of the wellbore 18 , the downhole velocities of the mems sensors are approximately equal to the expected fluid velocity or deviate from the expected fluid velocity by less than a threshold value, it may be concluded that a cross-sectional area of the annulus 26 in this region approximately corresponds to an expected cross-sectional area of the wellbore 18 minus an expected cross-sectional area of the casing 20 . likewise, if the fluid velocity deviates from the expected fluid velocity by an amount equal to or greater than the threshold value (e.g., a higher or lower velocity than expected), such deviation may indicate the presence of an undesirable constriction or expansion (e.g., volumetric constriction or expansion) of the wellbore. in an embodiment, if the wellbore servicing fluid moves through a washed-out region of the wellbore 18 such as moving from region 46 to region 42 , the fluid velocity of the wellbore servicing fluid will decrease as the wellbore servicing fluid traverses from region 46 to region 42 , and then increase again as the wellbore servicing fluid enters region 48 of the wellbore 18 .
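the deviation test described above can be sketched as follows; the expected velocity, the measured velocities, and the threshold fraction are illustrative assumptions, not values from the disclosure.

```python
# Sketch of the velocity-deviation test described above: a measured
# average MEMS sensor velocity is compared against the expected annular
# velocity, and a fractional deviation at or beyond a threshold is
# flagged. All numeric values here are assumptions for illustration.

def classify_region(v_measured, v_expected, threshold=0.2):
    """Label a wellbore region from the average MEMS sensor velocity.

    A fractional deviation below the threshold is treated as normal; a
    slower-than-expected flow suggests an enlargement (e.g., a washout)
    and a faster-than-expected flow suggests a constriction.
    """
    deviation = (v_measured - v_expected) / v_expected
    if abs(deviation) < threshold:
        return "normal"
    return "washout/enlargement" if deviation < 0 else "constriction"

print(classify_region(0.95, 1.0))  # small deviation -> normal
print(classify_region(0.50, 1.0))  # much slower     -> washout/enlargement
print(classify_region(2.00, 1.0))  # much faster     -> constriction
```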
accordingly, as the mems sensors traverse region 42 of the wellbore 18 , the average downhole velocity of the mems sensors will decrease in comparison to the average downhole velocity of the mems sensors in region 46 . in addition, if it is assumed, at least initially, that no or minimal wellbore servicing fluid is being lost to the wellbore 18 , and that the fluid flow rate at which the wellbore servicing fluid is being pumped through the wellbore 18 remains approximately constant, then the fluid flow rate through every annular cross-section of the wellbore 18 is approximately constant. thus, referring to fig. 18 a , which is a schematic annular cross-section of the wellbore 18 taken at a-a in region 46 (and is also representative of regions 48 and 50 ), and fig. 18 b , which is a schematic annular cross-section of the wellbore 18 taken at b-b in region 42 , the fluid flow rate through these cross-sections remains approximately constant despite the larger annular cross-section of section b-b. if the fluid flow rate, e.g., in m^3/s, is referred to as f, the annular cross-sectional area, e.g., in m^2, of section a-a is referred to as a_a, and the annular cross-sectional area, e.g., in m^2, of section b-b is referred to as a_b, then the average fluid velocities, e.g., in m/s, in sections a-a and b-b, respectively referred to as v_a and v_b, may be calculated as follows:

v_a = f / a_a,    1)
v_b = f / a_b.    2)

in addition, rearranging terms and noting that f is constant, one obtains:

f = v_a · a_a = v_b · a_b.    3)

thus, if the cross-sectional area a_b of section b-b in fig. 18 b is, e.g., 2 times greater than the cross-sectional area a_a of section a-a in fig. 18 a , then the average downhole fluid velocity v_b through section b-b will be one half (e.g., 50%) of the average downhole fluid velocity v_a through section a-a. stated alternatively, a 50% reduction in velocity (e.g., v_b = ½·v_a) indicates a 100% increase in cross-sectional area (e.g., a_b = 2·a_a).
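equations 1) through 3) express the continuity relation f = v·a, and the stated 50%-velocity / doubled-area correspondence can be checked numerically. the flow rate and areas below are assumed illustrative values.

```python
# Worked check of equations 1)-3): with a constant flow rate f, the
# average velocity through an annular section is v = f / a, so doubling
# the area halves the velocity. Values are illustrative assumptions.

f = 0.04          # flow rate, m^3/s (assumed)
a_a = 0.02        # annular area at section A-A, m^2 (assumed)
a_b = 2 * a_a     # washed-out section B-B has twice the area

v_a = f / a_a     # equation 1)
v_b = f / a_b     # equation 2)

# equation 3): f = v_a * a_a = v_b * a_b holds for both sections
assert abs(f - v_a * a_a) < 1e-12 and abs(f - v_b * a_b) < 1e-12

print(v_b / v_a)  # -> 0.5: velocity through B-B is half that through A-A
```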
accordingly, the average downhole velocities of mems sensors 52 passing through an annular cross-section of the wellbore 18 may be used to determine the cross-sectional area of that annular cross-section, with a decrease in fluid velocity representing an expansion in the wellbore such as a washout, void space, vugular zone, fracture or other space/opening in the wellbore. referring now to fig. 18 c , which illustrates a schematic annular cross-section of the wellbore 18 taken at section c-c of region 44 of the wellbore 18 , it is apparent that at least a portion of the annulus 26 at section c-c is constricted, for example possibly due to a protruding ledge in the wellbore 18 or a build-up of filter cake or other particulate matter in the wellbore 18 . in an embodiment, if the wellbore servicing fluid moves through a constricted region of the wellbore 18 such as region 44 , the average fluid velocity of the wellbore servicing fluid will increase as the wellbore servicing fluid traverses from region 48 into region 44 , and then decrease again as the wellbore servicing fluid enters region 50 of the wellbore 18 . accordingly, as the mems sensors 52 traverse region 44 of the wellbore 18 , the average downhole velocity of the mems sensors 52 will increase in comparison to the average downhole velocity of the mems sensors 52 in region 48 . now, referring back to equation 3) and applying the equation to cross-section c-c in region 44 of the wellbore 18 , one obtains:

f = v_a · a_a = v_c · a_c,    4)

where v_c is an average downhole fluid velocity through cross-section c-c and a_c is a cross-sectional area of cross-section c-c.
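equation 4) can be rearranged to estimate the constricted area from a measured sensor velocity, a_c = f / v_c. a short sketch with assumed illustrative values:

```python
# Sketch inverting equation 4): with the flow rate f known and the
# average MEMS sensor velocity v_c measured through the constricted
# section C-C, the area there follows as a_c = f / v_c. Values assumed.

f = 0.04                 # constant flow rate, m^3/s (assumed)
a_a = 0.02               # unconstricted annular area at A-A, m^2 (assumed)
v_c = 2 * (f / a_a)      # sensors move twice as fast through C-C

a_c = f / v_c            # equation 4) rearranged
print(a_c / a_a)         # -> 0.5: constricted area is half of a_a
```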
thus, if, for example, the average downhole velocity of the mems sensors 52 passing through cross-section c-c in region 44 is 2 times greater than the average downhole velocity of the mems sensors 52 passing through cross-section a-a in region 46 (which would be comparable to an annular cross-section taken in region 48 ), then the cross-sectional area a_c of cross-section c-c is one half (e.g., 50%) of the cross-sectional area of cross-section a-a (or an equivalent cross-section taken in region 48 ). accordingly, the average downhole velocities of the mems sensors 52 passing through a constricted region of a wellbore 18 may be utilized to determine the cross-sectional area of an annular cross-section taken in that constricted region, with an increase in fluid velocity representing a constriction in the wellbore such as a partial collapse, swelling, particulate buildup or inflow, filter cake buildup, and the like. fig. 19 illustrates a schematic longitudinal sectional view of a portion of the wellbore 18 , in which a wellbore servicing fluid containing mems sensors 52 is pumped down the annulus 26 at a fluid flow rate and up the casing 20 so as to form a circulation loop, with the understanding that fluid may flow in the opposite direction in some embodiments. in addition, as is apparent from the figure, the wellbore 18 includes two fluid loss zones 54 , 56 at which respective fissures 58 , 60 extend outwards from the wellbore 18 and communicate with a hollow or permeable formation 62 . cross-sections of the wellbore 18 taken at lines e-e and g-g in regions 54 and 56 of the wellbore 18 are schematically illustrated in figs. 20 b and 20 d , respectively. in an embodiment, as the wellbore servicing fluid passes from region 62 through region 54 , a portion of the fluid is pressed (e.g., lost) through the fissure 58 and absorbed by formation 62 .
such areas where a wellbore composition is lost to the surrounding formation may be referred to as fluid loss zones, loss or lost circulation zones, wash-outs, voids, vugulars, cavities, fissures, fractures, etc. if the fluid flow rate is referred to as f and the flow rate of fluid lost to the formation 62 via fissure 58 is referred to as f_l1, then the flow rate of fluid passing through annular cross-section d-d, which is situated in a region 62 of the wellbore 18 directly uphole from fissure 58 and is schematically illustrated in fig. 20 a , is f, whereas the flow rate of fluid passing through annular cross-section f-f, which is situated in a region 64 of the wellbore 18 directly downhole from fissure 58 and is schematically illustrated in fig. 20 c , is f − f_l1. similarly, as the wellbore servicing fluid passes from region 64 through region 56 , a portion of the fluid is pressed (e.g., lost) through the fissure 60 and absorbed by formation 62 . if the flow rate of fluid lost to the formation 62 via fissure 60 is referred to as f_l2, then the flow rate of fluid passing through annular cross-section h-h, which is situated in a region 66 of the wellbore 18 directly downhole from fissure 60 and is schematically illustrated in fig. 20 e , is f − (f_l1 + f_l2). now, considering the relationship between the fluid flow rate and the flow velocity given in equation 3), one obtains:

f = v_d a_d  5)

f − f_l1 = v_f a_f  6)

f − (f_l1 + f_l2) = v_h a_h  7)

where v_d is the downhole flow velocity of the wellbore servicing fluid through annular cross-section d-d, a_d is the cross-sectional area of annular cross-section d-d, v_f is the downhole flow velocity of the wellbore servicing fluid through annular cross-section f-f, a_f is the cross-sectional area of annular cross-section f-f, v_h is the downhole flow velocity of the wellbore servicing fluid through annular cross-section h-h, and a_h is the cross-sectional area of annular cross-section h-h.
assuming that none of regions 62 , 64 and 66 includes a washed-out section or a constriction, then a_d, a_f and a_h may be considered to be approximately equal to one another and referred to as a:

a = a_d = a_f = a_h  8)

after combining equation 8) with equations 5), 6) and 7) and rearranging terms, one obtains:

v_d = f/a  9)

v_f = (f − f_l1)/a  10)

v_h = (f − (f_l1 + f_l2))/a  11)

thus, after a fluid loss zone is traversed by the wellbore servicing fluid, the downhole flow velocity of the wellbore servicing fluid, and thus the average downhole velocity of the mems sensors 52 situated in the wellbore servicing fluid, will decrease in proportion with the fluid flow rate. accordingly, in an embodiment, if a decrease in the average downhole mems sensor velocity is detected, then an approximate flow rate of wellbore servicing fluid lost to a formation may be calculated from the decrease in the average downhole mems sensor velocity. it should be noted from the discussion above that an average downhole velocity of mems sensors 52 will decrease in both a washed-out region and a fluid loss zone. however, in an embodiment, a washed-out region and a fluid loss zone may be distinguished from one another in that in the case of a washed-out region, after the washed-out region is traversed, the average downhole velocity of the mems sensors will return to approximately the average mems sensor downhole velocity detected uphole from the washed-out region given that the total flow rate remains constant (i.e., there is no significant loss of fluid to the surrounding formation). in contrast, after the wellbore servicing fluid traverses a fluid loss zone, the average downhole velocity of the mems sensors 52 will not return to an average downhole mems sensor velocity detected uphole from the fluid loss zone, but will remain at a lower level. in further regard to fig.
19 , in an embodiment, a return fluid flow rate 68 up the casing 20 to, for example, circulating pumps situated at the rig 12 , may be determined from a flow meter situated upstream from the circulating pumps and compared to the original fluid flow rate of wellbore servicing fluid, and the flow rate of wellbore servicing fluid lost to the formation 62 may be calculated and compared to the fluid loss indicated by the decreases in the average downhole mems sensor velocities. upon detecting and/or locating fluid loss to the surrounding formation, remedial actions may be taken such as pumping a lost circulation material downhole to plug the leak, performing a squeeze job (e.g., cement squeeze, gunk squeeze), etc. in an alternative embodiment, all or a portion of the mems sensors are given unique identifiers, for example rfid serial numbers, and the data interrogation units 150 may be used to keep track of all or a portion of the uniquely identified sensors (e.g., a statistical sampling of same). for example, where unit 150 d records the presence of 100 uniquely identified mems sensors within a given sampling period, a failure by one or more downstream units (e.g., unit 150 h ) to detect a representative or threshold number of the same 100 uniquely identified mems sensors within an expected sampling time (e.g., the time expected for the sensors to travel the distance between units 150 d and 150 h based upon the fluid flow rate) may indicate a loss of said sensors to the surrounding formation, for example via fissures 58 and/or 60 , taking into account any normal variance in detection of uniquely identified mems sensors between upstream and downstream interrogation units over a given distance.
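the fluid-loss estimate and the washout-versus-loss-zone distinction described above can be sketched as follows; the function names, tolerance, and velocities are assumptions for illustration only:

```python
def lost_flow_rate(flow_rate, v_uphole, v_downhole):
    """estimate the flow rate lost to the formation across a zone,
    assuming a constant annular area a = f / v_uphole (equation 3),
    so that f_lost = f * (1 - v_downhole / v_uphole)."""
    return flow_rate * (1.0 - v_downhole / v_uphole)

def classify_zone(v_uphole, v_inside, v_downhole, tol=0.05):
    """distinguish a washed-out region (velocity dips, then recovers
    downhole) from a fluid loss zone (velocity stays low downhole)."""
    if v_inside >= v_uphole * (1.0 - tol):
        return "normal"
    if abs(v_downhole - v_uphole) <= tol * v_uphole:
        return "washout"
    return "fluid loss zone"

# assumed values: velocity drops from 1.0 to 0.8 m/s and stays low
f_lost = lost_flow_rate(0.02, 1.0, 0.8)  # 20% of 0.02 m^3/s
zone = classify_zone(1.0, 0.8, 0.8)      # "fluid loss zone"
```

the classification mirrors the prose: a velocity dip that recovers downhole suggests a washout, while a dip that persists suggests loss of fluid to the formation.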
for example, if over a statistically representative sampling period, only 80 of the 100 uniquely identified mems sensors for each sampling period are detected at a downstream interrogation unit, such may indicate a 20% fluid loss to the formation (or a fluid loss of 20% minus the normal variance/deviation in mems detection). in addition to or in lieu of (a) estimating a cross-sectional area of an annular cross-section of a wellbore, using a fluid flow rate of a mems sensor-loaded wellbore servicing fluid through the wellbore and the velocities of the mems sensors during traversal of the annular cross-section, and/or (b) estimating a flow rate of fluid lost to a formation in an annular region of a wellbore, using velocities of the mems sensors 52 uphole and downhole from the annular region, in various embodiments, (c) shapes of annular cross-sections of the wellbore 18 may be estimated, using detected positions of the mems sensors 52 , and any combination of (a), (b), and (c) is contemplated hereby, which may be referred to in some instances as annular mapping via flow rate and/or velocities of mems sensors conveyed through a wellbore (e.g., circulated through an annulus) via a wellbore servicing composition. in performing any annular mapping function, e.g., any of (a), (b), and/or (c) of this paragraph, the data interrogation units 150 may be spaced along the wellbore and supported upon the casing or other conveyance or structure in the wellbore. while fixed data interrogators are shown in figs. 17 and 19 , one or more mobile data interrogators (for example, as shown in figs. 2 and 8 ), may be employed to perform annular mapping functions, for example tripped into the wellbore and intermittently moved up the wellbore while mapping same. the data interrogation units 150 have a sensing or mapping range associated therewith, as represented by circles 151 . 
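the unique-identifier bookkeeping between an upstream and a downstream interrogation unit can be sketched as follows; the identifiers are assumed example values, and a zero normal detection variance is assumed for simplicity:

```python
def fraction_lost(upstream_ids, downstream_ids, normal_variance=0.0):
    """estimate the fraction of mems sensors (and hence fluid) lost
    between two interrogation units from the uniquely identified
    sensors seen at each, less any normal detection variance."""
    missing = len(set(upstream_ids) - set(downstream_ids))
    return max(missing / len(upstream_ids) - normal_variance, 0.0)

# assumed example: 100 tagged sensors seen uphole, only 80 seen downhole
uphole = [f"rfid-{i}" for i in range(100)]
downhole = uphole[:80]
loss = fraction_lost(uphole, downhole)  # 0.2, i.e. a 20% fluid loss
```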
within the sensing or mapping range, the data interrogation units 150 are operable to sense the presence of various mems sensors in relation to the unit, and thus can create a mathematical representation of mems sensor presence, velocity, location, concentration, and/or identity (e.g., a particular sensor or group of sensors having a unique identifier or i.d. number) in relation to the position of a given unit 150 . by way of analogy and shown schematically in figs. 17 and 19 , the data interrogation units 150 constitute an overlapping network of “radar ranges” and thus can track the presence, location, concentration, velocity, and/or identity of the mems sensors as they flow through the wellbore with the servicing composition. referring back to figs. 17 and 18 a to 18 c , figs. 18 a to 18 c schematically and respectively depict annular cross-sections of the wellbore 18 at lines a-a, b-b and c-c in fig. 17 . as is apparent from figs. 18 a to 18 c , mems sensors 52 suspended in the wellbore servicing fluid traverse these cross-sections. in an embodiment, positions of the mems sensors 52 in the annular cross-sections, e.g., radial positions (or directional vector) of the mems sensors 52 with respect to the data interrogation units 150 , may be determined and mapped. in addition, a curve may be drawn through the innermost mems sensor positions with respect to the casing 20 , as well as through the outermost mems sensor positions, in order to approximate an outline of a wall of the wellbore 18 and an outer wall of the casing 20 in each cross-section, and such may be carried out in three dimensions (e.g., x, y, and z coordinates with respect to the data interrogation units 150 ) to provide a map of the annular geometry and/or surrounding formation. 
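the inner/outer boundary mapping can be sketched by binning sensor positions by angle about the casing axis and keeping the innermost and outermost radius per bin; the bin count and coordinates below are illustrative assumptions:

```python
import math

def annulus_outline(sensor_xy, n_bins=16):
    """approximate the casing outer wall and the wellbore wall in one
    annular cross-section: per angular bin, keep the innermost and
    outermost radial sensor positions relative to the casing axis."""
    inner = [None] * n_bins
    outer = [None] * n_bins
    for x, y in sensor_xy:
        r = math.hypot(x, y)
        b = int((math.atan2(y, x) % (2 * math.pi)) / (2 * math.pi) * n_bins)
        if inner[b] is None or r < inner[b]:
            inner[b] = r
        if outer[b] is None or r > outer[b]:
            outer[b] = r
    return inner, outer

# assumed example: sensors spread between radii 0.10 m (near the casing)
# and 0.15 m (near the wellbore wall), at angles centered in each bin
angles = [(i + 0.5) * math.pi / 8 for i in range(16)]
pts = [(r * math.cos(a), r * math.sin(a))
       for r in (0.10, 0.12, 0.15) for a in angles]
inner, outer = annulus_outline(pts)
```

stacking such per-cross-section outlines along the wellbore gives the three-dimensional annular map described above.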
in an embodiment, positions of mems sensors 52 in an annular cross-section may be recorded and mapped over a time frame ranging from about 0.5 s to about 10 s, and over a distance (e.g., a distance from any given data interrogation unit location) of 1 ft, 5 ft, 10 ft, or 25 ft, depending on the sensing range (e.g., power) of the data interrogation units and/or the desired accuracy of an annular cross-sectional depiction. also, annular cross-sections may be taken at longitudinal distances traversing the wellbore of about 0.25 ft, 0.5 ft, 0.75 ft, 1 ft, 1.5 ft, 2 ft, or any combination thereof. in an embodiment, annular cross-sections may be taken at longitudinal distances and/or intervals traversing the wellbore about equivalent to the distances and/or intervals used in wellbore logging activities, as would be apparent to those of skill in the art. in other embodiments, annular cross-sections may be taken at longitudinal distances and/or intervals traversing the wellbore in accordance with other embodiments disclosed herein (e.g., distances associated with processor 1720 ). referring back to fig. 19 , this figure schematically depicts regions 54 and 56 of the wellbore 18 , at which wellbore servicing fluid loaded with mems sensors 52 and pumped into the annulus 26 is partially lost to a formation 62 via respective fissures 58 , 60 . in addition, figs. 20 b and 20 d schematically depict cross-sections of the wellbore 18 taken at wellbore-side ends of the fissures 58 , 60 at lines e-e and g-g in fig. 19 . in an embodiment, as shown in figs. 20 b and 20 d , cross-sections of the annulus 26 at the fissures 58 , 60 may be mapped by recording positions of mems sensors 52 that pass through the annulus 26 and the fissures 58 , 60 .
in addition, in a further embodiment, multiple annular cross-sections along the length of the wellbore 18 and in the vicinity of the fissures 58 , 60 may be mapped and combined, in order to form a three dimensional depiction of at least a portion of the fissures 58 , 60 and/or the formation 62 and to possibly facilitate the filling and sealing of the fissures 58 , 60 , e.g., a cement squeeze or plugging a lost circulation zone. as a result of determining the positions of the mems sensors 52 , in an embodiment, it may be determined, for example, that annular cross-section a-a in fig. 18 a is normal, i.e., the casing 20 is properly centralized in the wellbore 18 , and the wall of the wellbore 18 is not enlarged and does not have any debris attached to it; that the wellbore 18 at annular cross-section b-b in fig. 18 b is undesirably expanded, e.g., at least partially washed out and/or contains a fluid loss zone (e.g., loss of circulation zone), and thus may require remedial action such as secondary cementing to shore up the wall; and/or that the wellbore 18 at annular cross-section c-c in fig. 18 c is undesirably constricted, e.g., includes a ledge and/or attached debris and/or a build-up of filter cake along at least a portion of the wellbore wall and may require more fluid circulation or other remedial action to reduce/remove the build-up, and/or that the casing 20 is not properly centralized in the wellbore 18 . referring to fig. 21 , a method 1360 of servicing a wellbore is described. at block 1362 , a plurality of micro-electro-mechanical system (mems) sensors is placed in a wellbore servicing fluid. at block 1364 , the wellbore servicing fluid is pumped down the wellbore at a fluid flow rate. at block 1366 , positions of the mems sensors in the wellbore are determined. at block 1368 , velocities of the mems sensors along a length of the wellbore are determined. 
at block 1370 , an approximate cross-sectional area profile of the wellbore along the length of the wellbore is determined from at least the velocities and/or positions of the mems sensors and the fluid flow rate. in addition to or in lieu of using mems sensors to determine a characteristic or shape of the wellbore and/or surrounding formation, the mems sensors may provide information regarding the fluid flow (e.g., flow dynamics and characteristics) in the wellbore and/or surrounding formation. a plurality of mems sensors may be placed in a wellbore composition, the wellbore composition flowed (e.g., pumped) into the wellbore and/or surrounding formation (e.g., circulated in the wellbore), and one or more fluid flow properties, characteristics, and/or dynamics of the wellbore composition may be determined by data obtained from the mems sensors moving/flowing in the wellbore and/or formation. the data may be obtained from the mems sensors according to any of the embodiments disclosed herein (e.g., one or more mobile data interrogators tripped into and out of the wellbore and/or fixed data interrogators positioned within the wellbore), and may be further communicated/transmitted to/from or within the wellbore via any of the embodiments disclosed herein. for example, areas of laminar and/or turbulent flow of the wellbore composition may be determined within the wellbore and/or surrounding formation, and such information may be used to further characterize the wellbore and/or surrounding formation. the velocity and flow rate of the wellbore composition may further be obtained as described herein. in an embodiment, data from the mems sensors is used to perform one or more fluid flow dynamics calculations for the flow of the wellbore composition through the wellbore and/or the surrounding formation. for example, data from the mems sensors may be used as input to a computational fluid dynamics equation or software.
such information may be used in designing down hole tools, for example designing a down hole tool/device in a manner to reduce drag and/or turbulence associated with the tool/device as the wellbore composition flows through and/or past the tool. in an embodiment, fluid flow data for the wellbore composition is obtained over at least a portion of the length of the wellbore, thereby providing a fluid flow profile over said length of wellbore. the fluid flow profile may be compared to a theoretical or design standard fluid flow profile, for example in real time during performance of a servicing operation wherein the wellbore composition is being placed in the wellbore. such comparison may be used to determine whether the service is proceeding according to plan and/or to verify one or more characteristics of the wellbore. for example, an area of turbulent flow indicated by the mems sensors may correspond to a location of a particular wellbore feature expected to provide turbulence, such as the presence of a tool or device (e.g., casing collar, centralizer, spacer, shoe, etc.) in the wellbore that the fluid is flowing around or through, which may be indicated or mapped in the theoretical or design fluid flow profile. likewise, turbulent or non-turbulent (e.g., laminar) flow may indicate desirable or undesirable characteristics of the fluid itself (e.g., desirable or undesirable mixing, stratification, etc.) and/or the surrounding surface that contacts the fluid (e.g., rough vs. smooth surfaces, etc.). by performing such comparisons in real time, the wellbore service may be altered or adjusted as needed to improve the outcome of the service. for example, one or more conditions of the wellbore and/or surrounding formation may be altered based upon a mems sensor derived indication of the fluid flow characteristics or dynamics.
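the real-time comparison of a measured flow profile against a theoretical or design profile can be sketched as follows; the tolerance and the example profiles are assumed values for illustration:

```python
def flag_flow_anomalies(depths, measured, expected, tol=0.15):
    """compare a mems-derived fluid flow profile against a design
    profile and return the depths where the two disagree by more than
    tol (fractional deviation); flagged depths may warrant remedial
    action or may correspond to known features (tools, collars) in
    the design profile."""
    flags = []
    for d, m, e in zip(depths, measured, expected):
        if abs(m - e) > tol * e:
            flags.append(d)
    return flags

# assumed example profiles (average velocity in m/s vs depth in m)
depths = [100, 200, 300, 400]
expected = [1.0, 1.0, 1.0, 1.0]
measured = [1.0, 1.4, 1.0, 0.95]  # deviation at 200 m
anomalies = flag_flow_anomalies(depths, measured, expected)  # [200]
```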
in an embodiment, a build up of a material on an interior surface of the wellbore and/or formation (e.g., gelled mud, filter cake, screen out material, sand, etc.) is reduced or removed via a remedial action such as acidizing, washing, physical scraping/contact, changing a flow rate of the wellbore composition, changing a characteristic of the wellbore composition, placing an additional composition in the wellbore to react with the build up or change a characteristic of the build up, moving a conduit within the wellbore, placing a tool downhole to physically contact and remove the build up, or any combination thereof. in another embodiment, a fluid flow property or characteristic is an actual time of arrival of at least a portion of the wellbore composition comprising the mems sensors. the actual time of arrival may be compared to an expected time of arrival, and such comparison may be indicative of a further condition of the wellbore. for example, an expected time of arrival matching an actual time of arrival may be indicative of normal or expected operations. alternatively, an actual time of arrival before an expected time of arrival may be indicative of a decreased flow path through the wellbore (e.g., reduced flow bore diameter due to build up such as gelled mud, filter cake or other flow restriction), thus yielding an increased fluid velocity and decreased transit time for the mems sensors flowing through the wellbore. in an embodiment, the wellbore servicing operation comprises placing a plurality of mems sensors in at least a portion of a spacer fluid, a sealant composition (e.g., a cement slurry or a non-cementitious sealant), or both, pumping the spacer fluid followed by the sealant composition into the wellbore, and determining one or more fluid flow properties or characteristics of the spacer fluid and/or the cement composition from data provided by the mems sensors during the pumping of the spacer fluid and sealant composition into the wellbore.
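the time-of-arrival interpretation described above can be sketched as follows; the tolerance is an assumed value, and the returned strings are illustrative labels rather than terms from the disclosure:

```python
def arrival_diagnosis(actual_s, expected_s, tol_s=30.0):
    """interpret an actual versus expected time of arrival (seconds)
    of the mems-tagged portion of the wellbore composition."""
    if abs(actual_s - expected_s) <= tol_s:
        return "normal"
    if actual_s < expected_s:
        # early arrival -> higher velocity -> possible flow restriction
        return "possible flow restriction (early arrival)"
    # late arrival -> lower velocity -> possible loss or enlargement
    return "possible fluid loss or enlargement (late arrival)"
```

for example, with an expected transit of 600 s, an arrival at 500 s would be flagged as a possible restriction, while an arrival at 700 s would be flagged as possible fluid loss or enlargement.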
the sealant composition may be pumped down the casing and back up the annular space between the casing and the wellbore (e.g., a conventional cementing job) or may be pumped down the annulus between the casing and the wellbore in a reverse cementing job. the movement of the spacer and/or sealant composition through the wellbore may be monitored via the mems sensors, and such movement may be used to determine a characteristic of the wellbore and/or surrounding formation; to evaluate the fluid flow characteristics of the spacer fluid and/or sealant composition as it flows through the wellbore and/or surrounding formation; to determine a location of the spacer fluid and/or sealant composition (e.g., when the sealant has turned the corner at the terminal downhole end of the casing) and to further signal or bring about a halt to the placement (e.g., stop pumping) upon the spacer fluid and/or cement composition reaching a desired location; and to monitor the wellbore for movement of the mems sensors within the spacer fluid and/or sealant composition after halting pumping of same and to signal an operator and/or to activate at least one device to prevent flow out of the wellbore upon detection of movement of the mems sensors after halting the pumping; or any combination thereof. figs. 22 a to 22 c illustrate a schematic view of an embodiment of a wellbore parameter sensing system 1400 , which comprises the wellbore 18 , the casing 20 situated in the wellbore 18 , a plurality of data interrogation units 1410 spaced along a length of the casing 20 , and a float shoe 1420 situated at a downhole end of the casing 20 . in an embodiment, the float shoe 1420 comprises a poppet valve 1422 , which is biased by a spring 1424 when the valve 1422 is in a neutral state and may be opened if a sufficient differential pressure develops between an interior of the casing 20 and the annulus 26 .
while a float shoe and poppet valve assembly is demonstrated in this embodiment, it is understood that any assembly (e.g., float collar, float shoe, valve assembly, etc.) suitable to terminate the downhole, distal end of the casing string (e.g., to protect and/or direct same into the wellbore) and to selectively open and/or close the terminal end of the casing to fluid flow (from either interior to annulus or from annulus to interior) may be employed in the various embodiments disclosed herein, wherein communication with mems sensors may be used in determining when to selectively perform said open and/or close and wherein such communication may be with a data interrogation unit located in and/or proximate such distal assembly (e.g., coupled to and/or integral with a float collar, float shoe, valve assembly, etc.) and/or located in a moveable member flowing through the wellbore (e.g., a wiper plug, ball, dart, etc.). thus, detection and/or communication with mems sensors by such data interrogation units may signal the opening and/or closing of a valve proximate the distal end of the casing in a conventional or reverse cementing operation, thereby allowing for the selective placement of the cement slurry. in an embodiment, a cement slurry 1430 may be pumped down the interior of the casing 20 in the direction of arrow 1432 , through the float shoe 1420 in the direction of arrows 1434 , and up the annulus 26 in the direction of arrows 1436 for the purpose of cementing the casing 20 to a wall of the wellbore 18 . the cement slurry 1430 may include a slug 1440 of mems sensors 52 that may be situated in a portion of the cement slurry 1430 that is pumped into the wellbore 18 prior to a remainder of the cement slurry 1430 , e.g., positioned at a leading edge/portion, face, or head of the slurry.
in an embodiment, the mems sensors 52 are configured to measure and/or convey at least one parameter of the wellbore 18 , e.g., a longitudinal position of the mems sensors 52 in the wellbore 18 , and transmit data regarding the longitudinal positions of the mems sensors 52 in the wellbore 18 to the data interrogation unit 1410 most proximate to the mems sensors 52 . the data interrogation units 1410 may then transmit the mems sensor data to a processing unit situated at an exterior of the wellbore 18 , and such transmission may be carried out according to any embodiment disclosed herein (e.g., the embodiments of figs. 5-16 ). in an embodiment, as the cement slurry 1430 travels through the wellbore 18 , a longitudinal position of the slug 1440 of mems sensors 52 , and hence a longitudinal position of a head of the cement slurry 1430 , may be determined in real time via interaction (e.g., communication) of the mems sensors 52 with the plurality of data interrogation units 1410 spaced along a length of the casing. for example, where all or a portion of the data interrogation units 1410 correspond with known locations in the wellbore (e.g., casing collars located at a known depth in the wellbore), detection of mems sensors by a given data interrogation unit 1410 indicates that the slug of mems sensors (and thus the leading edge of the cement slurry) is within the sensing/communication range of that particular data interrogation unit 1410 . as the slug of mems sensors flows downward in the interior of the casing, the mems sensors will be detected in an uphole to downhole sequence by the data interrogation units 1410 . in a further embodiment, a data interrogation unit may be incorporated in the float shoe 1420 (or located in close proximity thereto), thereby enabling a determination of when the leading edge of the cement slurry 1430 reaches the end of the casing, “turns the corner,” and enters the annulus 26 .
upon entering the annulus, the slug of mems sensors will flow upward and will be detected in a downhole to uphole sequence by the data interrogation units 1410 . in a further embodiment, pumping of the cement slurry 1430 may be controlled (e.g., slowed and/or terminated) when the slug 1440 of mems sensors 52 is detected by a data interrogation unit 1410 situated most proximate to the exterior of the wellbore 18 , as illustrated in fig. 22 c . additionally or alternatively, a second slug of mems sensors may be included at the trailing edge of the cement slurry, thereby enabling a determination of when the trailing edge of the cement slurry 1430 reaches the end of the casing, “turns the corner,” and enters the annulus 26 . based upon detection of the first slug by a data interrogation unit (e.g., unit 1410 ) located a known distance above the float shoe 1420 and/or detection of the second slug by a data interrogation unit integral with and/or proximate to the float shoe 1420 , pumping of the cement slurry may be controlled (e.g., slowed and/or stopped) to provide for precise placement of the cement slurry into the annular space while, based upon the design parameters of the well, likewise optionally allowing for a controlled amount of cement to remain in the casing proximate the float collar or optionally allowing for removal of substantially all of the cement from the interior of the casing. in an embodiment, detection of mems sensors allows for controlled placement of the cement slurry such that any contaminated cement (e.g., cement contaminated with mud located in front of a cementing/wiper plug) remains in the casing and/or shoe track and is not allowed to turn the corner, exit the casing and reach the annulus, thereby ensuring that all cement placed in the annulus is not contaminated and/or compromised. thus, mems sensors may be used to avoid undesirably pushing a contaminated wellbore servicing fluid into the annulus. in addition, as also illustrated in fig.
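the tracking of a slug of mems sensors through a time-ordered detection sequence, including the "turning the corner" determination, can be sketched as follows; the unit identifiers and depths are illustrative assumptions:

```python
def slug_status(detections, shoe_unit):
    """infer where a slug of mems sensors is from a time-ordered list
    of (unit_id, depth) detections: detection by the unit at the shoe
    marks the slurry 'turning the corner', and decreasing detection
    depths after that mark its return up the annulus."""
    turned = any(unit == shoe_unit for unit, _ in detections)
    last_depth = detections[-1][1] if detections else None
    in_annulus = (turned and len(detections) >= 2
                  and detections[-1][1] < detections[-2][1])
    return {"turned_corner": turned, "in_annulus": in_annulus,
            "last_depth": last_depth}

# assumed sequence: down the casing, past the shoe, back up the annulus
seq = [("u1", 500), ("u2", 1000), ("shoe", 1500), ("u2", 1000)]
status = slug_status(seq, "shoe")
```

a controller watching this status could slow or stop the pumps once the slug turns the corner or reaches a chosen depth in the annulus.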
22 c , when pumping of cement slurry 1430 is terminated, the pressure differential between the interior of the casing 20 and the annulus 26 decreases, thereby causing the valve 1422 to close. as a result, the cement slurry 1430 is prevented from re-entering the casing 20 . additionally or alternatively, the cement slurry (or other wellbore fluid) may be monitored for movement of the mems sensors after pumping has been terminated, as such movement may indicate a problem with the closure of the terminal end of the casing (e.g., closing of a valve such as the float shoe valve) and/or otherwise indicate a potential undesirable inflow and/or outflow into the wellbore and resultant loss of zonal isolation. such monitoring may be performed in any cementing job (or other wellbore servicing job), including but not limited to primary cementing (either traditional cementing with flow down the casing and up the annulus or reverse cementing with flow down the annulus) and/or secondary cementing (e.g., remedial cementing, squeeze jobs, etc.). for example, if a data interrogation unit located proximate the terminal end of the casing being cemented (either conventional or reverse cementing) detects movement of mems sensors, such movement may be associated with fluid flow into or out of the casing, which may indicate that a valve associated with the terminal end of the casing has not properly closed, i.e., the valve did not close properly at the conclusion of cement pumping. additionally or alternatively, such movement may indicate an undesirable or problematic movement of a wellbore fluid (e.g., cement slurry, drilling fluid, isolation fluid, displacement fluid, production fluids, etc.), for example due to loss into the formation and/or flow of the fluid back up the wellbore (for example in response to downhole pressure build-up, and thus indicating the potential for a loss of zonal isolation or potentially a blowout).
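the post-pumping movement monitor described above can be sketched as follows; the velocity threshold is an assumed value:

```python
def post_pump_alarm(velocities, threshold=0.05):
    """after pumping halts, any mems sensor movement above a small
    velocity threshold (m/s) may indicate a leaking shoe valve or an
    undesirable inflow/outflow, and should trigger an alarm and/or a
    safety device (e.g., a downhole safety valve)."""
    return any(abs(v) > threshold for v in velocities)

# assumed readings after pump shutdown: sensors still creeping uphole
alarm = post_pump_alarm([0.0, 0.01, 0.12])  # True -> signal operator
```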
in an embodiment, where undesirable movement of the wellbore fluid is detected via movement of mems sensors, a signal may be generated to trigger an alarm and/or activate one or more safety devices such as downhole safety valves, blowout preventers, etc. in summary, if mems sensors are detected moving uphole when they should not be, one or more safety devices may be triggered automatically and/or manually to shut in the well. detection of mems sensor movement may be used in combination with other mems sensed parameters (e.g., detection of gas entering the wellbore) to provide further cross-checking and/or redundancy to trigger alarms and/or safety systems. figs. 23 a to 23 c illustrate a schematic view of an embodiment of a wellbore parameter sensing system 1500 , which comprises the wellbore 18 , the casing 20 situated in the wellbore 18 , a plurality of data interrogation units 1510 spaced along a length of the casing 20 , and a casing shoe 1520 situated at a downhole end of the casing 20 . in an embodiment, the casing shoe 1520 comprises a poppet valve 1522 , which is biased open by a spring 1524 when the valve 1522 is in a neutral state and may be closed as the casing 20 is lowered into the wellbore 18 . while a casing shoe and poppet valve assembly is demonstrated in this embodiment, it is understood that any assembly (e.g., float collar, float shoe, valve assembly, etc.)
suitable to terminate the downhole, distal end of the casing string (e.g., to protect and/or direct same into the wellbore) and to selectively open and/or close terminal end of the casing to fluid flow (from either interior to annulus or from annulus to interior) may be employed in the various embodiments disclosed herein, wherein communication with mems sensors may be used in determining when to selectively perform said open and/or close and wherein such communication may be with a data interrogation unit located in and/or proximate such distal assembly (e.g., coupled to and/or integral with a float collar, float shoe, valve assembly etc.) and/or located in a moveable member flowing through the wellbore (e.g., a wiper plug, ball, dart, etc.). thus, detection and/or communication with mems sensors by such data interrogation units may signal the opening and/or closing of a valve proximate the distal end of the casing in a conventional or reverse cementing operation, thereby allowing for the selective placement of the cement slurry. in an embodiment, a cement slurry 1530 may be pumped down the annulus 26 in the direction of arrows 1532 for the purpose of cementing the casing 20 to a wall of the wellbore 18 . fig. 23 a illustrates the wellbore 18 at the beginning of the pumping of the cement slurry 1530 , fig. 23 b illustrates the wellbore 18 when the cement slurry 1530 is partway down the wellbore 18 , and fig. 23 c illustrates the wellbore 18 when the cement slurry 1530 has arrived at or near a downhole end of the wellbore 18 . in an embodiment, the cement slurry 1530 may include a slug 1540 of mems sensors 52 that may be situated in a portion of the cement slurry 1530 that is pumped into the wellbore 18 prior to a remainder of the cement slurry 1530 , e.g., positioned at a leading edge/portion, face, or head of the slurry. 
in an embodiment, the mems sensors 52 are configured to measure and/or convey at least one parameter of the wellbore 18 , e.g., a longitudinal position of the mems sensors 52 in the wellbore 18 , and transmit data regarding the longitudinal positions of the mems sensors 52 in the wellbore 18 to the data interrogation unit 1510 most proximate to the mems sensors 52 . the data interrogation units 1510 may then transmit the mems sensor data to a processing unit situated at an exterior of the wellbore 18 , and such transmission may be carried out according to any embodiment disclosed herein (e.g., the embodiments of figs. 5-16 ). in an embodiment, as the cement slurry 1530 travels down the annulus 26 , a longitudinal position of the slug 1540 of mems sensors 52 , and hence a longitudinal position of a head of the cement slurry 1530 , may be determined in real time via interaction of the mems sensors 52 with the plurality of the data interrogation units 1510 spaced along the length of the casing as described herein (e.g., as described with reference to figs. 22 a - c ). in a further embodiment, a data interrogation unit may be incorporated in the casing shoe 1520 (or located in close proximity thereto), thereby enabling a determination of when the cement slurry 1530 arrives at or near a downhole end of the annulus 26 , as illustrated in fig. 23 c . in an embodiment, pumping of the cement slurry 1530 may be controlled (e.g., slowed and/or terminated) when the data interrogator incorporated in and/or positioned in close proximity to the casing shoe 1520 detects the slug 1540 of mems sensors 52 , thereby providing for precise placement of the cement slurry into the annular space while, based upon the design parameters of the well, likewise optionally allowing for a controlled amount of cement to be pumped through the float shoe and into the interior of the casing (or conversely preventing cement from entering into the interior of the casing). 
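a minimal sketch of the head-tracking and pump-control logic just described, assuming the deepest interrogation unit currently detecting the slug approximates the slurry head position; the depths and names are illustrative assumptions:

```python
# Hypothetical sketch: the deepest casing-mounted interrogation unit currently
# detecting the MEMS slug approximates the cement slurry head; once the unit
# at (or near) the casing shoe sees the slug, pumping is slowed or stopped.

SHOE_DEPTH_FT = 5000.0   # assumed depth of the shoe-mounted interrogator

def slurry_head_depth(detecting_unit_depths):
    """detecting_unit_depths: depths (ft) of units currently seeing the slug."""
    return max(detecting_unit_depths) if detecting_unit_depths else None

def pump_command(detecting_unit_depths):
    head = slurry_head_depth(detecting_unit_depths)
    if head is not None and head >= SHOE_DEPTH_FT:
        return "slow_or_stop"   # slug at the shoe: place cement precisely
    return "continue"

print(pump_command({1000.0, 2000.0}))   # slug partway down the annulus
print(pump_command({4000.0, 5000.0}))   # slug detected at the shoe
```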
in an embodiment, reverse cementing may be carried out in accordance with embodiments described in u.s. pat. no. 7,357,181, which is hereby incorporated by reference herein in its entirety. in an embodiment, after the pumping of the cement slurry 1530 is terminated, the casing 20 may be lowered in the wellbore 18 until a head 1523 of the valve 1522 makes physical contact with the bottom 19 of the wellbore 18 . the casing 20 may then be lowered further in opposition to a force of spring 1524 until the valve head 1523 is seated on a downhole end of the casing shoe 1520 . in this manner, cement slurry 1530 is prevented from further entering the interior of the casing 20 . referring to fig. 23 d , a method 1550 of servicing a wellbore is described. at block 1552 , a cement slurry is pumped down the wellbore. a plurality of micro-electro-mechanical system (mems) sensors is added to a portion of the cement slurry, for example a slug of mems sensors added to a leading edge of the slurry that is added to the wellbore prior to a remainder of the cement slurry and/or a slug of mems sensors added to a trailing edge of the slurry. at block 1554 , as the cement slurry is traveling through the wellbore, positions of the mems sensors in the wellbore are determined along a length of the wellbore, thereby providing a determination of a corresponding location (e.g., leading and/or trailing edge) of the cement slurry. in embodiments, mems sensors having one or more identifiers associated therewith may be included in the wellbore servicing composition. by way of non-limiting example, one or more types of rfid tags, e.g., comprising an rfid chip and antenna, may be added to wellbore servicing fluids. 
the rfid tag allows the rfid chip on the mems sensor to power up in response to exposure to rf waves of a narrow frequency band and modulate and re-radiate these rf waves, thereby providing information such as a group identifier, sensor type identifier, and/or unique identifier/serial number for the mems sensors and/or data collected by the mems sensors, for example any combination of the various sensed parameters disclosed herein. if a data interrogation unit in a vicinity of the mems sensor generates an electromagnetic field in the narrow frequency band of the rfid tag, then the mems sensor can transmit sensor data to the data interrogator, and the data interrogator can determine that a mems sensor having a specific rfid tag is in the vicinity of the data interrogator. again, while various rfid embodiments are disclosed herein, any suitable technology compatible with and integrated into the mems sensors may be employed to allow the mems sensors to convey information, e.g., one or more identifiers and/or sensed parameters, to one or more interrogation units. in embodiments, mems sensors having a first identifier (e.g., a first type of rfid tag, for example tags exhibiting an “a” signature) may be added to/suspended in all or a portion of a first wellbore servicing fluid, and mems sensors having a second identifier (e.g., a second type of rfid tag, for example tags exhibiting a “b” signature) may be added to/suspended in all or a portion of a second wellbore servicing fluid. the first and second wellbore servicing fluids may be added consecutively to a wellbore in which a casing having regularly longitudinally spaced data interrogation units attached thereto is situated. 
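the interrogation handshake described above (tag powers up and re-radiates only when the interrogator's field falls inside the tag's narrow frequency band, conveying group and unique identifiers) can be illustrated as follows; the class name, frequency band, and identifier fields are illustrative assumptions:

```python
# Hypothetical sketch of the tag/interrogator handshake: a tag responds only
# to an electromagnetic field inside its narrow frequency band, and the
# response carries the identifiers described above.

class MemsTag:
    def __init__(self, band_hz, group_id, serial):
        self.band_hz = band_hz      # (low, high) response band, illustrative
        self.group_id = group_id    # e.g., an "A" or "B" signature
        self.serial = serial        # unique identifier / serial number

    def respond(self, field_hz):
        low, high = self.band_hz
        if low <= field_hz <= high:   # tag powers up and re-radiates
            return {"group": self.group_id, "serial": self.serial}
        return None                   # outside the band: tag stays silent

tag = MemsTag((13.553e6, 13.567e6), "A", "0001")  # HF band, illustrative
print(tag.respond(13.56e6))   # in-band interrogation
print(tag.respond(915e6))     # out-of-band interrogation
```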
as the first and second wellbore servicing fluids travel through the wellbore, the data interrogation units interrogate the respective mems sensors of the fluids, thereby obtaining data regarding the identifier associated with the mems sensor (e.g., the type of rfid tag) and/or at least one wellbore parameter such as a position of the mems sensors in the wellbore or other sensed parameter (e.g., temperature, pressure, etc.). for example, the data interrogation units may interact with the mems sensor as described in relation to figs. 22 a - c and 23 a - d . as a result, in an embodiment, the positions of the different types of mems sensor (e.g., different types of rfid tags such as “a” tags and “b” tags) suspended in the two wellbore servicing fluids may be determined. in addition, using the aggregated positions of the mems sensors having the same and/or different type of rfid tag, a volume occupied by the first and/or second wellbore servicing fluids in the wellbore at a specific time and/or location in the wellbore may be determined. in an embodiment, the first and second wellbore servicing fluids may be substantially the same compositionally, and for example two or more different types of tags may be used to indicate different volumetric portions of the same fluid (e.g., a first 100 barrels having “a” tags followed by 500 barrels of “b” tags), thereby aiding in downhole identification, metering, measuring, and/or placement of fluids. in an alternative embodiment, the first and second wellbore servicing fluids may be compositionally different, and for example different types of tags may be used to indicate the different types of fluids (e.g., a first fluid such as cement having “a” tags followed by a second type of fluid such as a drilling fluid having “b” tags), thereby aiding in downhole identification, metering, measuring, and/or placement of fluids. 
such embodiments may be further combined, for example a first fluid having two different types of identifiers (“a” and “b” tags to denote different volumetric portions), followed by a second, different fluid having a third type of identifier (e.g., “c” tags) to denote the different composition or fluid type. in an embodiment, mems sensors having a third identifier (e.g., a third type of rfid tag, for example exhibiting a “c” signature) may be added to/suspended in a third wellbore servicing fluid and placed in the wellbore. for example, a third wellbore servicing fluid comprising “c” tags may be placed in the wellbore prior to, intermittent with, or subsequent to placement of first and second wellbore servicing fluids into the wellbore, having “a” and “b” tags, respectively. in an embodiment, the identifier (e.g., rfid tag) of the sensors in the third wellbore servicing fluid may be the same as the identifier (e.g., rfid tag) of the sensors in the first wellbore servicing fluid (for example a first fluid having “a” tags followed by a second fluid having “b” tags followed by a third fluid having “a” tags, wherein the first, second, and third fluids may be compositionally the same or different) or may be different from the identifier (e.g., rfid tag) of the sensors in the first wellbore servicing fluid (for example, a first fluid having “a” tags followed by a second fluid having “b” tags followed by a third fluid having “c” tags, wherein the first, second, and third fluids may be compositionally the same or different). the mems sensors may employ any suitable power source and/or transmission technology to convey an associated identifier to the interrogation units. in an embodiment, the mems sensors may be powered by the data interrogation units. in an alternative embodiment, the mems sensors may be powered by batteries disposed in the mems sensors. 
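the identifier schemes above (tags denoting volumetric portions of one fluid and/or different fluid types) amount to a simple lookup from a detected tag signature to a fluid and portion; the mapping below is a hypothetical example, not part of the disclosure:

```python
# Hypothetical sketch: interpret detected tag signatures as fluid portions,
# e.g., "A" tags for a first 100 bbl of a fluid, "B" tags for the next 500 bbl,
# and "C" tags for a compositionally different following fluid.

TAG_MEANING = {
    "A": ("fluid 1", "first 100 bbl"),
    "B": ("fluid 1", "next 500 bbl"),
    "C": ("fluid 2", "entire volume"),
}

def interpret(detected_tags):
    """Map a sequence of detected tag signatures to (fluid, portion) pairs,
    ignoring signatures outside the assumed scheme."""
    return [TAG_MEANING[t] for t in detected_tags if t in TAG_MEANING]

# A downhole interrogation unit seeing A, then B, then C infers the sequence:
for fluid, portion in interpret(["A", "B", "C"]):
    print(fluid, "-", portion)
```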
in an embodiment, instead of adding the mems sensors to the entire first and second wellbore servicing fluids, the mems sensors having the first identifier (e.g., first type of rfid tag) may be added as a slug to a portion of the first wellbore servicing fluid added to the wellbore prior to a remainder of the first wellbore servicing fluid; and the mems sensors having the second identifier (e.g., a second type of rfid tag) may be added as a slug to a portion of the second wellbore servicing fluid added to the wellbore prior to a remainder of the second wellbore servicing fluid. as the wellbore servicing fluids travel through the wellbore, the positions of the mems sensors (e.g., rfid tags) in each slug, and therefore the positions of heads of the wellbore servicing fluids, may be determined by the data interrogation units. in an embodiment, the positions of the mems sensors having the second identifier (e.g., second type of rfid tag) may be used to determine an interface of the first and second wellbore servicing fluids in the wellbore. while examples of first, second, and/or third wellbore servicing fluids and associated first, second and/or third identifiers have been described, it should be understood that any desirable number of wellbore servicing fluids and associated identifiers (including more than one identifier type in a given wellbore servicing fluid type or composition) may be used to carry out the embodiments disclosed herein. referring to fig. 23 e , a method 1560 of servicing a wellbore is described. at block 1562 , a first wellbore servicing fluid comprising a plurality of micro-electro-mechanical system (mems) sensors having a first identifier (e.g., a first type of radio frequency identification device (rfid) tag) is placed into the wellbore. 
at block 1564 , after placing the first wellbore servicing fluid into the wellbore, a second wellbore servicing fluid comprising a plurality of mems sensors having a second identifier (e.g., a second type of rfid tag) is placed into the wellbore. at block 1566 , positions in the wellbore of the mems sensors having the first and second identifiers (e.g., first and second types of rfid tags) are determined along a length of the wellbore, thereby providing a determination of a corresponding location (e.g., leading and/or trailing edge) of the first and/or second fluids. the mems sensors comprising the first and second identifiers may be added to all or a portion (e.g., leading and/or trailing edge slug) of the first and second wellbore servicing fluids, respectively. in embodiments, the first and second wellbore servicing fluids may be compositionally the same or different. in an embodiment, mems sensors having a common or same identifier (e.g., a common or same type of rfid tag such as an “a” tag) may be added as slugs to portions of two or more wellbore servicing fluids added to a wellbore prior to remainders of the respective two or more wellbore servicing fluids. in embodiments, the two or more wellbore servicing fluids may be compositionally the same or compositionally different. in an embodiment, the mems sensor slugs of the respective wellbore servicing fluids may be of different fluid volumes and/or of different mems sensor loadings/concentrations. as the wellbore servicing fluids travel through the wellbore, the positions of the mems sensors in each slug may be determined in real time by data interrogation units spaced at regular intervals along a casing of the wellbore, thereby providing a determination of a corresponding location (e.g., a leading and/or trailing edge) of the wellbore servicing fluids. 
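where every slug carries the same identifier, the different slug volumes and/or sensor loadings/concentrations mentioned above can still be told apart at an interrogation unit, for example by detection rate; the rate bands and labels below are illustrative assumptions:

```python
# Hypothetical sketch: distinguish same-identifier slugs by sensor loading,
# observed at an interrogation unit as a detection rate (counts per second).

def classify_slug(counts_per_second):
    """Map an observed detection rate to a slug identity by loading band
    (band edges and labels are illustrative assumptions)."""
    if counts_per_second >= 50:
        return "high-loading slug (e.g., cement head)"
    if counts_per_second >= 10:
        return "low-loading slug (e.g., spacer head)"
    return "background"

print(classify_slug(80))
print(classify_slug(20))
print(classify_slug(2))
```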
in addition, in an embodiment, the different volumes and/or different mems sensor loadings of each slug may be detectable as unique signals by the data interrogation units. accordingly, positions (e.g., heads or leading/trailing edges) of each of the wellbore servicing fluids in the wellbore may be identified using mems sensors having only one identifier (e.g., one type of rfid tag such as “a” tags). in an embodiment, volumes in the wellbore occupied by all but the last added wellbore servicing fluid may be determined using the positions of each mems sensor slug in the wellbore. furthermore, in an embodiment, three wellbore servicing fluids may be added to the wellbore in succession, whereby the first and third wellbore servicing fluids are compositionally the same and the second wellbore servicing fluid is a spacer fluid. referring to fig. 23 f , a method 1570 of servicing a wellbore is described. at block 1572 , a first wellbore servicing fluid comprising a plurality of micro-electro-mechanical system (mems) sensors having a first identifier (e.g., a first type of radio frequency identification device (rfid) tag) is placed into the wellbore. the mems sensors are added to all or a portion of the first wellbore servicing fluid (e.g., a leading and/or trailing edge slug of the first wellbore servicing fluid added to the wellbore prior to a remainder of the first wellbore servicing fluid). at block 1574 , after placing the first wellbore servicing fluid into the wellbore, a second wellbore servicing fluid comprising a plurality of mems sensors having the first identifier (e.g., the first type of rfid tag) is placed into the wellbore. the mems sensors are added to all or a portion of the second wellbore servicing fluid (e.g., a leading and/or trailing edge of the second wellbore servicing fluid added to the wellbore prior to a remainder of the second wellbore servicing fluid). 
in embodiments, the concentration of the first identifier in the first fluid is different from the concentration of the first identifier in the second fluid. in embodiments, the first and second wellbore servicing fluids may be compositionally the same or different. at block 1576 , positions in the wellbore of the mems sensors having the first identifier (e.g., first type of rfid tag) are determined along a length of the wellbore, thereby providing a determination of a corresponding location (e.g., leading and/or trailing edge) of the first and/or second fluids. figs. 24 a to 24 c illustrate a schematic cross-sectional view of an embodiment of a wellbore parameter sensing system 1600 , which comprises the wellbore 18 , the casing 20 situated in the wellbore 18 , a plurality of data interrogation units 1610 spaced at regular or irregular intervals along a length of the casing 20 , a float shoe 1620 situated at a downhole end of the casing 20 , and four wellbore servicing fluids added to the wellbore 18 in succession, namely, a drilling fluid 1630 , a spacer fluid 1640 , a cement slurry 1650 and a displacement fluid 1660 . in an embodiment, the float shoe 1620 comprises a poppet valve 1622 , which, in a neutral state, is biased closed by a spring 1624 . in addition, the poppet valve 1622 may be opened in opposition to a force applied by spring 1624 when a differential pressure between an interior of the casing 20 and the annulus 26 is sufficiently high. in an embodiment, the drilling fluid 1630 , the spacer fluid 1640 , the cement slurry 1650 and the displacement fluid 1660 are added to the wellbore within the context of cementing the casing 20 to the wellbore 18 . 
in an embodiment, the drilling fluid 1630 comprises a slug 1632 of mems sensors 52 added to the wellbore 18 prior to a remainder of the drilling fluid 1630 , the spacer fluid 1640 comprises a slug 1642 of mems sensors 52 added to the wellbore 18 prior to a remainder of the spacer fluid 1640 , the cement slurry 1650 comprises a slug 1652 of mems sensors 52 added to the wellbore 18 prior to a remainder of the cement slurry 1650 , and the displacement fluid 1660 comprises a slug 1662 of mems sensors 52 added to the wellbore 18 prior to a remainder of the displacement fluid 1660 . however, in other embodiments, the mems sensors 52 may be mixed and suspended in entire volumes of one or more of the wellbore servicing fluids added to the wellbore 18 . in alternative embodiments, slugs of mems sensors may be added to the trailing edges of one or more of the fluids 1630 , 1640 , 1650 , and 1660 in lieu of or in addition to the slugs at the leading edges of the fluids. in addition, in the present embodiment, the mems sensors 52 in all of the slugs 1632 , 1642 , 1652 , 1662 comprise a same identifier (e.g., a same type of rfid tag such as an “a” tag). however, in alternative embodiments, the slugs 1632 , 1642 , 1652 , 1662 may comprise mems sensors 52 having two or more different types of identifiers (e.g., two or more different types of rfid tags such as “a”, “b”, “c”, and “d” tags). furthermore, in the present embodiment, the slugs 1632 , 1642 , 1652 , 1662 are all of approximately the same volume and mems sensor loading. however, in alternative embodiments, the slugs 1632 , 1642 , 1652 , 1662 may be of different volumes and/or different mems sensor loadings so as to further identify and distinguish between the heads and interfaces of the wellbore servicing fluids 1630 , 1640 , 1650 , 1660 added to the wellbore 18 . 
in an embodiment, the drilling fluid 1630 , spacer fluid 1640 , cement slurry 1650 and displacement fluid 1660 are pumped down the interior of the casing 20 in succession, in the direction of arrow 1670 . in some embodiments, one or more plugs may be pumped along with the fluids, for example plugs at the interface of two of the fluids, providing an additional physical barrier between said fluids at the interface. for example, a wiper plug may be pumped behind the cement slurry 1650 and in front of the displacement fluid 1660 (e.g., the wiper plug positioned proximate ahead of the mems sensor slug 1662 ). as each wellbore servicing fluid 1630 , 1640 , 1650 , 1660 travels down the casing 20 , the data interrogators 1610 in a vicinity/proximity of the respective mems sensor slugs 1632 , 1642 , 1652 , 1662 are able to detect the mems sensors 52 in the slugs 1632 , 1642 , 1652 , 1662 and thus identify heads and interfaces of the wellbore servicing fluids 1630 , 1640 , 1650 , 1660 in the casing 20 . in an embodiment, as a pressure in the casing 20 increases due to the pumping of the wellbore servicing fluids 1630 , 1640 , 1650 , 1660 down the casing 20 , a pressure differential between the casing interior and the annulus 26 increases sufficiently to overcome the force applied by spring 1624 to the poppet valve 1622 and force the valve 1622 open. the drilling fluid 1630 may then pass through the poppet valve 1622 of the float shoe 1620 in the direction of arrows 1672 and travel up the annulus 26 in the direction of arrows 1674 , followed by spacer fluid 1640 , as shown in fig. 24 a . as the drilling fluid 1630 and the spacer fluid 1640 travel up the annulus 26 , the data interrogation units 1610 in the vicinity of the slugs 1632 and 1642 detect the mems sensors 52 in the slugs 1632 , 1642 and thus determine the location of the heads and the interface of the drilling fluid 1630 and the spacer fluid 1640 in the annulus 26 . referring to fig. 
24 b , the displacement fluid 1660 has been pumped partway down the casing 20 , the cement slurry 1650 is partially in the casing 20 and partially in the annulus 26 , the spacer fluid 1640 is completely in the annulus 26 and most of the drilling fluid 1630 has exited the annulus 26 . as the spacer fluid 1640 and cement slurry 1650 travel up the annulus 26 , the data interrogation units 1610 detect the location of their respective heads and their interface via the mems sensors located in slugs 1642 and 1652 . similarly, as the displacement fluid 1660 travels down the casing 20 , the data interrogation units 1610 detect a location of the head of the displacement fluid 1660 via the mems sensors located in slug 1662 . referring now to fig. 24 c , the spacer fluid 1640 has been pumped out of the annulus 26 , the cement slurry 1650 has been pumped nearly all the way up the annulus 26 , and the displacement fluid 1660 has been pumped nearly all the way down the casing 20 , such that the mems sensor slug 1662 at the head of the displacement fluid 1660 is situated proximate to the float shoe 1620 . in an embodiment, a data interrogation unit may be incorporated/integral with and/or located proximate to the float shoe 1620 for the purpose of detecting the mems sensor slug 1662 at the head of the displacement fluid 1660 . however, in an alternative embodiment, the data interrogation unit may be incorporated in a float collar situated proximate uphole from the float shoe 1620 . 
when the sensor slug 1662 is detected at or near the float shoe 1620 , pumping of the wellbore servicing fluids may be controlled (e.g., slowed and/or terminated) to provide for precise placement of the cement slurry into the annular space while, based upon the design parameters of the well, likewise optionally allowing for a controlled amount of cement to remain in the casing proximate the float collar or optionally allowing for removal of substantially all of the cement from the interior of the casing. in an embodiment, pumping is controlled so as to prevent the displacement fluid from entering the annulus 26 and possibly degrading the cement slurry 1650 near a base of the annulus 26 . when pumping ceases, the pressure in the interior of the casing 20 decreases, thereby allowing the valve 1622 to close. additionally or alternatively, in an embodiment, when a data interrogation unit 1610 located at a desired/known position uphole (e.g., the position most proximate to the earth's surface 16 ) detects the mems sensor slug 1652 at the head of the cement slurry 1650 , then an operator may conclude that the cement slurry 1650 has filled most or all of the annulus 26 and may be allowed to cure. in an embodiment, mems sensors may be added to a hydraulic fracturing fluid comprising one or more proppants. the fracturing fluid may be introduced into the wellbore and into one or more fractures situated in the wellbore and extending outward into the formation. at least a portion of the mems sensors may be deposited, along with the proppant or proppants, into the fracture or fractures and remain therein. in an embodiment, the mems sensors situated in the fracture or fractures may measure at least one parameter associated with the fracture or fractures, such as a temperature, pressure, a stress, a strain, a co 2 concentration, an h 2 s concentration, a ch 4 concentration, a moisture content, a ph, an na + concentration, a k + concentration or a cl − concentration. 
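parameters measured by mems sensors in a fracture, such as those listed above, may be compared against threshold values to infer fracture behavior; the thresholds, parameter names, and recommended actions below are illustrative assumptions, not part of the disclosure:

```python
# Hypothetical sketch: classify a fracture's behavior from MEMS-sensed
# parameters against threshold values (all values illustrative).

THRESHOLDS = {"moisture": 0.30, "ch4": 0.05, "stress_psi": 1500.0}

def diagnose_fracture(readings):
    """readings: dict of parameter name -> sensed value from fracture MEMS.
    Return a list of findings based on threshold comparisons."""
    findings = []
    if readings.get("moisture", 0) > THRESHOLDS["moisture"]:
        findings.append("producing water: plug or treat fracture")
    if readings.get("ch4", 0) > THRESHOLDS["ch4"]:
        findings.append("producing methane")
    if readings.get("stress_psi", 0) > THRESHOLDS["stress_psi"]:
        findings.append("producing sand: treat fracture")
    return findings or ["no action"]

print(diagnose_fracture({"moisture": 0.42, "ch4": 0.01, "stress_psi": 900}))
```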
in an embodiment, the presence of mems sensors deposited in one or more fractures facilitates the mapping of the fracture. for example, referring to fig. 19 , a fracturing fluid containing mems sensors may be pumped into fractures such as represented by fissures 58 and 60 extending into formation 62 and the mems sensors deposited therein. data interrogation units 150 may then provide a map of the fracture complexity in a manner similar to mapping the geometry of the wellbore (e.g., locating constrictions, expansions, etc.) as disclosed herein, for example in reference to the annular mapping embodiment of figs. 17-21 . furthermore, mobile data interrogation units may be used in addition to or in lieu of the fixed data interrogation units 150 shown in fig. 19 , e.g., a data interrogation unit located on a fracturing service workstring, for example one located proximate an end of a coiled tubing workstring employed in a fracturing operation. in an embodiment, the mems sensors in a fracture measure moisture content. when the moisture content exceeds a threshold value, it may be concluded that the fracture is producing water, and the fracture may be plugged or treated so as to no longer produce water. in an embodiment, the mems sensors in a fracture measure ch 4 concentration. if the ch 4 concentration exceeds a threshold value, it may be concluded that the fracture is producing methane. in an embodiment, the mems sensors in a fracture measure a stress or mechanical force. if the stress or mechanical force exceeds a threshold value, it may be concluded that the fracture is producing sand, and the fracture may be treated so as to no longer produce sand. referring to fig. 24 d , a method 1680 of servicing a wellbore is described. 
at block 1682 , a plurality of mems sensors is placed in a fracture that is in communication with the wellbore, for example via pumping a fracturing fluid comprising mems sensors into the fracture, reducing pressure, and allowing the mems sensors (along with proppant) to be deposited in the formation. the mems sensors are configured to measure at least one parameter associated with the fracture, and at block 1684 , the at least one parameter associated with the fracture is measured. in an embodiment, the mems sensors provide positional data with respect to one or more data interrogation units located at a known position (e.g., located at casing collars at known depths within the wellbore), and thereby provide information about the geometry and layout of fractures within the formation. for example, within the sensing or mapping range, the data interrogation units are operable to sense the presence of various mems sensors in relation to the unit, and thus can create a mathematical representation of mems sensor presence, velocity, location, concentration, and/or identity (e.g., a particular sensor or group of sensors having a unique identifier or i.d. number) in relation to the position of a given unit 150 , along with other parameters such as moisture content, ch 4 concentration, mechanical measurements (stress, strain, forces, etc.), ion concentration, acidity, ph, temperature, pressure, etc. such information can be provided in real time, and an ongoing fracturing job may be adjusted in response to information provided by the mems sensors located in the fracture. for example, the mems sensors may provide a real time snapshot of fracture development, complexity, orientation, lengths, etc. that may be analyzed and used to further control the fracturing operation. 
at block 1686 , data regarding the at least one parameter associated with the wellbore, formation, and/or fracture is transmitted from the mems sensors to an exterior of the wellbore in accordance with any embodiment disclosed herein, e.g., figs. 5-16 . at block 1688 , the data is processed. in an alternative embodiment, the detection of mems sensors in one or more fractures is used to control a wellbore servicing operation when fracturing is not desired. for example, in certain wellbore servicing operations, such as during drilling and/or cementing, fracturing may be undesirable as leading to detrimental loss of fluids into the formation. as described above, mems sensors can be added to a wellbore servicing fluid (e.g., drilling fluid and/or cement slurry) to detect movement and/or placement of the mems into the formation via movement of the fluid, and where such movement of the fluid into the formation is undesirable, one or more process parameters (e.g., flow rate, pressure, etc.) may be controlled (e.g., in real time) to alter the servicing treatment and reduce, stop, or eliminate the undesirable formation of fractures and resultant loss of servicing fluid to the formation. thus, mems sensors may be used in a variety of wellbore servicing fluid to control fracturing of the surrounding formation, to desirably induce/promote and/or inhibit/prevent formation of fractures as appropriate for a given service type. in an embodiment, a plurality of micro-electro-mechanical system (mems) sensors are placed in a wellbore composition, the wellbore composition is placed in a wellbore, and the mems sensors are used to monitor and detect movement in the wellbore and/or the surrounding formation. 
the data may be obtained from the mems sensors according to any of the embodiments disclosed herein (e.g., one or more mobile data interrogators tripped into and out of the wellbore and/or fixed data interrogators positioned within the wellbore), and may be further communicated/transmitted to/from or within the wellbore via any of the embodiments disclosed herein. for example, the mems sensors may be in a sealant composition that is placed within an annular casing space in the wellbore and wherein the movement comprises a relative movement between the sealant composition and the adjacent casing and/or wellbore wall. in other words, the mems sensors detect slippage or shifting of the cement sheath, the casing, and/or the wellbore wall/formation relative to one another. additionally or alternatively, at least a portion of the wellbore composition comprising the mems flows into the surrounding formation and movement in the formation is monitored/detected. for example, cracks, fissures, shifts, collapses, etc. of the formation may be detected over the life of the wellbore via the mems sensors. such movement may be detected via the motion and/or orientation sensing capabilities (e.g., accelerometers, x-y-z axis orientation, etc.) of the mems sensors as described herein. in particular, data collected from the mems sensors may be compared over successive monitoring or surveying intervals to detect movement and associated patterns. in particular, such movement may be correlated with production rates over the life of the well to help in optimizing production from the well both in terms of rate of production as well as total production over the life of the well. 
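the comparison of data over successive monitoring or surveying intervals described above can be sketched as a simple positional diff between two surveys; the coordinate scheme and tolerance are illustrative assumptions:

```python
# Hypothetical sketch: compare MEMS sensor positions from two successive
# surveys to flag relative movement of the cement sheath, casing, or
# formation. Tolerance and units are illustrative.

def detect_movement(survey_a, survey_b, tol=0.5):
    """surveys: dict of sensor id -> (x, y, z) position, arbitrary units.
    Return ids whose position shifted by more than tol between surveys."""
    moved = []
    for sid, (x0, y0, z0) in survey_a.items():
        if sid in survey_b:
            x1, y1, z1 = survey_b[sid]
            shift = ((x1 - x0) ** 2 + (y1 - y0) ** 2 + (z1 - z0) ** 2) ** 0.5
            if shift > tol:
                moved.append(sid)
    return moved

before = {"s1": (0.0, 0.0, 100.0), "s2": (1.0, 0.0, 200.0)}
after_ = {"s1": (0.0, 0.0, 100.1), "s2": (1.0, 0.0, 202.0)}  # s2 shifted
print(detect_movement(before, after_))
```

flagged sensors (and the pattern of their shifts) could then be correlated with production rates, as discussed above.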
for example, in response to the detection of motion in the formation (e.g., a shift in the formation), one or more operating parameters of the wellbore may be adjusted, for example the production rate of the wellbore (e.g., the rate of production of hydrocarbons from the wellbore), and such adjustments may extend an expected operating life of the wellbore. in an embodiment, mems sensors may be mixed into a sealant composition (e.g. cement slurry) that is placed into the annulus 26 between a wall of the wellbore 18 and the casing 20 . in embodiments, the sealant composition may be pumped down the drillstring/casing and up the annulus in a conventional cementing service, or alternatively the sealant composition may be pumped down the annulus in a reverse cementing job. the mems sensors may be used to monitor the sealant composition and/or the annular space for the presence and/or concentration of gas, water, or both, including but not limited to monitoring for the presence of corrosive materials, such as corrosive gas (e.g., acid gases such as hydrogen sulfide, carbon dioxide, etc.) and/or corrosive liquids (e.g., acid). accordingly, the mems sensors may be configured to measure a concentration of a water and/or gas in the cement slurry, such as ch 4 , h 2 s, or co 2 , prior to the cement setting. a degree of gas and/or water influx into the cement slurry may be determined using the gas and/or water concentration measured by the mems sensors. in particular, the presence of mems in the cement slurry may aid in identification of any undesirable inflow or channeling formed by gas migrating or flowing into the cement slurry prior to setting of the cement, as such gas and/or water inflow may be adverse to the integrity of and zonal isolation provided by the annular sheath of set cement. 
furthermore, mems sensors fixed in the set cement may also further aid in the detection of any such flow channels or other defects via annular mapping of the cement sheath as described herein. the presence and/or movement of annular water and/or gas as detected by mems distributed along a portion of the set cement sheath may be indicative of a loss or potential loss of zonal isolation, and remedial actions such as a squeeze job may be required to restore zonal isolation and prevent further gas migration within the wellbore. in a further embodiment, the above-mentioned cement slurry comprising mems sensors is allowed to cure so as to form a cement sheath. the mems sensors, which are distributed throughout the cross section of the cement sheath, may be configured and/or operable to measure a water and/or gas presence and/or concentration in the cement sheath. again, the mems sensors may be used to monitor the set sealant composition and/or the annular space, for example at periodic monitoring or service intervals over an expected service life of the wellbore, for the presence and/or concentration of gas, water, or both, including but not limited to monitoring for the presence of corrosive materials, such as corrosive gas (e.g., acid gases such as hydrogen sulfide, carbon dioxide, etc.) and/or corrosive liquids (e.g., acid). if a water and/or gas is present in the wellbore in a vicinity of a region of the cement sheath, mems sensors situated in the region of the cement sheath, for example in an interior of the cement sheath and/or at an interface of the cement sheath and the wellbore, may measure the presence/concentration of the water and/or gas at corresponding locations in the interior of the cement sheath and/or at the cement sheath/wellbore interface. 
in an embodiment, an integrity (e.g., structural integrity as effective to provide/maintain zonal isolation) of the region of the cement sheath may be determined using the presence/concentration of the water and/or gas measured by the mems sensors in the interior of the cement sheath. the region of the cement sheath may be determined to be integral (e.g., uncompromised and of acceptable structural integrity) if the concentration of the water and/or gas measured by the mems sensors in the interior of the cement sheath is less than a threshold value, for example less than a concentration of gas measured at the cement sheath/wellbore interface, which indicates that water and/or gas is not penetrating from an exterior surface of the cement sheath into an interior location. in embodiments, the mems sensors in the unset sealant composition (e.g., cement slurry) and/or in a set sealant composition (e.g., set cement forming a sheath) may be interrogated by running an interrogator into the wellbore, for example during and/or immediately after the cementing operation and/or at service intervals over the life of the wellbore. in alternative embodiments, the mems sensors are interrogated via data interrogators permanently located in the wellbore. in embodiments, the mems sensors in the unset sealant composition (e.g., cement slurry) and/or in a set sealant composition (e.g., set cement forming a sheath) detect the presence and/or concentration of water, gas, or both, including but not limited to monitoring for the presence of corrosive materials, such as corrosive gas (e.g., acid gases such as hydrogen sulfide, carbon dioxide, etc.) and/or corrosive liquids (e.g., acid). in such embodiments, an operator of a wellbore servicing operation, a field operator, or other person responsible for monitoring the wellbore may be signaled as to the detected gas and/or water (e.g., an alarm or alert may be signaled or activated).
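the threshold comparison described above can be sketched as follows; a minimal illustration, under the assumption stated in the text that the threshold is the concentration measured at the cement sheath/wellbore interface (function name and readings are hypothetical):

```python
# Hypothetical sketch of the integrity check: a sheath region is judged
# integral if every interior MEMS gas/water concentration reading stays
# below the level measured at the cement sheath/wellbore interface,
# i.e., fluid is not penetrating from the exterior surface inward.

def sheath_region_integral(interior_readings, interface_concentration):
    """True if no interior reading reaches the interface concentration."""
    return all(c < interface_concentration for c in interior_readings)
```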
the mems sensors may be used to provide a location in the wellbore corresponding to the detection of gas and/or water. in an embodiment (for example, an emergency or urgent response), at least one device is activated to prevent fluid flow out of the well in response to the detection of gas and/or water, and in particular during a cementing operation where the cement has not yet hardened and set. such devices may include emergency shut off valves (e.g., sub-surface safety valves), blow out preventers, and the like. the activation of such devices may be automatic and/or manual in response to the detection signal and/or alarm. upon establishing and/or confirming control of the wellbore (e.g., the wellbore is safely contained and/or shut in), one or more remedial actions may be performed in response to the detection of gas and/or water. for example, a tool may be lowered into the wellbore proximate the location of the detected gas and/or water, and the surrounding area may be surveyed for damage such as cracks in the cement sheath, corrosion of the casing, etc. to determine the integrity thereof. upon assessing the nature and extent of damage, remedial services may be performed. for example, the area may be patched by placing additional sealant composition into the damaged area (e.g., squeezing cement into damaged areas such as flow channel, cracks, etc.). additionally or alternatively, a section of damaged casing may be replaced or repaired, for example by cutting out and replacing the damaged section or placing a reinforcing casing or liner within the damaged portion. such remedial actions may extend the expected service life of the wellbore. 
in alternative embodiments, the mems sensors in a set sealant composition (e.g., set cement forming a sheath) detect the presence and/or concentration of water, gas, or both, including but not limited to monitoring for the presence of corrosive materials, such as corrosive gas (e.g., acid gases such as hydrogen sulfide, carbon dioxide, etc.) and/or corrosive liquids (e.g., acid), and in response one or more operating parameters of the wellbore are adjusted, for example the production rate of the wellbore (e.g., the rate of production of hydrocarbons from the wellbore). examples of operating conditions or parameters further include temperature, pressure, production rate, length of service interval, or any combination thereof. adjusting one or more operating conditions of the wellbore, in addition to or in lieu of one or more remedial actions, may extend the expected service life of the wellbore. in an embodiment, the mems sensors may be mixed into a sealant composition (e.g. cement slurry) that is placed into the annulus 26 between a wall of the wellbore 18 and the casing 20 in a wellbore associated with carbon dioxide injection, for example a carbon dioxide injection well used to sequester carbon dioxide. the mems sensors may be used to detect leaks in such wells. for example, the detection of carbon dioxide in an annular space in the wellbore may indicate that the carbon dioxide injection well has lost zonal integrity or otherwise is leaking. accordingly, remedial actions may be taken as described above to repair the leaks and restore integrity. additionally or alternatively, such remedial actions may be taken to work-over pre-existing wells, for example to retrofit older wells that may no longer be economically viable for hydrocarbon production, and thereby render such wells suitable for carbon dioxide injection. such wells would be useful for sequestering carbon dioxide from large scale commercial sources for greenhouse gas reduction purposes. fig.
25 illustrates an embodiment of a wellbore parameter sensing system 1700 comprising the wellbore 18 , the casing 20 situated in the wellbore 18 , a plurality of data interrogation units 1710 spaced along a length of the casing 20 , a processing unit 1720 situated at an exterior of the wellbore 18 , and a cement slurry placed into the annulus 26 between the wellbore 18 and the casing 20 and allowed to cure to form a cement sheath 1730 . in an embodiment, the data interrogation units 1710 may be powered by rechargeable batteries or a power supply situated at the exterior of the wellbore 18 , or otherwise as disclosed in various embodiments herein. in an embodiment, the cement sheath 1730 comprises mems sensors 52 , which are configured to measure at least one wellbore parameter, e.g., a spatial position of the mems sensors 52 with respect to the various data interrogation units 1710 and/or the casing 20 (e.g., data interrogation units mounted at known locations such as casing collars). the mems sensors 52 may be suspended in, and distributed throughout, the cement slurry and the cured cement sheath 1730 . the mems sensors 52 may be passive sensors, i.e., powered by electromagnetic pulses emitted by the data interrogation units 1710 , or active sensors, i.e., powered by batteries situated inside the mems sensors 52 or otherwise powered by a downhole power source. in an embodiment, the data interrogation units 1710 may interrogate the mems sensors 52 and receive from the mems sensors 52 data regarding, e.g., the spatial position of the mems sensors 52 , and transmit the data to the processing unit 1720 for processing. in an embodiment, the data interrogation units 1710 may transmit the sensor data to the processing unit 1720 via a data line that runs along the casing, for example as shown in figs. 5 , 7 , and 9 .
in an alternative embodiment, the data interrogation units 1710 may transmit the sensor data wirelessly to neighboring data interrogation units 1710 and up the casing 20 to the processing unit 1720 , for example as shown in figs. 6 , 8 , and 10 . while fixed data interrogation units 1710 are shown, it should be understood that mobile data interrogation units (for example, unit 40 of fig. 2 , unit 620 of fig. 8 , and unit 740 of fig. 9 ) may be disposed and moved within the wellbore to further aid in obtaining and/or processing data associated with cross-sectional views of the annulus, cement sheath, and/or formation. in an embodiment, the processor 1720 may be configured to divide the wellbore 18 into a plurality of cross-sectional slices of a specified width that are situated along a length of the wellbore 18 . the width of each slice may be about 0.1 cm to 10 cm, alternatively about 0.5 cm to 5 cm, alternatively 0.5 cm to 1 cm. in an embodiment, the processor 1720 is configured to aggregate planar coordinates of the positions of the mems sensors 52 in each cross-sectional slice and plot the planar coordinates of the positions of the mems sensors 52 in each cross-sectional slice so as to approximate cross-sections of the cement sheath 1730 in the annulus 26 , along the length of the casing 20 . in an embodiment, the planar coordinates may comprise cartesian coordinates, in which a center of a casing cross-section serves as an origin. in a further embodiment, the planar coordinates may comprise polar coordinates, in which a center of a casing cross-section serves as an origin. in embodiments, the cross-sectional slices of the wellbore may be used to determine an integrity of the cement sheath 1730 along the length of the casing 20 .
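the slicing and coordinate aggregation attributed to the processor 1720 can be sketched as follows; a minimal illustration, assuming sensor positions arrive as (x, y, depth) tuples with the center of the casing cross-section as origin, converted here to the polar form mentioned above (slice width and data layout are illustrative assumptions):

```python
import math

# Hypothetical sketch: group MEMS sensor coordinates into cross-sectional
# depth slices of a given width, storing each sensor's planar position as
# polar coordinates (radius, angle) about the casing axis.

def slice_sensors(sensors, slice_width=1.0):
    """sensors: iterable of (x, y, depth) tuples, casing center as origin.
    Returns {slice_index: [(radius, angle_radians), ...]}."""
    slices = {}
    for x, y, depth in sensors:
        idx = int(depth // slice_width)          # which slice this sensor falls in
        r = math.hypot(x, y)                     # radial distance from casing axis
        theta = math.atan2(y, x)                 # angular position in the slice
        slices.setdefault(idx, []).append((r, theta))
    return slices
```

plotting each slice's (radius, angle) points would then approximate the annular cross-sections; an angular gap with no points suggests a void or constriction, while radii beyond the expected wellbore wall suggest washout or cement migration into fractures.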
as the mems sensors 52 are distributed throughout the cement sheath 1730 , the cross-sectional slices may be used to determine an extent of cement coverage in the annulus 26 and/or a cross-sectional shape of the annulus 26 . in an embodiment, in cross-sectional slices in which no mems sensors 52 are situated in specific regions outside of the casing 20 , the presence of a void in the cement sheath 1730 and/or a constriction in the annulus 26 may be determined. in an embodiment, in cross-sectional slices in which mems sensor coordinates extend beyond a boundary at which a wall of the wellbore 18 is thought to be situated, it may be concluded that the wellbore 18 is washed out and/or contains a significant fracture or fractures or permeable regions through which cement has migrated. in some embodiments, the mems sensors may extend from the wellbore into the formation, and likewise the cross-sectional slices may provide information regarding the formation, for example cross-sectional shapes of fractures/fissures such as shown in figs. 19 and 20 . for example, a cemented wellbore may be perforated, a fluid (e.g., fracturing fluid) comprising mems sensors may be pumped into the formation (e.g., via the perforations and/or fractures), and cross-sectional slices taken of the treated portion of the wellbore. in a further embodiment, in cross-sectional slices in which the mapped planar coordinates of the mems sensors 52 form an approximately annular shape without voids, it may be concluded that the cement sheath 1730 is in good condition in regions corresponding to these cross-sectional slices. fig. 26 a , fig. 26 b and fig. 26 c illustrate schematic cross-sectional views of the wellbore 18 taken at lines a-a, b-b and c-c, respectively. as is apparent from fig. 26 a , the cement sheath 1730 contains a void 1732 at which a strength or structural integrity of the cement sheath 1730 may be compromised. 
accordingly, remedial action such as secondary cementing may be required to eliminate the void 1732 . in addition, as is apparent from fig. 26 b , a region of the annulus 26 through which line b-b travels is devoid of cement. in this instance, the presence of drill cuttings and/or a ledge and/or a build-up of filter cake may be concluded, and, if necessary, appropriate remedial action may be undertaken. furthermore, as is apparent from fig. 26 c , the cross-sectional slice of the wellbore 18 taken at line c-c has a smooth, unbroken annular shape. accordingly, it may be concluded that the cement sheath 1730 is in good condition at this cross-sectional slice. accordingly, the use of mems sensors in a wellbore servicing fluid, including but not limited to a cement composition, may aid in an assessment of the wellbore, including providing information regarding annular condition/shapes (e.g., fig. 18 ), formation condition/shapes (e.g., fig. 20 ), cement sheath condition/shapes (e.g., fig. 26 ), and other downhole regions or conditions as would be apparent based upon the disclosure herein. referring to fig. 26 d , a method 1750 of servicing a wellbore is described. at block 1752 , a plurality of micro-electro-mechanical system (mems) sensors is placed in a cement slurry. at block 1754 , the cement slurry is placed in an annulus disposed between a wall of the wellbore and a casing situated in the wellbore. at block 1756 , the cement slurry is allowed to cure to form a cement sheath. at block 1758 , spatial coordinates of the mems sensors with respect to one or more known locations in the wellbore are determined (e.g., with respect to data interrogators spaced along the casing, for example at casing collars). at block 1760 , planar coordinates of the mems sensors are mapped in a plurality of cross-sectional planes spaced along a length of the wellbore. 
furthermore, one or more downhole conditions (e.g., a health or maintenance condition/state of the wellbore, formation, cement sheath, etc.) may be determined based upon the mapped cross-sectional planes (e.g., cross-sectional representations of the wellbore, formation, cement sheath, etc.). if appropriate, one or more remedial actions (e.g., servicing operations such as squeeze jobs, etc.) may be carried out in the area or region of the wellbore displaying a need therefor based upon analysis of the cross-sectional representations. in embodiments, the cross-sectional analysis is performed in accordance with a service or inspection interval of the wellbore, and may furthermore comprise one or more mobile interrogation units (in addition to or in lieu of the fixed data interrogation units 1710 ) placed into the wellbore (e.g., via wireline or coiled tubing) during such services or inspections. in embodiments, for the purpose of measuring wellbore parameters, mems sensors may not only be mixed with and suspended in wellbore servicing fluids (for example, as disclosed in the embodiments of figs. 5-26 ), but may also be integral with wellbore servicing equipment and tools, for example contained or housed within the tool and/or molded or formed as a part of the tool formed of plastic or a composite resin material. in an embodiment, the tool houses a fluid (e.g., a hydraulic fluid) within space located in the tool (e.g., a fluid reservoir), and the fluid further comprises mems sensors. in addition or alternatively, data interrogation units may be molded onto wellbore servicing equipment and tools using, for example, a composite resin material. in embodiments, the composite resin material may comprise an epoxy resin. in further embodiments, the composite resin material may comprise at least one ceramic material. for example, the composite material may comprise a ceramic based resin including, but not limited to, the types disclosed in u.s.
patent application publication nos. us 2005/0224123 a1, entitled “integral centraliser” and published on oct. 13, 2005, and us 2007/0131414 a1, entitled “method for making centralizers for centralising a tight fitting casing in a borehole” and published on jun. 14, 2007. for example, in some embodiments, the resin material may include bonding agents such as an adhesive or other curable components. in some embodiments, components to be mixed with the resin material may include a hardener, an accelerator, or a curing initiator. further, in some embodiments, a ceramic based resin composite material may comprise a catalyst to initiate curing of the ceramic based resin composite material. the catalyst may be thermally activated. alternatively, the mixed materials of the composite material may be chemically activated by a curing initiator. more specifically, in some embodiments, the composite material may comprise a curable resin and ceramic particulate filler materials, optionally including chopped carbon fiber materials. in some embodiments, a compound of resins may be characterized by a high mechanical resistance, a high degree of surface adhesion and resistance to abrasion by friction. in embodiments, wellbore servicing equipment or tools having mems sensors integrated therein may be formed from one or more composite materials. a composite material comprises a heterogeneous combination of two or more components that differ in form or composition on a macroscopic scale. while the composite material may exhibit characteristics that neither component possesses alone, the components retain their unique physical and chemical identities within the composite. composite materials may include a reinforcing agent and a matrix material. in a fiber-based composite, fibers may act as the reinforcing agent. the matrix material may act to keep the fibers in a desired location and orientation and also serve as a load-transfer medium between fibers within the composite.
the matrix material may comprise a resin component, which may be used to form a resin matrix. suitable resin matrix materials that may be used in the composite materials described herein may include, but are not limited to, thermosetting resins including orthophthalic polyesters, isophthalic polyesters, phthalic/maleic type polyesters, vinyl esters, thermosetting epoxies, phenolics, cyanates, bismaleimides, nadic end-capped polyimides (e.g., pmr-15), and any combinations thereof. additional resin matrix materials may include thermoplastic resins including polysulfones, polyamides, polycarbonates, polyphenylene oxides, polysulfides, polyether ether ketones, polyether sulfones, polyamide-imides, polyetherimides, polyimides, polyarylates, liquid crystalline polyester, and any combinations thereof. in an embodiment, the matrix material may comprise a two-component resin composition. suitable two-component resin materials may include a hardenable resin and a hardening agent that, when combined, react to form a cured resin matrix material. suitable hardenable resins that may be used include, but are not limited to, organic resins such as bisphenol a diglycidyl ether resins, butoxymethyl butyl glycidyl ether resins, bisphenol a-epichlorohydrin resins, bisphenol f resins, polyepoxide resins, novolak resins, polyester resins, phenol-aldehyde resins, urea-aldehyde resins, furan resins, urethane resins, glycidyl ether resins, other epoxide resins, and any combinations thereof. suitable hardening agents that can be used include, but are not limited to, cyclo-aliphatic amines; aromatic amines; aliphatic amines; imidazole; pyrazole; pyrazine; pyrimidine; pyridazine; 1h-indazole; purine; phthalazine; naphthyridine; quinoxaline; quinazoline; phenazine; imidazolidine; cinnoline; imidazoline; 1,3,5-triazine; thiazole; pteridine; indazole; amines; polyamines; amides; polyamides; 2-ethyl-4-methyl imidazole; and any combinations thereof.
the fibers may lend their characteristic properties, including their strength-related properties, to the composite. fibers useful in the composite materials used to form a collar and/or one or more bow springs may include, but are not limited to, glass fibers (e.g., e-glass, a-glass, e-cr-glass, c-glass, d-glass, r-glass, and/or s-glass), cellulosic fibers (e.g., viscose rayon, cotton, etc.), carbon fibers, graphite fibers, metal fibers (e.g., steel, aluminum, etc.), ceramic fibers, metallic-ceramic fibers, aramid fibers, and any combinations thereof. fig. 27 a illustrates an embodiment of a wellbore parameter sensing system 1800 , which comprises the wellbore 18 , the casing 20 situated in the wellbore 18 , a plurality of data interrogation units 1810 attached to the casing 20 and spaced along a length of the casing 20 , a processing unit 1820 situated at an exterior of the wellbore 18 , and a plug 1830 . in an embodiment, the plug 1830 is a wiper plug configured to be pumped down the casing 20 for the purpose of removing residues of a wellbore servicing fluid from an inner wall of the casing 20 , typically employed in a wellbore cementing operation wherein wiper plugs are deployed in front of and/or behind a cement slurry that is pumped downhole. while various embodiments herein refer to wiper plugs, it is to be understood that other types of plugs or pumpable members may be combined with mems sensors, for example balls, darts, etc., and employed in various other wellbore servicing operations or functions such as operating valves, sleeves, etc., where the mems sensors may be used to verify the location of the plug or pumpable member (e.g., to verify that if/when it has landed or seated properly). in an embodiment, the data interrogation units 1810 may be powered by rechargeable batteries or a power supply situated at the exterior of the wellbore 18 or by any other downhole power supply. 
in an embodiment, the plug 1830 may comprise mems sensors 1840 , which are configured to measure at least a vertical position of the mems sensors 1840 (and correspondingly the location of the plug 1830 ) in the casing 20 and a pressure exerted on the mems sensors 1840 (and correspondingly a pressure exerted on the plug 1830 ). in an embodiment, the mems sensors 1840 may be molded onto a downhole end (e.g., nose) of the plug 1830 , for example a wiper plug that is configured to mate with a float collar 1850 situated near a downhole end of the casing 20 . in an alternative embodiment, the mems sensors 1840 may be incorporated in a material of which the plug 1830 is made and situated at the downhole end of the plug 1830 such that the mems sensors are in proximity to a seat or other member that receives or mechanically interacts with the plug 1830 . in other embodiments, the mems sensors 1840 may be housed by, coupled to, or otherwise integral with the plug 1830 . in operation, in an embodiment, the plug 1830 (e.g., a wiper plug) may be pumped down the casing 20 in the direction of arrow 1832 by pumping a displacement fluid down the casing 20 , directly in back of the plug 1830 . as the plug 1830 travels down the casing 20 , data interrogation units 1810 nearest to the mems sensors 1840 in the plug 1830 interrogate the mems sensors 1840 . in response to being interrogated, the mems sensors 1840 may transmit to the data interrogation units 1810 data regarding at least the vertical position of the mems sensors 1840 in the casing 20 and the pressure exerted on the mems sensors 1840 . in an embodiment, the data interrogation units 1810 may then transmit the sensor data to the processing unit 1820 via a data line that runs along the casing or by other communication means or networks (e.g., wireless networks and/or telemetry) as disclosed herein. 
for example, the data interrogation units 1810 may transmit the sensor data wirelessly to neighboring data interrogation units 1810 (and/or via a mems sensor network where one or more wellbore servicing fluids, e.g., a cement composition, comprises mems sensors) and/or up the casing 20 to the processing unit 1820 . in an embodiment, when the plug 1830 (e.g., a wiper plug) lands on a seat or receptacle such as the float collar 1850 , the pressure exerted on the mems sensors 1840 situated at the downhole end of the wiper plug 1830 will increase sharply due to a reaction force applied to the wiper plug 1830 by the float collar 1850 . in response to the pressure increase detected by the mems sensors and communicated to the surface, pumping of the displacement fluid behind the wiper plug 1830 may be controlled (e.g., slowed or terminated). in an embodiment, the pumping of the displacement fluid may be terminated when the pressure exerted on the mems sensors 1840 reaches a threshold value of about 200 psi to about 3000 psi depending upon depth of the well. referring to fig. 27 b , a method 1860 of servicing a wellbore is described. at block 1862 , a wellbore servicing fluid is placed downhole. for example, a cement slurry is pumped down a casing situated in the wellbore and up an annulus situated between the casing and a wall of the wellbore. at block 1864 , a plug comprising mems sensors is placed downhole. for example, a wiper plug comprising mems sensors is pumped down the casing. in an embodiment, the wiper plug comprises mems sensors at a downhole end of the wiper plug configured to engage with a float collar that is coupled to the casing and situated proximate to a downhole end of the casing. the mems sensors are configured to measure pressure and/or location/position within the wellbore, and correspondingly provide pressure and/or location information for the plug.
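the threshold-based pump control described above can be sketched as follows; a minimal illustration in which successive pressure readings from the plug's mems sensors are consumed until a threshold is exceeded (the text gives roughly 200 psi to 3000 psi depending on well depth; the readings and the chosen threshold here are hypothetical):

```python
# Hypothetical sketch: pumping of displacement fluid continues until the
# pressure reported by the plug's MEMS sensors exceeds a threshold,
# indicating the plug has landed on the float collar.

def pump_until_seated(pressure_readings, threshold_psi=1000.0):
    """Consume successive pressure readings; return the index of the first
    reading exceeding the threshold (plug seated), or None if never seated."""
    for i, psi in enumerate(pressure_readings):
        if psi > threshold_psi:
            return i  # reaction force from the float collar detected; stop pumping
    return None
```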
at block 1866 , pumping of the plug is discontinued when a pressure measured by the mems sensors exceeds a threshold value, for example as a result of the plug coming into contact with or engaging a seat (e.g., the wiper plug seating on the float collar). fig. 28 a illustrates an embodiment of a wellbore parameter sensing system 1900 , which comprises the wellbore 18 , the casing 20 situated in the wellbore 18 , a plurality of mems sensor strips 1910 attached to and/or housed within the casing 20 and spaced along a length of the casing 20 , a processing unit 1920 situated at an exterior of the casing, and a plug 1930 situated inside of the casing 20 . in an embodiment, the mems sensor strips 1910 comprise a composite resin material, with which mems sensors 1912 are mixed, and which may be molded to the casing 20 , for example to an interior and/or outer wall of the casing or within a hollow or void space defined by the casing or a component thereof (e.g., a pocket or void space within a casing collar). in an embodiment, the mems sensor strips 1910 are located in grooves, recessions, scallops, channels or the like on the interior wall of the casing and form a flush interface with the interior wall of the casing such that the interior diameter of the casing is not adversely affected (e.g., roughened, restricted, etc.) by the presence of the mems sensor strips 1910 . in an embodiment as shown in fig. 28 a , the mems sensor strips 1910 may be embedded in grooves 1914 in the inner wall of the casing 20 so as not to protrude from the inner wall of the casing 20 . in an embodiment, the mems sensor strips 1910 may be mounted flush with the inner wall of the casing 20 . in a further embodiment, the mems sensor strips 1910 may be attached to casing collars. 
the mems sensors 1912 may be passive sensors or active sensors and may be configured to measure at least one wellbore parameter, e.g., a vertical position of the mems sensors 1912 along the casing 20 or an ambient condition (e.g., environmental condition) within the wellbore. in an embodiment, a plug 1930 (e.g., a wiper plug) may comprise a data interrogation unit 1940 , which is configured to interrogate mems sensors 1912 in a vicinity of the data interrogation unit 1940 . the data interrogation unit 1940 may be molded to the wiper plug 1930 using a composite resin material or may be otherwise housed by, coupled to, or integral with the plug 1930 . in an embodiment, the data interrogation unit 1940 may be powered by a rechargeable battery, for example a lithium ion battery. the battery may be charged prior to and/or after placement of the data interrogation unit into the wellbore. for example, a battery charger (e.g., inductive charger) may be lowered into the wellbore periodically to charge batteries associated with the data interrogation units and/or the mems sensors (e.g., active sensors). in an embodiment, the battery is capable of powering the data interrogation units for at least 1, 2, 3, or 4 weeks. in an embodiment, the data interrogation unit 1940 is powered by transport of the plug 1930 through the wellbore, for example via fluid flow through the plug driving a power generator. in a further embodiment, the data interrogation unit 1940 may be powered by a wireline run between the data interrogation unit 1940 and a power supply situated at the exterior of the wellbore. in operation, the plug 1930 may be pumped down the casing by pumping a displacement fluid into and down the casing 20 directly in back of the plug 1930 .
as the plug 1930 nears and passes the mems sensor strips 1910 , the data interrogation unit 1940 interrogates the mems sensors 1912 in the respective strips 1910 and receives data from the mems sensors 1912 regarding at least the vertical position of the mems sensors 1912 in the casing 20 , and correspondingly the position of the plug 1930 in the wellbore. for example, as the plug 1930 passes through the wellbore, the data interrogation unit may successively identify the presence of the mems sensor strips 1910 , and the position of the plug 1930 may be determined for example by counting the number of strips 1910 passed (e.g., where a location of one or more strips is known and/or the distance between strips is known) and/or by employing one or more unique identifiers with the mems sensors (e.g., strips 1910 a, b, c, d , and e have corresponding unique identifiers a, b, c, d, and e, and the location of a strip having a given identifier is known). the data interrogation unit 1940 may then transmit the sensor data to the processing unit 1920 for further processing, for example look-up or correlation of mems sensor identifiers with known locations in the wellbore. when the data interrogation unit 1940 reaches the mems sensor strip 1910 proximate to and/or integral with a seat such as a float collar 1950 positioned in the casing 20 , the data regarding the vertical position of the mems sensors 1912 in this mems sensor strip 1910 may be transmitted to the data interrogation unit 1940 and the processor 1920 and give the processor 1920 an indication that the plug 1930 has engaged/seated (e.g., the wiper plug has landed on the float collar 1950 or is very close to landing on the float collar 1950 ). in response to receiving this data, the processor 1920 may cause pumping of the displacement fluid to be controlled (e.g., slowed and/or terminated).
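the identifier-based position lookup described above can be sketched as follows; a minimal illustration in which strip identifiers a-e map to hypothetical known depths (the identifiers come from the text; the depth values and function name are illustrative assumptions):

```python
# Hypothetical sketch: each MEMS sensor strip carries a unique identifier
# whose depth along the casing is known, so the plug's position is the
# depth of the most recent strip its interrogation unit has passed.

STRIP_DEPTHS_FT = {"a": 1000, "b": 2000, "c": 3000, "d": 4000, "e": 5000}

def plug_position(strips_passed):
    """strips_passed: ordered list of strip identifiers seen by the plug's
    interrogation unit. Returns the depth of the last strip, or None."""
    if not strips_passed:
        return None
    return STRIP_DEPTHS_FT[strips_passed[-1]]
```

alternatively, where only the strip spacing is known, the position could be estimated as the count of strips passed multiplied by that spacing.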
in an embodiment, the data interrogation unit 1940 may transmit sensor data to the processor 1920 via a data line that is attached to the data interrogation unit 1940 and the processor 1920 and follows the data interrogation unit 1940 into the wellbore 18 . in a further embodiment, the data interrogation unit 1940 may transmit sensor data to the processor 1920 via regional communication boxes attached to the casing and spaced along a length of the casing. in alternative embodiments, the data interrogation unit may employ wireless communication, for example a mems sensor network where mems sensors are located in a wellbore servicing fluid proximate the plug (e.g., in a cement slurry located in front of the plug) and/or via telemetry induced via contact with the casing (e.g., during pumping and/or upon seating in the float collar). in an embodiment, the mems sensors 1912 in the mems sensor strips 1910 may be configured to measure a concentration of a gas in the casing 20 along the length of the casing 20 and transmit data regarding the gas concentration to the processor 1920 via communication boxes attached to the casing and spaced along a length of the casing or any other communication means disclosed herein. the gas may comprise, for example, ch 4 , h 2 s and/or co 2 . in an embodiment, from measured methane concentrations along the length of the casing 20 , the mems sensors 1912 may provide an indication, for example, that methane is advancing rapidly up the casing 20 , so that necessary emergency actions may be taken, e.g., signaling for the closing of one or more emergency or safety valves or blowout preventers. in a further embodiment, a wellbore servicing fluid (e.g., cement composition) comprising a plurality of mems sensors may be placed into the casing. the mems sensors may be suspended in and distributed throughout the wellbore servicing fluid (e.g., cement slurry and/or set cement forming a cement sheath).
the mems sensors (e.g., in strips 1910 and/or in the wellbore servicing composition) may measure at least one wellbore parameter and transmit data regarding the wellbore parameter to the processor 1920 via a network consisting of the mems sensors in the wellbore servicing fluid and/or the mems sensors 1912 situated in the mems sensor strips 1910 . referring to fig. 28 b , a method 1960 of servicing a wellbore is described. at block 1962 , a plurality of micro-electro-mechanical system (mems) sensors is optionally placed in a wellbore servicing fluid, e.g., a cement composition. at block 1964 , the wellbore servicing fluid is placed in the wellbore. in addition to or in lieu of mems sensors in the wellbore servicing fluid, the wellbore further comprises mems sensors disposed in one or more composite resin or composite elements. for example, the composite resin elements may be molded to an inner wall of a casing situated in the wellbore and spaced along a length of the casing. at block 1966 , a network consisting of the mems sensors in the wellbore is formed (e.g., a network of mems sensors in the wellbore servicing fluid and/or contained within one or more resin or composite elements). at block 1968 , data obtained by the mems sensors in the wellbore is transmitted from an interior of the wellbore to an exterior of the wellbore via the network. in embodiments, the data may be obtained from the mems sensors via one or more data interrogators present in a wellbore servicing tool run into the wellbore prior to, concurrent with, and/or subsequent to the wellbore servicing operation. in an embodiment, the one or more data interrogation units is integral with a wiper plug pumped behind a cement slurry.
in an embodiment, a cement composition is pumped into a wellbore, followed by a wiper plug having a data interrogation unit integral therewith, and a float collar having mems sensors integral therewith is located at a terminal end of the casing, wherein engagement of the wiper plug with the float collar is signaled from downhole to the surface (e.g., via various communication means/networks as described herein) by the mems sensors interacting with the interrogation unit such that pumping of the cement composition may be controlled in response to the position of the wiper plug conveyed from downhole to the surface. fig. 29 a is a schematic view of an embodiment of a wellbore parameter sensing system 2000 , which comprises the wellbore 18 , the casing 20 situated in the wellbore 18 , a processing unit 2010 situated at an exterior of the wellbore 18 and a plurality of mems sensor strips 2020 attached to the casing 20 and spaced along a length of the casing 20 . in an embodiment, the mems sensor strips 2020 comprise a composite resin material, in which mems sensors 2022 are mixed and distributed, and which may be molded to the casing 20 . as shown in fig. 29 a , the sensor strips 2020 may be located on an exterior wall or surface of the casing 20 (e.g., a side facing or adjacent the wellbore wall). the sensor strips 2020 may be disposed within the casing wall (e.g., outer surface) in accordance with sensor strips 1910 of fig. 28 a , which are shown by way of non-limiting example on an interior surface or wall of casing 20 . in an embodiment, the mems sensor strips 2020 may be embedded in grooves 2024 in the outer wall of the casing 20 so as not to protrude from the outer wall of the casing 20 . in an embodiment, the mems sensor strips 2020 may be mounted flush with the outer wall of the casing 20 . in a further embodiment, the mems sensor strips 2020 may be attached to casing collars.
in an embodiment, a wellbore servicing fluid, e.g., a cement slurry comprising mems sensors 2032 mixed and distributed in the cement slurry, may be placed into the annulus 26 and, in the case of the cement slurry, allowed to cure to form a cement sheath 2030 . the mems sensors 2022 and/or 2032 may be active sensors, e.g., powered by batteries situated in the mems sensors. the batteries in the mems sensors may be inductively rechargeable by a recharging unit lowered into the casing 20 via a wireline. in embodiments, the mems sensors are powered and/or queried/interrogated by one or more interrogation units in the wellbore (fixed units and/or mobile units) as described in various embodiments herein. in addition, the mems sensors 2022 and/or 2032 may be configured to measure at least one wellbore parameter, e.g., a concentration of a gas such as ch 4 , h 2 s or co 2 in the annulus 26 . such gas detecting capability may be further used to monitor a cement composition placed in the annulus, for example monitoring for gas inflow/channeling while the slurry is being placed and/or monitoring for the presence of annular gas over the life of the wellbore (which may indicate cracks, delamination, etc. of the cement sheath thus requiring remedial servicing). in an embodiment, from measured methane concentrations in the annulus 26 along a length of the casing 20 , the mems sensors 2022 and/or 2032 may provide an indication, for example, that methane is advancing rapidly up the annulus 26 , so that necessary emergency actions may be taken. in operation, in an embodiment, the mems sensors 2032 in the cement sheath 2030 and/or the mems sensors in strips 2020 may measure the at least one wellbore parameter and transmit data regarding the at least one wellbore parameter up the annulus 26 to the processing unit 2010 via a network consisting of the mems sensors 2032 and/or the mems sensors 2022 . 
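as a non-limiting sketch of the sensor-to-sensor relay up the annulus described above, each sensor can greedily hand a reading to the shallowest neighbor within its hop range until no shallower sensor remains, at which point the data reaches the surface processing unit; the depths and hop range below are assumed values, not figures from the disclosure.

```python
# illustrative multi-hop relay: a reading travels uphole sensor-to-sensor.
# depths are in feet from surface; hop range is an assumed radio/acoustic reach.

def relay_path(sensor_depths, hop_range_ft, start_depth):
    """Return the chain of sensor depths a reading traverses uphole,
    or None if a gap in the network breaks the chain."""
    path = [start_depth]
    current = start_depth
    uphole = sorted(d for d in sensor_depths if d < current)
    while uphole:
        # greedily hop to the shallowest sensor still within range
        reachable = [d for d in uphole if current - d <= hop_range_ft]
        if not reachable:
            return None  # gap in the network: no neighbor in range
        current = min(reachable)
        path.append(current)
        uphole = [d for d in uphole if d < current]
    return path
```

the final sensor in the returned chain would be the shallowest node, from which the data passes to the processing unit at the exterior of the wellbore.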
for example, the mems sensors may be powered up and/or interrogated by a mobile interrogation unit run into the wellbore, for example via a plug pumped into the wellbore (e.g., a wiper plug) and/or an interrogation tool deployed by wireline or coiled tubing. double arrows 2040 indicate transmission of sensor data between neighboring mems sensors 2032 , arrows 2042 , 2044 indicate transmission of sensor data up the annulus 26 from mems sensors 2032 to mems sensors 2022 , and arrows 2046 , 2048 indicate transmission of sensor data up the annulus 26 from mems sensors 2022 to mems sensors 2032 . referring to fig. 29 b , a method 2060 of servicing a wellbore is described. at block 2062 , a plurality of micro-electro-mechanical system (mems) sensors is placed in a wellbore servicing fluid and/or within one or more resin/composite elements disposed in the wellbore. at block 2064 , the wellbore servicing fluid is placed in the wellbore. at block 2066 , a network consisting of the mems sensors in the wellbore servicing fluid and/or mems sensors situated in composite resin elements is formed. in an embodiment, the composite resin elements are molded to an inner and/or outer wall of a casing situated in the wellbore and spaced along a length of the casing. at block 2068 , data is obtained from the mems sensors in the wellbore servicing fluid and/or resin/composite elements via one or more data interrogation units in the wellbore and is transmitted from an interior of the wellbore to an exterior of the wellbore via the network. in an alternative embodiment, mems sensor data is collected and stored by a mobile data interrogation unit that traverses the wellbore and is retrieved to the surface, which may be used in addition to or in lieu of the mems sensor network to transmit sensor data to the surface. fig. 
30 a is a schematic view of an embodiment of a wellbore parameter sensing system 2100 , which comprises the wellbore 18 , the casing 20 situated in the wellbore 18 , a plurality of centralizers 2110 situated between the casing 20 and the wellbore 18 and spaced along a length of the casing 20 , and a processing unit 2120 situated at an exterior of the wellbore 18 . in an embodiment, the centralizers are bow-spring type centralizers comprising a plurality of bows extending between upper and lower collars. in an embodiment, the centralizers 2110 may comprise mems sensor strips 2130 , which for example are attached to at least one component (e.g., collar 2112 ) of each centralizer 2110 . the mems sensor strips 2130 may comprise a composite resin material, in which mems sensors 2132 are mixed and distributed, and which may be molded to and/or integral with the collars 2112 . in an embodiment, the mems sensor strips 2130 may be embedded in channels or grooves 2134 in the collars 2112 so as not to protrude from the collars 2112 . in an embodiment, the mems sensor strips 2130 may be mounted flush with the collars 2112 . in an embodiment, a wellbore servicing fluid, e.g., a cement slurry comprising mems sensors 2142 mixed and distributed in the cement slurry, may be placed into the annulus 26 and, in the case of the cement slurry, allowed to cure to form a cement sheath 2140 . while fig. 30 a shows the use of a centralizer in conjunction with casing, it should be understood that centralizers containing mems and/or data interrogation units as described herein may be used to position any type of downhole tool or servicing string (e.g., production tubing, etc.), and may be used in cased and/or uncased wellbores. in an embodiment, the mems sensors 2132 may be active sensors, e.g., powered by batteries situated in the mems sensors 2132 . the batteries in the mems sensors 2132 may be inductively rechargeable by a recharging unit lowered into the casing 20 via a wireline. 
in embodiments, the mems sensors are powered and/or queried/interrogated by one or more interrogation units in the wellbore (fixed units and/or mobile units) as described in various embodiments herein. the mems sensors 2142 situated in the cement sheath 2140 and/or the mems sensors 2132 in the centralizers may be configured to measure at least one wellbore parameter, e.g., a stress or strain and/or a moisture content and/or a ch 4 , h 2 s or co 2 concentration and/or a temperature. in an embodiment, the mems sensors 2132 and/or 2142 may be configured to measure a concentration of a gas such as ch 4 , h 2 s or co 2 in the annulus 26 . such gas detecting capability may be further used to monitor a cement composition placed in the annulus, for example monitoring for gas inflow/channeling while the slurry is being placed and/or monitoring for the presence of annular gas over the life of the wellbore (which may indicate cracks, delamination, etc. of the cement sheath thus requiring remedial servicing). in an embodiment, from measured methane concentrations in the annulus 26 along a length of the casing 20 , the mems sensors 2132 and/or 2142 may provide an indication, for example, that methane is advancing rapidly up the annulus 26 , so that necessary emergency actions may be taken. in operation, in an embodiment, the mems sensors 2142 in the cement sheath 2140 and/or the mems sensors 2132 in the centralizers may measure the at least one wellbore parameter and transmit data regarding the at least one wellbore parameter up the annulus 26 to the processing unit 2120 via a network consisting of the mems sensors 2142 and/or the mems sensors 2132 . for example, the mems sensors may be powered up and/or interrogated by a mobile interrogation unit run into the wellbore, for example via a plug pumped into the wellbore (e.g., a wiper plug) and/or an interrogation tool deployed by wireline or coiled tubing.
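the "methane advancing rapidly up the annulus" indication mentioned above could, by way of non-limiting illustration, be implemented by tracking the shallowest depth at which the ch 4 reading exceeds a threshold and alarming when that front rises faster than a limit; the threshold and rate limit below are assumptions for this sketch.

```python
# hedged sketch of a gas-front alarm: compare the shallowest ch4 exceedance
# between two interrogation cycles. threshold and rate limit are assumed values.

def gas_front_depth_ft(readings, threshold):
    """readings: (depth_ft, ch4_concentration) pairs from annulus sensors.
    Returns the shallowest depth exceeding the threshold, or None."""
    hits = [depth for depth, conc in readings if conc > threshold]
    return min(hits) if hits else None

def rapid_advance(prev_front_ft, curr_front_ft, elapsed_s, max_rate_ft_per_s):
    """True when the gas front moved uphole faster than the allowed rate."""
    if prev_front_ft is None or curr_front_ft is None:
        return False
    return (prev_front_ft - curr_front_ft) / elapsed_s > max_rate_ft_per_s
```

a true result could trigger the emergency actions described herein, e.g., signaling for the closing of safety valves or blowout preventers.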
double arrows 2150 indicate transmission of sensor data between neighboring mems sensors 2142 , arrows 2152 , 2154 indicate transmission of sensor data up the annulus 26 from mems sensors 2142 to mems sensors 2132 , and arrows 2156 , 2158 indicate transmission of sensor data up the annulus 26 from mems sensors 2132 to mems sensors 2142 . referring to fig. 30 b , a method 2170 of servicing a wellbore is described. at block 2172 , a plurality of micro-electro-mechanical system (mems) sensors is placed in a wellbore servicing fluid and/or within one or more centralizers disposed in the wellbore. at block 2174 , the wellbore servicing fluid is placed in the wellbore. at block 2176 , a network consisting of the mems sensors in the wellbore servicing fluid and/or mems sensors situated in one or more centralizers is formed. for example, one or more composite resin elements are molded to or otherwise formed integral with (e.g., molded with) a plurality of centralizers disposed between a wall of the wellbore and a casing situated in the wellbore. the centralizers are spaced along a length of the casing. at block 2178 , data is obtained from the mems sensors in the wellbore servicing fluid and/or in the centralizers via one or more data interrogation units in the wellbore and is transmitted from an interior of the wellbore to an exterior of the wellbore via the network. in an alternative embodiment, mems sensor data is collected and stored by a mobile data interrogation unit that traverses the wellbore and is retrieved to the surface, which may be used in addition to or in lieu of the mems sensor network to transmit sensor data to the surface. fig. 31 is a schematic view of an embodiment of a wellbore parameter sensing system 2200 , which comprises the wellbore 18 , the casing 20 situated in the wellbore 18 , a plurality of centralizers 2210 situated between the casing 20 and the wellbore 18 and spaced along a length of the casing 20 , and a processing unit 2220 .
in an embodiment, the centralizers 2210 may comprise data interrogation units 2230 , which for example are attached to at least one component (e.g., collar 2212 ) of each centralizer 2210 . in an embodiment, the data interrogation units 2230 may be molded to the collars 2212 , using a composite resin material 2232 . the data interrogation units 2230 may be embedded in channels or grooves 2234 in the collars 2212 so as to not protrude from the collars 2212 . in an embodiment, the data interrogation units 2230 may be mounted flush with the collars 2212 . in an embodiment, a wellbore servicing fluid, e.g., a cement slurry comprising mems sensors 2242 mixed and distributed in the cement slurry, may be placed into the annulus 26 and, in the case of the cement slurry, allowed to cure to form a cement sheath 2240 . in an embodiment, data interrogation units 2230 are used to capture mems sensor data for use in fluid flow dynamic analysis as described herein (e.g., measuring turbulence of flow around/through the centralizers 2210 ). in an embodiment, the data interrogation units 2230 may be powered by an electrical line that may run along an outer wall of the casing 20 and couples each data interrogation unit 2230 with a power supply at an exterior of the wellbore 18 . in an alternative embodiment, the electrical line may run inside a longitudinal groove in the casing 20 . in a further embodiment, the data interrogation units 2230 may be powered by batteries. the batteries may be inductively rechargeable via a recharging unit that is lowered down the casing 20 on a wire line. in other embodiments, the data interrogation units 2230 may be powered by one or more downhole power sources (e.g., fluid flow, heat, etc.). in an embodiment, the data interrogation units 2230 may wirelessly communicate with each other and with the processing unit 2220 . 
in an alternative embodiment, the data interrogation units 2230 may communicate with each other and with the processing unit 2220 via a data line that may run along the casing 20 , outside of the casing 20 , and couples each data interrogation unit 2230 with the processing unit 2220 . in a further embodiment, the data interrogation units 2230 may communicate with each other and with the processing unit 2220 via a data line that runs inside a groove in the casing and couples the data interrogation units 2230 with each other and the processing unit 2220 . the data interrogation units may further communicate with each other via various networks disclosed herein, for example a network of mems sensors 2242 , a network of data interrogation units 2230 , and/or via one or more regional data interrogation units and/or communication hubs such as unit 2141 (which may communicate wirelessly downhole and via wire to the surface). in embodiments, the data interrogation units 2230 may operate (e.g., gather and/or communicate data) via one or more means or modes as described with respect to figs. 5-16 . in an embodiment, the mems sensors 2242 may be active sensors, e.g., powered by batteries situated in the mems sensors 2242 . the batteries in the mems sensors 2242 may be inductively rechargeable by a recharging unit lowered into the casing 20 via a wireline. in embodiments, the mems sensors are powered and/or queried/interrogated by one or more interrogation units in the wellbore (fixed units 2230 and/or mobile units) as described in various embodiments herein. the mems sensors 2242 situated in the cement sheath 2240 may be configured to measure at least one wellbore parameter, e.g., a stress or strain and/or a moisture content and/or a ch 4 , h 2 s or co 2 concentration and/or a temperature. in an embodiment, the mems sensors 2242 may be configured to measure a concentration of a gas such as ch 4 , h 2 s or co 2 in the annulus 26 .
such gas detecting capability may be further used to monitor a cement composition placed in the annulus, for example monitoring for gas inflow/channeling while the slurry is being placed and/or monitoring for the presence of annular gas over the life of the wellbore (which may indicate cracks, delamination, etc. of the cement sheath thus requiring remedial servicing). in an embodiment, from measured methane concentrations in the annulus 26 along a length of the casing 20 , the mems sensors 2242 may provide an indication, for example, that methane is advancing rapidly up the annulus 26 , so that necessary emergency actions may be taken. in operation, in an embodiment, the mems sensors 2242 in the cement sheath 2240 may measure the at least one wellbore parameter and transmit data regarding the at least one wellbore parameter directly and/or indirectly (e.g., via one or more adjacent mems sensors, e.g., daisy-chain) to data interrogation units 2230 situated in a vicinity of the mems sensors 2242 . the data interrogation units 2230 may then transmit the sensor data wirelessly and/or via wire to the surface. in an embodiment the data interrogation units 2230 transmit the sensor data to neighboring data interrogation units 2230 (e.g., daisy-chain) and up the wellbore 18 to the processing unit 2220 and/or transmit the sensor data through the data line, up the wellbore 18 and to the processing unit 2220 . the processing unit may then process the sensor data. double arrows 2250 indicate transmission of sensor data between neighboring mems sensors 2242 ; arrows 2254 , 2256 indicate transmission of sensor data uphole from mems sensors 2242 to closest data interrogation units 2230 ; arrows 2260 , 2262 indicate transmission of sensor data downhole from mems sensors 2242 to closest data interrogation units 2230 ; and arrows 2252 , 2258 represent the transmission of data up and down the wellbore, for example via a network of interrogation units 2230 and/or mems sensors 2242 .
in an embodiment, mems sensors and/or one or more data interrogation units may be molded into a casing shoe, e.g., a guide shoe or a float shoe, and used to measure at least one parameter of a wellbore in which the casing shoe is situated. the casing shoe may be made of a homogeneous material, for example, a plastic such as a thermoplastic material or a thermoset material. in addition, the casing shoe may be formed by injection molding, thermal casting, thermal molding, extrusion molding, or any combination of these methods. examples of thermoplastic and thermoset materials suitable for forming the casing shoe may be found in u.s. pat. no. 7,617,879, which is hereby incorporated by reference herein in its entirety. in an embodiment, the mems sensors and/or data interrogation units may be molded into the thermoplastic or thermoset material of the casing shoe such that at least a portion of the mems sensors are situated at or immediately proximate to an outer surface of the casing shoe and are able to measure a parameter of the wellbore, e.g., a stress or strain and/or a moisture content and/or a ch 4 , h 2 s or co 2 concentration and/or a temperature. it should be noted that any of the embodiments of figs. 27-31 may be combined with embodiments where mems sensors are contained in one or more wellbore servicing fluids or compositions, for example the embodiments of figs. 5-26 . where mems sensors are employed in at least one wellbore servicing fluid or composition in combination with mems sensors incorporated into one or more wellbore servicing equipment or tools, the mems sensors may be the same or different (e.g., type “a”, “b”, etc.), and such combinations of same and/or different sensors may be used to provide different or distinct signals to the data interrogators, for example as described in relation to the embodiments of figs.
22-24 , and such different or distinct signals may further facilitate action (e.g., changing, controlling, receiving, monitoring, etc.) with respect to one or more operational parameters or conditions of the downhole equipment and/or servicing operation. in embodiments, one or more acoustic sensors may be used in combination with mems sensors and/or data interrogation units placed in the wellbore. for example, one or more acoustic sensors may be incorporated into data interrogation and communication units for mems sensors, in order to measure further wellbore parameters and/or provide further options for transmitting sensor data from an interior of a wellbore to an exterior of the wellbore. fig. 32 illustrates an embodiment of a portion of a wellbore parameter sensing system 2300 . the wellbore parameter sensing system 2300 comprises the wellbore 18 , the casing 20 situated in the wellbore 18 , a plurality of interrogation/communication units 2310 attached to the casing 20 and spaced along a length of the casing 20 , a processing unit 2320 situated at an exterior of the wellbore and communicatively linked to the units 2310 , and a wellbore servicing fluid 2330 situated in the wellbore 18 . the wellbore servicing fluid 2330 may comprise a plurality of mems sensors 2340 , which are configured to measure at least one wellbore parameter. in an embodiment, fig. 32 represents an interrogation/communication unit 2310 located on an exterior of the casing 20 in annular space 26 and surrounded by a cement composition comprising mems sensors. the unit 2310 may further comprise a power source, for example a battery (e.g., lithium battery) or power generator. in embodiments, the components of unit 2310 are powered by any of the embodiments of figs. 33 , 34 , and 35 described herein. 
in an embodiment, the unit 2310 may comprise an interrogation unit 2350 , which is configured to interrogate the mems sensors 2340 and receive data regarding the at least one wellbore parameter from the mems sensors 2340 . in an embodiment, the unit 2310 may also comprise at least one acoustic sensor 2352 , which is configured to input ultrasonic waves 2354 into the wellbore servicing fluid 2330 and/or into the oil or gas formation 14 proximate to the wellbore 18 and receive ultrasonic waves reflected by the wellbore servicing fluid 2330 and/or the oil or gas formation 14 . in an embodiment, the at least one acoustic sensor 2352 may transmit and receive ultrasonic waves using a pulse-echo method or pitch-catch method of ultrasonic sampling/testing. a discussion of the pulse-echo and pitch-catch methods of ultrasonic sampling/testing may be found in the nasa preferred reliability practice no. pt-te-1422, “ultrasonic testing of aerospace materials,” which is incorporated by reference herein in its entirety. in alternative embodiments, ultrasonic waves and/or acoustic sensors may be provided via the unit 2310 in accordance with one or more embodiments disclosed in u.s. pat. nos. 5,995,477; 6,041,861; or 6,712,138, each of which is incorporated herein in its entirety. in an embodiment, the at least one acoustic sensor 2352 may be able to detect a presence and a position in the wellbore 18 of a liquid phase and/or a solid phase of the wellbore servicing fluid 2330 . in addition, the at least one acoustic sensor 2352 may be able to detect a presence of cracks and/or voids and/or inclusions in a solid phase of the wellbore servicing fluid 2330 , e.g., in a partially cured cement slurry or a fully cured cement sheath. in a further embodiment, the acoustic sensor 2352 may be able to determine a porosity of the oil or gas formation 14 . 
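the pulse-echo method referenced above reduces, at its core, to simple time-of-flight arithmetic: the pulse travels out to a reflector and back, so the one-way distance is half the round trip. the sketch below illustrates this; the sound speed is a nominal assumed value for a water-based fluid, not a figure from this disclosure.

```python
# illustrative pulse-echo time-of-flight calculation. the default sound speed
# (~1500 m/s, typical of water-based fluids) is an assumption for this sketch.

def echo_distance_m(round_trip_s, sound_speed_m_per_s=1500.0):
    """Distance to a reflector from the round-trip time of an ultrasonic pulse."""
    return sound_speed_m_per_s * round_trip_s / 2.0
```

for example, a 200-microsecond round trip at the assumed sound speed corresponds to a reflector roughly 0.15 m away, which is the kind of measurement that could indicate annulus width or the position of a phase boundary in the wellbore servicing fluid 2330 .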
in a further embodiment, the acoustic sensor 2352 may be configured to detect a presence of the mems sensors 2340 in the wellbore servicing fluid 2330 . in particular, the acoustic sensor may scan for the physical presence of mems sensors proximate thereto, and may thereby be used to verify data derived from the mems sensors. for example, where acoustic sensor 2352 does not detect the presence of mems sensors, such lack of detection may provide a further indication that a wellbore servicing fluid has not yet arrived at that location (for example, has not entered the annulus). likewise, where acoustic sensor 2352 does detect the presence of mems sensors, such presence may be further verified by interrogation of the mems sensors. furthermore, a failed attempt to interrogate the mems sensors where acoustic sensor 2352 indicates their presence may be used to trouble-shoot or otherwise indicate that a problem may exist with the mems sensor system (e.g., a fixed data interrogation unit may be faulty thereby requiring repair and/or deployment of a mobile unit into the wellbore). in various embodiments, the acoustic sensor 2352 may perform any combination of the listed functions. in an embodiment, the acoustic sensor 2352 may be a piezoelectric-type sensor comprising at least one piezoelectric transducer for inputting ultrasonic waves into the wellbore servicing fluid 2330 . a discussion of acoustic sensors comprising piezoelectric composite transducers may be found in u.s. pat. no. 7,036,363, which is hereby incorporated by reference herein in its entirety. in an embodiment, the interrogation/communication unit 2310 may further comprise an acoustic transceiver 2356 . the acoustic transceiver 2356 may comprise an acoustic receiver 2358 , an acoustic transmitter 2360 and a microprocessor 2362 .
the microprocessor 2362 may be configured to receive mems sensor data from the interrogation unit 2350 and/or acoustic sensor data from the at least one acoustic sensor 2352 and convert the sensor data into a form that may be transmitted by the acoustic transmitter 2360 . in an embodiment, the acoustic transmitter 2360 may be configured to transmit the sensor data from the mems sensors 2340 and/or the acoustic sensor 2352 to an interrogation/communication unit situated uphole (e.g., the next unit directly uphole) from the unit 2310 shown in fig. 32 . the acoustic transmitter 2360 may comprise a plurality of piezoelectric plate elements in one or more plate assemblies configured to input ultrasonic waves into the casing 20 and/or the wellbore servicing fluid 2330 in the form of acoustic signals (for example to provide acoustic telemetry communications/signals as described in various embodiments herein). examples of acoustic transmitters comprising piezoelectric plate elements are given in u.s. patent application publication no. 2009/0022011, which is hereby incorporated by reference herein in its entirety. in an embodiment, the acoustic receiver 2358 may be configured to receive sensor data in the form of acoustic signals from one or more acoustic transmitters disposed in one or more interrogation/communication units situated uphole and/or downhole from the unit 2310 shown in fig. 32 . in addition, the acoustic receiver 2358 may be configured to transmit the sensor data to the microprocessor 2362 . in embodiments, a microprocessor or digital signal processor may be used to process sensor data, interrogate sensors and/or interrogation/communication units and communicate with devices situated at an exterior of a wellbore. 
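one non-limiting way the microprocessor 2362 might "convert the sensor data into a form that may be transmitted" is to pack each reading into a small checksummed frame before handing it to the acoustic transmitter; the frame layout sketched here (unit id, sensor id, float value, one-byte checksum) is purely an assumption for illustration.

```python
# hypothetical framing step for acoustic telemetry: pack a reading into a
# fixed-layout frame with a simple additive checksum. layout is assumed.

import struct

def pack_frame(unit_id, sensor_id, value):
    """Serialize one reading: big-endian u16 unit id, u16 sensor id, f32 value."""
    payload = struct.pack(">HHf", unit_id, sensor_id, value)
    checksum = sum(payload) % 256
    return payload + bytes([checksum])

def unpack_frame(frame):
    """Recover (unit_id, sensor_id, value), rejecting frames whose checksum fails."""
    payload, checksum = frame[:-1], frame[-1]
    if sum(payload) % 256 != checksum:
        raise ValueError("corrupted frame")
    return struct.unpack(">HHf", payload)
```

a receiving unit could verify the checksum before retransmitting the frame uphole or passing the data to its own processor, in keeping with the unit-to-unit relay described herein.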
for example, the microprocessor 2362 may then route/convey/retransmit the received data (and additionally/optionally convert or process the received data) to the interrogation/communication unit situated directly uphole and/or downhole from the unit 2310 shown in fig. 32 . alternatively, the received sensor data may be passed along to the next interrogation/communication unit without undergoing any transformation or further processing by microprocessor 2362 . in this manner, sensor data acquired by interrogators 2350 and acoustic sensors 2352 situated in units 2310 disposed along at least a portion of the length of the casing 20 may be transmitted up or down the wellbore 18 to the processing unit 2320 , which is configured to process the sensor data. in embodiments, sensors, processing electronics, communication devices and power sources, e.g., a lithium battery, may be integrated inside a housing (e.g., a composite attachment or housing) that may, for example, be attached to an outer surface of a casing. in an embodiment, the housing may comprise a composite resin material. in embodiments, the composite resin material may comprise an epoxy resin. in further embodiments, the composite resin material may comprise at least one ceramic material. in further embodiments, the housing of unit 2310 (e.g., composite housing) may extend from the casing and thereby serve additional functions such as a centralizer for the casing. in alternative embodiments, the housing of unit 2310 (e.g., composite housing) may be contained within a recess in the casing and be mounted flush with a wall of the casing. alternative configurations and locations for the unit 2310 (e.g., a composite housing) are shown in figs. 33-35 as described herein. any of the composite materials described herein may be used in embodiments to form a housing for unit 2310 .
in embodiments, sensors (e.g., the acoustic sensors 2352 and/or the mems sensors 2340 ) may measure parameters of a wellbore servicing material in an annulus situated between a casing and an oil or gas formation. the wellbore servicing material may comprise a fluid, a cement slurry, a partially cured cement slurry, a cement sheath, or other materials. parameters of the wellbore and/or servicing material may be acquired and transmitted continuously or in discrete time, depending on demands. in embodiments, parameters measured by the sensors include velocity of ultrasonic waves, poisson's ratio, material phases, temperature, flow, compactness, pressure and other parameters described herein. in embodiments, the unit 2310 may contain a plurality of sensor types used for measuring the parameters, and may include lead zirconate titanate (pzt) acoustic transceivers, electromagnetic transceivers, pressure sensors, temperature sensors and other sensors. in embodiments, unit 2310 may be used, for example, to monitor parameters during a curing process of cement situated in the annulus. in further embodiments, flow of production fluid through production tubing and/or the casing may be monitored. in embodiments an interrogation/communication unit (e.g., unit 2310 ) may be utilized for collecting data from sensors, processing data, storing information, and/or sending and receiving data. different types of sensors, including electromagnetic and acoustic sensors as well as mems sensors, may be utilized for measuring various properties of a material and determining and/or confirming an actual state of the material. 
in an embodiment, data to be processed in the interrogation/communication unit may include data from acoustic sensors, e.g., liquid/solid phase, annulus width, homogeneity/heterogeneity of a medium, velocity of acoustic waves through a medium and impedance, as well as data from mems sensors, which in embodiments include passive rfid tags and are interrogated electromagnetically. in an embodiment, each interrogation/communication unit may process data pertaining to a vicinity or region of the wellbore associated with the unit. in a further embodiment, the interrogation/communication unit may further comprise a memory device configured to store data acquired from sensors. the sensor data may be tagged with time of acquisition, sensor type and/or identification information pertaining to the interrogation/communication unit where the data is collected. in an embodiment, raw and/or processed sensor data may be sent to an exterior of a wellbore for further processing or analysis, for example via any of the communication means, methods, or networks disclosed herein. in an embodiment, data acquired by the interrogation/communication units may be transmitted acoustically from unit to unit and to an exterior of the wellbore, using the casing as an acoustic transmission medium. in a further embodiment, sensor data from each interrogation/communication unit may be transmitted to an exterior of the wellbore, using a very low frequency electromagnetic wave. alternatively, sensor data from each interrogation/communication unit may be transmitted via a daisy-chain to an exterior of the wellbore, using a very low frequency electromagnetic wave to pass the data along the chain. in a further embodiment, a wire and/or fiber optic line coupled to each of the interrogation/communication units may be used to transmit sensor data from each unit to an exterior of the wellbore, and also used to power the units.
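the tagging and storage described above (time of acquisition, sensor type, and collecting-unit identification, held in a bounded memory device) could be represented as in the following non-limiting sketch; the field names and the ring-buffer capacity are assumptions made for illustration.

```python
# hypothetical record format for tagged sensor data stored in a unit's memory.
# field names and the fixed-capacity buffer policy are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class SensorRecord:
    timestamp: float   # time of acquisition, seconds since epoch
    sensor_type: str   # e.g., "mems-temperature", "acoustic"
    unit_id: str       # interrogation/communication unit that collected it
    value: float

def store(buffer, record, capacity):
    """Append a record, dropping the oldest when the unit's memory is full."""
    buffer.append(record)
    if len(buffer) > capacity:
        buffer.pop(0)
    return buffer
```

under this scheme, the oldest records are overwritten once the memory device fills, so the stored window (e.g., the two weeks of data mentioned below) would always hold the most recent measurements.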
in an embodiment, a circumferential acoustic scanning tool comprising an acoustic transceiver may be lowered into a casing, along which the interrogation/communication units are spaced. the acoustic transceiver in the circumferential acoustic scanning tool may be configured to interrogate corresponding acoustic transceivers in the interrogation/communication units, by transmitting an acoustic signal through the casing to the acoustic transceiver in the unit. in an embodiment, the memory devices in each interrogation/communication unit may be able to store, for example, two weeks' worth of sensor data before being interrogated by the circumferential acoustic scanning tool. the acoustic transceiver in the circumferential acoustic scanning tool may further comprise a mems sensor interrogation unit, and thereby interrogate and collect data from mems sensors. in embodiments, data interrogation/communication units or tools of the various embodiments disclosed herein may be powered by devices configured to generate electricity while the units are located in the wellbore, for example turbo generator units and/or quantum thermoelectric generator units. the electricity generated by the devices may be used directly by components in the interrogation/communication units or may be stored in a battery or batteries for later use. fig. 33 illustrates an embodiment of a turbo generator unit 2370 situated in a side compartment 2380 (e.g., side pocket mandrel) of the casing 20 . the turbo generator unit 2370 may comprise a generator 2390 driven by a turbine 2400 . the turbo generator unit 2370 may also comprise a battery 2410 for storing electricity generated by the generator 2390 . in an embodiment, a portion of a wellbore servicing fluid 2420 flowing through casing 20 in the direction of arrows 2430 may be diverted in a direction of arrows 2432 , into a flow channel 2440 of side compartment 2380 , and past turbine 2400 . 
a force of the wellbore servicing fluid 2420 flowing past turbine 2400 causes the turbine 2400 to rotate and drive the generator 2390 . in an embodiment, electricity generated by the generator 2390 may power components in one or more interrogation/communication units directly and/or may be stored in battery 2410 for later use by components in one or more interrogation/communication units. in a further embodiment, the turbo generator unit 2370 may also comprise a controller for regulating current flow into the battery 2410 and/or current flow into components of the interrogation/communication units. in an embodiment, the turbo generator unit 2370 is proximate to and/or integral with a unit powered thereby. fig. 34 illustrates a further embodiment of the turbo generator unit 2370 shown in fig. 33 . in this embodiment, the turbo generator unit 2370 is situated in the annulus 26 between the wellbore 18 and the casing 20 . in addition, the turbo generator unit 2370 is oriented in the annulus 26 such that a wellbore servicing fluid 2450 pumped down an interior of the casing 20 in the direction of arrows 2460 and up the annulus 26 in the direction of arrows 2462 forces the turbine 2400 to rotate and drive generator 2390 . as in the embodiment illustrated in fig. 33 , electricity generated by generator 2390 may be stored in battery 2410 or used directly by components situated in an interrogation/communication unit. in addition to or in lieu of the flow of a wellbore servicing fluid as driving the turbo generator unit 2370 , a flow of fluid from the formation and/or up the wellbore (e.g., the recovery of hydrocarbons from the well) may provide the fluid flow that powers the turbo generator unit. in further embodiments, the turbo generator unit 2370 may be oriented in the interior of the casing 20 or in the annulus 26 such that a wellbore servicing fluid flowing in a downhole direction can drive the generator 2390 . 
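as a rough illustration of the power available to a turbo generator unit from diverted fluid flow, the kinetic-power relation p = ½·ρ·a·v³, derated by an overall turbine/generator efficiency, may be used. the relation, the names, and the numbers below are illustrative assumptions only, not figures from the disclosure:

```python
# Back-of-envelope sketch (illustrative assumptions only): electrical
# power harvested by a downhole turbo generator from fluid flowing
# past its turbine, P = eta * 0.5 * rho * A * v**3.
def turbo_generator_power(rho, area, velocity, efficiency):
    """Electrical power in watts.

    rho        -- fluid density, kg/m^3
    area       -- flow-channel cross-section swept by the turbine, m^2
    velocity   -- fluid velocity past the turbine, m/s
    efficiency -- combined turbine + generator efficiency, 0..1
    """
    return efficiency * 0.5 * rho * area * velocity ** 3


# e.g., a cement slurry (~1900 kg/m^3) at 2 m/s through a 0.001 m^2
# channel at 30% overall efficiency yields a few watts:
p_watts = turbo_generator_power(1900.0, 0.001, 2.0, 0.3)
```

even at this modest scale, such power levels would plausibly suffice for intermittent interrogation duty cycles when buffered in the battery 2410.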
in other embodiments, the turbo generator unit 2370 may be attached to production tubing instead of the casing 20 , and the production of formation fluids may power the turbo generator. an example of a generator attached to production tubing is described in u.s. pat. no. 5,839,508, which is hereby incorporated by reference herein in its entirety. in embodiments, thermoelectricity, which may be generally defined as the conversion of temperature differences to electricity, may be used for generating electricity in a wellbore via a thermoelectric generator. in one example of thermoelectricity, electrons in a first material that is at a higher temperature than a second material may quantum-mechanically tunnel from the first material to the second material when a distance between the two materials is sufficiently small. the quantum-mechanical tunneling of the electrons may generate a current that may be used to power downhole devices, e.g., interrogation/communication units and/or mems sensors. examples of utilizing thermoelectricity for powering downhole devices may be found in u.s. pat. no. 7,647,979, which is hereby incorporated by reference herein in its entirety. fig. 35 illustrates an embodiment of a quantum thermoelectric generator 2470 , which is disposed in the casing 20 situated in wellbore 18 and is electrically coupled to the interrogation/communication unit 2310 . the quantum thermoelectric generator 2470 may comprise an emitter electrode 2472 , a collector electrode 2474 and leads 2476 , 2478 that couple electrodes 2472 , 2474 to the unit 2310 . in an embodiment, the wellbore servicing fluid 2330 situated in annulus 26 may comprise a cement slurry, which has been pumped down an interior of the casing 20 and up the annulus 26 and is allowed to cure to form a cement sheath. 
as the cement cures, exothermic hydration reactions may raise the temperature of the curing slurry, thereby heating an outer wall 20 a of the casing 20 and creating a temperature gradient in the casing between the outer wall 20 a and an inner wall 20 b of the casing 20 . in an embodiment, the inner wall 20 b may be in contact with a displacement fluid, which may have a conductivity and a heat capacity sufficient to maintain the temperature gradient. in an embodiment, in response to a difference in temperature between the emitter electrode 2472 and the collector electrode 2474 , electrons 2480 may flow from the emitter electrode 2472 to the collector electrode 2474 , thereby generating a current that flows through leads 2476 , 2478 . in an embodiment, the current generated by quantum thermoelectric generator 2470 may be used to power components in the interrogation/communication unit 2310 and may be fed to the components directly or stored in a battery. in embodiments, the quantum thermoelectric generator 2470 may be situated in production tubing instead of the casing 20 . in other embodiments, heat from other wellbore servicing fluids such as drilling mud may be used to generate a current in the quantum thermoelectric generator 2470 . in further embodiments, heat from the oil or gas formation 14 adjacent to the wellbore 18 , e.g., from fluids such as hydrocarbons recovered from the formation, may be used to generate a current in the quantum thermoelectric generator 2470 . 
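as a simplified illustration of how the casing-wall temperature gradient maps to harvestable power, the conventional seebeck relation v = s·Δt may be used as a stand-in for the quantum-tunneling mechanism described above (the disclosure itself does not give this relation); with a matched load, the delivered power is p = v²/(4·r). all symbols and numbers below are assumptions for illustration only:

```python
# Illustrative estimate only: the patent describes quantum tunneling
# between emitter and collector electrodes; here the conventional
# Seebeck relation is used as a simplified stand-in.
def thermoelectric_power(seebeck_v_per_k, delta_t_k, r_internal_ohm):
    """Maximum power (watts) into a matched load.

    seebeck_v_per_k -- effective device Seebeck coefficient, V/K
    delta_t_k       -- temperature difference across electrodes, K
    r_internal_ohm  -- internal resistance of the device, ohms
    """
    v_open_circuit = seebeck_v_per_k * delta_t_k
    return v_open_circuit ** 2 / (4.0 * r_internal_ohm)


# e.g., an effective 0.05 V/K device across a 20 K curing-cement
# gradient with 2 ohm internal resistance:
p = thermoelectric_power(0.05, 20.0, 2.0)
```

the exothermic hydration peak is transient, so in practice such a source would likely charge a battery rather than power the unit 2310 continuously.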
disclosed herein is a method of servicing a wellbore, comprising placing a plurality of micro-electro-mechanical system (mems) sensors in a wellbore servicing fluid, pumping the wellbore servicing fluid down the wellbore at a fluid flow rate, determining positions of the mems sensors in the wellbore, determining velocities of the mems sensors along a length of the wellbore, and determining an approximate cross-sectional area profile of the wellbore along the length of the wellbore from at least the velocities of the mems sensors and the fluid flow rate. in an embodiment, a constriction in the wellbore is determined in a volumetric region of the wellbore in which average velocities of the mems sensors exceed a threshold average velocity determined using the fluid flow rate of the wellbore servicing fluid. in an embodiment, the average velocities of the mems sensors fall below the threshold average velocity after the mems sensors traverse the constriction. in an embodiment, a washout in the wellbore is determined in a volumetric region of the wellbore in which average velocities of the mems sensors fall below a threshold average velocity determined using the fluid flow rate of the wellbore servicing fluid. in an embodiment, the average velocities of the mems sensors exceed the threshold average velocity after the mems sensors traverse the washout. in an embodiment, a fluid loss zone is determined in a volumetric region of the wellbore in which average velocities of the mems sensors fall below, and remain below, a threshold average velocity determined using the fluid flow rate of the wellbore servicing fluid. in an embodiment, the method further comprises determining a return fluid flow rate of the wellbore servicing fluid up the wellbore, wherein the fluid loss zone is additionally determined using the return fluid flow rate of the wellbore servicing fluid. 
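the area-profile method above follows from continuity: at a constant pumped flow rate q, the local cross-sectional area is approximately a = q / v, so sensor velocities above a threshold indicate a constriction and velocities below it indicate a washout. a minimal sketch of that computation, with hypothetical names and units, follows:

```python
# Minimal sketch (names and units are assumptions): cross-sectional
# area profile from MEMS sensor velocities via continuity, Q = v * A.
def area_profile(flow_rate, avg_velocities):
    """Approximate area (m^2) at each depth station.

    flow_rate      -- pumped fluid flow rate, m^3/s (assumed constant)
    avg_velocities -- average MEMS sensor velocity per station, m/s
    """
    return [flow_rate / v for v in avg_velocities]


def classify_stations(avg_velocities, v_threshold):
    """Label each station: above threshold suggests a constriction,
    below threshold a washout; a sustained drop that never recovers
    would instead indicate a fluid loss zone (not modeled here)."""
    labels = []
    for v in avg_velocities:
        if v > v_threshold:
            labels.append("constriction")
        elif v < v_threshold:
            labels.append("washout")
        else:
            labels.append("nominal")
    return labels


# e.g., 0.05 m^3/s through a hole of nominal area 0.05 m^2 -> 1.0 m/s:
areas = area_profile(0.05, [1.0, 2.0, 0.5])
labels = classify_stations([1.0, 2.0, 0.5], 1.0)
```

distinguishing a washout from a fluid loss zone would additionally require the return fluid flow rate, as the embodiments above note: in a loss zone the velocities fall below threshold and stay there.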
in an embodiment, the positions of the mems sensors in the wellbore, the velocities of the mems sensors along the length of the wellbore, and the approximate cross-sectional area profile of the wellbore are determined at least approximately in real time. in an embodiment, the positions of the mems sensors in the wellbore are determined using a plurality of data interrogation units spaced along the length of the wellbore. in an embodiment, the positions of the mems sensors are sensed by the mems sensors and are transmittable by a network consisting of the mems sensors from an interior of the wellbore to an exterior of the wellbore. in an embodiment, the mems sensors are powered by a plurality of power sources spaced along the length of the wellbore. in an embodiment, the mems sensors are self-powered. in an embodiment, the mems sensors comprise radio frequency identification device (rfid) tags. in an embodiment, the method further comprises determining shapes of wellbore cross-sections along the length of the wellbore, using positions of the mems sensors detected as the mems sensors traverse the wellbore cross-sections. disclosed herein is a method of servicing a wellbore, comprising placing a plurality of micro-electro-mechanical system (mems) sensors in a wellbore servicing fluid, placing the wellbore servicing fluid in the wellbore, obtaining data from the mems sensors using a plurality of data interrogation units spaced along a length of the wellbore, and processing the data obtained from the mems sensors. in an embodiment, the wellbore servicing fluid comprises a drilling fluid, a spacer fluid, a sealant, a fracturing fluid, a gravel pack fluid or a completion fluid. in an embodiment, the mems sensors determine one or more parameters. in an embodiment, the one or more parameters comprises at least one physical parameter. in an embodiment, the one or more parameters comprises at least one chemical parameter. 
in an embodiment, the at least one physical parameter comprises at least one of a temperature, a stress or a strain. in an embodiment, the at least one chemical parameter comprises at least one of a co 2 concentration, an h 2 s concentration, a ch 4 concentration, a moisture content, a ph, an na + concentration, a k + concentration and a cl − concentration. in an embodiment, the data interrogation units are powered via a power line running between the data interrogation units and a power source situated at an exterior of the wellbore. in an embodiment, the data interrogation units are powered by at least one turbogenerator situated in the wellbore. in an embodiment, a turbine in the turbogenerator is driven by at least one of the wellbore servicing fluid and a production fluid flowing through the wellbore. in an embodiment, the data interrogation units are powered by at least one quantum thermoelectric generator situated in the wellbore. in an embodiment, the at least one quantum thermoelectric generator is situated in a casing disposed in the wellbore. in an embodiment, the at least one quantum thermoelectric generator is situated in production tubing disposed in the wellbore. in an embodiment, the mems sensors comprise radio frequency identification device (rfid) tags. in an embodiment, the mems sensors are powered by the data interrogators. in an embodiment, the mems sensors are self-powered. 
in an embodiment, the wellbore servicing fluid is a cement slurry, wherein the cement slurry is placed in an annulus situated between a wall of the wellbore and an outer wall of a casing situated in the wellbore, wherein the cement slurry is allowed to cure so as to form a cement sheath, and wherein the mems sensors are configured to measure at least one of a temperature in the cement sheath, a gas concentration in the cement sheath, a moisture content in the cement sheath, a ph in the cement sheath, a chloride ion concentration in the cement sheath and a mechanical stress of the cement sheath. in an embodiment, the mems sensors are configured to measure a gas concentration in the cement slurry, wherein a degree of gas influx into the cement slurry is determined using the gas concentration in the cement slurry. in an embodiment, the method further comprises determining an integrity of the cement sheath using the data obtained from the mems sensors. in an embodiment, the mems sensors are configured to measure a gas concentration in the cement sheath, wherein a region of the cement sheath is considered to be integral if the gas concentration measured by mems sensors situated in an interior of the cement sheath in the region of the cement sheath is less than a threshold value. in an embodiment, the data interrogation units or the mems sensors may be activated by a ground-penetrating signal generated by a transmitter situated at an exterior of the wellbore. disclosed herein is a method of servicing a wellbore, comprising placing a plurality of micro-electro-mechanical system (mems) sensors in a wellbore servicing fluid, placing the wellbore servicing fluid in the wellbore, forming a network comprising the mems sensors, and transferring data obtained by the mems sensors from an interior of the wellbore to an exterior of the wellbore via the network. in an embodiment, the mems sensors are powered by a plurality of power sources spaced along a length of the wellbore. 
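the integrity criterion stated above (a region of the cement sheath is considered integral if gas concentrations measured by interior sensors stay below a threshold) can be sketched directly; the threshold value and data layout below are hypothetical:

```python
# Sketch of the stated integrity criterion; threshold and readings
# are hypothetical illustration values, not from the disclosure.
def region_is_integral(gas_readings, threshold):
    """True if every gas-concentration reading from MEMS sensors in
    the interior of the cement sheath region is below the threshold."""
    return all(reading < threshold for reading in gas_readings)


# e.g., CH4 concentrations (arbitrary units) against a threshold of 5.0:
ok = region_is_integral([0.2, 1.1, 0.7], 5.0)    # region integral
bad = region_is_integral([0.2, 6.3, 0.7], 5.0)   # possible gas influx
```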
in an embodiment, the mems sensors are self-powered. in an embodiment, the wellbore servicing fluid comprises a drilling fluid, a spacer fluid, a sealant, a fracturing fluid, a gravel pack fluid or a completion fluid. in an embodiment, the mems sensors determine one or more parameters. in an embodiment, the one or more parameters comprises at least one physical parameter. in an embodiment, the one or more parameters comprises at least one chemical parameter. in an embodiment, the at least one physical parameter comprises at least one of a temperature, a stress or a strain. in an embodiment, the at least one chemical parameter comprises at least one of a co 2 concentration, an h 2 s concentration, a ch 4 concentration, a moisture content, a ph, an na + concentration, a k + concentration and a cl − concentration. in an embodiment, the mems sensors comprise radio frequency identification device (rfid) tags. disclosed herein is a system, comprising a wellbore, a wellbore servicing fluid situated in the wellbore, the wellbore servicing fluid comprising a plurality of micro-electro-mechanical system (mems) sensors, a plurality of data interrogation units spaced along a length of the wellbore and adapted to obtain data from the mems sensors, and a processing unit adapted to receive the data from the data interrogation units and process the data. in an embodiment, the wellbore servicing fluid comprises a drilling fluid, a spacer fluid, a sealant, a fracturing fluid, a gravel pack fluid or a completion fluid. in an embodiment, the mems sensors are configured to determine one or more parameters. in an embodiment, the one or more parameters comprises at least one physical parameter. in an embodiment, the one or more parameters comprises at least one chemical parameter. in an embodiment, the at least one physical parameter comprises at least one of a temperature, a stress and a strain. 
in an embodiment, the at least one chemical parameter comprises at least one of a co 2 concentration, an h 2 s concentration, a ch 4 concentration, a moisture content, a ph, an na + concentration, a k + concentration and a cl − concentration. in an embodiment, the data interrogation units are powered via a power line running between the data interrogation units and a power source situated at an exterior of the wellbore. in an embodiment, the data interrogation units are powered by at least one turbogenerator situated in the wellbore. in an embodiment, a turbine in the turbogenerator is driven by at least one of the wellbore servicing fluid and a production fluid flowing through the wellbore. in an embodiment, the data interrogation units are powered by at least one quantum thermoelectric generator situated in the wellbore. in an embodiment, the at least one quantum thermoelectric generator is situated in a casing disposed in the wellbore. in an embodiment, the at least one quantum thermoelectric generator is situated in production tubing disposed in the wellbore. in an embodiment, the mems sensors comprise radio frequency identification device (rfid) tags. in an embodiment, the mems sensors are powered by the data interrogators. in an embodiment, the mems sensors are self-powered. in an embodiment, the data interrogation units or the mems sensors may be activated by a ground-penetrating signal generated by a transmitter situated at an exterior of the wellbore. 
disclosed herein is a system, comprising a wellbore, a wellbore servicing fluid situated in the wellbore, the wellbore servicing fluid comprising a plurality of micro-electro-mechanical system (mems) sensors, wherein the mems sensors are configured to measure at least one parameter and transmit data associated with the at least one parameter from an interior of the wellbore to an exterior of the wellbore via a data transfer network consisting of the mems sensors, and a processing unit adapted to receive the data from the mems sensors and process the data. in an embodiment, the wellbore servicing fluid comprises a drilling fluid, a spacer fluid, a sealant, a fracturing fluid, a gravel pack fluid or a completion fluid. in an embodiment, the mems sensors are configured to determine one or more parameters. in an embodiment, the mems sensors are powered by a plurality of power sources spaced along a length of the wellbore. in an embodiment, the mems sensors comprise radio frequency identification device (rfid) tags. in an embodiment, the mems sensors are self-powered. in an embodiment, the mems sensors may be activated by a ground-penetrating signal generated by a transmitter situated at an exterior of the wellbore. disclosed herein is a method of servicing a wellbore, comprising placing a plurality of micro-electro-mechanical system (mems) sensors in a wellbore servicing fluid, placing the wellbore servicing fluid in the wellbore, obtaining data from the mems sensors using a plurality of data interrogation units spaced along a length of the wellbore, telemetrically transmitting the data from an interior of the wellbore to an exterior of the wellbore, using a casing situated in the wellbore, and processing the data obtained from the mems sensors. in an embodiment, the wellbore servicing fluid comprises a drilling fluid, a spacer fluid, a sealant, a fracturing fluid, a gravel pack fluid or a completion fluid. 
in an embodiment, the mems sensors determine one or more parameters. in an embodiment, the one or more parameters comprises at least one physical parameter. in an embodiment, the one or more parameters comprises at least one chemical parameter. in an embodiment, the at least one physical parameter comprises at least one of a temperature, a stress or a strain. in an embodiment, the at least one chemical parameter comprises at least one of a co 2 concentration, an h 2 s concentration, a ch 4 concentration, a moisture content, a ph, an na + concentration, a k + concentration and a cl − concentration. in an embodiment, the data interrogation units are powered via a power line running between the data interrogation units and a power source situated at the exterior of the wellbore. in an embodiment, the data interrogation units are powered by at least one turbogenerator situated in the wellbore. in an embodiment, a turbine in the turbogenerator is driven by at least one of the wellbore servicing fluid and a production fluid flowing through the wellbore. in an embodiment, the data interrogation units are powered by at least one quantum thermoelectric generator situated in the wellbore. in an embodiment, the at least one quantum thermoelectric generator is situated in the casing. in an embodiment, the at least one quantum thermoelectric generator is situated in production tubing disposed in the wellbore. in an embodiment, the mems sensors comprise radio frequency identification device (rfid) tags. in an embodiment, the mems sensors are powered by the data interrogators. in an embodiment, telemetrically transmitting the data from an interior of the wellbore to an exterior of the wellbore comprises transmitting the data on at least one insulated cable embedded in a longitudinal groove in the casing. 
in an embodiment, telemetrically transmitting the data from an interior of the wellbore to an exterior of the wellbore comprises transmitting the data on the casing, using the casing as an electrically conductive medium for transmission. in an embodiment, telemetrically transmitting the data from an interior of the wellbore to an exterior of the wellbore comprises converting the data into acoustic vibrations of the casing. disclosed herein is a system, comprising a wellbore, a casing situated in the wellbore, a wellbore servicing fluid situated in the wellbore, the wellbore servicing fluid comprising a plurality of micro-electro-mechanical system (mems) sensors, a plurality of data interrogation units spaced along a length of the wellbore and adapted to obtain data from the mems sensors and telemetrically transmit the data from an interior of the wellbore to an entrance of the wellbore via the casing, and a processing unit adapted to receive the data from the data interrogation units and process the data. in an embodiment, the wellbore servicing fluid comprises a drilling fluid, a spacer fluid, a sealant, a fracturing fluid, a gravel pack fluid or a completion fluid. in an embodiment, the mems sensors are configured to determine one or more parameters. in an embodiment, the mems sensors comprise radio frequency identification device (rfid) tags. in an embodiment, the mems sensors are self-powered. in an embodiment, the mems sensors are powered by the data interrogators. in an embodiment, the data interrogation units or the mems sensors may be activated by a ground-penetrating signal generated by a transmitter situated at an exterior of the wellbore. in an embodiment, the casing comprises at least one cable embedded in a groove that runs longitudinally along at least part of a length of the casing. in an embodiment, the at least one cable is electrically insulated from a remainder of the casing. 
in an embodiment, the at least one cable comprises a plurality of cables. in an embodiment, the data interrogation units are electrically connected to the at least one cable. in an embodiment, the at least one cable is configured to at least one of a) supply power to the data interrogation units; and b) transmit the data from the data interrogation units to the processing unit. in an embodiment, the casing is configured to at least one of a) supply power to the data interrogation units; and b) transmit the data from the data interrogation units to the processing unit. in an embodiment, the data interrogation units are powered by at least one turbogenerator situated in the wellbore. in an embodiment, a turbine in the turbogenerator is driven by at least one of the wellbore servicing fluid and a production fluid flowing through the wellbore. in an embodiment, the data interrogation units are powered by at least one quantum thermoelectric generator situated in the wellbore. in an embodiment, the at least one quantum thermoelectric generator is situated in the casing. in an embodiment, the at least one quantum thermoelectric generator is situated in production tubing disposed in the wellbore. in an embodiment, the system further comprises at least one acoustic transmitter configured to transmit the data from the mems sensors to the processing unit as telemetry signals in the form of acoustic vibrations in the casing. in an embodiment, the system further comprises an acoustic receiver configured to receive the telemetry signals transmitted by the at least one acoustic transmitter. in an embodiment, the system further comprises at least one repeater configured to receive and retransmit the telemetry signals. in an embodiment, each data interrogation unit comprises an acoustic transmitter. 
disclosed herein is a method of servicing a wellbore, comprising pumping a cement slurry down the wellbore, wherein a plurality of micro-electro-mechanical system (mems) sensors is added to a portion of the cement slurry that is added to the wellbore prior to a remainder of the cement slurry, and as the cement slurry is traveling through the wellbore, determining positions of the mems sensors in the wellbore along a length of the wellbore. in an embodiment, the cement slurry is pumped down a casing situated in the wellbore and up an annulus bounded by the casing and the wellbore. in an embodiment, the cement slurry is pumped down an annulus bounded by a casing situated in the wellbore and the wellbore. in an embodiment, the positions of the mems sensors in the wellbore are determined using a plurality of data interrogation units spaced along the length of the wellbore. in an embodiment, entry of the cement slurry into a downhole end of the annulus is determined when at least a portion of the mems sensors are detected by a data interrogation unit situated proximate to the downhole end of the annulus. in an embodiment, the pumping is discontinued when at least a portion of the mems sensors are detected by a data interrogation unit situated proximate to an uphole end of the annulus. in an embodiment, the pumping is discontinued when at least a portion of the mems sensors are detected by a data interrogation unit situated proximate to a downhole end of the annulus. in an embodiment, the mems sensors are powered by a plurality of power sources spaced along the length of the wellbore. in an embodiment, the mems sensors are self-powered. in an embodiment, the mems sensors comprise radio frequency identification device (rfid) tags. 
disclosed herein is a method of servicing a wellbore, comprising placing into a wellbore a first wellbore servicing fluid comprising a plurality of micro-electro-mechanical system (mems) sensors having a first type of radio frequency identification device (rfid) tag, after placing the first wellbore servicing fluid into the wellbore, placing into the wellbore a second wellbore servicing fluid comprising a plurality of mems sensors having a second type of rfid tag, and determining positions in the wellbore of the mems sensors having the first and second types of rfid tags. in an embodiment, the method further comprises determining volumetric regions in the wellbore occupied by the first and second wellbore servicing fluids, using the positions in the wellbore of the mems sensors having the first and second types of rfid tags. in an embodiment, the mems sensors having the first type of rfid tag are added to a portion of the first wellbore servicing fluid added to the wellbore prior to a remainder of the first wellbore servicing fluid, and the mems sensors having the second type of rfid tag are added to a portion of the second wellbore servicing fluid added to the wellbore prior to a remainder of the second wellbore servicing fluid. in an embodiment, the method further comprises determining an interface of the first wellbore servicing fluid and the second wellbore servicing fluid based on the positions in the wellbore of at least a portion of the mems sensors having the second type of rfid tag. in an embodiment, the method further comprises after placing the second wellbore servicing fluid into the wellbore, placing into the wellbore at least one third wellbore servicing fluid comprising a plurality of mems sensors having a type of rfid tag different from the rfid tag of the mems sensors of the second wellbore servicing fluid. 
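the interface determination above can be sketched as follows: with interrogation units spaced along the wellbore each reporting which rfid tag type it currently detects, the interface between the second fluid (tag type 2, pumped later and therefore uphole) and the first fluid (tag type 1) is bracketed between adjacent units reporting different types. the data layout and function name are assumptions:

```python
# Illustrative sketch (not the patent's implementation): bracketing the
# interface between two fluids tagged with different RFID tag types.
def interface_between(detections):
    """detections: list of (depth_m, tag_type) per interrogation unit,
    ordered uphole -> downhole.  Returns (depth_above, depth_below)
    bracketing the type-2/type-1 interface, or None if not found."""
    for (d_up, t_up), (d_dn, t_dn) in zip(detections, detections[1:]):
        if t_up == 2 and t_dn == 1:
            return (d_up, d_dn)
    return None


# e.g., units at 100-400 m; the second (type-2) fluid has reached 200 m:
units = [(100.0, 2), (200.0, 2), (300.0, 1), (400.0, 1)]
bracket = interface_between(units)
```

the bracket width is set by the interrogation-unit spacing, so closer spacing gives finer localization of the fluid interface.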
in an embodiment, the rfid tags of the mems sensors of the at least one third wellbore servicing fluid are of the same type as the rfid tags of the mems sensors of the first wellbore servicing fluid. in an embodiment, the positions of the mems sensors in the wellbore are determined using a plurality of data interrogation units spaced along a length of the wellbore. in an embodiment, the mems sensors are powered by a plurality of power sources spaced along a length of the wellbore. in an embodiment, the mems sensors are self-powered. in an embodiment, apart from the rfid tags, the first and second wellbore servicing fluids are substantially the same compositionally. in an embodiment, irrespective of the rfid tags, the first and second wellbore servicing fluids are compositionally different. disclosed herein is a method of servicing a wellbore, comprising placing into a wellbore a first wellbore servicing fluid comprising a plurality of micro-electro-mechanical system (mems) sensors having a first type of radio frequency identification device (rfid) tag, after placing the first wellbore servicing fluid into the wellbore, placing into the wellbore a second wellbore servicing fluid comprising a plurality of mems sensors having the first type of rfid tag, and determining positions in the wellbore of the mems sensors having the first type of rfid tag, wherein the mems sensors of the first wellbore servicing fluid are added to a portion of the first wellbore servicing fluid added to the wellbore prior to a remainder of the first wellbore servicing fluid, and the mems sensors of the second wellbore servicing fluid are added to a portion of the second wellbore servicing fluid added to the wellbore prior to a remainder of the second wellbore servicing fluid. in an embodiment, the portions of the first and second wellbore servicing fluids are at least one of (a) of different volumes and (b) of different mems sensor loadings. 
in an embodiment, the at least one of the different volumes and the different sensor loadings of the portions of the first and second wellbore servicing fluids is detectable as a signal by a plurality of data interrogation units spaced along a length of the wellbore and transmittable from the data interrogation units to a processing unit situated at an exterior of the wellbore. in an embodiment, the method further comprises determining at least one of a volumetric region of the wellbore occupied by a wellbore servicing fluid and an interface of the wellbore servicing fluids, using the at least one of the different volumes and the different sensor loadings of the portions of the first and second wellbore servicing fluids. in an embodiment, the method further comprises after placing the second wellbore servicing fluid into the wellbore, placing into the wellbore at least one third wellbore servicing fluid comprising a plurality of mems sensors having the first type of rfid tag, wherein the mems sensors of the at least one third wellbore servicing fluid are added to a portion of the at least one third wellbore servicing fluid added to the wellbore prior to a remainder of the at least one third wellbore servicing fluid. in an embodiment, the first, second and at least one third wellbore servicing fluids are substantially the same compositionally. in an embodiment, the first, second and at least one third wellbore servicing fluids are compositionally different. in an embodiment, the first and at least one third wellbore servicing fluids are substantially the same compositionally, and the second wellbore servicing fluid comprises a spacer fluid. in an embodiment, the first, second and at least one third wellbore servicing fluids comprise a drilling fluid, a spacer fluid and a cement slurry, respectively. 
in an embodiment, the method further comprises after placing the at least one third wellbore servicing fluid into the wellbore, placing into the wellbore a fourth wellbore servicing fluid comprising a plurality of mems sensors having the first type of rfid tag, wherein the mems sensors of the fourth wellbore servicing fluid are added to a portion of the fourth wellbore servicing fluid added to the well bore prior to a remainder of the fourth wellbore servicing fluid, wherein the fourth wellbore servicing fluid comprises a displacement fluid. in an embodiment, the first, second, at least one third and fourth wellbore servicing fluids are pumped down a casing of the wellbore; wherein after reaching a downhole end of the wellbore, the first, second and at least one third wellbore servicing fluids are displaced into an annulus bounded by the wellbore and the casing, wherein when the fourth wellbore servicing fluid reaches the downhole end of the wellbore, pumping of the wellbore servicing fluids is discontinued so as to prevent the fourth wellbore servicing fluid from entering the annulus. in an embodiment, the positions of the mems sensors in the wellbore are determined using a plurality of data interrogation units spaced along a length of the wellbore. in an embodiment, the mems sensors are powered by a plurality of power sources spaced along a length of the wellbore. in an embodiment, the mems sensors are self-powered. disclosed herein is a method of servicing a wellbore, comprising placing a plurality of mems sensors in a fracture that is in communication with the wellbore, the mems sensors being configured to measure at least one parameter associated with the fracture, measuring the at least one parameter associated with the fracture, transmitting data regarding the at least one parameter from the mems sensors to an exterior of the wellbore, and processing the data. 
in an embodiment, the at least one parameter comprises a temperature, a stress, a strain, a co 2 concentration, an h 2 s concentration, a ch 4 concentration, a moisture content, a ph, an na + concentration, a k + concentration or a cl − concentration. in an embodiment, the data regarding the at least one parameter is transmitted from the mems sensors to the exterior of the wellbore via a plurality of data interrogation units spaced along a length of the wellbore. in an embodiment, the mems sensors are powered by a plurality of power sources spaced along a length of the wellbore. in an embodiment, the mems sensors are self-powered. disclosed herein is a method of servicing a wellbore, comprising placing a plurality of micro-electro-mechanical system (mems) sensors in a cement slurry, placing the cement slurry in an annulus disposed between a wall of the wellbore and a casing situated in the wellbore, allowing the cement slurry to cure to form a cement sheath, determining spatial coordinates of the mems sensors with respect to the casing, and mapping planar coordinates of the mems sensors in a plurality of cross-sectional planes spaced along a length of the wellbore. disclosed herein is a system, comprising a wellbore, a wellbore servicing fluid situated in the wellbore, the wellbore servicing fluid comprising a plurality of micro-electro-mechanical system (mems) sensors, a casing situated in the wellbore, a plurality of centralizers disposed between a wall of the wellbore and the casing, and spaced along a length of the casing, a plurality of data interrogation units, each data interrogation unit being coupled to a separate centralizer, the data interrogation units being adapted to obtain data from the mems sensors, and a processing unit situated at an exterior of the wellbore and adapted to receive the data from the data interrogation units and process the data. in an embodiment, the data interrogation units are molded to the centralizers.
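The coordinate-mapping step — grouping sensor positions into cross-sectional planes spaced along the wellbore — can be sketched as a simple bucketing of (x, y, depth) coordinates. The plane spacing and coordinate values below are assumptions for illustration only.

```python
from collections import defaultdict

def map_to_planes(sensor_coords, plane_spacing=10.0):
    """sensor_coords: list of (x, y, depth) spatial coordinates of MEMS
    sensors relative to the casing axis. Groups sensors into the nearest
    cross-sectional plane, planes being spaced plane_spacing apart, and
    returns {plane_depth: [(x, y), ...]}."""
    planes = defaultdict(list)
    for x, y, depth in sensor_coords:
        plane = round(depth / plane_spacing) * plane_spacing
        planes[plane].append((x, y))
    return dict(planes)

coords = [(0.02, 0.01, 9.6), (-0.03, 0.00, 10.2), (0.01, -0.02, 20.4)]
print(map_to_planes(coords))
# {10.0: [(0.02, 0.01), (-0.03, 0.0)], 20.0: [(0.01, -0.02)]}
```

The resulting per-plane maps could then be inspected for gaps or asymmetries in sensor distribution around the casing.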
in an embodiment, the data interrogation units are molded to the centralizers, using a composite resin material. in an embodiment, the data interrogation units are powered by at least one turbogenerator situated in the wellbore. in an embodiment, a turbine in the turbogenerator is driven by at least one of the wellbore servicing fluid and a production fluid flowing through the wellbore. in an embodiment, the data interrogation units are powered by at least one quantum thermoelectric generator situated in the wellbore. in an embodiment, the at least one quantum thermoelectric generator is situated in the casing. in an embodiment, the at least one quantum thermoelectric generator is situated in production tubing disposed in the wellbore. disclosed herein is a system, comprising a wellbore, a casing situated in the wellbore, a float collar coupled to the casing proximate to a downhole end of the casing, and a wiper plug comprising mems sensors attached to a downhole end of the wiper plug, the wiper plug being configured to engage with the float collar, the mems sensors being configured to measure pressure. in an embodiment, the mems sensors are molded to the wiper plug, using a composite resin material. in an embodiment, the system further comprises a plurality of data interrogation units attached to an inner wall of the casing and spaced along a length of the casing. in an embodiment, the data interrogation units are molded to the casing, using a composite resin material. disclosed herein is a system, comprising a wellbore, a casing situated in the wellbore, a wiper plug, and a float collar coupled to the casing proximate to a downhole end of the casing, the float collar comprising mems sensors attached to an uphole end of the float collar, the uphole end of the float collar being configured to engage with the wiper plug, the mems sensors being configured to measure pressure. 
in an embodiment, the mems sensors are molded to the float collar, using a composite resin material. disclosed herein is a method of servicing a wellbore, comprising pumping a cement slurry down a casing situated in the wellbore and up an annulus situated between the casing and a wall of the wellbore, pumping a wiper plug down the casing, the wiper plug comprising mems sensors at a downhole end of the wiper plug configured to engage with a float collar, the float collar being coupled to the casing and situated proximate to a downhole end of the casing, the mems sensors being configured to measure pressure, discontinuing pumping of the wiper plug when a pressure measured by the mems sensors exceeds a threshold value. in an embodiment, the mems sensors are molded to the wiper plug, using a composite resin material. in an embodiment, pumping the wiper plug down the casing comprises pumping a displacement fluid down the casing in back of the wiper plug, wherein discontinuing pumping of the wiper plug comprises terminating pumping of the displacement fluid. in an embodiment, the method further comprises determining a position of the wiper plug along a length of the casing as the wiper plug is pumped down the casing. in an embodiment, determining the position of the wiper plug along the length of the casing comprises interrogating the mems sensors using data interrogation units attached to an inner wall of the casing and spaced along the length of the casing. disclosed herein is a system, comprising a wellbore, a casing situated in the wellbore, and a plurality of composite resin elements molded to an inner wall of the casing and spaced along a length of the casing, the composite resin elements comprising micro-electro-mechanical system (mems) sensors. in an embodiment, the system further comprises a wiper plug situated in the casing, the wiper plug comprising a data interrogation unit configured to interrogate mems sensors in a vicinity of the wiper plug. 
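The wiper-plug landing logic above — discontinue pumping when the pressure measured by the plug-mounted MEMS sensors exceeds a threshold — amounts to a simple control loop over a stream of readings. The readings and threshold below are invented for illustration.

```python
def pump_until_landed(pressure_stream, threshold_kpa):
    """Consume pressure readings from the plug-mounted MEMS sensors and
    report how many pump cycles ran before the landing pressure spike
    exceeded the threshold (plug engaged with the float collar)."""
    for cycles, p in enumerate(pressure_stream, start=1):
        if p > threshold_kpa:
            return cycles  # discontinue pumping the displacement fluid
    return None  # plug never landed within the readings given

readings = [3500, 3600, 3550, 7200]  # kPa; spike on landing
print(pump_until_landed(iter(readings), threshold_kpa=5000))  # 4
```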
in an embodiment, the mems sensors are configured to measure a ch 4 concentration in the casing. in an embodiment, the system further comprises a wellbore servicing fluid situated in the wellbore, the wellbore servicing fluid comprising a plurality of mems sensors, wherein the mems sensors in the wellbore servicing fluid are configured to measure at least one parameter and transmit data associated with the at least one parameter from an interior of the wellbore to an exterior of the wellbore via a data transfer network consisting of the mems sensors in the wellbore servicing fluid and the mems sensors in the composite resin elements, and a processing unit situated at an exterior of the wellbore and adapted to receive the data from the mems sensors and process the data. in an embodiment, the composite resin elements are embedded in grooves in the casing. in an embodiment, the composite resin elements are not raised with respect to the inner wall of the casing. in an embodiment, the composite resin elements are mounted flush with the inner wall of the casing. in an embodiment, the composite resin elements are situated on casing collars. disclosed herein is a system, comprising a wellbore, a casing situated in the wellbore, and a plurality of composite resin elements molded to an outer wall of the casing and spaced along a length of the casing, the composite resin elements comprising micro-electro-mechanical system (mems) sensors. in an embodiment, the mems sensors are configured to measure at least one of a ch 4 concentration, a co 2 concentration and an h 2 s concentration in an annulus situated between the casing and a wall of the wellbore. 
in an embodiment, the system further comprises a wellbore servicing fluid situated in the wellbore, the wellbore servicing fluid comprising a plurality of mems sensors, wherein the mems sensors in the wellbore servicing fluid are configured to measure at least one parameter and transmit data associated with the at least one parameter from an interior of the wellbore to an exterior of the wellbore via a data transfer network consisting of the mems sensors in the wellbore servicing fluid and the mems sensors in the composite resin elements, and a processing unit situated at an exterior of the wellbore and adapted to receive the data from the mems sensors and process the data. in an embodiment, the composite resin elements are embedded in grooves in the casing. in an embodiment, the composite resin elements are not raised with respect to the outer wall of the casing. in an embodiment, the composite resin elements are mounted flush with the outer wall of the casing. in an embodiment, the composite resin elements are situated on casing collars. disclosed herein is a method of servicing a wellbore, comprising placing a plurality of micro-electro-mechanical system (mems) sensors in a wellbore servicing fluid, placing the wellbore servicing fluid in the wellbore, forming a network consisting of the mems sensors in the wellbore servicing fluid and mems sensors situated in composite resin elements, the composite resin elements being molded to an inner wall of a casing situated in the wellbore and spaced along a length of the casing, and transmitting data obtained by the mems sensors in the wellbore servicing fluid from an interior of the wellbore to an exterior of the wellbore via the network. 
disclosed herein is a method of servicing a wellbore, comprising placing a plurality of micro-electro-mechanical system (mems) sensors in a wellbore servicing fluid, placing the wellbore servicing fluid in the wellbore, forming a network consisting of the mems sensors in the wellbore servicing fluid and mems sensors situated in composite resin elements, the composite resin elements being molded to an outer wall of a casing situated in the wellbore and spaced along a length of the casing, and transmitting data obtained by the mems sensors in the wellbore servicing fluid from an interior of the wellbore to an exterior of the wellbore via the network. disclosed herein is a system, comprising a wellbore, a casing situated in the wellbore, a plurality of centralizers disposed between a wall of the wellbore and the casing and spaced along a length of the casing, a plurality of composite resin elements molded to the centralizers, the composite resin elements comprising micro-electro-mechanical system (mems) sensors. in an embodiment, the mems sensors are configured to measure at least one of a ch 4 concentration, a co 2 concentration and an h 2 s concentration in an annulus situated between the casing and a wall of the wellbore. in an embodiment, the system further comprises a wellbore servicing fluid situated in the wellbore, the wellbore servicing fluid comprising a plurality of mems sensors, wherein the mems sensors in the wellbore servicing fluid are configured to measure at least one parameter and transmit data associated with the at least one parameter from an interior of the wellbore to an exterior of the wellbore via a data transfer network consisting of the mems sensors in the wellbore servicing fluid and the mems sensors in the composite resin elements, and a processing unit situated at an exterior of the wellbore and adapted to receive the data from the mems sensors and process the data. 
disclosed herein is a method of servicing a wellbore, comprising placing a plurality of micro-electro-mechanical system (mems) sensors in a wellbore servicing fluid, placing the wellbore servicing fluid in the wellbore, forming a network consisting of the mems sensors in the wellbore servicing fluid and mems sensors situated in composite resin elements, the composite resin elements being molded to a plurality of centralizers disposed between a wall of the wellbore and a casing situated in the wellbore, the centralizers being spaced along a length of the casing, and transmitting data obtained by the mems sensors in the wellbore servicing fluid from an interior of the wellbore to an exterior of the wellbore via the network. disclosed herein is a system, comprising a wellbore, a casing situated in the wellbore, and a plastic casing shoe comprising micro-electro-mechanical system (mems) sensors. in an embodiment, the casing shoe comprises a guide shoe. in an embodiment, the casing shoe comprises a float shoe. 
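The data transfer network described above — MEMS sensors in the fluid relaying data hop-by-hop through MEMS sensors in the casing-mounted composite resin elements — can be sketched as a shortest-hop search over short-range links. The node names, positions, and hop range are illustrative assumptions.

```python
import math

def relay_path(nodes, src, sink, hop_range):
    """Breadth-first search over short-range sensor-to-sensor links.
    nodes: {name: (x, y)} positions of MEMS sensors in the fluid and in
    the casing-mounted composite resin elements. Returns a hop sequence
    from src to sink, or None if the network is disconnected."""
    def in_range(a, b):
        return math.dist(nodes[a], nodes[b]) <= hop_range

    frontier, seen = [[src]], {src}
    while frontier:
        path = frontier.pop(0)
        if path[-1] == sink:
            return path
        for nxt in nodes:
            if nxt not in seen and in_range(path[-1], nxt):
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None

nodes = {"fluid1": (0, 0), "resin1": (0, 4), "resin2": (0, 8), "surface": (0, 12)}
print(relay_path(nodes, "fluid1", "surface", hop_range=5))
# ['fluid1', 'resin1', 'resin2', 'surface']
```

A real deployment would route opportunistically rather than with global knowledge, but the sketch shows why fixed resin-mounted sensors matter: they guarantee relay coverage along the casing even where fluid-borne sensors are sparse.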
disclosed herein is a system, comprising a wellbore, a casing situated in the wellbore, a wellbore servicing fluid situated in the wellbore, the wellbore servicing fluid comprising a plurality of micro-electro-mechanical system (mems) sensors, a plurality of interrogation/communication units spaced along a length of the wellbore, wherein each interrogation/communication unit comprises a radio frequency (rf) transceiver configured to interrogate the mems sensors and receive data from the mems sensors regarding at least one wellbore parameter measured by the mems sensors, at least one acoustic sensor configured to measure at least one further wellbore parameter, an acoustic transceiver configured to receive the mems sensor data from the rf transceiver and data from the acoustic sensor regarding the at least one further wellbore parameter and convert the mems sensor data and the acoustic sensor data into acoustic signals, the acoustic transceiver comprising an acoustic transmitter configured to transmit the acoustic signals representing the mems sensor data and the acoustic sensor data on and up the casing to a neighboring interrogation/communication unit situated uphole from the acoustic transmitter, and an acoustic receiver configured to receive acoustic signals representing the mems sensor data and the acoustic sensor data from a neighboring interrogation/communication unit situated downhole from the acoustic receiver and to send the acoustic signals representing the mems sensor data and the acoustic sensor data to the acoustic transmitter for further transmission up the casing, and a processing unit situated at an exterior of the wellbore, the processing unit being configured to receive the acoustic signals representing the mems sensor data and the acoustic sensor data and to process the mems sensor data and the acoustic sensor data. 
in an embodiment, the interrogation/communication units are powered via a power line running between the units and a power source situated at an exterior of the wellbore. in an embodiment, the interrogation/communication units are powered by at least one turbogenerator situated in the wellbore. in an embodiment, a turbine in the turbogenerator is driven by at least one of the wellbore servicing fluid and a production fluid flowing through the wellbore. in an embodiment, the interrogation/communication units are powered by at least one quantum thermoelectric generator situated in the wellbore. in an embodiment, the at least one quantum thermoelectric generator is situated in the casing. in an embodiment, the at least one quantum thermoelectric generator is situated in production tubing disposed in the wellbore. in an embodiment, the mems sensors comprise radio frequency identification device (rfid) tags. disclosed herein is a method of servicing a wellbore, comprising placing a wellbore servicing fluid comprising a plurality of micro-electro-mechanical system (mems) sensors in the wellbore, placing a plurality of acoustic sensors in the wellbore, obtaining data from the mems sensors and data from the acoustic sensors using a plurality of data interrogation and communication units spaced along a length of the wellbore, transmitting the data obtained from the mems sensors and the acoustic sensors from an interior of the wellbore to an exterior of the wellbore using the casing as an acoustic transmission medium, and processing the data obtained from the mems sensors and the acoustic sensors. in an embodiment, the method further comprises determining a presence of a liquid phase and a solid phase of a cement slurry situated in the wellbore, using the acoustic sensors. in an embodiment, the method further comprises determining a presence of at least one of cracks and voids in a cement sheath situated in the wellbore, using the acoustic sensors. 
in an embodiment, the method further comprises detecting a presence of mems sensors in the wellbore servicing fluid, using the acoustic sensors. in an embodiment, the method further comprises determining a porosity in a formation adjacent to the wellbore, using the acoustic sensors. disclosed herein is a method of servicing a wellbore, comprising placing a plurality of micro-electro-mechanical system (mems) sensors in a wellbore composition, flowing the wellbore composition in the wellbore, and determining one or more fluid flow properties or characteristics of the wellbore composition from data provided by the mems sensors during the flowing of the wellbore composition, wherein the fluid flow properties or characteristics include an indication of laminar and/or turbulent flow of the wellbore composition, wherein the fluid flow properties or characteristics include velocity and/or flow rate of the wellbore composition, and wherein the wellbore composition is circulated in the wellbore and a fluid flow profile is determined over at least a portion of the length of the wellbore. in an embodiment, the method further comprises comparing the fluid flow profile to a theoretical or design standard for the fluid flow profile, wherein the comparing is carried out in real-time during the servicing of the wellbore. 
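The interrogation/communication unit described above merges RF-interrogated MEMS data with local acoustic-sensor readings and forwards everything uphole acoustically along the casing, unit to unit. A minimal sketch of that relay chain follows; the frame format and field names are invented for illustration and are not part of the disclosure.

```python
class InterrogationUnit:
    """Sketch of one interrogation/communication unit: it gathers MEMS
    data via its RF transceiver, merges local acoustic-sensor readings,
    and forwards everything uphole as frames along the casing."""
    def __init__(self, depth_m, uphole=None):
        self.depth_m, self.uphole, self.outbox = depth_m, uphole, []

    def receive_acoustic(self, frames):
        # frames arriving from the neighboring unit downhole
        self.forward(frames)

    def collect(self, mems_data, acoustic_data):
        frame = {"depth": self.depth_m, "mems": mems_data, "acoustic": acoustic_data}
        self.forward([frame])

    def forward(self, frames):
        if self.uphole is not None:
            self.uphole.receive_acoustic(frames)
        else:
            self.outbox.extend(frames)  # topmost unit: hand off to surface

surface = InterrogationUnit(0)
mid = InterrogationUnit(500, uphole=surface)
deep = InterrogationUnit(1000, uphole=mid)
deep.collect({"temp_c": 88}, {"phase": "solid"})
print(surface.outbox[0]["depth"])  # 1000
```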
in an embodiment, the method further comprises altering or adjusting one or more operational parameters of the servicing of the wellbore in response to the comparing in real time, wherein the altering or adjusting is effective to change a condition of the wellbore, wherein the condition of the wellbore is a build up of material on an interior of the wellbore and the altering or adjusting includes remedial action to reduce an amount of the build up, wherein the wellbore composition is a drilling fluid and the build up is a gelled mud or filter cake, wherein the wellbore is treated to remove at least a portion of the build up, wherein the treatment to remove at least a portion of the build up comprises changing a flow rate of the wellbore composition, changing a characteristic of the wellbore composition, placing an additional composition in the wellbore to react with the build up or change a characteristic of the buildup, moving a conduit within the wellbore, placing a tool downhole to physically contact and removing the build up, or any combination thereof, wherein the fluid flow property or characteristic is an actual time of arrival of at least a portion of the wellbore composition comprising the mems sensors, wherein the actual time of arrival is compared to an expected time of arrival to determine a condition of the wellbore, wherein where the actual time of arrival is before the expected time of arrival indicates a decreased flow path through the wellbore, wherein the decreased flow path through the wellbore is attributable at least in part to a build up of gelled mud or filter cake on an interior of the wellbore, and wherein the flow profile identifies a location of one or more areas of restricted flow in the wellbore. 
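The arrival-time comparison above — early arrival of the tagged fluid portion indicating a decreased flow path such as gelled mud or filter cake build-up — reduces to comparing the measured arrival time against the ideal time implied by pump rate and conduit volume. The tolerance and numbers below are assumptions for illustration.

```python
def arrival_diagnosis(actual_s, flow_rate_lps, casing_volume_l, tol=0.05):
    """Compare the measured arrival time of the tagged fluid portion at
    an interrogation unit against the ideal time implied by pump rate
    and casing volume. Early arrival suggests a decreased flow path
    (e.g. gelled mud or filter cake build-up); late arrival suggests
    losses or washout."""
    expected_s = casing_volume_l / flow_rate_lps
    deviation = (expected_s - actual_s) / expected_s
    if deviation > tol:
        return "early: possible restricted flow path (build-up)"
    if deviation < -tol:
        return "late: possible losses or washout"
    return "on time"

print(arrival_diagnosis(actual_s=540, flow_rate_lps=20, casing_volume_l=12000))
# expected 600 s; arriving 10% early flags a restriction
```

Repeating this per interrogation unit would yield the flow profile the method describes, localizing the restricted interval rather than only flagging its existence.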
in an embodiment, the method further comprises comparing the location of one or more areas of restricted flow in the wellbore to a theoretical or design standard for the wellbore, wherein the one or more areas of restricted fluid flow correspond to an expected location of a downhole tool or component based upon the theoretical or design standard for the wellbore, wherein the downhole tool or component is a casing collar, centralizer, or spacer. also disclosed herein is a method of servicing a wellbore, comprising placing a plurality of micro-electro-mechanical system (mems) sensors in at least a portion of a spacer fluid, a sealant composition, or both, pumping the spacer fluid followed by the sealant composition into the wellbore, and determining one or more fluid flow properties or characteristics of the spacer fluid and/or the sealant composition from data provided by the mems sensors during the pumping of the spacer fluid and sealant composition into the wellbore, wherein the wellbore comprises a casing forming an annulus with the wellbore wall, wherein the sealant composition is a cement slurry, and wherein the cement slurry is pumped down the annulus in a reverse cementing service. in an embodiment, the method further comprises halting the pumping of the cement slurry in the wellbore in response to detection of mems sensors at a given location in the wellbore. in an embodiment, the method further comprises monitoring the wellbore for movement of the mems sensors after the halting of the pumping. in an embodiment, the method further comprises signaling an operator upon detection of movement of the mems sensors after the halting of the pumping. in an embodiment, the method further comprises activating at least one device to prevent flow out of the well upon detection of movement of the mems sensors after the halting of the pumping.
disclosed herein is a method of servicing a wellbore, comprising placing a plurality of micro-electro-mechanical system (mems) sensors in at least a portion of a sealant composition, placing the sealant composition in an annular space formed between a casing and the wellbore wall, and monitoring, via the mems sensors, the sealant composition and/or the annular space for a presence of gas, water, or both, wherein the sealant composition is a cement slurry and wherein the monitoring is carried out prior to setting of the cement slurry. in an embodiment, the method further comprises signaling an operator upon detection of gas and/or water. in an embodiment, the method further comprises providing a location in the wellbore corresponding to a detection of gas and/or water. in an embodiment, the method further comprises applying pressure to the well upon detection of gas and/or water. in an embodiment, the method further comprises activating at least one device to prevent flow out of the well upon detection of gas and/or water, wherein the cement slurry is pumped down the annulus in a reverse cementing service, wherein the cement slurry is pumped down the casing and up the annulus in a conventional cementing service, wherein the sealant composition is a cement slurry and wherein the monitoring is carried out after setting of the cement slurry, and wherein the monitoring is carried out by running an interrogator tool into the wellbore at one or more service intervals over the operating life of the well. in an embodiment, the method further comprises providing a location in the wellbore corresponding to a detection of gas and/or water. in an embodiment, the method further comprises assessing the integrity of the casing and/or the cement proximate the location where gas and/or water is detected.
in an embodiment, the method further comprises performing a remedial action on the casing and/or the cement proximate the location where gas and/or water is detected, wherein the remedial action comprises placing additional sealant composition proximate the location where gas and/or water is detected, wherein the remedial action comprises replacing and/or reinforcing the casing proximate the location where gas and/or water is detected. in an embodiment, the method further comprises upon detection of gas and/or water, adjusting an operating condition of the well, wherein the operating condition comprises temperature, pressure, production rate, length of service interval, or any combination thereof, wherein adjusting the operating condition extends an expected service life of the wellbore. also disclosed herein is a method of servicing a wellbore, comprising placing a plurality of micro-electro-mechanical system (mems) sensors in a wellbore composition, placing the wellbore composition in the wellbore, and monitoring, via the mems sensors, the wellbore and/or the surrounding formation for movement, wherein the mems sensors are in a sealant composition placed within an annular casing space in the wellbore and wherein the movement comprises a relative movement between the sealant composition and the adjacent casing and/or wellbore wall, wherein at least a portion of the wellbore composition comprising the mems flows into the surrounding formation and wherein the movement comprises a movement in the formation. 
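The gas/water monitoring and location-reporting steps above — scan the set cement via its embedded MEMS sensors, flag over-limit readings, and give the operator a depth to act on — can be sketched as a threshold scan. The species limits and readings below are illustrative assumptions, not regulatory or disclosed values.

```python
GAS_LIMITS = {"co2_ppm": 1000, "h2s_ppm": 10, "ch4_ppm": 5000}  # illustrative

def scan_annulus(readings):
    """readings: list of (depth_m, {species: ppm}) gathered from MEMS
    sensors embedded in the cement sheath. Returns alarm tuples
    (depth, species, value) for any reading over its limit, giving the
    operator a location in the wellbore to assess or remediate."""
    alarms = []
    for depth, sample in readings:
        for species, value in sample.items():
            if value > GAS_LIMITS.get(species, float("inf")):
                alarms.append((depth, species, value))
    return alarms

readings = [(850, {"co2_ppm": 400, "h2s_ppm": 2}),
            (1200, {"h2s_ppm": 35, "ch4_ppm": 100})]
print(scan_annulus(readings))  # [(1200, 'h2s_ppm', 35)]
```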
in an embodiment, the method further comprises upon detection of the movement in the formation, adjusting an operating condition of the well, wherein the operating condition comprises a production rate of the wellbore, wherein adjusting the production rate extends an expected service life of the wellbore, wherein the gas comprises carbon dioxide, hydrogen sulfide, or combinations thereof, wherein a corrosive gas is detected, wherein the integrity of the casing and/or cement is compromised via corrosion and further comprising performing a remedial action on the casing and/or the cement proximate the location where corrosion is present, wherein the wellbore is associated with a carbon dioxide injection system and wherein the monitoring detects an undesirable leak or loss of zonal isolation in the wellbore. in an embodiment, the method further comprises performing a remedial action on the casing and/or the cement proximate a location where the leak or loss of zonal isolation is detected. in an embodiment, the method further comprises placing carbon dioxide into the wellbore and surrounding formation to sequester the carbon dioxide. improved methods of monitoring wellbore and/or surrounding formation parameters and conditions (e.g., sealant condition) from inception (e.g., drilling and completion) through the service lifetime of the wellbore as disclosed herein provide a number of advantages. such methods are capable of detecting changes in parameters in the wellbore and/or surrounding formation such as moisture content, temperature, ph, the concentration of ions (e.g., chloride, sodium, and potassium ions), the presence of gas, etc. such methods provide this data for monitoring the condition of the wellbore and/or formation from the initial quality control period (e.g., during drilling and/or completion of the wellbore, for example during cementing of the wellbore), through the well's useful service life, and through its period of deterioration and/or repair.
such methods are cost efficient and allow determination of real-time data using sensors capable of functioning without the need for a direct power source (i.e., passive rather than active sensors), such that sensor size can be kept minimal to avoid operational limitations (for example, small mems sensors to maintain sealant strength and sealant slurry pumpability). the use of mems sensors for determining wellbore and/or formation characteristics or parameters may also be utilized in methods of pricing a well servicing treatment, selecting a treatment for the well servicing operation, and/or monitoring a well servicing treatment during real-time performance thereof, for example, as described in u.s. pat. pub. no. 2006/0047527 a1, which is incorporated by reference herein in its entirety. while embodiments of the methods have been shown and described, modifications thereof can be made by one skilled in the art without departing from the spirit and teachings of the present disclosure. the embodiments described herein are exemplary only, and are not intended to be limiting. many variations and modifications of the methods disclosed herein are possible and are within the scope of this disclosure. where numerical ranges or limitations are expressly stated, such express ranges or limitations should be understood to include iterative ranges or limitations of like magnitude falling within the expressly stated ranges or limitations (e.g., from about 1 to about 10 includes 2, 3, 4, etc.; greater than 0.10 includes 0.11, 0.12, 0.13, etc.). use of the term “optionally” with respect to any element of a claim is intended to mean that the subject element is required, or alternatively, is not required. both alternatives are intended to be within the scope of the claim. use of broader terms such as comprises, includes, having, etc. should be understood to provide support for narrower terms such as consisting of, consisting essentially of, comprised substantially of, etc.
accordingly, the scope of protection is not limited by the description set out above but is only limited by the claims which follow, that scope including all equivalents of the subject matter of the claims. each and every claim is incorporated into the specification as an embodiment of the present disclosure. thus, the claims are a further description and are an addition to the embodiments of the present disclosure. the discussion of a reference herein is not an admission that it is prior art to the present disclosure, especially any reference that may have a publication date after the priority date of this application. the disclosures of all patents, patent applications, and publications cited herein are hereby incorporated by reference, to the extent that they provide exemplary, procedural or other details supplementary to those set forth herein.
131-679-503-118-806
GB
[ "ES", "GB", "AT", "EP", "CA", "DE", "AU", "US", "WO" ]
G05B23/02
1997-03-13T00:00:00
1997
[ "G05" ]
dynamic plant monitoring apparatus displays fault information
monitoring the operation of a dynamic plant apparatus such as a gas turbine 10, by processing signals from distributed sensors 11-15 and generating high level fault signals based on a series of detected low level faults. diagnostic module 32 processes signals from sensors 11-15 and, when a signal value lies outside a stored limit, produces a basic fault token signal which is sent to fault manager 33. fault manager 33 comprises a buffer (50, fig. 2) of fault token signals ordered in time, and operates in a clocked manner so that each of a plurality of different storage sites or temporal buckets (40, fig. 2), having a number of storage locations (41-43, fig. 2) for different predetermined basic fault tokens, is opened for a specified clock period for receipt of basic fault tokens. if all the storage locations of a storage site are filled within that clock period, an associated high level fault signal is produced and a fault message displayed (fig. 4).
1. a monitoring system for monitoring operation of dynamic plant apparatus comprising: a plurality of sensors measuring dynamically varying operating parameters of the monitored dynamic plant apparatus and for generating electrical parameter signals indicative of the measured operating parameters; electronic processing means for processing the electrical parameter signals and which is capable of thereby producing a plurality of different fault signals each of which respectively indicates that the monitored dynamic plant apparatus has a respective one of a plurality of faults; display means for displaying fault information to a user of the monitoring system, the display means being controlled by the fault signals produced by the electronic processing means; wherein: the electronic processing means compares the values of at least some of the measured parameter signals each with respective predefined limit values stored in memory by the electronic processing means and when the comparison shows that the value of a measured parameter signal is outside the respective limit value the electronic processing means produces a respective basic fault token signal, whereby: the electronic processing means has storage means for storing electrical signals which operates in a clocked manner and stores the basic fault token signals; the storage means has a plurality of different storage sites, each storage site having a plurality of storage locations for predetermined ones of the basic fault token signals; and when all storage locations of a storage site are filled by basic fault token signals produced in a prespecified clocked interval then the electronic processing means produces a high level fault signal, each storage site having an associated high level fault signal, the high level fault signal causing the display means to display a fault message.
2. a monitoring system as claimed in claim 1 wherein the electronic processing means compares the value of a first measured parameter signal with the value of a second measured parameter signal and if after a change of the value of the first measured parameter signal there is not a related change of value of the second measured parameter signal within a predefined time period then the electronic processing means produces a basic fault token signal.

3. a monitoring system as claimed in claim 1 or claim 2 wherein on generation of the high level fault signal the display means displays information both regarding the high level fault and also the basic faults associated with the high level fault signal.

4. a monitoring system as claimed in any of the preceding claims wherein the storage means stores predefined relationships between high level fault signals and basic fault token signals and when a high level fault signal is produced the monitoring system can determine which basic fault token signals, other than those in the storage site related to said high level fault signal, result from the detected high level fault, and information regarding such resulting basic faults is displayed by the display means.

5. a gas turbine having a monitoring system as claimed in any one of claims 1 to 4.

6. a gas turbine as claimed in claim 5, wherein the measured dynamically varying operating parameters include: a current to a servo-motor for a second stage nozzle of the gas turbine; a desired position control signal for controlling position of the second stage nozzle; a position feedback signal indicative of a measured position of the second stage nozzle; a signal indicative of rotational speed of a compressor shaft of the gas turbine; and a signal indicative of rotational speed of an output shaft of the gas turbine.

7. a gas turbine as claimed in claim 6 wherein the monitoring system produces a first basic fault token signal when the current to the servo-motor exceeds a first predetermined threshold.
8. a gas turbine as claimed in claim 6 or claim 7 wherein the monitoring system produces a second basic fault token signal when the desired position control signal for controlling position of the second stage nozzle exceeds a second predefined threshold.

9. a gas turbine as claimed in any one of claims 6 to 8 wherein the monitoring system produces a third basic fault token signal when the signal indicative of the rotational speed of the compressor shaft exceeds a third predefined threshold.

10. a gas turbine as claimed in any of claims 6 to 9 wherein the monitoring system produces a fourth basic fault token signal when the measured position of the second stage nozzle fails to match the desired position of the second stage nozzle within a first predefined time period.

11. a gas turbine as claimed in any one of claims 6 to 10 wherein the monitoring system produces a fifth basic fault token signal when a comparison of the signal indicative of the rotational speed of the output shaft with a turbine output shaft reference speed signal shows that the output shaft speed does not match the reference speed within a second predefined time period.

12. a gas turbine as claimed in claim 6 wherein the monitoring system produces: a first basic fault token signal when the current to the servo-motor exceeds a first predefined threshold; a second basic fault token signal when the desired position control signal for controlling position of the second stage nozzle exceeds a second predefined threshold; a fourth basic fault token signal when the measured position of the second stage nozzle does not match the desired position of the second stage nozzle within a first predefined time period; and the monitoring system produces a high level fault token signal indicating failure of a servo-motor of the second stage nozzle when a storage site has three storage locations filled one each with the first, second and fourth basic fault token signals within a specified time period.
13. a gas turbine as claimed in claim 12 wherein the storage means stores predefined relationships between the high level fault signal indicating failure of the servo-motor of the second stage nozzle and: a third basic fault token signal which is produced when the signal indicative of compressor shaft speed exceeds a specified threshold; and a fifth basic fault token signal which is produced when a comparison of the signal indicative of the rotational speed of the output shaft with a turbine output shaft reference speed signal indicates that the rotational speed of the output shaft does not match the reference speed within a second predefined time period; and wherein: the monitoring system uses the predefined relationships to record that production of the third and fifth basic fault token signals can be accounted for by a high level fault of saturation of the servo-motor of the second stage nozzle and when the high level fault token signal indicating failure of the servo-motor is produced the monitoring system displays on the display means a message detailing recent third and fifth basic fault token signals which have been produced and notes that such signals can be accounted for by the high level fault detected.

14. a gas turbine as claimed in any one of claims 5 to 13 wherein the measured dynamically varying parameters include: a signal indicative of running gas pressure; and a signal indicative of current supplied to a running gas speed ratio valve.

15. a gas turbine as claimed in claim 14 wherein a sixth basic fault token signal is produced when the signal indicative of the running gas pressure exceeds a sixth predefined limit and a seventh basic fault token signal is produced when the signal indicative of current supplied to a running gas speed ratio valve exceeds a seventh predefined limit.
16. a gas turbine as claimed in claim 15 wherein a high level fault token signal indicating failure of a coil of the running gas speed ratio valve is produced when a storage site has two storage locations filled one each with the sixth and seventh basic fault token signals in a specified period.

17. a gas turbine as claimed in any one of claims 5 to 16 wherein a basic fault token signal is produced when a change in position of a gas fuel valve is sensed by a sensor without a corresponding change in gas fuel flow being sensed by another sensor within a predefined time period.

18. a gas turbine as claimed in any one of claims 5 to 17 wherein a basic fault token signal is produced when a change in temperature of gas discharged from a compressor part of the gas turbine is sensed by a temperature sensor without a corresponding change in flow of fuel gas being sensed by a gas flow sensor within a predefined time period.

19. a gas turbine as claimed in any one of claims 5 to 18 wherein a basic fault token signal is produced when a change in flame intensity is sensed by a flame detector sensor without a corresponding change in gas control valve position reference signal within a predefined time period.

20. a gas turbine as claimed in any one of claims 5 to 19 wherein the gas turbine is connected to an electricity generator and drives the generator to produce electrical power, sensors are provided to sense dynamically varying operating parameters of the electricity generator and a basic fault token signal is produced when a change in power output of the electricity generator is sensed by a power sensor without a corresponding change in temperature of a wheel space of a turbine of the gas turbine being sensed by a temperature sensor within a predefined time period.
21. a gas turbine as claimed in any one of claims 5 to 19 wherein the gas turbine is connected to an electricity generator and drives the electricity generator to produce electrical power, sensors are provided to sense dynamically varying operating parameters of the electricity generator and a basic fault token signal is produced when a change in fuel gas flow rate is sensed by a flow rate sensor without a corresponding change in power output of the electricity generator being sensed by a power sensor within a predefined time period.

22. a gas turbine as claimed in any one of claims 5 to 19 wherein the gas turbine is connected to an electricity generator and drives the electricity generator to produce electrical power, sensors are provided to sense dynamically varying operating parameters of the electricity generator and a basic fault token signal is produced when a change in power output of the electricity generator is sensed by a power sensor without a corresponding change in flame intensity being sensed by a flame detector within a predefined time period.

23. a gas turbine as claimed in any one of claims 5 to 19 wherein the gas turbine is connected to an electricity generator and drives the electricity generator to produce electrical power, sensors are provided to sense dynamically varying operating parameters of the electricity generator and a basic fault token signal is produced when a change in power output of the electricity generator is sensed by a power sensor without a corresponding change in temperature of exhaust gasses of the gas turbine being sensed by a temperature sensor within a predefined time period.
24. a gas turbine as claimed in any one of claims 5 to 19 wherein the gas turbine is connected to an electricity generator and drives the electricity generator to produce electrical power, sensors are provided to sense dynamically varying operating parameters of the electricity generator and a basic fault token signal is produced when a change in temperature of a stator of the electricity generator is sensed without a corresponding change in the maximum temperature of air drawn into a compressor part of the gas turbine being sensed by a temperature sensor within a predefined time period.

25. a gas turbine as claimed in any one of claims 5 to 19 wherein the gas turbine is connected to an electricity generator and drives the electricity generator to produce electrical power, sensors are provided to sense dynamically varying operating parameters of the electricity generator and a basic fault token signal is produced when a change in temperature of a stator of the electricity generator is sensed by a temperature sensor without a corresponding change in temperature of the exhaust gasses of the gas turbine being sensed by another temperature sensor within a predefined time period.

26. a gas turbine as claimed in any one of claims 5 to 19 wherein the gas turbine is connected to an electricity generator and drives the electricity generator to produce electrical power, sensors are provided to sense dynamically varying operating parameters of the electricity generator and a basic fault token signal is produced when a change in temperature of a bearing of the electricity generator is sensed by a temperature sensor without a corresponding change in temperature of cooling gas entering the electricity generator being sensed by another temperature sensor within a predefined time period.
27. a gas turbine as claimed in any one of claims 5 to 19 wherein the gas turbine is connected to an electricity generator and drives the electricity generator to produce electrical power, sensors are provided to sense dynamically varying operating parameters of the electricity generator and a basic fault token signal is produced when the value of electrical power output of the generator as sensed by a power sensor is less than zero for more than a predefined time period.

28. a gas turbine as claimed in any one of claims 5 to 27 wherein the gas turbine is provided with NOx control apparatus for controlling NOx emissions in exhaust gasses of the gas turbine by injecting steam into the exhaust gasses and sensors are provided to monitor dynamically varying operating parameters of the NOx control apparatus and a basic fault token signal is produced when a change in the rate of flow of steam to combustors of the gas turbine is sensed by a flow meter without a corresponding change in temperature of exhaust gasses of the gas turbine being sensed by a temperature sensor within a predefined time period.

29. a gas turbine as claimed in any one of claims 5 to 27 wherein the gas turbine is provided with NOx control apparatus for controlling NOx emissions in exhaust gasses of the gas turbine by injecting steam into the exhaust gasses and sensors are provided to monitor dynamically varying operating parameters of the NOx control apparatus and a basic fault token signal is produced when a change in injection pressure of a steam injector of the NOx control apparatus is sensed by a pressure sensor without a corresponding change in flow rate of steam to combustors of the gas turbine being sensed by a flow meter within a predefined time period.
30. a gas turbine as claimed in any one of claims 5 to 29 wherein a basic fault token signal is produced when a change in temperature of gasses discharged by a compressor part of the gas turbine is sensed by a temperature sensor without a corresponding change in temperature of gasses at the inlet of the compressor part being sensed by another temperature sensor within a predefined time period.

31. a gas turbine as claimed in any one of claims 5 to 30 wherein a basic fault token signal is produced when a change in temperature of compressed gas leaving a compressor part of the gas turbine is sensed by a temperature sensor without a corresponding change in temperature of exhaust gasses of the gas turbine being sensed by another temperature sensor within a predefined time period.

32. a gas turbine as claimed in any one of claims 5 to 31 wherein a basic fault token signal is produced when a change in temperature of exhaust gasses of the gas turbine is sensed without a corresponding change in fuel gas intervalve pressure being sensed by a pressure sensor within a predefined time period.

33. a gas turbine as claimed in any one of claims 5 to 31 wherein a basic fault token signal is produced when a digital master protection signal changes value from 1 to 0.

34. a gas turbine as claimed in claims 24 or 26 wherein the monitoring system produces a high level fault token signal indicating a mechanical fault in the electricity generator when a storage site has two storage locations filled within a specified time period one each with the following basic fault token signals: the basic fault token signal indicating an increase in the bearing temperature without a corresponding change in the temperature of the cooling gas entering the electricity generator; and the basic fault token signal indicating a change in the stator temperature without a corresponding change in the maximum temperature of air drawn into the compressor part of the gas turbine.
35. a gas turbine as claimed in claim 19 or claim 29 wherein the monitoring system produces a high level fault token signal indicating a problem in steam supply to the combustors when a storage site has two storage locations filled within a specified time period one each with the following basic fault token signals: the basic fault token signal indicating a change in sensed flame intensity without a corresponding change in the gas control valve position reference signal; and the basic fault token signal indicating a change in the injection pressure of the steam without a corresponding change in flow rate of steam to the combustors of the gas turbine.

36. a gas turbine as claimed in either claim 6, claim 21, claim 22 or claim 23 wherein the monitoring system produces a high level fault token signal indicating a problem of air flow disturbance in the gas turbine affecting power output when a storage site has first and second storage locations filled in a specified time period one each with the following basic fault token signals: the basic fault token signal indicating a change in power output of the electricity generator is sensed without a corresponding change in flame intensity; and the basic fault token signal indicating a change in sensed power output without a corresponding change in the temperature of the exhaust gasses; and when in the same specified time period a third storage location of the storage site remains unfilled by the basic fault token signal indicating that a change in flow rate of fuel gas is not followed by a corresponding change in the sensed power output.
37. a gas turbine as claimed in claim 19 or claim 32 wherein the monitoring system produces a high level fault token signal indicating a problem with turbine combustors when a storage site has two storage locations filled within a specified time period one each with the following basic fault token signals: the basic fault token signal indicating a change in flame intensity without a corresponding change in gas control valve position reference signal; and the basic fault token signal indicating a change in temperature of the exhaust temperature of the gas turbine without a corresponding change in fuel gas intervalve pressure.

38. a gas turbine as claimed in claim 30 or claim 31 wherein the monitoring system produces a high level fault token signal indicating a problem with a temperature change in the compressor part of the gas turbine when a storage site has two storage locations filled within a specified time period one each with the following basic fault token signals: the basic fault token signal indicating a change in the temperature of the gasses discharged by the compressor part of the turbine without a corresponding change in temperature of the gasses at the inlet of the compressor part; and the basic fault token signal indicating a change in the temperature of the gasses discharged by the compressor part of the turbine without a corresponding change in the exhaust temperature of the gas turbine.

39. a gas turbine as claimed in claim 27 or claim 33 wherein the monitoring system produces a high level fault token signal indicating a problem with the generator being in reverse power mode when a storage site has two storage locations filled within a first specified time period one each with the following basic fault token signals: the basic fault token signal indicating a change in value of the digital master protection signal from 1 to 0; and the basic fault token signal indicating that the electrical power output of the electricity generator is less than zero.
40. a gas turbine as claimed in claim 39 wherein, if the two storage locations remain filled for more than a second specified time period, the monitoring system produces a high level fault token signal indicating a major fault.
the present invention relates to a monitoring system. the present invention will be described with reference to a monitoring system used to monitor a gas turbine, but the invention should not be considered as limited to such an application. indeed, the invention could be used to monitor the performance of many complex machines and/or processes, although it has particular relevance and application to the monitoring of the performance of gas turbines. gas turbines are used in many industrial plants. the maintenance costs of gas turbines can be quite high and it is therefore important to find ways of reducing these costs. in the past, routine preventative maintenance checks were used to minimise major problems through regular checks of the gas turbine's performance and the correction of minor problems. this can be improved upon by monitoring the turbine on a regular basis so that maintenance action is performed based upon the actual condition of the gas turbine, rather than on the fact that the gas turbine has operated for a defined number of operating hours. the difficulty in doing this is to provide a system which both takes the correct measurements from the gas turbine and which can also interpret those measurements in a manner which alerts a user to a gas turbine failure and gives an idea of the causes of the failure. relevant prior art is disclosed in document de 3 301 743 a1.
the present invention provides a monitoring system for monitoring operation of dynamic plant apparatus comprising: a plurality of sensors measuring dynamically varying operating parameters of the monitored dynamic plant apparatus and for generating electrical parameter signals indicative of the measured operating parameters; electronic processing means for processing the electrical parameter signals and which is capable of thereby producing a plurality of different fault signals each of which respectively indicates that the monitored dynamic plant apparatus has a respective one of a plurality of faults; display means for displaying fault information to a user of the monitoring system, the display means being controlled by the fault signals produced by the electronic processing means; wherein: the electronic processing means compares the values of at least some of the measured parameter signals each with respective predefined limit values stored in memory by the electronic processing means and when the comparison shows that the value of a measured parameter signal is outside the respective limit value the electronic processing means produces a respective basic fault token signal, whereby: the electronic processing means has storage means for storing electrical signals which operates in a clocked manner and stores the basic fault token signals; the storage means has a plurality of different storage sites, each storage site having a plurality of storage locations for predetermined ones of the basic fault token signals; and when all storage locations of a storage site are filled by basic fault token signals produced in a prespecified clocked interval then the electronic processing means produces a high level fault signal, each storage site having an associated high level fault signal, the high level fault signal causing the display means to display a fault message. 
the present invention is advantageous in quickly providing to a user an indication of a fault and also in providing a high level output based on a series of detected low level faults. a preferred embodiment of the present invention will now be described with reference to the accompanying drawings, in which:

figure 1 is a schematic flow chart showing in broad outline the operation of the monitoring system of the invention;
figure 2 is a schematic diagram giving a flow chart illustrating the operation of part of the monitoring system of the invention;
figure 3 is a diagram showing how various measured signals vary with time during a sensed failure; and
figure 4 shows how a display of the monitoring apparatus will look when a fault is reported.

the enclosed diagrams and the description which follows will show how the monitoring system of the present invention is innovative in processing signals generated by sensors distributed throughout a gas turbine in order to produce output signals which indicate a smaller number of fault conditions. this will permit a user monitoring the gas turbine to rapidly assess the state of the gas turbine and its problems without having to consider each sensed signal independently himself. indeed, the monitoring system of the present invention will produce a fault message which can be read by an engineer using the monitoring system.

in figure 1 the monitored gas turbine 10 is illustrated schematically. the gas turbine is connected to an electricity generator (dynamo) 100 to produce electrical power. various sensors are used to measure the dynamically varying parameters of operation of the turbine 10 and dynamically varying parameters of the electricity generator. sensors 11, 12, 13, 14 and 15 are shown schematically in the figure, connected to the turbine 10 to measure operating parameters of the turbine 10 or connected to the electricity generator 100. the sensors measure the following parameters:

1. tanz - second stage nozzle current
2. tsrnz - second stage nozzle reference position
3. tsnz - second stage nozzle actual position
4. tnh - compressor shaft actual speed
5. tnl - turbine output shaft actual speed
6. tnr - turbine output shaft reference speed
7. bb4 - output of a vibration sensor located close to the second stage nozzles of the turbine
8. ctim - maximum compressor inlet temperature (i.e. a measure of the maximum temperature of the air flow into the compressor portion of the gas turbine)
10. ctd - compressor discharge temperature (i.e. a measure of the temperature of the gases at the outlet of the compressor portion of the gas turbine)
11. ttws1 - temperature of a wheel space of the first stage turbine portion of the gas turbine
12. ttws2 - temperature of a wheel space of the second stage turbine portion of the gas turbine
13. ttws3 - temperature of a wheel space of the third stage turbine portion of the gas turbine
14. ttxd - exhaust temperature of the gas turbine
15. tgsd - electricity generator stator winding temperature
16. dtggc - electricity generator cooling gas temperature on intake (cold)
17. dtggh - electricity generator cooling gas temperature on output (hot)
18. btgj - bearing temperature
19. ftg - fuel gas temperature
20. fqg - fuel gas flow rate
21. fprgout - gas ratio servo-valve demand signal
22. fsgr - speed ratio valve position signal
23. cpd - compressor discharge pressure
24. fd - flame detector signal
25. wqr - required flow rate of steam to the combustors
26. wqj - actual flow of steam to the combustors
27. spsj - steam injection supply pressure
28. dwatt - sensed power output of electricity generator driven by the gas turbine
29. fsrout - gas control valve position reference signal
30. fpg2 - gas fuel intervalve pressure
31. l4 - master protect digital signal

the above-noted signals are provided both to a turbine control system 20 and to a monitoring system 30 according to the present invention, which is shown in various modules in figure 1, namely a data acquisition module 31, a diagnostic module 32, a fault manager module 33, and a display module 34. the signals mentioned above can initially be provided as analog or digital sensed signals for use by the turbine controller 20, but the data signals relayed from the turbine controller 20 to the monitoring system 30 will all be digital signals. the monitoring system 30 acquires signals from the physical sensors 11 to 15 (in the illustrated example via the turbine controller 20) by use of its data acquisition module 31. the digital signals are then relayed to the diagnostic module 32, which can generate basic fault token signals which are then sent to the fault manager 33. the fault manager module 33 then processes the basic fault token signals to produce high level fault token signals which are relayed to the display means 34 for display to the user of the monitoring system.

the fault manager 33 can generate, amongst others, a high level fault token signal noz(sat) which will cause the display means 34 to display to the user a message indicating that the second stage nozzles of the turbine 10 have saturated. the fault manager 33 produces the signal noz(sat) when a "temporal bucket" of the fault manager 33 is full. the term "temporal bucket" will be used as a convenient 'shorthand' to describe a storage site (e.g. 40, see figure 2) within the memory of the fault manager 33 which has a number of locations for a plurality of basic fault token signals. the high level fault signal noz(sat) is generated when three low level fault tokens are produced by the diagnostic module 32, these error signals being:

1. an error signal tanz (n,h) - this shows that the current tanz supplied to a servo-motor for the second stage nozzle of the turbine has moved from a normal value (n) to a high value (h). the high value is determined by the diagnostic module 32 as a value that is too high by comparing the value of the sensed parameter signal with a predetermined value stored in memory by the monitoring system 30.
2. an error signal tsrnz (n,h) - this shows that the reference signal (tsrnz) supplied to the second stage nozzles has grown from the normal value (n) to a value which is too high (h). again, this is determined by comparison of the sensed parameter tsrnz with a predefined threshold value stored in memory.
3. follows (tsnz-tsrnz) in 45 - this basic fault token signal indicates that the signal tsnz, representing the actual position of the second stage nozzle, did not reach the value of the reference signal tsrnz within 30 seconds of the change of the reference signal tsrnz (in fact the basic fault token signal shows that the change took place over 45 seconds, a period longer than the threshold value of 30 seconds preprogrammed in the memory of the monitoring system 30).

the temporal bucket 40 illustrated in figure 2 is a storage site which has three locations 41, 42 and 43: the first location 41 is a storage location for the tanz (n,h) basic fault token signal, the location 42 is a storage location for the tsrnz (n,h) basic fault token signal and the location 43 is the location for the follows (tsnz-tsrnz) in 45 basic fault token signal. when the temporal bucket 40 is full, the high level fault token signal noz(sat) is sent by the fault manager 33 to the display means 34. the basic fault token signals generated by the diagnostic module 32 are received by the fault manager 33 in a buffer 50 of fault token signals which are ordered in time.
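the storage site mechanism described above can be sketched in code as follows. this is an illustrative sketch only and not part of the patent text; the names TemporalBucket, offer, is_full and clear are hypothetical, chosen here to mirror the description of storage locations 41-43.

```python
# Illustrative sketch of a "temporal bucket": a storage site with one storage
# location per predetermined basic fault token signal. All names are
# hypothetical and not taken from the patent.

class TemporalBucket:
    """Storage site with one location per expected basic fault token."""

    def __init__(self, high_level_fault, expected_tokens):
        self.high_level_fault = high_level_fault
        # one storage location per expected token, initially empty
        self.locations = {token: False for token in expected_tokens}

    def offer(self, token):
        """Store a basic fault token if this bucket has a location for it."""
        if token in self.locations:
            self.locations[token] = True

    def is_full(self):
        """True when every storage location is filled -> raise the high level fault."""
        return all(self.locations.values())

    def clear(self):
        """Erase the bucket at the end of a clocked period."""
        for token in self.locations:
            self.locations[token] = False


# the noz(sat) bucket from the description: three locations (41, 42, 43)
bucket = TemporalBucket(
    "noz(sat)",
    ["tanz (n,h)", "tsrnz (n,h)", "follows (tsnz-tsrnz) in 45"],
)
for tok in ["tanz (n,h)", "tsrnz (n,h)", "follows (tsnz-tsrnz) in 45"]:
    bucket.offer(tok)
print(bucket.is_full())  # True -> fault manager emits noz(sat)
```

tokens without a matching location are simply ignored by the bucket, which reflects the description: each storage site only has locations for its own predetermined basic fault token signals.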
the fault manager 33 will operate in a clocked manner and will open each temporal bucket for receipt of basic fault token signals for a specified clocked period, before erasing the temporal bucket and recommencing collection of basic fault token signals. thus, for any higher level fault token signal to be generated, the respective temporal bucket in the fault manager 33 must be filled within a specified clocked period, i.e. a required time span. in figure 3 there are shown graphs of how various sensed parameters vary in a manner which indicates saturation of the second stage nozzle. the basic fault token signal tanz (n,h) is produced when the tanz signal exceeds an upper limit of 5 milliamps. the diagnostic module will also have a built-in dead band region of 0.1 milliamp, so that a further tanz token is not produced until the signal has moved 0.1 milliamps from its value when the basic fault token signal was first produced. the basic fault token signal tsrnz (n,h) is produced when the reference signal exceeds 5 milliamps, and again a dead band of 0.1 milliamps is used for the signal. the basic fault token signal tnh (n,h) will be produced when the actual speed of the compressor exceeds 5,120 r.p.m., the diagnostic module using a dead band of 5 r.p.m. the bb4 (n,h) signal will be generated when the vibration sensor produces a current with a value above 2 milliamps, the diagnostic module using a dead band of 0.5 milliamps for the vibration signal. the basic fault token signal follows (tsnz-tsrnz) is generated when the second stage nozzle actual position signal tsnz fails to follow the second stage nozzle reference signal tsrnz within 30 seconds, a tolerance of 0.5 seconds being applied to the 30 second stop.
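the limit comparison with a dead band described above can be sketched as follows. this is a minimal illustrative sketch assuming the dead band semantics described for tanz (upper limit 5 milliamps, dead band 0.1 milliamp); the name LimitDetector and its methods are hypothetical.

```python
# Hedged sketch of the diagnostic module's limit check with a dead band:
# a token is produced on the first excursion above the limit, and no further
# token is produced until the signal has moved by at least the dead band from
# the value at which the last token was produced. Names are illustrative.

class LimitDetector:
    def __init__(self, limit, dead_band):
        self.limit = limit
        self.dead_band = dead_band
        self.triggered_at = None  # value at which the last token was produced

    def check(self, value):
        """Return True when a new (n,h) basic fault token should be produced."""
        if value <= self.limit:
            self.triggered_at = None          # back in the normal band
            return False
        if self.triggered_at is None:
            self.triggered_at = value         # first excursion above the limit
            return True
        # suppress further tokens until the signal leaves the dead band
        if abs(value - self.triggered_at) >= self.dead_band:
            self.triggered_at = value
            return True
        return False


det = LimitDetector(limit=5.0, dead_band=0.1)   # tanz: 5 mA limit, 0.1 mA dead band
print(det.check(4.9))   # False - within limit
print(det.check(5.05))  # True  - first excursion, token produced
print(det.check(5.08))  # False - still inside the dead band
print(det.check(5.20))  # True  - moved more than 0.1 mA since last token
```

the same detector would be instantiated with 5 milliamps / 0.1 milliamps for tsrnz, 5,120 r.p.m. / 5 r.p.m. for tnh, and 2 milliamps / 0.5 milliamps for bb4.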
the basic fault token signal follows (tnl-tnr) is generated when the signal tnl, the turbine output shaft actual speed, fails to follow the signal tnr, the turbine output shaft reference speed, within 60 seconds, a tolerance of 10 seconds being used. when the monitoring system produces a high level fault token signal then the user of the monitoring system not only receives information regarding the high level fault but also receives information regarding the basic fault tokens which led to the conclusion of the high level fault and also receives additional information regarding low level faults which can be accounted for by a diagnosis of the high level fault. an example of a diagnosis screen is shown in figure 4 and in this figure it can be seen that nozzle saturation is noted as the fault and this was determined by the too high second stage nozzle current tanz, the too high second stage nozzle reference position (n,h) and the fact that the second stage nozzle actual position failed to follow the second stage nozzle reference position within 30 seconds. this screen also shows that the too high compressor shaft actual speed and the failure of the turbine output shaft actual speed to follow the turbine shaft reference speed in 1 minute can be accounted for by the saturation of the servo-motor for the second stage nozzle actuator. in a preferred embodiment, the information is provided to the user in a windowed environment and the user can "click" on a piece of text to access help pages which give information regarding the diagnosed faults and how to correct them. the fact that some basic fault token signals can be accounted for by a higher level fault diagnosis will be stored in the memory of the monitoring system, so that when a high level fault token signal is generated the monitoring system can then determine which other basic fault token signals are associated with the high level diagnosed fault.
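the stored association between a diagnosed high level fault and the basic fault tokens it can account for, used to build the diagnosis screen, can be sketched as a simple lookup (the table contents below are a hypothetical illustration using token spellings from the text):

```python
# Hypothetical association table: for each high level fault, the basic
# fault token signals that a diagnosis of that fault accounts for.
ACCOUNTED_FOR = {
    "noz (sat)": {"tnh (n,h)", "follows (tnl-tnr)"},
}

def related_tokens(high_level_fault, buffered_tokens):
    """Pick out the buffered basic fault tokens explained by the fault,
    preserving their order in the time-ordered buffer."""
    explained = ACCOUNTED_FOR.get(high_level_fault, set())
    return [t for t in buffered_tokens if t in explained]

buffered = ["tnh (n,h)", "bb4 (n,h)", "follows (tnl-tnr)"]
print(related_tokens("noz (sat)", buffered))
# -> ['tnh (n,h)', 'follows (tnl-tnr)']
```

tokens not in the table, such as bb4 (n,h) here, would instead be reported separately as unexplained signals from sensors near the failed component.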
the fault manager will continuously scan the buffer 50 to match and fill the temporal buckets defined in the fault manager 33. in the case of the temporal bucket for noz (sat), the defined time interval of a bucket is 45 seconds. once the fault manager has found that the temporal bucket is filled with the three signals tsrnz (n,h), tanz (n,h) and follows (tsnz-tsrnz) within 45 seconds then it will generate the high level fault token signal and will also scan the buffer for basic fault token signals present in the time span 1 minute 30 seconds around the 45 second time band to look for any basic fault token signals which can be accounted for by the diagnosed high level fault. the fault manager 33 can also look for other faults in the buffer which result from signals produced by sensors located in the same area as the diagnosed failed component, which are not directly accounted for by the failure. in the given example, a transition of the signal bb4 of the vibration sensor located close to the nozzles from a normal level to a high level has been noted by the basic fault token signal bb4 (n,h) and this basic fault token signal is noted in the display to the user (see figure 4). in the fault manager 33 there will be a number of different temporal buckets, each producing a respective high level fault signal. for instance, one temporal bucket could be set up to generate a high level fault signal which indicates a suspected coil failure on the run gas speed ratio valve of the gas turbine. this high level fault token signal would be generated when in a specified time period the temporal bucket was filled with a signal indicating that the running gas pressure was too high (e.g. a value of 12.27 bar being above a limit value of 12.25 bar) and a second basic fault token signal was received in the temporal bucket indicating that the run gas speed ratio valve current was too high (e.g. a value of 2.89 milliamps compared to a limit of 0.5 milliamps).
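the scan of the time-ordered buffer for a bucket that must fill within its 45 second interval can be sketched as follows (timestamps in seconds and the helper name are assumptions made for this illustration):

```python
def bucket_filled(events, required, window):
    """events: time-ordered list of (timestamp_seconds, token).
    Return True once every required token has occurred and the
    latest occurrences all fit within a `window`-second span."""
    latest = {}
    for t, token in events:
        if token in required:
            latest[token] = t  # keep the most recent occurrence
            if len(latest) == len(required):
                if max(latest.values()) - min(latest.values()) <= window:
                    return True
    return False

required = {"tanz (n,h)", "tsrnz (n,h)", "follows (tsnz-tsrnz) in 45"}
events = [(0, "tsrnz (n,h)"), (10, "tanz (n,h)"),
          (44, "follows (tsnz-tsrnz) in 45")]
print(bucket_filled(events, required, window=45))  # -> True
```

had the follows token arrived at 60 seconds instead, the span would exceed 45 seconds and the high level fault token would not be generated.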
various examples of basic fault token signals are as follows, some being given for arrangements in which the gas turbine is connected to an electricity generator and operating parameters of both the gas turbine and the electricity generator are measured: 1. a basic fault token signal would be generated when a change in position of the gas valve is indicated by a change in fprgout without a corresponding change within a defined time limit in fqg, the sensed gas fuel flow (although not all changes in fqg are necessarily occasioned by a change in fprgout); 2. a basic fault token signal would be generated when a change in compressor discharge temperature is indicated by a change in ctd, the compressor discharge temperature, without a corresponding change within a defined time limit in gas fuel flow indicated by fqg (although not all changes in gas fuel flow fqg are occasioned by a change in ctd); 3. a basic fault token signal would be generated when a change in sensed flame intensity indicated by the flame detector signal fd is not preceded within a defined time limit by a corresponding change in the gas control valve position reference signal fsrout; 4. a basic fault token signal would be generated when a change in sensed power output dwatt of a generator connected to the gas turbine is not preceded within a defined time limit by a corresponding change in any of ttws1, the first stage turbine wheel space temperature, ttws2, the second stage turbine wheel space temperature, or ttws3, the third stage turbine wheel space temperature, the signals ttws1, ttws2 and ttws3 being provided by appropriately placed sensors; 5. a basic fault token signal would be generated when a change in the fqg signal, which indicates the flow rate of fuel gas to the gas turbine, is not followed by a corresponding change in sensed power output dwatt; 6. 
a basic fault token signal would be generated when a change in sensed power output dwatt of a generator connected to the gas turbine is not preceded within a defined time limit by a corresponding change in the sensed flame intensity indicated by a change in fd; 7. a basic fault token signal would be generated when a change in sensed power output dwatt of a generator connected to the gas turbine is not preceded within a defined time limit by a corresponding change in exhaust temperature of the gas turbine as indicated by the signal ttxd; 8. a basic fault token signal would be generated when a change in the tgsd signal which indicates the stator winding temperature of a generator attached to the gas turbine is not preceded within a defined time limit by a change in the ctim signal, which indicates the maximum temperature of the air drawn into the compressor of the gas turbine; 9. a basic fault token signal would be generated if the tgsd signal, which is a signal indicating temperature of the stator of a generator attached to the gas turbine, changes without a related change in the signal ttxd, which indicates exhaust temperature of the turbine, within a predefined time limit; 10. a basic fault token signal would be generated if the btgj signal, which indicates a bearing temperature of a generator attached to the gas turbine, changes without a corresponding preceding change in the dtggc signal within a predefined time limit, the dtggc signal indicating the temperature of the cooling gas entering the generator; 11. a basic fault token signal would be generated if dwatt, the electrical power output, is less than zero for more than 2 seconds; 12. a basic fault token signal would be generated if dwatt, the electrical power output, is less than zero for more than 5 seconds; 13. 
a basic fault token signal would be generated when a change in the signal wqj, which indicates a change in the flow rate of steam to the combustors to control nox in the exhaust emissions of the gas turbine, is not preceded within a defined time limit by a corresponding change in exhaust temperature of the gas turbine as indicated by the signal ttxd; 14. a basic fault token signal would be generated when a change in the signal spsj, which indicates the injection pressure of a steam generator for generating steam to control nox in the emissions of the gas turbine, does not follow a corresponding change in the signal wqj, which indicates a change in the flow rate of steam to the combustors, within a predefined period; 15. a basic fault token signal would be generated if the signal ctd, which indicates compressor discharge temperature, changes without a corresponding change within a predefined time limit of the signal ctim, the signal indicative of maximum compressor inlet temperature; 16. a basic fault token signal would be generated if the signal ctd, which indicates the temperature of gasses leaving the compressor part of the gas turbine, is not followed by a corresponding change in ttxd, the exhaust temperature of the gas turbine; 17. a basic fault token signal would be generated if the signal ttxd, which indicates exhaust temperature of the gas turbine, is not preceded by a corresponding change in fpg2, the fuel gas intervalve pressure; 18. a basic fault token signal would be generated if the l4 signal changes from 1 to 0. the above noted basic fault token signals can be used to generate several high level fault tokens, by using appropriately structured temporal buckets. for instance, the following high level fault token signals could be produced: a. 
a high level fault token signal could be produced which causes a fault message to be displayed indicating "mechanical fault in the generator", such high level fault token signal being produced when a temporal bucket is filled within a prespecified clocked interval by: i) the basic fault token signal indicating a change in tgsd which is not preceded by a change in ctim (see paragraph 8 on page 13 above); and ii) the basic fault token signal indicating an increase in btgj without a preceding change in dtggc (see paragraph 10 on page 14 above). b. a high level fault token signal could be produced which causes a fault message to be displayed indicating "steam supply problem", such high level fault token signal being produced when a temporal bucket is filled within a prespecified clocked interval by: i) the basic fault token signal indicating a change in fd not preceded by a corresponding change in fsrout (see paragraph 3 on page 12 above); and ii) the basic fault token signal indicating that a change in spsj is not followed by a corresponding change in wqj (see paragraph 14 on page 15 above). c. a high level fault token signal could be produced which causes a fault message to be displayed indicating "air flow disturbance in the turbine affecting power output", such high level fault token signal being produced when a temporal bucket has two storage locations filled within a prespecified clocked interval by either of: i) the basic fault token signal indicating a change in dwatt which is not preceded by a corresponding change in fd (see paragraph 6, page 12 above); or ii) the basic fault token signal indicating a change in dwatt not preceded by a corresponding change in ttxd (see paragraph 7, page 12 above); at the same time that a third storage location remains unfilled by: iii) the basic fault token signal indicating a change in fqg without a corresponding change in dwatt (see paragraph 5, page 13 above). d. 
a high level fault token signal could be produced which causes a fault message to be displayed indicating "turbine combustion problem", such high level fault token signal being produced when a temporal bucket has two storage locations filled within a prespecified clocked interval by: i) the basic fault token signal indicating a change in fd which is not preceded by a corresponding change in fsrout (see paragraph 3, page 12); and ii) the basic fault token signal indicating a change in ttxd not preceded by a change in fpg2 (see paragraph 17, page 15). e. a high level fault token signal could be produced which causes a fault message to be displayed indicating "temperature change in the compressor, possible on line wash detected", such high level fault token signal being produced when a temporal bucket is filled within a prespecified clocked interval by: i) the basic fault token signal indicating a change in ctd without a corresponding change in ctim (see paragraph 15, page 15 above); and ii) the basic fault token signal indicating a change in ctd without a corresponding change in ttxd (see paragraph 16, page 15 above). f. a high level fault token signal could be produced which causes a fault message to be displayed indicating "minor reverse power", such high level fault token signal being produced when a temporal bucket is filled within a prespecified clocked interval of 2 seconds by: i) the basic fault token signal indicating a change in l4 from 1 to 0 (see paragraph 18 above); and ii) the basic fault token signal indicating that dwatt is less than zero for more than 2 seconds (see paragraph 11 above); and indeed if dwatt remains less than zero for more than 5 seconds from the change in l4 (see paragraph 12 above) then a second high level fault token signal triggers the message "major reverse power". 
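example (c) above describes a bucket that fires only while one of its locations remains unfilled. that combination of required and excluded tokens can be sketched as follows (the short token names are invented shorthand for the basic fault token signals of paragraphs 5 to 7, not identifiers from the patent):

```python
def air_flow_fault(present_tokens, any_of, none_of):
    """Sketch of example (c): fire when at least one of the `any_of`
    basic fault tokens arrived in the clocked interval while none of
    the `none_of` tokens did."""
    present = set(present_tokens)
    return bool(any_of & present) and not (none_of & present)

any_of = {"dwatt-not-after-fd", "dwatt-not-after-ttxd"}  # paragraphs 6, 7
none_of = {"fqg-without-dwatt"}                          # paragraph 5
print(air_flow_fault(["dwatt-not-after-fd"], any_of, none_of))  # -> True
print(air_flow_fault(["dwatt-not-after-fd", "fqg-without-dwatt"],
                     any_of, none_of))                          # -> False
```

the second call stays silent because the excluded fuel-flow token is present, which would point to a fuelling cause rather than an air flow disturbance.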
whilst above only selected high level fault tokens have been described, it will be appreciated that there will be many temporal buckets in the fault diagnosis module 33 with many associated high level faults to be diagnosed, the monitoring system storing in memory for each high level fault diagnosis a series of related basic fault tokens which can be accounted for by the high level fault and also a list of sensors located in the same area as the component of the gas turbine noted as failed. whilst the specific embodiments above have been described with reference to use of the monitoring system to monitor a gas turbine, the monitoring system could be applied to any dynamic plant apparatus, the word 'dynamic' being used in this sense to indicate that the plant apparatus has parameters which vary throughout operation with time.
132-406-387-101-830
US
[ "TW", "KR", "CN", "US" ]
C23C16/02,C23C16/30,C23C16/455,C23C16/52,H01L21/02,H01L21/762,C23C16/34,C23C16/40,C23C16/50
2019-02-20T00:00:00
2019
[ "C23", "H01" ]
cyclical deposition method including treatment step and apparatus for same
a method and apparatus for depositing a material on a surface of a substrate are disclosed. the method can include a treatment step to suppress a rate of material deposition on the surface of the substrate. the method can result in higher-quality deposited material. additionally or alternatively, the method can be used to fill a recess within the surface of the substrate with reduced or no seam formation.
1. a method of depositing a material on a surface of a substrate, the method comprising the steps of: providing the substrate in a reaction chamber; forming first active species from a first reactant comprising nitrogen to modify a surface of the substrate; and performing one or more deposition cycles to deposit the material, wherein each deposition cycle comprises: introducing a second reactant comprising silicon to the substrate, wherein the second reactant reacts with the surface to form chemisorbed material; and forming second active species from a third reactant comprising oxygen that reacts with the chemisorbed material to form deposited material, wherein a ratio of a number of steps of forming first active species and a number of deposition cycles ranges from about 1:1 to about 1:10. 2. the method of claim 1 , wherein the ratio of a number of steps of forming first active species and a number of deposition cycles ranges from about 1:1 to about 1:5. 3. the method of claim 1 , wherein the surface comprises silicon. 4. the method of claim 1 , wherein the first active species removes one or more of hydrogen and a hydroxyl group from the surface. 5. the method of claim 1 , wherein a flow of the first reactant is continuous during the step of forming first active species and the step of performing one or more deposition cycles. 6. the method of claim 5 , wherein the step of forming first active species does not include a purge step. 7. the method of claim 1 , further comprising a step of providing an inert gas to the reaction chamber. 8. the method of claim 7 , wherein the inert gas is provided continuously during the steps of forming first active species and performing one or more deposition cycles. 9. the method of claim 1 , wherein the first reactant comprises one or more of nitrogen, nh3, no, n2o, and no2. 10. the method of claim 1 , wherein the second reactant comprises silicon silanediamine. 11. 
the method of claim 1 , wherein the second reactant comprises one or more of a silane, an aminosilane, a siloxane amine, a silazane amine, an iodosilane, and a chloride. 12. the method of claim 1 , wherein the third reactant comprises one or more of water, hydrogen peroxide, and ozone. 13. the method of claim 1 , wherein a temperature of a substrate support within the reaction chamber is less than 450° c. 14. the method of claim 1 , wherein a power applied to electrodes to form the second active species is about 400 w to about 1500 w. 15. the method of claim 1 , wherein the steps of forming first active species and performing one or more deposition cycles are repeated until a recess on the surface is filled with the deposited material. 16. the method of claim 15 , wherein the deposited material within the recess is seamless. 17. a method of depositing material on a substrate surface, the method comprising the steps of: providing the substrate in a reaction chamber; forming first active species from a first reactant comprising nitrogen to modify a surface of the substrate; and performing one or more deposition cycles to deposit the material, wherein each deposition cycle comprises: introducing a second reactant comprising silicon to the substrate, wherein the second reactant reacts with the surface to form chemisorbed material; and forming second active species from a third reactant comprising oxygen that reacts with the chemisorbed material to form deposited material, wherein a ratio of a number of steps of forming first active species and a number of deposition cycles ranges from about 1:1 to about 1:10, wherein an inert gas is continuously provided to the reaction chamber during the steps of forming first active species and performing one or more deposition cycles, and wherein a flow of the first reactant is continuous during the step of forming first active species and performing one or more deposition cycles. 18. 
the method of claim 17 , wherein the first reactant is activated by plasma for greater than 3 seconds. 19. a semiconductor processing apparatus comprising: one or more reaction chambers for accommodating a substrate comprising a surface; a first source for a first reactant comprising nitrogen in gas communication via a first valve with one of the reaction chambers; a second source for a second reactant comprising silicon in gas communication via a second valve with one of the reaction chambers; a third source for a third reactant comprising oxygen in gas communication via a third valve with one of the reaction chambers; and a controller operably connected to the first, second, and third gas valves and configured and programmed to control: forming first active species from the first reactant to modify a surface of the substrate; and performing one or more deposition cycles to deposit material, wherein each deposition cycle comprises: introducing the second reactant to the substrate, wherein the second reactant reacts with the surface to form chemisorbed material; and forming second active species from the third reactant that react with the chemisorbed material to form deposited material, wherein a ratio of a number of steps of forming first active species and a number of deposition cycles ranges from about 1:1 to about 1:10.
cross-reference to related applications this application claims the benefit of u.s. provisional patent application no. 62/808,262 filed on feb. 20, 2019, the disclosure of which is incorporated herein in its entirety by reference. field of disclosure the present disclosure generally relates to methods and apparatus for manufacturing electronic devices. more particularly, the disclosure relates to methods and apparatus for depositing films during the formation of the electronic devices. background during manufacturing of electronic devices, such as integrated circuits, films or layers of material are often deposited onto a surface of a substrate. such films can be patterned and etched to form desired structures. additionally or alternatively, films can be deposited to fill recesses, such as vias, trenches, or spaces between fins, on a surface of a substrate. in the case of filling a recess, a typical film deposition process may be subject to drawbacks, including void formation in the recess. voids may be formed when the deposited material forms a constriction near a top of the recess before the recess is completely filled with the deposited material. such voids may compromise device isolation of the devices of an integrated circuit (ic) as well as the overall structural integrity of the ic. unfortunately, preventing void formation during recess fill may place size constraints on the recesses, which may limit device packing density of the ic. void formation may be mitigated by decreasing recess depth and/or tapering recess sidewalls, so that the openings of the recess are wider at the top than at the bottom of the recess. a trade-off in decreasing the recess depth may be reduced effectiveness of the device isolation, while the larger top openings of recesses with tapering sidewalls may use up additional ic real estate. such problems become increasingly severe as device dimensions are reduced. 
furthermore, it may be generally desirable to form films of relatively high quality—e.g., films having relatively high etch rates in, for example, hydrofluoric and/or phosphoric acid. accordingly, improved methods and apparatus for forming high-quality films and/or for filling a recess are desired. summary various embodiments of the present disclosure relate to methods of depositing a material onto a surface of a substrate and to apparatus for depositing the material. while the ways in which various embodiments of the present disclosure address drawbacks of prior methods are discussed in more detail below, in general, exemplary embodiments of the disclosure provide improved methods and apparatus for depositing high-quality material and/or to methods for seamlessly filling high aspect ratio recesses with the deposited material. as set forth in more detail below, exemplary methods can include a step of treating a surface of a substrate to inhibit or slow a growth rate of the deposited material. the growth-rate inhibition is thought to improve a quality of the deposited material and/or to facilitate seamless filling of a recess with the deposited material. additionally, high-quality material can be deposited, without post-treatment annealing of the deposited material that is otherwise often performed to improve the quality of the deposited material. in accordance with at least one embodiment of the disclosure, a method of depositing a material on a surface of a substrate surface includes the steps of: providing the substrate in a reaction chamber; forming first active species from a first reactant to modify a surface of the substrate; and performing one or more deposition cycles to deposit the material. 
each deposition cycle can include introducing a second reactant to the substrate, wherein the second reactant reacts with the surface to form chemisorbed material; and forming second active species from a third reactant that react with the chemisorbed material to form deposited material. a ratio of a number of steps of forming first active species and a number of deposition cycles can range from about 1:1 to about 1:10. in accordance with various aspects, a flow of the first reactant is continuous during and through the step of forming first active species and the step of performing one or more deposition cycles. in accordance with further aspects, the inert gas can be provided continuously during and through the steps of forming first active species and performing one or more deposition cycles. according to a further embodiment, there is provided a semiconductor processing apparatus to provide, for example, an improved or at least an alternative deposition method, such as a method described herein. in accordance with at least one embodiment of the disclosure, a semiconductor processing apparatus includes one or more reaction chambers for accommodating a substrate; a first source for a first reactant in gas communication via a first valve with one of the reaction chambers; a second source for a second reactant in gas communication via a second valve with one of the reaction chambers; a third source for a third reactant in gas communication via a third valve with one of the reaction chambers; and a controller operably connected to the first, second, and third gas valves and configured and programmed to control: forming first active species from a first reactant to modify a surface of the substrate; and performing one or more deposition cycles to deposit material. 
each deposition cycle can include: introducing a second reactant to the substrate, wherein the second reactant reacts with the surface to form chemisorbed material; and forming second active species from a third reactant that react with the chemisorbed material to form deposited material. a ratio of a number of steps of forming first active species and a number of deposition cycles ranges from about 1:1 to about 1:10. the controller can be further configured to provide inert gas continuously during the steps of forming first active species and performing one or more deposition cycles. additionally or alternatively, the controller can be configured to provide a flow of the first reactant continuously during the step of forming first active species and the step of performing one or more deposition cycles. the controller can additionally or alternatively be configured to provide a flow of the third reactant (e.g., continuously) from a treatment purge step and through the step of performing one or more deposition cycles. further, the apparatus as described herein can be used to perform one or more methods as described herein. in accordance with yet further exemplary embodiments of the disclosure, a semiconductor structure can be formed using a method and/or an apparatus as described herein. for purposes of summarizing the invention and the advantages achieved over the prior art, certain objects and advantages of the invention have been described herein above. of course, it is to be understood that not necessarily all such objects or advantages may be achieved in accordance with any particular embodiment of the invention. thus, for example, those skilled in the art will recognize that the invention may be embodied or carried out in a manner that achieves or optimizes one advantage or group of advantages as taught or suggested herein without necessarily achieving other objects or advantages as may be taught or suggested herein. 
these and other embodiments will become readily apparent to those skilled in the art from the following detailed description of certain embodiments having reference to the figures, the invention not being limited to any particular embodiment(s) disclosed. brief description of the drawing figures a more complete understanding of exemplary embodiments of the present disclosure can be derived by referring to the detailed description and claims when considered in connection with the following illustrative figures. fig. 1 illustrates a method for depositing a material in accordance with at least one embodiment of the disclosure. fig. 2 illustrates a process sequence in accordance with at least one embodiment of the present disclosure. fig. 3 illustrates a process sequence in accordance with at least one embodiment of the present disclosure. fig. 4 illustrates wet etch rate ratios of material deposited in accordance with at least one embodiment of the disclosure. fig. 5 illustrates scanning electron microscope images of recesses filled with deposited material in accordance with at least one embodiment of the present disclosure. fig. 6a illustrates schematic representation of a peald (plasma-enhanced atomic layer deposition) apparatus suitable for filling a recess in accordance with at least one embodiment of the present disclosure. fig. 6b illustrates a schematic representation of a precursor supply system using a flow-pass system (fps) usable in accordance with at least one embodiment of the present disclosure. fig. 7 illustrates transmission electron microscopy images for material deposited with various first reactant plasma activation times. it will be appreciated that elements in the figures are illustrated for simplicity and clarity and have not necessarily been drawn to scale. 
for example, the dimensions of some of the elements in the figures may be exaggerated relative to other elements to help improve understanding of illustrated embodiments of the present disclosure. detailed description of exemplary embodiments although certain embodiments and examples are disclosed below, it will be understood by those in the art that the invention extends beyond the specifically disclosed embodiments and/or uses of the invention and obvious modifications and equivalents thereof. thus, it is intended that the scope of the invention disclosed should not be limited by the particular disclosed embodiments described below. exemplary embodiments of the disclosure can be used to deposit material on a surface of a substrate. for example, exemplary methods and apparatus can be used to fill recesses, such as trenches, vias, and/or areas between fins, on a surface of a substrate. in accordance with examples of the disclosure, a treatment step is used to suppress a growth rate of a subsequently deposited film—e.g., by removal of hydrogen and/or hydroxyl groups from a surface of the substrate. it is thought that the suppression of the growth rate contributes to filling a recess, while mitigating or eliminating void and/or seam formation within the recess. in addition, the suppression of growth rate can contribute to deposition of higher-quality films, compared to films deposited using conventional techniques. further, the methods and apparatus can be used to deposit high-quality material, without a need for further post treatment, such as annealing, of the material. although methods described herein can be configured to reduce a deposition growth rate, as discussed in more detail below, various process steps can be configured, such that an overall process time to deposit the film is kept relatively low. 
as used herein, the term substrate may refer to any underlying material or materials that may be used to form, or upon which, a device, a circuit, or a film may be formed. a substrate can include a bulk material, such as silicon (e.g., single-crystal silicon) and can include one or more layers overlying the bulk material. further, the substrate can include various topologies, such as recesses, lines, and the like formed within or on at least a portion of a layer of the substrate. by way of examples, a substrate can include a material that includes hydrogen and/or hydroxyl group terminated sites. for example, the substrate can be or include silicon and/or silicon oxide with hydroxyl terminated groups and/or hydrogen terminated groups. as used herein, the term atomic layer deposition (ald) may refer to a vapor deposition process in which deposition cycles, typically a plurality of consecutive deposition cycles, are conducted in a process chamber. generally, during each cycle, a precursor is chemisorbed to a deposition surface (e.g., a substrate surface that can include a previously deposited material from a previous ald cycle or other material), forming about a monolayer or sub-monolayer of material that does not readily react with additional precursor (i.e., a self-limiting reaction). thereafter, in some cases, a reactant (e.g., another precursor or reaction gas) may subsequently be introduced into the process chamber for use in converting the chemisorbed precursor to the desired material on the deposition surface. the reactant can be capable of further reaction with the precursor. further, purging steps can also be utilized during each cycle to remove excess precursor from the process chamber and/or remove excess reactant and/or reaction byproducts from the process chamber after conversion of the chemisorbed precursor. 
further, the term atomic layer deposition, as used herein, is also meant to include processes designated by related terms, such as chemical vapor atomic layer deposition, atomic layer epitaxy (ale), molecular beam epitaxy (mbe), gas source mbe, or organometallic mbe, and chemical beam epitaxy when performed with alternating pulses of precursor(s)/reactive gas(es), and purge (e.g., inert carrier) gas(es). the terms reactant and precursor can be used interchangeably. turning now to the figures, fig. 1 illustrates a method 100 of depositing a material on a surface of a substrate in accordance with at least one embodiment of the disclosure. method 100 can be used to, for example, fill one or more recesses, sometimes referred to as gaps or features, created during manufacturing of a structure—e.g., structures formed during the manufacture of electronic devices. an opening at a top of a recess may be, for example, less than 40 or even 20 nm wide; a depth of the recess may be more than 40, 100, 200 or even 400 nm. an aspect ratio of the recesses can range from, for example, about 5:1 to about 30:1. method 100 can be a cyclic deposition process, such as an ald process. in the illustrated example, method 100 includes the steps of providing the substrate in a reaction chamber (step 102), forming first reactive species (step 104), and performing one or more deposition cycles (step 106). as illustrated, step 106 can be repeated a number of times, as illustrated by loop 108, prior to ending method 100. additionally or alternatively, steps 104 and 106 can be repeated (with step 106 optionally additionally repeated), as illustrated by loop 110.
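The loop structure just described (step 102, then treatment step 104 followed by one or more deposition cycles 106, repeated via loops 108 and 110) can be sketched as follows. The function and step labels are hypothetical illustrations, not part of the disclosure:

```python
# Illustrative sketch of the cyclic structure of method 100: step 102, then
# repeated passes of one treatment step (104) followed by several deposition
# cycles (106). Step labels are hypothetical stand-ins for the figure's steps.

def run_method_100(outer_loops, cycles_per_treatment):
    """Return the ordered list of steps for one run.

    outer_loops          -- repetitions of loop 110 (treatment + deposition block)
    cycles_per_treatment -- repetitions of loop 108 (step 106 per step 104)
    """
    steps = ["provide_substrate"]                  # step 102
    for _ in range(outer_loops):                   # loop 110
        steps.append("form_first_active_species")  # step 104 (treatment)
        for _ in range(cycles_per_treatment):      # loop 108
            steps.append("deposition_cycle")       # step 106
    return steps

# e.g., a 1:3 treatment-to-deposition ratio applied twice:
sequence = run_method_100(outer_loops=2, cycles_per_treatment=3)
```

With these arguments the run contains two treatment steps and six deposition cycles, i.e., an overall treatment-to-deposition ratio of 1:3.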
a ratio of step 104 (also referred to herein as a treatment step) and step 106 (also referred to herein as a deposition cycle) can be, for example, 1:1, 1:3, 1:5, 1:10 and any range between such values. providing the substrate in a reaction chamber step 102 includes providing a substrate to a reaction chamber for processing in accordance with method 100 . by way of example, a substrate can include a layer of or a layer including silicon and having at least one recess formed therein. additionally or alternatively, the substrate can include a layer of, for example, silicon oxide or photoresist. during step 102 , the substrate can be brought to a desired temperature for subsequent processing using, for example, a substrate heater and/or radiative or other heaters. a temperature during steps 102 - 106 can be less than 450° c. or less than 300° c., or range from about 20° c. to about 450° c. or about 50° c. to about 300° c. a pressure within the reaction chamber during steps 102 - 106 can be from about 1 torr to about 5 torr or about 2 torr to about 4 torr. during step 104 , a first active species from a first reactant is formed. the first reactive species can be used to modify a surface of a substrate—e.g., to slow a growth rate of a material deposited during step 106 . for example, the first active species can be used to passivate otherwise active/reactive sites on the surface of a substrate. as a result, a growth per cycle of deposited material on the surface of the substrate (e.g., a surface of a recess formed within the substrate) can be reduced, compared to a growth per cycle of deposited material on a surface (e.g., another portion of the surface or another substrate surface) that has not been treated. the active species can be formed using an in-situ or remote plasma. a plasma power during step 104 can range from about 400 w to about 1500 w or about 500 w to about 1000 w. 
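A small sketch of checking chamber conditions against the exemplary windows quoted above (about 20 °C to about 450 °C and about 1 torr to about 5 torr); the ranges are the text's examples, and the helper function is purely illustrative:

```python
# Check process conditions against the exemplary windows for steps 102-106.
# The windows are the example ranges quoted in the text; the helper itself
# is an illustrative sketch, not part of the disclosure.

TEMP_C_RANGE = (20.0, 450.0)        # exemplary substrate temperature window
PRESSURE_TORR_RANGE = (1.0, 5.0)    # exemplary chamber pressure window

def conditions_ok(temp_c, pressure_torr):
    """True if both temperature and pressure fall inside the exemplary windows."""
    t_lo, t_hi = TEMP_C_RANGE
    p_lo, p_hi = PRESSURE_TORR_RANGE
    return t_lo <= temp_c <= t_hi and p_lo <= pressure_torr <= p_hi

# e.g., 300 degC at 3 torr is inside both windows
```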
a pulse time and/or an on time for the plasma during step 104 can range from about 3 seconds to about 20 (e.g., 10) seconds, or about 1 second to about 10 (e.g., 5) seconds, or about 8 seconds to about 12 seconds, or about 3 seconds to about 7 seconds. in accordance with examples of the disclosure, the first reactant can comprise nitrogen or a gas comprising nitrogen. in accordance with further examples, the first reactant can include one or more of nitrogen, nh3, no, n2o, no2 and n2h4, or derivatives thereof. step 104 can include a first reactant purge sub step. during the first reactant purge sub step, excess reactant(s) and reaction byproducts, if any, may be removed from the reaction space/substrate surface, for example, by a purging gas pulse and/or vacuum generated by a pumping system. the purging gas can be any inert gas, such as, without limitation, argon (ar), nitrogen (n2) and/or helium (he). a phase is generally considered to immediately follow another phase if a purge (i.e., purging gas pulse) or other reactant removal step intervenes. a flowrate of a purge gas during the purge sub step can range from about 500 sccm to about 5000 sccm or about 1000 sccm to about 4000 sccm. a time of the gas flow during the purge sub step can be relatively short to facilitate relatively rapid deposition of material. by way of examples, a time of the gas flow during this purge sub step can be greater than 0 and less than 1 second or range from about 0.1 seconds to about 0.9 seconds or about 0.3 seconds to about 0.5 seconds. step 106 includes performing a deposition cycle, such as an ald deposition cycle. each deposition cycle can include introducing a second reactant to the substrate, wherein the second reactant reacts with the surface to form chemisorbed material, and forming second active species from a third reactant that react with the chemisorbed material to form deposited material.
a pressure within a reaction chamber during step 106 can be the same or similar to the pressure within the reaction chamber during any of steps 102 and 104 . by way of example, the pressure within the reaction chamber during step 106 can be about 1 torr to about 5 torr or about 2 torr to about 4 torr. the second reactant can be introduced to the reaction chamber to form chemisorbed material. the second reactant can include, for example, silicon. by way of examples, the second reactant can include one or more of silane amines (aminosilanes), siloxane amines and silazane amines. alternatively, the second reactant can include a halide, such as a chloride or an iodide (e.g., a chlorosilane or an iodosilane). by way of particular example, the second reactant can be or include a silanediamine, such as n,n,n′,n′-tetraethyl silanediamine. a pulse/flow time to introduce the second reactant to the reaction chamber can range from, for example, about greater than 0 to less than 1 second or about 0.1 to 0.5 (e.g., 0.2) seconds. the third reactant can be or include oxygen. by way of example, the third reactant can be or include one or more of water, hydrogen peroxide, and ozone. a pulse/flow time to introduce the third reactant to the reaction chamber can range from, for example, about greater than 0 to less than 1 second or about 0.1 to 0.5 (e.g., 0.3) seconds. during step 106 , a second active species is formed from the third reactant. the second active species can react with the chemisorbed material (e.g., formed using the second reactant) to form deposited material. the second active species may, for example, react with the chemisorbed material and remove ligands from the chemisorbed material to thereby form deposited material. the second active species can be formed using a direct plasma or a remote plasma unit. a power for producing the plasma can be, for example, between about 400 w and about 1500 w. 
similar to step 104 , step 106 can include one or more purge sub steps to purge the second and/or third reactants. during a second and/or third reactant purge sub step, excess reactant(s) and reaction byproducts, if any, can be removed from the substrate surface, for example, as described above. the purging sub steps under step 106 may be particularly desirable to mitigate any unwanted cvd reactions that might otherwise occur. a flowrate of a purge gas during the second and/or third reactant purge sub steps can range from about 500 sccm to about 5000 sccm or about 1000 sccm to about 4000 sccm. a time of the gas flow during the second and/or third reactant purge sub steps can range from about greater than 0 seconds to less than 1 second or from about 0.1 seconds to about 0.5 (e.g., 0.3) seconds after introducing the second reactant and can be greater than 0 seconds to less than 1 second or from about 0.1 seconds to about 0.5 (e.g., 0.2) seconds after introducing the third reactant. step 106 can include an additional purge—e.g., with the gas flow rates noted above for a period of about 1 to about 5 (e.g., about 2) seconds. fig. 2 illustrates a process sequence 200 in accordance with at least one embodiment of the disclosure. process sequence 200 can be suitable for use with method of depositing a material on a surface of a substrate 100 . fig. 2 illustrates on/off sequences for gas flow and for plasma power or for provision of active species. as illustrated, a deposition sequence 202 can include a treatment step 204 , a purge step 206 , a deposition cycle 208 , and a final purge step 210 . treatment step 204 can be repeated m times, where m ranges from about 1 to about 5 and deposition cycle 208 can be repeated n times, where n ranges from about 1 to about 25. a ratio of m:n can range from, for example, 1:1, 1:3, 1:5, 1:10 or anywhere between such values. 
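As a rough arithmetic check, the duration of one pass of deposition sequence 202 can be estimated from the exemplary sub-step times quoted above. All constants below are illustrative midpoints of the quoted ranges, not prescribed values:

```python
# Hypothetical timing estimate for one pass of deposition sequence 202.
# Constants are illustrative midpoints of the ranges given in the text.

TREATMENT_S = 10.0 + 0.4             # step 214 plasma (~10 s) + first reactant purge 216 (~0.4 s)
DEP_CYCLE_S = 0.2 + 0.3 + 0.3 + 0.2  # steps 218, 220, 222, 224

def sequence_202_seconds(m, n, inter_purge_s=5.0, final_purge_s=2.0):
    """Seconds for m treatment steps, a purge 206, n deposition cycles, and final purge 210."""
    return m * TREATMENT_S + inter_purge_s + n * DEP_CYCLE_S + final_purge_s

# m=1, n=3 gives roughly 20.4 s per sequence
```

The estimate illustrates why the treatment step, not the sub-second deposition pulses, dominates sequence time at low m:n ratios.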
further, deposition sequence 202 can be repeated a number of times (loop 226 ) until a desired thickness of material is deposited. a ratio of m:n can vary or remain the same for each iteration of loop 226 . step 204 can be the same or similar to step 104 and can follow step 102 . in the illustrated example, step 204 includes an optional initial purge step 212 , introduction or formation of first active species 214 , and first reactant purge step 216 . as illustrated, the supply of purge gas can be continuous throughout process sequence 200 . a gas for forming a first active species can be provided (e.g., only) during step 216 and the plasma power for forming the first active species can be activated (e.g., only) during step 214 . alternatively, the first reactant can be supplied during one or more (e.g., all) of steps 212 - 224 and 210 , as described in more detail in connection with fig. 3 . similarly, a third reactant can be supplied during one or more of steps 214 - 224 and 210 , and only activated during step 222 . step 216 can be the same or similar to first reactant purge step described above. during step 206 , another first reactant purge step can be used to facilitate removal of any unwanted material remaining from step 204 . a flowrate of a purge gas during step 206 can be the same as the first purge gas flowrate described above. a time for step 206 can range from about 1 to about 10 seconds or about 4 to about 6 seconds. step 208 can be the same or similar to step 106 , described above. as illustrated, each deposition cycle can include introduction of a second reactant (step 218 ), a second reactant purge (step 220 ), forming second active species from a third reactant (step 222 ), and a third reactant purge (step 224 ). steps 218 - 224 can be the same or similar to step 106 described above. process sequence 200 can include a final purge step 210 . 
the flowrate of a purge gas during step 210 can be the same or similar to the third reactant purge sub step described above. a time for step 210 can range from about 0.1 to about 10 seconds or about 1 to about 5 seconds. table 1 below illustrates exemplary temperatures, power, frequency, pressure, flowrates, and reactor conditions suitable for method 100 and/or process sequence 200. the ranges provided below illustrate examples of the disclosure. unless noted otherwise, the ranges are not meant to limit the scope of the disclosure.

table 1

  parameter          unit   exemplary range             exemplary range
  -- common --
  source precursor          one or more of a silane,    aminosilane
                            aminosilane, a siloxane
                            amine, a silazane amine,
                            an iodosilane, and a
                            chloride
  sus temp           ° c.   250-350                     50-450
  wall               ° c.   150-190                     50-200
  flange             ° c.   150-190                     50-200
  source temp        ° c.   25-35                       20-50
  rf power           w      700-900                     500-1000
  rf frequency       mhz    13-30                       13-30
  -- depo --
  pressure           torr   2-4                         1-5
  step time          sec    source feeding: 0.2-0.4     source feeding: 0.1-0.5
                            purge: 0.2-0.4              purge: 0.1-0.5
                            plasma: 0.2-0.4             plasma: 0.1-0.5
                            purge: 0.2-0.4              purge: 0.1-0.5
  purge ar           sccm   3000-3500                   1000-4000
  o2                 sccm   600-800                     400-1000
  carrier ar         sccm   800-1200                    500-1500
  -- treatment (n2) --
  pressure           torr   2-4                         1-5
  step time          sec    purge: 10-15                purge: 1.0-20.0
                            plasma: 8-12                plasma: 1.0-20.0
                            purge: 0.3-0.6              purge: 0.1-1.0
  purge ar           sccm   3000-3500                   1000-4000
  n2                 sccm   3000-5000                   orifice full open
  carrier ar         sccm   800-1200                    500-1500

fig. 4 illustrates wet etch rate ratios of silicon oxide, e.g., formed according to method 100 and/or process sequence 200, in lal 15 etchant for a period of 30 seconds. the removal rates are measured at a center and edge of a substrate for films formed according to process sequence 200. as illustrated, the higher the treatment to deposition ratio, the lower the wet etch rate of the deposited film. this indicates that higher-quality films can be formed using treatment to deposition ratios of between about 1:1 and about 1:10. fig. 5 illustrates material deposited—e.g., according to method 100 and/or process sequence 200 into a recess having a 20:1 aspect ratio. fig.
5 shows that, while the seam is significantly reduced using a treatment step as described herein, a relatively small seam can form with a treatment:deposition cycle ratio of about 1:10, but that no seam is observed at ratios of 1:5, 1:3, or 1:1. fig. 3 illustrates another process sequence 300 in accordance with at least one embodiment of the disclosure. method 100 can use process sequence 300 for depositing material on a surface of a substrate. process sequence 300 is similar to process sequence 200, except process sequence 300 includes fewer purge steps, and includes a continuous flow of a first reactant. the continuous flow of the first reactant is thought to contribute to a more stable process environment and to improve uniformity (e.g., composition and/or thickness) of the material deposited onto the substrate surface. similar to process sequence 200, process sequence 300 includes a deposition sequence 302 that includes a treatment step 304 and a deposition cycle/step 306. unlike process sequence 200, process sequence 300 does not include a purge step 206 or a final purge 210. this allows process sequence 300 to be relatively short, which, in turn, allows for relatively rapid deposition of high-quality deposited material and high through-put, which can be used to, for example, fill a recess within a substrate surface. treatment step 304 can be repeated m times, where m ranges from about 1 to about 5, and deposition cycle 306 can be repeated n times, where n ranges from about 1 to about 25. a ratio of m:n can range from, for example, 1:1, 1:3, 1:5, 1:10 or anywhere between such values. further, deposition sequence 302 can be repeated a number of times (loop 308) until a desired thickness of material is deposited. a ratio of m:n can vary or remain the same for each iteration of loop 308. as illustrated in fig.
3 , process sequence 300 can begin with forming a first active species from a first reactant step 310, wherein a first reactant and a purge gas are continuously provided to a reaction chamber. during step 310, a first reactant may be activated by rf power to form first active species, as described above in connection with fig. 1. a time for the plasma activation of the first reactant can range from greater than 3 seconds to about 10 (e.g., 5) seconds or about 4 seconds to about 8 seconds. plasma ignition time is also thought to be an important factor for seamless fill of deposited material in a recess, and can depend on various factors, including an aspect ratio of a feature and a ratio of m:n as defined above. for example, as illustrated in fig. 7, for an m:n ratio of 1:3 and for a feature having an aspect ratio of 20:1, a plasma ignition time of less than 3 seconds in each treatment step may leave a seam in the deposited material. in the illustrative example, a plasma ignition time of five seconds or more in each treatment step did not result in seam formation in the deposited material. thus, plasma ignition time may be based, at least in part, on aspect ratios of features on a surface of the substrate. during step 312, purge gas and first reactant are allowed to flow through the reaction chamber. exemplary flowrates of purge gas and first reactant are provided below in table 2. during deposition cycle 306, a second reactant can be introduced to the reaction chamber. a flowrate of the second reactant and a pulse time for the second reactant can be the same or similar to the flowrate of the second reactant during steps 106 and 218, described above in connection with figs. 1 and 2. the second reactant can then be purged during step 316 by allowing the first reactant, the purge gas, and optionally the third reactant to continue to flow, as illustrated.
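The reported relationship between plasma ignition time, feature aspect ratio, and seam formation can be captured in a toy predicate. The linear scaling rule and the 0.25 s-per-aspect-ratio-unit constant are assumptions chosen only to reproduce the single reported data point (20:1 feature, seam below 3 s, seam-free at 5 s or more); they are not from the disclosure:

```python
# Toy predicate for seam-free fill. The linear scaling with aspect ratio and
# the constant below are assumptions, tuned to the single example reported
# for a 20:1 feature at an m:n ratio of 1:3.

def seam_free(ignition_time_s, aspect_ratio, min_time_per_ar_unit=0.25):
    """Guess whether gap fill is seam-free for a given plasma ignition time."""
    return ignition_time_s >= aspect_ratio * min_time_per_ar_unit

# 20:1 feature: a 5 s ignition passes, a 3 s ignition does not,
# matching the example reported for fig. 7.
```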
when the third reactant is allowed to flow for additional steps (e.g., steps 312-316 and 320 in addition to step 318), the third reactant can be activated for a time period in step 318, such that the second active species formed from the third reactant, which react with the chemisorbed material to form deposited material, are formed (e.g., only) during step 318. alternatively, the third reactant can be flowed only during step 318. exemplary power levels and times for activation of the third reactant are provided below in table 2. table 2 below illustrates exemplary temperatures, power, frequency, pressure, flowrates, and reactor conditions suitable for method 100, process sequence 200 and/or process sequence 300. the ranges provided below illustrate examples of the disclosure. unless noted otherwise, the ranges are not meant to limit the scope of the disclosure.

table 2

  parameter          unit   exemplary range             exemplary range
  -- common --
  source precursor          one or more of a silane,    aminosilane
                            aminosilane, a siloxane
                            amine, a silazane amine,
                            an iodosilane, and a
                            chloride
  substrate temp     ° c.   250-350                     50-450
  wall               ° c.   150-190                     50-200
  flange             ° c.   150-190                     50-200
  source temp        ° c.   25-35                       20-50
  rf power           w      700-900                     500-1000
  rf frequency       mhz    13-30                       13-30
  -- depo --
  pressure           torr   2-4                         1-5
  step time          sec    source feeding: 0.2-0.4     source feeding: 0.1-0.5
                            purge: 0.2-0.4              purge: 0.1-0.5
                            plasma: 0.2-0.4             plasma: 0.1-0.5
                            purge: 0.1-0.3              purge: 0.1-0.5
  purge ar           sccm   3000-3500                   1000-4000
  o2                 sccm   600-800                     400-1000
  carrier ar         sccm   800-1200                    500-1000
  -- treatment (n2) --
  pressure           torr   2-4                         1-5
  step time          sec    plasma: 4.0-8.0             plasma: 4.0-10.0
                            purge: 0.2-0.6              purge: 0.1-1.0
  purge ar           sccm   3000-3500                   1000-4000
  n2                 sccm   3000-5000                   orifice full open
  carrier ar         sccm   800-1000                    500-1200

fig. 6a and fig. 6b illustrate a semiconductor processing apparatus 30 in accordance with exemplary embodiments of the disclosure.
semiconductor processing apparatus 30 includes one or more reaction chambers 3 for accommodating a substrate that can include a surface that can include a recess formed therein; a first source 21 for a first reactant in gas communication via a first valve 31 with one of the reaction chambers; a second source 22 for a second reactant in gas communication via a second valve 32 with one of the reaction chambers; a third source 25 for a third reactant in gas communication via a third valve 33 with one of the reaction chambers; an optional fourth source 26 (e.g., for a purge or carrier gas) in gas communication via a fourth valve 34 with one of the reaction chambers; and a controller 27 operably connected to the first, second, third, and optionally fourth gas valves and configured and programmed to control: forming first active species from a first reactant to modify a surface of the substrate and performing one or more deposition cycles to deposit material. each deposition cycle can include introducing a second reactant to the substrate, wherein the second reactant reacts with the surface to form chemisorbed material; and forming second active species from a third reactant that react with the chemisorbed material to form deposited material. a ratio of a number of steps of forming first active species and a number of deposition cycles ranges from about 1:1 to about 1:10. the fourth gas can be introduced with any of the first, second, and/or third reactants, and/or can be used as a purge gas as described herein. although not illustrated, semiconductor processing apparatus 30 can include additional sources and additional components, such as those typically found on semiconductor processing apparatus. optionally, semiconductor processing apparatus 30 is provided with a heater to activate the reactions by elevating the temperature of one or more of the substrate, the first, second and third reactants. 
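A minimal sketch of how controller 27 might sequence valves 31-34 through a treatment step and deposition cycles. The class, valve names, and pulse times are hypothetical illustrations; the disclosure does not specify any software interface:

```python
# Hypothetical controller sketch: sequences the four valves of apparatus 30
# through one treatment step and several deposition cycles. Names and times
# are illustrative only.

class Controller:
    def __init__(self):
        self.log = []  # (valve, seconds) pulses, in order

    def pulse(self, valve, seconds):
        self.log.append((valve, seconds))

    def treatment_step(self, plasma_s=5.0):
        self.pulse("valve_31_first_reactant", plasma_s)   # e.g., nitrogen plasma

    def deposition_cycle(self):
        self.pulse("valve_32_second_reactant", 0.2)  # precursor pulse
        self.pulse("valve_34_purge", 0.3)
        self.pulse("valve_33_third_reactant", 0.3)   # oxygen reactant step
        self.pulse("valve_34_purge", 0.2)

ctl = Controller()
ctl.treatment_step()
for _ in range(3):          # a 1:3 treatment-to-deposition ratio
    ctl.deposition_cycle()
```

The log then holds one treatment pulse followed by twelve deposition sub-step pulses, mirroring the 1:3 ratio discussed above.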
exemplary single wafer reactors, designed specifically to perform cyclic or ald processes, are commercially available from asm international nv (almere, the netherlands). exemplary batch ald reactors, designed specifically to perform ald processes, are also commercially available from asm international nv. semiconductor processing apparatus 30 may be provided with a radiofrequency source operably connected with the controller and constructed and arranged to produce a plasma of at least one of the first, second and/or third reactants, or a combination thereof. the plasma enhanced atomic layer deposition (peald) may be performed in a reactor available from asm international nv of almere, the netherlands, which apparatus comprises a plasma source to activate one or more of the reactants. process steps with a plasma may be performed using semiconductor processing apparatus 30, desirably in conjunction with controls programmed to conduct the sequences described herein, usable in at least some embodiments of the present disclosure. in the apparatus illustrated in fig. 6a, by providing a pair of electrically conductive flat-plate electrodes 4, 2 in parallel and facing each other in the interior 11 (reaction zone) of reaction chamber 3, applying rf power (e.g., 13.56 mhz or 27 mhz) from a power source 20 to one side, and electrically grounding the other side 12, a plasma is excited between the electrodes. a temperature regulator can be provided in a lower stage 2 (the lower electrode), and a temperature of substrate 1 placed thereon can be kept at a relatively constant temperature. the upper electrode 4 can serve as a shower plate as well, and reactant gas (and optionally an inert gas, such as a noble gas) and/or purge gasses can be introduced into the reaction chamber 3 through gas lines 41-44, respectively, and through the shower plate 4.
additionally, in the reaction chamber 3, a circular duct 13 with an exhaust line 7 is provided, through which gas in the interior 11 of the reaction chamber 3 is exhausted. additionally, a lower portion 5 of the reaction chamber 3—e.g., disposed below an upper portion 45 of the reaction chamber 3—is provided with a seal gas line 24 to introduce seal gas into the interior 11 of the reaction chamber 3 via the lower space 16 of the lower portion 5 of the reaction chamber 3, wherein a separation plate 14 for separating the reaction zone between an upper electrode 4 and a lower stage 2 and the lower space 16 is provided (a gate valve through which a wafer is transferred into or from the lower portion of the reaction chamber 3 is omitted from this figure). the lower portion of the reaction chamber 5 is also provided with an exhaust line 6. in some embodiments, the deposition of a multi-element film and a surface treatment (e.g., steps 104-108) are performed in the same reaction space, so that all the steps can continuously be conducted without exposing the substrate to air or other oxygen-containing atmosphere. in some embodiments, a remote plasma unit can be used for exciting a gas—e.g., from one or more of sources 21, 22, 25, and/or 26. for the apparatus depicted in fig. 6a, a system for switching between flow of an inactive gas and flow of a precursor or reactant gas is illustrated in fig. 6b; in some embodiments, this system can be used to introduce the precursor or reactant gas in pulses without substantially fluctuating the pressure of the reaction chamber. fig. 6b illustrates a precursor supply system using a flow-pass system (fps) according to an embodiment of the present disclosure (black valves indicate that the valves are closed). as shown in (a) in fig. 6b, when feeding a precursor to a reaction chamber (not shown), first, a carrier gas such as ar (or he) flows through a gas line with valves b and c, and then enters a bottle (reservoir) 20.
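The two fps valve configurations of fig. 6b (the feed path described here and the bypass path described next) reduce to simple open/closed sets. The open and closed states per valve letter are taken from the text; the dict-and-helper form is an illustrative sketch, not part of the disclosure:

```python
# Valve states (True = open) for the two fps modes of fig. 6b.
# Feed mode (a): carrier gas enters the bottle via b, c and exits via f, e.
# Bypass mode (b): valves b, c, d, e, and f are closed.

FEED_PRECURSOR = {"a": False, "b": True, "c": True, "d": False, "e": True, "f": True}
BYPASS_BOTTLE = {"a": True, "b": False, "c": False, "d": False, "e": False, "f": False}

def flows_through_bottle(states):
    """Carrier gas picks up precursor only if inlet (b, c) and outlet (f, e) valves are open."""
    return all(states[v] for v in "bcef")
```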
the carrier gas flows out from the bottle 20 while carrying a precursor gas in an amount corresponding to a vapor pressure inside the bottle 20 and flows through a gas line with valves f and e, and is then fed to the reaction chamber together with the precursor. in this case, valves a and d are closed. when feeding only the carrier gas (e.g., noble gas) to the reaction chamber, as shown in (b) in fig. 6b, the carrier gas flows through the gas line with the valve while bypassing the bottle 20. in this case, valves b, c, d, e, and f are closed. a reactant may be provided with the aid of a carrier gas. a plasma for deposition may be generated in situ, for example, using one or more gasses that flow—e.g., continuously throughout the deposition cycle. in other embodiments, a plasma may additionally or alternatively be generated remotely and active species provided to the reaction chamber. in some embodiments, a multi chamber reactor (more than two sections or compartments for processing wafers disposed closely to each other) can be used, wherein a reactant gas and an inert gas, such as a noble gas, can be supplied through a shared line, whereas a precursor gas can be supplied through unshared lines; alternatively, a precursor gas can be supplied through shared lines. an apparatus can include one or more controller(s), such as controller 27, programmed or otherwise configured to cause the deposition processes described herein to be conducted. the controller(s) can communicate with the various power sources, heating systems, pumps, robotics, and gas flow controllers or valves of the reactor. it is to be understood that the configurations and/or approaches described herein are exemplary in nature, and that these specific embodiments or examples are not to be considered in a limiting sense, because numerous variations are possible. the specific routines or methods described herein may represent one or more of any number of processing strategies.
thus, the various acts illustrated may be performed in the sequence illustrated, in other sequences, or omitted in some cases. the subject matter of the present disclosure includes all novel and nonobvious combinations and sub-combinations of the various processes, systems, and configurations, and other features, functions, acts, and/or properties disclosed herein, as well as any and all equivalents thereof.
relevant_id: 132-497-707-338-041
earliest claim jurisdiction: FR
jurisdictions: DE, AU, CA, AT, FR, US, EP, WO, ES
ipcr codes: A62C99/00, A62C3/02, A62C31/02, A62C37/08, B05B7/06
earliest claim date: 2001-11-22
earliest claim year: 2001
ipcr sections: A62, B05
device for protecting premises in particular a tunnel against fire
the invention concerns a device for protecting premises, in particular a tunnel, against fire, consisting of a circular housing open at its two ends and designed to be fixed in the upper part of the premises to be protected such that its median axis (xx′) is substantially vertical, said housing comprising a substantially cylindrical lower part and an upper part substantially tapered in the shape of a bell, and containing part of vacuum-producing means capable of accelerating air circulation in and around the housing and spray means capable of injecting fine water droplets in the resulting circulating air.
1 . a device for protecting premises, in particular a tunnel, against fire, characterized in that it is constituted by a housing generated by revolution which is open at its two ends and which is to be fixed in position at the upper portion of the premises to be protected in such a manner that its median axis (xx′) is substantially vertical, the housing comprising a substantially cylindrical lower portion and an upper portion narrowed substantially in the shape of a bell, and containing, on the one hand, vacuum-producing means capable of accelerating the circulation of air in and around the housing and, on the other hand, atomizing means capable of injecting fine droplets of water into the air thus set in circulation. 2 . a device according to claim 1 , characterized in that the vacuum-producing means are constituted by a disc which is centered on the median axis (xx′) of the housing, is mounted substantially perpendicularly to that axis and is driven in rotation at high speed about that axis by a driving motor, the disc being equipped on its upper face with a series of blades that have a variable cross-section and that extend into the narrowed upper portion of the housing in such a manner as to create a partial vacuum and to enable air to be sucked in from the upper portion of the housing towards the lower portion thereof. 3 . a device according to claim 2 , characterized in that the driving motor is constituted by an action or reaction water turbine such as a water turbine of the pelton type or a water turbine of the francis type. 4 . a device according to claim 3 , characterized in that the atomizing means are constituted by a cylindrical chamber mounted securely on the rotary disc in such a manner as to recover at least some of the water used in the turbine, the chamber being equipped on its peripheral wall with calibrated orifices permitting the injection of fine droplets of water into the air set in circulation. 5 . 
a device according to claim 4 , characterized in that the housing is equipped with a perforated fixed cylinder mounted opposite the orifices of the centrifugal chamber in order to facilitate the generation of the droplets of water. 6 . a device according to claim 1 , characterized in that it comprises a fire detector cooperating with the atomizing means in order to control the injection of fine droplets of water into the air set in circulation.
background

the present invention relates to a device for protecting premises, in particular a tunnel, against fire.

summary

the damage both human and material that can be caused by fire need hardly be described and specialists have long been attempting to propose means for combating this scourge. to that end, the authorities have made it compulsory to equip some buildings, used privately or professionally, or some public places, with various types of extinguisher whose efficiency is unfortunately all too often inadequate. by way of example, extinguishing devices have already been proposed for the purpose that operate by atomizing water and that are equipped with heads that generate droplets and that are supplied either with pressurized water or by a jet of water sheared by a gas, such as air. such devices operate in the following manner: owing to the heat which it releases, fire creates a temperature gradient which brings about a circulation of air promoting its propagation, because a fire cannot be propagated without the consumption of oxygen. if fine droplets of water are injected, especially in the form of a spray, into the air circulating in the vicinity of a fire, they are sucked into the flames and evaporated, thus enabling the flames to be extinguished through lack of oxygen. however, as a general rule, conventional heads of the above-mentioned type for generating water droplets have a flow rate of only from 2 to 20 l/min and can protect efficiently only a maximum volume of the order of 500 m3 with a height limitation of the order of 7 m, which is insufficient, in particular for the protection of high-risk or large premises, such as, for example, tunnels. the object of the present invention is to propose a device for protecting premises against fire, which device has a markedly improved efficiency.
the device is characterized in that it is constituted by a housing generated by revolution which is open at its two ends and which is to be fixed in position at the upper portion of the premises to be protected in such a manner that its median axis is substantially vertical. according to the invention, the housing comprises a substantially cylindrical lower portion and an upper portion narrowed substantially in the shape of a bell, and contains, on the one hand, vacuum-producing means capable of accelerating the circulation of air caused by the temperature gradient and by the thermal exchanges, especially by convection effect, and, on the other hand, atomizing means capable of injecting fine droplets of water into the air set in circulation. the basic feature of the device according to the invention is associated with the presence of the vacuum-producing means which create a major partial vacuum by the application of the bernoulli principle and thus permit the channelling and acceleration of the natural circulation of air generated by the fire owing to the presence of a temperature gradient. if the air thus set in circulation is charged with fine droplets of water, the evaporation of the water is also accelerated, thus enabling the fire to be rapidly extinguished through lack of oxygen. it should be noted that the device according to the invention may in some special cases correspond to semi-fixed equipment on a fire engine, in particular mounted at the end of an articulated arm or at the end of a telescopic arm and supplied with pressurized water. such a device, which can be introduced above the seat of a fire or into the vicinity thereof, makes it easier for firemen to get closer in order to fight the fire. 
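the bernoulli effect invoked above can be put in rough numbers. the sketch below is illustrative only: the air speed and density are assumed values, not figures from the patent.

```python
# Illustrative sketch of the partial vacuum predicted by Bernoulli's
# principle for air accelerated inside the housing. All values are
# assumed for illustration; the patent gives no operating figures.

RHO_AIR = 1.2  # kg/m^3, air density near ambient conditions

def bernoulli_pressure_drop(velocity_m_s, rho=RHO_AIR):
    """Static-pressure reduction (Pa) when air is accelerated from rest
    to the given velocity: dp = 1/2 * rho * v^2 (Bernoulli)."""
    return 0.5 * rho * velocity_m_s ** 2

# air accelerated to 30 m/s by the rotating blades (assumed value):
dp = bernoulli_pressure_drop(30.0)
print(f"partial vacuum: {dp:.0f} Pa below ambient")  # → 540 Pa
```

the faster the blades drive the air, the deeper the partial vacuum, which is why the device channels and accelerates the natural convection flow rather than relying on it alone.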
according to a preferred feature of the invention, the vacuum-producing means are constituted by a disc which is centered on the median axis of the housing, is mounted substantially perpendicularly to that axis and is driven in rotation at high speed about that axis by a driving motor. according to the invention, the disc is equipped on its upper face with a series of blades that have a variable cross-section and that extend into the narrowed upper portion of the housing in such a manner as to create a substantial partial vacuum and to enable air to be sucked in from the upper portion of the housing towards the lower portion thereof. the use of rotary discs having a diameter of the order of one metre has afforded highly satisfactory protection of large premises. in order to protect a tunnel, a housing of that type can advantageously be put in place approximately every 50 metres. such a configuration of the vacuum-producing means has the advantage of having a low manufacturing cost. the driving motor of the disc can of course be of any type without departing from the scope of the invention and can operate with any source of power (electricity, gas, air, . . . ). however, it is particularly advantageous to use for the purpose an action or reaction water turbine, such as a water turbine of the pelton type or a water turbine of the francis type. the atomizing means too may be of any type and, by way of example, they may be constituted by conventional nozzles although the latter have the disadvantage of requiring the use of high-pressure pumps or of bottles of air. therefore, and according to a preferred feature of the invention, the atomizing means are constituted by a cylindrical centrifugal chamber mounted securely on the rotary disc in such a manner as to recover at least some of the water used in the turbine. 
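for a sense of the power available from the pelton- or francis-type turbine mentioned above, the usual hydraulic-power relation can be sketched as follows; the flow rate, head, and efficiency are assumed illustrative values, not specifications from the patent.

```python
# Rough order-of-magnitude sketch of the shaft power available to spin
# the disc from a water turbine. Flow, head, and efficiency are assumed.

RHO_WATER = 1000.0  # kg/m^3
G = 9.81            # m/s^2

def turbine_shaft_power(flow_m3_s, head_m, efficiency=0.85):
    """Shaft power (W) of a hydraulic turbine: P = eta * rho * g * H * Q."""
    return efficiency * RHO_WATER * G * head_m * flow_m3_s

# 1 l/s of water falling through a 100 m head (assumed values):
p = turbine_shaft_power(0.001, 100.0)
print(f"available shaft power: {p:.0f} W")
```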
according to the invention, the chamber is equipped, on its peripheral wall, with calibrated orifices permitting the injection of fine droplets of water into the air set in circulation. bearing that configuration in mind, the water penetrating into the centrifugal chamber rotating at high speed is pressurized, thrown against the lateral walls of the chamber under the action of the centrifugal force and atomized in the form of fine droplets at the calibrated orifices. the water can therefore thus be injected into the air set in circulation by the vacuum-producing means, the speed of which is sufficiently high to enable the droplets to be sucked in. according to the invention, the housing may also be equipped with a perforated fixed cylinder mounted opposite the orifices of the centrifugal chamber in order to facilitate the generation of the droplets of water. according to another feature of the invention, the device comprises a fire detector co-operating with the atomizing means in order to control the injection of fine droplets of water into the air set in circulation. according to a first variant, the detector may be a detection head of the sprinkler type which is known per se and which is constituted by a glass tube which contains a gas and which is fixed in position at the lower portion and the median portion of the housing. under the effect of the heat generated by a fire, such a tube explodes so as to free a mechanism capable of controlling the actuation of the atomizing means, in particular capable of opening the feed valve of the water turbine jets. according to a second variant, such a detector may be constituted by a glass tube of the above-mentioned type which is positioned at the site most favourable for detecting fires and which is located on the control circuit of a diaphragm valve mounted upstream of the water supply duct for the atomizing means. 
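the pressurization of the water in the rotating chamber follows from solid-body rotation of the liquid; the chamber radius and rotation speed in the sketch below are assumed values.

```python
import math

# Sketch of the pressure developed at the wall of the rotating
# centrifugal chamber, which drives water through the calibrated
# orifices. Radius and speed are assumed illustrative values.

RHO_WATER = 1000.0  # kg/m^3

def centrifugal_wall_pressure(radius_m, rpm, rho=RHO_WATER):
    """Pressure rise (Pa) from the rotation axis to the chamber wall for
    a liquid in solid-body rotation: p = 1/2 * rho * omega^2 * r^2."""
    omega = 2.0 * math.pi * rpm / 60.0  # angular speed, rad/s
    return 0.5 * rho * omega ** 2 * radius_m ** 2

# a 0.2 m radius chamber spun at 3000 rpm (assumed values):
p = centrifugal_wall_pressure(0.2, 3000)
print(f"wall pressure: {p / 1e5:.1f} bar")
```

even modest rotation speeds therefore develop pressures of several bar at the orifices, which is what allows fine droplets to be produced without high-pressure pumps or bottled gas.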
according to the invention, it is also possible to consider injecting atomized water or droplets of water into the air sucked in, directly at the upper portion of the housing in order to cool that air, which is at a high temperature. in that case, it is advantageous to equip the housing, at its internal portion, with a radial chamber concentric with the blades in order to recover some of the water which has been injected upstream under the action of the centrifugal force. during its injection into the air, the water has become charged with particles produced by combustion, thus providing slight filtering of the fumes. it should be noted that the use of the device, to which the invention relates, in a tunnel, in particular one of great length, is particularly advantageous because, in a tunnel, there is generally a circulation of air which may be natural but which may also be forced by means of mechanical devices, such as fans. when there is a fire in a tunnel, the pressure and the temperature increase locally. the local excess pressure prevents any ventilation of the tunnel, and the fans, when present, become ineffective given that the fire has created a hot air lock. the device to which the invention relates enables that disadvantage to be remedied owing to the major partial vacuum created, the effect of which is to channel the circulation of the air, while charging it with fine droplets of water. the air is therefore cooled, and its density increases. the air lock is consequently reduced and the mechanical or natural ventilation becomes effective again. it is thus possible to protect a tunnel, even one of great length, against fire by the regular distribution of a series of devices according to the invention along the tunnel, at its upper portion. those devices do not necessarily enable a fire to be extinguished but they still have the advantage of preventing the creation of heat locks and permitting the intervention of firemen and the evacuation of people. 
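the statement that cooling the air increases its density (and so dissolves the hot-air lock) is a direct consequence of the ideal-gas law at constant pressure; the temperatures in the sketch below are assumed values.

```python
# Sketch of why evaporative cooling reduces the hot-air lock: at
# constant pressure, air density scales inversely with absolute
# temperature (ideal-gas law). Temperatures are illustrative.

P_ATM = 101325.0    # Pa
R_SPECIFIC = 287.0  # J/(kg K), specific gas constant of dry air

def air_density(temp_celsius, pressure=P_ATM):
    """Density (kg/m^3) of air from the ideal-gas law: rho = p / (R T)."""
    return pressure / (R_SPECIFIC * (temp_celsius + 273.15))

hot = air_density(400.0)    # air heated by the fire (assumed 400 C)
cooled = air_density(60.0)  # after evaporative cooling (assumed 60 C)
print(f"density rises from {hot:.2f} to {cooled:.2f} kg/m^3 on cooling")
```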
it should also be noted that tunnels are as a general rule surmounted by hills or mountains. in this respect it is therefore advantageous to create, in an upper region, a storage basin of sufficient capacity whose water can be used to supply the atomizing means, in particular the ducts for supplying the jets of the water turbines of devices according to the invention for protection against fire. for the difference in level enables the water to be pressurized. it is thus possible to obtain an efficient and reliable system of protection against fire whose maintenance is easy and whose frost-proofing is simplified. brief description of the drawing the features of the device for protection against fire to which the invention relates will be described in more detail with reference to the appended figure. fig. 1 is a diagrammatic view of a non-limiting example of the configuration of such a device. detailed description for the sake of clarity, in fig. 1 the fixed elements have been represented with a fine line and the rotary elements have been represented with a thicker line. according to fig. 1 , the protection device comprises a housing 1 generated by revolution which is open at its two ends and which is to be fixed in position at the upper portion of premises to be protected in such a manner that its median axis x-x′ is substantially vertical. to be more precise, the housing 1 comprises a cylindrical lower portion 10 which is extended at the top by a portion 11 which is narrowed in the shape of a bell. the housing 1 also contains, at its internal portion, a turbine 2 of the pelton type whose jets 15 are supplied with pressurized water in accordance with the arrow a by an axial duct 3 and whose bucket wheel 4 is fixedly joined to a disc 5 which is centered on the median axis x-x′ of the housing 1 and mounted substantially perpendicularly to that axis. the rotary disc 5 is therefore driven in rotation at high speed by the turbine 2 . according to fig. 
1 , the rotary disc 5 is mounted in the junction region between the cylindrical lower portion 10 and the upper portion 11 , narrowed in the shape of a bell, of the housing 1 and is equipped on its upper face remote from the turbine 2 with a series of blades 6 that have a variable cross-section and that extend into the narrowed upper portion 11 of the housing 1 . the housing 1 also contains a cylindrical centrifugal chamber 7 which is driven in rotation at high speed by the bucket wheel 4 with the rotary disc 5 and which is mounted in such a manner as to recover at least some of the water used in the turbine 2 . the centrifugal chamber 7 is equipped on its peripheral wall 8 with a series of calibrated orifices 9 . the housing 1 is also equipped with a perforated fixed cylinder 12 , which is mounted opposite the calibrated orifices 9 of the centrifugal chamber 7 , and also with two grids 13 , 14 which are mounted at the open ends of the housing 1 in order to act as filtration members. the mode of operation of the device is the following: the high-speed rotation of the blades 6 creates at the internal portion of the housing 1 a partial vacuum which causes air to be sucked in from the top towards the bottom in accordance with the arrows b. that air, which circulates in accordance with the arrows c at the internal portion of the housing 1 , passes opposite the calibrated orifices 9 of the centrifugal chamber 7 , before passing back up again in accordance with the arrows d along the external periphery of the housing 1 . forced circulation of air in and around the housing 1 is thus created. at the same time, under the action of the centrifugal force, the water coming from the turbine 2 is thrown against the lateral walls 8 of the chamber 7 and atomized at right-angles to the calibrated orifices 9 of that chamber in order to be injected into the air set in circulation in accordance with the arrows c opposite those orifices. 
the fixed tube 12 located at right-angles to the calibrated orifices 9 facilitates the generation of the droplets.
133-572-081-434-165
US
[ "EP", "JP", "DE", "US" ]
H03F1/26,H03F1/30,H03F1/42,H03F1/52,H03F3/20,H03F3/30
1983-05-18T00:00:00
1983
[ "H03" ]
enhanced-accuracy semiconductor power amplifier.
a dc amplifier uses complementary npn and pnp output transistors on n-type (26) and p-type (24) substrates respectively. the output transistors (24b, 26b) are connected in an emitter-follower configuration with no emitter resistor to prevent thermal runaway. instead, emitter-follower driver-stage transistors (24a, 26a) are provided on the same substrates as the output transistors to force a reduction in the bias voltage on the output stage when the temperature of an output transistor increases. this circuit prevents thermal runaway and temperature-dependent offsets without emitter resistors, which would increase output impedance, and without feedback from the output stage to the input stage, which would slow the response of the amplifier. additionally, compensation-network transistors (22b, 24c, 26c) are provided to eliminate offsets resulting from driver- and output-transistor base-to-emitter voltage differences caused not only by temperature differences between the transistors on different substrates but also by manufacturing variations. the compensation-network transistors, like the driver-stage transistors, are on the same substrates as the output transistors. they are connected to provide a compensation voltage that is equal to the difference between the base-to-emitter voltages of transistors on the two substrates. the compensation voltage is used to cancel offsets that would otherwise result from such differences.
1. a transistor amplifier circuit comprising a unity-gain amplifier including one amplifier transistor or a combination of more than one amplifier transistor (22a, 24a, and 26b; 22a, 26a, and 24b) connected horizontally in series, each amplifier transistor being formed on a semiconductor substrate (22, 24, and 26) associated therewith and being connected in an emitter-follower configuration to form a signal train in which signals appearing at the base of an amplifier transistor are reproduced at the emitter thereof with unity gain but with an offset equal to the base-to-emitter voltage of that amplifier transistor, characterized by : a subtraction circuit (28), including a reference transistor (22b, 24c, and 26c) associated with each amplifier transistor and forward biased to have substantially the same base-to-emitter voltage as its associated amplifier transistor, each reference transistor being formed on the same substrate as its associated amplifier transistor so that its base-to-emitter voltage substantially tracks that of its associated amplifier transistor with changes in temperature, the subtraction circuit being connected to subtract from a voltage in the signal train of the unity-gain amplifier the base-to-emitter voltage of each reference transistor and thereby cancel the offset of each amplifier transistor. 2. 
an amplifier circuit as defined in claim 1 wherein the amplifier transistors include first and second npn transistors (26a and b) formed on a first semiconductor substrate (26) of n-type semiconductor material and first and second pnp transistors (24a and b) formed on a second semiconductor substrate (24) of p-type semiconductor material, said second npn and pnp transistors forming an output stage in which their emitters are connected together in a complementary emitter-follower configuration for driving an amplifier load connected to said emitters, said first npn and pnp transistors being arranged in a previous stage with their bases connected together in a complementary emitter-follower stage, the emitters of said first npn and pnp transistors being coupled to the bases of the second pnp and npn transistors, respectively, so that the sum of the base-to-emitter voltages of said first transistors is applied between the bases of said second transistors.
background of the invention field of the invention this invention relates to a semiconductor amplifier. more specifically, it relates to a wide-band dc amplifier having a moderate power output and characterized by faithful reproduction of its input signals. prior art in many dc amplifier applications it is desirable to provide a moderate output power, e.g. 2 or 3 watts, together with a low output impedance. the latter characteristic is usually accomplished by means of negative feedback which, however, adversely affects the high-frequency or transient response of the amplifier. specifically, with negative feedback the output of the amplifier is characterized by excessive delay or overshoot in response to sudden changes in the input voltage. one might instead resort to an essentially open-loop amplifier configuration with an emitter-follower output stage which provides the desired low-impedance output characteristic. however, a conventional, i.e. push-pull, output stage is subject to thermal runaway and consequent burnout of one or both of the output transistors unless the amplifier includes provisions for preventing this from happening. for example, emitter resistors might be connected in series with the emitters of the output transistors, but this would increase the output impedance. one solution to the thermal runaway problem has been to use a monolithic amplifier construction in which all of the transistors are formed on the same semiconductor chip. all of the transistors thus have essentially the same temperature. accordingly, an increase in the temperature of one of the output transistors, which would otherwise increase the current through that transistor, will, by virtue of the temperature increase in other transistors in the circuit, cause an offsetting reduction in the bias voltage applied to the output transistors, thereby preventing an excessive increase in quiescent current and, consequently, eliminating runaway. 
however, the latter circuit is characterized by temperature-dependent offsets in the output voltage. moreover, the output current capability is not as high as desirable for a power amplifier because, in a complementary monolithic amplifier, transistors of one type, npn or more likely pnp, have inherently high collector resistances or other deficiencies. therefore, inverse feedback is still required for operation of the amplifier. summary of the invention i have overcome the foregoing problems by using a "bilithic" construction in which the npn transistors are formed on one chip and the pnp transistors are formed on another chip. on each chip the collectors of the transistors are part of the common substrate, and this provides a desirably low intrinsic collector resistance. the output and driver stages of the amplifier are connected in a conventional emitter-follower arrangement, with an npn driver transistor driving the pnp output transistor and a pnp driver transistor driving an npn output transistor. with this arrangement, if the temperature of one of the output transistors increases, with a corresponding increase in the other transistors of the same type, there will be a decrease in the base-emitter diode drop of the driver transistor of the same type. this will decrease the bias voltage applied between the bases of the output-stage transistors, thereby tending to decrease the currents in the output transistors and thus preventing thermal runaway. the amplifier as thus far described is subject to a temperature-dependent offset of the output voltage because of the difference between the diode drops in the pnp and npn driver and output transistors. i have provided further circuitry to substantially eliminate this offset. specifically, the amplifier includes a further pnp-npn transistor pair, formed on the same chips as the output and driver transistors and connected so that their base-emitter diode drops are subtracted from each other. 
the resulting difference voltage is inverted and passed through the amplifier to cancel the offset. the amplifier thus exhibits a low output impedance and is protected against thermal runaway, both without the use of multiple-stage negative feedback. moreover, it is capable of providing a substantial output power. brief description of the drawings these and further features and advantages of the present invention are described in connection with the accompanying drawings, in which: fig. 1 is a schematic diagram of the input and intermediate stages of one embodiment of the present invention together with compensation and voltage-divider networks; and fig. 2 is a schematic diagram of the driver and output stages of the same embodiment. detailed description of the preferred embodiment figs. 1 and 2 together constitute a schematic diagram that depicts a dc amplifier in which the circuitry depicted in fig. 1 is connected to that of fig. 2 at node 12 in figs. 1 and 2. the input node of the amplifier is at a switch 14, while the output node is indicated by reference number 16. the switch 14 switches rapidly between high and low reference voltages generated by operational amplifiers 18a and 18b. it feeds the selected voltage to a transistor 22a formed on a substrate 22 and connected in an emitter-follower configuration. the output of this stage is applied to the bases of two transistors 24a and 26a in a driver stage. these transistors are also in an emitter-follower configuration. the output voltages of the driver transistors are applied to the bases of transistors 24b and 26b, which together belong to the output stage. they are connected in an emitter-follower arrangement with their emitters connected together at the output node 16 of the amplifier. although output transistor 24b is depicted as a single transistor, it preferably comprises multiple transistors in parallel, as does output transistor 26b. 
the emitter-follower configuration of the output stage provides a low output impedance. further to provide a low output impedance, the emitter resistor commonly used to prevent thermal runaway is omitted. moreover, the output current capability of the amplifier is high because the pnp transistor 24b and npn transistor 26b are fabricated on different multiple-transistor chips 24 and 26, respectively; the npn transistor is fabricated on a substrate of n-type material, and the pnp transistor is fabricated on a substrate of p-type material. this allows the collector of each transistor to be part of its respective low-resistance substrate, so the output current capability will be high. to prevent thermal runaway, the amplifier employs the thermal responses of transistors 24a and 26a in its driver stage instead of emitter resistors. as will be described in more detail below, driver transistors 24a and 26a are on the same chips as output transistors 24b and 26b, respectively, so the temperature of a given driver transistor is substantially the same as that of the output transistor on the same substrate. the same temperature increase that, without compensation, would tend to increase the current through, say, output transistor 24b also decreases the diode drop in transistor 24a on the same chip, and the circuit configuration is such that the reduced diode drop reduces the differential voltage applied to the output transistors 24b and 26b and thus counteracts the tendency toward increased current. with the foregoing arrangement, there will be a temperature-dependent offset in the output of the amplifier, and in many applications it will be desirable to eliminate this offset. as will be described in more detail below, further transistors 22b, 24c, and 26c, which are on substrates 22, 24, and 26, respectively, are connected in a compensation network 28 that presents a compensating, temperature-dependent input to the operational amplifiers 18a and 18b. 
this compensating input is forwarded to the amplifier output terminal 16, where it essentially eliminates the offsets that would otherwise result from the diode drops in the amplifier transistors of the intermediate, driver, and output stages. we now turn to a more detailed description of the amplifier. the reference voltages between which the switch 14 alternates originate in voltage levels v hi and v lo at the non-inverting input terminals of operational amplifiers 18a and 18b, respectively. the output of, say, operational amplifier 18a is fed back to its inverting input terminal through a resistor 30a. the current flowing through resistor 30a is divided between further resistors 32a and 34a. resistor 32a leads to ground, and resistor 34a leads to the emitter of transistor 26c, whose voltage differs from ground potential by approximately one diode drop, as will be discussed below. if the resistance of resistors 30a and 34a is 100 kilohms, for example, while that of resistor 32a is 250 kilohms, a closed-loop gain of 2.4 from the non-inverting terminal of amplifier 18a to its output node results. the output voltage of operational amplifier 18a is thus 2.4 x v hi plus the absolute value of the negative voltage at the emitter of transistor 26c. the circuitry to the left of the switch 14 deals in general with slowly changing voltages. however, the switch 14 is a high-speed electronic device that switches between the two reference voltages applied to it at rates that can be on the order of 100 mhz. thus, the circuitry to the right of the switch deals with high-frequency signals and so includes no feedback loops that degrade the pulse transmission accuracy. transistor 22a is biased by a current sink including transistor 40. bias voltage v₄ for transistor 40, as well as other bias voltages in the amplifier, is provided by voltage-divider networks 41 and 42. 
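the closed-loop gain of 2.4 quoted above follows from the stated resistor values; the check below uses standard non-inverting-amplifier feedback analysis (the parallel-resistance step is not spelled out in the text).

```python
# Check of the stated closed-loop gain of reference amplifier 18a.
# Feedback resistor 30a = 100 k; from the inverting node, 32a = 250 k
# to ground and 34a = 100 k to the compensation voltage. With the
# non-inverting input driven: gain = 1 + R30a / (R32a || R34a).
# With the compensation node driven: (inverting) gain = -R30a / R34a.

def parallel(r1, r2):
    return r1 * r2 / (r1 + r2)

R30A, R32A, R34A = 100e3, 250e3, 100e3

gain_vhi = 1.0 + R30A / parallel(R32A, R34A)
gain_comp = -R30A / R34A

print(f"gain from v hi: {gain_vhi:.1f}")                # 2.4, as stated
print(f"gain from compensation node: {gain_comp:.1f}")  # -1.0 (unity, inverted)
```

the -1 gain from the compensation node is what lets the emitter voltage of transistor 26c be "subtracted with unity gain" later in the description.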
since transistor 22a is in an emitter-follower configuration, it passes the signal with unity voltage gain--but with an offset equal to its base-to-emitter voltage--to the bases of complementary driver transistors 24a and 26a. as was mentioned before, this signal is forwarded with unity gain by the driver and output transistors to the output node 16. a common bias network biases the driver and output transistors. transistors 24a and 24b are on the same substrate and have collectors in common. this is indicated by the common notation vc1 for their collector voltage. the collectors of these transistors are connected to an overload-protection circuit 46. the collectors of transistors 26a and 26b are similarly connected to an overload-protection circuit 48, as is indicated by the notation vc2 . the bias circuit includes current sources comprising transistors 50 and 52, a current sink comprising transistor 54, and switches consisting of transistors 56 and 58. when the amplifier is being used to drive a load, transistor 58 is turned off by a signal at its base to act as an open switch, while transistor 56 is turned on to act as a closed switch. transistors 50 and 52 are biased to provide, for example, 10 milliamperes apiece, and the current from transistor 52 is divided between the emitter of transistor 24a and the base of transistor 26b. transistor 54 operates as a current sink that draws 20 milliamperes, so the emitter of transistor 26a and the base of transistor 24b together add 10 milliamperes to the 10 milliamperes flowing from the collector of transistor 50. the circuit of fig. 2 can be analyzed as complementary successive stages of emitter-follower amplifiers because the current sources and sink comprising transistors 50, 52, and 54 exhibit high small-signal impedances, while roll-off networks 60 and 62, which connect the emitters of the driver transistors to the bases of the output transistors, present negligible impedance at low frequencies. 
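the bias-current bookkeeping above can be verified with simple kirchhoff arithmetic; the 10 ma and 20 ma figures are the example values given in the text.

```python
# KCL check of the bias network: sources 50 and 52 supply 10 mA each,
# sink 54 draws 20 mA (example values from the text).

I_SOURCE_50 = 10.0  # mA, current source (transistor 50)
I_SOURCE_52 = 10.0  # mA, current source (transistor 52)
I_SINK_54 = 20.0    # mA, current sink (transistor 54)

# the sink current not covered by source 50 must come from the emitter
# of driver 26a together with the base of output transistor 24b:
i_from_driver_branch = I_SINK_54 - I_SOURCE_50
print(f"emitter of 26a + base of 24b supply {i_from_driver_branch:.0f} mA")
```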
although not necessary for an understanding of the present invention, a zener-diode circuit 64 is also depicted in fig. 2. with switch transistor 56 on and switch transistor 58 off, the zener-diode circuit does not conduct current, so it does not contribute to the functioning of the amplifier. however, when the states of switch transistors 56 and 58 are reversed, current from the voltage source that includes transistor 50 flows from the collector of transistor 50 through the zener circuit 64 to the collector of transistor 58 and thereby back biases the output transistors 24b and 26b so that the amplifier, which is used in testing other circuits, can present a high impedance at the output node 16. a review of the signal path from the switch 14 to the output node 16 reveals that, without the operation of the compensation network 28, the output of the amplifier would be offset from the desired value by a quantity equal to the diode drop of intermediate amplifier transistor 22a and the difference between the diode drops of transistors 24a and 26b. in this analysis, the upper branch in fig. 2 will be considered, but it will be apparent that the same results are obtained if the lower branch, which includes transistors 26a and 24b, is analyzed. specifically, we proceed from left to right through the circuit and note that the diode drop of transistor 22a is subtracted from the input voltage. the diode drop of transistor 24a is added to the resultant voltage, and the diode drop of transistor 26b is subtracted from that value. the compensation network 28 eliminates the offset by combining the diode drops of transistors 22b, 24c, and 26c and subtracting the resultant voltage from the reference voltages from which the switch 14 produces the input signal. the diode drops of transistors 22b, 24c, and 26c are essentially the same as the diode drops in transistors 22a, 24a, and 26b because they are on the same substrates and thus are at the same temperatures. 
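the matching just described can be sketched numerically. the diode-drop values below are assumed for illustration; what matters is that each compensation transistor is set equal to its counterpart on the same substrate, as the text explains.

```python
# Sketch of the offset cancellation: the compensation network (28)
# stacks the same diode drops as the signal path, so the offsets cancel.
# Diode-drop values are illustrative assumptions.

# base-to-emitter drops (V); 22b/24c/26c track 22a/24a/26b respectively
vbe_22a = vbe_22b = 0.62  # npn, substrate 22
vbe_24a = vbe_24c = 0.65  # pnp chip 24
vbe_26b = vbe_26c = 0.60  # npn chip 26

v_in = 1.0  # volts selected at the switch, before compensation

# compensation-network output at the emitter of 26c (relative to ground)
v_comp = -vbe_22b + vbe_24c - vbe_26c

# the reference amplifiers subtract v_comp with unity gain (invert it)
v_ref = v_in - v_comp

# signal train: followers 22a (npn, subtracts), 24a (pnp, adds),
# 26b (npn, subtracts)
v_out = v_ref - vbe_22a + vbe_24a - vbe_26b

print(f"output = {v_out:.3f} V")  # equals v_in: offsets cancelled
```

because the two stacks use transistors on the same substrates, the cancellation holds even as the individual drops drift with temperature.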
their combination thus cancels the offset contributed by transistors 22a, 24a, and 26b. specifically, in the compensation network 28, ground potential is applied to the base of transistor 22b, so the voltage at the emitter of transistor 22b is below ground potential by the base-to-emitter voltage of the intermediate-stage amplifier transistor 22a. this voltage is applied to the base of transistor 24c through a small resistance included for radio-frequency stability. the emitter voltage of transistor 24c is above that of transistor 22b by the base-to-emitter voltage of driver transistor 24a. this voltage in turn is applied through another small resistance to the base of transistor 26c, whose emitter voltage differs from its base voltage by the base-to-emitter voltage of output transistor 26b. the emitter voltage of transistor 26c is the output of the compensation network 28. since resistors 34a and 34b have the same value as resistors 30a and 30b, the negative emitter-ground voltage of transistor 26c is subtracted with unity gain from the voltages resulting from amplification of v hi and v lo . the compensation-network output thus is inverted and forwarded with unity gain to the output node 16. the offsets of the several stages of the amplifier are therefore cancelled out by subtracting a signal generated from transistors on the same substrates as those causing the offsets. this is accomplished without the use of a multiple-stage feedback loop, which would degrade the response of the amplifier. we now turn to the manner in which the amplifier prevents thermal runaway. with constant reference voltages, the voltage at the output node 16 for a given state of the switch 14 should remain constant. however, a temperature increase in an output transistor 24b or 26b increases its collector current for a given base voltage. the driver stage, which has a low output impedance, controls the voltages at the bases of the output-stage transistors. 
thus, without compensation, the base voltage would be held constant by the driver stage and therefore the temperature increase would result in increased base current and thus increased collector current. without compensation, there would thus be increased power dissipation, further temperature increases, and still further increase in current. in short, thermal runaway would occur if there were no compensation. according to the present invention, however, the amplifier is configured so that a driver transistor 24a or 26a causes the potential difference between the bases of the output transistors 24b and 26b to decrease in response to an increase in the temperature of the output transistor on the same chip. this tends to counteract the current increase and thereby prevent thermal runaway. more specifically, since transistors 24a and 24b are on the same substrate, their temperatures are almost exactly the same, and heat generated by dissipation in output transistor 24b raises the temperature of the entire chip--and, specifically, the temperature of the base-emitter junctions of transistors 24a and 24b. as a consequence, the base-to-emitter diode drop of driver transistor 24a decreases. this tends to counteract thermal runaway in output transistor 24b, as can readily be understood from the fact that driver transistors 24a and 26a force the sum of the base-to-emitter voltages of output transistors 24b and 26b to be equal to the sum of the base-to-emitter voltages of the driver transistors 24a and 26a. if driver transistor 26a on the other substrate 26 remains at a constant temperature and thus maintains a substantially constant diode drop, the total potential difference between the emitters of the driver transistors is reduced, and so is that between the output-transistor bases. 
it can be shown that, for a given temperature, the product of the magnitudes of the collector currents in the output transistors 24b and 26b is decreased with a decreased potential difference between their bases, so thermal runaway is counteracted. furthermore, an offset in the output, which would otherwise result from the change in the diode drop of driver transistor 24a, is compensated for by the action of the compensation network 28 as described above. accordingly, output transistor 26b, whose temperature has not changed, receives the same base voltage as it did before the temperature change in transistor 24b, but the base voltage at transistor 24b is raised to reduce its tendency for increased current flow. thermal runaway resulting from an increase in the temperature of output transistor 26b is similarly prevented by driver transistor 26a. accordingly, the present invention prevents thermal runaway and compensates for temperature-dependent offsets without the use of feedback loops, which would degrade the response of the amplifier, and without emitter resistors in the output stage, which would increase the output impedance of the amplifier. it also maintains a high output current capability by avoiding the high intrinsic collector resistances that occur in complementary monolithic amplifiers.
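the claim that the product of the output collector-current magnitudes falls with the base-to-base voltage can be checked numerically. the sketch below uses an idealized exponential (ebers-moll) collector-current model; the saturation current, thermal voltage, and bias values are illustrative assumptions, not numbers from this disclosure, and a real output stage would add base resistance and other second-order effects.

```python
import math

VT = 0.02585   # thermal voltage kT/q near 300 K, in volts
IS = 1e-14     # illustrative saturation current, in amps

def collector_current(vbe):
    # idealized ebers-moll model: ic = is * exp(vbe / vt)
    return IS * math.exp(vbe / VT)

def output_current_product(v_base_to_base):
    # the output transistors share the output node at their emitters, so
    # vbe(npn) + veb(pnp) equals the base-to-base voltage, and the product
    # of the collector-current magnitudes depends only on that sum:
    #   |i1| * |i2| = is**2 * exp(v_bb / vt)
    return IS ** 2 * math.exp(v_base_to_base / VT)

# heating the driver/output chip lowers the driver's diode drop (roughly
# -2 mV per degree c), shrinking the base-to-base voltage across the
# output pair; the current product falls, counteracting runaway:
ratio = output_current_product(1.198) / output_current_product(1.200)
print(ratio)  # about 0.93: a 2 mV reduction cuts the product by ~7%
```

because the product depends only on the base-to-base sum, a diode drop lost in the heated driver directly reduces quiescent dissipation in the output pair, which is the stabilizing mechanism described above.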
133-658-992-062-70X
US
[ "EP", "CN", "KR", "US", "WO" ]
G06F3/0488,G06F3/01,G06F1/16,G06F3/0484,G06F3/0482,G06F3/048
2014-02-28T00:00:00
2014
[ "G06" ]
text input on an interactive display
in one embodiment, a non-transitory computer-readable storage media contains instructions for displaying on a small display an interactive element that is or includes one or more characters. the instructions can identify, based on an input, one or more of the interactive elements or characters.
1 . one or more non-transitory computer-readable storage media embodying instructions that are operable when executed by one or more processors to: display on a small display an interactive element, the interactive element having a plurality of positions that each correspond to a character from a set of characters, wherein at least some of the characters from the set of characters are not displayed at the same time on the display; identify, based on an input, a first position within the interactive element; and display on the display a first character corresponding to the first position. 2 . the media of claim 1 , wherein the set of characters comprises the english alphabet. 3 . the media of claim 1 , wherein the interactive element does not comprise any characters from the set of characters. 4 . the media of claim 3 , wherein: the display comprises a rectangular display; and the interactive element comprises a first bar displayed along an edge of the rectangular display. 5 . the media of claim 4 , wherein: the input from the user comprises contact between a finger of the user and a portion of the display corresponding to the first position; and the instructions are further operable when executed to: determine a movement of the finger from the first position to a second position within the interactive element, the movement comprising continuous contact with the display; remove from the display the first character; and display on the display a second character corresponding to the second position. 6 . 
the media of claim 1 , wherein: the input from the user comprises contact between a finger of the user and a portion of the display corresponding to the first position; the first character is displayed on a first portion of the display while the user's finger contacts the portion of the display corresponding to the first position; and the instructions are further operable when executed to: determine that the user has removed the finger from the display; and display the first character on a second portion of the display. 7 . the media of claim 3 , wherein: the display comprises a circular display; and the interactive element comprises a curved bar displayed along an edge of the circular display. 8 . the media of claim 1 , wherein the interactive element comprises a first bar along an edge of the display. 9 . the media of claim 8 , wherein: the interactive element further comprises a second bar parallel and adjacent to the first bar; the first bar comprises one or more characters from the set of characters; and the second bar comprises one or more suggested character strings. 10 . the media of claim 9 , wherein the instructions are further operable when executed to: determine that the user has executed a swiping gesture on the second bar; and in response to the swiping gesture, replace some of the one or more suggested character strings in the second bar with one or more other suggested character strings. 11 . the media of claim 8 , wherein the instructions are further operable when executed to: determine that a finger of a user has contacted a first portion of the display corresponding to the first position; and determine that the finger has made a continuous movement substantially perpendicular to the bar to a second position of the display outside of the bar. 12 . 
the media of claim 8 , wherein the instructions are further operable when executed to: determine that the user has executed a swiping gesture on the first bar; and in response to the swiping gesture, replace some of the one or more characters in the first bar with one or more other characters. 13 . the media of claim 8 , wherein: the interactive element comprises a second bar parallel and adjacent to the first bar; the first bar comprises one or more characters from the set of characters; and the second bar comprises one or more icons operable to alter the characters displayed in the first bar. 14 . the media of claim 13 , wherein the one or more icons are operable to: change a capitalization of the characters in the first bar; change a typesetting of the characters in the first bar; display numbers in the first bar; display letters in the first bar; or display symbols in the first bar. 15 . the media of claim 1 , wherein the characters comprise character components. 16 . the media of claim 1 , wherein: the display comprises a circular display; and the interactive element comprises a semi-circular portion of the display, the semi-circular portion comprising a display of at least one of the characters. 17 . the media of claim 16 , wherein the media is within a wearable device comprising: the display; a rotatable element about the display; a detector configured to detect rotation of the rotatable element; and a band coupled to the device body. 18 . the media of claim 17 , wherein the instructions are further operable when executed to: determine a rotation of the rotatable element; and replace, based on the rotation, at least one of the characters in the semi-circular portion with another one of the characters from the set of characters. 19 . 
a device, comprising: a small display configured to: display an interactive element, the interactive element having a plurality of positions that each correspond to a character from a set of characters, wherein at least some of the characters from the set of characters are not displayed at the same time on the display; and a processor configured to: identify, based on an input, a first position within the interactive element; and instruct the display to display a first character corresponding to the first position. 20 . the device of claim 19 , wherein the interactive element comprises a bar along an edge of the display. 21 . the device of claim 19 , wherein: the input comprises a gesture made on the display; and the processor identifies the first position based on the gesture. 22 . one or more non-transitory computer-readable storage media embodying instructions that are operable when executed by one or more processors to: display on a substantial part of an outer portion of a surface of a small display a plurality of characters, the surface of the display having an inner portion that includes a center of the display surface; identify, based on an input, one or more of the displayed characters; and display on a portion of the display comprising inputted text the one or more identified characters. 23 . the media of claim 22 , wherein the display comprises a substantially circular display; and the outer and inner portions are substantially circular. 24 . the media of claim 23 , wherein the media is within a wearable device comprising: the display; a rotatable element about the display; a detector configured to detect rotation of the rotatable element; and a band coupled to a body of the device. 25 . the media of claim 24 , wherein the instructions are further operable when executed to: determine a rotation of the rotatable element; and select, based on the rotation, at least one of the characters displayed on the outer portion. 26 . 
the media of claim 23 , wherein the outer portion is near or at the outer edge of the surface of the display. 27 . the media of claim 22 , wherein the display comprises a substantially rectangular display; and the outer and inner portions are substantially rectangular. 28 . the media of claim 27 , wherein the outer portion is near or at the outer edge of the surface of the display. 29 . the media of claim 22 , wherein the input from the user comprises a swiping gesture made by a finger of the user on or around one or more displayed characters. 30 . the media of claim 29 , wherein the instructions that are operable when executed to identify, based on the input from a user, one or more of the displayed characters comprise instructions that are operable when executed to identify: contact between the user's finger and the inner portion; a first movement of the user's finger from the inner portion to the outer portion; the first character swiped by the user's finger after performing the first movement; a second movement of the user's finger from the outer portion to the inner portion; a third movement of the user's finger from the inner portion to the outer portion; and the first character swiped by the user's finger after performing the third movement. 31 . the media of claim 30 , wherein the third movement comprises swiping from the outer portion to or near to the center of the display. 32 . the media of claim 22 , wherein the instructions are further operable when executed to remove, highlight, or deemphasize, based on the identified characters, one or more characters displayed on the outer portion. 33 . 
one or more non-transitory computer-readable storage media embodying instructions that are operable when executed by one or more processors to: display on a first portion of a small display a plurality of interactive elements, each interactive element comprising: a plurality of characters; and a visual indicator indicating that the plurality of characters are grouped together; display within each interactive element at least one character from the plurality of characters of that interactive element; and identify, based on an input, one or more of the interactive elements. 34 . the media of claim 33 , wherein the plurality of interactive elements comprise: a first interactive element at a first layer of a hierarchy; and a second interactive element at a second layer of the hierarchy. 35 . the media of claim 34 , wherein the plurality of characters of the first interactive element comprise characters that are more frequently used than the plurality of characters in the second interactive element. 36 . the media of claim 34 , wherein: the first interactive element comprises: a first row having a plurality of columns, each column comprising one or more characters; or a first column having a plurality of rows, each row comprising one or more characters; and the second interactive element comprises: a second row between an edge of the display and the first row, the second row having a plurality of columns, each column comprising one or more characters; or a second column between an edge of the display and the first column, the second column having a plurality of rows, each row comprising one or more characters. 37 . 
the media of claim 36 , wherein the instructions that are operable when executed to display the plurality of interactive elements comprise instructions that are operable when executed to: display the first interactive element on the display; and display, in response to a gesture made by a portion of the user's hand on the display, the second interactive element on the display. 38 . the media of claim 37 , wherein the gesture comprises: contact between a finger of the user and the first interactive element; and a continuous movement from the first interactive element in the direction of the edge of the display. 39 . the media of claim 33 , wherein the visual indicator comprises one or more lines of a perimeter of the interactive element. 40 . the media of claim 33 , wherein the visual indicator comprises an icon near each of the plurality of characters in the interactive element. 41 . the media of claim 33 , wherein the visual indicator comprises a coloring of the plurality of characters in the interactive element. 42 . the media of claim 33 , wherein the visual indicator appears in response to the input from the user. 43 . the media of claim 33 , wherein: the input from the user comprises contact between a finger of the user and an interactive element; and the instructions are further operable when executed to: display outside of the interactive element the characters of that interactive element; identify a motion of the finger of the user from the interactive element to one of the characters displayed outside of the interactive element; and select that character for input to the display. 44 . 
the media of claim 43 , wherein: the plurality of interactive elements comprise a row of interactive elements and the characters displayed outside of the interactive element comprise a column of characters; or the plurality of interactive elements comprise a column of interactive elements and the characters displayed outside of the interactive element comprise a row of characters. 45 . the media of claim 33 , wherein the input by the user identifies at least one of the characters of the identified interactive element, the input comprising one or more of: a number of taps by a finger of the user on the interactive element, each character in the interactive element corresponding to a predetermined number of taps; or a duration of contact between the finger of the user and the interactive element, each character in the interactive element corresponding to one or more predetermined durations.
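as a concrete illustration of claims 1 and 5 — a bar-shaped interactive element whose positions each correspond to a character from a set larger than what is displayed at once — the following sketch maps a touch coordinate along the bar to a character. the bar width, the pixel coordinate system, and the use of python's ascii alphabet are assumptions for illustration only, not details taken from the claims.

```python
import string

ALPHABET = string.ascii_lowercase  # the "set of characters" of claim 2

def character_at(position, bar_length, charset=ALPHABET):
    # divide the bar into one region per character and return the character
    # whose region contains the contact point; the whole set is reachable
    # even though not every character is displayed at once (claim 1)
    index = int(position / bar_length * len(charset))
    index = max(0, min(index, len(charset) - 1))  # clamp to the valid range
    return charset[index]

# a finger dragged along a 240-pixel bar previews successive characters,
# the displayed character being replaced as the contact moves (claim 5):
print(character_at(0, 240))    # 'a'
print(character_at(120, 240))  # 'n'
print(character_at(239, 240))  # 'z'
```

lifting the finger at a position would then commit the previewed character to the inputted-text portion of the display, as in claim 6.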
related application(s) this application claims the benefit, under 35 u.s.c. §119(e), of u.s. provisional patent application no. 61/946,509 filed on 28 feb. 2014, which is incorporated herein by reference. technical field this disclosure generally relates to text input on an interactive display. background electronic devices may contain a display screen that displays information to a user of the device. an electronic device may also contain an input screen that receives input from the user. at times the input screen and the display screen may be the same or share the same surface. a user of a device can provide input to the device through the input screen while viewing content on the display screen. when the two are the same, the user can view content on the display screen while inputting content on the same screen. for example, a user can interact with a button or icon displayed on the display screen, or can input text, such as for example numbers, characters, symbols, or any combination thereof, to an input screen of a device. brief description of the drawings fig. 1 illustrates an example embodiment of a wearable electronic device. fig. 2 illustrates an example stack-up of a device. figs. 3a-3e illustrate example form factors of a device. fig. 4a illustrates an example cross-section of a device body. figs. 4b-c illustrate example connections between components of a device. figs. 5a-5f illustrate example displays of a device. figs. 6a-c illustrate example cross-sectional views of a device display. figs. 7a-7d illustrate example outer elements about a device body. figs. 8a-8c illustrate example outer elements about a device body. fig. 9 illustrates an example sealing ring of a device. fig. 10 illustrates an example retention ring of a device. fig. 11 illustrates various example embodiments for wearing a device. figs. 12a-12b illustrate a band attached to a body of a device. figs. 13a-13i illustrate example embodiments for fastening or affixing a band of a device. 
figs. 14a-d illustrate example camera placements on a device. fig. 15 illustrates an example device with a band and optical sensor. fig. 16 illustrates an example viewing triangle including a user, a device, and an object. fig. 17 illustrates an example angle of view for an optical sensor of a device. figs. 18a-18b illustrate example optical sensors of a device. fig. 19 illustrates an example sensor detection system of a device. figs. 20a-20c illustrate example chargers operable with a device. figs. 21a-21b illustrate example chargers operable with a device. figs. 22a-22b illustrate example charging units operable with a device. fig. 23 illustrates an example charging scheme for a charging unit operable with a device. fig. 24 illustrates an example charging scheme for a charging unit operable with a device. figs. 25a-25e illustrate example embodiments of energy storage and charging in a device and a charging unit. fig. 26 illustrates an example charging unit architecture. figs. 27-92 illustrate example gestures for use with a device. figs. 93a-93b illustrate example user inputs to a device. figs. 94a-94c illustrate example user inputs to a device. figs. 95a-95d illustrate example user touch input to a device. figs. 96a-96b illustrate example graphical user interface models of a device. fig. 97 illustrates an example graphical user interface model of a device. figs. 98a-98g illustrate example graphical user interface models of a device. fig. 99 illustrates an example graphical user interface model of a device. figs. 100a-100c illustrate example graphical user interface models of a device. figs. 101a-101b illustrate example screens of a graphical user interface of a device. figs. 102a-102d illustrate example screens of a graphical user interface of a device. figs. 103a-103d illustrate example screens of a graphical user interface of a device. fig. 104 illustrates an example menu of a graphical user interface of a device. figs. 
105a-105d illustrate example menus of a graphical user interface of a device. figs. 106a-106c illustrate example menus of a graphical user interface of a device. figs. 107a-107c illustrate example menus of a graphical user interface of a device. fig. 108 illustrates an example menu of a graphical user interface of a device. figs. 109a-109c illustrate example menus of a graphical user interface of a device. figs. 110a-110b illustrate examples of scrolling in a graphical user interface of a device. fig. 111a-111c illustrate examples of scrolling in a graphical user interface of a device. fig. 112 illustrates examples of overlay and background content in a graphical user interface of a device. figs. 113a-c illustrate examples of overlay and background content in a graphical user interface of a device. figs. 114a-114b illustrate example visual transition effects in a graphical user interface of a device. figs. 115a-115b illustrate example visual transition effects in a graphical user interface of a device. figs. 116a-116b illustrate example visual transition effects in a graphical user interface of a device. figs. 117a-117b illustrate example visual transition effects in a graphical user interface of a device. figs. 118a-118c illustrate example visual transition effects in a graphical user interface of a device. figs. 119a-119c illustrate example visual transition effects in a graphical user interface of a device. figs. 120a-120c illustrate example visual transition effects in a graphical user interface of a device. figs. 121a-121b illustrate example visual transition effects in a graphical user interface of a device. fig. 122 illustrates an example use of a physical model in a graphical user interface of a device. fig. 123 illustrates example screens of a graphical user interface of a device. fig. 124 illustrates example screens of a graphical user interface of a device. fig. 125 illustrates an example method for automatic camera activation in a device. fig. 
126 illustrates an example method for delegation by a device. fig. 127 illustrates example delegation models including a device. fig. 128 illustrates an example method for delegating by a device. figs. 129a-129d illustrate example modes of a device. fig. 130 illustrates an example mode of a device. figs. 131a-131d illustrate example modes of a device. fig. 132 illustrates an example method for providing augmented reality functions on a device. fig. 133 illustrates an example network environment in which a device may operate. fig. 134 illustrates an example of pairing between a device and a target device. fig. 135 illustrates an example method for pairing a device with a target device. fig. 136 illustrates example screens of a graphical user interface of a device. fig. 137 illustrates an example computer system comprising a device. figs. 138a-e illustrate an example device with an example circular display that contains a display portion for inputting text, a portion for displaying inputted text, and a portion for displaying text available for input. figs. 139a-f illustrate an example device with an example circular display that contains a display portion for inputting text, a portion for displaying inputted text, and a portion for displaying text available for input. figs. 140a-b illustrate an example device with an example circular display that contains a display portion for inputting text, a portion for displaying inputted text, and a portion for displaying text available for input, and a portion for suggesting selectable character strings to a user. figs. 141a-b illustrate an example device having a portion on which a user can input handwritten text. figs. 142a-b illustrate an example device having a portion on which a user can input guided handwritten text. figs. 143a-b illustrate an example device having a portion on which a user can input guided handwritten text. figs. 
144a-b illustrate an example device having a portion on which a user can input guided handwritten text. figs. 145a-b illustrate example gestures that can be captured by a sensor of an example device to input text onto the device. figs. 146a-c illustrate example character layouts for small displays displaying the english alphabet. figs. 146d-e illustrate example highlighting of selected text. figs. 147a-c illustrate examples of altering displayed characters available for selection based on previously selected characters. figs. 148a-c illustrate example layouts for example character sets displayed on the outer edge of an example circular display. figs. 149a-c illustrate example layouts for example character sets displayed on an example rectangular display. figs. 150a-d illustrate example hierarchical layouts for presenting characters to input to a display. figs. 151a-d illustrate example gestures to select characters to input on example displays. figs. 152a-c illustrate example interfaces for inputting text onto a display. figs. 153a-b illustrate example interfaces for inputting text onto a display. figs. 154a-d illustrate example interfaces for inputting text onto a display. figs. 155a-d illustrate example interfaces that rearrange characters as a user enters text onto a display. figs. 156a-p illustrate example groupings of characters for entry onto a display. figs. 157a-d illustrate example groupings of characters for entry onto a display. figs. 158a-b illustrate example groupings of characters for entry onto a display. figs. 159a-f illustrate example displays having example interactive elements for inputting characters onto the display. fig. 160 illustrates an example display having example interactive elements for inputting characters and suggested character strings onto the display. fig. 161 illustrates an example display having example interactive elements for inputting characters onto the display. fig. 
162 illustrates an example display having example interactive elements for inputting characters and suggested character strings onto the display. fig. 163 illustrates an example display having example interactive elements for inputting characters onto the display. figs. 164a-b illustrate example displays having example interactive elements for inputting characters onto the display. fig. 165 illustrates an example interface for constructing characters for input to an example display using example character portions. description of example embodiments fig. 1 illustrates an example embodiment of a wearable electronic device 100 . device 100 includes a body 105 containing all or some of the circuitry, structure, and display of device 100 . for example, body 105 may include all or some of the processing components, data storage components, memory, sensors, wiring, or communication components of device 100 . in particular embodiments, device 100 may include a display. the display may take any suitable form or shape, such as for example a circular shape, as illustrated by circular display 110 . as used herein, where appropriate, “circular display” includes substantially circular displays or circular-like displays, such as for example elliptical displays. in particular embodiments, device 100 may include an element about the display. as used herein, an element about the display includes a rotatable element encircling the display or the body on or in which the display sits. as an example, an element may be an outer ring 115 about a circular display 110 . in particular embodiments, the element about the display may move relative to the display or body. for example, outer ring 115 may rotate relative to the body of device 100 , as described more fully below. in particular embodiments, device 100 may include a band 120 attached to body 105 . 
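the rotatable outer ring described above is what claims 17-18 and 24-25 use to scroll characters through a portion of the circular display. the sketch below shows one hypothetical way to map detents of the ring to the visible window of characters; the window size, the wrap-around behavior, and the character set are illustrative assumptions, not details from the disclosure.

```python
import string

CHARSET = string.ascii_lowercase
WINDOW = 7  # characters visible at once in the semi-circular portion

def visible_characters(detents):
    # each detent of the rotatable element shifts the visible window by one
    # character, replacing at least one displayed character with another
    # from the set (claim 18); the window wraps around the character set
    start = detents % len(CHARSET)
    return ''.join(CHARSET[(start + i) % len(CHARSET)] for i in range(WINDOW))

print(visible_characters(0))   # 'abcdefg'
print(visible_characters(1))   # 'bcdefgh'
print(visible_characters(-1))  # 'zabcdef'
```

a touch on the displayed portion would then select one of the currently visible characters, while further rotation brings the rest of the set into view.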
in particular embodiments, device 100 may include a sensor module, such as for example camera module 125 housing a camera, affixed in or to body 105 or band 120 , as described more fully below. particular embodiments of a wearable electronic device include a stack-up that allows some or all of the processing and display system to fit inside the body of the device, which may be encompassed by an element, such as an outer ring, that provides at least one way for the user to interact with the device. in addition or in the alternative, particular embodiments may include external components incorporated into the band for additional functionality, as described more fully herein. fig. 2 illustrates an example stack-up 200 of a wearable electronic device. as illustrated in fig. 2 , some or all of the components of stack-up 200 may adopt the form of the device, which is circular in the example of fig. 2 . stack-up 200 may include a layer of protective glass (or other suitable transparent, solid material) 205 . other components may be laminated to protective glass 205 , or be attached to base 245 . in addition or in the alternative, protective layer 205 may be mechanically connected to outer ring 235 , or any other suitable component of the body of the device. directly beneath protective glass 205 may be a touch-sensitive layer 210 . touch-sensitive layer 210 may be composed of any suitable material and be of any suitable type, such as for example resistive, surface acoustic wave, capacitive (including mutual capacitive or self-capacitive), infrared, optical, dispersive, or any other suitable type. touch-sensitive layer 210 may be applied directly to protective glass 205 , laminated onto it, or physically affixed to it. touch-sensitive layer 210 may be a fully two-dimensional touch surface, or may be composed of touch-sensitive regions, such as a number of capacitive buttons or areas. 
touch-sensitive layer 210 may be connected to processor board 225 via a flexible connector at the edge of the touch surface, as described more fully herein. below the touch-sensitive layer 210 may be a circular display 215 , which may be laminated or mechanically affixed to any of the foregoing layers. in particular embodiments, lamination may reduce glare and improve display legibility by reducing internal reflections. as described more fully below, display 215 may have an outer inactive area that may be symmetric or asymmetric. display 215 may be positioned such that it is axially centered relative to protective layer 205 for a visually symmetric presentation. display 215 may be of any suitable type, such as for example light-emitting diode (led), organic light emitting diode (oled), or liquid crystal display (lcd). in particular embodiments, display 215 may be flexible. in particular embodiments, display 215 may be partially transparent. in particular embodiments, display 215 may be translucent. below display 215 may be battery 220 , which in particular embodiments may be positioned so that base 245 may be reduced in diameter without affecting the size of the battery. battery 220 may be of any suitable type, such as for example lithium-ion based. battery 220 may adopt the circular shape of the device, or may adopt any other suitable shape, such as a rectangular form, as illustrated. in particular embodiments, battery 220 may “float” in the device, e.g. may have space above, below, or around the battery to accommodate thermal expansion. in particular embodiments, high-height components such as for example haptic actuators or other electronics may be positioned in the additional space beyond the edge of the battery for optimal packing of components. in particular embodiments, connectors from processor board 225 may be placed in this space to reduce the overall height of the device. below battery 220 may be processor board 225 . 
processor board 225 may include any suitable processing components, such as for example one or more processing units, drive units, sense units, caches, memory elements, or integrated circuits. processor board 225 may include one or more heat sensors or cooling units (such as e.g., fans) for monitoring and controlling the temperature of one or more processor board components. in particular embodiments, body 105 of the device may itself act as the heat sink. below the processor board may be an encoder 230 , encircled by one or more outer rings 235 . as described more fully below, encoder 230 may be of any suitable type, and may be part of outer ring 235 or may be a separate component, as illustrated in fig. 2 . in particular embodiments, encoder 230 may provide the haptic feel of the detent of the outer ring or position sensing of the outer ring 235 . when encoder 230 is a mechanical encoder separate from the device body, as illustrated in fig. 2 , the encoder may support the outer ring 235 . for example, in particular embodiments encoder 230 is mounted to base 245 , and the connections to base 245 or to band 240 may pass through some portion of the encoder, such as, for example, the center of the encoder. in particular embodiments, processor board 225 and one or more layers above may be attached to a central post passing through encoder 230 . mechanical forces on components of the device may be transferred to the post, which may allow components such as the processor board and the display to be supported by the post rather than by the encoder, reducing strain on the encoder. in particular embodiments, outer ring 235 attaches to the moveable portion of the encoder via prongs or other suitable connections. the device body may conclude with a base 245 . base 245 may be stationary relative to the one or more rotatable components of the device, such as outer ring 235 . in particular embodiments, base 245 connects to band 240 , described more fully herein. 
connections may be mechanical or electrical, such as for example part of the circuitry linking wired communication components in band 240 to processing board 225 . in particular embodiments, connectors are positioned to avoid the encoder and the anchor points for the bands. in particular embodiments, band 240 may be detachable from base 245 . as described more fully herein, band 240 may include one or more inner connectors 250 , one or more optical sensing modules 255 , or one or more other sensors. in particular embodiments, the interior of the device, or portions of that interior, may be sealed from the external environment. while this disclosure describes specific examples of components in stack-up 200 of wearable electronic device 100 and of the shape, size, order, connections, and functionality of those components, this disclosure contemplates that a wearable device, such as device 100 , may include any suitable components of any suitable shape, size, and order connected or communicating in any suitable way. as merely one example, battery 220 may be placed more toward the bottom of the stack up than is illustrated in fig. 2 . as another example, the body of the device may take any suitable form factor, such as elliptoid or disc-like as illustrated by the example of fig. 3a , tapered on one end as illustrated by the example of fig. 3b , or beveled or rounded at one or more edges as illustrated by the example of figs. 3c-3d illustrating beveled edge 315 . fig. 3e illustrates additional example form factors of the device body, such as for example bodies 320 a-e having a polygonal shape with a flat protective covering or display or a curved protective covering or display. as another example, bodies 325 a-d have a partially-curved shape with a flat protective covering or display or a curved protective covering or display. bodies 330 a-c have a curved shape. 
internal components of the device body may take any form factor suitable for the body in which they sit. fig. 4a illustrates an example cross section of a device body. as illustrated, the device body has a width of d1, such as for example approximately 43 millimeters. particular embodiments may include a slight gap d4 between the outer ring and the oled display, such as for example a gap of up to 0.3 millimeters. likewise, there may also be a distance between the outer ring and a glass protective covering (which may have a width d3, such as for example approximately 42.6 millimeters), such as for example 0.2 millimeters. in particular embodiments, the gap between the glass protective covering and the outer ring is greater than the gap between the display and the outer ring. the outer ring (which may include serration) may have a width d2 of, for example, 1.0 millimeter. figs. 4b-4c illustrate an example set of connections between components of the device. fig. 4b illustrates a touch glass 405 above a display 410. the display is attached to the top of inner body 440 with, for example, adhesive sealant 425. display flexible printed circuit 430 couples the display to the electronics within the device body. adhesive sealing membrane 445 may be used to connect band 450 to the device, and one or more retention rings 435 may be used to connect outer ring 415 to the inner body 440. in particular embodiments, the retention rings may inhibit twisting of the outer ring on its vertical axis and provide physical spacing between the outer ring and the glass covering. a layer of protective glass may sit on the top of the inner body, providing an environmental seal. in particular embodiments, a retention ring may also provide an environmental seal for the inner body. for example, fig.
5c illustrates an example retention ring 465 attaching an outer ring to the device body and providing an environmental seal between the outer ring and the inner body. in addition or in the alternative, flock-type material, possibly coated with a hydrophobe such as, for example, teflon, may be used to prevent water and dirt intrusion into the cavity. as another example, the outer ring may be sealed to the inner body with a ring of metal or plastic, preventing air (and thus water vapor or other particles) from moving through the cavity between the outer ring and the inner body. gap 455 allows the outer ring to move, such as for example by rotation, relative to the inner device body. adhesive sealant 460 attaches the display to the body and provides an environmental seal between the display and components of the inner body. in particular embodiments, the display of the device has a circular or elliptical form, and houses a circular display unit, such as for example an lcd or oled display. the display unit may be mounted such that the visible area is centrally located within the display module. should the display unit have an offset design, one or more appropriate maskings may be used to obscure part of the display to produce a circular and correctly placed visual outline. in particular embodiments, a display module has an outer ring that is part of the user interface of the device. the outer ring may rotate while the band holds the bottom and inside part of the device stable. fig. 5a illustrates an example top view of the device's display relative to other device components. outer ring 510 may be attached to the front surface 512 of device 508, or it may be independent of front surface 512. in particular embodiments, display 506 does not rotate regardless of rotation of outer ring 510 surrounding display 506.
that may be achieved by attaching display 506 to the portion 504 of the display module that is affixed to band 502, or by programming displayed content to remain static while the display unit rotates. in the latter case, displayed content is rotated such that the visual vertical axis of the image displayed by the display unit remains parallel to the band at all times. a display module may additionally incorporate one or more sensors on or near the same surface as the display. for example, the display module may include a camera or other optical sensor, microphone, or antenna. one or more sensors may be placed in an inactive area of the display. for example, fig. 5b illustrates device 522 with a camera module 516 placed coplanar with the battery below display 520, with optical opening 514 positioned under the clear section of display 520. camera module 516 may be placed between grid line connectors 518 for display 520. any camera or other suitable sensors may be placed coplanar with the display, such as antenna 524 of fig. 5c, which is placed in inactive area 526. in addition or in the alternative, sensors may be placed below or above the display, may be placed in any suitable location in or on the outer body of the device, may be placed in any suitable location in or on the band of a device, or any suitable combination thereof, as described more fully herein. for example, a front-facing camera may be placed under the display, on the display, or above the display. in particular embodiments, the packaging of a circular display includes an inactive area, as illustrated in fig. 5d. in a traditional display, row drive lines powering the display are routed to the nearest lateral edge, then either routed down along the inactive areas or connected directly to the driver integrated chips along that edge. a number of approaches may be taken to reduce the amount of inactive area for the display.
for example, particular embodiments reduce the size of the inactive area by rerouting grid control lines powering the display to one edge of the display. fig. 5d illustrates grid control lines 532 routed to one edge of display 536 and connected to a connector 538 routing the lines to the processing center of device 528. in that configuration, inactive area 530 may be minimized. fig. 5e illustrates another example embodiment for reducing the inactive area of a display 554 of device 540 by creating a polygonal-type display outline, with a circular area masked in the center by one or more masks 550. connectors 552 are arranged in a polygonal design. rows 546 and columns 542 of grid lines are routed to the nearest connector 552. in particular embodiments, connectors 552 connect to a flexible circuit behind the display that carries the driver chip. due to the reduced density of connections, the electronics of fig. 5e may be easier to connect to a flexible printed circuit board (fpc board), thus increasing yield. in addition, by moving the driver integrated circuit to the back of the display, one or more inactive areas 548 can be further reduced while allowing the integrated circuit to remain on a stable and flat surface. this design is particularly suited to oled displays, but may be used with lcds, given that a backlight unit (blu) may be laminated onto the device before the fpc board is connected. while the above example illustrates a polygonal arrangement of connectors, any suitable arrangement of connectors may be used as long as all pixels are reached by grid lines. fig. 5f illustrates an example physical arrangement and sizing of a display of a device. the device has a diameter of d4, such as for example approximately 41.1 millimeters. the device includes one or more inactive areas having a width d3, such as for example approximately 1.55 millimeters.
the device includes a visible area with a diameter d2, such as for example approximately 38 millimeters. the device includes connectors 568 for column lines 564 and row lines 566. connectors 568 may be coupled to the device by one or more fpc bonds 570, which have a width of d1, such as for example approximately 0.2 millimeters. connectors 568 may have a width d5, such as for example approximately 6 millimeters. display connector fpc 556 may be used to connect the electronics of the display, such as for example circuitry from connectors 568, to driver chip 558, which may be below the display or on the back of the device body. figs. 6a-c illustrate example cross-sectional views of a device display, including manufacturing of the device. in fig. 6a, hot bar knife 605 is used to solder the flexible printed circuit(s) 610 coupling the electronics of the display to processing electronics of the device. a support 615 may be used to stabilize the fpc 610 during this process. fig. 6b illustrates the connected fpc 620, which has been folded over (portion 625) and glued to the back of the display using adhesive 630. fig. 6c illustrates an example finished display. fpc 645 has been laminated to the back of protective display glass 635, and is bent over the front of glass 635 and attached to the front of glass 635 via microbond 649. adhesive 650 connects the fpc 645 to the device. the fpc passes over driver chip 655, which is connected to the device by adhesive 650. in particular embodiments, all processing and rf components are located within the body of the device, which may create a challenge in allowing rf signals to pass out of the device. the fpc board may additionally be attached to sides of the polygon where there is no connection to the display itself to allow the mounting of strip line, stub, ceramic, or other antennae (or other suitable sensors) in the same plane as the display, as illustrated in fig. 5c. as the antenna of fig.
5c is coplanar with the display, interference from the dense mesh of wiring from the display (e.g. as illustrated in fig. 5e) is reduced. in particular embodiments, a display may be shielded from electromagnetic interference with the main processor board using a metal shield. in particular embodiments, the metal shield may also be used as a heat sink for the battery, and thus may improve charge or discharge rates for the battery. in particular embodiments, a wearable electronic device may include one or more outer elements (which may be of any suitable shape) about the device body. fig. 7a illustrates an example outer element, outer ring 710, about a display 705. the outer ring may be composed of any suitable material, such as for example stainless steel or aluminum. in particular embodiments, outer ring 710 may be rotatable in one direction, in both directions, or may be used in both configurations based on e.g. a switch. in particular embodiments, one outer ring 710 may rotate in one direction while a second outer ring 710 rotates in the opposite direction. outer ring 710 may be coupled to base 720 of the device by a retention ring 715. fig. 7b illustrates outer ring 710 attached to base 720 either by a delrin ring 715a or by a spring steel retention ring 715b. springs or clips 725 affix the rings to base 720. figs. 7c-d illustrate retention ring 715 affixed to base 720 via screws 725 screwed into corresponding posts of base 720. the device may include fasteners/spacers 730, as illustrated in fig. 7c. in particular embodiments, detents or encoders (which may be used interchangeably, where suitable) of an outer element may provide a user with haptic feedback (e.g. a tactile click) provided by, for example, a detent that allows the user to determine when the element has been moved one “step” or “increment” (which may be used interchangeably herein). this click may be produced directly via a mechanical linkage (e.g.
a spring mechanism) or may be produced electronically via a haptic actuator (e.g. a motor or piezo actuator). for example, a motor may provide resistance to motion of a ring, such as for example by being shorted to provide resistance and unshorted to provide less resistance, simulating the relative high and low torque provided by a mechanical detent system. as another example, magnetic systems may be used to provide the haptic feel of a detent. for example, a solenoid mechanism may be used to disengage the detent spring or escapement as needed. the spring or escapement provides the actual mechanical feedback. however, this arrangement allows the device to skip a number of detents as needed, while re-engaging the detent at exact intervals to create the sensation of detents, such as detents of changed size. as another example, a rotatable outer element (such as, for example, the outer ring) may be magnetized, such as by an electromagnet used to attract the ring at “detent” positions, increasing torque and simulating detent feedback. as another example, a rotatable outer element may have alternating north-south poles, which repel and attract corresponding magnetic poles in the device body. as another example, a permanent magnet may be used to lock the ring in place when the electromagnet is not in use, preventing freewheeling. as another example, instead of an electromagnet, an easily magnetizable ferromagnetic alloy may be used within a solenoid. this allows the electromagnetic field of the solenoid to “reprogram” the magnetic orientation of the core, thus maintaining the effect of the magnetic actuation even when the solenoid itself is disengaged. while this disclosure provides specific examples of detents, detent-like systems, and encoders, this disclosure contemplates any suitable detents, detent-like systems, or encoders. fig. 8a illustrates an outer ring 805 with notches for a spring-based detent system etched onto the inner surface of outer ring 805.
springs 820 attach to spring posts 810. retention ring 815 may be made of delrin, steel, or any other suitable material, and may be segmented or solid/continuous. fig. 8b illustrates an example outer ring having small notches 830 that engage a spring-loaded element to provide haptic feedback from the illustrated detent. in the case of an electronic feedback system, the feedback may be produced in rapid synchrony with the motion of the ring, and must have a sufficient attack and decay rate such that successive movements of the ring are distinguishable from each other. in particular embodiments, an outer ring may be freely (e.g. continuously) rotatable, without any clicking or stepping. in particular embodiments, a ring may be capable of both continuously rotating and rotating in steps/increments, based on, for example, input from a user indicating which rotational mode the outer ring should be in. the ring may also or in the alternative rotate freely in one direction and in increments in the other. different functionality may occur based on the rotational mode used. for example, rotating in continuous mode may change a continuous parameter, such as e.g. volume or zooming, while rotating in incremental mode may change a discrete parameter, such as e.g. menu items or contacts in a list, as described more fully herein. in particular embodiments, when rotating freely the ring may provide haptic feedback to the user, for example a force applied such that the ring appears to rotate in viscous media (e.g. the more quickly the ring is rotated, the more it resists rotation). in particular embodiments, an outer ring may be depressed or raised in the direction of the axis the outer ring rotates about, such as for example as part of a gesture or to change rotational modes. in particular embodiments, an outer ring may have touch-sensitive portions.
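the mode-dependent behavior described above can be illustrated with a short sketch. this is not part of the disclosure; the function name, the mapping of 360 degrees to the full volume range, and the 30-degree detent step are assumptions chosen only for illustration.

```python
def handle_rotation(delta_degrees, mode, state):
    """Dispatch an outer-ring rotation according to the rotational mode.

    state: dict with 'volume' (0.0-1.0, a continuous parameter) and
    'menu_index' (a discrete parameter). All names and scale factors
    here are hypothetical, not taken from the disclosure.
    """
    if mode == "continuous":
        # Continuous mode maps rotation smoothly onto a parameter such
        # as volume; assume 360 degrees sweeps the full range.
        state["volume"] = min(1.0, max(0.0, state["volume"] + delta_degrees / 360.0))
    elif mode == "incremental":
        # Incremental mode advances a discrete selection one item per
        # detent step; assume a 30-degree step between detents.
        state["menu_index"] += int(delta_degrees // 30)
    return state
```

for example, a 90-degree turn in continuous mode raises the volume by a quarter of its range, while the same turn in incremental mode advances three menu items.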
in particular embodiments, an encoder or detent may be used to determine the position of the outer ring relative to the device body. particular embodiments utilize an encoder that is affixed to the device body, as illustrated by encoder 230 of fig. 2. in particular embodiments, the encoder is part of the inner surface of the outer ring itself, as illustrated by printed optical elements 825 in fig. 8b. in those embodiments, the outer ring acts as the rotating part of the encoder directly. an optical encoder pattern is printed onto the inner surface, and is read out by an optical module on the processing board. the encoder on the interior of the outer ring should have sufficient optical contrast for detectors, and may be etched on the outer ring via e.g. printing or laser etching. the inner and outer rings may be environmentally sealed with a low-friction ring (such as, for example, ring 840 of fig. 8c) made of a material such as teflon or delrin that maintains a tight fit while preventing contaminants from entering the inner part of the device. in particular embodiments, a lip on the inner ring may engage a similar lip on the outer ring, allowing the two rings to be joined while still allowing free rotation. a larger lip at the bottom of the inner ring provides further sealing by deflecting environmental hazards from below. as illustrated in fig. 9, in particular embodiments, sealing ring 915 may fit into groove 905 of the base, which may include a grip area 910. in particular embodiments, a retention ring connecting the outer ring to the body of the device may have strain gauges to detect pressure on the outer ring. as an example, fig. 10 illustrates a retention ring connected to four strain gauges (which are also connected to the inner body) that are symmetrically placed around the ring. as used herein, a strain gauge may be any suitable electronic component for detecting strain.
as a result of the symmetric placing, normal motion or contact with the outer ring will place mostly asymmetric strain on the outer ring, because the ring merely moves relative to the device in the plane of the ring, and thus one end compresses and the opposite end elongates, as illustrated by the top ring of fig. 10 . in contrast, squeezing a larger portion of the outer ring will likely produce a symmetric strain on opposite pairs of strain gauges (e.g. due to elongation of the ring under pressure). the relative difference in strain between the two pairs of strain gauges thus differentiates intentional squeezing of the outer ring from regular motion of or contact with the outer ring. while this disclosure describes specific examples of the number and placement of strain gauges in the retention ring, this disclosure contemplates placement of any suitable number of strain gauges in any suitable component of the device to detect pressure on the component. as one example, strain gauges may be placed on the band of the device or in the outer ring. when strain is placed on a component containing strain gauges or any other suitable strain or pressure detection system, the detected strain may result in any suitable functionality. for example, when strain is placed on the outer ring, such as for example by a user squeezing the outer ring, feedback may be provided to the user. that feedback may take any suitable form, such as tactile feedback (e.g. vibration, shaking, or heating/cooling), auditory feedback such as beeping or playing a particular user-defined tone, visual feedback (e.g. by the display of the device), or any other suitable feedback or combination thereof. functionality associated with squeezing a ring is described more fully herein, and this disclosure contemplates any suitable functionality resulting from strain or pressure placed on and detected by any suitable components. 
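the squeeze-versus-motion differentiation described above reduces to comparing symmetric and asymmetric strain components across opposite gauge pairs. the sketch below is illustrative only; the function name, normalized units, and threshold are assumptions, not part of the disclosure.

```python
def classify_ring_strain(g_top, g_right, g_bottom, g_left, squeeze_threshold=0.5):
    """Classify readings from four strain gauges placed symmetrically
    around the retention ring (hypothetical normalized units).

    In-plane motion compresses one gauge of an opposite pair (negative
    reading) while elongating the other (positive), so the pair sums
    cancel; a squeeze elongates opposite pairs together.
    """
    # Symmetric component: opposite gauges straining in the same direction.
    symmetric = (g_top + g_bottom) / 2 + (g_left + g_right) / 2
    # Asymmetric component: opposite gauges straining in opposite directions.
    asymmetric = abs(g_top - g_bottom) + abs(g_left - g_right)
    if symmetric > squeeze_threshold and symmetric > asymmetric:
        return "squeeze"
    return "motion"
```

with this decision rule, pushing the ring sideways (one gauge at -1, its opposite at +1) yields a large asymmetric term and classifies as motion, while uniform elongation of all four gauges classifies as a squeeze.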
a wearable electronic device may be attached to a band to affix the device to the user. here, reference to a “band” may encompass any suitable apparatus for affixing a device to the user, such as for example a traditional band 1405 that can be worn around the arm, wrist, waist, or leg of the user, as illustrated by way of example in fig. 14a; a clip 1415 for affixing to a piece of clothing, as illustrated by way of example in fig. 14b; a necklace or bracelet 1420 configuration, as illustrated by way of example in fig. 14c; a keychain 1425 or other accessory configuration to secure the device, for example, in the user's pocket, as illustrated by way of example in fig. 14d; or any other suitable configuration. each of those embodiments may include a camera 1410 located on the device, on the band, or on the body. fig. 11 illustrates various embodiments for wearing the device, such as for example around a neck as illustrated in 1105; pinned to clothing (such as, for example, the chest as illustrated by 1110); on a belt as illustrated in 1115; on an appendage (such as, for example, an arm as illustrated in 1120); on the wrist as illustrated in 1125; or in a pocket as illustrated in 1130. while this disclosure describes specific examples of bands and ways of affixing devices to a user, this disclosure contemplates any suitable bands or ways of affixing a device to a user. in particular embodiments, sensors and corresponding electronics may be attached to a band, where appropriate. for example, the bands of figs. 14a-14c may be suitable for housing an optical sensor. as illustrated, particular embodiments may be suited for including a touch-sensitive area. this disclosure contemplates any suitable bands including any suitable sensors or electronics, such as for example communication components (such as antennae), environmental sensors, or inertial sensors.
in particular embodiments, the band may be detachable from the device, and may communicate remotely with the device when not attached to the device. in particular embodiments, wiring associated with electrical components in the band may also be housed in the band, for example to minimize the volume of the device or to minimize electromagnetic interference with internal device components. for example, devices that may cause high levels of internal emi (for example, camera or communication systems), that may require additional volume (for example, battery or speaker), that may require the environmental seal of the main body (for example, power/data connector), or that may require additional contact with the skin of the user (for example, biometric sensors) may benefit from housing at least some of their electronics in a band of the device. in particular embodiments, when wiring is contained in a band, a display module may be attached to the band such that electronic connections made to or via the band do not twist when the outer ring is rotated. the module may use a connector that is user-removable, such that the display module or device body can be removed and attached by the user at will. as an example attachment of a band to a device, a band 1215 as illustrated in fig. 12a may be attached to the body by being placed over one or more posts 1205 and then affixed to those posts using fasteners (e.g. screws) 1210. in particular embodiments, in addition to fasteners and posts, a retention plate 1215 may be used to secure the band to device 1225, as illustrated in fig. 12b. this disclosure contemplates any suitable interface between the band and the device. for example, a usb interface may be provided between the band and the body of the device, to for example communicate data between the device and the band or components of the device and components of the band.
in particular embodiments, an interface may enable a user of the device to easily detach, attach, or change the band of the device. this disclosure contemplates any suitable structure for connecting a band as illustrated in fig. 14a to itself, for example when worn by a user. for example, fig. 13a illustrates example structures for fastening band 1305 having a camera module 1310 to a wearer of device 1300 . fasteners may include one or more snaps 1315 , holes 1320 and 1335 and corresponding components, clasps 1340 , or clips 1325 with push buttons 1330 . fig. 13b illustrates an example mechanism for affixing band 1301 to a wearer using clips 1311 and 1303 . components 1309 insert in the cavity on the other side of components 1307 to fasten band 1301 . fig. 13b further illustrates example internal mechanisms for clips 1303 and 1311 . component 1317 of clip 1313 (corresponding to clip 1311 ) may include one or more magnetic portions, which may be attracted to magnets in cavity 1323 . for example, component 1317 may include a magnetic portion at its outer edge, and a magnet of opposite polarity may be placed in front of spring 1319 to attract the magnet of component 1317 . components 1317 may then fill cavity 1323 , fastening clip 1313 to clip 1303 by coupling of the magnets. once inserted, components 1321 may be used to engage springs 1319 , which force components 1317 out of cavity 1323 . clip 1313 may be detached from clip 1303 . in addition to magnets on components 1317 and in cavity 1323 , magnets may also be placed within clip 1313 , for example to assist removal of clip 1313 when springs 1319 are engaged or to prevent components 1317 from sliding in and out of clip 1313 when not fastened to clip 1303 . for example, one or more magnets may be placed in the center of clip 1313 equidistant from components 1317 and in the same plane as components 1317 , attracting magnets of each component (and thus, the components themselves) toward the center of clip 1313 . fig. 
13c illustrates an example structure for affixing a band 1327 using fasteners 1333 and 1331, for example through the use of cavity 1329 and components 1337 and 1341. fig. 13c illustrates the internal structure of fasteners 1331 and 1333. fastener 1339 (corresponding to fastener 1333) includes components 1337. when fastener 1343 (corresponding to fastener 1331) is inserted into fastener 1339, components 1341 attach to components 1337, and may be secured by extending over a lip of fastener 1339. when fastener 1339 is pulled upwards, the lip increasingly forces components 1337 out, moving components 1341 past the lip of fastener 1339 and enabling fastener 1339 to be removed from fastener 1343. in particular embodiments, magnets may be placed in or on fasteners 1333 and 1331 to fasten them together. for example, a magnet may be placed at the edge of each of components 1341 and 1337. when fastener 1343 is brought into fastener 1339 (or vice versa) the magnets attract and secure component 1341 to component 1337. in addition, a magnet may be placed in fastener 1343, for example to assist removal of component 1341 from component 1337 or to prevent components 1341 from sliding in and out of fastener 1343 when not affixed to fastener 1339. for example, one or more magnets may be placed in the center of fastener 1343 equidistant from components 1341 and in the same plane as components 1341, attracting magnets at the end of each component (and thus, the components themselves) toward the center of fastener 1343. fig. 13d illustrates an alternative arrangement for affixing band 1351 using fasteners 1349 and 1353. when affixed, fastener 1357 (corresponding to fastener 1353) may be twisted, disengaging components 1359 (which may be rounded) from cavities 1363, and enabling fastener 1361 (corresponding to fastener 1349) to be removed from fastener 1357, and vice versa.
in particular embodiments, one or more magnets may be used to affix fasteners 1357 and 1361 to each other and/or remove fasteners 1357 and 1361 from each other. for example, magnets may be placed in cavities 1363 and at the outer (convex) edge of components 1359, attracting components 1359 into cavities 1363 and securing fastener 1361 to fastener 1357. as another example, magnets may be placed on the inner edge of components 1359 (i.e., on the concave surface of components 1359), attracting components 1359 into fastener 1361, for example to assist removal of components 1359 from cavities 1363 or to prevent components 1359 from sliding in and out of fastener 1361 when not affixed to fastener 1357. corresponding magnets may also be placed on the surfaces of fastener 1361 that are in contact with components 1359 when those components are not extended into cavities 1363. in other words, those magnets may attract (and, in particular embodiments, ultimately make direct contact with) magnets on the concave surface of components 1359, securing components 1359 to fastener 1361. figs. 13e-13g illustrate example embodiments of affixing a band 1369 with camera module 1373 to itself, for example when worn by a user of device 1367. in fig. 13e, one or more magnets 1371 on one side of band 1369 may be attracted to one or more magnets 1379 on the other side of band 1369. magnets may be strips of magnetic material partially crossing a band, as illustrated by magnetic strip 1307 in fig. 13h; may be strips of magnetic material fully crossing the band, as illustrated by strips 1321 and 1327 in fig. 13i; or may be areas of magnetic material 1393 as illustrated in fig. 13f. in addition to magnets 1371 and 1379, band 1369 may include holes 1391 and one or more posts 1377 for securing band 1369 to the wearer of device 1367. fig. 13g illustrates fasteners 1387 (e.g. screws 1396) affixing to fasteners 1371 (e.g.
nuts with covering 1395) to affix band 1381 to a wearer of device 1367 using holes 1383 (1398). in particular embodiments, a band containing electrical components may also incorporate a traditional physical contact connector, as illustrated by connector 250 of fig. 2. the connector may allow for connectivity with the device, for example, for charging, system updates, debugging, or data transfer. such a connector may be of the pogo variety or may be plated surfaces to which a charging cable can interface by contact. such connectors may be plated in precious metals to prevent corrosion from exposure to moisture from the environment and the human body. in particular embodiments, physical connectors may be used only for power, and data may be transferred using short-range communication modalities, such as bluetooth, near field communication (nfc) technology, or wi-fi. in particular embodiments, a band may be used to house flexible batteries (such as, e.g., lithium-based batteries) to increase the energy storage of the device. as energy storage capacity may be tied to total volume, batteries internal to the band increase the storage capacity for volume-limited wearable devices without impacting the total size of the device body. as described more fully below, a wearable electronic device may include one or more sensors on or in the device. for example, a wearable electronic device may include one or more optical sensors or depth sensors. optical sensors may be placed in any suitable location, such as for example on the face of the device, on a band facing outward from the user's body, on a band facing opposite the face, on a band facing toward the user's body, or any suitable combination thereof. fig. 15 illustrates a device 1500 with a band having an outward-facing optical sensor 1505.
placement of an optical sensor on the band may reduce the number of high-frequency signals inside the case, allowing for lighter shielding within the device body and thus weight and volume savings. figs. 14a-14d illustrate example camera placements for different embodiments of a wearable electronic device. in particular embodiments, electronics such as those for processing camera input may be located in the band as well, for example in a “volcano” shape housing the camera, as illustrated by housing 125 in fig. 1. in particular embodiments, other sensors may be placed near an optical sensor, such as for example in the same housing as the optical sensor on the band of the device. for example, a depth sensor may be used in conjunction with an optical camera to enhance display or detection of a device's environment, or to determine which object a user is pointing at or interacting with via a gesture. in particular embodiments, placement of an optical sensor on the band may be adjustable by the user within a predetermined range. in particular embodiments, placement of an optical sensor on the band may be optimized such that the sensor is conveniently aimable by the user. for example, as illustrated by fig. 15, if the user wears the device about the user's wrist, optical sensor 1505 may be placed in an outward-facing fashion such that the optical sensor aims outward from the user's body when the user's palm is roughly parallel to the ground. in particular embodiments, placement of an optical sensor may be such that the user may view the display of the device while the sensor is pointing outward from the user's body. thus, the user may view content captured by the optical sensor and displayed by the device without blocking the user's view of the physical scene captured by the sensor, as illustrated by the viewing triangle in fig. 16. a display 1620 of a device 1600 may have an associated viewing cone, e.g., the volume within which the display can be reasonably viewed.
in fig. 16 , user 1615 (1) views a real trophy 1610 and (2) views an image of the trophy on display 1620 of device 1600 from within the viewing cone of display 1620 by aiming sensor 1605 at the real trophy. sensor 1605 has an associated angle of view corresponding to a volume within which images can be reasonably captured by sensor 1605 . note that in the example of fig. 16 , sensor 1605 is placed such that the user can conveniently aim sensor 1605 outward while maintaining display 1620 of device 1600 in a direction facing the user, and can do so without device 1600 blocking the user's view of trophy 1610 . fig. 17 illustrates an example angle of view for an optical sensor. when object 1725 is in the angle of view of optical sensor 1705 , a user may view both object 1725 and an image 1710 or 1715 of object 1725 as displayed on device 1700 . for example, when the user's hand 1720 is in the angle of view, the user may view object 1725 , hand 1720 , and an image 1710 of object 1725 and hand 1720 on display 1700 of the device. in contrast, when hand 1720 is not in the angle of view of sensor 1705 , hand 1720 is not displayed by image 1715 presented on display 1700 . when worn by a user, the device's sensor may capture the user's hand/arm/fingers in the angle of view of the sensor while performing a gesture to be captured by the same or other sensors (e.g. a gesture selecting an object in the angle of view of the device, such as, for example, pinching, tapping, or pulling toward or pushing away). the sensor and display may be oriented such that, when worn by a user, an object to be displayed on the device is in the angle of view of the device while the device does not block the user's view of the object and the user's gaze is within the viewing cone of the device's display. 
in particular embodiments, a user may interact with the image captured by the sensor or displayed on the device, such as, for example, by tapping on the portion of the display at or near where the image is displayed, by performing a gesture within the angle of view of the sensor, or by any other suitable method. this interaction may provide some functionality related to the object, such as, for example, identifying the object, determining information about the object, and displaying at least some of the information on the display; capturing a picture of the object; or pairing with or otherwise communicating with the object if the object has pairing/communicating capabilities. in particular embodiments, an optical or depth sensor module (which terms may be used interchangeably, where appropriate) may communicate with a device via a simple extension of the bus the optical sensor would use if it were directly mounted on the main printed circuit board (pcb), as illustrated in fig. 18a. in fig. 18a, optical sensor 1825 transmits data over flexible printed circuits or wiring 1820 to an integrated control 1810, which in the example of fig. 18a is located in or on device 1805, which houses the main printed circuit board. fig. 18b illustrates the optical sensor integrated circuit 1850 on or in the optical sensor module 1860, which also houses optical sensor 1855. communication between the main printed circuit board of device 1830 and electronics in camera module 1860 occurs via flexible printed circuit 1845. the arrangement of fig. 18b may allow an integrated circuit to compress and otherwise process the data and send it via a method that requires fewer signal lines, or that requires a smaller transfer of data. that may be beneficial since the band must flex when the user wears the device, and thus a smaller number of lines may be desirable.
such an approach can reduce the number of lines to one or two signal lines and two power lines, which is advantageous for packaging, molding, and reliability. in particular embodiments, one or more of the electronics described above must be shielded to prevent electromagnetic interference from the long high-frequency cabling. the use of a parallel bus is common in such cases, and may require the use of a larger cable or fpc. in one embodiment, the camera control integrated circuit may be mounted directly on a small circuit board at the optical module, as illustrated in figs. 18a-b. a wearable electronic device may include any suitable sensors. in particular embodiments, one or more sensors or their corresponding electronics may be located on a band of the device, in or on the body of the device, or both. sensors may communicate with each other and with processing and memory components through any suitable wired or wireless connections, such as for example direct electrical connection, nfc, or bluetooth. sensors may detect the context (e.g. environment) or state of the device, the user, an application, or another device or application running on another device. this disclosure contemplates a wearable electronic device containing any suitable configuration of sensors at any suitable location of the wearable electronic device. in addition, this disclosure contemplates any suitable sensor receiving any suitable input described herein, or initiating, involved in, or otherwise associated with the provision of any suitable functionality or services described herein. for example, touch-sensitive sensors may be involved in the transition between graphical user interfaces displayed on the device, as described more fully herein. this disclosure further contemplates that functionality associated with the wearable device, activation/deactivation of sensors, sensitivity of sensors, or priority of sensor processing may be user-customizable, when appropriate. fig.
19 illustrates an example sensor detection system and illustrates example sensors for a wearable electronic device. sensors send data in a sensor-specific format to the sensor hub subsystem of the device. for example, sensors 19a illustrated in example sensor module 1924 may include one or more of: face-detecting cameras 1902, outward-facing cameras 1904, face proximity sensors 1906, face touch sensors 1908, band touch sensors 1910, acoustic skin touch sensors 1912, inertial measurement unit (imu) 1914, gravity vector sensors 1916, touch sensors 1918 and 1920, and any other suitable sensors 1922. data from the sensors is sent to sensor hub 19b illustrated in example sensor hub module 1944. the data is conditioned and cleaned of noise in steps 1928 and 1930 as needed and transferred to a locked-state detector 1942. locked-state detector 1942 detects when the device is inactive, and disables sensors as needed to conserve power, while monitoring the sensor data for a gesture or other suitable input that may reactivate the device. for example, numeric gesture detectors receive sensor output and compare that output to one or more numeric thresholds to determine an outcome. heuristic gesture detectors 1934 receive sensor output and make decisions based on one or more decision trees, such as for example anding rules applied to more than one threshold. pattern-based gesture detectors 1938 evaluate sensor input against a predetermined library of gesture patterns 1940, such as for example patterns determined by empirical evaluation of sensor output when a gesture is performed. one or more gesture priority decoders 1948 evaluate output from the gesture detectors, the locked-state detector, or both to determine which, if any, of the detected gestures should be utilized to provide functionality to a particular application or system-level process.
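the three detector styles described above can be sketched in code. the following is an illustrative sketch only, not the patent's implementation; all function names, thresholds, and the toy pattern library are assumptions introduced for clarity.

```python
# Sketch of the three gesture-detector styles: numeric (single threshold),
# heuristic (ANDed thresholds / decision rule), and pattern-based (match a
# sample window against a library of stored gesture patterns).

def numeric_detector(accel_magnitude, threshold=2.5):
    """Numeric detector: fire when a reading exceeds a numeric threshold."""
    return accel_magnitude > threshold

def heuristic_detector(accel_magnitude, rotation_rate):
    """Heuristic detector: AND several thresholded conditions."""
    return accel_magnitude > 2.5 and rotation_rate < 0.5

def pattern_detector(samples, library):
    """Pattern detector: return the library gesture whose stored pattern is
    closest to the sample window (crude sum-of-squared-differences score)."""
    best, best_score = None, float("inf")
    for name, pattern in library.items():
        if len(pattern) != len(samples):
            continue
        score = sum((a - b) ** 2 for a, b in zip(samples, pattern))
        if score < best_score:
            best, best_score = name, score
    return best

# Toy library of empirically recorded accelerometer patterns.
library = {"flick": [0.0, 1.0, 3.0, 1.0, 0.0], "tap": [0.0, 2.0, 0.0, 0.0, 0.0]}
print(pattern_detector([0.1, 1.1, 2.9, 0.9, 0.0], library))  # → flick
```

in a real device the library would hold patterns determined by empirical evaluation of sensor output, as the text notes, and the matching metric would be more robust than this toy distance.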
more broadly, in particular embodiments, when the device is active, application-requested or system-requested sensor detectors are activated in turn and provide their data to the sensor priority decoder. in particular embodiments, the priority decoder determines which, if any, of a plurality of sensor inputs to process, and this disclosure contemplates that combined input from multiple sensors may be associated with functionality different than the functionality associated with each sensor input individually. the decoder decides when a sensor has been detected with sufficient certainty, and provides sensor data to the sensor hub driver. the driver provides an application programming interface (api) to the end applications and system controllers, which in turn produce necessary output and navigation. for example, fig. 19 illustrates example sensor hub driver 1950, application apis 1952, system navigation controllers 1954 for, for example, determining appropriate system functionality (for example, system-level navigation 1962 through a graphical user interface of the device), and application-level gesture priority detectors for applications 1956. while sensor hub 19b and application processor 19c (illustrated in example application processor module 1964) of fig. 19 are illustrated as separate entities, they may be expressed by (and their functions performed by) at least some of the same or similar components. in particular embodiments, the boundaries delineating the components and functions of the sensor hub and the application processor may be more or less inclusive. the boundaries illustrated in fig. 19 are merely one example embodiment. as with the sensors themselves, functions executed by and components of the sensor hub system and application processor may occur or be located in the device body, in the band, or both. particular embodiments may use more than one sensor hub or application processor, or components therein, to receive and process sensor data.
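the priority-decoding behavior described above can be illustrated with a minimal sketch. the gesture names, priority table, and locked-state handling below are assumptions for illustration, not the patent's api.

```python
# Sketch of a gesture priority decoder: given gestures reported by several
# detectors, pass through only the highest-priority one; while the device is
# locked, suppress everything except a designated wake/unlock gesture.

PRIORITY = {"unlock_gesture": 0, "system_nav": 1, "app_gesture": 2}  # lower = higher

def decode(detected, locked):
    """Return the single gesture that should receive functionality, or None."""
    if locked:
        # Locked-state detector behavior: only wake input passes through.
        detected = [g for g in detected if g == "unlock_gesture"]
    if not detected:
        return None
    return min(detected, key=lambda g: PRIORITY.get(g, 99))

print(decode(["app_gesture", "system_nav"], locked=False))  # → system_nav
print(decode(["app_gesture"], locked=True))                 # → None
```

a real decoder would also weigh detection certainty, as the text describes, before handing data to the sensor hub driver.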
sensors may internally produce sensor data, which may be simply filtered or reformatted by, for example, a detector or data conditioner. raw data may be formatted to a uniform format by the data formatter for ingestion by the application api. recognizers may use numeric models (such as decision trees), heuristic models, pattern recognition, or any other suitable hardware, software, and techniques to detect sensor data, such as gesture input. recognizers may be enabled or disabled by the api. in such cases, the associated sensors may also be disabled if the recognizer is not to receive data from the sensors or is incapable of recognizing the sensor data. a device may incorporate a database of sensor outputs that allows the same detector to detect many different sensor outputs. depending on the requests produced by the api, a sensor priority decoder may suppress or pass through sensor output based on the criteria supplied. the criteria may be a function of the design of the api. in particular embodiments, recognizers may ingest the output of more than one sensor to detect sensor output. in particular embodiments, multiple sensors may be used to detect similar information. for example, both a normal and a depth-sensing camera may be used to detect a finger, or both a gyroscope and a magnetometer may be used to detect orientation. when suitable, functionality that depends on or utilizes sensor information may substitute sensors or choose among them based on implementation and runtime considerations such as cost, energy use, or frequency of use. sensors may be of any suitable type, and as described herein, may be located in or on a device body, in or on a band, or a suitable combination thereof. in particular embodiments, sensors may include one or more depth or proximity sensors (terms which may be used interchangeably herein, when appropriate), such as for example infrared sensors, optical sensors, acoustic sensors, or any other suitable depth sensors or proximity sensors.
for example, a depth sensor may be placed on or near a display of a device to detect when, e.g., the user's hand, finger, or face comes near the display. as another example, depth sensors may detect any object that a user's finger in the angle of view of the depth sensor is pointing to, as described more fully herein. depth sensors also or in the alternative may be located on a band of the device, as described more fully herein. in particular embodiments, sensors may include one or more touch-sensitive areas on the device body, band, or both. touch-sensitive areas may utilize any suitable touch-sensitive techniques, such as for example resistive, surface acoustic wave, capacitive (including mutual capacitive or self-capacitive), infrared, optical, dispersive, or any other suitable techniques. touch-sensitive areas may detect any suitable contact, such as swipes, taps, contact at one or more particular points or with one or more particular areas, or multi-touch contact (such as, e.g., pinching two or more fingers on a display or rotating two or more fingers on a display). as described more fully herein, touch-sensitive areas may comprise at least a portion of a device's display, ring, or band. as with other sensors, in particular embodiments touch-sensitive areas may be activated or deactivated, for example based on context, power considerations, or user settings. for example, a touch-sensitive portion of a ring may be activated when the ring is “locked” (e.g. does not rotate) and deactivated when the ring rotates freely. in particular embodiments, sensors may include one or more optical sensors, such as suitable cameras or optical depth sensors. in particular embodiments, sensors may include one or more inertial sensors or orientation sensors, such as an accelerometer, a gyroscope, a magnetometer, a gps chip, or a compass.
in particular embodiments, output from inertial or orientation sensors may be used to activate or unlock a device, detect one or more gestures, interact with content on the device's display screen or a paired device's display screen, access particular data or activate particular functions of the device or of a paired device, initiate communications between a device body and band or between a device and a paired device, or provide any other suitable functionality. in particular embodiments, sensors may include one or more microphones for detecting, e.g., speech of a user, or ambient sounds to determine the context of the device. in addition, in particular embodiments a device may include one or more speakers on the device body or on the band. in particular embodiments, sensors may include components for communicating with other devices, such as network devices (e.g. servers or routers), smartphones, computing devices, display devices (e.g. televisions or kiosks), audio systems, video systems, other wearable electronic devices, or between a band and a device body. such sensors may include nfc readers/beacons, bluetooth technology, or antennae for transmission or reception at any suitable frequency. in particular embodiments, sensors may include sensors that receive or detect haptic input from a user of the device, such as for example piezoelectrics, pressure sensors, force sensors, inertial sensors (as described above), strain/stress sensors, or mechanical actuators. such sensors may be located at any suitable location on the device. in particular embodiments, components of the device may also provide haptic feedback to the user. for example, one or more rings, surfaces, or bands may vibrate, produce light, or produce audio. in particular embodiments, a wearable electronic device may include one or more sensors of the ambient environment, such as a temperature sensor, humidity sensor, or altimeter.
in particular embodiments, a wearable electronic device may include one or more sensors for sensing a physical attribute of the user of the wearable device. such sensors may be located in any suitable area, such as for example on a band of the device or on the base of the device contacting the user's skin. as an example, sensors may include acoustic sensors that detect vibrations of a user's skin, such as when the user rubs skin (or clothing covering skin) near the wearable device, taps the skin near the device, or moves the device up and down the user's arm. as additional examples, sensors may include one or more body temperature sensors, a pulse oximeter, galvanic-skin-response sensors, capacitive imaging sensors, electromyography sensors, biometric data readers (e.g. fingerprint or eye), and any other suitable sensors. such sensors may provide feedback to the user of the user's state, may be used to initiate predetermined functionality (e.g. an alert to take particular medication, such as insulin for a diabetic), or may communicate sensed information to a remote device (such as, for example, a terminal in a medical office). a wearable electronic device may include one or more charging components for charging or powering the device. charging components may utilize any suitable charging method, such as capacitive charging, electromagnetic charging, trickle charging, charging by direct electrical contact, solar, kinetic, inductive, or intelligent charging (for example, charging based on a condition or state of a battery, and modifying charging actions accordingly). charging components may be located on any suitable portion of the device, such as in or on the body of the device or in or on the band of the device. for example, fig. 20a illustrates a charger 2000 with slot 2005 for connecting a charging component with the charger.
for example, slot 2005 may use friction, mechanical structures (such as latches or snaps), magnetism, or any other suitable technique for accepting and securing a prong from a charging component such that the prong and charger 2000 make direct electrical contact. fig. 20c illustrates prong 2015 on band 2010 utilizing pogo-style connectors to create a circuit connection between charger 2022 and band 2010 through contacts 2020. in particular embodiments, prong 2015 may be on charger 2022 and slot 2005 of fig. 20a may be on the band or body of the wearable device. in particular embodiments, contacts 2020 (such as, for example, pogo-style connectors) may be on the body of the device, and may be used to create a circuit between the band or the charger for charging the device. charger 2000 of fig. 20a may be connected to any suitable power source (such as, for example, power from an alternating current (ac) outlet or direct current (dc) power from a usb port on a computing device) by any suitable wired or wireless connection. charger 2000 may be made of any suitable material, such as acrylic, and in particular embodiments may have a non-slip material as its backing, such as e.g. rubber. in particular embodiments, charger 2000 may be affixed or attached to a surface, for example attached to a wall as illustrated in fig. 20b. attachment may be made by any suitable technique, such as for example mechanically, magnetically, or adhesively. in particular embodiments, a wearable electronic device may be fully usable while attached to the charger. for example, when a charging component is located on the body of the device, the device may sit in the charger while a user interacts with the device or other devices communicate with the device. as another example of charging components in a wearable electronic device, figs. 21a-21b illustrate additional example chargers using, e.g., inductive charging. as illustrated in figs.
21a-21b, a band may include one or more charging coils 2110. as described above, this disclosure contemplates charging coils (or any other suitable charging component) incorporated in or on the body of the device, in alternative to or in addition to on the band of the device. a magnetic field 2105 generated by, e.g., charging surface 2115 or charging surface 2120 passes through charging coil 2110. charging surface 2120 of fig. 21b may improve the density of the magnetic field 2105 through charging coil 2110 relative to charging surface 2115 and allows more precise placement than charging surface 2115, thus improving the charge transfer rate of the system. this disclosure contemplates that, when suitable, charging may power components in or on the body of the device, components in or on the band, or both. in particular embodiments, the band or device may implement an antenna for a wireless charging solution. since wireless charging operates optimally in the absence of ferrous metals, this allows a wider choice of materials for the body of the device, while allowing improved wireless charging transfer capacity by allowing the coil to be held between the poles of a charging driver (as described above) rather than being simply coplanar to the driver. as described above and illustrated in fig. 2, the active band may also incorporate a traditional internal physical contact connector 250. in particular embodiments, a charging unit with an internal charge reservoir may be associated with a wearable electronic device. when plugged into the wall, the charging unit can charge both an attached device and the charging unit's internal reservoir. when not plugged in, the charging unit can still charge an attached device from its reservoir of power until that reservoir is depleted. when only the charger is connected to a power source without a device, it still charges itself, so that it can provide additional power for the device at a later point.
thus, the charging unit described herein is useful with and without being plugged into a power source, as it can also power a partially-charged device for a while when a person is not able to connect to a power source, for example when travelling, on a plane, at a train station, outdoors, or anywhere a user might need charge for a device but does not have access to a power source. the device can be either in standby or in use while the charger charges the device, and no modifications to the software or hardware of the target device are needed. additional benefits of one or more embodiments of the invention may include reducing the number of items one must carry, providing the benefits of both a charger and a power pack, making the charger useful to carry when on the move, and reducing the number of cables and connectors one must carry to extend the battery life of one's devices. this disclosure contemplates that such a charging unit may be applied to any suitable electronic devices, including but not limited to a wearable electronic device. figs. 22a-22b illustrate particular embodiments of an example charging unit 2210 with example connections 2205 to device 2200 and connections 2215 and 2220. for example, fig. 22a illustrates cable connectivity from the charging unit 2210 to device 2200 and to an external power source. as another example, fig. 22b illustrates charging unit 2210 with cable connectivity from device 2200 and direct connectivity to a power source. this disclosure contemplates any suitable connections between a device, the charging unit, and a power source charging the charging unit. for example, connections both to the device and to the power source may be direct, via cabling, or wireless. as described above, a charging unit can charge a device from the charging unit's internal charging reservoir even when not connected to an external power source, and can charge itself, a connected device, or both when connected to an external power source.
this disclosure contemplates any suitable scheme for allocating charge between the charging unit and the device. such an allocation scheme may depend on the amount of charge internal to the device, the amount internal to the charging unit, the amount of power being consumed by the device, the charging capabilities of an external power source, or any suitable combination thereof. in addition or in the alternative, charging thresholds may determine which allocation scheme to use. for example, one charging scheme may be used when the device is near full charge and the charging unit has little charge left, and another may be used when the device has little charge left. figs. 23-24 illustrate example charging schemes for the charging unit and connected device. for example, as illustrated in fig. 24, when a device is connected to a charger as in step 2400, step 2405 determines whether the device is fully charged. if yes, no further charging action is taken. if not, step 2410 determines whether the charger is connected to an external power source, such as for example line voltage. if so, the device is charged from that external source in 2425, i.e. from line voltage rather than the charging unit's reservoir when the charging unit is connected to the line voltage. if not, the next step determines whether the charger has any power left, and if so, the device is charged from the charger's internal power source in step 2420. fig. 23 illustrates a similar decision tree. if a device is connected to a charger (step 2300) that is connected to a power source, then step 2310 determines whether the device is fully charged, and if not, the device is charged from the power source the charger is connected to (step 2315). similarly, step 2320 determines whether the charger is fully charged, and if not, the charging unit is charged from the power source in step 2325. in particular embodiments, the allocation scheme used may be determined or customized by a user. figs.
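the decision flow of fig. 24 can be sketched directly. this is a minimal illustration under assumed names; the return labels and parameters are not from the patent.

```python
# Sketch of the fig. 24 decision tree: a fully charged device needs nothing;
# otherwise charge from line voltage when the charger is plugged in, else from
# the charging unit's internal reservoir if any charge remains.

def charge_source(device_full, charger_on_external_power, charger_reserve_left):
    if device_full:
        return "none"        # step 2405: device fully charged, no action
    if charger_on_external_power:
        return "external"    # step 2425: charge device from line voltage
    if charger_reserve_left:
        return "reservoir"   # step 2420: charge device from internal reservoir
    return "none"            # nothing left to charge from

print(charge_source(False, False, True))  # → reservoir
```

the fig. 23 tree differs only in that, when external power is present, it additionally tops up the charging unit itself (steps 2320 and 2325).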
25a-25e illustrate example embodiments of energy storage and charging in a device and a charging unit. in the illustrated embodiment of fig. 25a, charge reservoir 2500 of the device and charge reservoir 2520 of the charging unit are both depleted. figs. 25b-25c illustrate charging the charge reservoir 2500 of the device and charge reservoir 2505 of the charging unit after the charging unit has been connected to external power source 2510. after a short time, both the charging unit and the device are charged simultaneously, with charging being distributed such that each is given the same percent of its total charge capacity. both charging reservoir 2500 of the device and charging reservoir 2505 of the charging unit are completely charged after some time, as illustrated in fig. 25c. as described herein, the amount of charge allocated to the device or the charging unit may vary based on any suitable charge allocation scheme. for example, if the power conversion capability of the charging unit is limited, the charging unit's reservoir is nearly full and the device's charge reservoir is nearly empty, or the energy demand of the device is very high, the charging unit may prioritize the charging of the device before charging its internal reserves. as another example, charging of the charging unit may continue until a predetermined threshold charge has been reached. figs. 25d-25e illustrate transfer of charge between the charging unit and the device when the charging unit is not connected to an external power source. as illustrated in fig. 25d, a device with little charge remaining in its reservoir 2500 is connected to a charging unit with a fully charged reservoir 2505. as discussed above, this disclosure contemplates any suitable charge allocation scheme between the device and the charger when the charger is not connected to an external power source.
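the equal-percent-of-capacity distribution of figs. 25b-25c can be expressed as a small allocation function. the units, names, and the idea of targeting a common fraction of capacity are illustrative assumptions about one way such a scheme could work.

```python
# Sketch of proportional charge allocation: split incoming power so that the
# device and the charging unit end up at the same fraction of their
# respective capacities, as described for figs. 25b-25c.

def allocate(device_level, device_cap, unit_level, unit_cap, power):
    """Return (charge_to_device, charge_to_unit) for `power` units of charge."""
    # Find the common fill fraction f both reservoirs can reach together.
    total = device_level + unit_level + power
    f = min(1.0, total / (device_cap + unit_cap))
    to_device = max(0.0, f * device_cap - device_level)
    to_unit = max(0.0, f * unit_cap - unit_level)
    return to_device, to_unit

d, u = allocate(device_level=0, device_cap=100, unit_level=0, unit_cap=300, power=40)
print(d, u)  # → 10.0 30.0 (both reservoirs reach 10% of capacity)
```

a device-priority variant, as described above for a nearly empty device reservoir, would instead route all available power to the device until its demand is met.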
that allocation scheme may be the same as or different from the allocation scheme used when the charging unit is connected to an external power source. for example, fig. 25e illustrates an allocation scheme that maximizes the charge of the charging reservoir 2500 of the device. as long as the charging unit still has charge, it continues charging the device until the device is fully charged or until the charging reservoir 2505 of the charger is completely empty. fig. 26 illustrates an example internal architecture of an example charging unit 2600. line voltage converter 2605 produces a lower-voltage direct current from the high-voltage line current 2610. this voltage is fed both to battery charger/regulator 2630 and to connector 2615, to which a device can be connected via connection 2620 for charging. battery charger 2630 uses available power from line voltage converter 2605 to charge the energy reservoir (battery 2635). it may take an equal share of the power as the device, take a smaller share when the device demand is high (device priority), or take a larger share when internal power reserves are low (charger priority). those priorities may be user-selectable. continuing the example of fig. 26, when line voltage converter 2605 is not providing power, charger/regulator 2630 produces the appropriate charging voltage from the power in battery 2635. regulator 2630 may be always on, or it may be switched on by connection to the device, or by the press of a button that indicates the user wishes to charge the device. once activated, regulator 2630 will charge the device until internal reserves are depleted. at that point, some charge may still remain in battery 2635 to improve battery life, but it will not be available to the user. the device may incorporate an emergency mode that allows access to some of this energy to gain a minimal amount of emergency usage time, at the cost of battery lifetime.
regulator 2630 may continue to provide energy until either the device is unplugged, or until the device only draws a minimal amount of energy, indicating completion of charge. finally, charger/regulator 2630 may include an on-demand display that shows the amount of energy remaining in reserve to the user. since displays generally use energy, a button or other input may be used to trigger the display for a limited time. while fig. 26 illustrates an example internal architecture of an example charging unit 2600, this disclosure contemplates any suitable internal architecture of any suitable charging unit described herein, and contemplates that such a charging unit may be of any suitable size and shape. in particular embodiments, functionality or components of the device (such as, e.g., sensors) may be activated and deactivated, for example, to conserve power or to reduce or eliminate unwanted functionality. for example, a locked-state detector detects when the device is inactive, and disables sensors as needed to conserve power, while monitoring the sensor data for a gesture or other suitable input that may reactivate the device. a device may have one or more power modes, such as a sleep mode or a fully active mode. as one example, in particular embodiments the device is arm-worn, and a touch surface of the device may come in contact with objects and persons while in regular use. to prevent accidental activation, an accelerometer or other inertial sensor in the body or band of the device can be used to gauge the approximate position of the device relative to the gravity of the earth. if the gravity vector is detected towards the sides of the device (e.g. the device is determined to be at the user's side or the display is determined not to be pointed at the user), the touch screen can be locked and the display disabled to reduce energy use. when the gravity vector is determined to be pointing below the device (e.g.
the device is roughly horizontal, resulting in a determination that the user is viewing or otherwise using the device), the system may power up the display and enable the touch screen for further interactions. in particular embodiments, in addition or in the alternative to the direction of the gravity vector waking or unlocking a device, a rate of change of the direction or magnitude of the gravity vector may be used to wake or unlock a device. for example, if the rate of change of the gravity vector is zero for a predetermined amount of time (in other words, the device has been held in a particular position for the predetermined amount of time), the device may be woken or unlocked. as another example, one or more inertial sensors in the device may detect a specific gesture or sequence of gestures for activating a display or other suitable component or application. in particular embodiments, the encoder of the device is robust to accidental activation, and thus can be left active so that the user may change between selections while bringing the device up to their angle of view. in other embodiments the encoder may be deactivated based on context or user input. in addition or in the alternative to power conservation, particular embodiments may lock one or more sensors, particular functionality, or particular applications to provide security for one or more users. appropriate sensors may detect activation or unlocking of the secure aspects of the device or of another device paired with or communicating with the wearable device. for example, a specific gesture performed with the device or on a touch-sensitive area of the device may unlock one or more secure aspects of the device. as another example, particular rotation or sequence of rotations of a rotatable ring of the device may unlock one or more secure aspects of the device, on its own or in combination with other user input.
for example, a user may turn a rotatable ring to a unique sequence of symbols, such as numbers or pictures. in response to receiving the sequence of rotational inputs used to turn the rotatable ring, the display may display the specific symbol(s) corresponding to each rotational input, as described more fully herein. in particular embodiments, the symbols used may be user-specific (such as, e.g., user pictures stored on or accessible by the device or symbols pre-selected by the user). in particular embodiments, different symbols may be presented to the user after a predetermined number of unlockings or after a predetermined amount of time. the example inputs described above may also or in the alternative be used to activate/deactivate aspects of the device, particular applications, or access to particular data. while this disclosure describes specific examples of user input unlocking secure aspects of a device, this disclosure contemplates any suitable input or combination of inputs for unlocking any secure aspect of the device. this disclosure contemplates that input or other suitable parameters for unlocking secure aspects of a device or activating/deactivating components of the device may be user-customizable. in particular embodiments, a wearable electronic device may detect one or more gestures performed with or on the device. gestures may be of any suitable type, may be detected by any suitable sensors (e.g. inertial sensors, touch sensors, cameras, or depth sensors), and may be associated with any suitable functionality. for example, one or more depth sensors may be used in conjunction with one or more cameras to capture a gesture. in particular embodiments, several depth sensors or cameras may be used to enhance the accuracy of detecting a gesture or the background associated with a gesture.
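the rotation-sequence unlock described above can be sketched as mapping ring detents to symbols and comparing the entered sequence against a stored one. the symbol set, detent model, and stored sequence below are hypothetical illustrations, not taken from the disclosure.

```python
# illustrative sketch: each relative rotation of the ring (in detent "clicks")
# selects a symbol; the entered symbol sequence must match the stored sequence.

SYMBOLS = ["dog", "7", "boat", "key", "3", "star"]  # ring positions (made up)

def symbols_from_rotations(rotations):
    """Map a list of relative detent rotations (+/- clicks) to selected symbols."""
    pos, out = 0, []
    for clicks in rotations:
        pos = (pos + clicks) % len(SYMBOLS)
        out.append(SYMBOLS[pos])
    return out

def try_unlock(rotations, stored):
    """Unlock only when the entered symbol sequence matches the stored one."""
    return symbols_from_rotations(rotations) == stored
```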
when appropriate, sensors used to detect gestures (or processing used to initiate functionality associated with a gesture) may be activated or deactivated to conserve power or provide security, as described more fully above. as shown above, fig. 19 illustrates an example sensor detection system and provides specific examples of gesture detection, processing, and prioritization. in particular embodiments, specific applications may subscribe to specific gestures or to all available gestures; or a user may select which gestures should be detectable by which applications. in particular embodiments, gestures may include manipulation of another device while using the wearable device. for example, a gesture may include shaking another device while aiming, moving, or otherwise utilizing the wearable device. this disclosure contemplates that, where suitable, any of the gestures described herein may involve manipulation of another device. while the examples and illustrations discussed below involve specific aspects or attributes of gestures, this disclosure contemplates combining any suitable aspects or attributes of the gesture and sensor described herein.
in particular embodiments, gestures may include gestures that involve at least one hand of the user and an appendage on which the device is worn, such as e.g. the other wrist of the user. for example, in particular embodiments, a user may use the hand/arm on which the device is worn to appropriately aim an optical sensor of the device (e.g. a camera or depth sensor) and may move or position the other arm/hand/fingers to perform a particular gesture. as described herein and illustrated in figs. 16-17 , in particular embodiments the scene aimed at may be displayed on the device's display, such that a user can view the real scene, the scene as-displayed on the device, and the user's hand/arm/fingers, if in the angle of view of the sensor. in particular embodiments, the displayed scene may include the hands/fingers/arm detected by the sensor and used to perform the gesture. figs. 27-28 illustrate example gestures in which the user aims an outward-facing (e.g. away from the body of the user) sensor on the device (e.g.
on the band of the device, as illustrated in the figures) and moves or positions his other arm/hand/fingers to perform a gesture. for example, in fig. 27 , an outward sensor detects an object in the angle of view of the sensor 2705 , an outward sensor (which may be the same sensor detecting the object) detects one or more fingers pointing at the object 2710 , and when the pointing finger(s) are determined to be at rest 2715 , a gesture is detected 2720 . referring to fig. 19 , raw gesture data captured by the outward-facing camera can be conditioned and cleaned of noise and that data can be sent to the heuristic gesture detector. the gesture priority decoder processes the gesture data and determines when the gesture has been identified with sufficient certainty. when the gesture has been identified, the gesture is sent to the sensor hub driver which provides an api to the end applications and system controllers. as examples of functionality associated with this gesture, a camera may focus on the object, the object detected and pointed at may then appear on the display, information about that object may appear on the display, and displayed content may be transferred to another device's display (e.g. when the object is another device). fig. 28 illustrates an example gesture similar to the gesture of fig. 27 ; however, the illustrated gesture includes the outward-facing sensor detecting a “tapping” motion of the finger(s) (e.g. that the finger(s) are moving away from the sensor). for example, the gesture of fig. 28 may include detecting an object in the scene of a camera (or other suitable sensor) in step 2805 , detecting the finger in the scene in step 2810 , detecting a lack of lateral movement of the finger in step 2815 , detecting the finger tip moving further away from the sensor in step 2820 , and detecting a gesture in step 2825 . the gesture illustrated in fig. 28 may provide any suitable functionality. 
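the fig. 28 sequence above (finger present, no lateral movement, fingertip receding from the sensor) can be sketched as a check over successive fingertip samples. the frame format and thresholds here are illustrative assumptions, not the patent's implementation.

```python
# illustrative sketch of "tap" detection: a fingertip with negligible lateral
# movement whose distance from the sensor then increases reads as a tap.

def detect_tap(frames, lateral_eps=0.01, depth_eps=0.05):
    """frames: list of (x, y, depth) fingertip samples; True if a tap is seen."""
    for prev, cur in zip(frames, frames[1:]):
        lateral = abs(cur[0] - prev[0]) + abs(cur[1] - prev[1])
        receding = cur[2] - prev[2] > depth_eps     # fingertip moving away
        if lateral < lateral_eps and receding:
            return True
    return False
```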
for example, the “tapped” object may be selected from the objects displayed on the display screen. figs. 29-30 illustrate example gestures where an object is detected with an outward-facing sensor along with movement of the user's fingers and hand. for example, fig. 29 illustrates the outward-facing sensor detecting two fingers separated 2915 , the two fingers coming together (e.g. in a pinching motion) 2920 , and then the pinched fingers moving towards the sensor 2925 . the motion of the fingers coming together and moving toward the sensor may occur simultaneously or in sequence, and performing the steps in sequence (or time between steps in the sequence) or simultaneously may each be a different gesture. in fig. 30 , the two fingers illustrated are initially close together 3010 , and the outward-facing sensor detects the fingers moving apart 3020 and the hand moving away 3015 . as with fig. 29 , the movement of the fingers and the hand may be simultaneous or in any suitable sequence. in addition, aspects of figs. 29-30 may be combined to form a gesture. for example, pinching fingers together and moving away from the sensor may be a unique gesture. in particular embodiments, the detected fingers or hand may be manipulating another device, and that manipulation may form part of the gesture. as for all example gestures described herein, this disclosure contemplates any suitable functionality associated with gestures illustrated in figs. 29-30 . figs. 31-32 illustrate example gestures similar to figs. 29-30 , except that here all fingers are used to perform the gesture. in fig. 31 , the fingers are detected as initially close together (e.g. in a fist) 3105 , the fist is detected moving away from the sensor 3110 , and the sensor detects the fist opening 3115 . again, the sequence of steps illustrated may occur in any suitable order. fig. 32 illustrates the reverse of fig. 31 . figs. 31-32 may be associated with any suitable functionality. for example, fig.
31 illustrates an example of sending all or a portion of content displayed on the device to another device, such as the television illustrated in fig. 31 . likewise, the gesture in fig. 32 may pull some or all of the content displayed on another device to the display of the wearable device. for example, the gestures of figs. 31-32 may be implemented when the user performs the gestures with the wearable device in proximity of another device, such as a smart phone, tablet, personal computing device, smart appliance (e.g. refrigerator, thermostat, or washing machine), or any other suitable device. the described functions are merely examples of functionality that may be associated with gestures illustrated in figs. 31-32 , and this disclosure contemplates that other suitable gestures may perform the described functionality. figs. 33-37 illustrate an outward-facing sensor detecting a hand or portion of an arm swiping in front of the sensor. in particular embodiments, swiping with the front of the hand may be a different gesture than swiping with the back of the hand. figs. 33-34 illustrate the hand being swiped from right to left 3310 - 3315 and left to right 3410 - 3415 across the sensor's angle of view, and figs. 35-37 illustrate the hand being swiped from bottom to top 3510 - 3515 (as well as 3735 - 3740 ) and top to bottom 3610 - 3615 (as well as 3710 - 3715 ) across the sensor's angle of view. as illustrated, the hand may initially start in the angle of view, pass through the angle of view, and exit the angle of view (as illustrated in fig. 36 ); may start outside of the angle of view, pass through the angle of view, and exit the angle of view (as illustrated in fig. 37 ); may start out of the angle of view, pass through a portion of the angle of view, and remain in the angle of view (as illustrated in figs. 33-35 ); or may start in the angle of view, pass through a portion of the angle of view, and remain in the angle of view.
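the four start/end cases enumerated above reduce to whether the hand is inside the angle of view at the beginning and at the end of the swipe. a minimal sketch of that classification (the labels are illustrative, not from the disclosure):

```python
# illustrative classifier for the swipe variants of figs. 33-37, keyed on
# whether the hand starts and ends inside the sensor's angle of view.

def classify_swipe(starts_in_view, ends_in_view):
    if starts_in_view and not ends_in_view:
        return "start-in, exit"        # e.g. fig. 36
    if not starts_in_view and not ends_in_view:
        return "pass-through"          # e.g. fig. 37
    if not starts_in_view and ends_in_view:
        return "enter, remain"         # e.g. figs. 33-35
    return "start-in, remain"
```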
this disclosure contemplates the hand being swiped at other angles, such as, e.g., entering at a 45 degree angle below and to the right of the device and exiting at a 45 degree angle relative to the top and to the left of the device. further, this disclosure contemplates detecting hand swipes in motions other than a straight line, such as curved swipes or triangular swipes. this disclosure contemplates any suitable functionality associated with any or all of the gestures illustrated in figs. 33-37 , such as, for example, transitioning among user interfaces displayed on the device or among applications active and displayed on the device, opening or closing applications, or scrolling through displayed content (e.g. documents, webpages, or images). as reiterated elsewhere, this disclosure contemplates any suitable gesture associated with the functionality described in relation to figs. 33-37 . figs. 38-39 illustrate example gestures where the outward-facing sensor detects the user's hand in the angle of view 3805 and detects one or more fingers pointing in a direction (along with, in particular embodiments, a portion of the user's hand or arm) 3815 . the gesture detected may depend on the fingers detected or the direction in which the detected fingers are pointed. for example, as illustrated in fig. 38 the finger may be a thumb pointing upwards 3820 , and in fig. 39 the finger may be a thumb pointing downwards 3920 . any suitable functionality may be associated with gestures illustrated in figs. 38-39 , such as saving or deleting a file locally on the device or on an associated device, or approving or disapproving of changes made to settings or other content. fig. 40 illustrates an example gesture involving a shape made with multiple fingers or a portion of the hand in the angle of view of the outward-facing sensor. as illustrated in fig. 40 , the shape may be a ring 4010 , and the gesture may include fingers not involved in the shape pointing in a specific direction 4015 .
as illustrated in fig. 40 , a gesture may include holding the shape 4020 (and possibly the other fingers) for a predetermined amount of time. figs. 41-42 illustrate example gestures including covering all or a portion of the outward-facing sensor with the user's fingers or hand. covering the sensor from the top of the device with a thumbs-down type gesture 4105 (as illustrated in fig. 41 ) may be a different gesture than covering the sensor from the bottom of the device 4210 (as illustrated in fig. 42 ) or the sides of the device. the direction of covering may be detected by, e.g., the shape of the hand when covering the device, the orientation of the hand when covering the device, data from other sensors indicating the direction in which the outward-facing sensor is being covered (e.g. detecting that the display and the outward-facing sensor are covered), or any other suitable technique. figs. 43-44 illustrate example gestures where one or more of the user's fingers or portion of a hand/arm are detected in the angle of view of the outward-facing sensor 4305 / 4405 , and then move within the angle of view (or “frame”) to perform a specific gesture 4310 / 4320 / 4410 / 4420 . in particular embodiments, a gesture may be any suitable movement or may be movement in a specific pattern. in particular embodiments, a gesture may be associated with the fingers or a portion of the hand/arm detected. for example, a single pointing finger may be associated with a gesture 4305 (as illustrated in fig. 43 ) or multiple fingers/a palm may be associated with a gesture 4405 (as illustrated in fig. 44 ). in particular embodiments, the direction of the palm (e.g. front, back, at an angle) may be detected and associated with a gesture. fig. 45 illustrates an example gesture including detecting a shape with multiple fingers or the hand/arm of the user 4505 , and detecting movement of the shape in the angle of view 4510 / 4520 . fig. 45 illustrates the shape of fig.
40 moving throughout the outward-facing sensor's angle of view. fig. 46 illustrates an example gesture involving detecting one or more fingers (some or all of a user's hand/arm) and their initial orientation, and subsequently detecting the change in orientation or the rate of change of orientation over time. for example, fig. 46 illustrates detecting two fingers in an angle of view at step 4605 , detecting the fingers and edge of the hand in the angle of view at step 4610 , detecting the fingers making a “c” shape at step 4615 , decoding an initial orientation of the “c” shape at step 4620 , decoding a change in orientation of the “c” shape at step 4625 , determining a relative rotational value of the “c” shape at step 4630 , and detecting the gesture at step 4635 . this disclosure contemplates any suitable shape made with the user's fingers/hand/arm. fig. 47 illustrates an example gesture that involves detecting the number of fingers in a particular position in the outward-facing sensor's angle of view. for example, fig. 47 illustrates detecting fingertips in an angle of view at step 4705 , such as for example one outstretched thumb, an outstretched thumb and a finger, or an outstretched thumb and two fingers. the specific fingertip orientation configuration is detected at step 4710 , and the mapping of the configuration to at least a numeric count of the fingers is performed in step 4715 to detect the gesture in step 4725 . each of the displayed images may be a different gesture. this disclosure contemplates any suitable position of the fingers that comprise a gesture. as for all other example gestures described herein, this disclosure contemplates any suitable functionality associated with the gestures. for example, each gesture of fig. 47 may be associated with a contact to call, e-mail, or text and the detected gesture may activate the call, e-mail, or text to the contact assigned to the gesture.
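the fig. 47 association suggested above, a counted fingertip configuration triggering a call, e-mail, or text to an assigned contact, can be sketched as a lookup table. the contacts and assignments below are made-up placeholders, not mappings from the disclosure.

```python
# illustrative mapping from a detected fingertip count to a per-contact action.

GESTURE_CONTACTS = {
    1: ("alice", "call"),    # one outstretched thumb
    2: ("alice", "email"),   # thumb and a finger
    3: ("alice", "text"),    # thumb and two fingers
}

def act_on_finger_count(count):
    contact, action = GESTURE_CONTACTS.get(count, (None, None))
    return f"{action} {contact}" if contact else "no gesture"
```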
in particular embodiments, the position of the hand/arm/fingers may indicate which method of contact should be used for the contact associated with the gesture. figs. 48-49 illustrate example gestures involving dual sensors on the device. for example, fig. 48 illustrates a sensor on the bottom band portion of the device. that sensor detects the position of the user's other hand relative to the device, and detects separation of the hand from the sensor. in particular embodiments, the gesture may include determining that both hands are moving, such as for example by additional information supplied by one or more inertial sensors in the device or by an inward-facing (e.g. facing the body of the user) camera detecting movement of the device via change in scenery. for example, in fig. 48 a hand is detected in the angle of view at step 4805 . a sensor detects that the hand is in a pinched shape at step 4810 and the same or another sensor detects that the device is in a horizontal orientation in step 4815 . a sensor detects the hand moving relative to the device at step 4820 and estimates the relative position at step 4825 . the gesture is detected at step 4830 . similarly, fig. 49 illustrates an example gesture also involving detection of the user's hand in the angle of view and subsequently moving away from a device sensor. however, in fig. 49 the device sensor is positioned on the top of the device (e.g. a front-facing sensor). as an example, a hand is detected in the angle of view of a front-facing camera in step 4905 . the hand is detected in a pinched shape in step 4910 , and the device is detected in a horizontal orientation in step 4915 . the hand moves closer or further from the device in step 4920 , and the relative position estimate is performed in step 4925 , at which point the gesture is detected in step 4930 . figs. 50-58 illustrate example gestures detected by at least one front-facing sensor (e.g. sensor on the top of the device).
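gestures like those of figs. 48-49 are ordered step sequences (hand in view, pinched shape, horizontal orientation, relative motion). the steps can be sketched as an in-order matcher over a stream of detected events; the event names are illustrative, and a real detector would also enforce timing between steps.

```python
# illustrative in-order matcher: True if the required step names appear in the
# event stream in order (other events may occur between them).

def match_sequence(events, required):
    it = iter(events)
    # `step in it` consumes the iterator up to the first match, so each
    # required step must be found after the previous one.
    return all(step in it for step in required)
```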
any of the gestures of figs. 50-58 may be detected by sensors in any other suitable location (e.g. outward-facing, as described above), and any of the gestures detected by a sensor described in another location may be detected by a front-facing sensor, where appropriate. fig. 50 illustrates an example gesture involving one or more fingertips hovering above the device, and the front-facing sensor detects the fingertips in step 5005 , detects the position of the fingertips or motion (or lack of motion) of those fingertips in steps 5010 and 5015 to detect a gesture in step 5020 . fig. 51 illustrates an example gesture in which steps 5105 and 5110 are identical to 5005 and 5010 , respectively. however, the detected fingertips move away from the front-facing sensor in step 5115 ; in particular embodiments, a gesture may include detecting one or more of the fingertips changing position relative to each other, such as for example moving apart as in step 5120 . fig. 52 illustrates the fingertips detected by the sensor in step 5205 , the fingertips moving together in step 5210 , the fingers moving toward the device in step 5215 , and the duration for which the motion lasts in step 5220 to detect the gesture in step 5225 . as illustrated in fig. 53 , in particular embodiments a gesture may include detecting a change in relative position of the fingertips in addition to the motion of the fingertips toward the sensor. for example, in step 5305 one or two fingers are detected on the front surface; in step 5310 the fingers are detected moving upward or downward; and a gesture is detected in step 5315 . in particular embodiments, the duration of the gesture of figs. 50-52 may determine whether a gesture is detected, or different durations may comprise different gestures. figs. 54-57 illustrate example gestures involving motion of one or more fingers or motion of a portion of a hand/arm across the face of the device (and thus across the front-facing sensor).
as illustrated, a gesture may depend on the number of fingers used (e.g. two fingers vs. a whole palm); on the direction of motion across the device face (e.g. bottom to top or left to right); on the duration of motion across the device face; on the proximity of the detected fingers or hand/arm to the device face; on the portion of the device face (e.g. all or a portion, and the relative location of the portion (e.g. bottom half)); or whether the detected portions are initially in the front-facing sensor's angle of view, initially out of the angle of view, end in the angle of view, or end out of the angle of view. for example, the gesture of fig. 54 may include detecting one or two fingers detected on the front surface in step 5405 ; detecting the fingers moving left in step 5410 , and detecting the gesture in step 5415 . as another example, fig. 55 may include detecting one or two fingers detected on the front surface in step 5505 ; detecting the fingers moving right in step 5510 , and detecting the gesture in step 5515 . as another example, fig. 56 may include detecting no fingers in step 5605 , detecting multiple fingers entering the angle of view from the left, detecting the front surface covered, detecting the fingers exiting the frame in step 5620 , and detecting a gesture in step 5625 . as yet another example, fig. 57 may include detecting no fingers in step 5705 , detecting multiple fingers entering the angle of view from the right in step 5710 , detecting a covering of the full front surface in step 5715 , detecting the fingers exiting the angle of view in step 5720 , and detecting a gesture in step 5725 . as with all gestures described herein, any suitable combination of those factors (and any other suitable factors associated with the gestures) may be used to determine a gesture or functionality corresponding to the gesture. 
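the factors listed above (finger count, direction of motion, duration, coverage) can key a simple dispatch table from detected attributes to functionality. the entries below are hypothetical examples rather than mappings from the disclosure.

```python
# illustrative dispatch from (finger count, swipe direction) to a function of
# the device; unknown combinations fall through to "unrecognized".

GESTURES = {
    (2, "left"):  "previous screen",
    (2, "right"): "next screen",
    (5, "down"):  "close application",   # whole palm swiped downward
}

def dispatch(finger_count, direction):
    return GESTURES.get((finger_count, direction), "unrecognized")
```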
any suitable functionality may be associated with a gesture, such as, for example, transitioning between graphical user interface screens, scrolling through displayed content, or scrolling through available applications or devices to communicate/pair with. fig. 58 illustrates an example gesture involving one or more fingers detected on the edge of the device, and may include movement of those fingers around all or a portion of the edge of the device. for example, as illustrated in fig. 58 , a gesture may include detecting no fingers in step 5805 , detecting a single finger at the edge of the front face in step 5810 , detecting a finger moving along the edge in step 5815 , decoding the angular motion of the finger relative to the device in step 5820 , and detecting a gesture in step 5825 . as an example of functionality associated with this gesture, the movement of the finger may rotate some or all of the displayed content on the device. in particular embodiments, a gesture may include a motion of the wearable device, such as, for example, by the arm wearing the device. the motion may be detected by any suitable sensors, such as inertial sensors, orientation sensors, or any suitable combination thereof. figs. 59-66 illustrate example gestures involving detection of the gravity vector relative to the device (e.g. pointing in the direction of the device face or pointing down through the base) and detecting subsequent motion of the device relative to that gravity vector. for example, fig. 59 may include detecting the gravity vector pointing downward through the face in step 5905 , detecting acceleration of the device along the same axis as the gravity vector is pointing in step 5910 , detecting that the acceleration of the device remains for some time in step 5915 , and detecting a gesture in step 5920 . fig. 60 is substantially similar to the gesture of fig. 59 , except that the gravity vector points down through the base (rather than the face) in step 6005 . fig.
61 illustrates a gesture that uses a gravity vector to determine orientation/position of the device, for example, that the device is not by the user's body. motion of the device from the detected orientation (such as, for example, perpendicular to the gravity vector) may be detected, resulting in a gesture. for example, a detected gravity orientation may indicate that an arm is not by the side of the body in step 6105 , a lateral acceleration of the device may be detected in step 6110 , the acceleration may be detected for some time in step 6115 , and a gesture may be detected in step 6120 . as figs. 59-61 indicate, detecting an aspect of the motion (e.g. duration of acceleration) may trigger a gesture, and ranges of an aspect (ranges of duration of motion) may each correspond to a different gesture. figs. 62-63 illustrate rotational motion of a device. as in fig. 61 , detection of the initial orientation or position of the device may be part of the gesture detection. for example, the gesture of fig. 62 may include detecting that the gravity vector indicates the arm is not by the side of the body in step 6205 , detecting some rotational motion in step 6210 , estimating that the radius of the rotational motion is large enough for elbow motion in step 6215 , estimating the relative rotation in step 6220 , and detecting a gesture in step 6225 . as another example, the gesture of fig. 63 may include detecting that the gravity vector indicates the arm is not by the side of the body in step 6305 , detecting some rotational motion in step 6310 , estimating that the radius of the rotational motion is small enough for wrist motion in step 6315 , estimating the relative rotation in step 6320 , and detecting a gesture in step 6325 . as illustrated in figs. 62-63 , a gesture may include estimating the type of rotation of the device, such as, for example, rotation primarily from the shoulder ( fig. 62 ), rotation primarily from the elbow ( fig.
63 ), or any other suitable rotation. in addition or in the alternative to the radius of rotation, a gesture may include detecting the amount of rotation, duration of rotation, radial acceleration of the rotation, any other suitable aspect of the rotation, or any suitable combination thereof. as with figs. 61-63 , fig. 64 illustrates a gesture involving detecting the initial orientation or position of the device. for example, the gesture of fig. 64 may include detecting that the gravity vector indicates that the arm is not by the side of the body in step 6405 , detecting lateral acceleration of the arm along the axis of the arm in step 6410 , detecting that the acceleration remains for some time in step 6415 , and detecting a gesture in step 6420 . fig. 65 illustrates that a gesture may include motion of the device along the axis of the appendage wearing the device, such as, for example, the acceleration of the device along that axis. the gesture may include an impact along the path of motion (e.g. caused by the hand stopping or contacting an object) and subsequent reversal of the motion. the back-and-forth motion may repeat until the motion stops or the hand returns to some position, such as, e.g., the user's side. in particular embodiments, different gestures may be based on the number or frequency of the back-and-forth motion. for example, the gesture of fig. 65 may include detecting that the gravity vector indicates that the arm is not by the side of the body in step 6505 , detecting that the hand is in motion in step 6510 , detecting an impulse (impact) along the path of the motion in step 6515 , detecting that the hand reversed motion along the same linear path in step 6520 , repeating steps 6515 and 6520 as suitable, detecting that the motion stops for some time in step 6525 , and detecting a gesture in step 6530 . figs.
66-68 illustrate example gestures based on detection of motion that matches a predetermined motion template, which may be user-customizable or user-creatable. in particular embodiments, customizable gestures may include an initial position or orientation of the device, motion or aspects of motion in a particular direction, stopping and starting of motion, duration of motion, or any other suitable motion parameter. some or all of the parameters may be user-customizable, in particular embodiments. in particular embodiments, a detected gesture may be determined by matching the detected motion to the closest available motion template. for example, as illustrated in figs. 66-68 , a gesture may correspond to a horizontal position or motion of the arm or fingers. for example, as illustrated in fig. 66 , a gesture may include detecting a gravity vector oriented down through the bottom of the base of the device in step 6605 , detecting motion forward and inward in step 6610 , matching a motion template in step 6615 (for example, using heuristic, numeric, or pattern-based gesture recognition modules of fig. 19 ), and detecting a gesture in step 6620 . fig. 67 may include detecting a gravity vector oriented sideways through the bottom of the base of the device in step 6705 , detecting motion forward and inward in step 6710 , matching a motion template in step 6715 (for example, using heuristic, numeric, or pattern-based gesture recognition modules of fig. 19 ), and detecting a gesture in step 6720 . fig. 68 may include detecting a gravity vector indicating an arm is not by the side of the body in step 6805 , detecting motion of the device in step 6810 , detecting motion stopping in step 6815 , matching a motion template in step 6820 , selecting the best motion-template match in step 6825 , and detecting a gesture in step 6830 . while figs. 
66-68 illustrate specific examples of customizable gestures corresponding to specific motion templates, this disclosure contemplates any suitable gestures (or any aspect thereof) detected by any suitable sensors being customizable by a user of the device. in particular embodiments, a gesture may optionally include detecting some non-motion or non-orientation input. for example, figs. 69-71 illustrate a gesture comprising detection of acoustics, although the gestures illustrated do not require such detection. fig. 69 illustrates an acoustic output (such as, e.g., ringing from an incoming or outgoing telephone call) or response, followed by some motion of the device (such as the device being brought to a user's face). for example, an audio response or output is initiated in step 6905 , upward motion is detected in step 6910 , stopping of upward motion is detected in step 6915 , the gravity vector is within a predetermined window in step 6920 , and a gesture is detected in step 6925 . in particular embodiments, a gesture may include detecting the gravity vector in a particular orientation or orientation window, as illustrated. the gesture of fig. 69 may also include detecting the position of the user's hand/fingers. as an example of functionality that may be associated with the gesture illustrated in fig. 69 , if the fingers are brought near the ear or face in the position indicated, the user may answer or place a telephone call. fig. 70 and steps 7005 - 7025 illustrate an example gesture having attributes similar to those described for fig. 69 , but involving a different orientation of the user's hand/fingers. fig. 71 illustrates an example gesture including acoustics generated by the user (e.g. by the user snapping her fingers together), which are detected by a microphone associated with the device. for example, fig. 
71 may include detecting a gravity vector indicating an arm is not by the side of the body in step 7105 , detecting a motion with relatively high acceleration in step 7110 , detecting a sudden change in one or more acoustic frequencies in step 7115 , and detecting a gesture in step 7120 . as illustrated in fig. 71 , the snap motion may be detected solely by the motion generated by the snap alone (e.g. by the vibration of the user's hand/skin or by some degree or rate of change of rotation due to the snap), or may be detected by the combination of motion plus an auditory input generated by the snap. in particular embodiments, the auditory confirmation must be detected within a predetermined time of the motion for the gesture to be detected. figs. 72-73 illustrate example gestures involving periodic motion of the device, such as shaking of the arm the device is on in the lateral or vertical direction. fig. 72 illustrates a gesture including detecting the gravity vector indicating the arm is not beside the body in step 7205 , detecting the device moving laterally forward on an axis in step 7210 , detecting the device moving backwards on the same axis in step 7215 , repeating the steps of 7210 and 7215 as is desirable, and detecting a gesture in step 7220 . fig. 73 illustrates a gesture including detecting the gravity vector indicating the arm is not beside the body in step 7305 , detecting the device moving vertically forward on an axis in step 7310 , detecting the device moving backwards on the same axis in step 7315 , repeating the steps of 7310 and 7315 as is desirable, and detecting a gesture in step 7320 . fig. 74 illustrates an example gesture involving an adjustment of the position/orientation of the device relative to the user's body. for example, the gesture of fig. 
74 may include detecting the gravity vector indicating the arm is beside the body in step 7405 , detecting the gravity vector indicating the arm is beside the body in step 7410 , and detecting a gesture in step 7415 . any suitable functionality may be associated with the gestures of figs. 72-75 , such as, for example, waking the device from a low-power state. fig. 75 illustrates an example gesture involving the height of the device or the relative change in height of the device from start to stop of the device. in addition to the height of the device, a gesture may include the orientation of the device before, during, or after the gesture. for example, a gesture may include detecting the gravity vector indicating the arm is not beside the body in step 7505 , detecting upward motion in step 7510 , detecting halt of upward motion in step 7515 , detecting that the gravity vector points through the side of the device's base in step 7520 , and detecting a gesture in step 7525 . any suitable functionality may be associated with the gesture of fig. 75 , such as, for example, activating equipment paired with the device, turning on one or more lights in a room, or activating equipment near the device. in particular embodiments, a gesture may include interacting directly with the body or band of a wearable device. for example, fig. 76 illustrates a gesture involving contact with a touch-sensitive area of a band worn about the user's wrist. the gesture may include detecting that the device is not in a locked state in step 7605 , detecting an absence of touch on a band in step 7610 , detecting touch on the band in step 7615 , decoding the position of the touch in step 7620 , and detecting a gesture in step 7625 . fig. 77 illustrates that touches in multiple positions may be determined to be a single gesture, such as, for example, to unlock a device or aspects of the device. 
the gesture may include detecting that the device is not in a locked state in step 7705 , detecting an absence of touch on a band in step 7710 , detecting touch on the band in step 7715 , decoding the position of the touch in step 7720 , decoding an action in step 7725 , and detecting a gesture in step 7730 . fig. 78 illustrates that a gesture may include contacting a touch-sensitive area of a device and sliding across a touch-sensitive area while maintaining contact with the device. the gesture may include detecting that the device is not in a locked state in step 7805 , detecting an absence of touch on a band in step 7810 , detecting touch on the band in step 7815 , detecting movement of the touch point(s) in step 7820 , decoding relative motion in step 7825 , and detecting a gesture in step 7830 . in particular embodiments, a gesture may include the duration of contact, physical area of contact (e.g. with one finger or two fingers), the sequence of contact, pressure generated by contact, or any other suitable contact-related attribute. while figs. 76-78 illustrate contact with a touch-sensitive area on a band, this disclosure contemplates that a gesture may involve contact on a touch-sensitive area on any suitable location of the device, such as the device band, ring, display, or any suitable combination thereof. for example, figs. 79-80 illustrate contact with touch-sensitive areas on a ring of the device, similar to the gestures of figs. 77-78 . for example, a gesture may include detecting that the device is not in a locked state in step 7905 , detecting lack of touch on a ring in step 7915 , detecting touch on the ring in step 7920 , and detecting a gesture in step 7925 . 
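The band-touch gestures of figs. 76-77, in which a sequence of decoded touch positions on the band is interpreted as a single gesture (e.g. an unlock), could be sketched as follows. This is a hypothetical Python illustration: the class name, the position indices, and the unlock sequence are assumptions, not part of this disclosure.

```python
# Hypothetical sketch of decoding multiple band-touch positions into a
# single "unlock" gesture, as in fig. 77. Positions are assumed to be
# integer indices along the band; the unlock sequence is illustrative.

class BandTouchDecoder:
    def __init__(self, unlock_sequence):
        self.unlock_sequence = list(unlock_sequence)
        self.observed = []

    def on_touch(self, position):
        """Record one decoded touch position on the band."""
        self.observed.append(position)
        # Keep only as many recent touches as the unlock sequence needs.
        self.observed = self.observed[-len(self.unlock_sequence):]

    def gesture_detected(self):
        """True once the most recent touches match the unlock sequence."""
        return self.observed == self.unlock_sequence

decoder = BandTouchDecoder(unlock_sequence=[2, 0, 3])
for pos in [1, 2, 0, 3]:   # a stray touch followed by the unlock code
    decoder.on_touch(pos)
print(decoder.gesture_detected())  # True
```

A real implementation would also honor the locked/unlocked state checks of steps 7705-7730 before decoding positions.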
as another example, a gesture may include detecting that the device is not in a locked state in step 8005 , detecting lack of touch on a ring in step 8010 , detecting touch on the ring in step 8015 , detecting movement of the touch point in step 8020 , decoding relative motion in step 8025 , and detecting a gesture in step 8030 . fig. 81 illustrates a gesture involving multi-touch contact with a touch-sensitive area of a device face, and detecting subsequent motion of the contact points, caused by, e.g., motion of the fingers contacting the touch-sensitive area or by movement of the wrist/hand on which the device is worn. the gesture may include detecting that the device is not in a locked state in step 8105 , detecting lack of touch on a surface in step 8110 , detecting at least two fingers touching the surface in step 8115 , detecting movement of the touch points in step 8120 , decoding relative motion in step 8125 , and detecting a gesture in step 8130 . motion of the wrist/hand may be detected by, e.g., inertial sensors in the device, allowing the different ways of moving touch points to be two distinct gestures. fig. 82 illustrates a gesture involving initial contact with a device, which may be detected by one or more proximity sensors on or in the device, or inertial sensors on or near the device. the gesture may involve detecting that the contact persists, indicating that, e.g., the user has put the device on. for example, the gesture may include detecting no contact with the rear or band proximity sensor in step 8205 , detecting contact by the proximity sensor in step 8210 , detecting that the contact persists in step 8215 , and detecting a gesture in step 8220 . the gesture of fig. 82 may unlock or power on a sleeping device, or provide any other suitable functionality. in particular embodiments, a gesture may include contact with skin near the device. fig. 83 illustrates a gesture involving tapping on the skin near where the device is worn. 
the tapping may be detected by vibration sensors in the device. the tapping motion may be confirmed by, e.g., one or more acoustic sensors detecting sound generated by the tapping gesture. for example, the gesture may include detecting that the device is unlocked in step 8305 , detecting motion with a relatively high acceleration in step 8310 , detecting the sound of, for example, a tap in step 8315 , matching the motion or sound to a pattern in step 8320 , and detecting a gesture in step 8325 . fig. 84 illustrates a gesture involving swiping of the skin near the device, which may be detected and confirmed by the sensors described in fig. 83 , above. for example, the gesture may include detecting that the device is unlocked in step 8405 , detecting motion with a relatively high acceleration in step 8410 , detecting the sound of, for example, a tap in step 8415 , detecting the vibrations or sound of lateral movement on the skin in step 8420 , matching the motion or sound to a pattern in step 8425 , and detecting a gesture in step 8430 . in particular embodiments, gestures may involve detecting metaphoric gestures made by the hand not wearing the device. for example, such a gesture may be detected by, e.g., any suitable front-facing sensor on or near the display of the device oriented such that the hand not wearing the device is in the angle of view of the sensor. fig. 85 illustrates an example gesture involving a front-facing sensor detecting motion of multiple fingers, such as tapping of the fingers. for example, the gesture may include determining that the device is in a predetermined orientation in step 8505 , detecting a fingertip in step 8510 , detecting motion of the fingertip in step 8515 or detecting a tap sound in step 8525 , and detecting one or more gestures in steps 8520 and 8530 . fig. 86 illustrates an example gesture involving motion of a single finger. 
for example, the gesture may include determining that the device is in a predetermined orientation in step 8605 , detecting a fingertip in step 8610 , detecting motion of the fingertip in step 8615 or detecting a tap sound in step 8525 , and detecting one or more gestures in step 8620 . fig. 87 illustrates a gesture involving detecting movement of a hand holding an object, detecting the motion of the object, locking on to the object, and then detecting subsequent motion of the object. as a specific example, the gesture may include detecting that the device is in a predetermined orientation in step 8705 , detecting a hand in step 8710 , detecting motion of the hand in step 8715 , detecting an additional object moving with the hand in step 8720 , locking on the object in step 8725 , detecting motion of the object in step 8730 , and detecting a gesture in step 8735 . for example, an object may be a pen or other stylus-like implement, and the front-facing sensor on the device may detect writing motions of the implement to, e.g., generate/store text on the device or on another device communicating with the wearable device. the example of fig. 87 may allow a user to generate drawings, notes, or other written content without actually generating written content on a display or other writing surface. as described more fully herein, any suitable gesture or combination of gestures may be used to impact or initiate augmented-reality (“ar”) functionality, and may be used to perform tasks using ar functionality. for example, the gestures of figs. 85-87 may be used to capture a user's interaction with a virtual keyboard, virtual mouse, or virtual touchscreen and those interactions may generate input on the wearable device or any other suitable paired device. 
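The object-tracking gesture of fig. 87, in which the device locks on to a stylus-like implement and records its writing motions as content, could be sketched as simple stroke bookkeeping. This is a hypothetical Python sketch: the sensor callbacks, the class name, and the point format are assumptions about how such tracking might be wired up.

```python
# Hypothetical sketch: once a front-facing sensor has locked on to a
# stylus-like object (fig. 87), successive positions of the object are
# accumulated into strokes for later rendering or recognition.

class StrokeRecorder:
    def __init__(self):
        self.strokes = []      # completed strokes
        self.current = None    # stroke currently being written

    def on_object_position(self, point):
        """Called for each tracked (x, y) position of the locked object."""
        if self.current is None:
            self.current = []
        self.current.append(point)

    def on_object_lost(self):
        """Object left the sensor's view: close out the current stroke."""
        if self.current:
            self.strokes.append(self.current)
        self.current = None

rec = StrokeRecorder()
for pt in [(0, 0), (1, 1), (2, 1)]:
    rec.on_object_position(pt)
rec.on_object_lost()
print(len(rec.strokes), len(rec.strokes[0]))  # 1 3
```

The recorded strokes could then be stored on the device or sent to a paired device, matching the disclosure's note that no content need be generated on a physical writing surface.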
while this disclosure describes specific examples of metaphoric gestures and object detection (and associated functionality), this disclosure contemplates any suitable metaphoric gestures, detection of any suitable objects, and such gestures associated with any suitable functionality. in particular embodiments, a gesture may involve the entire appendage on which a device is affixed or worn. for example, figs. 88-92 illustrate example gestures involving motion of the arm on which the device is worn. the gestures may include detecting the initial position of the arm (e.g. via an accelerometer detecting the direction of the gravity vector), detecting the motion of the device (via the arm), detecting the corresponding change in the gravity vector, and detecting that the arm has stopped moving. such gestures may also include detecting the duration of movement, the amount of movement (e.g. detecting a large radius of motion, confirming that the entire arm has moved), the acceleration of movement, or any other suitable movement-related attributes. as illustrated by figs. 88-92 , gestures may involve detecting arm movements above the head, to the front, to the side, to the back, or down from an initially-higher starting position. for example, a gesture may include detecting a gravity vector indicating a hand is on the side of the body in step 8805 , detecting upward movement of the hand in step 8810 , detecting that the gravity vector indicates the hand is above the head in step 8815 , detecting the hand stopping movement in step 8820 , and detecting a gesture in step 8825 . as another example, a gesture may include detecting a gravity vector indicating a hand is on the side of the body in step 8905 , detecting upward and forward movement of the hand in step 8910 , detecting that the gravity vector indicates the hand is horizontal in step 8915 , detecting the hand stopping movement in step 8920 , and detecting a gesture in step 8925 . 
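The arm gestures of figs. 88-92 all begin and end by testing the direction of the gravity vector in device coordinates (hand at the side, horizontal, or above the head). A minimal classifier could be sketched as below; the axis convention (+y pointing from the device toward the fingers) and the 0.8 threshold are assumptions for illustration only.

```python
# Hypothetical sketch: classifying arm position from the gravity vector
# in device coordinates, as used by the gestures of figs. 88-92.
# Input is a unit vector; axis convention and thresholds are assumed.

def classify_arm_position(gravity):
    """gravity: (x, y, z) unit vector; +y assumed to point toward the fingers."""
    gx, gy, gz = gravity  # gx, gz unused in this coarse classifier
    if gy > 0.8:          # gravity along the arm: hand hanging at the side
        return "side"
    if gy < -0.8:         # gravity against the arm: hand above the head
        return "overhead"
    return "horizontal"   # gravity roughly perpendicular to the arm

print(classify_arm_position((0.0, 1.0, 0.0)))   # side
print(classify_arm_position((0.0, -1.0, 0.0)))  # overhead
print(classify_arm_position((0.9, 0.1, 0.4)))   # horizontal
```

A full gesture detector would combine this check with the motion, duration, and stopping conditions the steps above describe.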
as another example, a gesture may include detecting a gravity vector indicating a hand is horizontal in step 9005 , detecting the hand moving downward and backward in step 9010 , detecting that the gravity vector indicates the hand is by the side in step 9015 , detecting the hand stopping movement in step 9020 , and detecting a gesture in step 9025 . as another example, a gesture may include detecting a gravity vector indicating a hand is by the side of the body in step 9105 , detecting the hand moving upward and backward in step 9110 , detecting that the gravity vector indicates the hand is horizontal in step 9115 , detecting the hand stopping movement in step 9120 , and detecting a gesture in step 9125 . as another example, a gesture may include detecting a gravity vector indicating a hand is by the side of the body in step 9205 , detecting the hand moving upward and outward in step 9210 , detecting that the gravity vector indicates the hand is horizontal in step 9215 , detecting the hand stopping movement in step 9220 , and detecting a gesture in step 9225 . in particular embodiments, gestures may involve motion of the entire body rather than just of the appendage on which the device is worn. in particular embodiments, a user may interact with the device via a variety of input mechanisms or types including, for example, the outer ring, touch-sensitive interfaces (e.g. the touch-sensitive layer), gestures performed by the user (described herein), or a speech interface (e.g. including voice input and speech recognition for applications including text input, communication, or searching). additionally, in particular embodiments, a user may interact with a graphical user interface presented on a circular display of the device via any of the input mechanisms or types. a user of the wearable electronic device may interact with the device (including, e.g., a graphical user interface presented on the circular display) by using the outer ring. 
in particular embodiments, the outer ring may be touch-sensitive, such that a user's touch on one or more portions of the ring may be detected as an input to the device and interpreted, causing one or more actions to be taken by the device (e.g. within a graphical user interface of the device). as an example, a touch-sensitive outer ring may be a capacitive ring or inductive ring, and a user of the device may perform any suitable touch gesture on the touch-sensitive ring to provide input to the device. the input may, for example, include swiping the ring with one finger, swiping the ring with two or more fingers, performing a rotational gesture with one or more fingers, or squeezing the ring. in particular embodiments, the outer ring may be rotatable, such that a physical rotation of the ring may serve as an input to the device. additionally, in particular embodiments, the outer ring may be clicked (e.g. pressed down) or squeezed. any of the embodiments of the outer ring may be combined, as suitable, such that the ring may be one or more of touch-sensitive, rotatable, clickable (or pressable), or squeezable. inputs from the different modalities of the outer ring (e.g. touch, rotation, clicking or pressing, or squeezing) may be interpreted differently depending, for example, on the combination of the modalities of input provided by a user. as an example, a rotation of the outer ring may indicate a different input than a rotation in combination with a clicking or pressing of the ring. additionally, feedback may be provided to the user when the user provides input via the outer ring, including haptic feedback, audio feedback, or visual feedback, described herein. fig. 93a illustrates an example of a user clicking (e.g. pressing down) on the outer ring, indicated by arrows 9310 . fig. 93b illustrates an example of a user squeezing the outer ring, indicated by arrows 9320 . fig. 
94a illustrates an example of a user rotating the outer ring, such that content 9410 of a graphical user interface of the device changes in accordance with the rotation (e.g. to the right). fig. 94b illustrates an example of a user performing a rotating gesture on a touch-sensitive ring, without the ring itself rotating, such that content 9420 of a graphical user interface of the device changes in accordance with the rotation (e.g. to the right). fig. 94c illustrates an example of a user rotating the outer ring while simultaneously pressing or clicking the ring, such that content 9430 of a graphical user interface of the device changes in accordance with the rotation (e.g. to the right) and the pressing or clicking. in particular embodiments, a touch-sensitive interface of the device (e.g. the touch-sensitive layer) may accept user touch input and allow the device to determine the x-y coordinates of a user's touch, identify multiple points of touch contact (e.g. at different areas of the touch-sensitive layer), and distinguish between different temporal lengths of touch interaction (e.g. differentiate gestures including swiping, single tapping, or double tapping). touch gestures (described herein) may include multi-directional swiping or dragging, pinching, double-tapping, pressing or pushing on the display (which may cause a physical movement of the display in an upward or downward direction), long pressing, multi-touch (e.g. the use of multiple fingers or implements for touch or gesturing anywhere on the touch-sensitive interface), or rotational touch gestures. fig. 95a illustrates an example of a user tapping 9510 a touch-sensitive interface (e.g. the touch-sensitive layer) to provide input to the device. the precise x-y coordinates of the user's tapping may be determined by the device through input from the touch-sensitive interface (e.g. the touch-sensitive layer). fig. 
95b illustrates an example of a user performing, respectively, a clockwise rotational gesture 9515 , a counter-clockwise rotational gesture 9520 , a vertical swipe gesture 9525 , and a horizontal swipe gesture 9530 . fig. 95c illustrates an example of a user touching the display (including a touch-sensitive layer with multi-touch sensing capability) using, respectively, one, two, or three points of contact 9535 (e.g. with one, two, or three fingers or implements) simultaneously. fig. 95d illustrates an example of a user performing touch gestures having multiple points of contact with the touch-sensitive interface. the user may, in this example, perform an expanding gesture 9540 , a pinching gesture 9545 , a clockwise rotational gesture 9550 , or a counter-clockwise rotational gesture 9555 with two fingers. in particular embodiments, a graphical user interface of the device may operate according to an interaction and transition model. the model may, for example, determine how modes including applications, functions, sub-modes, confirmations, content, controls, active icons, actions, or other features or elements may be organized (e.g. in a hierarchy) within a graphical user interface of the device. in one embodiment, the graphical user interface (gui) includes multiple top-level screens that each correspond to a different mode or application (or sub-mode, function, confirmation, content, or any other feature) of the device. each of these applications may be on the same level of the hierarchy of the interaction and transition model of the gui. fig. 96a illustrates an example layout of a hierarchy within the gui in which multiple top-level screens 9602 - 9606 and 9610 - 9614 each correspond to a different application, and one of the top-level screens 9608 (the home screen) corresponds to a clock. state transitions within the gui may be events triggered by input from an input source such as the user of the device. 
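The top-level screen layout of fig. 96a, with the clock home screen in the middle and applications to either side, could be sketched as a simple navigator driven by outer-ring rotations. The screen names and the clamping behavior at the ends of the list are assumptions; the disclosure does not specify what happens past the last screen.

```python
# Hypothetical sketch: top-level GUI screens of fig. 96a as a list with
# the home (clock) screen in the middle; ring rotation right/left moves
# one screen in that direction. Clamping at the ends is an assumption.

class TopLevelNavigator:
    def __init__(self, screens, home_index):
        self.screens = screens
        self.index = home_index

    def rotate_ring(self, direction):
        """direction: +1 for a right rotation, -1 for a left rotation."""
        self.index = max(0, min(len(self.screens) - 1,
                                self.index + direction))
        return self.screens[self.index]

nav = TopLevelNavigator(
    ["app1", "app2", "app3", "home", "app4", "app5", "app6"],
    home_index=3)
print(nav.rotate_ring(+1))  # app4: ring right moves toward fixed apps
print(nav.rotate_ring(-1))  # home
print(nav.rotate_ring(-1))  # app3: ring left moves toward contextual apps
```

This matches the described behavior in which rotating right from the home screen reaches application 4 and rotating left reaches application 3.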
an input from a user of the device or from another input source (e.g. via any of the variety of input mechanisms or types including the outer ring, touch-sensitive interfaces, gestures, speech, or sensors) may cause a transition within the gui (e.g. from one top-level screen to another). for example, an input may cause the gui to transition from the home screen 9608 (e.g. the clock) to an application (e.g. 3 or 4 ) or from an application to another application. if the user rotates the outer ring to the right, for example, the gui may transition from the home screen 9608 to application 4 9610 , and if the user rotates the outer ring to the left, the gui may transition from the home screen 9608 to application 3 9606 . in yet other embodiments, context (e.g. as determined by sensors or other input sources on the device) may cause the gui to transition from the home screen to an application or from an application to another application. in one embodiment, the model may include operability for differentiation of the “left” and “right” sides in relation to the home screen. as an example, one or more of the top-level screens may be associated with modes or applications (or other features) in the hierarchy of the interaction and transition model of the gui that are fixed (e.g. always available to the user) or contextual or dynamic (e.g. available depending on context). the contextual screens may, for example, reflect the modes, applications, or functions most recently used by the user, the modes, applications, or functions most recently added (e.g. downloaded) by the user, ad-hoc registered devices (that may, for example, enter or exit the communication range of the device as it is used), modes, applications, or functions that are “favorites” of the user (e.g. explicitly designated by the user), or modes, applications, or functions that are suggested for the user (e.g. based on the user's prior activity or current context). fig. 
96b illustrates an example layout of a hierarchy within the gui in which contextual or dynamic applications 9616 - 9620 and fixed applications 9624 - 9628 are grouped separately, with the left side (in relation to the home clock screen 9622 ) including contextual applications, and the right side including fixed applications. as an example, dynamic application 01 9620 may be the most recently used application, and dynamic application 02 9618 may be the second most recently used application, and so forth. in particular embodiments, the top level of the hierarchy of the interaction and transition model of the gui may include only “faces,” and the next level of the hierarchy may include applications (or any other features). as an example, the top level of the hierarchy may include a home screen (e.g. the clock), and one or more faces, each face corresponding to a different type of background, mode, or activity such as a wallpaper (e.g. customizable by the user), weather information, a calendar, or daily activity information. each of the faces may show the time in addition to any other information displayed. additionally, the face currently displayed may be selected by the user (e.g. via any suitable input mechanism or type) or automatically change based on context (e.g. the activity of the user). the faces to the left of the home screen may be contextual, and the faces to the right of the home screen may be fixed. fig. 97 illustrates an example layout of a hierarchy within the gui in which the top level of the hierarchy includes faces 9710 - 9770 (including clock face 9740 ) and the next level of the hierarchy includes applications 9715 - 9775 . in particular embodiments, an input from a user of the device or an input from another input source (e.g. 
via any of the variety of input mechanisms or types including the outer ring, touch-sensitive interfaces, gestures, speech, or sensors), or a context of use of the device may cause a transition within the gui from a screen at one level of the hierarchy of the interaction and transition model of the gui to a screen at another level of the hierarchy. for example, a selection event or input by the user (e.g. a touch or tap of the display, voice input, eye gazing, clicking or pressing of the outer ring, squeezing of the outer ring, any suitable gestures, internal muscular motion detected by sensors, or other sensor input) may cause a transition within the gui from a top-level screen to a screen nested one level deeper in the hierarchy. if, for example, the current screen is a top-level screen associated with an application, a selection event (e.g. pressing the ring) selects the application and causes the gui to transition to a screen nested one layer deeper. this second screen may, for example, allow for interaction with a feature of the selected application and may, in particular embodiments, correspond to a main function of the selected application. there may be multiple screens at this second, nested layer, and each of these screens may correspond to different functions or features of the selected application. similarly, a “back” selection input or event by the user (e.g. a double pressing of the outer ring or a touch gesture in a particular part of the display) may cause a transition within the gui from one screen (e.g. a feature of a particular application) to another screen that is one level higher in the hierarchy (e.g. the top-level application screen). fig. 98a illustrates an example of the operation of the interaction and transition model with respect to a function or a mode 9805 of a particular application of the device and the use or application of the function 9810 . 
as an example, if the application is a camera, the functions, modes, or other elements of the camera application may include picture mode, video mode (e.g. with a live view), and turning on or off a flash. the various functions, modes, or other elements may be accessed via transitions within a single layer of the model hierarchy. these intra-layer transitions may occur upon receiving or determining a particular type of transition event or input from an input source such as the user of the device (e.g. a rotation of the outer ring counterclockwise or clockwise), or upon determining a particular context of use of the device. in particular embodiments, a transition event input may also include, e.g., a touch or tap of the display, voice input, eye gazing, clicking or pressing of the outer ring, squeezing of the outer ring, any suitable gesture, internal muscular motion detected by sensors, or other sensor input. to select and use a function, mode, or other element of the application, the user may provide a particular type of selection event or input (e.g. a tap or touch of the display, a press or click of the outer ring, a particular gesture, or sensor input), causing an inter-layer transition within the gui to a deeper layer of the hierarchy. as an example, to take a video, the user may tap a screen associated with the video mode feature of the camera application. once in this deeper layer of the hierarchy, taking a video, the user may cause the gui to transition between different options in that layer, if available (e.g. options related to video mode). in particular embodiments, the user may select one of the options in the deeper layer, causing the gui to transition to an even deeper layer. as an example, once recording video in video mode, the user may again tap the display to transition the gui to a deeper layer, which in this case may include the option to stop recording video. 
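The camera walkthrough above, in which a selection event descends one layer of the hierarchy and a "back" event ascends one layer, can be sketched as a stack over a nested hierarchy. This is a hypothetical Python sketch; the hierarchy contents are illustrative, not the actual structure of any device software.

```python
# Hypothetical sketch of the interaction and transition model: a path
# stack over a nested dict. select() descends a layer (tap/press);
# back() ascends a layer (e.g. double press of the outer ring).

class GuiModel:
    def __init__(self, hierarchy):
        self.hierarchy = hierarchy
        self.path = []  # stack of selected keys

    def current_options(self):
        """Siblings available at the current layer (intra-layer choices)."""
        node = self.hierarchy
        for key in self.path:
            node = node[key]
        return sorted(node)

    def select(self, option):
        """Selection event: transition one layer deeper."""
        self.path.append(option)

    def back(self):
        """Back event: transition one layer higher."""
        if self.path:
            self.path.pop()

camera = {"camera": {"picture mode": {}, "video mode": {"stop": {}},
                     "flash": {}}}
gui = GuiModel(camera)
gui.select("camera")
print(gui.current_options())  # ['flash', 'picture mode', 'video mode']
gui.select("video mode")
print(gui.current_options())  # ['stop']
gui.back()
print(gui.current_options())  # ['flash', 'picture mode', 'video mode']
```

Intra-layer transitions (e.g. ring rotation between siblings) would move among the options returned at each level without changing the path depth.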
additionally, the user may return to a higher layer of the hierarchy by providing a particular type of selection event or input (e.g. a “back” input, described herein). as an example, once recording video in video mode, the user may touch a particular “back” portion of the display, causing video recording to be canceled and causing the gui to transition to the screen associated with the video mode feature of the camera application (e.g. in the features layer of the hierarchy). the interaction and transition model hierarchy of the gui may have any number of layers and any number of elements (e.g. functions or content) within a single layer. fig. 98b illustrates an example of the operation of the interaction and transition model with respect to content 9815 on the device. in this example model, content may behave similarly to an application, except that if the user selects the content 9815 (e.g. a photo) and the gui transitions to a deeper layer in the hierarchy, the first option 9820 in a menu of options related to the content may be shown (e.g. options such as deleting the photo or sharing the photo). fig. 98c illustrates an example of the operation of the interaction and transition model with respect to a control 9825 on the device. a control element may function like a knob, in that it may modify a value over a range of possible values. user input to the device (e.g. rotating the outer ring to the right or left) may modify the value or state 9830 associated with the control element 9825 . the value modified by a control element may be substantially continuous in nature (e.g. the zoom level of a camera, or the volume level of a television) or may be substantially discrete in nature (e.g. the channel of a television). in particular embodiments, in cases where the value modified by a control is discrete in nature, a particular user input (e.g. pressing the outer ring) may “commit” the selection of the value. fig. 
98d illustrates an example of the operation of the interaction and transition model with respect to an application 9835 on the device and a main function 9840 of the application. as an example, each mode or function of the device (e.g. camera or augmented reality functions) may be an application on the device. transitions within a single layer (e.g. performed upon receiving a particular user input such as a rotation of the outer ring) allow the user to change applications, modes, or functions of the device. transitions between layers (e.g. performed upon receiving a particular user input such as a tap on the display) allow the user to enter deeper layers (or exit deeper layers) of the hierarchy associated with the selected application, mode, or function. fig. 98e illustrates an example of the operation of the interaction and transition model with respect to an action 9845 (e.g. within an application) on the device. as an example, within the camera application, a captured image may be selected, and one or more actions may be available for the selected image, such as deleting the image, sharing the image on facebook, sharing the image on twitter, or sending an e-mail with the image. in this example, gui transitions within the “action” layer (e.g. performed upon receiving a particular user input such as a rotation of the outer ring) allow the user to view different actions to take. transitions between layers (e.g. performed upon receiving a particular user input such as a tap on the display) allow the user to enter deeper layers (or exit deeper layers) of the hierarchy associated with the selected action. in this example, the deeper layer entered by selecting an action 9845 shows secondary information 9850 or a confirmation (e.g. that the application is sending the image information to a selected sharing service). a confirmation 9855 (e.g. that the image has been sent) may also be shown in this deeper layer. 
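the control element of fig. 98c, which behaves like a knob modifying a value over a range, might be sketched as follows. this is an illustrative python fragment; the class name, defaults, and the pending/commit split are assumptions, not part of the specification:

```python
# Sketch of a control element: ring rotation adjusts a value over a
# range. A substantially continuous control (camera zoom, TV volume)
# applies the value immediately; a substantially discrete control
# (TV channel) requires a "commit" input such as pressing the ring.

class Control:
    def __init__(self, lo, hi, value, step=1, discrete=False):
        self.lo, self.hi, self.step = lo, hi, step
        self.discrete = discrete
        self.value = value     # committed state of the control
        self.pending = value   # state currently being adjusted

    def rotate(self, increments):
        # each rotational increment moves the value by one step,
        # clamped to the range of possible values
        v = self.pending + increments * self.step
        self.pending = max(self.lo, min(self.hi, v))
        if not self.discrete:
            self.value = self.pending  # continuous: applied at once

    def commit(self):
        # discrete controls commit the selection on a ring press
        self.value = self.pending
```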
the gui may automatically transition back to a higher layer (e.g. the action layer). there may, however, be a deeper layer of the hierarchy including the confirmation information, and this deeper layer may be entered by the gui upon user input or automatically. fig. 98f illustrates an example of the operation of the interaction and transition model with respect to an icon (e.g. an active icon 9860 including a top-level on/off option) and the switching of the state of the icon 9865 . as an example, a television communicatively paired with the device may be indicated by an active icon, for example, a television screen. in this example, gui transitions within the device/application top layer (e.g. performed upon receiving a particular user input such as a rotation of the outer ring) allow the user to view different applications, device, or other features. the television may appear in a menu in the gui of the device even when the television is off, but the television must be turned on before it may be used. if the user selects the television (e.g. by tapping on the display when the television icon is displayed by the gui) when it is off 9860 , the gui may transition to a state in a deeper layer of the interaction and transition model hierarchy in which the television is turned on 9865 . when the television is turned on, the icon associated with the television (displayed, for example, in the top layer of the model in the gui) 9870 may change to directly represent that the television has been turned on 9875 , as illustrated in fig. 98g . if the user again selects the television (now on), the gui may transition to an even deeper layer of the hierarchy in which functions or capabilities of the television (e.g. volume or channel changing) are exposed. in particular embodiments, the option to turn the television off again may be the first menu item in this deeper layer of the hierarchy, to enable quick access to the off function (e.g. 
in case the user has accidentally turned on the television). in particular embodiments, if the user selects the television when it is off, the television may be turned on and the icon associated with the television may change to directly represent that the television has been turned on without the gui transitioning to a different layer of the hierarchy or to a different user interface. the active television icon may, therefore, directly indicate within the top level of the hierarchy (e.g. a main menu) the state of the paired television. fig. 99 illustrates an example of the interaction and transition model hierarchy of a gui for an image capture application. in this example, the first screen 9902 arrived at after selection of the application (at screen 9900 ) may correspond to a “live view” function of the application. other fixed features of the image capture application, including video mode 9904 , zoom 9906 , or flash 9908 , may be available to the right of the home main function screen 9902 of the selected application. dynamically or contextually available features (e.g. captured images 9910 ) of the selected application may be available to the left of the home main function screen. a selection event at this functional layer of the hierarchy may cause a transition within the gui to another nested layer even deeper within the hierarchy. if, for example, the user selects the “zoom” function, the gui may transition to a screen 9912 in which the user may control the zoom setting of a camera with any suitable input (e.g. a rotation of the outer ring to the right to increase zoom or a rotation of the outer ring to the left to decrease zoom). similarly, the user may be able to control the state of different features (e.g. turning a flash feature on or off 9914 , or switching from a picture mode to a video mode 9916 ), browse content (e.g. 
9918 - 9922 ), enter a deeper layer of the hierarchy in which actions 9924 - 9930 may be taken, or enter yet another, even deeper layer of the hierarchy in which confirmations 9932 - 9938 are provided once an action is selected. in particular embodiments, an interaction layout may structure an interaction and transition model of a gui of the device. an interaction layout may be applied to any suitable interaction model and need not be dependent on any specific type of motion or animation within a gui of the device, for example. although specific examples of interaction layouts are discussed below, any suitable interaction layout may be used to structure an interaction and transition model. as one example, a panning linear interaction layout may structure an interaction and transition model of a gui of the device. in a panning-linear-type gui, elements or features within a layer may be arranged to the left and right of the currently displayed element or feature. user input such as a rotation of the outer ring in a clockwise or counterclockwise direction navigates within a single layer of the model hierarchy. as an example, a rotation of the outer ring clockwise one rotational increment may display the element or feature to the right (e.g. the next element), and a rotation counterclockwise one rotational increment may display the element or feature to the left (e.g. the previous element). in particular embodiments, a fast rotation clockwise or counterclockwise may cause the gui to perform accelerated browsing. in such an embodiment, a single turn may cause the gui to transition through multiple elements or features, rather than a single element or feature, as described herein. different user input may navigate between layers (e.g. either deeper layers or higher layers) in the model hierarchy. as an example, if the user touches or taps the touch-sensitive layer of the display, the gui may transition one layer deeper in the model hierarchy (e.g. 
confirming the user's selection or providing options related to the selection). any suitable input by the user may cause the gui to transition between layers in the model hierarchy, either in place of or in addition to touch- or tap-based input. as another example, if the user presses a particular region of the touch-sensitive layer of the display (e.g. designated as a “back” button), or if the user double-taps the touch-sensitive layer of the display, the gui may transition one layer higher in the model hierarchy (e.g. to the previous layer). if, for example, the user performs a long press of the display or screen, the gui may transition back to the home screen (e.g. a clock). without additional user input, the gui may also transition back to the home screen after a pre-determined period of time (e.g. a timeout period). as described herein, as a user begins, for example, to rotate the outer ring in a clockwise or counterclockwise fashion, the gui transitions within the same layer, and the next user interface element or feature (e.g. a breadcrumb icon in the same layer) to the right or left, respectively, may begin to appear while the current user interface element or feature may begin to disappear. fig. 100a illustrates an example of the panning linear interaction layout. in this example, gui elements 10001 , 10002 , 10003 , and 10004 are in the same layer of the interaction and transition model hierarchy of the panning-linear-type gui. gui elements 10002 a, 10002 b, and 10002 c are elements in a second, deeper layer of the hierarchy and are sub-elements of element 10002 . as an example, the first layer may include devices paired with the device—element 10001 may represent an automobile, element 10002 may represent a television, element 10003 may represent a mobile phone, element 10004 may represent a home thermostat. 
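the panning-linear navigation within a layer, including the accelerated browsing triggered by a fast rotation, can be sketched as a single function. the speed threshold and the number of elements skipped are assumed values for illustration only:

```python
# Sketch of panning-linear intra-layer navigation: a slow rotation
# advances one element per rotational increment; a fast rotation
# triggers accelerated browsing and transitions through multiple
# elements per increment.

def pan_linear(index, n_elements, increments, speed_deg_per_s,
               fast_threshold=180.0, multiplier=3):
    """Return the new element index after a ring rotation.

    increments: +1 per clockwise increment (next element to the right),
    -1 per counterclockwise increment (previous element to the left).
    """
    fast = speed_deg_per_s >= fast_threshold
    step = increments * (multiplier if fast else 1)
    return (index + step) % n_elements  # elements arranged left/right
```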
element 10002 a may be a volume control element for the television, element 10002 b may be a channel control element for the television, and element 10002 c may be a picture control element for the television. as yet another example, the gui may transition one layer deeper in the hierarchy if the user clicks the ring (e.g. presses down on the ring once), and then sub-elements in the deeper layer may be panned by rotating the ring. alternatively, the user may pan the sub-elements in the deeper layer by rotating the ring while simultaneously pressing down on the ring. the device may include a switch to select how the user input is used to navigate between layers. as another example, a panning radial (or panning circular) interaction layout may structure an interaction and transition model of a gui of the device. in a panning-radial-type gui, elements or features in a layer may be arranged above and below the currently displayed element or feature. user input such as a rotation of the outer ring in a clockwise or counterclockwise direction navigates between layers of the model hierarchy. as an example, a rotation of the outer ring clockwise one increment may cause the gui to transition one layer deeper in the model hierarchy (e.g. entering a particular application's layer or confirming selection of the application), and a rotation counterclockwise one increment may cause the gui to transition one layer higher in the model hierarchy (e.g. exiting a particular application's layer to the previous layer). in particular embodiments, a fast rotation clockwise or counterclockwise may cause the gui to perform accelerated browsing, as described herein. in such an embodiment, a single rotational increment may cause the gui to transition through multiple layers of the hierarchy, rather than a single layer. different user input may navigate within a single layer in the model hierarchy. 
as an example, if the user touches or taps the touch-sensitive layer of the display, the gui may transition to the next element or feature (e.g. the element below the currently displayed element). as another example, if the user presses a particular region of the touch-sensitive layer of the display (e.g. designated as a “back” button), or if the user double-taps the touch-sensitive layer of the display, the gui may transition to a previous element or feature (e.g. the element above the currently displayed element). if, for example, the user performs a long press of the display or screen, the gui may transition back to the home screen (e.g. a clock). without additional user input, the gui may also transition back to the home screen after a pre-determined period of time (e.g. a timeout period). as described herein, as a user begins, for example, to rotate the outer ring in a clockwise or counterclockwise fashion, the gui transitions to a different layer, and the next user interface element or feature (e.g. in a different layer) may begin to appear while the current user interface element or feature may begin to disappear. fig. 100b illustrates an example of the panning radial interaction layout. in this example, gui elements 10001 , 10002 , 10003 , and 10004 are in the same layer of the interaction and transition model hierarchy of the panning-radial-type gui. gui elements 10002 a, 10002 b, and 10002 c are elements in a second, deeper layer of the hierarchy and are sub-elements of element 10002 . as before, the first layer may include devices paired with the device—element 10001 may represent an automobile, element 10002 may represent a television, element 10003 may represent a mobile phone, element 10004 may represent a home thermostat. element 10002 a may be a volume control element for the television, element 10002 b may be a channel control element for the television, and element 10002 c may be a picture control element for the television. 
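the panning-radial layout inverts the input mapping of the panning-linear layout: ring rotation moves between layers, and a tap moves within a layer. a minimal sketch, with hypothetical names and a clamped (non-wrapping) hierarchy depth:

```python
# Sketch of panning-radial navigation: rotation navigates between
# layers of the model hierarchy; a tap navigates within a layer.

class RadialModel:
    def __init__(self, layers):
        self.layers = layers  # e.g. ["top", "application", "function"]
        self.depth = 0        # current layer of the hierarchy
        self.index = 0        # element within the current layer

    def rotate(self, increments):
        # clockwise (+) transitions one layer deeper, counterclockwise
        # (-) one layer higher, clamped to the bounds of the hierarchy
        self.depth = max(0, min(len(self.layers) - 1,
                                self.depth + increments))

    def tap(self, n_elements):
        # a tap advances to the next element within the current layer
        # (e.g. the element below the currently displayed element)
        self.index = (self.index + 1) % n_elements
```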
as yet another example, an accordion-type interaction layout may structure an interaction and transition model of a gui of the device. in an accordion-type gui, elements or features of multiple layers may be arranged in a circular list structure. for example, rotating within the list structure (e.g. by rotating the outer ring) in a first direction past a screen associated with the last element or feature in that direction (e.g. the last fixed application of the device) may cause the gui to transition to a screen associated with the last element or feature in a second direction (e.g. the least-recently used contextual application of the device). continuing to rotate in the first direction may cause the gui to transition through screens associated with contextual applications in “reverse” order (e.g. from least-recently used to most-recently used). similarly, rotating in the second direction past the screen of the least-recently used contextual application may cause the gui to transition to the screen associated with the last fixed application, and continuing to rotate in the second direction may cause the gui to transition through the screens of the fixed applications in reverse order (e.g. from the last fixed application to the first, adjacent to the home screen). in an accordion-type gui, the element or feature currently displayed may be “expanded” (e.g. if selected by the user) such that its sub-elements or sub-features may become part of the single-layer list structure. in particular embodiments, an element or feature with sub-elements may indicate (when displayed) that it has sub-elements through, for example, visible edges of the sub-elements. user input such as a rotation of the outer ring in a clockwise or counterclockwise direction navigates within a single layer of the model, which may include elements or features, as well as sub-elements or sub-features of a selected element or feature. 
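the accordion layout's circular list, and the splicing of a selected element's sub-elements into that list, can be sketched with three small functions (an illustrative fragment; the list representation is an assumption):

```python
# Sketch of the accordion-type layout: elements of multiple layers
# share one circular list; selecting an element expands its
# sub-elements into the list, and a "back" input collapses them.

def expand(items, selected, sub_items):
    """Splice sub-elements into the single-layer list after `selected`."""
    i = items.index(selected)
    return items[:i + 1] + sub_items + items[i + 1:]

def collapse(items, sub_items):
    """Remove expanded sub-elements, restoring the original list."""
    return [x for x in items if x not in sub_items]

def rotate(items, index, increments):
    # circular structure: rotating past the last element in one
    # direction wraps around to the last element in the other
    return (index + increments) % len(items)
```

using the reference numerals of fig. 100c: expanding element 10002 places 10002a-10002c between 10002 and 10003, so rotation moves from 10002c directly to 10003.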
as an example, a rotation of the outer ring clockwise one increment may display the element or feature to the right (e.g. the next element), and a rotation counterclockwise one increment may display the element or feature to the left (e.g. the previous element). in particular embodiments, a fast rotation clockwise or counterclockwise may cause the gui to perform accelerated browsing. in such an embodiment, a single rotational increment may cause the gui to transition through multiple elements or features, rather than a single element or feature. different user input may cause the selection and expansion of an element or feature in the model. as an example, if the user touches or taps the touch-sensitive layer of the display, the gui may expand the displayed feature or element within the existing layer and transition to a sub-element or sub-feature. as another example, if the user presses a particular region of the touch-sensitive layer of the display (e.g. designated as a “back” button), or if the user double-taps the touch-sensitive layer of the display, the gui may collapse the expanded sub-elements or sub-features and transition to an element or feature in the list. if, for example, the user performs a long press of the display or screen, the gui may transition back to the home screen (e.g. a clock). without additional user input, the gui may also transition back to the home screen after a pre-determined period of time (e.g. a timeout period). as described herein, as a user begins, for example, to rotate the outer ring in a clockwise or counterclockwise fashion, the gui transitions within the same layer, and the next user interface element or feature (e.g. a breadcrumb icon in the same layer) to the right or left, respectively, may begin to appear while the current user interface element or feature may begin to disappear. fig. 100c illustrates an example of the accordion-type interaction layout. 
in this example, gui elements 10001 , 10002 , 10003 , and 10004 are in the same layer of the interaction and transition model of the accordion-type gui. because element 10002 has been selected by the user, gui sub-elements 10002 a, 10002 b, and 10002 c are expanded and also included in the list structure in the same layer of the model. thus, the gui may transition from sub-element 10002 c to either sub-element 10002 b or directly to element 10003 . if, however, the user desires to collapse the sub-elements (e.g. through a “back” input such as tapping the screen associated with element 10002 again), then the list structure will only include gui elements 10001 , 10002 , 10003 , and 10004 again. in particular embodiments, the gui may navigate to a home screen based on input received by a user of the device. the user input may include, for example, pressing and holding (e.g. a long press) the touch-sensitive layer, pressing and holding the display, pressing (e.g. clicking) and holding the outer ring, squeezing and holding the outer ring, covering the face (e.g. the display) of the device, covering a particular sensor of the device, turning the face of the device in a downward direction, pressing a software button (discussed herein), pressing a hardware button on the device, or shaking the device (or any other suitable gesture). any of these inputs or any variation of these inputs (including, for example, shorter durations) may be used as user inputs to go “back” within an interaction and transition model. figs. 101a-101b illustrate examples of a “back” software button layout in the gui. in fig. 101a , receiving user touch input in the bottom portion 10110 of the display causes the gui to confirm a selection or transition one layer deeper in the model hierarchy. receiving user touch input in the top portion 10120 of the display causes the gui to transition “back” or one layer higher in the model hierarchy. fig. 
101b illustrates a similar layout, with the “back” region 10130 including a breadcrumb icon 10135 to indicate to the user where navigating “back” will transition. in particular embodiments (e.g. when the touch-sensitive layer is operable to determine precise x-y coordinates of a touch), any region of the display may be designated as a “back” region, a “confirm/select” region, or any other suitable functional region. in particular embodiments, the gui of the device may display particular types of content including, for example, lists. fig. 102a illustrates an example of the gui displaying a vertical list of items. an input from the user (e.g. any suitable input mechanism or type) may cause a selection frame 10210 of the gui to move through elements of the vertical list. as an example, if the user rotates right in a clockwise direction, the selection frame 10210 may move from the top of the vertical list toward the bottom of the vertical list. each rotational increment of the outer ring (e.g. if the outer ring moves in discrete increments), causes the selection frame 10210 to move one item within the list. in the example of fig. 102a , as the user rotates the ring clockwise, the displayed items of the list remain constant, and the selection frame 10210 moves downward through items of the list. in other embodiments, the selection frame may remain constant (e.g. in the center of the display), and items of the list may move upward or downward (e.g. one item at a time), depending on the direction of the ring's rotation. fig. 102b illustrates an example of the gui displaying a horizontal list of items. an input from the user (e.g. any suitable input mechanism or type) may cause a selection frame 10210 of the gui to move through elements of the horizontal list. as an example, if the user rotates right in a clockwise direction, the selection frame 10210 may move from the left of the horizontal list toward the right of the horizontal list. 
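the list browsing of figs. 102a-102b, in the variant where the selection frame stays near the center while items shift, can be sketched as a windowing function. the window size and centering rule are assumptions for illustration:

```python
# Sketch of vertical-list scrolling: each rotational increment of the
# outer ring moves the selection one item; the visible window shifts
# so the selected item stays centered where possible.

def scroll(items, selected, increments, window=3):
    """Return (new_selected_index, visible_items) after a rotation."""
    # clamp the selection to the ends of the list (no wrapping here)
    selected = max(0, min(len(items) - 1, selected + increments))
    # choose the first visible item so the selection is centered,
    # without scrolling past either end of the list
    start = max(0, min(len(items) - window, selected - window // 2))
    return selected, items[start:start + window]
```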
each rotational increment of the outer ring (e.g. if the outer ring moves in discrete increments), causes the selection frame 10210 to move one item within the list. in the example of fig. 102b , as the user rotates the ring clockwise, the selection frame 10210 remains constant in the center of the display, and items of the list move toward the left (e.g. one item at a time) in response to the clockwise rotation. in other embodiments, the displayed items of the list remain constant, and the selection frame moves left or right through items of the list, depending on the direction of rotation of the outer ring. in particular embodiments, the gui of the device may display vertically or horizontally continuous (or substantially continuous) content including, for example, charts or text. in particular embodiments, an input from the user (e.g. any suitable input mechanism or type) may cause a selection indicator of the gui to move through the continuous content. in other embodiments, an input from the user may cause the content to move into and out of the display in a horizontal direction, vertical direction, or any other direction mapped to the user's input (and the selection indicator, if present, may remain in a constant position). in the example of fig. 102c , a temperature chart is displayed. as the user rotates the outer ring in a clockwise fashion, the selection indicator 10220 remains in the center of the display, and the content moves into the display from the right and out of the display toward the left. in the example of fig. 102d , a portion of a larger piece of text 10230 is displayed. as the user rotates the outer ring in a clockwise fashion, additional text enters the display from the bottom and exits the display toward the top. figs. 103a-103d illustrate an example calendar application displayed in the gui of the device. in fig. 
103a , a user may click or press the outer ring (indicated by arrow 10305 ), causing the gui to display a circular menu 10310 with options “go up,” “weekly” (the default setting), “monthly,” and “daily.” in fig. 103c , the user may again click or press the outer ring (indicated by arrow 10305 ), confirming selection of “weekly” and causing the gui to display the weekly view 10320 of the user's calendar. in particular embodiments, the gui may display content that is of a size larger than the display. in such embodiments, the gui may scale or crop (or otherwise shrink or fit) the content so that all of the content may be displayed within the display at one time. in other embodiments, the gui does not alter the size of the content, and instead provides the ability for the user to pan through the content one portion at a time, for example using scrolling (described herein). in particular embodiments, the device includes the circular display, and the gui includes circular navigation and menu layouts. this disclosure contemplates any shape for the display, however, and any suitable navigation or menu layout for the gui. the menu layout may provide a user a visual indication of where the user is located within an interaction and transition model hierarchy of the gui, for example. the menu layout may also provide visual indicators that allow the user to differentiate between different types of menu items, as well as show an overall view of menu options. additionally, the menu may be displayed over any suitable background or content of the device. fig. 104 illustrates an example circular menu layout in which each segment 10410 represents one item or option in the menu and visual gaps such as 10420 separate the items from one another. the default or currently selected item 10430 is on the top of the visual display (but may be anywhere on the display), and may remain at the top of the display as the user orients the device display in different ways during use. figs. 
105a-105b illustrate an example of browsing the items in a circular menu. the user may provide input such as a clockwise rotation of the outer ring, and in response to this user input, the next item in the menu 10520 (e.g. to the right of the currently selected item 10510 ) may be highlighted for selection. the content in the center of the display 10530 may automatically change to reflect the user's rotation input or may, in particular embodiments, change only after the user provides another input (e.g. pressing or clicking the outer ring once the desired menu item is highlighted). figs. 105c-105d illustrate an example of browsing a circular menu by rotating the outer ring, causing the next item in the menu 10550 (e.g. clockwise or to the right of the currently selected item 10540 ) to be highlighted for selection. in this example, the user's input also causes the rotation of a central “pointer” 10560 that points at the highlighted menu segment corresponding to the currently-selected menu item. in this example, the content in the center of the display automatically changes to reflect the user's rotation. figs. 106a-106c each illustrate different alignments and arrangements of a circular menu layout for the gui of the device. the circular menu may, for example, be displayed directly on the border of the display (as shown in fig. 106a ) or may be shown further inside the display, or as an overlay over a background of the device (shown in figs. 106b-106c ). figs. 107a-107c illustrate other forms and alignments of a circular menu layout for the gui of the device. as examples, the menu may consist of line segments (of various possible sizes) arranged in a circle 10710 , line segments arranged in a semicircle 10720 , or dots arranged in a circle or semi-circle, 10730 or 10740 . 
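the circular menu geometry of figs. 104-107 (equal segments separated by visual gaps, with the selected item at the top of the display) might be computed as below; the gap width and top angle are hypothetical values:

```python
# Sketch of a circular menu layout: n items occupy equal angular
# segments around the display, separated by visual gaps, with
# segment 0 (the default/selected item) centered at the top.

def circular_menu_segments(n_items, gap_deg=4.0, top_deg=90.0):
    """Return (start, end) angles in degrees for each menu segment."""
    span = 360.0 / n_items
    segs = []
    for k in range(n_items):
        center = top_deg - k * span  # segments proceed clockwise
        start = (center - span / 2 + gap_deg / 2) % 360.0
        end = (center + span / 2 - gap_deg / 2) % 360.0
        segs.append((start, end))
    return segs
```

for four items, each segment spans 90 degrees minus the gap, and the selected segment straddles the top of the display.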
in particular embodiments, the visual indicator of the currently selected or default menu item 10732 may remain at the top center of the display, and the visual indicators of items in the menu 10734 may shift left or right based on user input ( fig. 107c ). in other embodiments, the visual indicator of the currently selected or default item 10732 may move through the indicators of the items of the menu, which remain fixed in position ( fig. 107b ). in particular embodiments, instead of segments or dots, the visual indicators of items in the menu may be icons (e.g. breadcrumb icons) associated with the menu items. fig. 108 illustrates that the menu layout need not be circular and may be any suitable layout, including a layout in which indicators of menu items 10810 are scattered throughout the display. with user input (e.g. a rotation of the outer ring), different items may be selected according to their position in the menu layout. as an example, if the user rotates in a clockwise manner, the next menu item 10820 in a clockwise direction may be selected. figs. 109a-109c illustrate different menu layouts with respect to menu items to the “left” and to the “right” (e.g. in the interaction and transition model hierarchy) of the currently selected or displayed menu item 10915 . in fig. 109a , all menu items 10910 are equally distributed on the circular menu around the display. in fig. 109b , the menu includes a gap which indicates a differentiation of items 10910 to the left and items to the right of the currently-displayed or selected menu item 10915 (e.g. in accordance with the interaction and transition model described herein). fig. 109c illustrates an example in which there are more items 10910 to the left than to the right of the currently-selected or displayed item 10915 , so that the left-hand segments of the circular menu are adjusted in size to accommodate the number of items available for selection. in the case of a large number of menu items (e.g. 
beyond a particular threshold such as 40 captured images), the segments of the circular menu may disappear, and the visual indicator presented to the user may be a scroll bar 11020 that allows the user to circularly scroll through the various menu items, as illustrated in fig. 110a . in other embodiments, a similar scrollbar-type visual indicator 11020 may allow the user of the device to manipulate an absolute or fixed value (e.g. a camera zoom level) over a fixed range of values 11030 , as illustrated in fig. 110b . in yet other embodiments, the length of a scrollbar-type visual indicator may show the user the level of a certain value. for example, if the user is controlling the volume of a television using the outer ring of the device, as the user turns the ring (e.g. clockwise) to increase the volume level, the visual indicator 11120 will grow longer, until it encircles or nearly encircles the entire display, as illustrated in figs. 111a-111c . in particular embodiments, the gui may display both an item of reference or background content as well as an indication of an available action or function to be performed with respect to the reference or background content. fig. 112 illustrates example layouts within the gui of reference content and contextual overlay actions or functions. different types of layouts (e.g. including those illustrated) may be selected based on the different types of reference or background content presented, for example, to minimize obscuring the reference or background content. for example, if the reference or background content is a picture of a person, an overlay that does not obscure the center of the photo may be selected. in particular embodiments, the perceptual brightness of the pixels of the reference or background content (e.g. behind the overlay) may be determined on a pixel-by-pixel basis. in cases where the contrast between the contextual overlay and the reference or background content (e.g. an image) is too low (e.g. 
based on a pre-determined threshold), a blurred drop shadow that pushes the underlying colors in the opposite direction may be used. an example algorithm may include determining the pixels under the overlay, reducing their saturation, taking the inverse of the visual brightness (e.g. such that colors remain the same but the brightness is selected to produce contrast), blurring, and creating a composite between the underlying reference or background content and the overlay. figs. 113a-113c illustrate examples 11310 - 11350 of contextual overlays composed with background or reference content (here, images captured by a camera of the device). as illustrated, the contextual overlay may allow the user to perform actions or functions (e.g. deleting an image 11130 or sharing an image 11325 , searching for coffee 11330 , searching for restaurants 11340 , or making a location a “favorite” location 11350 ), provide confirmation to the user (e.g. that an image has been shared 11320 ), or provide any other type of information to the user. in particular embodiments, contextual overlays may be used anywhere within a menu layout of a gui except for the top level of the interaction and transition model hierarchy. in particular embodiments, icons displayed in the gui of the device may optimize the energy or battery usage of the device. as an example, an icon may include a primarily black background with the icon itself being composed of thin white strokes. this may allow for the amount of white color on the display screen to be very low, allowing for reduced energy consumption of the display while the gui is used. the icons displayed in the gui may also include real-time notifications. 
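the per-pixel part of the contrast algorithm described above can be sketched as follows. this is an illustrative fragment only: the luma weights, saturation factor, and contrast threshold are assumptions, and the blur and final compositing steps are omitted:

```python
# Sketch of the overlay drop-shadow algorithm: for pixels under the
# overlay, reduce saturation and invert the visual brightness while
# keeping the hue, so the shadow pushes the underlying color in the
# opposite direction. (Blurring and compositing are not shown.)
import colorsys

def shadow_color(rgb, desaturate=0.5):
    r, g, b = (c / 255.0 for c in rgb)
    h, l, s = colorsys.rgb_to_hls(r, g, b)
    l = 1.0 - l        # inverse of the visual brightness
    s *= desaturate    # reduced saturation
    r, g, b = colorsys.hls_to_rgb(h, l, s)
    return tuple(round(c * 255) for c in (r, g, b))

def needs_shadow(overlay_rgb, background_rgb, threshold=0.3):
    """Apply the shadow only when overlay/background contrast falls
    below a pre-determined threshold (perceptual brightness check)."""
    def luma(rgb):  # Rec. 601 weights as a perceptual-brightness proxy
        r, g, b = rgb
        return (0.299 * r + 0.587 * g + 0.114 * b) / 255.0
    return abs(luma(overlay_rgb) - luma(background_rgb)) < threshold
```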
for example, a mobile phone icon may include a notification with the number of new voicemails, an e-mail icon may include a notification with the number of new e-mails, a chat icon may include a notification with the number of new chat messages, and a telephone icon may include a notification with the number of missed calls. in particular embodiments, the gui of the device only displays colors other than black and white for user-generated content (e.g. pictures, files, contacts, notifications, or schedules). other information, including menu items, may be displayed in black and white. in particular embodiments, as the gui transitions from one element (e.g. feature, content item, or icon) to another (e.g. upon receiving input from a user), the gui may display visual transition effects. these transition effects may depend, for example, on the type of input received from a user of device. as an example, a single touch on the display may trigger particular transition effects, while a rotation of the outer ring may trigger a different (potentially overlapping) set of transition effects. in particular embodiments, a user's touch input on the touch-sensitive layer may trigger transition effects including center-oriented expansion, directional sliding, and scaling in or out. fig. 114a illustrates center-oriented mode or function expansion or scaling up. fig. 114b illustrates center-oriented mode or function collapsing or scaling down. fig. 115a illustrates center-oriented scaling up of an icon. fig. 115b illustrates center-oriented scaling down of an icon. fig. 116a illustrates an example of center-oriented icon scaling up with a twisting motion. fig. 116b illustrates an example of center-oriented icon scaling down with a twisting motion. fig. 117a illustrates an example of center-oriented unfolding and expansion outward of an icon. fig. 117b illustrates an example of center-oriented folding and collapsing inward of an icon. fig. 
118a illustrates an example of text vertically sliding into the display, where the text is revealed by unmasking. fig. 118b illustrates an example of text horizontally sliding in from the left to the right of the display. fig. 118c illustrates an example of text horizontally sliding in from the left to the right of the display within a masked region (e.g. a contextual overlay). fig. 119a illustrates a horizontal slide transition from right to left for content or an icon. fig. 119b illustrates a horizontal slide transition from right to left, with fading effects; the icon or content exiting the screen fades out gradually once it reaches the screen's border, and the icon or content entering the screen fades in gradually as it crosses the screen's border. fig. 119c illustrates an example of a horizontal slide transition from right to left with scaling effects; the content or icon exiting the screen is shrunk down, and the content or icon entering the screen is scaled up to full size. in particular embodiments, a user's rotation of the outer ring may trigger visual transition effects including zooming, directional sliding, blurring, masking, page folding, rotational movement, and accelerated motion. fig. 120a illustrates an example of a transition in response to a low-acceleration rotation of the outer ring. in this example, a single rotational increment may correspond to a single item, such that one turn (e.g. rotational increment) counterclockwise causes the next element (e.g. icon or content item) to enter the screen from the left toward the right, and no scaling of elements occurs. figs. 120b-120c together illustrate an example of a transition in response to a high-acceleration rotation of the outer ring. in this example, a single turn (e.g.
rotational increment) counterclockwise causes the gui to pan quickly through multiple elements (which may scale down in size, enter the screen from the left, and exit the screen from the right) until the user stops turning the ring. when the user stops turning the outer ring, the element may scale up to normal size, and a single icon or content item may fill the display. fig. 121a illustrates an example of a transition within the gui in which content is zoomed-in in response to rotation of the outer ring. fig. 121b illustrates an example of a transition within the gui in which a first screen 1 “folds over” in an animation, resulting in a second screen 2 (e.g. for the next feature or content item) being displayed to the user. in particular embodiments, the gui of the device may include a physical model that takes into account motion of the user and produces visual feedback reflecting the user's movements. as an example, once there is activation input (e.g. in the form of a particular gesture) by the user, the user's motion may be continuously tracked through input from one or more of the sensors of the device. the visual feedback may reflect the user's motion in the user interface, while the underlying content stays still, so that gestures may be registered and parallax may be used to distinguish between ui features or controls and underlying content. in particular embodiments, the physical model may include a generalized spring model with damping. in such a model, items may be arranged in layers. deeper layers may have a “stiffer” spring in the physical model holding items in place. this may cause bottom layers of the user interface to move slightly when the device is moved, while top layers may move more, creating a sense of parallax. additionally, the spring model may include damping, which causes motion to lag, creating a more fluid, smooth motion. fig. 122 illustrates an example of using a physical model in the gui. the user wears the device 100 on her arm.
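the layered spring model with damping described above might be sketched as one damped oscillator per layer, driven by a constant pseudo-force representing the tracked device motion; the stiffness, damping, and force values here are illustrative assumptions:

```python
def settle_offset(force, stiffness, damping, steps=600, dt=1.0 / 60.0):
    """Integrate one UI layer as a damped spring pulled back toward rest
    while a constant pseudo-force (the tracked device motion) pushes it
    aside. Explicit Euler integration; all constants are illustrative."""
    x = v = 0.0
    for _ in range(steps):
        a = force - stiffness * x - damping * v  # spring pull plus damping
        v += a * dt
        x += v * dt
    return x
```

a layer with ten times the stiffness settles at one tenth of the offset (force/stiffness), so deeper layers appear nearly fixed while top layers drift further, which is the parallax effect described; the damping term makes the motion lag and settle rather than oscillate.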
once the user moves her arm in a downward fashion, the icon 12210 displayed on the screen (e.g. a light bulb) moves in a manner reflecting the user's movement. the underlying content (e.g. the background image) on the screen does not move, however. this type of floating icon or menu item may, for example, be helpful when the display is of a size that does not allow for many icons or menu items to be displayed simultaneously due to visual crowding. additionally, this type of floating behavior may also be used with notification means for presenting an event to the user. in particular embodiments, the gui of the device may include faces as default screens or wallpapers for the device, and these faces may be part of an interaction and transition model hierarchy (e.g. in the top layer of the hierarchy or as a home screen). as described herein, these faces may be changeable applications or modes that may automatically respond contextually to a user's activity. as an example, the faces may change depending on the user's environment, needs, taste, location, activity, sensor data, gestures, or schedule. the availability of a face (or the transition in the gui from one face to another) may be determined based on contextual information. as an example, if the user has an upcoming event scheduled in her calendar, the face of the device may change to a calendar face that displays the upcoming event information to the user. as another example, if the user is determined to be in the vicinity of her home (e.g. based on gps data), the face of the device may change to a face associated with a home-automation application. as yet another example, if the user is determined (e.g. based on various biometric sensors such as heart rate or arousal sensors, or based on accelerometers) to be moving vigorously, the face of the device may change to a fitness mode, showing the user's measured pulse, calories burned, time elapsed since the activity (e.g. a run) began, and the time. 
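the contextual face selection described above amounts to a priority dispatch over sensor and schedule signals; the dictionary keys, thresholds, and face names below are illustrative assumptions, not values from this disclosure:

```python
def pick_face(context):
    """Pick a watch face from contextual signals, mirroring the examples
    in the text (calendar event, proximity to home, vigorous motion)."""
    minutes = context.get("upcoming_event_minutes")
    if minutes is not None and minutes <= 30:
        return "calendar"                      # upcoming scheduled event
    if context.get("near_home"):               # e.g. derived from gps data
        return "home-automation"
    if context.get("heart_rate", 0) > 120:     # e.g. biometric sensors
        return "fitness"
    return "analog-watch"                      # default face
```

the ordering encodes which context wins when several apply; a learned model built from sensor and usage data over time could replace these fixed rules.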
any suitable sensor data (e.g. from sensors including biometric sensors, focus sensors, or sensors which may determine a user's hand position while driving a car) may be used to determine a context and appropriate face to display to the user. the user's historical usage of the device (e.g. a particular time of day when the user has used a fitness application, such as in a fitness class) may also determine which face is displayed on the device. as an example, the device may anticipate the user's need for the fitness mode at the particular time of day when the user tends to exercise. contextual faces may also be associated with the suppression of notifications (e.g. if the user is determined to be driving or if the device is not being worn) or a change in how notifications are expressed (e.g. visually, or audibly). in particular embodiments, the faces of the device need not be associated with any application on the device and may be wallpapers or backgrounds on the display of the device. faces may be dedicated to specific channels of information (e.g. calendar feeds, health or activity feeds, notifications, weather feeds, or news). as an example, a severe weather notification or alert (received, e.g., from a weather feed) may cause the weather face to be displayed on the display along with the notification. faces may display the time (e.g. in analog or digital format) regardless of the type of face. the faces may be customizable by the user. the user's customizations or tastes may be input explicitly by the user (e.g. to management software on the device or a paired device) or learned directly by the device (e.g. using sensor and usage data to create a model over time). fig. 123 illustrates example faces, including an analog watch 12310 , an analog watch with a circular menu layout 12320 , a health-mode face 12330 , and a weather face 12340 . fig. 
124 illustrates an example set of faces 12410 - 12440 for the device in which calendar and appointment information is displayed. in particular embodiments, the device may be worn on a limb of a user (without obscuring the user's face and without requiring the user to hold the device) and may include augmented reality (ar) functionality. this ar functionality may be based on the use of body motion for aiming a camera of the device, which may allow for aiming with higher accuracy due to a user's sense of proprioception. this type of system may allow the user of the device to view an object in the real world at the same time that the user views a version of the object (e.g. captured by a camera of the device) on the display. an example of this ar capability is illustrated in fig. 16 . such an ar system may allow for “see-through” capability using an aligned camera and sensor on opposite sides of a user's limb. various ar applications may be enabled by this type of arrangement, described herein. in particular embodiments, applications may be designed specifically for the device to allow for immediate, opportunistic use. additionally, a delegation model may be provided on the device, allowing for the use of external resources to improve the breadth of applications available to run on the device while incurring less (or no) penalty in terms of processing requirements or energy use. in particular embodiments, the device may control or be controlled by other devices (e.g. nearby devices discovered via a network and communicatively paired with the device). this type of control may be achieved via proximity, gestures, or traditional interfaces. pairing may be achieved using a variety of technologies including a camera of the device, discussed in further detail herein. fig. 125 illustrates an example of an automatic camera activation decision flow for the device. in particular embodiments, whether the camera is enabled and whether automatic activation of the camera (e.g. 
for object recognition) is enabled may depend on the application or mode the device is currently in. in particular embodiments, automatic camera activation may be enabled on the device 12510 . if this feature is enabled (determined at step 12520 ) and if there is sufficient cpu capacity and power available on the device (e.g. to calculate features of interest from an image, determined at step 12530 ), then a camera of the device (e.g. an outward-facing camera) may automatically capture, process, or display 12560 one or more images if the camera is held steadily in an aiming position by the user for a pre-determined amount of time (e.g. as detected by an inertial measurement unit on the wearable device or as calculated by the blurring of the image, determined at step 12540 ). in other embodiments, the camera may be activated and searching for images at all times. in yet other embodiments, the camera may capture an image and perform feature recognition only if the user manually triggers image capture (e.g. pressing or clicking the outer ring, or tapping the display, determined at step 12550 ). in particular embodiments, when the camera is activated (by any suitable method), augmented reality (ar) functionality may be enabled. the ar functionality may be automatically enabled (depending, e.g., on cpu capacity and power available on the device). in other embodiments, ar functionality may be explicitly enabled by the user via any suitable input by the user. the user may, for example, provide touch input on the display to enable ar functionality. as an example, a user may capture an object such as a bird (e.g. by pointing a camera of the device at the bird), and the user may touch the image of the bird as displayed on the display. this action may enable the ar functions of the device, causing, for example, the device to recognize the bird as an object and return information about the bird to the user. 
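the fig. 125 activation decision flow can be sketched as a single predicate; the parameter names and the two-second hold threshold are assumptions for illustration, not values from the figure:

```python
def should_capture(auto_enabled, cpu_and_power_ok, steady_seconds,
                   manual_trigger, hold_threshold=2.0):
    """Return True when the outward-facing camera should capture,
    process, or display an image, per the fig. 125 flow."""
    if manual_trigger:                 # step 12550: press, click, or tap
        return True
    if not auto_enabled:               # step 12520: feature disabled
        return False
    if not cpu_and_power_ok:           # step 12530: insufficient cpu/power
        return False
    # step 12540: camera held steadily in an aiming position long enough
    return steady_seconds >= hold_threshold
```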
in other embodiments, as described herein, the user may perform one or more gestures to enable ar functionality, as well as to perform tasks using ar functionality (e.g. using a “virtual” keyboard by performing typing gestures in view of a camera of the device). in particular embodiments, if the device does not have the capability to calculate features of interest itself, the device may capture an image, transfer the image to a communicatively coupled device (e.g. a nearby device such as a phone or personal computer) or to an internet-based service, where the features of interest may be calculated remotely. once the features of interest are determined, an internet-based service or local data catalog may be consulted for additional information about a recognized object. if information is found, the relevant data may be displayed to the user on the device along with the recognized feature. the device may, in particular embodiments, have a small form factor and be constrained in terms of available memory, processing, and energy. a delegation model may allow the device to delegate portions of one or more processing tasks (e.g. tasks related to ar functionality) to nearby devices (e.g. phone or personal computer) or to network- or internet-based services, for example. as an example, for delegable tasks, the application requiring the task provides the system (e.g. a kernel of an operating system of the device) with characteristics or a profile of the task, including the task's latency sensitivity, processing requirements, and network payload size. this may be done for each delegable subtask of the overall delegable task. since tasks are often pipelined, contiguous chunks of the task pipeline may be delegated. the system may, in particular embodiments, take measurements of or build a model of one or more characteristics of the device. characteristics of the device may include static properties of the device, e.g. 
properties of hardware components of the device including total memory installed, maximum cpu speed, maximum battery energy, or maximum bandwidth of a network interface. characteristics of the device may also include dynamic properties of the device, e.g. operating properties of the device including available memory, current cpu capacity, available energy, current network connectivity, availability of network-based services, a tally of average user behavior among one or more users, or a predicted or expected processing time of a task (e.g. given a particular usage scenario). in particular embodiments, the device may have a model that incorporates previous and current measurements of device characteristics to aid in determining future device behavior. based on the task characteristics or profile and these measurements or models, as well as based on whether the task may be executed on the device, the system may delegate (or not delegate) one or more portions of the task or task pipeline. for example, if the available memory on the device cannot support the processing of a task (e.g. playing a video), one or more portions of the task may be delegated. as another example, if the cpu capacity of the device cannot support processing a task (e.g. if the cpu is running at capacity due to its existing load), one or more portions of the task may be delegated. as another example, if a battery level of the device is low and the battery is not expected to provide energy to the device for as long as the expected processing time of the task, one or more portions of the task may be delegated. as another example, if the network connectivity of the device is low or non-existent, one or more portions of the task may not be delegated (e.g. if the device also has enough available memory, cpu capacity, and energy). as another example, if one or more network-based services are available to the device (e.g. 
cloud-based services for processing) and the device has suitable network connectivity (e.g. good available bandwidth), one or more portions of the task may be delegated. as another example, if a user of the device typically (e.g. historically) delegates the playing of videos, one or more portions of the task of playing a video may be delegated. as another example, if a predicted processing time of the task (e.g. predicted based on a model incorporating previous and current measurements of device characteristics) is beyond a certain threshold (e.g. several minutes), the task may be delegated. any suitable characteristics of the device (e.g. static or dynamic properties) in any suitable combination may be used to determine whether to delegate a task. furthermore, any suitable characteristics of a task of the device (e.g. including a task profile or characteristics of the task including latency sensitivity, processing requirements, or network payload size) may be used to determine whether to delegate a task, either alone or in conjunction with device characteristics. additionally, any model of the device (e.g. of device behavior), either alone or in conjunction with device or task characteristics, may be used to determine whether to delegate a task. in particular embodiments, devices paired with the device may also include a delegation model, such that the paired device (e.g. a phone) performs the same steps, delegating tasks based on its own models of energy, connectivity, runtime requirements, and feasibility. the delegated task may be processed or run to completion on the paired device (e.g. phone), and the results of processing the delegated task may be returned to the device. in particular embodiments, the device may operate in standalone mode (e.g. without delegating any processing tasks) when it does not have any network connectivity or when no paired devices are in range of the device.
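the delegation heuristics enumerated above might be combined as follows; the dictionary keys and the five-minute threshold are illustrative assumptions standing in for the task profile and the measured device characteristics:

```python
def should_delegate(task, device):
    """Decide whether to delegate one delegable task (or subtask),
    applying the example rules from the text in order."""
    if not device["network_up"]:                      # no connectivity: keep local
        return False
    if task["memory_mb"] > device["free_memory_mb"]:  # memory cannot support task
        return True
    if task["cpu_load"] > device["free_cpu"]:         # cpu cannot support task
        return True
    if device["battery_minutes"] < task["expected_minutes"]:
        return True                                   # battery will not last
    if task["expected_minutes"] > 5:                  # long predicted processing
        return True
    return False
```

since tasks are often pipelined, this predicate would be evaluated per delegable subtask, and contiguous chunks of the pipeline delegated together.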
once the device regains connectivity, or when a device is paired with the device, delegation of tasks may resume. an example algorithm of a delegation model of the device is illustrated in fig. 126 . in this example, a delegable task process begins on the device ( 12610 ). the system of the device performs a power use analysis and prediction ( 12620 ) (based, e.g., on the user's historical energy usage 12630 and the expected time until a charge of the device 12640 ). based on this, the system determines at step 12650 whether there is sufficient charge remaining for the required uptime of the delegable task. if sufficient charge remains, the system of the device may increase the power usage 12660 and process the delegable task on the device itself 12670 . if, however, the device does not have sufficient charge for the required uptime, the device may query a paired device (e.g. a phone) 12680 to determine the energy status of the paired device ( 12690 ). if, in the example of a phone, there is sufficient charge remaining on the phone for the required uptime, the task may be processed on the phone 12694 . if, however, there is not sufficient charge on the phone, the system may determine at step 12692 if the device has connectivity to an internet-based (e.g. cloud) or other network-based service. if not, the device may delegate the process to the phone 12694 . if there is connectivity, the device may delegate the process to the cloud 12696 , where the task is processed and the results later returned to the device. in particular embodiments, delegable tasks may be delegated by the device in a divided fashion to one or more paired devices (e.g. mobile phones or personal computers) or network/internet services. that is, delegable sub-tasks of a delegable task or process may be delegated by the device to different locations. 
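the fig. 126 flow reduces to a small placement function; each boolean input stands in for one of the checks at steps 12650, 12690, and 12692:

```python
def place_task(device_charge_ok, phone_charge_ok, cloud_reachable):
    """Choose where a delegable task runs, following the fig. 126 flow."""
    if device_charge_ok:
        return "device"   # 12650: sufficient charge for the required uptime
    if phone_charge_ok:
        return "phone"    # 12690 -> 12694: paired phone has sufficient charge
    if cloud_reachable:
        return "cloud"    # 12692 -> 12696: delegate to an internet-based service
    return "phone"        # 12692 -> 12694: no connectivity, fall back to phone
```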
it is contemplated by this disclosure that a delegation model for a particular device (or for a family or range of devices) may be dynamic or contextual. as an example, a delegation model may take into account available memory, cpu capacity, and available energy of a particular device (or a family of devices), factors which may all change over time. the delegation model may also take into account the availability of network- or cloud-based services (and the capacity of each), as well as network connectivity (e.g. bandwidth and latency), which may also change over time. for example, with reference to fig. 127 , according to a first delegation model 12710 (which may, e.g., be applicable for devices manufactured in the next year), most processing may be evenly divided between the device and a paired device (e.g. smartphone), with only a small amount of delegation to a server of a cloud-based service. according to a second delegation model 12720 (which may, e.g., be applicable for devices manufactured in a three-year timeframe), most processing may be handled locally by the device (e.g. due to predicted advances in memory, cpu, and energy capacity in a small form factor). in this second model, some processing may be delegated to a server (e.g. more than in the first delegation model, due to improved network connectivity) and only a small amount of delegation may occur to the locally paired device. according to a third delegation model 12730 (which may, e.g., be applicable for devices manufactured in a five-year timeframe), all or almost all processing tasks may be evenly divided between the device and a server of a cloud-based service, with no or almost no processing being delegated to a locally-paired device. any number of delegation models may be created, as the factors taken into account by a delegation model are dynamic.
as an example, all or almost all tasks may be performed locally on the device according to one delegation model, and all or almost all tasks may be delegated by the device in another delegation model. the device may choose to delegate functionality to a paired processing-rich device (e.g. phone, computer, tablet, television, set-top box, refrigerator, washer, or dryer) or to the internet based on the energy reserves or connectivity bandwidth to each of these locations. for example, a device with a powerful processor may delegate to the paired device when low on energy, or it may choose to delegate to the internet service when the paired device does not have sufficient power reserves. likewise, the system of the device may choose to process locally if the connection to the internet is showing higher latency to reduce the size of the data transfer. in particular embodiments, an entire application or a portion of an application may be delegated by a user of the device to a paired device or vice versa. this may occur on a per-application basis. when the application on a target device (e.g. a television) is to be delegated to the device, the target device may send a request over the paired connection (possibly via an intermediary device, such as a smartphone or personal computer) to load the application on the device. the device may then act as a client to a server running on the paired device (e.g. television). similarly, an application running on the device may be delegated to the paired device (e.g. a video playing on the device may be delegated to playing on a paired television). for example, if the device is running a first application, and a user of the device wants to interact with a second application, the device may automatically delegate a task of the first application to be processed by another device (e.g. a paired television). fig. 128 illustrates an example of a decision flow in the device operating according to a delegation model. 
in this example, an image-capture application is running on the device. a scene is captured on the device 12810 , and the device determines 12820 if it has sufficient cpu capacity for image feature calculations. if the device does have enough cpu capacity, it calculates the features of interest in the scene locally 12830 . if the device does not have sufficient cpu capacity, it may first determine 12840 if it is paired communicatively with another device with more processing capability (e.g. a mobile phone or a personal computer). if it is paired with such a device, the device may send data to the paired device so the paired device may calculate features of interest in the image 12850 . if the device is not paired with such a device, it may determine if it is connected to an internet-based (e.g. cloud) service 12860 . if not, the device performs no further action. if so, the device may send data to the cloud service so the service may calculate features of interest in the scene 12870 . features of interest may be calculated (wherever they are calculated) using any suitable algorithm including, for example, surf. in this example, the features of interest may be compared to a local catalog or an internet-based service to determine whether any matches are found (and if so, relevant information of interest) 12880 . if a match is found 12890 , the result may be presented to a user on the device 12895 . if no match is found, no further action is taken. in particular embodiments, a camera or other optical sensor of the device may be used to recognize any gestures performed by the user (e.g. in the space between the camera and a target in the real world). these gestures may, for example, be used to act upon the data presented (e.g. the real world target, such as a sign including text) or may be used to point to particular items upon which augmented reality functions may be performed. 
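the catalog-comparison step ( 12880 ) might, for a surf-style descriptor, reduce to a nearest-neighbor search with a distance cutoff; the catalog entries and the threshold below are illustrative assumptions:

```python
import math

def match_feature(descriptor, catalog, max_distance=0.4):
    """Compare one extracted feature descriptor against a local catalog,
    returning the best-matching label or None (steps 12880 / 12890)."""
    best_label, best_dist = None, float("inf")
    for label, reference in catalog:
        dist = math.dist(descriptor, reference)  # euclidean distance
        if dist < best_dist:
            best_label, best_dist = label, dist
    # only report a match when the best candidate is close enough
    return best_label if best_dist <= max_distance else None
```

a delegated version would run the same search against an internet-based service's catalog and return only the label and relevant information of interest to the device.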
for example, the user may point to a word on a sign, causing the device to translate it and display the translation to the user. fig. 17 illustrates two examples of images captured by a camera of the device. in one example, a truck 1725 and the hand 1720 of a user of the device are both within the angle of view of a camera 1705 of the device and displayed by the device (shown at 1710 ). as such, gestures performed by the user upon the truck may be recognized by the device and processed by the device to provide, for example, ar functionality. in the second example, only the truck is within the angle of view of the camera (shown at 1715 ), and as such, gestures performed by the user are not captured or recognized by the device. gesture recognition may also be delegated by the device. in particular embodiments, objects or images may be recognized by the device when they are within the frame of view of a camera of the device. as described herein, there may be multiple ways for the device to recognize an object. as one example, a gesture performed by the user (e.g. a pointing gesture indicating a particular object) may enable ar functionality on the device and cause the device to recognize the object. as another example, automatic object recognition may occur when, for example, the user positions the camera for a certain amount of time on a particular object (e.g. a section of text). as a third example, object recognition or ar functionality may be enabled explicitly by the user when, for example, the user taps or touches the display (or, e.g., clicks the outer ring) when the camera of the device has captured an object of interest. global object recognition may, in some instances, be computationally intensive and error-prone. as such, in particular embodiments, a limiting set (e.g. the pages of a magazine or catalog or a catalog of a particular type of object such as plant leaves or book covers) may be applied to improve accuracy.
there exist a number of choices for calculation of feature vectors from images, which the designer of the system for the device may select from. in some instances, the conversion of feature vectors between different approaches may be computationally expensive, so that the choice of the database of possible matches is replicated on the device. the calculation of feature vectors may be delegated, as described herein. in particular embodiments, barcodes of various types may be recognized by the device. these barcodes may be used to query internet-based services for additional data, as well as options to purchase, review, or bookmark the barcoded item for future review. while two-dimensional barcodes may generally be read directly, the system of the device may offer an additional close-focus mode for particularly small or one-dimensional barcodes to improve recognition rate. should the system lack the ability to decode the barcode, it may simply focus the camera, take a picture, and delegate recognition to a remote service, as described herein. figs. 129a-129d illustrate an example of barcode recognition mode. the device may be pointed at an item ( 129 a), recognize the item ( 129 b), display additional information obtained from the internet about the item ( 129 c), and provide the user an interface to purchase the item ( 129 d). in particular embodiments, the device may perform translation. translation functionality may be divided into two portions: optical character recognition (ocr), and translation of recognized characters, words, or phrases. ocr may be completed on the device or delegated (e.g. to a paired processing device) to reduce the amount of data to be translated by the device. simple word translations may be performed on the device or delegated (e.g. to a paired processing device). as with other functionality described herein, part or all of the recognition or translation process may be delegated as needed.
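the two-portion split of translation functionality described above might be sketched as follows; the `ocr` callable and the word dictionary are stand-ins for the device's (or a paired processing device's) recognition and translation services:

```python
def translate_text(image, ocr, dictionary):
    """Two-stage translation pipeline: OCR first (which may itself be
    delegated to reduce the data sent onward), then per-word lookup.
    Unknown words are passed through unchanged."""
    words = ocr(image).split()   # segment the recognized text into words
    return " ".join(dictionary.get(w.lower(), w) for w in words)
```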
the user may optionally use a gesture to indicate the word to be translated, as shown in fig. 130 (e.g. the word “warning”). since individual words may be circumscribed by white space, the system may segment the word before attempting translation. additionally, if the device can perform ocr with low latency, it may show the text to the user so that the user knows when the device is targeting and correctly recognizing the correct text. if automatic ocr is enabled, then the device may automatically identify images in the angle of view of an outward-facing camera and present on the device display information about the identified images. if automatic translation is enabled, then the device may automatically translate text in the angle of view of the outward-facing camera and present the translated text on the device display. figs. 131a-131d illustrate examples of the device operating in various augmented reality modes described herein, including barcode recognition mode ( 131 a), image recognition mode ( 131 b), ocr and translate mode ( 131 c), and object recognition mode ( 131 d). fig. 132 illustrates an example of the overall flow of actions for an augmented reality system for the device. although this example illustrates an image capture application, any suitable task or process on the device may follow a similar flow. additionally, any task after the device captures an image and before the device displays results to the user may (as suitable) be delegable by the device. in this example, an image from a camera of the device is captured (in the image capture section 13210 ), pre-processed (in section 13220 ), features are extracted and recognized to produce image recognition results (in section 13230 ), and any objects may be recognized (in section 13240 ). object data may be formatted for action by a user of the device. the user may activate the augmented reality mode of the device 13211 (e.g. 
via a user gesture or pointing the camera of the device at an object for a pre-determined amount of time), and an image in the view of the camera 13212 may be captured (e.g. based on a trigger event such as a user input or automatic camera activation) by device camera 13213 to produce a camera image 13214 . at this point, the pre-processing stage 13220 may be entered. pre-processing 13220 may, for example, include contrast enhancement, grayscale conversion, sharpening, or down-sampling. in particular embodiments, the camera may operate in a general augmented reality mode in which anything in front of the camera may be processed and recognized. in other embodiments, the camera may operate in specific modes (e.g. ocr, barcode, or visual marker) and recognize only particular items when in such a mode. in particular embodiments, if it is determined that the image may include known shapes, symbols, or organizations of shapes or symbols (e.g. if the camera or device is in ocr mode, barcode mode, or visual marker mode), ar image processing may proceed on a first path. this first path begins with preliminary processing 13221 , proceeds to segmentation 13231 (which may, for example, determine symbol or symbol group boundaries such as letters or words), and continues with one or more of optical character recognition 13234 (e.g. if it is determined the image may contain characters, determining what those characters are), barcode recognition 13235 (e.g. if it is determined the image may contain a barcode, recognizing the barcode), or visual marker recognition 13236 (e.g. recognizing all other types of visual markers). the results of this first path are sent to object recognizer 13242 . in particular embodiments, if it is determined that the image may include features that are not necessarily known, ar image processing may proceed on a second path. the second path begins with feature extraction 13222 (e.g.
in which the presence of edges or lines, changes in angles of lines, edges, points of interest, or patterns are detected in the captured image). the second path proceeds to image recognition 13232 , in which the features of the image are compared with feature data from a recognition database 13233 (which may, for example, reside on the device, on a locally-paired device, or on a remote server or computer). the results of the image recognition comparison are provided 13237 and sent to the object recognizer 13242 . in the object recognition section 13240 , the first and second paths converge at the object recognizer 13242 . here, results from an object database 13241 are used to recognize objects (e.g. that a phone recognized using the image recognition database 13233 is a particular brand and model of phone). object data 13243 about the object recognized by recognizer 13242 (e.g. the price of the model of phone recognized, or where the phone may be available for purchase) may be provided. for text, definitions or translations may be retrieved and displayed to the user. for barcodes, product information and links to buy the recognized object may be displayed to the user. in particular embodiments, the data may be purely descriptive (e.g. the price of the phone) or may be active (e.g. a link where the user may purchase the phone). if the data includes action data 13244 , then an action controller 13250 (which controls, formats, and outputs a gui for the user of the device) may show a ui to the user 13255 including the active data (e.g. the link for purchasing the phone). if the user selects an action 13260 (e.g. clicking the link), then the action controller shows the action ui to the user 13265 (e.g. opening of the link), and if the action is confirmed 13270 , then the action (e.g. the actual opening of the webpage associated with the link) is performed 13275 . fig. 133 illustrates an example of a network environment.
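The comparison of extracted features against the recognition database 13233, described above, can be sketched as nearest-neighbor matching over feature vectors. Cosine similarity and the 0.9 threshold below are illustrative assumptions, not the specific metric of the described system.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def recognize(query, database, threshold=0.9):
    """Return the label of the closest database entry, or None if no
    entry is similar enough. `database` maps labels to vectors."""
    best_label, best_score = None, threshold
    for label, vec in database.items():
        score = cosine(query, vec)
        if score >= best_score:
            best_label, best_score = label, score
    return best_label
```

In practice the database may reside on the device, a locally-paired device, or a remote server, so this matching step is itself a candidate for delegation.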
as described herein, in particular embodiments, the device 13310 may be paired with other devices (e.g. nearby devices). the device may connect directly to a personal area network 13320 (which may bridge via other devices on the same network to a local area network), or the device may connect to a local area network 13330 directly. the personal area network may include, for example, non-wi-fi radio technology, such as bluetooth, nfc, or zigbee. the personal area network may, for example, include a smart media gateway 13322 (e.g. a media server), a smart tv 13324 , another processing provider 13326 , or a phone 13328 . phone 13328 may allow the device to connect to a cellular network 13340 , and from there to the internet 13350 . the local area network 13330 may include, for example, wi-fi with or without authentication. the local area network may, for example, include a local wireless network router 13332 , smart media devices 13334 , smart appliances 13336 , and home automation technology 13338 . the local area network may, in turn, connect to the global internet 13350 via, for example, local router 13332 that connects to an internet service (e.g. a proprietary cloud service 13352 or other cloud service partners 13354 ). some devices may be reached by the device either via direct access (e.g. through the personal area network) or through the local area network. those devices reachable by the device may be paired with the device and may be controlled by the device or control the device. the device may connect to the personal area network or the local area network using, for example, any suitable rf technology. as shown in fig. 133 , pairing to a target device in the periphery may first occur over the rf network. this allows the device to know what is “nearby”. this may happen over the personal area network (e.g. an ad-hoc or peer-to-peer network), or may use a mediated network such as 802.11 wireless (e.g. the local area network). 
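A device choosing among the links above might apply a simple preference ordering, favoring the low-power personal area network, then the local area network, then cellular connectivity via a paired phone. The ordering and route names below are illustrative assumptions.

```python
def choose_route(available):
    """Pick a network path from the set of currently reachable links,
    preferring lower-power options first."""
    for route in ("pan", "lan", "cellular-via-phone"):
        if route in available:
            return route
    return "offline"
```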
once a neighborhood is established, the device may request that nearby devices enter pairing mode. this may be done either directly or via a paired processing device with a greater gamut of connectivity options, such as a mobile phone. once the target devices have entered pairing mode, they may exhibit their pairing signals. for example, devices with displays may show a visual tag on their display, while others may enable an nfc tag allowing for a scanner to identify them. other approaches such as selection from a list or by pin code may also be used. once a device is uniquely identified as a pairing target, the device may exchange a security token with the target device to finalize the pairing. fig. 134 illustrates an example of different types of pairing technology that may be used to pair a target device with the device. the target device, which may be a smart device such as a phone, may include passive nfc tags 13402 or active nfc transmitters 13404 (which may be recognized by a nfc tag reader 13420 and nfc decoder 13428 of the device); an nfc decoder 13406 (which may recognize nfc tags written by the nfc tag writer 13422 of the device), passive visual tags 13408 (e.g. stickers), barcodes 13410 , or other display information 13412 (which may be recognized by a camera 13424 of the device); or other pairing system 13416 . an active tag generator 13414 of the target device may create the display information 13412 or provide information to the other pairing system 13416 of the target device (which is recognized by a mirror pairing system 13426 with pairing code decoder 13438 of the device). the device may write data to nfc tags (e.g. with an nfc tag writer 13422 ) to transmit this data to other target devices that may be paired to the device. tags written by the device may be recognized by nfc tag decoders 13406 on a target device. 
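The security-token exchange that finalizes pairing, described above, could for instance bind a shared secret to the target's identity and a fresh nonce. The HMAC-based scheme below is a hypothetical sketch, not the protocol actually used by the device.

```python
import hashlib
import hmac

def make_pairing_token(shared_secret: bytes, device_id: str, nonce: bytes) -> bytes:
    """Derive a pairing token over the target identity and a nonce."""
    return hmac.new(shared_secret, device_id.encode() + nonce, hashlib.sha256).digest()

def verify_pairing_token(shared_secret: bytes, device_id: str, nonce: bytes, token: bytes) -> bool:
    """Constant-time check that a received token matches expectations."""
    expected = make_pairing_token(shared_secret, device_id, nonce)
    return hmac.compare_digest(expected, token)
```

A fresh nonce per pairing attempt prevents a captured token from being replayed against a different target device.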
the device may include any of a number of decoders including barcode decoder 13430 , visual tag decoder 13432 , image recognizer 13434 , or other image-based decoder 13436 (e.g. a decoder for qr codes, logos, or blink patterns of leds), all taking input from camera 13424 of the device. after the device receives and recognizes pairing information, it may decode (e.g. through a variety of decoders) the relevant information to proceed with pairing with the target device. in particular embodiments, pairing may be achieved using motion—a motion-sensitive target device (e.g. mobile phone or remote) may be paired with the device by holding and moving the target device in the same hand as the device (e.g. if both devices include accelerometers, the similar pattern of motion may be detected and used to pair the devices). as another example, a fixed target device may be paired with the device by, for example, tapping the fixed target device with a random pattern while holding the fixed target device in the same hand as the device (e.g. if both devices include touch detection, the similar pattern of tapping may be detected and used to pair the devices). additionally, pairing may be done using audio—if the device and a target device both have audio reception capabilities, a user may make a sound (e.g. say a phrase) that both devices detect and then set up a pairing. any suitable technology (including, e.g., augmented reality functions) of the device may be used to pair with and control local devices. the device and target device may each connect to other possible intermediary network devices 13440 , and also to a local area network 13450 . fig. 135 illustrates an example process for pairing a target device (e.g. using any of the methods described herein) with the device. once pairing mode is enabled 13510 , the device determines if the rf network contains pairable target devices 13512 . if not, no further action is taken (e.g. the device may continue to scan periodically). 
if so, the device may request that the pairable devices enter pairing mode 13514 . the device may then proceed (in any order, or in a parallel fashion) to scan, via different available technologies, for available target devices. these may include nfc tag scans 13516 , visual tag scans in the camera's angle of view 13518 , barcode scans in the camera's angle of view 13520 , or any other method 13522 . if a target device is detected via one of these methods, the target device is paired to the device 13524 . once the pairing has occurred, the device may show menu items to the user for controlling the paired device(s). the device may allow for both visual and motion-based gestural control of the paired devices. for example, the user may gesture (e.g. wave her hand) to change channels on a paired television, or may make a pinching gesture to transfer video media from the device to a paired display (using, e.g., ar functionality). device control mediated over an rf network may be both local and securable. fig. 136 illustrates example controls enabled on the device for a paired and controlled television including an active on/off icon 13610 , favorite channels 13620 , a current channel display 13630 , and volume 13640 . as described herein, any suitable input from the user may be used to control functionality of a paired device. for example, gesture input, click or press input, or touch input may be used, for example, to change channels, adjust volume, or control other functions of the paired television. in particular embodiments, a pairing and control model for the device may include the following characteristics. the device may function as the host for an application that interacts with or controls one or more functions of a remote device (e.g. an appcessory such as a controllable thermostat). 
a smartphone (or other locally-paired device), which may have previously been the host for the application, may now function merely as a local target device to which the device may delegate certain functions related to the interaction or control of the remote device (e.g. longer-range wireless connectivity to the remote device, sending commands to the remote device, receiving data from the remote device, or processing tasks). control of the remote appcessory device may be done by the device using any suitable means including, for example, visual means (e.g. using the camera) or motion-based gestures. in other embodiments, the locally-paired smartphone may continue to function as the host for the application that interacts with the remote appcessory, but the device may provide some or all of the user interface for data input and output to and from the application (e.g. a “light” version of the application hosted by the smartphone). for example, the user may control the application using the device, but the smartphone may still function as the host of the application. in particular embodiments, the device may be operable with one or more services. these services may fall in categories including security, energy, home automation and control, content sharing, healthcare, sports and entertainment, commerce, vehicles, and social applications. example security applications include the following. the device may authenticate a user (who is wearing the unlocked device) to another device near the user (e.g. paired with the device). the device may be unlocked with a code entered by the user using any suitable input including, for example, rotating the outer ring of the device. as an example, while a user rotates (or presses or clicks) the outer ring, the display may show alphanumeric or symbolic data corresponding to the rotation (or press or click) by the user. 
if, for example, the user rotates the outer ring one rotational increment in a clockwise direction (or, e.g., clicks or presses the outer ring once), the display may show the user a “1,” and if the user rotates the outer ring two rotational increments (e.g. within a certain period of time, such as a millisecond) in a clockwise direction (or, e.g., clicks or presses the outer ring twice), the display may show the user a “2.” in particular embodiments, the display of alphanumeric or symbolic data corresponding to a rotation (or press or click) by the user may allow the user to unlock the device using the metaphor of a combination lock. the device may also be unlocked using biometric data (e.g. by skin or bone signatures of the user). in an example energy application, the device may automatically display information about the energy consumption of the room or other location in which the user is located. the device may also be able to display information about the energy consumption of other paired devices and update all of this information dynamically as the user changes location. in an example home control application, the user may select and directly control paired home-control devices using, for example, rotation of the outer ring or a gesture input. the user may use gestures to control the sharing or transfer of content to or from the device (e.g. transferring video playing on the device to a paired television, as described herein). additionally, auxiliary information (e.g. movie subtitles) may be provided on the device for content shown on another, larger device (e.g. television screen playing the movie). the device may automatically determine a healthcare context (e.g. if the user is exercising or sleeping). when it determines this context, the device may open applications corresponding to the healthcare context (e.g. 
for recording heart rate during exercise, movement during exercise, duration of exercise, pulse oximetry during exercise, sleep patterns, duration of sleep, or galvanic skin response). the device may, for example, measure a user's health-related data (e.g. heart rate, movement, or pulse oximetry) and send some or all of this data to a paired device or a server. although illustrated in the healthcare context, the determination of a relevant context (e.g. based on a user's behavior), opening of corresponding applications, recording of data, or transmission of this data may be applicable in any suitable context. the device may assist in sports-related applications such as, for example, automatically assessing a golf swing of the user and suggesting corrections. in a commercial setting, the device may automatically identify a product (e.g. using rfid, nfc, barcode recognition, or object recognition) when the user picks up the product and may provide information about the product (e.g. nutrition information, source information, or reviews) or the option to purchase the product. payment for the product may, for example, be accomplished using visual barcode technology on the device. in particular embodiments, the device may be used to pay for a product using nfc, rfid, or any other suitable form of short-distance communication. during payment, the user's information may, for example, be authenticated by the device, which may detect the user's biometric information (e.g. bone structure or skin signature). the device may also automatically provide an indication to the user (e.g. a vibration) when the user is near a product on her shopping list (e.g. stored in the device) or another list (e.g. a wish list of the user's friend). the device may function as a key for unlocking or turning on one or more vehicles. the user may, for example, enter a code using the outer ring to unlock or turn on the vehicle (e.g. using nfc technology), as described earlier. 
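The outer-ring code entry used here and for unlocking the device earlier follows the combination-lock metaphor: each group of rotational increments maps to a digit, and the entered digits are compared with a stored code. A minimal sketch, with the modulo-ten digit mapping as an assumption:

```python
def code_from_rotations(increment_groups):
    """Map each group of ring increments (e.g. two clockwise clicks in
    quick succession) to a single digit of the entered code."""
    return [increments % 10 for increments in increment_groups]

def unlock(increment_groups, stored_code):
    """Compare the ring-entered code against the stored unlock code."""
    return code_from_rotations(increment_groups) == list(stored_code)
```

In the car-based application, a check like this would be combined with biometric verification before the vehicle is unlocked.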
in particular embodiments, both user biometric information and a code entered by the user may be required to unlock the car, allowing for enhanced security for a car-based application. additionally, the device may include profiles for one or more users, each profile containing vehicle settings (e.g. temperature or seat position). as another example, biometric information of a particular user may be used not only to unlock the device, but also to determine which user profile to load during the car's operation. the proximity of the device to the vehicle may automatically cause the vehicle to implement the vehicle settings of the profile of the user. the device may also be operable for gps navigation (either directly on the device or when paired with and controlling a phone, for example). the device may access and operate in conjunction with a service that provides support for mixed-reality games or massively multi-player reality-based games. this functionality may, for example, include registration, management of user data (e.g. user profiles and game-related data such as levels completed or inventories of supplies), and management of accomplishment lists. the functionality of the device and the service may also include management of connectivity (e.g. concentrator functionality) that handles fragile wireless communication channels and provides a unified api to third party game servers. the device may access and operate in conjunction with a service that allows a user of the device to publish locations, check-ins, or other location-based data that allows various services to access a consistent reservoir of the most current information regarding the position and status of the user. as an example, the user of the device may find friends using similar devices. the service and device together may handle status updates, profile management, application access permissions, blacklists, or user-to-user access permissions. 
the service may be a trusted and centralized touchpoint for private data. by combining access to a unified location service, energy and battery life may, in particular embodiments, be conserved. in particular embodiments, certain functionality tokens may be made available based on the position of the user. an application may, for example, check on the device to see if this token is available and act accordingly. on the server side, apis may allow developers to see use of the tokens or allow for redemption. in particular embodiments, information may be distributed by the device to other users (e.g. a single other user, or in broadcast mode to multiple users). the device may access and operate in conjunction with a service that provides a unified polling interface that allows devices to receive and send polls. the device and service together may manage distribution lists, scoring criteria, and poll availability frames (both temporal and geographic, for example). this service may be exposed on the device and on a server such that third parties may use apis to write applications and receive results back via online apis. in particular embodiments, the device may access and operate in conjunction with a service that provides optimizations for the presentation of text, images, or other information on a circular display of the device. as an example, a web site may be rendered or formatted for display on a computer monitor, but a service may customize the rendering and formatting for a smaller, circular display by emphasizing images and truncating text. the customized rendering and formatting may, for example, be a task delegable among the device and one or more servers or locally-paired devices. this service may also include news or advertising services. fig. 137 illustrates an example computer system 13700 . in particular embodiments, one or more computer systems 13700 perform one or more steps of one or more methods described or illustrated herein. 
in particular embodiments, one or more computer systems 13700 provide functionality described or illustrated herein. in particular embodiments, software running on one or more computer systems 13700 performs one or more steps of one or more methods described or illustrated herein or provides functionality described or illustrated herein. particular embodiments include one or more portions of one or more computer systems 13700 . herein, reference to a computer system may encompass a computing device, and vice versa, where appropriate. moreover, reference to a computer system may encompass one or more computer systems, where appropriate. this disclosure contemplates any suitable number of computer systems 13700 . this disclosure contemplates computer system 13700 taking any suitable physical form. as example and not by way of limitation, computer system 13700 may be an embedded computer system, a system-on-chip (soc), a single-board computer system (sbc) (such as, for example, a computer-on-module (com) or system-on-module (som)), a desktop computer system, a laptop or notebook computer system, an interactive kiosk, a mainframe, a mesh of computer systems, a mobile telephone, a personal digital assistant (pda), a server, a tablet computer system, or a combination of two or more of these. where appropriate, computer system 13700 may include one or more computer systems 13700 ; be unitary or distributed; span multiple locations; span multiple machines; span multiple data centers; or reside in a cloud, which may include one or more cloud components in one or more networks. where appropriate, one or more computer systems 13700 may perform without substantial spatial or temporal limitation one or more steps of one or more methods described or illustrated herein. as an example and not by way of limitation, one or more computer systems 13700 may perform in real time or in batch mode one or more steps of one or more methods described or illustrated herein. 
one or more computer systems 13700 may perform at different times or at different locations one or more steps of one or more methods described or illustrated herein, where appropriate. in particular embodiments, computer system 13700 includes a processor 13702 , memory 13704 , storage 13706 , an input/output (i/o) interface 13708 , a communication interface 13710 , and a bus 13712 . although this disclosure describes and illustrates a particular computer system having a particular number of particular components in a particular arrangement, this disclosure contemplates any suitable computer system having any suitable number of any suitable components in any suitable arrangement. in particular embodiments, processor 13702 includes hardware for executing instructions, such as those making up a computer program. as an example and not by way of limitation, to execute instructions, processor 13702 may retrieve (or fetch) the instructions from an internal register, an internal cache, memory 13704 , or storage 13706 ; decode and execute them; and then write one or more results to an internal register, an internal cache, memory 13704 , or storage 13706 . in particular embodiments, processor 13702 may include one or more internal caches for data, instructions, or addresses. this disclosure contemplates processor 13702 including any suitable number of any suitable internal caches, where appropriate. as an example and not by way of limitation, processor 13702 may include one or more instruction caches, one or more data caches, and one or more translation lookaside buffers (tlbs). instructions in the instruction caches may be copies of instructions in memory 13704 or storage 13706 , and the instruction caches may speed up retrieval of those instructions by processor 13702 . 
data in the data caches may be copies of data in memory 13704 or storage 13706 for instructions executing at processor 13702 to operate on; the results of previous instructions executed at processor 13702 for access by subsequent instructions executing at processor 13702 or for writing to memory 13704 or storage 13706 ; or other suitable data. the data caches may speed up read or write operations by processor 13702 . the tlbs may speed up virtual-address translation for processor 13702 . in particular embodiments, processor 13702 may include one or more internal registers for data, instructions, or addresses. this disclosure contemplates processor 13702 including any suitable number of any suitable internal registers, where appropriate. where appropriate, processor 13702 may include one or more arithmetic logic units (alus); be a multi-core processor; or include one or more processors 13702 . although this disclosure describes and illustrates a particular processor, this disclosure contemplates any suitable processor. in particular embodiments, memory 13704 includes main memory for storing instructions for processor 13702 to execute or data for processor 13702 to operate on. as an example and not by way of limitation, computer system 13700 may load instructions from storage 13706 or another source (such as, for example, another computer system 13700 ) to memory 13704 . processor 13702 may then load the instructions from memory 13704 to an internal register or internal cache. to execute the instructions, processor 13702 may retrieve the instructions from the internal register or internal cache and decode them. during or after execution of the instructions, processor 13702 may write one or more results (which may be intermediate or final results) to the internal register or internal cache. processor 13702 may then write one or more of those results to memory 13704 . 
in particular embodiments, processor 13702 executes only instructions in one or more internal registers or internal caches or in memory 13704 (as opposed to storage 13706 or elsewhere) and operates only on data in one or more internal registers or internal caches or in memory 13704 (as opposed to storage 13706 or elsewhere). one or more memory buses (which may each include an address bus and a data bus) may couple processor 13702 to memory 13704 . bus 13712 may include one or more memory buses, as described below. in particular embodiments, one or more memory management units (mmus) reside between processor 13702 and memory 13704 and facilitate accesses to memory 13704 requested by processor 13702 . in particular embodiments, memory 13704 includes random access memory (ram). this ram may be volatile memory, where appropriate, and this ram may be dynamic ram (dram) or static ram (sram), where appropriate. moreover, where appropriate, this ram may be single-ported or multi-ported ram. this disclosure contemplates any suitable ram. memory 13704 may include one or more memories 13704 , where appropriate. although this disclosure describes and illustrates particular memory, this disclosure contemplates any suitable memory. in particular embodiments, storage 13706 includes mass storage for data or instructions. as an example and not by way of limitation, storage 13706 may include a hard disk drive (hdd), a floppy disk drive, flash memory, an optical disc, a magneto-optical disc, magnetic tape, or a universal serial bus (usb) drive or a combination of two or more of these. storage 13706 may include removable or non-removable (or fixed) media, where appropriate. storage 13706 may be internal or external to computer system 13700 , where appropriate. in particular embodiments, storage 13706 is non-volatile, solid-state memory. in particular embodiments, storage 13706 includes read-only memory (rom). 
where appropriate, this rom may be mask-programmed rom, programmable rom (prom), erasable prom (eprom), electrically erasable prom (eeprom), electrically alterable rom (earom), or flash memory or a combination of two or more of these. this disclosure contemplates mass storage 13706 taking any suitable physical form. storage 13706 may include one or more storage control units facilitating communication between processor 13702 and storage 13706 , where appropriate. where appropriate, storage 13706 may include one or more storages 13706 . although this disclosure describes and illustrates particular storage, this disclosure contemplates any suitable storage. in particular embodiments, i/o interface 13708 includes hardware, software, or both, providing one or more interfaces for communication between computer system 13700 and one or more i/o devices. computer system 13700 may include one or more of these i/o devices, where appropriate. one or more of these i/o devices may enable communication between a person and computer system 13700 . as an example and not by way of limitation, an i/o device may include a keyboard, keypad, microphone, monitor, mouse, printer, scanner, speaker, still camera, stylus, tablet, touch screen, trackball, video camera, another suitable i/o device or a combination of two or more of these. an i/o device may include one or more sensors. this disclosure contemplates any suitable i/o devices and any suitable i/o interfaces 13708 for them. where appropriate, i/o interface 13708 may include one or more device or software drivers enabling processor 13702 to drive one or more of these i/o devices. i/o interface 13708 may include one or more i/o interfaces 13708 , where appropriate. although this disclosure describes and illustrates a particular i/o interface, this disclosure contemplates any suitable i/o interface. 
in particular embodiments, communication interface 13710 includes hardware, software, or both providing one or more interfaces for communication (such as, for example, packet-based communication) between computer system 13700 and one or more other computer systems 13700 or one or more networks. as an example and not by way of limitation, communication interface 13710 may include a network interface controller (nic) or network adapter for communicating with an ethernet or other wire-based network or a wireless nic (wnic) or wireless adapter for communicating with a wireless network, such as a wi-fi network. this disclosure contemplates any suitable network and any suitable communication interface 13710 for it. as an example and not by way of limitation, computer system 13700 may communicate with an ad hoc network, a personal area network (pan), a local area network (lan), a wide area network (wan), a metropolitan area network (man), a body area network (ban), or one or more portions of the internet or a combination of two or more of these. one or more portions of one or more of these networks may be wired or wireless. as an example, computer system 13700 may communicate with a wireless pan (wpan) (such as, for example, a bluetooth wpan), a wi-fi network, a wi-max network, a cellular telephone network (such as, for example, a global system for mobile communications (gsm) network), or other suitable wireless network or a combination of two or more of these. computer system 13700 may include any suitable communication interface 13710 for any of these networks, where appropriate. communication interface 13710 may include one or more communication interfaces 13710 , where appropriate. although this disclosure describes and illustrates a particular communication interface, this disclosure contemplates any suitable communication interface. in particular embodiments, bus 13712 includes hardware, software, or both coupling components of computer system 13700 to each other.
as an example and not by way of limitation, bus 13712 may include an accelerated graphics port (agp) or other graphics bus, an enhanced industry standard architecture (eisa) bus, a front-side bus (fsb), a hypertransport (ht) interconnect, an industry standard architecture (isa) bus, an infiniband interconnect, a low-pin-count (lpc) bus, a memory bus, a micro channel architecture (mca) bus, a peripheral component interconnect (pci) bus, a pci-express (pcie) bus, a serial advanced technology attachment (sata) bus, a video electronics standards association local (vlb) bus, or another suitable bus or a combination of two or more of these. bus 13712 may include one or more buses 13712, where appropriate. although this disclosure describes and illustrates a particular bus, this disclosure contemplates any suitable bus or interconnect. herein, a computer-readable non-transitory storage medium or media may include one or more semiconductor-based or other integrated circuits (ics) (such as, for example, field-programmable gate arrays (fpgas) or application-specific ics (asics)), hard disk drives (hdds), hybrid hard drives (hhds), optical discs, optical disc drives (odds), magneto-optical discs, magneto-optical drives, floppy diskettes, floppy disk drives (fdds), magnetic tapes, solid-state drives (ssds), ram-drives, secure digital cards or drives, any other suitable computer-readable non-transitory storage media, or any suitable combination of two or more of these, where appropriate. a computer-readable non-transitory storage medium may be volatile, non-volatile, or a combination of volatile and non-volatile, where appropriate. in particular embodiments, a display may provide for user input such as entry of text, symbols, or any suitable combination thereof.
for example, text and symbols may include alphanumeric characters, such as letters, words, numerical symbols, punctuation, etc.; logograms, such as chinese characters, japanese characters, etc.; any symbol, character, or combination of symbols or characters used to visually communicate meaning, such as words or grammar of one or more languages, or any suitable combination thereof. the disclosure below may refer to some or all of the examples above as “text,” unless indicated otherwise. in particular embodiments, text may be entered on a relatively small electronic display, such as a display on a mobile or wearable device, such as for example on the wearable electronic device described more fully herein. this disclosure contemplates text entered onto any suitable display including any suitable small display, such as for example keychain-size screens, watch-size screens, digital camera screens, small-scale television screens, small-scale tablet screens, and any other type of digital small-scale devices that include a screen. this disclosure contemplates screens of any suitable shape, such as for example square screens, rectangular screens, circular screens, ellipsoid screens, or any other suitable shape. in particular embodiments, text input on a display may include interaction with the display, such as for example by using a user's finger or a tool such as a stylus. in particular embodiments, text input on a display may include interaction with a non-display portion of the device that the display is part of, such as for example with a rotatable element of an electronic wearable device, as described more fully herein. in particular embodiments, text input may include interaction with or activation of any suitable element of a device, such as for example a microphone, an accelerometer, a gyroscope, an optical sensor, or any other suitable element.
this disclosure contemplates text input on any suitable display of any suitable device using any suitable means, including interaction with or activation of any suitable element of the device. in particular embodiments, text input may include input by one or both of a portion of a user's hand(s), such as one or more fingers, or an additional tool held by the user, such as a pen or stylus. in particular embodiments, text input may use predictive methods to suggest or select one or more characters, words, or combination thereof based on context, such as on characters a user has already entered or on content displayed on the display. in particular embodiments, text can be input on top of existing visual content, thus enabling at least part of a display, such as a small display, to provide visual feedback while at the same time accepting input from the user. as described more fully herein, text input may be used to create special characters, numbers, symbols, and spaces or line breaks; delete characters; or switch between letters, such as for example between upper case and lower case letters. fig. 138a illustrates an example device with an example circular display 13802 that contains a display portion for inputting text, a portion for displaying inputted text, and a portion for displaying text available for input. in the example of fig. 138a, available text includes a character set 13804 arranged in a circular fashion around the outer edge of display 13802. while character set 13804 in the example of fig. 138a includes capitalized letters of the english alphabet, this disclosure contemplates that character set 13804 may be any suitable text, including characters, letters, numbers, symbols, or any combination thereof. figs. 139d-f display examples of text available for input that may be displayed on a display, such as on the outer portion of circular display 13904 of fig. 139e. as illustrated in fig.
139a and described more fully herein, a display may include a rotatable element encircling the display, such as for example rotatable element 13906, which may be used to facilitate text input. display 13802 of fig. 138a includes displayed text, e.g., “hello. it was.” a cursor appears after the “ll” of “hello.” the cursor may be used to indicate the current text interaction location, such as to input text, delete text, etc. in particular embodiments, text input may include interacting with display 13802 to generate a space. in particular embodiments, a swipe gesture, a tap gesture, or any suitable combination thereof may be used to generate a space. for example, a user may place one or more fingers on touch position 13806 of fig. 138b and swipe left to generate a space. a space may be generated at the location of the cursor or at the location of text nearest touch position 13806. as another example, a user may place one or more fingers on touch position 13808 of fig. 138c and swipe to the right to generate a space. this disclosure contemplates that a user may swipe in any suitable direction, such as for example up or down, to generate a space. in particular embodiments, a user may input a space by tapping on display 13802. for example, a user may tap on or near touch position 13810 of fig. 138d with one finger. a user may tap any suitable number of times, such as once or twice, on or near touch position 13810 to generate a space. in particular embodiments, a user may touch and hold a touch position for a predetermined amount of time to input a space. in particular embodiments, a user may input a space by touching two positions on display 13802, such as for example on positions 13812 of fig. 138e, at substantially the same time. a user may touch on or near the positions any suitable number of times, such as once or twice, to input a space. in particular embodiments, as illustrated in figs.
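the idea that several distinct gestures (swipe left, swipe right, tap, two-finger touch) may all map to the same text action can be sketched as a small dispatch table. this is only an illustrative sketch, not the disclosed implementation; the gesture names and the buffer model are assumptions for the example.

```python
def make_dispatcher():
    """Tiny text buffer plus gesture handler; gesture names are
    hypothetical labels for the alternatives of figs. 138b-138e."""
    buffer = []

    def insert_char(c):
        buffer.append(c)

    def insert_space():
        buffer.append(" ")

    def delete_last():
        if buffer:
            buffer.pop()

    # several distinct gestures may all generate a space, mirroring
    # the alternatives the disclosure describes for figs. 138b-138e.
    actions = {
        "swipe_left": insert_space,
        "swipe_right": insert_space,
        "double_tap": insert_space,
        "two_finger_tap": insert_space,
        "swipe_back": delete_last,  # the same machinery can delete
    }

    def handle(gesture):
        action = actions.get(gesture)
        if action:
            action()

    return buffer, insert_char, handle
```

a device could register any suitable gesture under any action, which is the point of the table-driven design: the mapping, not the handler, encodes the chosen input scheme.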
138d and 138e, touch positions to input a space may be on or near displayed text. however, touch positions may be on any suitable portion of a display. while this disclosure describes example methods of text input to input a space, this disclosure contemplates that those methods may be used to input any suitable character. moreover, this disclosure contemplates that any of those methods may be used to delete one or more characters, words, or portions of words. for example, any of the methods may be used to delete the last entered character or word, or the character to the left or right of a cursor. in particular embodiments, the text input to delete a character or word may occur on a portion of the displayed text, such as for example by swiping left from touch position 13806 of fig. 138b. in particular embodiments, a rotatable element may be used to select text to add in text input or deletion. for example, a device may include a rotatable ring encircling a display, such as ring 13906 illustrated in fig. 139a. rotating the ring in any suitable direction may select one or more characters. for example, when the cursor is positioned as displayed in fig. 139a, a user may rotate ring 13906 counterclockwise to select the word “hello,” as illustrated in fig. 139b. the user may then delete the selected text using any suitable method, such as for example swiping right from touch position 13908 of fig. 139c. any of the methods described above may be used to input symbols, such as for example punctuation marks such as a period. the methods may also be used, as appropriate, to transition a display between types of text available for input. for example, a user may tap or swipe on a display or may rotate a rotatable element to transition a display among the displays of figs. 139d-f.
as with inputting or deleting any type of text, this disclosure contemplates that transitioning between text available for display may be accomplished by any suitable gesture, such as for example by one or more taps, swipes, pinches (e.g., pinch-in or pinch-out with two or more fingers), or any suitable combination thereof. in particular embodiments, a user may select a cursor location by any suitable gesture, such as for example by tapping a display on the location of displayed text that a user wishes the cursor to appear. in particular embodiments, as a user inputs text, a display may display one or more suggested characters, words, or portions of words. for example, display 14002 of fig. 140a illustrates suggested text in ribbon 14008. a user may toggle between suggested text, such as the words displayed in ribbon 14008, by any suitable method, such as for example by rotating a rotatable element 14006; by swiping left, right, up, or down on the ribbon (or any other suitable portion of the display); by executing any suitable gesture or combination of gestures on the display; or by any suitable combination thereof. for example, a user may use a rotatable element 14006 to navigate among displayed suggestions and may use touch functionality to select one or more characters 14004 for text entry. in particular embodiments, ribbon 14008 may include a highlighted portion indicating the suggested text that will be input if a user selects the suggestion, such as for example by performing a gesture on the highlighted portion. in particular embodiments, a user may swipe right to left or left to right to switch between different suggested text options. in particular embodiments, a user may swipe left to right to add a space character, swipe right to left to delete the last-entered character, and swipe from top to bottom or bottom to top to switch between different suggested text options.
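a suggestion ribbon like ribbon 14008 needs some way to rank candidate words for a typed prefix. one minimal sketch, assuming a hypothetical frequency-annotated lexicon (the disclosure does not specify a ranking method):

```python
def suggest(prefix, lexicon, limit=3):
    """Return up to `limit` suggested words for a typed prefix.

    `lexicon` is a hypothetical dict mapping word -> frequency score;
    more frequent words are suggested first, as a stand-in for
    whatever predictive method a device actually uses.
    """
    matches = sorted(
        (w for w in lexicon if w.startswith(prefix)),
        key=lambda w: -lexicon[w],
    )
    return matches[:limit]
```

the returned list corresponds to the ribbon contents; rotating the element or swiping the ribbon would simply index further into it.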
in particular embodiments, a user may swipe right to left to add a space character; swipe left to right to delete the last-entered character; and execute a two-finger swipe from bottom to top or top to bottom to switch between different suggested text options. in particular embodiments, a user may swipe right to left or left to right to switch between different suggested text options, may rotate a rotatable element to the right to add a space character, and may rotate a rotatable element to the left to delete a last-entered character. in particular embodiments, text input may include handwritten input by one or both of a portion of a user's hand(s), such as one or more fingers, or an additional tool held by the user, such as a pen or stylus. in particular embodiments, handwritten input may use predictive methods to suggest or select one or more characters, words, or combination thereof based on context, such as on characters a user has already entered or on content displayed on the display. in particular embodiments, handwritten text can be input on top of existing visual content, thus enabling at least part of a display, such as a small display, to provide visual feedback while at the same time accepting input from the user. as described more fully herein, handwritten text input may be used to create special characters, spaces, or line breaks; delete characters; or switch between letters, numbers, or symbols, such as for example between upper case and lower case letters. in particular embodiments, handwritten input may include gestures captured by a sensor of a device, such as for example by a touch-sensitive display, by an optical sensor, by a motion sensor, or any other suitable sensor or combination of sensors. fig. 141a illustrates an example display 14102 that contains a handwriting input portion 14106 and an inputted text portion 14104.
for this embodiment and for other embodiments described herein, this disclosure contemplates that portions 14104 and 14106 may at least in part overlap. in particular embodiments, portion 14106 may include a portion of display 14102 that displays content, thus utilizing more screen space. for example, portion 14106 may be on top of a keyboard or other display of characters that can be input by selection of the characters. thus, a user can choose to select text or draw text to input text. this disclosure contemplates that portion 14104 and portion 14106 may take any suitable shape and size, which may be based on the shape and size of a display. for example, portions 14104 and 14106 may be portions of a circle when presented on a circular display. in the example of fig. 141a, text may be input on the screen when the user's finger draws each letter on the display surface of the device. additional features may help the user to more quickly input text. display portion 14104 at the top part of the display 14102 shows the text that is being or has been input. the example of fig. 141a illustrates “m” in input portion 14104 as the user is completing the “m” drawn on portion 14106. this disclosure contemplates that a user may input text using any suitable handwriting input. for example, while fig. 141a illustrates a user drawing “m” to generate the character “m,” a user may, for example, swipe right on portion 14106 to generate a space. in particular embodiments, text prediction may be used to assist text input. for example, fig. 141b illustrates suggested text in portion 14108 of the display. the words suggested in portion 14108 are based on the text input in portion 14106 by the user. a user may navigate among the suggested text in portion 14108, such as for example by swiping on portion 14108, to access additional suggestions.
this disclosure contemplates that portion 14108 may take any suitable size and shape, which may depend on the size and shape of the physical display on which it is displayed. in particular embodiments, one or more visual guides, such as a pattern, may be displayed to facilitate a user's input of handwritten text. the visual guides may assist a user to execute the gestures necessary to input text. in particular embodiments, the visual guide may be always displayed, displayed at a user's request, displayed when the user commits one or more errors (e.g., when the user frequently deletes just-entered handwritten characters), displayed when the user first uses handwriting functionality, or displayed at any other suitable time. in particular embodiments, each character can have several representations within a pattern, so that the user can use more than one approach to inputting text. fig. 142a illustrates an example display 14202 with a visual guide in the form of grid 14204. display 14202 may include a portion for displaying inputted text, such as text 14206. using the user's finger(s) or a tool such as a pen, the user can trace one or more lines of grid 14204 to input characters. fig. 142b illustrates example traces that can be used to input upper and lower case a-k in the english alphabet. the display of a grid, such as the shape of its grid lines, their relative alignment, and the visual patterns they make, may vary depending on the device, screen size, and the purpose of text input. in particular embodiments, input for numbers or symbols may be either integrated in the same grid or may have a specific grid for themselves. for example, when only numbers are a valid input, a simpler grid than that displayed in fig. 142a may be presented. this disclosure contemplates any suitable shapes and patterns of grids for text input. figs. 143a-b and 144a-b illustrate example displays 14302 and 14402 with example grids 14304 and 14404 for inputting text 14306 and 14406, respectively.
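recognizing a grid trace reduces to matching the ordered sequence of grid segments the finger crosses against a lookup table, with multiple entries per character since each character can have several representations. the segment names below are hypothetical — the actual grid of fig. 142a is device-specific — so this is a sketch of the matching step only, not the disclosed recognizer.

```python
# Hypothetical segment ids for a grid like fig. 142a.  A real table
# would cover the full character set and all of its trace variants.
STROKE_TABLE = {
    ("left_diagonal", "right_diagonal", "crossbar"): "A",
    ("vertical", "top_bowl", "bottom_bowl"): "B",
    ("open_bowl",): "C",
    # a character can have several representations within the pattern:
    ("crossbar", "left_diagonal", "right_diagonal"): "A",
}

def recognize(traced_segments):
    """Map the ordered segments a finger traced to a character,
    or None when the trace matches nothing in the table."""
    return STROKE_TABLE.get(tuple(traced_segments))
```

because recognition is table-driven, swapping in a numbers-only grid just means swapping in a smaller table.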
in particular embodiments, a user may write text on a surface other than the display of a device, and the written text may be captured by the device and displayed on the device's display. the device may capture written text by any suitable method, such as by an optical sensor or by a wired or wirelessly connected pen writing on digital paper or on another device. in particular embodiments, text may be directly transferred to the display, and further related actions like correction, deletion, or direct usage of the text may be available for the user on the display. in particular embodiments, those options may also be available on the connected pen, so that the user can directly use or send text without touching the screen or the device itself. for example, the user may use or send text by physical input methods on the pen using, e.g., buttons; by drawing a specific symbol after the text; by specific gestures with the pen, like tapping or double tapping the surface; or any suitable combination thereof. in particular embodiments, a user may input text by using gestures that are not constrained to the display of a device. for example, in particular embodiments a gesture may be a handwritten symbol, such as a symbol representing a character, made in front of an optical sensor of the device. this disclosure contemplates that such a gesture may be captured by any suitable optical sensor, such as for example any of the sensors described in connection with a wearable device described herein. in particular embodiments, a gesture may be made by an arm or hand that is used to wear or hold the device. this disclosure contemplates any suitable gesture captured by any suitable sensor, including the gestures and sensors described in connection with a wearable device described herein. fig. 145a illustrates an example of gesture input used to enter text on display 14504 of wearable device 14502.
a gesture, such as a writing or tracing of the characters 14512 made by the user's finger 14514, may be captured by optical sensor 14508 and input as text 14506 on display 14504. a gesture may be made in the air or on any suitable surface. in particular embodiments, device 14502 may include one or more sensors, such as an accelerometer, that capture motion of the user's arm 14516 about which wearable device 14502 is worn. for example, arm 14516 may trace characters 14512, and that motion may be detected by device 14502 and displayed on display 14504. as illustrated in fig. 145b, text may be entered by a gesture that does not necessarily correspond to a visual representation of the text. for example, a user's fingers, such as the thumb and middle finger, of arm 14518 about which wearable device 14502 is worn may come together to input one or more text characters onto the display of device 14502. such gestures may be captured by any suitable sensors, such as for example an optical sensor detecting the motion of the finger(s) or a microphone to detect vibrations generated by the thumb and middle finger meeting. gestures may be performed in air, on a surface, or both. when performed independently of a surface, gestures may be based on sensing where the user's fingers and hand touch. every letter may correspond to gestures made by specific fingers or specific portions of fingers. for example, a thumb touching a middle finger in the front may correspond to an “a”, a thumb touching the middle of the middle finger may correspond to a “c”, and a thumb touching the front of the index finger may correspond to an “e”. in particular embodiments, a thumb in combination with four fingers may map an entire set of characters, such as for example the english alphabet. in particular embodiments, text entry may depend at least in part on the number of times two or more fingers contact each other.
for example, touching the index finger with a thumb three times may correspond to a “c”. this disclosure contemplates any suitable gestures corresponding to any suitable text in any suitable written language or communication scheme. in particular embodiments, gestures may identify the type of characters (e.g., upper case, lower case, numbers, etc.) a user wishes to input. in particular embodiments, gestures may be made on a surface to indicate input text. for example, specific characters or words may be input based at least in part on the length or number of taps on a surface, and/or by the number of fingers used to tap the surface. in particular embodiments, a user may program specific characters, combinations of characters, or words that correspond to specific gestures. while this disclosure describes specific gestures made to generate specific characters, this disclosure contemplates that any such gestures or combinations of gestures (both hand-only gestures and gestures involving a surface) may be used to generate words, symbols, or portions of words and symbols as appropriate. in particular embodiments, a full set of characters may be displayed on a display. a user may be able to access each displayed character directly by any suitable method, such as swiping, tapping, physical interactions, using a rotatable element, any of the selection methods described herein, or any other suitable method. in particular embodiments, a spatial distribution of characters may be based on physical parameters of a display, as well as on letter frequency, letter combination frequencies, letter adjacencies, and the total number of characters in a set of characters. figs. 146a-c illustrate example character layouts for small displays displaying the english alphabet, and in fig. 146c, additional common punctuation marks. for example, display 14602 of fig.
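the thumb-to-finger scheme above is essentially a lookup keyed on which finger is touched, where on the finger, and how many times. the sketch below fills in only the pairings named in the text; the zone chosen for the three-tap “c” variant and the key structure are assumptions, since the disclosure leaves those open.

```python
# Illustrative mapping from thumb contacts to characters, keyed on
# (finger, zone_on_finger, tap_count).  Only the examples given in the
# text are filled in; a full alphabet would be assigned similarly.
CONTACT_MAP = {
    ("middle", "front", 1): "a",
    ("middle", "middle", 1): "c",
    ("index", "front", 1): "e",
    ("index", "front", 3): "c",  # tap-count variant; zone is assumed
}

def char_for_contact(finger, zone, taps=1):
    """Return the character for a sensed thumb contact, or None when
    the contact is not assigned in the map."""
    return CONTACT_MAP.get((finger, zone, taps))
```

a user-programmable scheme, as contemplated above, would simply write new entries into the map.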
146a may have a circular shape and character set 14604 may be displayed in an outer ring of display 14602; display 14606 may take a square or rectangular shape and character set 14608 may be displayed in an outer perimeter of display 14606; and display 14610 may take a square or rectangular shape while character set 14612 occupies a square or rectangular portion of display 14610, such as a lower portion as depicted in fig. 146c. this disclosure contemplates any suitable display shapes and character set layouts. the exact layout of the display of a character set may vary depending on the specific device, screen sizes, and applications. in particular embodiments, a character (or other suitable text portion) displayed on a display may be highlighted when selected by a user. for example, figs. 146d-e illustrate example highlighting 14614 and 14616 of characters selected by a user's finger. in particular embodiments, highlighting 14614 or 14616 may move as the user's finger moves around the screen without being removed from the screen. once the finger is removed from the screen, the character that was highlighted at the time of removal gets input. in particular embodiments, when no character was highlighted, no character is added. visual highlighting can additionally be used to orient the user to which character is more likely to appear next, or to completely remove characters that cannot create a word with the characters already input. mistakes of not touching in an exact location can be corrected by word prediction and recognition. while the examples of figs. 146d-e illustrate highlighting based on a particular example method of selection, this disclosure contemplates highlighting when any suitable method of selection is used.
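placing a character set evenly around the outer ring of a circular display, as in fig. 146a, is a short geometry exercise. this sketch uses mathematical (y-up) coordinates and starts at the top proceeding clockwise; both choices are assumptions, since the disclosure contemplates any suitable layout.

```python
import math

def circular_layout(characters, radius, center=(0.0, 0.0)):
    """Place characters evenly around a ring of the given radius.

    Returns {char: (x, y)} in y-up coordinates, starting at the top
    of the ring and proceeding clockwise.
    """
    cx, cy = center
    n = len(characters)
    layout = {}
    for i, ch in enumerate(characters):
        # start at the top (pi/2) and step clockwise
        angle = math.pi / 2 - 2 * math.pi * i / n
        layout[ch] = (cx + radius * math.cos(angle),
                      cy + radius * math.sin(angle))
    return layout
```

the same function handles a counterclockwise variant (figs. 148a-c) by negating the step, and a deliberate gap between the terminal characters could be modeled by laying out n+1 slots and leaving one empty.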
the ordering of the letters in the different keyboard formats is exemplary, and can be varied based on any suitable consideration, such as one or more of: alphabetical order; similarity to a regular (qwerty) keyboard layout; placing letters that occur frequently after each other not next to each other but as far away as possible, e.g., to minimize the possibility of errors, to provide better word prediction, and to make them visible to the user while typing on a part of the screen that is not covered with the finger; placing letters that occur frequently after each other next to each other, e.g., to allow faster typing; placing the most frequent letters more to the right/bottom, e.g., to facilitate a user's view of the rest of the displayed characters when selecting characters; or placing frequent letters away from the center of the screen, where many letters may be closer together than on an edge of the screen, e.g., to allow for better precision while typing. for example, figs. 149a-c illustrate specific layouts 14904, 14906, and 14908 on a display 14902 of a device. in particular embodiments, the appearance of one or more characters may be altered based on the context of a device, such as for example on the characters entered or being entered on a display of a device. in particular embodiments, character alteration may be used to draw a user's attention to characters that are predicted or likely to be selected next. in particular embodiments, character alteration may include removing characters from a display, changing the size of a character on a display, changing the opacity of a character on a display (e.g., making the character more transparent), moving characters around on a display (e.g., to make certain characters appear more prominent), or any other suitable method. figs. 147a-c illustrate an example of altering characters based on the characters being selected by a user on display 14702. in fig.
147a, a user is selecting characters from full character set 14704 displayed on display 14702. a user may select character 14706, which is then displayed on display 14702 in fig. 147c. while a full character set is displayed in fig. 147a, in fig. 147b only certain characters are displayed based on the display of “h” on display 14702. for example, characters highly unlikely to be selected after “h” may be completely removed, while characters that are most likely to be selected may be emphasized, such as for example by bolding the vowels as shown in character set 14708. fig. 147c illustrates another example character set 14112 that shows removal and emphasis of characters based on a user's selection of “e” in fig. 147a. in particular embodiments, once one or several characters are typed, the characters that are less likely to be selected (e.g., those that wouldn't form a word in the selected language) become invisible, and the virtual touch areas of the remaining characters can be increased, e.g., to assist the user in inputting characters more quickly and with less precision. while this disclosure describes example layouts of character sets, this disclosure contemplates any suitable layouts. for example, fig. 147a illustrates alphabetical characters arranged in a clockwise direction. fig. 148a illustrates the first and last characters as would be arranged in a counterclockwise direction (additional characters are not shown in figs. 148a-c). fig. 148c illustrates a counterclockwise arrangement that includes a space 14808 between the first and last characters. space 14808 is larger than the space that separates characters “z” and “a” in fig. 147a. fig. 148b illustrates a clockwise arrangement that also includes a relatively larger space between the terminal characters than is illustrated in fig. 147a. in particular embodiments, a character layout may be presented in a hierarchical or level-based structure. figs.
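the filtering step described above — hiding characters that cannot extend the typed prefix into a valid word — amounts to computing the set of legal next characters against a lexicon. a minimal sketch, assuming a simple set-of-words lexicon rather than whatever dictionary structure a real device would use:

```python
def next_characters(prefix, lexicon):
    """Return the set of characters that can follow `prefix` and
    still form a word in `lexicon`.  Characters outside this set
    would be hidden or de-emphasized on the display."""
    return {
        word[len(prefix)]
        for word in lexicon
        if word.startswith(prefix) and len(word) > len(prefix)
    }
```

the complement of this set is what gets removed, and the freed screen area is what allows the remaining touch targets to grow.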
150a-d illustrate example hierarchical layouts. fig. 150a illustrates a display 15002 that includes a character layout having a first level 15004 and a second level 15006. in particular embodiments, level 15004 may include the characters a user most frequently selects. in particular embodiments, tapping on a character in level 15004 may result in inputting that character. in particular embodiments, tapping on a row beneath level 15004 may input the character in level 15004 nearest to the touch point. in particular embodiments, a user may select characters in level 15006 by swiping down from level 15004 to the desired character in level 15006. in particular embodiments, a layered keyboard may condense into a smaller keyboard, e.g., a keyboard having the most frequently selected characters. condensation may be based on context, as described more fully herein, or on user commands, and may facilitate a user's view of content on the non-character portion of the display (especially on a relatively small screen). figs. 150b-d illustrate condensed versions 15010, 15012, and 15014 of the keyboard displayed in fig. 150a. as illustrated in fig. 150b, in particular embodiments, once a user has swiped from a first level to a second level, a user may swipe horizontally within the second level to select a character from that level. in particular embodiments, a user may input characters by continuously swiping an object, such as one of the user's fingers, across characters displayed on a display. thus, the user can select several characters, possibly forming a word or multiple words, with one gesture. mistakes of not touching in an exact location can be corrected by word prediction and recognition. in particular embodiments, when the user removes the finger from the display, the swiped characters get added to the display. in particular embodiments, a space character may be automatically added after input text.
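resolving a touch on a layered layout like fig. 150a reduces to a row/column hit test over the level rows. the grid geometry here (uniform row height and key width, top row as level 1) is an assumption made for illustration; an actual layout could size keys however it likes.

```python
def resolve_touch(x, y, level_rows, row_height, key_width):
    """Resolve a touch point to a character in a layered layout.

    `level_rows` is a list of rows (top row = level 1), each a string
    of characters.  A tap lands in whichever row/column cell contains
    (x, y); a swipe down simply ends in a lower row (a later level).
    Returns None for touches outside the layout."""
    row = int(y // row_height)
    col = int(x // key_width)
    if 0 <= row < len(level_rows) and 0 <= col < len(level_rows[row]):
        return level_rows[row][col]
    return None
```

condensing the keyboard, as in figs. 150b-d, would just mean passing a shorter `level_rows` list containing only the most frequently selected characters.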
in particular embodiments, any suitable input, such as a gesture (like a horizontal or vertical swipe or a double tap), can change a space character into a period, a comma, or any other suitable character. this disclosure contemplates any suitable arrangement or display of characters, including the specific examples discussed more fully herein, and that any suitable text alteration, such as addition, removal, highlighting, or rearrangement of characters, may be used in connection with swiping to select characters to display on a display. for example, characters may be removed or emphasized as discussed in the examples of figs. 147a-c as a user swipes characters. in particular embodiments, swiping may begin from a preset place on a display, such as for example from near the center of a display that has a character set arranged around the edge of the display, as shown in figs. 146a-b. in particular embodiments, swiping may facilitate a user's entry of commonly entered strings of characters by equating each string with the swiping gesture that creates the string. figs. 151a-d illustrate example character set arrangements and example swiping gestures that create an example string of characters on that display. fig. 151a illustrates a rectangular display 15102 that includes a rectangular character set 15104. a user's finger 15108 may create swiping gesture 15106 across character set 15104 to create the character string “apple,” as illustrated in fig. 151a. in fig. 151b, a user may start from the illustrated finger position and create gesture 15114 across character set 15112 to create the character string “world” on display 15110. in fig. 151c, a user may create gesture 15122 by swiping across character set 15118 to create the character string “upper” on display 15116. in fig. 151d, a user may create gesture 15128 by swiping across character set 15124 to create the character string “hea” on display 15126.
in particular embodiments, a user may have to return to a specific area of a display, such as the center of the display, to enter a character after a first character has been swiped. for example, in fig. 151d, after a user swipes the "h" another character is not entered until the user passes through the center, after which an "e" is entered, and so on. in particular embodiments, selection may occur by swiping through a character, as illustrated in fig. 151d. in particular embodiments, selection may occur by encircling a character in a swiping motion, as illustrated in fig. 151c. in particular embodiments, text may be added by one continuous drag of the user's finger on the screen. the displayed character set moves according to the finger movement in a continuous animation, and with every chosen character a new set of characters to add appears. the release of the finger may pause the animation, thereby giving a user access to other functionalities. the paused typing can be continued once the screen is touched again. the space between two characters or some special characters are also available for selection within the set of characters. fig. 152a illustrates an example text entry portion 15204. a user may select characters from portion 15204 by swiping on display 15202 near or onto portion 15204 from, e.g., touch point 15206. fig. 152b illustrates an example of inputting text. as the user's finger nears or comes into contact with portion 15204, columns 15208 with characters for selection may appear. in particular embodiments, some or all of columns 15208 may change as a user selects characters, and some may display characters that are already selected. for example, a user may have selected "h" and "e", as represented by the two left-most columns 15208. as illustrated in fig. 152c, a user may select from characters in a column by swiping vertically within the column, such as in column 15208 of fig. 152c.
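the return-to-center rule described for fig. 151d can be modeled as a small state machine: a character registers only while the gesture is "armed," and the gesture re-arms each time the path re-enters the center region. the following is a sketch under assumed geometry; the layout, radii, and sample points are illustrative only:

```python
def center_gated(points, layout, center=(0.0, 0.0), radius=2.0, hit=1.5):
    """register a character only while 'armed'; re-arm whenever the
    path re-enters the center region (return-to-center scheme)."""
    cx, cy = center
    armed, out = True, []
    for x, y in points:
        if (x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2:
            armed = True                       # back in the center: re-arm
        elif armed:
            for ch, (px, py) in layout.items():
                if (x - px) ** 2 + (y - py) ** 2 <= hit ** 2:
                    out.append(ch)             # select, then wait for re-arm
                    armed = False
                    break
    return "".join(out)

# assumed two-character layout on either side of the center
LAYOUT = {"h": (10.0, 0.0), "e": (-10.0, 0.0)}
```

with this gating, touching "h" and then "e" without returning to the center yields only "h"; passing through the center in between yields "he".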
in particular embodiments, while keeping a finger in contact with a display, a user may swipe back through previously selected characters to delete those characters. this disclosure contemplates that text selection area 15204 may take any suitable shape in any suitable portion of display 15202. for example, the portion may be oriented horizontally, and the corresponding animations may then appear as rows of characters for a user's selection. in particular embodiments, text may be input by one continuous drag of the user's finger on the screen. for example, a finger may be placed in a predetermined location, such as the center at the bottom of the screen as indicated by touch point 15310 of fig. 153a. the user may select characters by making slight movements toward character set 15306, which draws rows 15308 near touch point 15310. input text may be displayed on display portion 15304. for example, in fig. 153a a user has drawn "h" and then "e" to touch point 15310. a user may both select characters and draw specific characters near touch point 15310 by moving in the direction of the characters. for example, in fig. 153b a user has selected "f" by moving towards that character from touch point 15310. thus "hf" is displayed on portion 15304. in particular embodiments, moving backwards from the touch point (i.e., away from rows 15308) deletes previously entered characters. figs. 154a-b illustrate the above-described embodiment for character set 15404, which is displayed on the perimeter of a rectangular display. a user selects characters by placing a finger on the center portion 15402 of the display, e.g., at touch point 15408 in fig. 154b, and moving towards the desired character(s). fig. 154b illustrates character set 15410 drawn near touch point 15408. in particular embodiments, a special character, such as square 15406, may enter a space.
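selecting a character by moving from the touch point in the character's direction, as in figs. 153-154, amounts to comparing the drag vector's angle with each character's angular position around the touch point. a sketch with an assumed four-character radial layout (the angles are invented for illustration):

```python
import math

def char_from_drag(dx, dy, radial_layout):
    """pick the character whose layout angle is closest to the drag
    direction (dx, dy) from the touch point."""
    ang = math.atan2(dy, dx)

    def angdiff(a):
        d = abs(a - ang) % (2 * math.pi)
        return min(d, 2 * math.pi - d)

    return min(radial_layout, key=lambda c: angdiff(radial_layout[c]))

# assumed radial layout: character -> angle (radians) around the touch point
RADIAL = {"a": 0.0, "b": math.pi / 2, "c": math.pi, "d": -math.pi / 2}
```

a full radial layout would space all characters around the perimeter, but the angular comparison is the same.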
in particular embodiments, a user may delete previously entered characters by performing a gesture on portion 15402, e.g., a left or right swipe. figs. 154c-d illustrate similar functionality for a circular display having a circular character set 15414 and a circular center portion 15412. a user may select characters by moving from, e.g., touch point 15416 to draw characters near touch point 15416, as displayed by character set 15418 of fig. 154d. in particular embodiments, characters may be rearranged as a user selects characters for input, such as for example by selecting characters during a continuous swipe. figs. 155a-d illustrate examples of character rearrangement while swiping. fig. 155a illustrates a display having an input text display area 15502 and an initial character set 15504. the initial character set may take any suitable configuration, such as for example having vowels appear towards the center of the character set. in fig. 155b a user has selected character "a" at touch point 15506, and character set 15508 is rearranged relative to character set 15504. for example, letters more likely to be input after "a" may be moved closer to touch point 15506. figs. 155c-d illustrate additional rearrangements resulting in character sets 15510 and 15512, respectively, as a user inputs text by swiping. as with other examples of swiping to select characters described herein, a user may select a character during a swiping gesture by pausing on the character. thus, the user can select characters that are not necessarily next to the user's current touch point simply by passing through intervening characters without pausing on them. in particular embodiments, specific gestures, such as a swipe left or swipe right, may result in specific characters, such as a space, or in specific functions, such as deleting the previous character. in particular embodiments, a small display screen may result in a limited set of buttons or input fields.
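the likelihood-driven rearrangement of figs. 155a-d can be approximated by reordering the remaining characters by their bigram score given the last selected character. the scores below are toy values, not corpus statistics:

```python
# toy bigram scores (a real engine would use corpus statistics)
BIGRAM = {"a": {"n": 0.9, "t": 0.8, "p": 0.5, "x": 0.1}}

def rearrange(last_char, charset, bigram=BIGRAM):
    """order charset so likely successors of last_char come first;
    unknown pairs score 0 and keep their relative order."""
    scores = bigram.get(last_char, {})
    return sorted(charset, key=lambda c: -scores.get(c, 0.0))

print(rearrange("a", ["x", "p", "t", "n"]))  # likely successors of "a" first
```

the layout engine would then place the highest-scoring characters closest to the current touch point.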
to compensate, a display may include a character set grouped into several touch areas, e.g., to avoid the difficulty of hitting individual small virtual buttons. in particular embodiments, different layouts and groupings may be based on physical parameters of the interaction with the screen as well as on letter frequency, letter combination frequencies, letter adjacencies, and/or context. in particular embodiments, a user can enter text by tapping the group that contains the character(s) the user wishes to input. a word prediction engine may then calculate which character string (for example, a word) is most likely intended from the tapped combination. in particular embodiments, additional suggested words may be presented with or accessible from the chosen word, and the user may select from the additional words by, e.g., swiping. in particular embodiments, a user can select an individual character from a group by pressing the group a predetermined number of times within a predetermined length of time, each predetermined number corresponding to a particular character. in particular embodiments, a user can select an individual character from a group by pressing and holding on the group until the desired letter appears or is input. figs. 156a-h illustrate particular groupings 15604 on a rectangular screen. selected text may be shown in display portion 15602. as illustrated by figs. 156a-h, this disclosure contemplates any suitable groups having any suitable number, shape, location, distribution, area, and kinds or types of characters. fig. 156i illustrates a circular display with groups 15610 and a text display portion 15608. figs. 156j-n illustrate a circular display having groups and text input areas of various types. for example, fig. 156j illustrates groups 15612 arranged in a lower semicircle of the display, while input text may be displayed on portion 15614. the groups may include a semicircular group 15616 containing, for example, special characters.
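the word-prediction step described above, which resolves a sequence of group taps into the most likely word, can be sketched as a dictionary filter; group contents and word frequencies below are assumed for illustration:

```python
def predict(tapped_groups, vocabulary):
    """return the vocabulary words consistent with the tapped group
    sequence, most frequent first (word-prediction-engine sketch)."""
    def fits(word):
        return len(word) == len(tapped_groups) and all(
            ch in grp for ch, grp in zip(word, tapped_groups))
    return sorted((w for w in vocabulary if fits(w)),
                  key=lambda w: -vocabulary[w])

# assumed vocabulary: word -> frequency
VOCAB = {"he": 5, "if": 3, "go": 2}

print(predict([set("ghi"), set("def")], VOCAB))  # candidates for two taps
```

the top candidate would be entered by default, with the remaining candidates offered as the additional suggested words mentioned above.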
fig. 156l illustrates groups 15618 arranged on a larger portion of a circular display, large enough so that group 15620 constitutes a circle. fig. 156n illustrates groups 15622 arranged in quarters on the display, with text display area 15624 arranged in a horizontal portion in or near the middle of the display. fig. 156o illustrates a rectangular display having a text display portion 15626 and a character set arranged into groups 15628. in particular embodiments, groups may be indicated by visual indicators other than lines delineating the groups. for example, fig. 156p illustrates groups by visible touch points, such as for example touch points 15630 and 15632. each group consists of the letters nearest the touch point. for example, touch point 15630 contains the letters "q", "w", and "a." in particular embodiments, groups may be identified by visual cues such as colors. in particular embodiments, when a user touches a group or a character within the group, the visual indicator of that group may be highlighted, alerting the user to the group that the user has selected. in particular embodiments, a user may select a character by performing a gesture on a group that contains the character. fig. 157a illustrates a rectangular display having a text display portion 15702 and groups 15706. the user may select a character by touching a group, such as for example on touch point 15708, with finger 15710. the particular character selected from the group may be determined by the length and/or direction of a swipe from the touch point. for example, in fig. 157a a user may swipe farther to the right to select a "g" than to select an "f." in particular embodiments, characters within a group may be displayed on a display to facilitate the user's selections. for example, fig. 157b illustrates characters within group 15717 displayed in a vertical row when a user touches touch point 15716.
swiping to the displayed position may result in selection of the character displayed at that position. fig. 157c illustrates a horizontal layout of character set 15720 with an enlarged preview of characters 15722. a user may select a character by touching the character, for example at touch point 15724, or by touching the screen and swiping horizontally or vertically to select the desired character. the selected character may be displayed in text input area 15718. fig. 157d illustrates an example character set with groups 15728. a user may swipe across groups to input characters from the groups into text input area 15726. for example, a user may execute swipe gesture 15730 to input "apple" on the display. as with other example embodiments, a user may select a character by swiping through the character, or a word prediction engine may determine the appropriate character to select from the group during or after performance of the swiping gesture. in particular embodiments, a space may be automatically added after a character string is input onto display 15726. in particular embodiments, a gesture (like a horizontal or vertical swipe or a double tap) can change that space character into a period, a comma, or any other suitable character(s). figs. 158a-b illustrate a keyboard that facilitates text entry on a screen of a wrist-wearable device. by placing the character set in the corner from which the finger enters the screen, a larger portion of the rest of the screen may be visible to the user while the user is entering text. when wearing the device on the left hand, as in fig. 158a, the user interacts with character set 15804 with the right hand from the bottom right corner. a user can select characters by tapping or swiping, such as for example on or from touch point 15806, to input text in text input area 15802 of the display. when wearing the wrist-wearable device on the right hand, as in fig.
158b, a user may access character set 15808 (which, in particular embodiments, may or may not be the mirror image of character set 15804) to input text, such as for example by tapping or swiping on or from touch point 15810. in particular embodiments, a user may rotate the semicircles of the character set of figs. 158a-b to access additional characters. more frequently used characters may be permanently or initially visible, while less used characters may be swiped onto the screen. in particular embodiments, a wrist-wearable device may sense the hand on which the user is wearing the device and may orient the display accordingly. in particular embodiments, especially on a device having a relatively small screen, it may be desirable to have an interface by which a user can input text, such as for example a keyboard, cover as little of the screen space as possible. in particular embodiments, only one or a few characters may be visible on the screen at one time, even though each character can have a distinct position on the screen to facilitate entry of the character. figs. 159a-c illustrate example text-input interface displays. input element (or interactive element) 15906 is used to input characters. each character may correspond to a particular location in input element 15906, which may be referred to as a "ribbon." for example, fig. 159a illustrates that "m" corresponds to position 15908. the character being selected may be displayed to the user in display area 15904, while other content may be displayed in display area 15902. as is applicable to other embodiments, display area 15904 may overlay display area 15902, e.g., by being relatively transparent so that a user may see a portion of area 15902 through area 15904. characters may be arranged in any suitable order, such as, e.g., alphabetically.
while the user is touching input element 15906 with one or more fingers, sliding the finger(s) along input element 15906 may change the selected letter displayed in display area 15904. for example, fig. 159b illustrates the touch position 15910 corresponding to "a", and fig. 159c illustrates the touch position 15912 corresponding to "z." when the finger is released from input element 15906, the character that was selected on release is added as text to the display. with this keyboard, only a tiny amount of space is needed on a small screen, while the user still has fast access to a large character set. over time, a user may become familiar with the positions of the characters, and thus use less swiping to find the desired character. in particular embodiments, performing gestures on display area 15904 may add, edit, remove, or otherwise alter displayed text. as merely one example, swiping from right to left may add a space, while swiping from left to right may delete the last input character or string of characters. this disclosure contemplates that a ribbon for text entry may take any suitable position on a display. for example, figs. 159d-f illustrate a vertical touch element 15916 on a display 15914. fig. 159d illustrates that touch position 15918 corresponds to "m", fig. 159e illustrates that touch position 15920 corresponds to character "a", and fig. 159f illustrates that touch position 15922 corresponds to character "z". fig. 160 illustrates a display 16002 that has a text input area 16004 and suggested character string area 16006. area 16006 may appear after a character from input area 16004 is selected, and suggested character strings may be sorted by likelihood of being selected by the user. the user can either browse this list, e.g., by swiping up or down on area 16006, or may continue to enter text from input area 16004. the user may navigate among characters by any suitable method, such as for example by swiping up or down on text input area 16004.
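the ribbon of figs. 159a-f maps a touch coordinate along the input element to a character; with equal-width alphabetical slots the mapping reduces to a single division. a minimal sketch (the 260-pixel ribbon width used below is an assumption, not a value from the figures):

```python
import string

def ribbon_char(x, ribbon_width, charset=string.ascii_lowercase):
    """map a touch x-coordinate along the ribbon to the character in
    the equal-width slot at that position."""
    slot = int(x / ribbon_width * len(charset))
    return charset[max(0, min(len(charset) - 1, slot))]

print(ribbon_char(0, 260), ribbon_char(259, 260))  # ends of the ribbon
```

sliding the finger simply re-evaluates this mapping, and the character at the release position is the one committed to the display.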
in particular embodiments, display area 16004 may be a ribbon, such as that shown in figs. 159a-f. this disclosure contemplates that areas 16004, 16006, and 16002 may take any suitable shape or size and be placed on any suitable area of a display having any suitable shape. fig. 161 illustrates a display 16102 having a text input area 16104. a user can navigate among characters using navigation icons 16108 and 16110. the character or string of characters that the user has currently navigated to is displayed in area 16106. for example, in fig. 161, the user has navigated to "m". in particular embodiments, selection of icon 16108 may move one character forward or backward, and selection of icon 16110 may move a predetermined number of steps forward or backward. by keeping icon 16108 or 16110 pressed, the character(s) displayed in area 16106 may continuously cycle until the icon is released. interaction with icons 16108 and 16110 may occur by any suitable method, such as for example by interaction with physical buttons rather than virtual buttons on a display screen. fig. 162 illustrates a display 16202 having a text input area 16206 and a suggested text area 16204. characters in a character set may be browsed by, e.g., swiping horizontally on the input area 16206. a user can input a character by quickly swiping the character up a predetermined distance, e.g., past area 16204 with a vertical swipe gesture. in particular embodiments, suggested text area 16204 may also be scrolled by swiping horizontally, and selection of a suggested character string may be made by flicking up on the desired string. fig. 163 illustrates a display screen with swipeable display portions 16304 and 16306.
in particular embodiments, swipeable portion 16304 may include characters for a user's selection, and portion 16306 may include functions that toggle portion 16304 between, e.g., alphabet characters, numbers, symbols, typesetting, capitalization, or any other suitable characters or character-related functions. a user may input characters onto text display portion 16302 by, for example, tapping on the character, such as for example on touch area 16308. this disclosure contemplates that text entry may be made by any suitable method, as described more fully herein. figs. 164a-b illustrate example methods of using a rotatable element 16406 of a circular device 16402 to select and enter text on the display of device 16402. the text to be entered, such as an individual character, may be displayed on display portion 16404. a user may navigate among characters by rotating rotatable element 16406. for example, a character displayed in portion 16404 may change letter by letter in alphabetical order with each rotational step. a fast rotation over several steps at once can change this one-to-one ratio for an accelerated character selection. by rotating clockwise the selection may move to the next character, and by rotating counterclockwise the selection may move to the previous character. for example, rotating element 16406 counterclockwise one step may transition "b" to "a". tapping a displayed character or waiting for a short time may add the chosen character as inputted text. in particular embodiments, more than one character for selection may be displayed. for example, fig. 164b illustrates additional characters shown on input portion 16408. one of the characters, such as character 16410, may be emphasized relative to the others, indicating that that character is the one that will be input if the user taps area 16408 or waits for a predetermined time. fig.
165 illustrates a text entry portion 16504 that consists of character portions, such as for example portion 16506. characters are made of the character portions, which a user can select by, for example, swiping or tapping the appropriate portions. created characters may be displayed on text display portion 16502, for example as the character is being built or after a complete character is constructed. this disclosure contemplates that icons and display areas for entry of text, including the keyboards and other buttons described herein, may be displayed based on device context, on user command, or continuously. herein, "or" is inclusive and not exclusive, unless expressly indicated otherwise or indicated otherwise by context. therefore, herein, "a or b" means "a, b, or both," unless expressly indicated otherwise or indicated otherwise by context. moreover, "and" is both joint and several, unless expressly indicated otherwise or indicated otherwise by context. therefore, herein, "a and b" means "a and b, jointly or severally," unless expressly indicated otherwise or indicated otherwise by context. the scope of this disclosure encompasses all changes, substitutions, variations, alterations, and modifications to the example embodiments described or illustrated herein that a person having ordinary skill in the art would comprehend. the scope of this disclosure is not limited to the example embodiments described or illustrated herein. moreover, although this disclosure describes and illustrates respective embodiments herein as including particular components, elements, features, functions, operations, or steps, any of these embodiments may include any combination or permutation of any of the components, elements, features, functions, operations, or steps described or illustrated anywhere herein that a person having ordinary skill in the art would comprehend.
furthermore, reference in the appended claims to an apparatus or system or a component of an apparatus or system being adapted to, arranged to, capable of, configured to, enabled to, operable to, or operative to perform a particular function encompasses that apparatus, system, or component, whether or not it or that particular function is activated, turned on, or unlocked, as long as that apparatus, system, or component is so adapted, arranged, capable, configured, enabled, operable, or operative. while this disclosure describes particular structures, features, interactions, and functionality in the context of a wearable device, this disclosure contemplates that those structures, features, interactions, or functionality may be applied to, used for, or used in any other suitable electronic device (such as, for example, a smart phone, tablet, camera, or personal computing device), where appropriate.
metal coated with a polymer film and process for obtaining it
metal coated with a polymer film having a thickness between 10 and 500 µm. the film is a terpolymer comprising 88 to 98.7 mol % of units derived from ethylene, from 1 to 10 mol % of units derived from an alkyl (meth)-acrylate, and from 0.3 to 3 mol % of units derived from maleic anhydride, and optionally may contain up to 5 mol % of units derived from a fourth monomer selected from α-olefins having from 3 to 8 carbon atoms, monoalkyl maleates and dialkyl maleates in which the alkyl groups have from 1 to 6 carbon atoms, vinyl acetate, and carbon monoxide. the terpolymer film has a melt index of between 2 and 10 dg/minute. the metal substrate is coated with the terpolymer film at a temperature between 140°c and 300°c, the speed of travel of the metal substrate being between 40 and 400 meters per minute.
1. metal coated with a layer of polymer film having a thickness of between 10 and 500 µm, said film consisting essentially of a polymer comprising from 88 to 98.7 mol % of units derived from ethylene, from 1 to 10 mol % of units derived from an alkyl (meth)-acrylate, from 0.3 to 3 mol % of units derived from maleic anhydride, and up to 5 mol % of units derived from a monomer selected from the group consisting of α-olefins having from 3 to 8 carbon atoms, monoalkyl maleates and dialkyl maleates in which the alkyl groups have from 1 to 6 carbon atoms, vinyl acetate, and carbon monoxide, and having a melt index of between 2 and 10 dg/minute.
2. coated metal according to claim 1, wherein said metal is selected from the group consisting of aluminum and steel.
3. coated metal according to claim 1 or 2, wherein said metal is in the form of a plate, sheet, or tube and has a thickness of between 25 and 500 µm.
4. coated metal according to claim 1 or 2, wherein the polydispersity index of the polymer is greater than 6.
5. coated metal according to claim 1 or 2, wherein the vicat temperature of the polymer lies between 30° and 85°c.
6. coated metal according to claim 1 or 2, wherein said polymer is treated beforehand with an amount, at most equal to the molar amount of units derived from maleic anhydride, of a reactant selected from ammonia and compounds having a primary or secondary amine group.
7. coated metal according to claim 1, wherein said coated metal consists essentially of said metal coated with said layer of polymer film.
8. process for the production of a coated metal according to claim 1, comprising coating a metal substrate with said polymer film at a temperature between 140°c and 300°c, the speed of travel of said metal substrate being between 40 and 400 meters per minute.
9. process according to claim 8, wherein said polymer film is obtained through a flat die.
background of the invention
the present invention relates to the coating of metals with a polymer film. in several applications, it is necessary to coat a metal with a polymer film of low thickness, the metal being in the form of a plate or sheet. the properties of the polymeric material required for such applications are essentially the peel strength, on the one hand, and the strength of the seals produced by hot sealing, on the other hand. the polymeric materials commonly used because of their good properties in these applications are ethylene ionomers, i.e. terpolymers of ethylene, methacrylic acid, and an alkali metal methacrylate or zinc methacrylate. furthermore, u.s. pat. no. 4,032,692 discloses a process for coating materials by applying to a substrate a molten terpolymer comprising, for 100 parts by weight, from 70 to 90 parts of ethylene, from 0.5 to 10 parts of an ethylenically unsaturated carboxylic acid amide and from 0.5 to 20 parts of an ethylenically unsaturated carboxylic acid ester.
summary of the invention
the object of the present invention is a new polymeric film for coating metals having improved properties for such applications, and a process for carrying out coating with this new film. additional objects and advantages of the invention will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. the objects and advantages of the invention may be realized and attained by means of the instrumentalities and combinations particularly pointed out in the appended claims.
to achieve the foregoing objects and in accordance with the purpose of the invention, as embodied and broadly described herein, the product of the invention comprises metal coated with a layer of polymer film having a thickness of between 10 and 500 µm, the film comprising a terpolymer comprising from 88 to 98.7 mol % of units derived from ethylene, from 1 to 10 mol % of units derived from an alkyl (meth)-acrylate and from 0.3 to 3 mol % of units derived from maleic anhydride, and having a melt index of between 2 and 10 dg/minute.
description of the preferred embodiments
reference will now be made in detail to the presently preferred embodiments of the invention. some terpolymers usable within the scope of the present invention have been disclosed in french patent no. 1,323,379. particular terpolymers also usable, more particularly characterized by their polydispersity index greater than 6 and their vicat temperature of between 30° and 85°c, have been disclosed in french patent application no. 81/01,430 in the name of the applicant. the french patent and application are incorporated herein by reference. the process for the manufacture of those terpolymers comprises copolymerizing, in the presence of at least one free radical initiator, a mixture composed of 94 to 99% by weight of ethylene, 0.7 to 5% by weight of (meth)-acrylic acid ester and 0.2 to 0.9% by weight of maleic anhydride, in a reactor kept under a pressure of 1,000 to 3,000 bars and at a temperature of 170° to 280°c, then releasing and separating the mixture of monomers and the terpolymer formed in the reactor, and finally recycling, into the reactor, the mixture of ethylene and monomers previously separated off, the recycled stream comprising from 99 to 99.8% of ethylene and from 0.2 to 1% of (meth)-acrylic acid ester. optionally, the terpolymer can comprise a fourth monomer which is copolymerizable with the first three monomers.
this fourth monomer can be selected from α-olefins having from 3 to 8 carbon atoms, monoalkyl maleates and dialkyl maleates, in which the alkyl groups have from 1 to 6 carbon atoms, vinyl acetate, and carbon monoxide, and can be present in an amount of up to 5 mol %, the proportion of ethylene in the tetrapolymer then being reduced accordingly, relative to the range indicated above. if necessary, the terpolymer used within the scope of the present invention may have been treated beforehand with a molar amount, at most equal to the molar amount of units derived from maleic anhydride, of a reactant selected from ammonia and compounds having a primary or secondary amine group. films having a thickness of between 10 and 500 µm are obtained from these terpolymers through a flat die, in a known manner. the invention also relates to a process for the production of a coated metal such as described above, which comprises coating a metal substrate with the polymer film at a temperature between 140°c and 300°c, the speed of travel of the metal substrate being between 40 and 400 meters per minute. the metal is preferably selected from the group comprising aluminum and steel. the metal substrate is preferably in the form of a plate, sheet, or tube having a thickness of at least 25 µm. metals coated according to the invention have significant properties that are improved compared with metals coated according to the state of the art referred to above. firstly, their peel strength is at least equivalent to that obtained by means of ethylene ionomers and is better equilibrated in the longitudinal and transverse directions. secondly, the strength of the seals is substantially improved compared with ethylene ionomers, for sealing temperatures ranging from 100°c up to more than 150°c, and it remains satisfactory at the relatively low sealing temperatures that are generally sought for increasing production rates.
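as an aside for readers checking a formulation against the composition and melt-index windows stated above, the check can be expressed programmatically (a hypothetical sketch; the key names are illustrative, and the check ignores the optional fourth monomer):

```python
# composition windows (mol %) and melt-index window (dg/minute) from the text
CLAIM_RANGES = {
    "ethylene": (88.0, 98.7),
    "alkyl_acrylate": (1.0, 10.0),
    "maleic_anhydride": (0.3, 3.0),
}
MELT_INDEX_RANGE = (2.0, 10.0)

def within_claim(composition, melt_index):
    """check a terpolymer against the stated composition and melt-index windows."""
    comp_ok = all(lo <= composition.get(k, 0.0) <= hi
                  for k, (lo, hi) in CLAIM_RANGES.items())
    mi_ok = MELT_INDEX_RANGE[0] <= melt_index <= MELT_INDEX_RANGE[1]
    return comp_ok and mi_ok
```

the example 1 polymer of table i (1.0 mol % maleic anhydride, 3.3 mol % ethyl acrylate, balance ethylene, m.i. 6.4) falls inside every window.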
metals coated according to the invention have varied and widespread applications. for example, aluminum films coated according to the invention can be used in the food packaging industry for keeping foodstuffs protected from moisture and preserving their aroma. as another example, the coating process according to the invention can be applied to steel pipes, such as pipes for conveying petroleum or gases, making it possible to protect these pipes against oxidation and shocks. in this case it is preferred that the terpolymer film be covered with a layer of polyethylene containing a filler such as carbon black.
examples 1 and 2--manufacture of terpolymers
a cylindrical autoclave reactor was used that comprised three zones, each having a volume of 1 liter, and was equipped with a blade stirrer. the zones were separated by valve screens. fresh ethylene, compressed by a first compressor, fed the first zone. the second zone was fed with a homogeneous mixture of ethylene, maleic anhydride (ma), and ethyl acrylate (ea). finally, a solution of tert-butyl 2-ethyl-perhexanoate in a hydrocarbon fraction was injected into the third zone. the latter thus constituted the only reaction zone because it brought the three comonomers into contact with a free radical initiator. table i below shows, on the one hand, the proportions by weight of maleic anhydride and ethyl acrylate, relative to the ethylene in the reaction zone, and, on the other hand, the temperature in the reaction zone. the reactor was kept under a pressure of 1,600 bars. at the bottom of the third zone of the reactor, there was a relief valve making it possible to lower the pressure to 300 bars. after it had passed through the relief valve, the mixture of the molten polymer, on the one hand, and the gaseous monomers, on the other hand, passed into a separating hopper.
while the polymer was collected at the bottom of the hopper, the monomers were led into a second compressor, after they had passed through a degreasing hopper. furthermore, a solution of maleic anhydride in ethyl acrylate was pumped in under pressure and led towards the inlet of a venturi-type homogenizer, where it was mixed with the stream of recycled monomers originating from the second compressor. on leaving this venturi device, the mixture of the three monomers was led towards a spiral-type homogenizer and then transferred to the second zone of the reactor. on leaving the separating hopper, the terpolymer produced was analyzed by infra-red spectrophotometry and the molar proportions of ethyl acrylate units and maleic anhydride units were determined, these being indicated in table i below. furthermore, the melt index (m.i.) of the polymer was determined according to astm standard specification d 1238-73 and is expressed in dg/minute.

table i

                      reactor (feed, wt % vs ethylene)   polymer (mol %)
example    t (°c)     % ma       % ea                    % ma     % ea     m.i.
1          185        0.35       3.0                     1.0      3.3      6.4
2          230        0.25       0.85                    0.4      1.3      3.8

examples 3 (comparison), 4, and 5--coating of aluminum

a 25 µm thick film was formed through a flat die and used to coat, at a temperature of 265°c (comparison example 3) or of 250°c (example 4 according to the invention), an aluminum sheet travelling at a speed of 40 meters per minute. the film of example 3 consisted of a terpolymer comprising 90 mol % of units derived from ethylene, 3 mol % of units derived from methacrylic acid and 7 mol % of units derived from zinc methacrylate, marketed under the trademark surlyn. the film of example 4 consisted of the terpolymer of example 1. the following were measured on the aluminum sheets coated in this way: the peel strength in the longitudinal direction (l.p.s.) and transverse direction (t.p.s.) according to astm standard specification d 903-49, modified as regards the width of the polymer strip (35 mm instead of 25 mm) and expressed in grams; and the strength of the seals, s.s., according to french standard specification k 03-004, modified as regards the width of the polymer strip (35 mm instead of 15 mm) and expressed in kilograms. this strength can be measured for seals produced by electrodes at different temperatures. the results of these measurements are shown in table ii below.

table ii

example    l.p.s. (g)   t.p.s. (g)   s.s. at 150°c (kg)   s.s. at 100°c (kg)
3          260          460          0.3                  1.8
4          360          390          0.7                  2.2

the film of example 5 consisted of the terpolymer of example 2. on the aluminum sheet coated in this way, the strength of the seals made at 100°c, measured as above, is equal to 2.4 kg. it will be apparent to those skilled in the art that various modifications and variations could be made in the product of the invention without departing from the scope or spirit of the invention.
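table i reports the feed proportions by weight but the polymer composition in mol %; converting between the two only requires the monomer molar masses. the sketch below is a minimal illustration, using standard molar masses (ethylene 28.05, maleic anhydride 98.06, ethyl acrylate 100.12 g/mol); the example composition is hypothetical, not a row from table i.

```python
# Convert comonomer weight percentages to mole percentages.
# Molar masses (g/mol) from standard data, not from the patent text.

MOLAR_MASS = {"ethylene": 28.05, "ma": 98.06, "ea": 100.12}

def mol_percent(weight_percent: dict) -> dict:
    """Mole % of each comonomer from its weight % in the polymer."""
    moles = {m: w / MOLAR_MASS[m] for m, w in weight_percent.items()}
    total = sum(moles.values())
    return {m: 100.0 * n / total for m, n in moles.items()}

# hypothetical terpolymer, 90/4/6 by weight:
comp = mol_percent({"ethylene": 90.0, "ma": 4.0, "ea": 6.0})
print({m: round(p, 2) for m, p in comp.items()})
```

because ethylene is so much lighter than the two comonomers, a few weight percent of ma or ea corresponds to an even smaller mole fraction, which is consistent with the low mol % values reported for the polymers in table i.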
135-074-981-259-033
US
[ "US", "MX", "CN", "CA", "EA", "JP", "AU", "NZ", "IL", "KR", "BR", "EP", "WO" ]
A61F2/00,A61K9/00,A61K9/06,A61K9/19,A61K9/70,A61K47/34,A61L27/16,A61L27/52,C08F16/06,C08F218/08,C08F220/06,C08F220/12,C08F220/18,C08F220/56,C08J5/18,C08F8/12,A61K47/30,C08F2/22,C08J3/075,C08J9/28,C08L29/04,C08L89/00,C08F216/06,A61K8/02,A61K8/44,A61K8/81,A61K47/20,A61K47/32,A61L27/00,C08K5/00,A61Q1/00,A61Q90/00,A61K47/48,A61L27/14
2008-09-15T00:00:00
2008
[ "A61", "C08" ]
vinyl alcohol co-polymer cryogels, vinyl alcohol co-polymers, and methods and products thereof
a cryogel-forming vinyl alcohol co-polymer is operable to form a cryogel, i.e., a hydrogel formed by cryotropic gelation, in an aqueous solution at a concentration of less than about 10% by weight, in the absence of a chemical cross-linking agent and in the absence of an emulsifier. in one embodiment, a vinyl alcohol co-polymer cryogel comprises at least about 75% by weight water and a vinyl alcohol co-polymer, wherein the vinyl alcohol co-polymer is operable to form a cryogel in an aqueous solution at a concentration of less than about 10% by weight, in the absence of a chemical cross-linking agent and in the absence of an emulsifier. in another embodiment, a vinyl alcohol co-polymer cryogel comprises at least about 75% by weight water and a vinyl alcohol co-polymer comprising a saponified product of a vinyl acetate co-polymer formed from at least about 80% by weight of vinyl acetate monomer, and (i) at least about 3% by weight of acrylamide monomer or a mixture of acrylamide monomer and acrylic acid monomer, or (ii) at least about 5% by weight acrylic acid monomer. the vinyl acetate co-polymer, vinyl alcohol co-polymer and vinyl alcohol co-polymer cryogel may be formed according to particular methods, and the vinyl alcohol co-polymer cryogels may be used in various applications including biomedical implants and thin films and for delivery of therapeutic or cosmetic agents.
1. a cryogel-forming vinyl alcohol co-polymer that forms a cryogel in an aqueous solution at a concentration of less than about 10% by weight, in the absence of a chemical cross-linking agent and in the absence of an emulsifier, said vinyl alcohol co-polymer comprising a saponified product of a vinyl acetate co-polymer formed from at least about 80% by weight of vinyl acetate monomer, and at least about 3% by weight of acrylamide monomer or a mixture of acrylamide monomer and acrylic acid monomer.
2. the vinyl alcohol co-polymer of claim 1 operable to form a cryogel in an aqueous solution at a concentration of about 5% by weight or less, in the absence of a chemical cross-linking agent and in the absence of an emulsifier.
3. the vinyl alcohol co-polymer of claim 1, wherein the saponified product has a degree of saponification of at least about 90%.
4. the vinyl alcohol co-polymer of claim 1, comprising the saponified product of a vinyl acetate co-polymer formed from at least about 85% by weight of vinyl acetate monomer.
5. the vinyl alcohol co-polymer of claim 1, wherein the vinyl alcohol co-polymer is in the form of a powder.
6. a method of forming vinyl acetate co-polymer, comprising copolymerizing at least about 80% by weight of vinyl acetate monomer, and at least about 3% by weight of acrylamide monomer or a mixture of acrylamide monomer and acrylic acid monomer and, optionally, at least about 5% by weight acrylic acid monomer, based on the weight of the monomers, in an aqueous medium with a polymerization initiator and a buffer, wherein the aqueous medium is free of emulsifier.
7. the method of claim 6, wherein at least about 85% by weight of the vinyl acetate monomer is employed in the copolymerization.
8. the method of claim 6 further comprising the step of: saponifying the vinyl acetate co-polymer to form the cryogel-forming vinyl alcohol co-polymer.
9. the method of claim 8, wherein the vinyl acetate co-polymer is saponified to have a degree of saponification of at least about 90%.
10. the method of claim 8, wherein the vinyl alcohol co-polymer is precipitated in powder form.
11. a method of forming a vinyl alcohol co-polymer cryogel, comprising freezing an aqueous solution of the vinyl alcohol co-polymer of claim 1 at a temperature of from 0° c. to about −196° c. to form a molded mass, and thawing the molded mass to form a hydrogel.
12. the method of claim 7, further comprising the steps of: freezing an aqueous solution of the vinyl alcohol co-polymer at a temperature of from 0° c. to about −196° c. to form a molded mass; and thawing the molded mass to form a hydrogel.
13. the method of claim 12, wherein the aqueous solution of the vinyl alcohol co-polymer is frozen at a temperature of from about −15° c. to about −35° c.
14. the method of claim 12, wherein the aqueous solution comprises from about 1 to about 10% by weight of the vinyl alcohol co-polymer.
15. the method of claim 12, wherein the aqueous solution comprises from about 1 to about 5% by weight of the vinyl alcohol co-polymer.
16. the method of claim 11 further comprising the step of: freeze-drying the vinyl alcohol co-polymer cryogel formed, wherein a porous solid material is formed.
17. a vinyl alcohol co-polymer cryogel, comprising at least about 75% by weight water and formed from a vinyl alcohol co-polymer according to claim 1.
18. the cryogel of claim 17, comprising at least 90% by weight water.
19. the cryogel of claim 17, comprising at least 95% by weight water.
20. the cryogel of claim 17, wherein the cryogel is free of emulsifier and chemical cross-linking agents.
21. the cryogel of claim 17, wherein the cryogel is loaded with a therapeutic agent and/or a cosmetic agent.
22. the cryogel of claim 21, wherein the cryogel is loaded with at least one therapeutic agent comprising an analgesic, anesthetic, antibacterial, antifungal, anti-inflammatory, anti-itch, anti-allergic, anti-mimetic, immunomodulator, ataractic, sleeping aid, anxiolytic, vasodilator, bone growth enhancer, osteoclast inhibitor, or vitamin.
23. the cryogel of claim 17, wherein the cryogel is loaded with at least one functional agent comprising a colorant, taste enhancer, preservative, antioxidant, lubricant, rheology modulator, or thiolated mucoadhesive enhancer.
24. the cryogel of claim 23, wherein the cryogel is loaded with cysteine.
25. the cryogel of claim 17, wherein the cryogel is biodegradable.
26. the cryogel of claim 17, wherein the cryogel is non-biodegradable.
27. a biomedical implant formed of the polyvinyl cryogel of claim 17.
28. the biomedical implant of claim 27, wherein the cryogel is loaded with at least one amino acid.
29. the biomedical implant of claim 27, wherein the cryogel is loaded with a biological macrocomplex.
30. the biomedical implant of claim 29, wherein the biological macrocomplex is a plasmid, virus, bacteriophage, protein micelle, or cell component organelle.
31. a thin film formed of the cryogel of claim 17.
32. the thin film of claim 31, wherein the cryogel is loaded with at least one thiolated mucoadhesion enhancer.
33. the cryogel of claim 1, wherein the acrylamide monomer does not exceed about 20% by weight.
34. the cryogel of claim 21, which is a drug-containing topical formulation for the treatment of wounds or burns.
35. the cryogel of claim 21, which is a topical formulation further comprising an antibiotic, an antiseptic or an antifungal drug.
36. the cryogel of claim 21, which is a topical formulation further comprising one or more drugs selected from the group consisting of nitrofurazone, fusidic acid, mafenide, iodine, bacitracin, lidocaine, bupivacaine, levobupivacaine, prilocaine, ropivacaine, mepivacaine and aloe vera.
37. the cryogel of claim 21, which is a topical formulation further comprising iodine for the treatment of wounds or burns.
field of the invention

the present invention is directed to vinyl alcohol co-polymer cryogels, i.e., hydrogels formed by cryotropic gelation, vinyl alcohol co-polymers suitable for forming the cryogels, and vinyl acetate co-polymers suitable for forming the vinyl alcohol co-polymers. the present invention is also directed to methods of forming vinyl acetate co-polymers, methods of forming vinyl alcohol co-polymers, and methods of forming vinyl alcohol co-polymer cryogels. in further embodiments, the invention is directed to biomedical implants and thin films formed from the vinyl alcohol co-polymer cryogels and to delivery systems for therapeutic or cosmetic agents.

background of the invention

conventional polyvinyl alcohol (pva) is a widely used polymer in fibers, adhesives, films, membranes, fishing baits, and drug delivery vehicles. pva is also commonly used as a base for various pharmaceutical and non-pharmaceutical chewing gums. co-polymers of pva with acrylic or methacrylic acid have been studied for controlled drug delivery and for ph-sensitive smart drug delivery vehicles (ranjha et al, “ph-sensitive non-crosslinked poly(vinyl alcohol-co-acrylic acid) hydrogels for site specific drug delivery,” saudi pharmaceutical journal, 7(3):137-143 (1999); hirai et al, “ph-induced structure change of poly(vinyl alcohol) hydrogel crosslinked with poly(acrylic acid),” angewandte makromolekulare chemie, 240:213-219 (1996); barbani et al, “hydrogels based on poly(vinyl alcohol-co-acrylic acid) as innovative system for controlled drug delivery,” journal of applied biomaterials and biomechanics, 2:192 (2004); and coluccio et al, “preparation and characterization of poly(vinyl alcohol-co-acrylic acid) microparticles as a smart drug delivery system,” journal of applied biomaterials and biomechanics, 2:202 (2004)).
conventional pva hydrogels have also been extensively studied for biomedical applications, for example, in soft tissue applications wherein their high water content and rheology are well suited. crosslinking has been studied as a mechanism for controlling the mechanical properties of pva, including cross-linking by addition of chemical agents, e.g. glutaraldehyde (canal et al, “correlation between mesh size and equilibrium degree of swelling of polymeric networks,” journal of biomedical materials research, 23:1183-1193 (1989); kurihara et al, “crosslinking of poly(vinyl alcohol)-graft-n-isopropylacrylamide copolymer membranes with glutaraldehyde and permeation of solutes through the membranes,” polymer, 37:1123-1128 (1996); and mckenna et al, “effect of cross-links on the thermodynamics of poly(vinyl alcohol) hydrogels,” polymer, 35:5737-5742 (1994)), crosslinking by irradiation/photopolymerisation, and crosslinking by cryotropic gelation (stauffer et al, “poly(vinyl alcohol) hydrogels prepared by freezing-thawing cyclic processing,” polymer, 33:3932-3936 (1992); urushizaki et al, “swelling and mechanical-properties of poly(vinyl alcohol) hydrogels,” international journal of pharmaceutics, 58:135-142 (1990); and peppas et al, “controlled release from poly(vinyl alcohol) gels prepared by freezing-thawing processes,” journal of controlled release, 18:95-100 (1992)). however, glutaraldehyde is known to be toxic to cells; accordingly, hydrogels prepared with such chemical crosslinking agents have limited applications unless the absence of unreacted toxic entities is assured. irradiation-crosslinked pva hydrogels have been described for controlled release of biologically active substances (penther et al, jena math. nat. wiss. reihe, 36:669 (1987)). however, these gels are generally weak (yoshii et al, radiation physics and chemistry, 46:169-174 (1995)), and the irradiation methods are typically expensive and difficult for industrial scale-up.
cryotropic gelation, i.e., gel formation upon consecutive freezing, for example in a temperature range between −5 and −196° c., and thawing, is a physical method of gel formation which is suited best for pharmaceutical and biotechnological applications as it avoids use of potentially hazardous cross-linking agents or irradiation to manufacture firm hydrogels. such cryogels can act as drug delivery vehicles useful in, e.g., controlled release formulations. early cryogels were made in the 1940s in germany where sponges were produced by freezing of starch paste. cryogels from pva solutions were described in the 1970s for manufacturing jelly fish baits (inoue et al, “water-resistant poly(vinyl alcohol) plastics,” japanese patent no. 47-012854 (1972)). descriptions of cryogelling properties of polyvinyl alcohol polymers are provided by nambu, “rubber-like poly(vinyl alcohol) gel,” kobunshi ronbunshu, 47:695-703 (1990); peppas et al, “reinforced uncrosslinked poly (vinyl alcohol) gels produced by cyclic freezing-thawing processes—a short review,” journal of controlled release, 16:305-310 (1991); and lozinsky, “cryotropic gelation of poly(vinyl alcohol) solutions,” uspekhi khimii, 67:641-655 (1998). pva-based hydrogel systems have been used to develop various stimuli-responsive pharmaceutical systems which undergo significant volume transitions with relatively small changes in the environmental conditions, e.g. ph, magnetic field, or light (hernandez et al, “viscoelastic properties of poly(vinyl alcohol) hydrogels and ferrogels obtained through freezing-thawing cycles,” polymer, 45(16):5543-5549 (2004)). pva is probably the most common polymer among cryogelling agents for biomedical applications because it is non-toxic and biocompatible. further, the structure-functionality relationship of pva-based cryogels has been extensively described. generally, pva gels with larger molecular mass form firmer cryogels than analogues with lower molecular mass (lozinsky, supra). 
this is because polymer chain elongation increases the possibility of entanglement between adjacent chains and eventually local crystallization. however, high molecular mass polymers are known to have lower solubility. similarly, higher density of available side chains produces firmer gels than analogues with lower degrees of branching (id.). the mechanism of cryogel formation is complex. in brief, it is believed that during freezing, local areas of high polymer concentration are formed and promote crystallite formation and cross-linking between polymer chains resulting in a macroporous mesh (domotenko et al, “influence of regimes of freezing of aqueous solutions of polyvinyl-alcohol and conditions of defreezing of samples on properties of obtained cryogels,” vysokomolekulyarnye soedineniya, seriya a, 30:1661-1666 (1988)). as a result, pva chains form the ordered structures known as microcrystallinity zones (yokoyama et al, “morphology and structure of highly elastic poly(vinyl alcohol) hydrogel prepared by repeated freezing-and-melting,” coll. polym. sci., 264:595-601 (1986)). they act as junction knots which in turn arise only when the oh groups are free to participate in interchain interactions. inasmuch as industrial pva is commonly manufactured by the saponification of poly(vinyl acetate), the degree of deacetylation along with the polymer molecular weight and tacticity are crucial in determining the ability of pva solutions to gel and particularly to gel via cryotropic gelation, since the residual acetyl groups will interfere with the coupling of the sufficiently long intermolecular contacts needed for the formation of pva crystallites. therefore, for the preparation of rigid cryogels of pva, it is necessary to use highly-deacetylated pva (watase et al, “rheological and dsc changes in poly(vinyl alcohol) gels induced by immersion in water,” journal of polymer science part b - polymer physics, 23:1803-1811 (1985)).
while hydrogels in general can have a range of mechanical properties depending on their chemistry and water content, they generally have a relatively low mechanical strength (hydrogels in medicine and pharmacy: vol. i-iii, peppas, ed., crc press, boca raton, fla. (1986)). currently known pva-based cryogels typically form firm gel structures at concentrations around 14-16% by wt (lozinsky, supra) and additional cross-linking agents may often be used. concentrated pva solutions are typically used for the preparation of mechanically rigid cryogel matrices; however, very concentrated (>20% by wt) solutions of pva are excessively viscous, especially when the polymer molecular weight exceeds 60-70 kda (lozinsky et al, “poly(vinyl alcohol) cryogels employed as matrices for cell immobilization. 3. overview of recent research and developments,” enzyme and microbial technology, 23:227-242 (1998)). the lozinsky russian patent no. 2003-131705/04 discloses that pva-based cryogels are formed at concentrations between 3-25% by wt with the addition of a surface active agent and that the addition of surface active agents (herein and elsewhere also referred to as emulsifiers) was found crucial for obtaining both physical crosslinking between adjacent polymer chains and high macro-porosity. the lozinsky patent further discloses that the chemical character of the emulsifier (cationic, anionic, or amphoteric) is not critical as long as it was present in the composition. as noted, pva is typically manufactured by the saponification of poly(vinyl acetate). a commonly used polymerization route of polyvinyl-acetate-based polymers utilizes emulsifiers or protective hydrocolloids for successful polymerization. further, organic solvents are conventionally used in the process (i.e. the so-called varnish method), which are hazardous and environmentally unfriendly and therefore require special handling.
in addition, during saponification of vinyl acetate products, a hard jelly-like mass is conventionally formed and is then broken using high-shear homogenizers, requiring a significant input of energy. gb patent no. 835,651 discloses a vinyl acetate polymer prepared with a limited amount of acrylamide to form a stable dispersion that yields a hard water-resistant film on drying at a high temperature. in view of the non-toxic and biocompatible nature of pva, further improvements in pva hydrogels are desirable to allow expanded use of pva in various applications.

summary of the invention

it is therefore an object of the present invention to provide improved vinyl alcohol-based hydrogels, and more specifically, improved vinyl alcohol-based cryogels, i.e., hydrogels formed by cryotropic gelation. it is a related object to provide materials and methods facilitating such vinyl alcohol-based cryogels, and to provide applications for such vinyl alcohol-based cryogels. in one embodiment, the present invention is directed to a cryogel-forming vinyl alcohol co-polymer operable to form a cryogel in an aqueous solution at a concentration of less than about 10% by weight, in the absence of a chemical cross-linking agent and in the absence of an emulsifier. in another embodiment, the invention is directed to a method of forming vinyl acetate co-polymer, the method comprising copolymerizing at least about 80% by weight of vinyl acetate monomer, and either (i) at least about 3% by weight of acrylamide monomer or a mixture of acrylamide monomer and acrylic acid monomer, or (ii) at least about 5% by weight acrylic acid monomer, based on the weight of the monomers, in an aqueous medium with a polymerization initiator and a buffer, wherein the aqueous medium is free of emulsifier. the resulting vinyl acetate co-polymer may be used, inter alia, in forming a vinyl alcohol co-polymer.
thus, in a related embodiment, the invention is directed to a method of forming a cryogel-forming vinyl alcohol co-polymer, comprising forming a vinyl acetate co-polymer according to the aforementioned method, and saponifying the vinyl acetate co-polymer to form the cryogel-forming vinyl alcohol co-polymer. in another embodiment, the invention is directed to a method of forming a vinyl alcohol co-polymer cryogel. the method comprises freezing an aqueous solution of the vinyl alcohol co-polymer of the invention at a temperature of from 0° c. to about −196° c. to form a molded mass, and thawing the molded mass to form a hydrogel. in further embodiments, the invention is directed to vinyl alcohol co-polymer cryogels. in one embodiment, a vinyl alcohol co-polymer cryogel comprises at least about 75% by weight water and is formed from a vinyl alcohol co-polymer operable to form a cryogel in an aqueous solution at a concentration of less than about 10% by weight, in the absence of a chemical cross-linking agent and in the absence of an emulsifier. in another embodiment, a vinyl alcohol co-polymer cryogel comprises at least about 75% by weight water and is formed from a vinyl alcohol co-polymer comprising a saponified product of a vinyl acetate co-polymer formed from at least about 80% by weight of vinyl acetate monomer, and either (i) at least about 3% by weight of acrylamide monomer or a mixture of acrylamide monomer and acrylic acid monomer, or (ii) at least about 5% by weight acrylic acid monomer. the vinyl alcohol co-polymer cryogels and vinyl alcohol co-polymers of the present invention are advantageous in that they can be readily prepared without emulsifiers and chemical crosslinking agents, and therefore are suitable for use in a wide variety of applications, including topical and in vivo use. additionally, the vinyl alcohol co-polymer cryogels can advantageously be formed from relatively low concentrations of the vinyl alcohol co-polymer. 
properties of the vinyl alcohol co-polymer cryogels can be controlled via, inter alia, the vinyl acetate co-polymers and vinyl alcohol co-polymers employed in the cryogel formation. the methods of the present invention facilitate preparation of the vinyl alcohol co-polymer cryogels and vinyl alcohol co-polymers with desirable characteristics. these and additional objects and advantages are more fully apparent in view of the detailed description which follows.

brief description of the drawings

the detailed description will be more fully understood in view of the drawings, in which: figs. 1a-1c show environmental scanning electron microscope (esem) pictures of cryogels of examples 1-3; fig. 2 shows differential scanning calorimetry (dsc) results of cryogels of examples 1-3; figs. 3a and 3b show thermogravimetric analysis (tga) results of cryogels of examples 1-3; figs. 4a and 4b show dynamic mechanical thermal analysis (dmta) results of cryogels of examples 1-3; figs. 5a and 5b show saccharine sodium release from cryogels of examples 1-3 studied by uv-spectroscopy; fig. 6 shows homogeneous distribution of a dye (fluorescein sodium) within the matrix of a cryogel; figs. 7a and 7b show zolpidem release from cryogels of examples 1-3 studied by uv-spectroscopy, with fig. 7a showing sustained release of zolpidem in 0.9% sodium chloride solution and fig. 7b showing the effect of ph on zolpidem release; and figs. 8a and 8b show, respectively, a mucoadhesive film for buccal drug delivery prior to application and applied to a lower lip. the embodiments to which the drawings relate are described in further detail in the examples. these embodiments are illustrative in nature and are not intended to be limiting of the invention. moreover, individual features of the drawings and the invention will be more fully apparent and understood in view of the detailed description.
detailed description

the present invention is directed to vinyl alcohol co-polymers and vinyl alcohol co-polymer cryogels, and to methods of forming vinyl alcohol co-polymers, vinyl acetate co-polymers from which vinyl alcohol co-polymers can be formed, and vinyl alcohol co-polymer cryogels. as will be described in detail herein, the vinyl alcohol co-polymers are suitable for use in a wide variety of applications. more specifically, the vinyl alcohol co-polymers according to the invention are cryogel-forming vinyl alcohol co-polymers, i.e., they form cryogels upon freezing and thawing, and are operable to form a cryogel in an aqueous solution at a concentration of less than about 10% by weight, in the absence of a chemical cross-linking agent and in the absence of an emulsifier. in more specific embodiments, the vinyl alcohol co-polymers are operable to form a cryogel in an aqueous solution at a concentration of less than about 5% by weight, and in further embodiments, are operable to form a cryogel in an aqueous solution at a concentration of about 1-2% by weight. as conventional vinyl alcohol co-polymers typically are not cryogel-forming in concentrations less than about 14-16% by weight, and often require crosslinkers to form gels having sufficient mechanical strength, the vinyl alcohol co-polymers of the present invention provide a significant advantage over the prior art. in one embodiment, the vinyl alcohol co-polymer comprises a saponified product of a specific vinyl acetate co-polymer. more particularly, the vinyl acetate co-polymer is formed from at least about 80% by weight of vinyl acetate monomer, and either (i) at least about 3% by weight of acrylamide monomer or a mixture of acrylamide monomer and acrylic acid monomer, or (ii) at least about 5% by weight acrylic acid monomer.
within the present disclosure, “acrylic acid monomer” is inclusive of acrylic acid and homologs thereof, including, but not limited to methyl, ethyl and propyl acrylic acids and acrylates, and “acrylamide monomer” is inclusive of acrylamide and homologs thereof, including, but not limited to methyl, ethyl and propyl acrylamides. the copolymerization is preferably conducted in the absence of an emulsifier and in an aqueous medium. within the present disclosure the term “emulsifier” is inclusive of any emulsifier, surface active agent, or surfactant. the acrylamide and/or acrylic acid monomers serve multiple purposes, including a) obtaining a self-emulsifying system, thereby avoiding contaminating emulsifiers, b) facilitating crosslinking during hydrogel formation and controlling hydrogel strength, thereby avoiding contaminating chemical crosslinking agents in hydrogel formation, and/or c) introducing functional groups for pronounced environment-responsive behavior, e.g. ph responsive gels, thermoresponsive gels, and the like. for instance, introducing acrylic acid or acrylamide monomers having ionizable groups of weak acids or bases allows the formation of ph-responsive systems, whereas introducing hydrophobic side chains allows the formation of thermoresponsive gels. acrylic acid monomer acts primarily as a self-emulsifying agent and facilitates cross-linking during the cryogel formation. co-polymers containing an acrylamide monomer generally result in firmer cryogels than co-polymers containing solely acrylic acid as the co-monomer. it should be noted that acrylamide monomer-based units are partly hydrolyzed into acrylic acid during the saponification stage in forming the vinyl alcohol co-polymer. the amount of acrylic acid monomer should not exceed about 20% by wt, more specifically should not exceed about 15% by wt, and even more specifically should not exceed about 10% by wt. 
addition of excessive amounts of acrylic acid monomer results in formation of mucilage during the saponification stage and a powdery vinyl alcohol co-polymer product is not obtained. similarly, the amount of acrylamide monomer should not exceed about 20% by wt, more specifically should not exceed about 15% by wt, and even more specifically should not exceed about 10% by wt. accordingly, the amount of vinyl acetate monomer should be in a range of about 80 to 95% by wt, or about 80 to 97% by wt, or, in more specific embodiments, at least about 85% by wt or in a range of from about 85 to 95% by wt. the vinyl acetate copolymerization is, in one embodiment, conducted with a polymerization initiator and a buffer. the choice of the initiator for polymerization and its solubility can affect the product properties. the initiator may be water-soluble, e.g., ammonium persulfate or an alkali persulfate such as potassium persulfate, or oil-soluble, e.g., benzoyl peroxide, or a combination thereof. the combination of the two may further influence the functionality of the resultant polymer. even if the ratios of the original monomers are the same, depending on the choice of the initiator or the combination thereof, polymers are produced with different molecular weights, intrinsic viscosities, and degrees of polydispersity. suitable buffers include, but are not limited to, bicarbonates, phosphates, and the like. the vinyl alcohol co-polymer is formed as the saponified product of the vinyl acetate co-polymer. saponification of the stable vinyl acetate co-polymer emulsion is conducted in an alkaline medium. advantageously, the saponification results in the formation of a powdery product, without the formation of hard gel mass which is typically formed in conventional processes. the present method therefore avoids subsequent use of high shear homogenizers for dispersion of a hard gel mass. in one embodiment, the degree of saponification of the resultant product is at least about 90%.
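the composition windows above (at least about 80 wt% vinyl acetate, at least about 3 wt% of acrylamide or an acrylamide/acrylic acid mixture or at least about 5 wt% acrylic acid alone, with each comonomer capped near 20 wt%) can be expressed as a simple feed check. the sketch below is illustrative only: it reads the "about" thresholds literally and interprets the 3 wt% as applying to the acrylamide-containing mixture, which is an interpretation for illustration, not a claim construction.

```python
# Illustrative check of a monomer feed against the approximate composition
# windows described in the text ("about" thresholds taken literally).

def feed_ok(vac: float, aam: float = 0.0, aa: float = 0.0) -> bool:
    """vac, aam, aa = wt% vinyl acetate, acrylamide, acrylic acid."""
    if abs(vac + aam + aa - 100.0) > 1e-6:
        return False                      # feed must sum to 100 wt%
    if vac < 80.0:                        # at least ~80 wt% vinyl acetate
        return False
    if aam > 20.0 or aa > 20.0:           # neither comonomer above ~20 wt%
        return False
    # route (i): >= ~3 wt% acrylamide, alone or mixed with acrylic acid;
    # route (ii): >= ~5 wt% acrylic acid on its own
    route_i = aam > 0.0 and (aam + aa) >= 3.0
    route_ii = aam == 0.0 and aa >= 5.0
    return route_i or route_ii

print(feed_ok(90.0, aam=6.0, aa=4.0))   # mixed acrylamide/acrylic acid route
print(feed_ok(90.0, aa=10.0))           # acrylic-acid-only route
print(feed_ok(78.0, aam=22.0))          # fails: too little vinyl acetate
```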
in further embodiments, the degree of saponification of the resultant product is above 92%, more specifically above 93%, and even more specifically above 95%. the resultant product consists of a vinyl alcohol co-polymer (pva) backbone polymer functionalized with acrylic acid, its homologues, acrylamide, its homologues, or combinations thereof. the characteristic viscosities [η] of the obtained pva products in 0.05 m nano3 are typically between 1 and 4. the molecular weight of the pva products is typically between 10,000 and 170,000 daltons. the polymer product emulsion is characterized by a ph typically of from 3.8 to 5.2 and a dry solid content typically of 30-50% by wt. these numerical ranges are only exemplary of specific, selected embodiments and should not be regarded in a limiting sense. the vinyl alcohol co-polymer may thus be used to form a vinyl alcohol co-polymer cryogel by freezing an aqueous solution of the vinyl alcohol co-polymer at a temperature in a range of from 0 to about −196° c., more specifically in a range of from about −15 to about −35° c., to form a molded mass, and subsequently thawing the molded mass above freezing temperatures to form the hydrogel. the freezing may be conducted for any suitable time period as desired, for example, from several minutes, e.g., 2, 3, 5, 10, 20, 30, 40 or 50 minutes, up to one or several hours, e.g., 2, 3, 5, 10, 15, 20, 24, or 30 hours, or more. repetitive freeze-thawing for varying times, for example from several minutes up to several hours, may be employed to enhance the gel strength. the cryogels contain more than about 75% by wt water.
in specific embodiments, the cryogels contain at least about 90% by wt water and from about 1 to about 10% by weight of the vinyl alcohol co-polymer, more specifically at least about 95% by wt water and from about 1 to about 5% by weight of the vinyl alcohol co-polymer, and in some applications, more than about 96% by wt water and from about 1 to about 4% by weight of the vinyl alcohol co-polymer. the lower threshold concentration of vinyl alcohol co-polymer for cryogel formation is typically around 1% by wt. in a specific embodiment, the vinyl alcohol co-polymer concentration is about 3-4% by wt. should it be necessary, for example when using the invention to make firm biomedical implants, higher concentrations of the vinyl alcohol co-polymer may be used, for example up to about 25% by wt. however, without intending to be limited by theory, it is believed that the presence of ionizable groups in the acrylic acid and/or acrylamide co-monomers favors the formation of firm gel structures at even low polymer concentrations. in fact, very firm cryogel structures are formed at about 4% by wt vinyl alcohol co-polymer and 96% by wt water, whereas commonly available vinyl alcohol polymers form firm cryogels at about 14-16% by wt pva or more. as the vinyl alcohol co-polymer cryogels according to the invention are formed of emulsifier-free vinyl alcohol co-polymer and may be provided with a firm structure without conventional chemical crosslinking agents, such as glutaraldehyde, toxic components are avoided and the cryogels are advantageous for use in biomedical applications, for example in delivery systems for therapeutic agents and/or cosmetic agents, and as biomedical implants. in the event that firmer hydrogels are desired, traditional crosslinking techniques may be used to provide further rigidity to the cryogel through covalent binding after cryogel formation, i.e., by use of conventional chemical crosslinking agents, such as glutaraldehyde, or by irradiation.
alternatively, other methods to modulate the mechanical properties of vinyl alcohol co-polymer cryogels by using non-covalent binders can also be employed. for example, one or more amino acids may be included in the vinyl alcohol co-polymer solution prior to cryogel formation to act as a rheology modifier. suitable amino acids include, but are not limited to, isoleucine, alanine, leucine, asparagine, lysine, aspartate, methionine, cysteine, phenylalanine, glutamate, threonine, glutamine, tryptophan, glycine, valine, proline, serine, tyrosine, arginine, and histidine. such amino acids can also serve as a probiotic additive as discussed in further detail below. depending on the functional composition of the obtained gel, its properties may greatly vary, e.g. with respect to gel strength and/or rheology. depending on the functional composition, the produced cryogels may be soft (more suitable for topical preparations or preparations like, but not limited to, wrinkle fillers, and vaginal and rectal injectables) or rigid (more suitable for e.g. oral administration, rectal suppositories or vaginal pessaries), as discussed in further detail below. further, the functional composition can be varied to control biodegradability of the vinyl alcohol co-polymer cryogel. in one embodiment, the vinyl alcohol co-polymer cryogel is biodegradable, for example over a period of from 1 hour, several hours, one day, several days, one month or several months. in another embodiment, the vinyl alcohol co-polymer cryogel is non-biodegradable. it is well known that pva is generally biodegradable and does not cause kidney problems when the molecular weight is 18,000 daltons or below. the molecular weight of the vinyl alcohol co-polymers according to the present invention can be derived from rheology measurements using techniques well known by one of ordinary skill in the art. 
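the disclosure states only that molecular weight can be derived from rheology measurements; one conventional route is the mark-houwink relation between intrinsic viscosity and viscosity-average molecular weight. the sketch below is illustrative only: the constants k and a are assumed, generic placeholder values for a pva-type polymer in aqueous solvent, not values given in this disclosure, and real use requires constants calibrated for the specific co-polymer, solvent, and temperature.

```python
# minimal sketch of the mark-houwink equation, [η] = K * M**a,
# inverted to estimate a viscosity-average molecular weight from [η].
# K and a below are ASSUMED placeholder values, not from the patent.

def mark_houwink_mw(intrinsic_viscosity, K=6.62e-4, a=0.64):
    """Viscosity-average molecular weight (daltons) from [η] in dl/g."""
    return (intrinsic_viscosity / K) ** (1.0 / a)

# intrinsic viscosities of this magnitude (1-4 dl/g) are reported in the text
for eta in (1.5, 3.05):
    print(f"[η] = {eta} dl/g -> Mv ≈ {mark_houwink_mw(eta):.3g} Da")
```

note that, as the text cautions, branching and ionizable functional groups make such viscosity-based estimates unreliable for these co-polymers, so the numbers are indicative at best.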
due to the presence of ionizable functional groups and random branching of polymer chains, the vinyl alcohol co-polymers and cryogels produced according to the present invention may be biodegradable even when the molecular weight derived from viscosity measurements is well above 18,000 daltons. therefore, in vivo tests are necessary to verify the biodegradability of the cryogels for each specific polymer composition. however, it is the general understanding that the present cryogels are prone to biodegradation due to the physical character of bonding between adjacent polymer chains, unlike the covalent bonding produced with chemical cross-linking agents. therefore, as stated above, in order to obtain non-biodegradable gels, conventional crosslinking methods may additionally be applied after cryotropic treatment, e.g. by creating covalent bonding with chemical crosslinking agents, such as, but not limited to, glutaraldehyde, or by irradiation. in one embodiment, the resulting vinyl alcohol co-polymer cryogel may be subjected to freeze-drying according to conventional freeze drying techniques to obtain solid materials with well-defined pore structure. the porous materials may be used in various applications including, but not limited to, various solid dosage forms, for example, plugs, for delivery of therapeutic agents or cosmetic agents. the vinyl alcohol co-polymer cryogels of the invention may optionally be loaded with a therapeutic agent, a cosmetic agent, or a functional agent, as desired. for example, one or more drugs can be loaded into the vinyl alcohol co-polymer cryogels to obtain drug delivery vehicles. the loading of the agents as described may be performed either prior to or post cryogelation. in the former case, the vinyl alcohol co-polymer is dissolved in a solution of the desired agent, for example at a temperature of from about 50° c. to about 80° c., although other temperatures may be employed as appropriate.
in a specific embodiment, the vinyl alcohol co-polymer is dissolved in a solution of the desired agent at a temperature of from about 60° c. to about 75° c., or more specifically, from about 62° c. to about 71° c. alternatively, the agent may be dissolved in the vinyl alcohol co-polymer solution, for example, at a temperature of from about 50° c. to about 80° c., although other temperatures may be employed as appropriate. in a specific embodiment, the agent is dissolved in a solution of the vinyl alcohol co-polymer at a temperature of from about 60° c. to about 75° c., or more specifically, from about 62° c. to about 71° c. if the cryogel is to be sterilized by autoclavation, the drug dissolution and autoclavation can be combined in a single step. in this case, the dissolution is performed at the autoclavation temperature, typically between 100 and 144° c., depending on the steam pressure used. following the dissolution of the ingredients, and the autoclavation, if employed, the solution is poured into the desired form and frozen as described above. upon thawing, a loaded vinyl alcohol co-polymer cryogel formulation is obtained. alternatively, the desired agent may be incorporated in the cryogel after the cryogel has been formed, e.g. via soaking of the gel in a solution of the agent. such incorporation may be conducted prior and/or subsequent to any further crosslinking conducted after cryogel formation, e.g. prior and/or subsequent to any further crosslinking by irradiation, covalent and non-covalent binding. the loaded agent may comprise a therapeutic agent, a cosmetic agent, and/or a functional agent. examples of therapeutic agents include, but are not limited to, an analgesic, anesthetic, antibacterial, antifungal, anti-inflammatory, anti-itch, anti-allergic, antiemetic, immunomodulator, ataractic, sleeping aid, anxiolytic, vasodilator, bone growth enhancer, osteoclast inhibitor, or vitamin.
alternatively, the therapeutic agent may be an amino acid acting as a probiotic additive. in additional embodiments, the therapeutic agent may comprise a biological macrocomplex. non-limiting examples of biological macrocomplexes include plasmids, viruses, bacteriophages, protein micelles, and cellular organelles such as mitochondria. in a specific embodiment of the present invention, the vinyl alcohol co-polymer cryogel may be loaded with a sparingly soluble therapeutic agent. examples of cosmetic agents include, but are not limited to, coloring components and the like. examples of functional agents include, but are not limited to, a colorant, taste enhancer, preservative, antioxidant, or lubricant. a thiolated mucoadhesive enhancer may be employed as a functional agent to increase mucoadhesive properties of the cryogel. thiolated mucoadhesive enhancers are known in the art and include, but are not limited to, cysteine. additionally, the functional agent may comprise an amino acid rheology modulator as described above. as will be detailed below, such a functional agent may be added to a vinyl alcohol co-polymer solution prior to cryogelation or loaded onto the formed cryogel. specific embodiments of various loaded vinyl alcohol co-polymer cryogels are described in further detail below. the vinyl alcohol co-polymer cryogels may be employed in a variety of forms, for example as topical applications, as injectables, embedded as the cryogel or in the form of a dried plug in capsules (hard or soft), formed as thin films, for example for topical application, mucoadhesive films, for example for buccal or sublingual drug delivery, a suppository, e.g. for rectal or vaginal delivery, as a base for chewing gum, for example for drug or cosmetic agent delivery, biomedical implants (with or without loaded agent), and the like. specific, non-limiting examples are described below.
in one embodiment, the cryogel is freeze-dried and loaded with a therapeutic agent to form a solid plug. such a plug may be used in various applications. in one embodiment, the plug may be used as a floating drug delivery vehicle in the stomach. the drugs incorporated in this device may include, but are not limited to, caffeine, theophylline, diltiazem, propranolol hydrochloride, bipyridine, tramadol, and omeprazol. in another embodiment, the plug may be employed as a carrier for a nicotine inhalator, for example to provide smoke-free nicotine administration, for example to aid in smoking cessation. in a specific embodiment, the freeze-dried solid porous plugs of vinyl alcohol co-polymer cryogel are loaded with nicotine from a nicotine ethanol solution by rotary evaporation. the nicotine-loaded solid plug is then integrated in an inhalator device, for example a short pipe with a mouthpiece to allow imitation of the puffing action of smoking. another embodiment of the present invention includes a drug-containing topical formulation. for example, in a more specific embodiment, the topical formulation may be used, e.g., prior to or for wound treatments, for burn healing, for treatment of insect bites, for soreness in connection with breast feeding, or for rectal problems such as hemorrhoids and cracks. the combination of high water content and slow drug release in a loaded vinyl alcohol co-polymer cryogel according to the invention is highly advantageous for such topical preparations. the cryogel may be formed as a soft hydrogel which releases the incorporated drug slowly and has a very high water content. in a specific embodiment, the cryogel formulation is loaded with an antibiotic, antiseptic or antifungal drug and dried into a thin film which swells when in contact with exudates and therefore not only covers the wound but also exhibits good adhesion to its surface. the swelling of the film initiates the sustained release of the loaded drug.
suitable drugs include, but are not limited to, nitrofurazone, fusidic acid, mafenide, iodine, bacitracin, lidocain, bupivacain, levobupivakain, prilocain, ropivacain, mepivacain, and aloe vera. in another specific embodiment, the cryogel formulation is used in topical formulations for local pain relief, anti-inflammatory treatment or deep heating liniments. the drug may include, but is not limited to, diclofenac sodium, salicylic acid and methyl salicylate. in another embodiment, the vinyl alcohol co-polymer cryogel may be used for treatment of psoriasis, eczema, and other forms of dermatitis. the drug may include, but is not limited to, an interleukin-6 antagonist, an anti-inflammatory drug, corticosteroids, immunomodulators like pimecrolimus and tacrolimus, anti-itch drugs like capsaicin and menthol, and naloxone hydrochloride and dibucaine. an additional embodiment of the invention is directed to a mucoadhesive formulation. the mucoadhesive formulation may be used, for example, for buccal, palatal or sublingual drug delivery. the vinyl alcohol co-polymer cryogel is loaded with a drug and dried to a thin film form, which rapidly swells when in contact with water, thereby exhibiting excellent mucoadhesive properties. the mucoadhesion increases the retention time of the formulation in the oral cavity and ensures intimate contact with the underlying mucus and rapid onset of action. in a specific embodiment, the mucoadhesive formulation is formed from a cryogel prepared from the vinyl alcohol co-polymer as described, optionally including a thiolated mucoadhesive enhancer, for example, but not limited to, cysteine, which serves to enhance mucoadhesion.
in another specific embodiment, the mucoadhesive formulation contains, inter alia, ataractic, sleeping aid or anxiolytic drugs, examples of which include, but are not limited to, diazepam, oxazepam, lorazepam, alprazolam, buspirone, flurazepam, propiomazine, triazolam, nitrazepam, eszopiclone, zopiclone, modafinil, ramelteon, zaleplon, melatonin, valerian root, st. john's wort, restoril, sodium oxybate, midazolam, zolpidem, and diphenhydramine hydrochloride. another embodiment of the present invention is directed to a mucoadhesive film for palatal use for mild local anesthesia in dentistry. in a specific embodiment, the mucoadhesive film is formed from a cryogel prepared from the vinyl alcohol co-polymer as described, optionally including a thiolated mucoadhesive enhancer, for example, but not limited to, cysteine, which, as noted previously, serves to enhance mucoadhesion. suitable drugs may include, but are not limited to, anesthetics such as lidocain, bupivacain, levobupivakain, prilocain, ropivacain, and mepivacain. a further specific embodiment of mucoadhesive formulations contains a histamine antagonist useful, e.g., for rapid treatment of allergic reactions, motion sickness, nausea in pregnancy and cancer. anti-allergic substances include, but are not limited to, clemastine, fexofenadine, loratidine, acrivastine, desloratidine, cetrizine, levocetrizine, and mizolastine. antiemetic drugs useful for treatment of motion sickness and nausea may include, but are not limited to, promethazine, cinnarizine, cyclizine, and meclizine. another specific embodiment includes a mucoadhesive formulation for cardiac treatment. the drug may include, but is not limited to, a vasodilator such as isosorbide dinitrate or nitroglycerine. another embodiment of the present invention includes ph-sensitive cryogel systems, for example, for providing controlled drug release.
the sensitivity of the cryogel may be tuned by the optimal balance of introduced ionizable functional groups, which influence not only the strength of the produced gel but also the swelling behavior and thus the release of incorporated drugs at various ph. in a specific embodiment, the ph-sensitive cryogel system is a solid plug or cryogel that is optionally embedded in a soft or hard capsule. in a more specific embodiment, the ph-sensitive cryogel system is a vaginal hydrogel formulation. another embodiment of the present invention includes a drug-containing thermosensitive cryogel. the sensitivity of the cryogel to changes in temperature is achieved by introducing lipophilic side chains (e.g. methyl-, ethyl-, propyl-acrylate/acrylamide derivatives). in a more specific embodiment, the thermosensitive cryogel liquefies at body temperature. a further embodiment of the present invention includes a rectal cryogel formulation. the drugs may include, but are not limited to, indomethacin, paracetamol, diazepam, propranolol, and atenolol. the formulation has the advantage of having a high water content and suitable consistency. yet another embodiment of the present invention includes an injectable soft cryogel formulation for rectal use for treatment of ulcerative colitis. the drugs include, but are not limited to, 5-aminosalicylic acid (mesalazine) or its derivatives. another embodiment of the invention is in the form of a vaginal hydrogel formulation. the drug may include, but is not limited to, an antifungal or antibiotic such as ekonazole, metronidazol, and/or clotrimazol (klotrimazol). the formulation has the advantage of having a high water content and suitable consistency. another embodiment includes a formulation based on the cryogel material produced according to the present invention and useful as a vaginal solid plug. the drug may include, but is not limited to, ekonazole, metronidazol, and klotrimazol.
an additional embodiment of the present invention includes a formulation containing one or more analgesics loaded in the cryogel for pain relief. suitable analgesic drugs include, but are not limited to, morphine, codeine, oxycodone, fentanyl, thebaine, methadone, ketobemidone, pethidine, tramadol, propoxyphene, hydromorphone, hydrocodone, oxymorphone, desomorphine, diacetylmorphine, nicomorphine, dipropanoylmorphine, benzylmorphine, and ethylmorphine. in a further embodiment of the present invention, the vinyl alcohol co-polymer cryogel is formed as a biomedical implant, for example, an orthopedic implant. non-limiting examples of orthopedic implants include artificial disks, meniscus implants, and cochlear implants. the cryogel system is molded to the preferred shape and may optionally be loaded with a therapeutic agent. the polymer solution is sterilized prior to or post cryogelation, preferably prior to cryogelation and by autoclavation. non-limiting examples of drugs suitable for use include bone morphogenetic proteins, antibiotics such as gentamicin, tobramycin, amoxicillin and cephalothin, and bisphosphonates such as pamidronate, neridronate, olpadronate, alendronate, ibandronate, risedronate, and zoledronate. in a specific embodiment, the vinyl alcohol co-polymer solution from which the biomedical implant cryogel is formed contains one or more amino acids serving as both a rheology modulator and a probiotic additive. one or more of the amino acids noted above may be employed. in additional embodiments of the present invention, the vinyl alcohol co-polymer cryogel may be employed as a biodegradable implant, for example, for bone regeneration.
the cryogel may be molded to a desired shape as a macroporous tissue scaffold and may optionally be loaded with, for example, bone growth enhancers, drugs that inhibit osteoclast action and the resorption of bone, drugs that stimulate bone ingrowth, growth factors, and cytokines having the ability to induce the formation of bone and cartilage, as well as antibiotics. the co-polymer solution is sterilized prior to or post cryogelation, preferably prior to cryogelation and by autoclavation. non-limiting examples of drugs suitable for loading include bone morphogenetic proteins, antibiotics such as gentamicin, tobramycin, amoxicillin and cephalothin, and bisphosphonates such as pamidronate, neridronate, olpadronate, alendronate, ibandronate, risedronate, and zoledronate. in a specific embodiment, the vinyl alcohol co-polymer solution from which the biodegradable implant cryogel is formed contains one or more amino acids serving as both a rheology modulator and a probiotic additive. one or more of the amino acids noted above may be employed. another embodiment of the present invention is directed to biomedical implants comprising sterile implantable macroporous cryogels for delivery of biological macrocomplexes. non-limiting examples of biological macrocomplexes include plasmids, viruses, bacteriophages, protein micelles, and cellular organelles such as mitochondria. the polymer solution is sterilized prior to or post cryogelation, preferably prior to cryogelation and by autoclavation. in a specific embodiment, the vinyl alcohol co-polymer solution from which the biodegradable implant cryogel is formed contains one or more amino acids serving as both a rheology modulator and a probiotic additive. suitable amino acids include those described in detail above. a further embodiment of the present invention includes various sterile cosmetic biodegradable fillers and/or implants.
these systems for cosmetic use may be either firm (implantable) or soft (injectable). they may further be loaded with vitamins, e.g. c, e, or a, or other probiotic substances. non-limiting examples of biodegradable fillers and implants are wrinkle fillers, breast augmentation implants, butt implants, facial implants such as cheek implants, and the like. the following examples demonstrate non-limiting embodiments of various aspects of the invention.

example 1

in this example, a vinyl acetate co-polymer was prepared from the following dispersion:

vinyl acetate: 86 ml
acrylamide: 7.1 g
methacrylic acid: 8.6 g
nahco3: 1 g
ammonium persulfate: 0.3 g
water: 150 ml

a three-neck reaction vessel, connected to a chiller and a mixer, was placed in a water bath. the vessel was filled with 86 ml of vinyl acetate, 7.1 g acrylamide, 7.1 g of methacrylic acid, 1.0 g sodium hydrocarbonate, 140 ml water, and 0.3 g ammonium persulfate, previously dissolved in 10 ml of water. the reagents were allowed to stand under slow stirring for 5-6 hours at 64-70° c. until a white emulsion was formed and the residual monomer concentration did not exceed 0.4% by wt. the produced emulsion contained 40.1% by wt solids and exhibited a ph of 4.4 and a viscosity of 16.5 pa·s. the resulting viscous white copolymer emulsion was then chilled and further saponified in alkali medium using the following mixture, wherein the dispersion refers to the copolymer emulsion product:

dispersion: 180 ml
water: 240 ml
ethanol: 1800 ml
naoh: 24 g

specifically, 180 ml of the emulsion was diluted with 240 ml of water and loaded in the vessel containing 24 g of sodium hydroxide in 1800 ml of ethanol. at 20° c., a powdery pva precipitated and acetic acid was added to the mixture under stirring to neutralize the alkali. the vinyl alcohol co-polymer was then filtered and the crude product was thoroughly washed with ethanol and subsequently dried. 50.62 g of product was yielded. the viscosity of a 1% by wt solution of the obtained product at 20° c.
was 12.5 mpa·s, intrinsic viscosity [η]=1.5. the resultant vinyl alcohol co-polymer product contained 2.3% by wt acetate groups, 9.24% by wt carboxylate groups, 1.35% by wt carboxylic groups, and 4.75% by wt amide groups.

example 2

in this example, a vinyl acetate co-polymer was prepared from the following dispersion:

vinyl acetate: 86 ml
acrylamide: 7.1 g
acrylic acid: 7.2 g
nahco3: 0.75 g
benzoyl peroxide: 0.12 g
ammonium persulfate: 0.23 g
water: 152 ml

a three-neck reaction vessel as described was placed in a water bath and loaded with 86 ml of vinyl acetate, 7.1 g acrylamide, 7.2 g of acrylic acid, 0.75 g sodium hydrocarbonate, 190 ml water, 0.12 g of benzoyl peroxide, and 0.23 g ammonium persulfate, previously dissolved in 10 ml of water. the reagents were allowed to stand under slow stirring for 4 hours at 64-70° c. until a white emulsion was formed and the residual monomer concentration did not exceed 0.4% by wt. the produced emulsion contained 39.5% by wt solids and exhibited ph=3.8 and a viscosity of 38.5 pa·s. the viscous white emulsion was then chilled and further saponified in alkali medium using the following mixture, wherein the dispersion refers to the copolymer emulsion product:

dispersion: 180 ml
water: 240 ml
ethanol: 1500 ml
naoh: 24 g

180 ml of the emulsion was diluted with 240 ml of water and loaded in the vessel containing 24 g of sodium hydroxide in 1500 ml of ethanol. the emulsion was loaded into the reactor drop-wise and stirred. at 20° c., a powdery vinyl alcohol co-polymer precipitated and acetic acid was added to the mixture under stirring to neutralize the alkali. the vinyl alcohol co-polymer was then filtered and the crude product was thoroughly washed with ethanol and subsequently dried. 50 g of product was yielded. the viscosity of a 1% by wt solution of the obtained product at 20° c. was 50 mpa·s, intrinsic viscosity [η]=3.05. the resultant vinyl alcohol co-polymer product contained 3.97% by wt acetate groups, 8.55% by wt carboxylate groups, and 5.57% by wt amide groups.
example 3

in this example, a vinyl acetate co-polymer was prepared from the following dispersion:

vinyl acetate: 160 ml
acrylamide: 25 g
nahco3: 1.3 g
potassium persulfate: 0.5 g
water: 350 ml

a three-neck reaction vessel as described was placed in a water bath and was filled with 160 ml of vinyl acetate, 25 g acrylamide, 1.3 g sodium hydrocarbonate, 330 ml water, and 0.5 g potassium persulfate, previously dissolved in 20 ml of water. the reagents were allowed to stand under slow stirring for 3.5 hours at 64-70° c. until a white emulsion was formed and the residual monomer concentration did not exceed 0.4% by wt. the produced emulsion contained 32.5% by wt solids and exhibited ph=5.1 and a viscosity of 45.24 pa·s. the viscous white emulsion was then chilled and further saponified in alkali medium using the following mixture, wherein the dispersion refers to the copolymer emulsion product:

dispersion: 180 ml
water: 240 ml
ethanol: 1800 ml
naoh: 24 g

180 ml of the emulsion was diluted with 240 ml of water and loaded in the vessel containing 24 g of sodium hydroxide in 1800 ml of ethanol. the emulsion was loaded into the reactor drop-wise and stirred. at 20° c., a powdery vinyl alcohol co-polymer precipitated and acetic acid was added to the mixture under stirring to neutralize the alkali. the vinyl alcohol co-polymer was then filtered and the crude product was thoroughly washed with ethanol and subsequently dried. 37.46 g of product was yielded. the viscosity of a 1% by wt solution of the obtained product at 20° c. was 10.7 mpa·s, intrinsic viscosity [η]=1.61. the resultant vinyl alcohol co-polymer product contained 6.02% by wt acetate groups, 1.97% by wt carboxylate groups, and 13.63% by wt amide groups.

characterization of materials from examples 1, 2, and 3

the three vinyl alcohol co-polymer compositions described in examples 1-3 herein, denoted as pva-1, pva-2, and pva-3, respectively, were characterized to show the dependence of the cryogel properties on the functional composition.
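as a back-of-the-envelope check (our own, not part of the examples), the example 3 feed can be converted to weight fractions to confirm it sits inside the composition window discussed earlier. because the recipe gives vinyl acetate by volume, a handbook density of 0.934 g/ml for vinyl acetate is assumed here; it is not stated in the disclosure.

```python
# illustrative sketch: monomer weight fractions from a feed given
# partly by volume. the vinyl acetate density is an ASSUMED handbook value.

VINYL_ACETATE_DENSITY = 0.934   # g/ml, assumed

def monomer_wt_fractions(vinyl_acetate_ml, acrylamide_g, acrylic_acid_g=0.0):
    """Weight percent of each monomer in the feed."""
    va_g = vinyl_acetate_ml * VINYL_ACETATE_DENSITY
    total = va_g + acrylamide_g + acrylic_acid_g
    return {
        "vinyl acetate": 100.0 * va_g / total,
        "acrylamide": 100.0 * acrylamide_g / total,
        "acrylic acid": 100.0 * acrylic_acid_g / total,
    }

# example 3 feed: 160 ml vinyl acetate, 25 g acrylamide
fractions = monomer_wt_fractions(160, 25)
print(fractions)   # vinyl acetate ~85.7 wt%, acrylamide ~14.3 wt%
```

with the assumed density, the example 3 feed comes out at roughly 86 wt% vinyl acetate and 14 wt% acrylamide, consistent with the stated bounds (vinyl acetate above about 80 wt%, acrylamide below about 15-20 wt%).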
pva-1 was a copolymer of vinyl alcohol with acrylamide and methacrylic acid, pva-2 was a copolymer of vinyl alcohol with acrylamide and acrylic acid, and pva-3 was a copolymer of vinyl alcohol with acrylamide. the characteristic (intrinsic) viscosities of the samples in 0.05 m nano3 were 1.5, 3.05, and 1.61 dl/g for pva-1, pva-2, and pva-3, respectively, and the average degree of saponification was 94-97%. to form cryogels, 0.4 g of each vinyl alcohol co-polymer was dissolved in 10 ml of deionized water at 80° c. under stirring. the clear solution was then frozen at −22° c. overnight and thawed at room temperature. the produced gels were subsequently used for analysis. the vinyl alcohol co-polymer cryogels were visualized using environmental scanning electron microscopy (esem) (philips xl30 sem) equipped with a peltier cooling stage. no fixating agents such as glutaraldehyde or osmium tetraoxide were used. the cryogels were soaked in water and placed in the esem chamber. the temperature of the peltier stage was fixed at −7° c. to freeze the sample. when frozen, the pressure in the chamber was set to 5 mbar and acceleration voltage was applied (20 or 25 kv). the pressure in the chamber was then dropped to 1 mbar to induce sublimation of frozen water. esem is an electron microscopy technique which allows the examination of hydrated samples. however, when pores in a hydrogel are filled with water, which constitutes up to 96% of mass, their structure is difficult to visualize. further, due to capillary forces, the structure collapses into a dense compact upon removal of water. the use of fixating agents such as glutaraldehyde and osmium tetraoxide, which are commonly used in electron microscopy, is not preferable because glutaraldehyde is a known cross-linking agent for pva. to avoid the collapse of structure upon drying, the gel samples were first frozen, and then water was sublimed, leaving the pore structure intact. in figs.
1a-1c, the gel macropore structure can be visualized. as is seen in these pictures, the pores in the pva-1 sample were on the order of 10-15 μm, and an open sponge-like structure was clearly visible. the pores in the pva-2 sample were considerably smaller than in pva-1 (about 7 μm). further, the pore walls were thicker than in pva-1 and their distribution appeared denser. the smallest pores were observed in pva-3, which were in the range of 2 μm only. the pore walls were generally thin though occasionally thick structures were visible. the thermal properties of the co-polymers were studied. a seiko dsc 220 (ssc/5200 h, seiko, japan) was used for differential scanning calorimetry (dsc). the instrument was calibrated for melting point tm (° c.) and heat of fusion δhm (j/g) of indium (156.60° c.; 28.59 j/g), tin (232° c.; 60.62 j/g), gallium (29.80° c., 80.17 j/g), and zinc (419° c., 111.40 j/g). the experiments were performed in n2 atmosphere. the heating rate was 10° c./min. the original co-polymer samples were carefully weighed in aluminum pans with cover (ta instruments, delaware, usa). empty pans were used as reference. a tga/sdta 581e (mettler toledo, switzerland) instrument was used for thermogravimetric analysis (tga). the experiments were performed in air atmosphere. the heating rate was 10° c./min. the original polymer samples were carefully weighed in open 70 μl aluminum oxide crucibles. the amount of moisture was calculated as weight loss at 100° c. in fig. 2, the dsc plot is presented (the lower plot is the first derivative). because cryogels consisted of 96% by wt water, the dsc and tga profiles would be totally dominated by evaporation of water. therefore, dsc and tga analysis were performed on the original co-polymers without cryotropic gelation. prior to dsc analysis, the samples were cooled to −30° c. as the temperature is raised, the water present in the sample first melts and then starts to evaporate.
at around 100° c., there is a large endothermic peak seen in all samples corresponding to water evaporation. upon further heating, at around 230° c., there is the second large endothermic peak seen, which corresponds to a melting point of pva. further heating induces pyrolysis. it should be noted that at least in two samples (pva-2 and pva-3) there is a small phase transition peak detected at around 40° c. the glass transition temperature of pure pva is 81° c., and the depression of tg in these samples could be a complex response to plasticizing action of moisture and presence of functional groups. in figs. 3a and 3b , the tga results are presented (the lower plot is the first derivative). the tga revealed that the moisture content of pva-1, pva-2, and pva-3 was 7.25, 11.05, and 4.13% by wt, respectively. dynamic mechanical thermal analysis (dmta) was performed to characterize the rheological properties of the cryogels. this was done using a controlled rate instrument of the couette type in the dynamic oscillation mode (bohlin vor reometer, bohlin reologi, sweden) at 1 hz. the measuring system was a concentric cylinder, c14 type. a 4.13 mm torsion wire was used for analysis. silicone oil was used on the top of the sample to prevent evaporation. it was ascertained that the applied strain was in the linear viscoelastic region. the measurements were conducted at 20, 30, 40, 50, 60, and 70° c., respectively. the phase angle δ is defined as follows: tan δ= g″/g′ (1) where g′ is the elastic (storage) modulus and g″ is the viscous (loss) modulus. in figs. 4a and 4b , the rheological properties of the vinyl alcohol co-polymer cryogels are presented. in fig. 4a , the elastic modulus of the cryogels is plotted as a function of temperature. in the pva-1 and pva-2 samples, the elastic modulus g′ is constant in the range between 10 and 40° c., whereas at higher temperatures the value of g′ is falling. 
in the pva-3 sample, the drop in the value of the elastic modulus g′ occurs at around 50° c. the higher the value of g′, the more resilient is the gel. it is seen from this plot that pva-1 formed the weakest gel in the series. the strongest gels were formed by pva-3 followed by pva-2. in fig. 4b , the phase angle δ is plotted as a function of temperature. traditionally, the temperature at which the phase angle δ exhibits a maximum is defined as the glass transition temperature of a polymer. it should be noted that, at temperatures above 80° c., the polymers are completely dissolved. it can thus be concluded from this plot, that pva-1 exhibits a tg at around 50° c., whereas the tg values for pva-2 and pva-3 are about 70° c. and 80° c., respectively. however, the onset of the phase transition in pva-2 and pva-3 samples is observed at around 50° c., which is in accordance with the dsc results in fig. 2 . the values of phase angle below 10° are typical for strong gel structures. in the range between 10 and 40° c., it is seen that pva-1 forms very weak gel structures as indicated by both high values of phase angle δ and low values of elastic modulus g′. the low values of phase angle δ for pva-3 in the range between 10 and 50° c. are indicative of a strong gel, whereas the gel properties of pva-2 are intermediate between pva-1 and pva-3. it should be noted that stronger gel structure was associated with smaller pore size and higher density of pores visualized in esem. in order to investigate the drug release properties from cryogels, saccharine sodium, as a model substance, was loaded in the cryogels. 50 mg of saccharine sodium was dissolved in deionized water and the total volume was brought to 50 ml. 5 ml of stock solution was placed in a 10 ml glass vial and 0.2 g of the vinyl alcohol co-polymer was added. the solution was heated to 80° c. until the co-polymer was dissolved and frozen at −22° c. overnight. the samples were thawed at room temperature. 
the produced cryogel samples were cylindrical in shape (2 cm in height; 2.3 cm in diameter). a glass beaker was filled with 100 ml of deionized water and heated to 30 or 50° c., respectively. the cryogel samples were placed in the beaker and the saccharine sodium release was monitored with a uv spectrophotometer (uv 1650pc, shimadzu, japan) at 270 nm. in figs. 5a and 5b, the release profiles of saccharine sodium from the vinyl alcohol co-polymer cryogels are presented. saccharine sodium was released most rapidly from the pva-3 sample followed by pva-1. the slowest release profiles were observed in pva-2. no direct correlation with the gel strength or pore size was found. it has previously been observed that depending on the chemical nature of the drug substance, various interactions between the pva and drug can be observed. because saccharine sodium is an ionized molecule, the differences in the release profiles could be due to various electrostatic interactions with the pva-composites. to verify that incorporated drugs are being homogeneously distributed in the cryogel formulation, fluorescein sodium was incorporated in pva-3 in the same way as described above for saccharine sodium incorporation. 5 wt % cryogel was used. fig. 6 shows that the yellow dye was homogeneously dispersed in the cryogel matrix. example 4 in this example, a vinyl acetate co-polymer was prepared from the following dispersion:

vinyl acetate: 86 ml
acrylic acid: 14.5 g
nahco3: 0.8 g
potassium persulfate: 0.1 g
water: 150 g

a three-neck reaction vessel as described was placed in a water bath and was filled with 86 g of vinyl acetate, 14.5 g of acrylic acid, 0.8 g sodium bicarbonate, 140 ml water, and 0.1 g potassium persulfate, previously dissolved in 10 ml of water. the reagents were allowed to stand under slow stirring for 4.5 hours at 64-70° c. until a white emulsion was formed and the residual monomer concentration did not exceed 0.4% by wt.
the produced emulsion contained 40.1% by wt solids and exhibited ph=3.2 and viscosity of 9.78 pa·s. the viscous white emulsion was then chilled and further saponified in alkali medium using the following mixture, wherein the dispersion refers to the copolymer emulsion product:

dispersion: 30 ml
water: 30-40 ml
ethanol: 300 ml
naoh: 4 g

30 ml of the emulsion was diluted with 40 ml of water and loaded in the vessel containing 4 g of sodium hydroxide in 300 ml of ethanol. the emulsion was loaded into the reactor drop-wise and stirred. at 20° c., a powder vinyl alcohol co-polymer was precipitated and acetic acid was added to the mixture under stirring to neutralize the alkali. the vinyl alcohol co-polymer was then filtered and the crude product was thoroughly washed with ethanol and subsequently dried. 6.1 g of product was obtained. the viscosity of a 1% by wt solution of the obtained product at 20° c. was 46.02 mpa·s, intrinsic viscosity [η]=1.5. the resultant pva product contained 6.49% by wt acetate groups and 21.17% by wt carboxylate groups. example 5 the vinyl alcohol co-polymers in examples 1, 2 and 3 (i.e. pva-1, pva-2 and pva-3) were dissolved in water at 64° c. to produce 5% vinyl alcohol co-polymer solutions. the zolpidem drug was added to the above solution. 3 ml of the obtained solution containing 5 mg of zolpidem was poured into a cylindrical form and frozen overnight at −30° c. the forms were thawed at room temperature to obtain ready-to-use cryogels for oral use. similarly, solid plugs were produced by freeze drying overnight. the release profiles of zolpidem from the different formulations and at different ph values are shown in figs. 7a and 7b. example 6 the material in example 3 (pva-3) was dissolved in water at 64° c. to produce 5% modified vinyl alcohol co-polymer solution. the diazepam drug was added to the above solution. 3 ml of the obtained solution containing 5 mg of diazepam was poured in a shallow form and frozen overnight at −30° c.
the form was thawed at room temperature and dried to a constant mass to produce a thin film (0.2-0.5 mm). the film is ready for use as a mucoadhesive drug delivery vehicle for buccal use. figs. 8a and 8b show the physical appearance of the film prior to use as well as at the site of application in vivo. example 7 the material in example 2 (pva-2) is loaded with theophylline, freeze-dried and formulated in hard capsules. 5% by wt cryogel is used, and the drug is incorporated as described in example 5. the formulation is intended for a floating drug delivery vehicle in the stomach. example 8 the material in example 1 (pva-1) is loaded with lidocaine and is for use in a topical preparation for burn healing. the inventive vinyl alcohol co-polymer is used to form a soft hydrogel which releases the incorporated drug slowly and which has significantly higher water content than its analogues. 3.5% by wt cryogel is used. the drug is incorporated as in example 5. example 9 the material from example 2 (pva-2) is loaded with nitrofurazone for wound healing. 5% by wt cryogel containing the drug is prepared and molded in thin films (0.2-0.5 mm), which are then dried to a constant mass. the film is intended for use as a mucoadhesive drug delivery vehicle which swells when in contact with exudates from the wound, which in turn commences the release of nitrofurazone. example 10 the material in example 1 (pva-1) is loaded with diclofenac sodium and is intended for use as a topical preparation for local pain relief. 4% by wt cryogel is used. the drug is incorporated as in example 5. example 11 the material in example 1 (pva-1) is loaded with an interleukin-6 (il-6) antagonist (sample 11a) and cortisone (sample 11b), for psoriasis treatment. 3.8% by wt cryogel is used. the drug is incorporated as in example 5. example 12 the material in example 3 (pva-3) is loaded with indomethacin to form a rectal firm hydrogel formulation.
the formulation has the advantage of having high water content and suitable elasticity. 7% by wt cryogel is used. the drug is incorporated as in example 5. example 13 the material in example 3 (pva-3) is loaded with metronidazole to form a firm hydrogel formulation for vaginal application. the formulation has the advantage of having high water content and suitable firmness and elasticity. 7% by wt cryogel is used. the drug is incorporated as in example 5. example 14 in this example, a vinyl acetate co-polymer was prepared from the following dispersion using the procedure of example 1:

vinyl acetate: 160 ml
ethacrylamide: 25 g
nahco3: 1.3 g
potassium persulfate: 0.5 g
water: 350 ml

the product was saponified in alkali medium using the following mixture, wherein the dispersion refers to the copolymer emulsion product, using the procedure of example 1:

dispersion: 180 ml
water: 240 ml
ethanol: 1800 ml
naoh: 24 g

7% by wt polymer hydrogel is used as a thermoresponsive gel-forming matrix for rectal drug administration of indomethacin. example 15 the material in example 4 is used to produce biodegradable injectable fillers (for example, wrinkle fillers, etc.) having a soft consistency. 4% by wt polymer cryogel is used. prior to cryogelation, the polymer is autoclaved and sterilized. in one trial, the filler is loaded with vitamin c. example 16 a biodegradable implantable material produced as in example 3 (pva-3) is molded as a macroporous tissue scaffold. bmp-2 (bone morphogenetic protein-2) is contained in the cryogel. prior to cryogelation, the polymer was autoclaved and sterilized. 9% by wt cryogel is used and the drug is incorporated as in example 5. example 17 a biodegradable implantable material produced as in example 3 (pva-3) is molded as a firm cosmetic filler to be used as a butt implant. 9% by wt cryogel is used. a drug may be incorporated as in example 5. example 18 the material from example 2 (pva-2) is loaded with isosorbide dinitrate for buccal drug delivery.
5% by wt cryogel containing the drug is prepared and molded in thin slabs, which are then dried to a constant mass. the film is intended to be used as a mucoadhesive buccal drug delivery vehicle. example 19 the material from example 2 (pva-2) is loaded with 5-aminosalicylic acid (mesalazine) intended to be used as an injectable for treatment of ulcerative colitis. 3% by wt cryogel is used. the drug is incorporated as in example 5. example 20 the material from example 2 (pva-2) is loaded with fentanyl citrate intended for use in soft capsules for sustained release and chronic pain relief. 5% by wt cryogel is used. the drug is incorporated as in example 5. the specific examples and embodiments described herein are exemplary only in nature and are not intended to be limiting of the invention defined by the claims. further embodiments and examples, and advantages thereof, will be apparent to one of ordinary skill in the art in view of this specification and are within the scope of the claimed invention.
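the dmta analysis above rests on the phase-angle relation tan δ = g″/g′ (eq. 1), with phase angles below about 10° taken as typical for strong gel structures. a minimal sketch of that calculation follows; the numeric moduli used below are hypothetical illustrative values, not measurements from the examples:

```python
import math

def phase_angle_deg(g_prime, g_double_prime):
    # Phase angle delta in degrees from tan(delta) = G''/G' (Eq. 1),
    # where G' is the elastic (storage) modulus and G'' the viscous
    # (loss) modulus.
    return math.degrees(math.atan2(g_double_prime, g_prime))

def is_strong_gel(g_prime, g_double_prime, threshold_deg=10.0):
    # Per the DMTA discussion, phase angles below ~10 degrees are
    # typical for strong gel structures.
    return phase_angle_deg(g_prime, g_double_prime) < threshold_deg

# Hypothetical moduli (Pa) for illustration only.
print(phase_angle_deg(1000.0, 50.0))  # small delta, strong gel
print(is_strong_gel(100.0, 100.0))    # delta = 45 degrees, not a strong gel
```

the same comparison across temperatures would reproduce the qualitative ranking in figs. 4a and 4b, where pva-3 shows the lowest phase angles and pva-1 the highest.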
135-438-292-260-148
US
[ "CA", "GB", "US" ]
B65D75/32,B65D75/34,B65D83/04,B65D75/36,A61J1/03,A61J1/00,A61J7/04
2002-10-11T00:00:00
2002
[ "B65", "A61" ]
product packaging material for individual temporary storage of pharmaceutical products
an improved temporary pharmaceutical product packaging solution provides users with multiple individual cavities that may be filled with multiple individual doses of various pharmaceuticals and subsequently sealed by the user for future consumption of the pharmaceuticals. in a preferred exemplary embodiment, the user is able to simultaneously seal a plurality of individual cavities with a sheet of cover material after the individual cavities have been filled with the desired pharmaceuticals. in a preferred exemplary embodiment, the cover material is reverse printed with dosing information for an individual.
1. packaging for a plurality of temporary pharmaceutical product packages comprising: a support member; a plurality of sheets of material, each sheet having a plurality of cavities formed therein, the sheets of material are stacked such that the cavities of each sheet are in registration with adjacent sheets in the stack and wherein the cavities are located within holes formed in the support member; and a plurality of sheets of cover material stacked adjacent to the sheets of material having cavities formed therein, wherein each sheet of cover material is secured to a corresponding sheet of material having cavities formed therein such that the sheet of cover material folds over to seal a plurality of cavities; and further wherein each sheet of cover material is further comprised of an adhesive layer formed thereon which is selectively positioned such that when the sheet of cover material is folded over the cavities, the adhesive surrounds each cavity.
background of the invention 1. field of the invention the present invention relates generally to the field of pharmaceutical product packaging materials. more specifically, the present invention is directed to a product packaging solution for packaging a plurality of individual temporary storage packages for solid pharmaceutical products. 2. description of the related art there are currently a wide variety of pharmaceutical product packaging solutions available for the temporary storage of pharmaceutical products. this is due to the fact that there are many people under the care of physicians who are required to take numerous prescription drug products on any given day and in some instances individuals are required to take multiple doses of medication throughout the day. in most instances, an individual receives one or more prescriptions from a doctor and a pharmacy provides a supply of the required pharmaceuticals in a single container. thus, when an individual is required to take numerous pharmaceutical products throughout a given day, the individual is required to access each of the individual storage containers for the various pharmaceutical products. while this is not terribly inconvenient when an individual is taking a single medication, it does become problematic when the person is required to take multiple medications in a single day and is particularly troublesome when the person is required to take multiple medications at various times throughout the day. especially with the significant increase in the aging population, it has become ever increasingly common for individuals to be required to take multiple doses of multiple medications at various times throughout the day. it is not uncommon for individuals to be required to take five or more different products at any given time during the day.
with these increasingly common regimens of pharmaceutical product doses, it can become difficult for users to ensure that the appropriate medications are taken at the correct times. without assistance, the requirement to take these multiple medications at various times throughout a given day can be confusing for individuals. an individual on such a regimen of pharmaceuticals can easily forget whether a particular required dosage was taken at a given time. as a result, it is not uncommon for patients to receive either more or less than the required or specified doses of their medicines. another problem arises when an individual who has received multiple prescriptions for pharmaceuticals is away from home for a given period of time. if such an individual will be gone for a number of days, the person must either take all of the medications for all of the various prescriptions along or the user is required to selectively remove the required doses for a given period of time from the original packaging. this can be more than a minor inconvenience, especially when the travel is unexpected or otherwise on short notice. although there are a number of solutions for temporarily storing the pharmaceutical products that are currently available, none provides users with a very convenient disposable temporary storage device for multiple prescriptions. accordingly, there remains a need in the field for improved temporary pharmaceutical product package storage devices that provide users with the ability to selectively temporarily store doses of individual prescriptions in a convenient disposable package. other objects and advantages of the present invention will be apparent in light of the following summary and detailed description of presently preferred embodiments.
summary of the invention the present invention is directed to temporary pharmaceutical product package storage devices that enable users to selectively prepare individual disposable dosage packets with one or more various pharmaceuticals to be consumed on or within a specified period of time. the packaging solutions of the present invention specifically provide users with the ability to store all of the required doses of various medications in disposable containers. advantageously, users can utilize the packaging solutions of the present invention in order to pre-fill individual disposable dosage packets with the required pharmaceuticals for a given day, or portion of the day. in one preferred exemplary embodiment of the present invention, users are able to selectively prepare individual disposable dosage packets with all of the dosing requirements of one or more pharmaceuticals for a particular week or other period of time. this enables a user to travel away from home without taking all of the medication containers for all of the prescription pharmaceuticals and/or vitamins that the patient is taking. desirably, users are able to selectively fill packages and use the pre-filled disposable packages at some later point in time. in accordance with the preferred exemplary embodiment of the present invention, the pharmaceutical product packaging solution of the present invention provides a thick member that in the preferred exemplary embodiment has a depth that is preferably greater than the depth of the disposable package members. the thick member has a plurality of holes arranged in a predetermined fashion to receive a corresponding plurality of package member front or top portions that are comprised of preferably clear molded plastic members that are formed from sheets of plastic material.
in accordance with the preferred exemplary embodiment, preferably a plurality of sheets of the molded clear plastic material, each of the sheets having a plurality of individual dosage cavities formed therein, are stacked in the thick member such that the cavity portions fall within the corresponding holes of the thick member. the thick member is used in order to ensure that the individually formed cavities are not damaged during shipping or other handling of the packaging material of the present invention. additionally, the thick portion of the overall packaging solution acts as a die to provide peripheral support around the individual package member cavities. thus, it enables a user to readily fill the individual cavities with the desired solid pharmaceuticals and/or vitamins and also provides a relatively sturdy perimeter for sealing the individual cavities with a cover sheet. the overall packaging solution of the present invention also preferably includes backing material that may be used for sealing the individual dosage cavities. in that regard, a plurality of sheets of material is provided, at least a portion of each sheet containing adhesive or temporarily coated adhesive, so that the sheets of material may be utilized for sealing the individual dosage packets. with the instant pharmaceutical product packaging solution of the present invention, users are able to selectively fill the individual dosage packets and subsequently seal them for later use. in accordance with the preferred exemplary embodiment of the present invention, sheets of the sealing portions are located adjacent to the sheets of cavity members. as a result, users are able to remove the backing covering the portions of the sheets containing adhesive and simply fold the sheets over to cover and seal the individual dosage packets. in a further preferred exemplary embodiment, the sheets of cover material are reverse printed with dosing information for the pharmaceuticals that have been inserted by the user.
as a result, by folding over the cover material to seal the individual dosage packets, the clear plastic sheet from which the individual cavities are formed allows the user to see the printed material indicating the dosing information. users may choose between daily indications or multiple daily indications for dosing requirements. for example, this information may include consumption day information and/or identification of the time at which a dose should be administered. brief description of the drawings fig. 1 illustrates a first preferred exemplary embodiment of the present invention; fig. 2 is an illustration that shows separation of the cover material and cavity members from the thick support member in accordance with a preferred exemplary embodiment of the present invention; fig. 3 illustrates a step in the process of filling and sealing individual cavity members; fig. 4 illustrates a further step in the process of filling and sealing individual cavity members; fig. 5 illustrates a plurality of filled and sealed cavity members in accordance with a preferred exemplary embodiment of the present invention; fig. 6 illustrates a plurality of clear plastic cover members with cavities formed therein in accordance with a preferred exemplary embodiment of the present invention. detailed description of the presently preferred embodiments fig. 1 illustrates a first exemplary embodiment of the present invention which is shown generally at 10 . in accordance with the first preferred exemplary embodiment, a thick support member 12 is provided for receiving a plurality of sheets of dosage packets forming members. the thick support member 12 is preferably formed of foam or plastic and ensures that the pre-formed cavity members are not crushed or otherwise damaged during shipping or other processing of the packaging device of the present invention. 
furthermore, as noted above, this thick member acts as a die and provides support around the perimeter of the individual cavities to aid users in sealing the individual cavities with the adhesive coated covering material. this thick member is preferably formed from lightweight material in order to ensure that the overall packaging product is not unnecessarily heavy. as shown in fig. 1, a plurality of sheets of material 14 having cavities 15 formed therein are located within corresponding holes in the thick support member 12. in accordance with a preferred exemplary embodiment, the sheets 14 are formed of clear plastic and the cavities 15 are molded therein preferably by stamping the cavities into the heated clear plastic sheets as is known in the art. in the preferred exemplary embodiment, sheets of cover members 16 are located adjacent to the sheets of preferably clear plastic 14 having the cavities 15 formed therein. in the preferred exemplary embodiment, the cover members 16 each include scoring or perforations that define a perimeter 17 which enables a user to easily remove the central portion of the cover members 16 in order to gain access to the cavities 15 when the cavities have been sealed with the cover members 16. those skilled in the art will appreciate that clear plastic need not be used for the formation of the sheets 14 having the cavity members 15 formed therein. virtually any material will suffice. all that is necessary is that there be sufficient border around each of the cavities 15 to allow a user to seal the cavity member. multiple sheets of packaging material along with the thick support member 12 are located within the product cover member 18. the product cover member 18 is also preferably formed of plastic and is preferably clear in order to allow consumers to readily identify the product contained within the package 18. fig.
2 illustrates the temporary pharmaceutical product packaging solution of the present invention wherein the sheets of material 14 having cavities formed therein 15 and the corresponding cover members 16 are separated from the thick support member 12 . this view illustrates the holes 13 formed within the thick member 12 for receiving individual cavity members 15 formed in the sheets of material 14 . as noted above, this relationship ensures that the thick support member 12 protects the cavity members 15 from damage during processing and shipping. as noted, it may also act as a die in order to provide support around the perimeter of each individual product package cavity when a user seals the cavity. fig. 3 illustrates a step in the process of filling and sealing individual temporary storage members that are useful for storing multiple doses of multiple pharmaceutical products. those skilled in the art will appreciate that the packaging solution of the present invention may also be useful in other areas where temporary disposable packaging is required. as shown in fig. 3 , one sheet of material 14 having cavities 15 formed therein is located within the thick support member 12 such that the cavities 15 are located within corresponding holes 13 in the packaging solution of the present invention. as shown in fig. 3 , a user is able to locate a plurality of various pharmaceutical products 19 within each of the cavities 15 . fig. 3 specifically illustrates removal of backing material 21 from the cover members 16 . as shown in this illustration of a preferred exemplary embodiment, the backing material 21 protects an adhesive border located on the cover sheet 16 . the adhesive border corresponds to a perimeter of the sheets 14 surrounding each of the cavities 15 so that the cover members 16 may be secured to the sheets 14 such that each of the individual cavities 15 have been preferably sealed. fig. 
4 illustrates a further step in the sealing process wherein the sheets of cover members 16 have been folded over the sheets 14 having the cavities formed therein. this step results in the sealing of the individual cavities. those skilled in the art will appreciate that this is the preferred exemplary embodiment and that other configurations are possible as well. for example, in an alternate exemplary embodiment of the present invention, the sheets 14 having the cavities 15 formed therein are at least substantially coextensive with the thick support member 12 and there are preferably multiple rows of cavities. in such an alternate embodiment, the cover members 16 are simply stacked on top of the sheets 14 and a user simply selectively applies a single sheet of cover members 16 at a time for sealing the cavities 15 in an upper sheet. specifically, it is not necessary for the cover members 16 to be located adjacent to the sheets 14 so that they may be folded over thereon. folding is only described with respect to the preferred exemplary embodiment. fig. 5 is a top plan view which illustrates a plurality of sealed cavities 15 containing multiple pharmaceuticals 19. the product packaging solution of the present invention provides users with a simple device for temporarily storing doses of multiple pharmaceuticals for later consumption. as shown in fig. 5, the cover material 16 may be reverse printed with dosing indications for a user. alternately, the dosing information can be printed on either side of the cover member or even on the cavity in order to provide dosing indications for users. as noted, this may include identification of a particular day of intended consumption and/or the specific times at which the medication should be consumed. this provides a simple mechanism for a user to identify the appropriate time for taking one of the packages of medications. fig. 5 also illustrates perforations or scoring 22 between individual dosage packets.
this is a further convenience so that only a limited number of packages may be removed, providing convenient access for a user. fig. 6 is a top plan view that illustrates one of the sheets 14 having the cavities 15 formed therein. as shown in fig. 6, the sheets 14 also preferably include scoring or perforations between cavities 15 so that the user may readily separate individual dosage packets. as noted above, in the preferred exemplary embodiment, the sheets 14 are formed from a clear plastic material and the cavities 15 are simply molded therein as is known in the art. it is contemplated that other materials may be useful as well. the present invention has been described with respect to the preferred exemplary embodiment of the present invention. it is contemplated that various substitutions and modifications may be made to the specific devices and structures disclosed herein but will nonetheless fall within the scope of the present invention as described in the appended claims.
136-515-099-252-149
US
[ "US" ]
F02D29/02
2015-08-28T00:00:00
2015
[ "F02" ]
method of operating current controlled driver module
a method of operation of a current control driver module for an engine system is provided. the method includes selecting a mode of operation of the current control driver module. the method also includes performing any one of a peak current control and an average current control by the current control driver module based on the selected mode of operation of the current control driver module.
1 . a method of operation of a current control driver module for an engine system, the method comprising: selecting a mode of operation of the current control driver module; and performing any one of a peak current control and an average current control by the current control driver module based on the selected mode of operation of the current control driver module.
technical field the present disclosure relates to an engine system, and more particularly to a method of operating a current control driver module of the engine system. background engine applications, such as an operator fan or a coolant fan, run on varying control modes. different engine applications use different types of current-controlled drivers that perform either peak current control or average current control. average current control is typically a software-based algorithm, and peak current control is typically done via hardware due to response time requirements. thus, in order to implement both peak and average current control functionality, two separate sets of hardware configurations need to be associated with the system, which is expensive and hard to implement on an engine system. u.s. pat. no. 4,964,014, hereinafter referred to as the '014 patent, describes a solenoid current control system that includes a microprocessor. the microprocessor periodically generates a desired peak current value and energizes the solenoid coil. current through the coil is sensed via a series resistor and the sensed current is compared to the desired peak current by a comparator. the comparator generates an interrupt signal when the sensed current reaches the desired peak value. the interrupt signal is applied to the microprocessor which responds by de-energizing the coil. however, the '014 patent does not address providing a compact system which can offer dual functionality of peak and average current control. summary of the disclosure in one aspect of the present disclosure, a method of operation of a current control driver module for an engine system is provided. the method includes selecting a mode of operation of the current control driver module. the method also includes performing any one of a peak current control and an average current control by the current control driver module based on the selected mode of operation of the current control driver module.
other features and aspects of this disclosure will be apparent from the following description and the accompanying drawings. brief description of the drawings fig. 1 is a schematic view of an exemplary engine system, according to one embodiment of the present disclosure; fig. 2 is a schematic diagram of an exemplary current control driver module including a microprocessor, a driver control, and a load, according to one embodiment of the present disclosure; and fig. 3 is a flowchart of a method for operating the current control driver module, according to one embodiment of the present disclosure. detailed description wherever possible, the same reference numbers will be used throughout the drawings to refer to the same or the like parts. fig. 1 is a schematic view of an exemplary engine system 101, according to one embodiment of the present disclosure. the engine system 101 includes an engine 103. the engine 103 may embody any one of a compression ignition engine, a spark-ignition engine, or other combustion engines known in the art. in various examples, the engine 103 may have a capacity of 7 liters, 9 liters, or the like, based on operational requirements. the engine 103 may be utilized for any suitable application such as motor vehicles, work machines, locomotives or marine engines, and in stationary applications such as electrical power generators. in one example, the engine 103 may include various sensors associated therewith. for example, a temperature sensor (not shown) may be associated with the engine 103. the temperature sensor may generate a signal indicative of a surface temperature of the engine 103. the engine 103 includes a radiator fan 105. the radiator fan 105 is coupled at a front end of the engine 103. the radiator fan 105 is configured to cool the engine 103 by forcing cooling air over the engine 103. further, a sensing element (not shown) may be coupled to the radiator fan 105.
the sensing element may generate signals indicative of an operational status of the radiator fan 105. in one example, the radiator fan 105 is communicably coupled to a current control driver module 100. the current control driver module 100 controls current supply to the radiator fan 105. more particularly, the current control driver module 100 is an interface that controls the current supplied to the radiator fan 105. further, in other examples, the application of the current control driver module 100 may be extended to control the supply of current to a wide number of electrical components, including, but not limited to, various parts of the engine system 101 such as valves, pumps, etc. the current control driver module 100 will now be explained in detail with reference to fig. 2. fig. 2 is a schematic diagram of the exemplary current control driver module 100, according to one embodiment of the present disclosure. the current control driver module 100 is communicably coupled with the temperature sensor associated with the engine 103 and the sensing element associated with the radiator fan 105. further, the current control driver module 100 receives signals from the temperature sensor and the sensing element. the current control driver module 100 includes a microprocessor 102. the microprocessor 102 includes an analog to digital convertor port 104. the analog to digital convertor port 104 is configured to receive input signals. the input signals are received in analog format and converted into digital format. the microprocessor 102 also includes a number of pins, for example, a general purpose input output pin 106, hereinafter referred to as gpio pin 106. in the present embodiment, the microprocessor 102 is coupled to a single gpio pin 106. alternatively, the microprocessor 102 may include a number of gpio pins 106. the gpio pin 106 can be configured either for an output signal or an input signal.
the microprocessor 102 also includes another pin embodied as a microprocessor pin 107. the microprocessor pin 107 is configured to send digital signals. the current control driver module 100 includes a high side gate driver circuitry 108 and a low side gate driver circuitry 112. the high side gate driver circuitry 108 is in periodic communication with the microprocessor 102 via a line 110. the high side gate driver circuitry 108 is modulated to regulate the current. the high-side gate driver circuitry 108 receives digital signals from the microprocessor pin 107. the low side gate driver circuitry 112 is also communicably coupled to the microprocessor 102 via a control signal line 114. the high-side gate driver circuitry 108 is connected to a high side mosfet 116. the high-side gate driver circuitry 108 is used to convert the digital control signals from the microprocessor 102 to a gate drive voltage in order to control the high side mosfet 116. the high side mosfet 116 is configured to supply controlled current from a battery 118. the current supplied by the battery 118 passes through a first sensor 120. further, the current passing through the first sensor 120 is fed back to the high-side gate driver circuitry 108 by means of an opamp 121. the first sensor 120 is configured to convert the current from the battery 118 to a voltage that can be sensed by the high side gate driver circuitry 108. the high side mosfet 116 regulates current from the battery 118 to the radiator fan 105. the current from the radiator fan 105 is then grounded by a low side mosfet 117 through a second sensor 124. the current control driver module 100 also includes a flyback diode 123 that is configured to ground the switched terminal of the radiator fan 105. further, the low side gate driver circuitry 112 of the current control driver module 100 is used to convert the digital control signals from the microprocessor 102 to a gate drive voltage in order to control the low side mosfet 117.
the current control driver module 100 includes a second sensor 124. in an example, the second sensor 124 is configured to convert the current passing through the radiator fan 105 to a voltage that can be sensed by the low side gate driver circuitry 112. further, the second sensor 124 sends information on the current passing through the radiator fan 105 through a feedback line 126. the working of the microprocessor 102 is controlled by application software that may have an algorithm pre-stored in a memory unit (not shown) of the microprocessor 102. the microprocessor 102 is configured to operate in two distinct modes, more particularly, an average current mode and a peak current mode, based on system requirements. the operation of the current control driver module 100 in the peak current mode will now be described in detail with reference to figs. 1 and 2. in one exemplary embodiment, where the engine 103 has a capacity of 9 liters, the radiator fan 105 of the engine 103 may require peak current for operation thereof. during the working of the engine 103, the temperature of the surface of the engine 103 may increase. in such situations, the temperature sensor may send an input signal to the current control driver module 100. the input signal allows activation of the radiator fan 105 in order to cool the engine 103. when the current control driver module 100 receives the input signal from the temperature sensor indicating a need for peak current, the microprocessor 102 selects the peak current mode of operation of the current control driver module 100. alternatively, the selection of the mode of operation of the current control driver module 100 may be provided by any other method not described herein. the application software configures the microprocessor pin 107 as a reaction module output. the reaction module is programmed with current waveform information.
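the mode selection just described can be sketched in software. a minimal sketch, assuming a capacity-based rule drawn from the 9-liter and 7-liter examples in this disclosure; the function name, the threshold, and the pin-role strings are illustrative assumptions, not part of the disclosure:

```python
from enum import Enum

class ControlMode(Enum):
    # in the peak current mode the application software configures
    # microprocessor pin 107 as a reaction module output; in the average
    # current mode it configures the pin as a time process unit (tpu) output
    PEAK = "reaction_module"
    AVERAGE = "tpu"

def select_mode(engine_capacity_liters: float) -> ControlMode:
    """hypothetical selection rule: the 9-liter engine's radiator fan in the
    example requires peak current, the 7-liter engine's fan average current."""
    return ControlMode.PEAK if engine_capacity_liters >= 9.0 else ControlMode.AVERAGE
```

in practice the selection would come from the application software's configuration (or an input signal, per the disclosure) rather than from engine capacity alone.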
the current waveform information may include target current, switching frequency, dither amplitude, and the like. the microprocessor 102 generates a target current control signal in a digital form. the target current control signal is sent to the high-side driver circuitry 108 via the line 110. the digital signal is converted by the high-side driver circuitry 108 into a gate drive voltage that further controls the high side mosfet 116. the high side mosfet 116 supplies the peak current from the battery 118 to the radiator fan 105. the current from the battery 118 is measured by the first sensor 120. further, the second sensor 124 sends feedback to the microprocessor 102 through the feedback line 126 to check whether the current supplied by the battery 118 conforms to the current requirements of the radiator fan 105. the microprocessor 102 sends a switching frequency control signal on the control signal line 114. this control signal is supplied as a digital control signal to the low side gate driver circuitry 112. the digital signal is converted by the low side gate driver circuitry 112 into a gate drive voltage that further controls the low side mosfet 117. the current thus supplied is converted into a voltage signal that is sensed by the low side gate driver circuitry 112. further, the signal is fed back to the microprocessor 102. the reaction module monitors the signal from the second sensor 124 and enables the high-side gate driver circuitry 108 to control the high side mosfet 116 to send current until the target current is reached. the reaction module then turns off the high-side gate driver circuitry 108. the reaction module thus runs independently of any signal from the microprocessor 102. the output current is a peak current controlled waveform. further, based on the supply of peak current to the radiator fan 105, the current control driver module 100 may receive a feedback signal from the sensing element of the radiator fan 105.
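the reaction module's turn-on-until-target behavior described above can be illustrated with a toy simulation. this is a sketch under an assumed first-order load model; the rise and decay rates per step and the step count are arbitrary illustrative values, not taken from the disclosure:

```python
def peak_current_period(target: float, rise: float = 0.5,
                        decay: float = 0.3, steps: int = 20) -> list[float]:
    """simulate one switching period of peak current control: the high-side
    gate driver stays on (current rises) until the sensed current reaches the
    commanded target, then the reaction module turns it off (current decays)
    for the remainder of the period, independently of the microprocessor."""
    current, on, trace = 0.0, True, []
    for _ in range(steps):
        if on and current >= target:
            on = False  # reaction module disables the high-side gate driver
        current = current + rise if on else max(0.0, current - decay)
        trace.append(current)
    return trace
```

with these values the trace rises to the commanded target and then decays for the rest of the period, matching the peak current controlled waveform described above.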
the feedback signals may be indicative of the operation of the radiator fan 105. in another exemplary embodiment, where the engine 103 has a capacity of 7 liters, the radiator fan 105 of the engine 103 may require average current for operation. in such an example, the current control driver module 100 may operate in the average current mode to supply average current to the radiator fan 105. in the average current mode, the application software configures the microprocessor pin 107 as a time process unit or tpu output. the signal processing and the flow of current in the average current mode are similar to those described above for the peak current mode. the software algorithm, when executed, calculates a duty cycle and provides high or low current, respectively, based on system requirements. the output current is an average current controlled waveform, as the peak current varies based on duty cycle and load time. when tpu/software average current control is selected, the current control driver module 100 reads current feedback through the adc 104, and the feedback is read by software as part of a pid control loop for the average current control algorithm. additionally, when doing peak current control, the current feedback is read by the reaction module to indicate that a commanded current peak has been reached. the tpu or reaction module output of the current control driver module 100 is selected via software between the tpu and reaction module function, depending on the current-control type. the output is used to modulate the high side mosfet 116 to regulate current. industrial applicability the present disclosure relates to the current control driver module 100 that may be operated in the average current mode or the peak current mode, based on the type of application. the present disclosure allows a single set of hardware to provide peak current control or average current control based on the software configuration of the current control driver module 100.
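the duty-cycle calculation used in the average current mode above can be sketched as follows. a minimal sketch: the helper names and the proportional gain are assumptions, and only the p term of the pid loop mentioned in the disclosure is shown:

```python
def duty_cycle(target_avg: float, on_current: float) -> float:
    """open-loop starting point: if the load draws on_current amps while the
    high-side mosfet is on and ~0 amps while off, the average over a pwm
    period is on_current * duty, so duty = target / on_current, clamped
    to [0, 1]."""
    return min(1.0, max(0.0, target_avg / on_current))

def pid_p_step(duty: float, target_avg: float, measured_avg: float,
               kp: float = 0.1) -> float:
    """single proportional correction step (the p term only): nudge the duty
    cycle toward the target using the current feedback read via the adc."""
    return min(1.0, max(0.0, duty + kp * (target_avg - measured_avg)))
```

for example, a 2 a average target with an 8 a on-current gives a 25% duty cycle, which the feedback loop then corrects for decay, dither, and load variation.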
for example, the current control driver module 100 disclosed herein can provide peak current for a particular engine application, based on operational requirements. further, for a different engine application that requires average current to operate, the current control driver module 100 provides average current using the same circuit arrangement, thereby saving hardware cost. fig. 3 is a flowchart for a method 200 of operation of the current control driver module 100 for the engine system 101. at step 202, the method 200 selects the mode of operation of the current control driver module 100. at step 204, the method 200 performs the peak current control or the average current control by the current control driver module 100, based on the selected mode of operation of the current control driver module 100. although peak current control may not control the average current as well as average current control does, information from the radiator fan 105 is not required for the current control functionality of the current control driver module 100 to work. accordingly, in the present disclosure, the external component or the radiator fan 105 commands a target current, and the current control driver module 100 turns on until the target is reached, and then turns off for the remainder of a switching period. while aspects of the present disclosure have been particularly shown and described with reference to the embodiments above, it will be understood by those skilled in the art that various additional embodiments may be contemplated by the modification of the disclosed machines, systems and methods without departing from the spirit and scope of what is disclosed. such embodiments should be understood to fall within the scope of the present disclosure as determined based upon the claims and any equivalents thereof.
id: 139-348-822-762-740 · jurisdiction: US (also EP, WO) · ipc: G01N27/447 · earliest claim date: 2021-02-25
systems and methods for sampling
a system includes a capillary having a first end connected to a sampling device and a second end. the sampling device is configured to separate a sample with a separation solution and deliver the sample and the separation solution to the second end. the second end is coupled to a capillary ground contact. a transport liquid supply system in fluidic communication with a transport liquid supply conduit provides a transport liquid from a transport liquid source through the transport liquid supply conduit. the transport liquid provided from the transport liquid supply conduit includes a receiving volume defined at least in part by a meniscus. the second end of the capillary is in fluidic communication with the receiving volume. a liquid exhaust system in fluidic communication with a removal conduit removes liquid from the receiving volume. an analysis system is in fluidic communication with the removal conduit.
what is claimed is: 1. a system comprising: a capillary having a first end connected to a sampling device and a second end, wherein the sampling device is configured to separate a sample with a separation solution and deliver the sample and the separation solution to the second end, wherein the second end is coupled to a capillary ground contact; a transport liquid supply system in fluidic communication with a transport liquid supply conduit that provides a transport liquid from a transport liquid source through the transport liquid supply conduit, wherein the transport liquid provided from the transport liquid supply conduit comprises a receiving volume defined at least in part by a meniscus, and wherein the second end of the capillary is in fluidic communication with the receiving volume; a liquid exhaust system in fluidic communication with a removal conduit that removes liquid from the receiving volume; an electrical conductor for connecting the transport liquid supply conduit to the removal conduit; a first electrical contact connected to the transport liquid supply conduit; and an analysis system in fluidic communication with the removal conduit. 2. the system of claim 1, wherein the second end of the capillary is disposed within the meniscus and wherein the capillary ground contact comprises at least in part the receiving volume. 3. the system of any of claims 1-2, wherein the second end of the capillary is disposed in the removal conduit and wherein the capillary ground contact comprises at least in part the receiving volume. 4. the system of any of claims 1-3, wherein a perimeter of the second end of the capillary is spaced apart from the removal conduit. 5. the system of any of claims 1-4, further comprising an interface coupled to the transport liquid supply conduit and the second end of the capillary, wherein the interface defines the receiving volume proximate the removal conduit. 6. 
the system of any of claims 1-5, wherein the second end of the capillary further comprises a conductive tip and wherein the capillary ground contact is connected to the conductive tip. 7. the system of claim 5, wherein the interface comprises at least one flexible element. 8. the system of claim 7, wherein the flexible element comprises at least one of rubber, polyurethane, neoprene, and silicone. 9. the system of any of claims 1-8, wherein the sampling device performs capillary electrophoresis or liquid chromatography. 10. the system of any of claims 1-9, wherein the analysis system is a mass spectrometer. 11. the system of claim 6, wherein the tip is disposed remote from and above the meniscus. 12. a method for analyzing a sample, the method comprising: receiving the sample and a separation solution from a capillary, wherein the capillary has a first end connected to a sampling device and a second end coupled to a capillary ground contact, wherein the sampling device is configured to separate the sample from the separation solution, wherein the sample and the separation solution is received from the second end and into a receiving volume defined at least in part by a transport liquid delivered from a transport liquid supply conduit; aspirating the received sample and the separation solution into a liquid exhaust system in fluidic communication with a removal conduit in fluidic communication with the receiving volume; and analyzing the received sample and separation solution with a mass analysis system. 13. the method of claim 12, the method further comprising supplying the transport liquid to the transport liquid supply conduit from a transport liquid supply system in fluidic communication with the transport liquid supply conduit. 14. the method of any of claims 12-13, wherein the receiving volume is defined at least in part by a meniscus. 15. 
the method of any of claims 12-14, the method further comprising receiving the second end of the capillary within the meniscus. 16. the method of any of claims 12-15, wherein a perimeter of the second end of the capillary is in contact with the removal conduit. 17. the method of any of claims 12-16, the method further comprising receiving an interface defining the receiving volume proximate the removal conduit. 18. the method of claim 17, wherein the interface comprises at least one flexible element. 19. the method of any of claims 12-18, wherein the sampling device performs at least one of capillary electrophoresis and liquid chromatography to obtain the sample. 20. the method of any of claims 12-19, the method further comprising receiving the sample and the separation solution from the capillary, wherein the sample and the separation solution are released from the second end of the capillary under gravity into the receiving volume. 21. the method of any of claims 12-20, wherein the separation solution is diluted by the transport liquid to allow for ionization by the mass analysis system. 22. a sample processing system comprising: a separation capillary located to deliver an eluent from a capillary electrophoresis (ce) system into a transport liquid in a receiving volume of an open port interface for delivery of the eluent to an analytical instrument, wherein the separation capillary is electrically decoupled from the analytical instrument. 23. the sample processing system of claim 22, wherein the transport liquid comprises a buffer liquid delivered from the ce system. 24. the sample processing system of any of claims 22-23, wherein the receiving volume comprises an air-liquid interface at atmospheric pressure. 25. the sample processing system of claim 24, wherein a tip of the separation capillary is disposed on an air side of the air-liquid interface. 26. 
the sample processing system of claim 24, wherein a tip of the separation capillary is disposed on a liquid side of the air-liquid interface. 27. the sample processing system of any of claims 22-26, further comprising an isolation transformer electrically decoupling the ce system from the analytical instrument. 28. a system comprising: a capillary having a first end connected to a sampling device and a second end, wherein the sampling device is configured to separate a sample with a separation solution and deliver the sample and the separation solution to the second end; a transport liquid supply system in fluidic communication with a transport liquid supply conduit that provides a transport liquid from a transport liquid source through the transport liquid supply conduit, wherein the transport liquid provided from the transport liquid supply conduit comprises a receiving volume defined at least in part by a meniscus, the transport liquid in the receiving volume is electrically grounded relative to the first end, and wherein the second end of the capillary is in fluidic communication with the receiving volume; a liquid exhaust system in fluidic communication with a removal conduit that removes liquid from the receiving volume; and an analysis system in fluidic communication with the removal conduit. 29. the system of claim 28, wherein the second end of the capillary is disposed within the meniscus and further comprising a capillary ground contact in fluid contact with at least part of the receiving volume. 30. the system of any of claims 28-29, wherein the second end of the capillary is disposed in the removal conduit and wherein the capillary ground contact comprises at least in part a conduit wall of the receiving volume. 31. the system of any of claims 28-30, wherein a perimeter of the second end of the capillary is spaced apart from the removal conduit. 32. 
the system of any of claims 28-31, further comprising an interface coupled to the transport liquid supply conduit and the second end of the capillary, wherein the interface defines the receiving volume proximate the removal conduit. 33. the system of any of claims 28-32, wherein the second end of the capillary further comprises a conductive tip and wherein the receiving volume is grounded by the conductive tip. 34. the system of any of claims 28-33, wherein the interface comprises at least one flexible element. 35. the system of claim 34, wherein the flexible element comprises at least one of rubber, polyurethane, neoprene, and silicone. 36. the system of any of claims 28-35, wherein the sampling device performs capillary electrophoresis or liquid chromatography. 37. the system of any of claims 28-36, wherein the analysis system is a mass spectrometer. 38. the system of any of claims 28-32, wherein the second end of the capillary further comprises a conductive tip, wherein the conductive tip is disposed above a meniscus of the receiving volume, and wherein the sample and the separation solution are grounded by the conductive tip. 39. a method for analyzing a sample, the method comprising: receiving the sample and a separation solution from a capillary into a receiving volume defined at least in part by a transport liquid delivered from a transport liquid supply conduit, while electrically grounding the transport liquid; aspirating the transport liquid, received sample and the separation solution into a liquid exhaust system in fluidic communication with a removal conduit in fluidic communication with the receiving volume; and analyzing the received sample and separation solution with a mass analysis system. 40. the method of claim 39, the method further comprising supplying the transport liquid to the transport liquid supply conduit from a transport liquid supply system in fluidic communication with the transport liquid supply conduit. 41.
the method of any of claims 39-40, wherein the receiving volume is defined at least in part by a meniscus. 42. the method of any of claims 39-41, the method further comprising receiving the second end of the capillary within the meniscus. 43. the method of any of claims 39-42, wherein a perimeter of the second end of the capillary is in contact with the removal conduit. 44. the method of any of claims 39-43, the method further comprising receiving an interface defining the receiving volume proximate the removal conduit. 45. the method of claim 44, wherein the interface comprises at least one flexible element. 46. the method of any of claims 39-45, wherein the sampling device performs at least one of capillary electrophoresis and liquid chromatography to obtain the sample. 47. the method of any of claims 39-46, the method further comprising receiving the sample and the separation solution from the capillary, wherein the sample and the separation solution are released from the second end of the capillary under gravity into the receiving volume. 48. the method of any of claims 39-47, wherein the separation solution is diluted by the transport liquid to allow for ionization by the mass analysis system.
systems and methods for sampling cross-reference to related applications [0001] this application is being filed on february 24, 2022, as a pct patent international application that claims priority to and the benefit of u.s. provisional application no. 63/153,586, filed on february 25, 2021, and u.s. provisional application no. 63/218,754, filed on july 6, 2021, which both applications are incorporated by reference herein in their entireties. background [0002] mass spectrometry (ms) based methods can achieve label-free, universal mass detection of a wide range of analytes with exceptional sensitivity, selectivity, and specificity. as a result, there is significant interest in improving the throughput of ms- based analysis for many applications. summary [0003] in one aspect, the technology relates to a system including: a capillary having a first end connected to a sampling device and a second end, wherein the sampling device is configured to separate a sample with a separation solution and deliver the sample and the separation solution to the second end, wherein the second end is coupled to a capillary ground contact; a transport liquid supply system in fluidic communication with a transport liquid supply conduit that provides a transport liquid from a transport liquid source through the transport liquid supply conduit, wherein the transport liquid provided from the transport liquid supply conduit includes a receiving volume defined at least in part by a meniscus, and wherein the second end of the capillary is in fluidic communication with the receiving volume; a liquid exhaust system in fluidic communication with a removal conduit that removes liquid from the receiving volume; an electrical conductor for connecting the transport liquid supply conduit to the removal conduit; a first electrical contact connected to the transport liquid supply conduit; and an analysis system in fluidic communication with the removal conduit. 
in an example, the second end of the capillary is disposed within the meniscus and wherein the capillary ground contact includes at least in part the receiving volume. in another example, the second end of the capillary is disposed in the removal conduit and wherein the capillary ground contact includes at least in part the receiving volume. in yet another example, a perimeter of the second end of the capillary is spaced apart from the removal conduit. in still another example, the system further includes an interface coupled to the transport liquid supply conduit and the second end of the capillary, wherein the interface defines the receiving volume proximate the removal conduit. [0004] in another example of the above aspect, the second end of the capillary further includes a conductive tip and wherein the capillary ground contact is connected to the conductive tip. in an example, the interface includes at least one flexible element. in another example, the flexible element includes at least one of rubber, polyurethane, neoprene, and silicone. in yet another example, the sampling device performs capillary electrophoresis or liquid chromatography. in still another example, the analysis system is a mass spectrometer. [0005] in another example of the above aspect, the tip is disposed remote from and above the meniscus. 
[0006] in another aspect, the technology relates to a method for analyzing a sample, the method including: receiving the sample and a separation solution from a capillary, wherein the capillary has a first end connected to a sampling device and a second end coupled to a capillary ground contact, wherein the sampling device is configured to separate the sample from the separation solution, wherein the sample and the separation solution is received from the second end and into a receiving volume defined at least in part by a transport liquid delivered from a transport liquid supply conduit; aspirating the received sample and the separation solution into a liquid exhaust system in fluidic communication with a removal conduit in fluidic communication with the receiving volume; and analyzing the received sample and separation solution with a mass analysis system. in an example, the method further includes supplying the transport liquid to the transport liquid supply conduit from a transport liquid supply system in fluidic communication with the transport liquid supply conduit. in another example, the receiving volume is defined at least in part by a meniscus. in yet another example, the method further includes receiving the second end of the capillary within the meniscus. in still another example, a perimeter of the second end of the capillary is in contact with the removal conduit. [0007] in another example of the above aspect, the method further includes receiving an interface defining the receiving volume proximate the removal conduit. in an example, the interface includes at least one flexible element. in another example, the sampling device performs at least one of capillary electrophoresis and liquid chromatography to obtain the sample. 
in yet another example, the method further includes receiving the sample and the separation solution from the capillary, wherein the sample and the separation solution are released from the second end of the capillary under gravity into the receiving volume. in still another example, the separation solution is diluted by the transport liquid to allow for ionization by the mass analysis system. brief description of the drawings [0008] fig. 1 is a schematic view of an example system combining a sampling device with an open port interface (opi) sampling interface and an electrospray ionization (esi) source. [0009] fig. 2a depicts an enlarged partial view of an example system for analyzing a separated sample from a capillary electrophoresis (ce) capillary. [0010] fig. 2b depicts an enlarged partial view of an example system for analyzing a separated sample from a liquid chromatography (lc) capillary. [0011] fig. 3a depicts another enlarged partial view of an example system for analyzing a separated sample from a ce capillary. [0012] fig. 3b depicts another enlarged partial view of an example system for analyzing a separated sample from an lc capillary. [0013] fig. 4a depicts another enlarged partial view of an example system for analyzing a separated sample from a ce capillary using an interface. [0014] fig. 4b depicts another enlarged partial view of an example system for analyzing a separated sample from an lc capillary using an interface. [0015] fig. 5a depicts another enlarged partial view of an example system for analyzing a separated sample from a ce capillary. [0016] fig. 5b depicts another enlarged partial view of an example system for analyzing a separated sample from an lc capillary. [0017] fig. 6 depicts a method for analyzing a separated sample. [0018] fig. 7 depicts an example of a suitable operating environment in which one or more of the present examples can be implemented. detailed description [0019] fig.
1 is a schematic view of an example system 100 combining a capillary 102 connected to a sampling device 132 with an opi sampling interface 104 and esi source 114. the system 100 may be a mass analysis instrument such as a mass spectrometry device configured to ionize and mass analyze analytes received within an open end of a sampling opi. such a system 100 is described, for example, in u.s. pat. no. 10,770,277, the disclosure of which is incorporated by reference herein in its entirety. the capillary 102 is configured to release or eject an eluent 108, containing separated analytes from a sample in a buffer solution, from an end having a tip 112 into the open end of the sampling opi 104. the capillary 102 is connected at a first end to a sampling device 132, examples of which are described below. as shown in fig. 1, the example system 100 generally includes the sampling opi 104 in liquid communication with the esi source 114 for discharging a liquid containing one or more sample analytes (e.g., via electrospray electrode 116) into an ionization chamber 118, and a mass analyzer detector (depicted generally at 120) in communication with the ionization chamber 118 for downstream processing and/or detection of ions generated by the esi source 114. due to the configuration of the nebulizer probe 138 and electrospray electrode 116 of the esi source 114, samples ejected therefrom are transformed into the gas phase. a transport liquid supply system 122 (e.g., including one or more pumps 124 and one or more conduits 125) provides for the flow of liquid from a transport liquid source or reservoir 126 to the sampling opi 104 and from the sampling opi 104 to the esi source 114. 
the transport liquid source 126 (e.g., containing a liquid desorption solvent) can be in fluidic communication with the sampling opi 104 via a transport liquid supply conduit 127 through which the transport liquid can be delivered at a selected volumetric rate by the pump 124 (e.g., a reciprocating pump, a positive displacement pump such as a rotary, gear, plunger, piston, peristaltic, diaphragm pump, or other pump such as a gravity, impulse, pneumatic, electrokinetic, and centrifugal pump), all by way of non-limiting example. as discussed in detail below, the flow of liquid into and out of the sampling opi 104 occurs within a receiving volume 128 defined at least in part by a meniscus 129 accessible at the open end of the sampling opi 104 such that one or more eluent droplets 108 can be introduced into the receiving volume 128 and subsequently delivered to the esi source 114. a removal conduit 110 forms one part of a liquid exhaust system that connects the opi 104 to the esi 114, and removes the transport liquid and any eluent droplet 108 from the opi 104. an electrical contact 106 is disposed on the opi 104 and an electrical conductor 107 connects the outer portion of the opi 104 (that forms a part of the transport liquid supply conduit 127) to the removal conduit 110 to ensure grounding thereof. [0020] a controller 130 can be operatively coupled to the various components of the system 100 for operation thereof. controller 130 can be, but is not limited to, a microcontroller, a computer, a microprocessor, or any device capable of sending and receiving control signals and data. wired or wireless connections between the controller 130 and the remaining elements of the system 100 are not depicted but would be apparent to a person of skill in the art. [0021] as shown in fig. 1, the esi source 114 can include a source 136 of pressurized gas (e.g. 
nitrogen, air, or a noble gas) that supplies a high velocity nebulizing gas flow to the nebulizer probe 138 that surrounds the outlet end of the electrospray electrode 116. as depicted, the electrospray electrode 116 protrudes from a distal end of the nebulizer probe 138. the pressurized gas interacts with the liquid discharged from the electrospray electrode 116 to enhance the formation of the sample plume and the ion release within the plume for sampling by mass analyzer detector 120, e.g., via the interaction of the high speed nebulizing flow and jet of a liquid sample ls (e.g., a dilution of the transport fluid s and the eluent received from the sampling device 132). the discrete volumes of liquid samples ls are typically separated from each other by volumes of the transport liquid s. the nebulizer gas can be supplied at a variety of flow rates, for example, in a range from about 0.1 l/min to about 20 l/min, which can also be controlled under the influence of controller 130 (e.g., via opening and/or closing valve 140). [0022] it will be appreciated that the flow rate of the nebulizer gas can be adjusted (e.g., under the influence of controller 130) such that the flow rate of liquid within the sampling opi 104 can be adjusted based, for example, on suction/aspiration force generated by the interaction of the nebulizer gas and the analyte-solvent dilution as it is being discharged from the electrospray electrode 116 (e.g., due to the venturi effect). a voltage, e.g., 5 kv, is applied to the electrospray electrode 116 during operation, thus creating an electrical potential between the electrospray electrode 116 and the grounded opi 104. the ionization chamber 118 can be maintained at atmospheric pressure, though in some examples, the ionization chamber 118 can be evacuated to a pressure lower than atmospheric pressure. 
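the stated nebulizer gas operating range (about 0.1 l/min to about 20 l/min) can be expressed as a trivial bounds check; the clamp below is only an illustrative sketch of that constraint, since the actual control law used by controller 130 is not disclosed here.

```python
# clamp a requested nebulizer gas flow to the range stated in the text
# (about 0.1 l/min to about 20 l/min); the function name is a placeholder,
# not part of any real controller api.

NEBULIZER_FLOW_MIN_L_MIN = 0.1
NEBULIZER_FLOW_MAX_L_MIN = 20.0

def clamp_nebulizer_flow(requested_l_min: float) -> float:
    """limit a requested flow rate to the documented operating range."""
    return max(NEBULIZER_FLOW_MIN_L_MIN,
               min(NEBULIZER_FLOW_MAX_L_MIN, requested_l_min))

print(clamp_nebulizer_flow(25.0))   # out of range high -> 20.0
print(clamp_nebulizer_flow(0.05))   # out of range low  -> 0.1
print(clamp_nebulizer_flow(5.0))    # in range          -> 5.0
```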
[0023] it will also be appreciated by a person skilled in the art and in light of the teachings herein that the mass analyzer detector 120 can have a variety of configurations. generally, the mass analyzer detector 120 is configured to process (e.g., filter, sort, dissociate, detect, etc.) sample ions generated by the esi source 114. by way of non-limiting example, the mass analyzer detector 120 can be a triple quadrupole mass spectrometer, or any other mass analyzer known in the art and modified in accordance with the teachings herein. other non-limiting, exemplary mass spectrometer systems that can be modified in accordance with various aspects of the systems, devices, and methods disclosed herein can be found, for example, in an article entitled "product ion scanning using a q-q-q linear ion trap (q trap) mass spectrometer," authored by james w. hager and j. c. yves le blanc and published in rapid communications in mass spectrometry (2003; 17: 1056-1064); and u.s. pat. no. 7,923,681, entitled "collision cell for mass spectrometer," the disclosures of which are hereby incorporated by reference herein in their entireties. [0024] other configurations, including but not limited to those described herein and others known to those skilled in the art, can also be utilized in conjunction with the systems, devices, and methods disclosed herein. for instance, other suitable mass spectrometers include single quadrupole, triple quadrupole, tof, linear ion traps, 3d traps, electrostatic traps, hybrid analyzers, and other known mass spectrometers. it will further be appreciated that any number of additional elements can be included in the system 100 including, for example, an ion mobility spectrometer (e.g., a differential mobility spectrometer) that is disposed between the ionization chamber 118 and the mass analyzer detector 120 and is configured to separate ions based on their mobility differences in high-field and low-field conditions. 
additionally, it will be appreciated that the mass analyzer detector 120 can comprise a detector that can detect the ions that pass through the analyzer detector 120 and can, for example, supply a signal indicative of the number of ions per second that are detected. [0025] as shown in fig. 1, the sampling device 132 is interfaced with an opi 104 to provide a sample introduction system for high-throughput mass spectrometry. in an example, the sampling device 132 performs capillary electrophoresis (ce) and a ce capillary 102 is interfaced with the opi 104. in other examples, the sampling device 132 performs liquid chromatography (lc) and an lc capillary 102 is interfaced with the opi 104. when a ce or lc capillary 102 is coupled to a mass analysis instrument 120 via the opi 104, the system can be referred to as a capillary electrophoresis mass spectrometry (ce-ms) system or a liquid chromatography mass spectrometry (lc-ms) system, respectively. the analytical performance (sensitivity, reproducibility, throughput, etc.) of a ce-ms or lc-ms system depends on the performance of the ce or lc device and the opi. the performance of the ce or lc device and the opi depends on selecting the operational conditions or parameters for interfacing these devices. example operational conditions and parameters for interfacing these devices are described herein below. [0026] the sampling device 132 may be a ce system. ce is a sample separation method that separates analytes within a sample based on electrophoretic mobility. standard ce systems utilize a fused silica capillary filled with an electrolyte (e.g., a buffer solution). a sample is introduced into a first end of the capillary. in standard systems, the first end of the capillary is placed in contact with an anode buffer solution and a second end is placed in contact with a cathode buffer solution. a high voltage (e.g., 20 kv) is then applied across the capillary to initiate the movement of analytes. 
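as a rough illustration of the electrophoretic migration just described, the standard relation v = μ·E (migration velocity equals apparent mobility times field strength, with E = applied voltage over capillary length) gives analyte arrival times. the capillary dimensions and mobility values below are hypothetical and not taken from this disclosure; only the 20 kv figure comes from the text.

```python
# back-of-the-envelope ce migration-time estimate using v = mu * E;
# mobilities and capillary dimensions are illustrative assumptions.

def migration_time_s(mu_app_cm2_per_vs: float, voltage_v: float,
                     l_total_cm: float, l_detect_cm: float) -> float:
    """time for an analyte to travel from the inlet to the outlet/tip.

    e_field = voltage / total length; velocity = mobility * e_field.
    """
    e_field = voltage_v / l_total_cm            # v/cm
    velocity = mu_app_cm2_per_vs * e_field      # cm/s
    return l_detect_cm / velocity

# two hypothetical analytes with different apparent mobilities,
# 20 kv across a 60 cm capillary, 50 cm effective length
t_fast = migration_time_s(5e-4, 20_000, 60, 50)
t_slow = migration_time_s(3e-4, 20_000, 60, 50)
print(f"fast analyte: {t_fast:.0f} s, slow analyte: {t_slow:.0f} s")
```

the difference between the two times is what produces the separation of sample components described in the following sentences.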
the components of the sample move and separate under the influence of the electric field based on differences in electrophoretic mobility. this separated sample may then be delivered to the opi 104 of the mass spectrometry system 100, as an eluent containing the separated analytes in the buffer solution. fig. 1 depicts the tip 112 of the capillary 102 disposed below the meniscus 129 of the receiving volume 128. this is but one example configuration and is further depicted and described in fig. 2a. other appropriate configurations to enable direct interfacing of a ce system with a mass spectrometry device are depicted in figs. 3a, 4a, and 5a. regardless of configuration, in order to maintain the desired electrical potentials in both the esi electrode 116 (e.g., 5 kv) and the ce system 132 (e.g., 20 kv), the eluent delivered from the ce system 132 is preferably grounded. examples of structures that enable this functionality are depicted and described below and include grounding the ce system via the transport liquid and/or the physical structures within the opi, or grounding via a dedicated capillary ground contact. [0027] the sampling device 132 may be an lc system. lc separates analytes within a sample based on differences in chemical affinity. in standard lc systems, a liquid sample is dissolved in a solvent (e.g., the mobile phase), and then flowed through a system (e.g., a column) containing a stationary phase. the analytes with stronger retention to the stationary phase will take longer to travel through the system, thus causing separation of the sample. the target analytes of the separated sample may then be delivered to the opi 104 of the mass spectrometry system 100, as described herein. pressure from the lc system may be used to initiate a controlled flow of an eluent of the separated sample and the solvent from the capillary 102 into the opi 104. fig. 1 depicts the tip 112 of the capillary 102 disposed below the meniscus 129 of the receiving volume 128. 
this is but one example configuration and is further depicted and described in fig. 2b. other appropriate configurations to enable direct interfacing of an lc system with a mass spectrometry device are depicted in figs. 3b, 4b, and 5b. lc systems can utilize different buffers or solvents, including those that have high conductivity. such high conductivity liquids are typically incompatible with ms devices. thus, in the examples depicted herein, the first electrical contact 106 connected to the transport liquid supply conduit grounds the transport liquid, thus enabling a high conductivity buffer to be utilized in the lc system and directly introduced into the opi 104 for dilution to an extent that avoids ionization suppression in an ion source of the ms system. more specifically, the electrical contact 106 provides a ground that reduces the conductivity of the buffer. this enables an lc system utilizing a high conductivity buffer (e.g., native lc) to be directly interfaced with a ms device through the opi 104, without further processing of the eluent from the lc system. [0028] fig. 2a depicts an enlarged partial view of an example system 200a for analyzing a separated sample from a ce capillary 202a. in the system 200a, the capillary 202a has a first end connected to a ce sampling device 232a, and an electric potential (e.g., 20 kv) is applied to the ce sampling device 232a. the capillary 202a is interfaced with an opi 204a by disposing a second end of the capillary having a tip 212a within a receiving volume 228a defined at least in part by a meniscus 229a. since the receiving volume 228a is open to the atmosphere, the interface between the capillary 202a and receiving volume 228a may be referred to as an atmospheric pressure liquid junction (aplj), with a meniscus 229a of the transport fluid forming an air-liquid interface. 
the receiving volume 228a completes the electric circuit, for instance by grounding the liquid in the receiving volume 228a relative to the electric potential applied at the first end of the ce sampling device 232a. the grounding may be applied to a supply source of liquid to the receiving volume 228a, a supply conduit, or at the outlet of the opi 204a. [0029] in fig. 2a, the system 200a includes a transport liquid supply system 222a having a pump 224a, a transport liquid source 226a, and a transport liquid supply conduit 227a. the pump 224a is configured to pump transport liquid from the transport liquid supply source 226a into the transport liquid supply conduit 227a at a first flow rate. the transport liquid flows through the transport liquid supply conduit 227a towards the open end of the opi 204a, where the receiving volume 228a defined at least in part by the meniscus 229a is formed. the tip 212a of the capillary 202a is disposed within the receiving volume 228a such that the eluent from the capillary 202a is released into the receiving volume 228a. the eluent from the capillary 202a is then removed through the removal conduit 210a at a second flow rate. the first flow rate and the second flow rate are configured to allow the transport liquid to form the receiving volume 228a at the open end of the opi 204a and then be subsequently removed through the removal conduit 210a, without any transport liquid dripping or leaking from the open end of the opi 204a. [0030] the system 200a further includes an electrical conductor 207a (which may be integrated into the transport liquid supply conduit 227a or discrete therefrom) and a first electrical contact 206a connected to the transport liquid supply conduit 227a. the electrical conductor 207a connects the transport liquid supply conduit 227a to the removal conduit 210a. 
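the constraint on the first and second flow rates described in paragraph [0029] is, in essence, a mass balance: total inflow into the receiving volume (transport liquid plus eluent) must not exceed the aspiration rate through the removal conduit, or the opi drips. the check below is only a minimal sketch of that balance; the flow-rate values are hypothetical, not from this disclosure.

```python
# minimal mass-balance check for the opi receiving volume; all flow-rate
# values are illustrative assumptions, not from this disclosure.

def receiving_volume_ok(q_supply_ul_min: float, q_eluent_ul_min: float,
                        q_removal_ul_min: float) -> bool:
    """true if total inflow does not exceed the removal (aspiration) rate,
    so the meniscus-bound receiving volume does not drip or overflow."""
    inflow = q_supply_ul_min + q_eluent_ul_min
    return inflow <= q_removal_ul_min

print(receiving_volume_ok(q_supply_ul_min=300, q_eluent_ul_min=2,
                          q_removal_ul_min=350))   # balanced -> True
print(receiving_volume_ok(q_supply_ul_min=400, q_eluent_ul_min=2,
                          q_removal_ul_min=350))   # would drip -> False
```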
the first electrical contact 206a is configured to ground the transport liquid, and the electrical conductor 207a helps ensure that the removal conduit 210a is also grounded. the first electrical contact 206a may include a grounding connector, such as a metal clamp, attached to a grounding wire. so long as the tip 212a of the capillary 202a is in contact with the transport liquid comprising the receiving volume 228a, the eluent released from the capillary 202a will be grounded via the first electrical contact 206a. as will be appreciated, in other examples the transport liquid may be grounded upstream from the opi 204a, such as at a supply conduit or liquid source, provided the liquid is sufficiently conductive to provide an effective ground at the receiving volume 228a. thus, the two liquid circuits, e.g., the solvent liquid from the ce sampling device 232a and the transport liquid flowing from the opi 204a, are electrically decoupled. this configuration enables the eluent from the ce capillary 202a to flow directly into the opi 204a for dilution and transfer of the diluted solution to an electrospray ionization source of the mass analysis system, while isolating and maintaining the required potentials on the ce system (e.g., 20 kv) and on the electrospray electrode of the mass analysis system (e.g., 5 kv). [0031] in another example of the configuration depicted in fig. 2a, the transport liquid supply conduit 227a and discrete pump 224a may be eliminated. such a configuration is depicted and described in the appendix, the disclosure of which is hereby incorporated by reference herein in its entirety. transport liquid is still required for proper operation of the opi 204a, however, so buffer liquid may be introduced from an outlet vial of the ce sampling device 232a. this buffer liquid is introduced under pressure to the opi 204a at a junction separate from the sample capillary 202a. 
the buffer liquid may be grounded anywhere along the flow path to the opi 204a or aplj; in fig. 1 of the appendix, the ground is depicted just before introduction of the buffer liquid to the aplj. fig. 1 of the appendix also depicts an isolation transformer, which may be utilized if the ce sampling device 232a and mass analysis system utilize a common power supply, to electrically decouple those two components. as will be appreciated by the person of skill in the art, a separate isolation transformer may not be required depending upon the type and configuration of the power supply supporting the mass analysis system, or if separate power supplies are utilized. [0032] fig. 2b depicts an enlarged partial view of an example system 200b for analyzing a separated sample from an lc capillary 202b. a number of features are described above in the context of fig. 2a and as such are not necessarily described further, but are numbered consistently herein for clarity. in general, the system 200b includes a transport liquid supply system 222b having a pump 224b, a transport liquid source 226b, and a transport liquid supply conduit 227b. the system 200b further includes an electrical conductor 207b for connecting the transport liquid supply conduit 227b to the removal conduit 210b, and a first electrical contact 206b connected to the transport liquid supply conduit 227b. the first electrical contact 206b and the electrical conductor 207b ensure that the transport liquid and removal conduit 210b are grounded. the lc capillary 202b is interfaced with an opi 204b by disposing a second end of the capillary 202b having a tip 212b within a receiving volume 228b defined at least in part by a meniscus 229b. by disposing the tip 212b within the receiving volume 228b, the eluent released from the lc capillary 202b will be grounded. 
thus, high conductivity liquids, which are typically incompatible with standard ms systems, may be used in the lc-ms system described herein, because the electrical contact 206b reduces the ionization suppression from the high concentration eluent discharged from the lc system and permits direct introduction of the eluent (now diluted in the transport liquid) into the opi 204b without interfering with the separation operation of the lc system. this configuration enables the diluted eluent from the lc capillary 202b to flow directly through the opi 204b to the electrospray electrode of the mass analysis system. [0033] fig. 3a depicts another enlarged partial view of an example system 300a for analyzing a separated sample from a ce capillary 302a. a number of features are described above in the context of fig. 2a and as such are not necessarily described further, but are numbered consistently herein for clarity. in general, the system 300a includes a transport liquid supply system 322a having a pump 324a, a transport liquid source 326a, and a transport liquid supply conduit 327a. the system 300a further includes an electrical conductor 307a for connecting the transport liquid supply conduit 327a to the removal conduit 310a, and a first electrical contact 306a connected to the transport liquid supply conduit 327a. the first electrical contact 306a and the electrical conductor 307a ensure that the transport liquid and removal conduit 310a are grounded. the system 300a further includes a receiving volume 328a defined at least in part by a meniscus 329a. the ce capillary 302a is interfaced with an opi 304a by disposing a second end of the capillary 302a having a tip 312a within the removal conduit 310a. the capillary 302a may be spaced apart from the removal conduit 310a such that there is no direct contact between the capillary 302a and the removal conduit 310a. 
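the dilution of the high-concentration lc eluent in the transport liquid, discussed above, can be quantified as a simple flow ratio: the dilution factor is the combined flow over the eluent flow, and the analyte (or buffer) concentration drops by that factor. the flow rates and concentration below are illustrative assumptions, not values from this disclosure.

```python
# flow-ratio sketch of eluent dilution in the opi; the flow rates and
# the 100 mM buffer concentration are illustrative assumptions.

def dilution_factor(q_transport_ul_min: float, q_eluent_ul_min: float) -> float:
    """ratio of combined flow to eluent flow; a higher value means more
    dilution and hence less ionization suppression at the esi source."""
    return (q_transport_ul_min + q_eluent_ul_min) / q_eluent_ul_min

def diluted_concentration(c_eluent: float, q_transport_ul_min: float,
                          q_eluent_ul_min: float) -> float:
    """concentration after mixing eluent into the transport liquid."""
    return c_eluent / dilution_factor(q_transport_ul_min, q_eluent_ul_min)

print(dilution_factor(300, 2))                  # 151.0 (151-fold dilution)
print(diluted_concentration(100.0, 300, 2))     # ~0.66 (e.g., mM)
```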
alternatively, the capillary 302a may be in contact with the removal conduit 310a so long as the contact does not completely block the flow of transport liquid through the removal conduit 310a. in either configuration, by disposing the tip 312a of the capillary 302a within the removal conduit 310a, the capillary is in contact with the transport liquid and the eluent from the capillary 302a is therefore grounded via the first electrical contact 306a. the capillary 302a is sized so as to provide sufficient space between the capillary 302a and the removal conduit 310a to enable flow of the transport liquid. this configuration enables the eluent from the ce capillary 302a to be diluted in the transport liquid and flow directly through the opi 304a to the mass analysis system, while maintaining the required potential on the ce system (e.g., 20 kv) and on the electrospray electrode of the mass analysis system (e.g., 5 kv). [0034] fig. 3b depicts another enlarged partial view of an example system 300b for analyzing a separated sample from an lc capillary 302b. a number of features are described above in the context of fig. 2a and, as such, are not necessarily described further, but are numbered consistently herein for clarity. in general, the system 300b includes a transport liquid supply system 322b having a pump 324b, a transport liquid source 326b, and a transport liquid supply conduit 327b. the system 300b further includes an electrical conductor 307b for connecting the transport liquid supply conduit 327b to the removal conduit 310b, and a first electrical contact 306b connected to the transport liquid supply conduit 327b. the first electrical contact 306b and the electrical conductor 307b ensure that the transport liquid and removal conduit 310b are grounded. the system 300b further includes a receiving volume 328b defined at least in part by a meniscus 329b. 
the lc capillary 302b is interfaced with an opi 304b by disposing a second end of the capillary 302b having a tip 312b within the removal conduit 310b. the capillary 302b may be spaced apart from the removal conduit 310b or in contact therewith, provided transport liquid flow is maintained, as noted above with regard to fig. 3a. by disposing the tip 312b of the capillary 302b within the removal conduit 310b, the capillary is in contact with the transport liquid and the eluent from the capillary 302b is therefore grounded via the first electrical contact 306b. thus, high conductivity liquids, which are typically incompatible with standard ms systems, may be used in the lc-ms system described herein, because the electrical contact 306b reduces the ionization suppression from the high concentration eluent discharged from the lc system, thus making it compatible with the mass analysis system. this configuration enables the eluent from the lc capillary 302b to be diluted in the transport liquid and flow directly through the opi 304b to the mass analysis system. [0035] fig. 4a depicts another enlarged partial view of an example system 400a for analyzing a separated sample from a ce capillary 402a utilizing, in this case, a physical interface or connector 429a that may be secured to the opi 404a via a conductive or non-conductive flexible fastener or gasket 430a. a number of features are described above in the context of fig. 2a and, as such, are not necessarily described further, but are numbered consistently herein for clarity. in general, the system 400a includes a transport liquid supply system 422a having a pump 424a, a transport liquid source 426a, and a transport liquid supply conduit 427a. the system 400a further includes an electrical conductor 407a for connecting the transport liquid supply conduit 427a to the removal conduit 410a, and a first electrical contact 406a connected to the transport liquid supply conduit 427a. 
the first electrical contact 406a and the electrical conductor 407a ensure that the transport liquid and removal conduit 410a are grounded. the ce capillary 402a is connected to an opi 404a by the interface 429a, which is coupled to the transport liquid supply conduit 427a and the second end of the capillary 402a. the interface 429a defines a receiving volume 428a proximate the removal conduit 410a, and thus, is configured to receive transport liquid from the transport liquid supply conduit 427a. the interface 429a is further configured to be communicatively coupled with at least a tip 412a of the second end of the capillary 402a, e.g., via an inlet. in this example system 400a, the transport liquid floods the interface 429a. the eluent from the capillary 402a can then be received into the receiving volume 428a defined by the interface 429a, diluted in the transport liquid, and removed via the removal conduit 410a. [0036] the interface 429a and/or fastener 430a may be conductive or non-conductive. examples of a non-conductive interface 429a include, but are not limited to, a tube, chamber, or conduit comprising a non-conductive material, such as rubber or plastic. examples of a conductive interface 429a include, but are not limited to, a tube, chamber, or conduit comprising a conductive material, such as a conductive metal (e.g., copper, aluminum, steel). depending on the conductivity of the interface 429a and/or the fastener 430a, additional grounding conductors may be required to maintain the required potential on the ce system (e.g., 20 kv) and on the electrospray electrode of the mass analysis system (e.g., 5 kv), as the opi between the ce system and the esi electrically isolates the ce system from the mass analysis system (e.g., a fused silica capillary is an insulator between the opi and the esi). 
the ce system applies a voltage, positive or negative, at one end, and a counter electrode applies a second voltage at the other end coupled to the interface 429a, which must be consistent with the voltage applied at the open end of the opi 404a to maintain a well-defined voltage drop across the ce capillary. in examples, the fastener 430a may be a solid gasket, a chemical adhesive, or an adhesive wrap or tape. the interface 429a may include at least one flexible element or be made in whole or in part from at least one of rubber, polyurethane, neoprene, or silicone. other fastener 430a configurations will be apparent to a person of skill in the art. use of a non-conductive fastener 430a with a conductive interface 429a requires that a second electrical contact 416a be disposed on the interface 429a to maintain the potential required in the ce system. if the interface 429a itself is non-conductive, a second electrical contact 416a’ should be connected to the tip 412a. in examples where the interface 429a is conductive, the first electrical contact 406a is sufficient to ground the eluent from the capillary 402a, so any second electrical contact 416a’ is optional, but may be desirable. [0037] fig. 4b depicts another enlarged partial view of an example system 400b for analyzing a separated sample from an lc capillary 402b using an interface 429b connected via a fastener 430b. a number of features are described above in the context of figs. 2a and 4a and, as such, are not necessarily described further, but are numbered consistently herein for clarity. in general, the system 400b includes a transport liquid supply system 422b having a pump 424b, a transport liquid source 426b, and a transport liquid supply conduit 427b. the system 400b further includes an electrical conductor 407b for connecting the transport liquid supply conduit 427b to the removal conduit 410b, and a first electrical contact 406b connected to the transport liquid supply conduit 427b. 
the lc capillary 402b is connected to an opi 404b by the interface 429b, which is coupled to the transport liquid supply conduit 427b and the second end of the capillary 402b. the eluent from the capillary 402b is received by the receiving volume 428b defined by the interface 429b, diluted in the transport liquid to reduce the ionization suppression from the high concentration eluent discharged from the lc system, and removed via the removal conduit 410b. configurations and materials for the interface 429b and fastener 430b, as well as requirements for electrical contacts 416b in view thereof, are described above in the context of fig. 4a. [0038] fig. 5a depicts another enlarged partial view of an example system 500a for analyzing a separated sample from a ce capillary 502a; fig. 5a is another configuration of the aplj first depicted in fig. 2a, but where a tip 512a of the sample capillary 502a is located within the atmosphere (e.g., air side of the aplj) about the opi 504a, rather than within a receiving volume 528a (liquid side of the aplj) thereof. although the opi 504a is inverted compared to the previous examples, the components utilized therein are consistent. thus, a number of features are described above in the context of fig. 2a and as such are not necessarily described further, but are numbered consistently for clarity. in general, the system 500a includes a transport liquid supply system 522a having a pump 524a, a transport liquid source 526a, and a transport liquid supply conduit 527a. the system 500a further includes an electrical conductor 507a for connecting the transport liquid supply conduit 527a to the removal conduit 510a, and a first electrical contact 506a connected to the transport liquid supply conduit 527a. the first electrical contact 506a and the electrical conductor 507a ensure that the transport liquid and removal conduit 510a are grounded. 
[0039] in the example system 500a, one end of the capillary 502a has a conductive tip 512a that is positioned remote from and above the open end of the opi 504a. a droplet 508a of eluent from the capillary 502a can be released from the tip 512a under gravity into a receiving volume 528a defined at least in part by a meniscus 529a. since the tip 512a of the capillary 502a is disposed remote from and above the receiving volume 528a, a second electrical contact 516a connected to the conductive tip 512a is required to maintain potential on the ce system. thus, when the eluent droplet 508a is released from the tip 512a of the capillary 502a, the eluent 508a is grounded via the second electrical contact 516a. the eluent is diluted in the transport liquid and then removed via the removal conduit 510a to the mass analysis system. [0040] fig. 5b depicts another enlarged partial view of an example system 500b for analyzing a separated sample from an lc capillary 502b. a number of features are described above in the context of figs. 2a and 5a and as such are not necessarily described further, but are numbered consistently herein for clarity. in general, the system 500b includes a transport liquid supply system 522b having a pump 524b, a transport liquid source 526b, and a transport liquid supply conduit 527b. the system 500b further includes an electrical conductor 507b for connecting the transport liquid supply conduit 527b to the removal conduit 510b, and a first electrical contact 506b connected to the transport liquid supply conduit 527b. the first electrical contact 506b and the electrical conductor 507b ensure that the transport liquid and removal conduit 510b are grounded. the capillary 502b includes a conductive tip 512b positioned remote from and above the open end of the opi 504b. a droplet of eluent 508b from the capillary 502b can be released from the tip 512b under gravity into a receiving volume 528b defined at least in part by a meniscus 529b. 
a second electrical contact 516b is connected to the conductive tip 512b; thus, when the eluent droplet 508b is released from the tip 512b of the capillary 502b, the eluent 508b is grounded via the second electrical contact 516b. thus, high conductivity liquids, which are typically incompatible with standard ms systems, may be used in the lc-ms system described herein, because the electrical contact 516b reduces the ionization suppression from the high concentration eluent discharged from the lc system prior to entering the ms system via the removal conduit 510b. [0041] fig. 6 depicts a method 600 for analyzing a separated sample received from a capillary. the capillary has a first end connected to a sampling device that performs either capillary electrophoresis (ce) or liquid chromatography (lc). example systems for connecting a ce or lc sampling device are described above, for example, in figs. 2a-5b. the method 600 includes receiving an eluent (e.g., separated sample and solvent) from the capillary, operation 602. the eluent is received into a receiving volume defined at least in part by a transport liquid and diluted in the transport liquid. the receiving volume may be defined at least in part by a meniscus. in advance of performing operation 602, the method 600 may include supplying the transport liquid to a transport liquid supply conduit, operation 614. the transport liquid is supplied from a transport liquid supply system, which is in fluidic communication with the transport liquid supply conduit. the transport liquid flows from a transport liquid source through the transport liquid supply conduit towards an open end of an opi, where the receiving volume is formed. a first electrical contact is connected to the transport liquid supply conduit to ground the transport liquid. the received diluted eluent is then aspirated into a liquid exhaust system, operation 604.
the liquid exhaust system is in fluidic communication with a removal conduit configured to remove liquid from the receiving volume. the liquid exhaust system is described above, for example, in fig. 1. the method continues with analyzing the received diluted eluent with a mass analysis system, operation 606. [0042] the capillary from which the eluent is received in operation 602 may be arranged in one of a variety of example configurations, as described above in figs. 2a-5b. in one example, the method 600 includes receiving the second end of the capillary within the meniscus, operation 608. the second end of the capillary may be disposed within the transport liquid and outside of the removal conduit, such as in the example systems shown in figs. 2a and 2b. alternatively, the second end of the capillary may be disposed within the removal conduit, such as in the example systems shown in figs. 3a and 3b. in these examples, the perimeter of the second end of the capillary may be in contact with the removal conduit. however, the contact between the second end of the capillary and the removal conduit must be configured to allow transport liquid to flow from the receiving volume into the removal conduit. [0043] in a second example, the method 600 includes receiving an interface defining the receiving volume proximate the removal conduit, operation 610. the interface is coupled to the transport liquid supply conduit and the second end of the capillary. thus, once the interface is received, operation 610, an eluent from the capillary can be received, operation 602, into the interface defining the receiving volume. as described above, the interface may include at least one flexible element and may be connected to a second electrical contact. example systems utilizing the interface received in operation 610 are described herein above, for example, in figs. 4a and 4b.
in a third example, the method 600 includes receiving an eluent from the capillary where the eluent is released from the second end of the capillary under gravity into the receiving volume, operation 612. example systems consistent with operation 612 are described above, for example, in figs. 5a and 5b. the second end of the capillary is positioned remote from and above the receiving volume such that gravity causes droplets of the eluent to be released from the tip of the capillary. since the capillary is not in direct contact with the receiving volume, a second electrical contact is disposed on the tip of the capillary and configured to ground the eluent as it is released from the second end of the capillary. [0044] fig. 7 depicts one example of a suitable operating environment 700 in which one or more of the present examples can be implemented. this operating environment may be incorporated directly into the controller for a mass spectrometry system, e.g., the controller depicted in fig. 1. the controller may further interface with a ce or lc system that supplies eluents to the opi. this is only one example of a suitable operating environment and is not intended to suggest any limitation as to the scope of use or functionality. other well-known computing systems, environments, and/or configurations that can be suitable for use include, but are not limited to, personal computers, server computers, hand-held or laptop devices, multiprocessor systems, microprocessor-based systems, programmable consumer electronics such as smart phones, network pcs, minicomputers, mainframe computers, tablets, distributed computing environments that include any of the above systems or devices, and the like. [0045] in its most basic configuration, operating environment 700 typically includes at least one processing unit 702 and memory 704.
depending on the exact configuration and type of computing device, memory 704 (storing, among other things, instructions to control the sampling device, release of eluent from the capillary, liquid flow rates, interface operation of the ce or lc with that of the ms, etc., or perform other methods disclosed herein) can be volatile (such as ram), non-volatile (such as rom, flash memory, etc.), or some combination of the two. this most basic configuration is illustrated in fig. 7 by dashed line 706. further, environment 700 can also include storage devices (removable, 708, and/or non-removable, 710) including, but not limited to, magnetic or optical disks or tape. similarly, environment 700 can also have input device(s) 714 such as touch screens, keyboard, mouse, pen, voice input, etc., and/or output device(s) 716 such as a display, speakers, printer, etc. also included in the environment can be one or more communication connections 712, such as lan, wan, point to point, bluetooth, rf, etc. [0046] operating environment 700 typically includes at least some form of computer readable media. computer readable media can be any available media that can be accessed by processing unit 702 or other devices having the operating environment. by way of example, and not limitation, computer readable media can include computer storage media and communication media. computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. computer storage media includes ram, rom, eeprom, flash memory or other memory technology, cd-rom, digital versatile disks (dvd) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, solid state storage, or any other tangible medium which can be used to store the desired information.
communication media embodies computer readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media. the term "modulated data signal" means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. by way of example, and not limitation, communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, rf, infrared and other wireless media. combinations of any of the above should also be included within the scope of computer readable media. a computer-readable device is a hardware device incorporating computer storage media. [0047] the operating environment 700 can be a single computer operating in a networked environment using logical connections to one or more remote computers. the remote computer can be a personal computer, a server, a router, a network pc, a peer device or other common network node, and typically includes many or all of the elements described above as well as others not so mentioned. the logical connections can include any method supported by available communications media. such networking environments are commonplace in offices, enterprise-wide computer networks, intranets and the internet. [0048] in some examples, the components described herein include such modules or instructions executable by computer system 700 that can be stored on computer storage medium and other tangible mediums and transmitted in communication media. computer storage media includes volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules, or other data. combinations of any of the above should also be included within the scope of readable media.
in some examples, computer system 700 is part of a network that stores data in remote storage media for use by the computer system 700. [0049] this disclosure described some examples of the present technology with reference to the accompanying drawings, in which only some of the possible examples were shown. other aspects can, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein. rather, these examples were provided so that this disclosure was thorough and complete and fully conveyed the scope of the possible examples to those skilled in the art. [0050] although specific examples were described herein, the scope of the technology is not limited to those specific examples. one skilled in the art will recognize other examples or improvements that are within the scope of the present technology. therefore, the specific structure, acts, or media are disclosed only as illustrative examples. examples according to the technology may also combine elements or components of those that are disclosed in general but not expressly exemplified in combination, unless otherwise stated herein. the scope of the technology is defined by the following claims and any equivalents therein.
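as a concluding illustration of the sampling workflow, the ordering of the method-600 operations (fig. 6) can be sketched in code. this is an illustrative outline only: the operation numbers come from the description above, while the function name and configuration labels are invented for the sketch.

```python
# illustrative sketch of method 600 (fig. 6); the operation numbers come
# from the description, while the function name and configuration labels
# are invented for this example.

def run_method_600(configuration):
    """return the ordered operations for a given capillary arrangement."""
    steps = [614]                       # supply grounded transport liquid
    if configuration == "in_meniscus":  # figs. 2a-3b
        steps.append(608)               # receive capillary end within the meniscus
    elif configuration == "interface":  # figs. 4a-4b
        steps.append(610)               # receive interface defining the receiving volume
    elif configuration == "gravity":    # figs. 5a-5b
        steps.append(612)               # droplet released under gravity, grounded at the tip
    else:
        raise ValueError("unknown capillary configuration")
    steps += [602, 604, 606]            # receive/dilute eluent, aspirate, mass-analyze
    return steps

print(run_method_600("gravity"))
```

in each branch, only the configuration-specific receiving step differs; the grounding, dilution, aspiration, and mass-analysis operations are common to all three arrangements.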
139-719-586-519-094
US
[ "EP", "DE", "JP", "US", "AU", "WO", "CA", "ES", "AT", "DK" ]
C12N15/09,C12N15/10,C12N1/21,C12N9/00,C12N15/52,C12Q1/00,C12Q1/25,C12Q1/68,C12N1/12,C40B40/10,C12N1/14,C12N1/18,C40B30/02
1995-07-18T00:00:00
1995
[ "C12", "C40" ]
screening methods for enzymes and enzyme kits
recombinant enzyme libraries and kits where a plurality of enzymes are each characterized by different physical and/or chemical characteristics and classified by common characteristics. the characteristics are determined by screening of recombinant enzymes expressed by a dna library produced from various microorganisms. also disclosed is a process for identifying clones of a recombinant library which express a protein with a desired activity by screening a library of expression clones randomly produced from dna of at least one microorganism, said screening being effected on expression products of said clones to thereby identify clones which express a protein with a desired activity. also disclosed is a process of screening clones having dna from an uncultivated microorganism for a specified protein activity by screening for a specified protein activity in a library of clones prepared by (i) recovering dna from a dna population derived from at least one uncultivated microorganism; and (ii) transforming a host with recovered dna to produce a library of clones which is screened for the specified protein activity.
1. a process for obtaining a recombinant enzyme library derived from different microorganisms, comprising: screening recombinant proteins produced by a plurality of expression clones, derived from different microorganisms, said screening being effected to determine the clones which produce recombinant enzymes and to determine a plurality of different enzyme characteristics for the recombinant enzymes; and classifying the recombinant enzymes by said enzyme characteristics, thereby obtaining said recombinant enzyme library.
2. the process of claim 1, wherein said screening includes screening said recombinant enzymes for a chemical characteristic, and rescreening recombinant enzymes having said chemical characteristic for a second chemical characteristic.
3. the process of claim 1, wherein said screening includes screening said recombinant enzymes for one or more of oxidoreductases, transferases, hydrolases, lyases, isomerases and ligases.
4. the process of claim 2 or 3, wherein said screening further includes rescreening said recombinant enzymes for one or more specified chemical functionalities.
5. the process of claim 1 or 4, wherein said screening includes screening said recombinant enzymes for one or more physical properties.
6. the process of any one of claims 1 to 5, further comprising the step of identifying the clones producing the recombinant enzymes.
7. the process of claim 6, further comprising the step of sequencing the identified clones to identify the dna sequence encoding the enzyme.
8. the process of any one of claims 1 to 7, wherein the microorganisms are uncultured microorganisms.
9. the process of any one of claims 1 to 8, wherein the microorganisms are obtained from an environmental sample.
10. the process of claim 9, wherein the microorganisms are extremophiles.
11. the process of claim 10, wherein the extremophiles are thermophiles, hyperthermophiles, psychrophiles, and psychrotrophs.
12. the process of any one of claims 9 to 11, wherein the environmental sample is obtained from arctic and antarctic ice, water, permafrost, volcanoes, soil, or plants in tropical areas.
this invention relates to the field of preparing and screening libraries of clones containing microbially derived dna and to protein, e.g. enzyme, libraries. more particularly, the present invention is directed to recombinant enzyme expression libraries and recombinant enzyme libraries wherein the recombinant enzymes are generated from dna obtained from microorganisms. industry has recognized the need for new enzymes for a wide variety of industrial applications. as a result, a variety of microorganisms have been screened to ascertain whether or not such microorganisms have a desired enzyme activity. if such a microorganism does have a desired enzyme activity, the enzyme is then recovered from the microorganism. naturally occurring assemblages of microorganisms often encompass a bewildering array of physiological and metabolic diversity. in fact, it has been estimated that to date less than one percent of the world's organisms have been cultured. it has been suggested that a large fraction of this diversity thus far has been unrecognized due to difficulties in enriching and isolating microorganisms in pure culture. therefore, it has been difficult or impossible to identify or isolate valuable enzymes from these samples. these limitations suggest the need for alternative approaches to characterize the physiological and metabolic potential, i.e. activities of interest, of as-yet uncultivated microorganisms, which to date have been characterized solely by analyses of pcr amplified rrna gene fragments, clonally recovered from mixed assemblage nucleic acids. mccormick (methods in enzymology, pages 445-449, 1987) and sambrook et al. (molecular cloning: a laboratory manual vol. 2, cold spring harbor, new york, pages 8.50-8.51, 1989) describe a method termed sib selection. this method is directed to the problem of gene isolation from a library of dna sequences.
sib selection is a method of sequential fractionation of a heterogeneous sample that can be applied to isolation of a sequence, gene, or gene family from a complete library. library fractions that are positive for a certain activity are further sub-fractionated until a single positive clone is obtained. in accordance with one aspect of the present invention, there is provided a novel approach for obtaining enzymes for further use. in accordance with the present invention, recombinant enzymes are generated from microorganisms and are classified by various enzyme characteristics. also described herein is a recombinant expression library which is comprised of a multiplicity of clones which are capable of expressing recombinant enzymes. the expression library is produced by recovering dna from a microorganism, cloning such dna into an appropriate expression vector which is then used to transfect or transform an appropriate host for expression of a recombinant protein. thus, for example, genomic dna may be recovered from either a culturable or non-culturable organism and employed to produce an appropriate recombinant expression library for subsequent determination of enzyme activity. such a recombinant expression library may be prepared without prescreening the organism from which the library is prepared for enzyme activity. having prepared a multiplicity of recombinant expression clones from dna isolated from an organism, the polypeptides expressed by such clones are screened for enzyme activity and specified enzyme characteristics in order to identify and classify the recombinant clones which produce polypeptides having the specified enzyme characteristics. also described herein is a process of screening clones having dna from an uncultivated microorganism for a specified protein, e.g. enzyme, activity which process comprises: screening for a specified protein, e.g.
enzyme, activity in a library of clones prepared by (i) recovering dna from a dna population derived from at least one uncultivated microorganism; and (ii) transforming a host with recovered dna to produce a library of clones which are screened for the specified protein, e.g. enzyme, activity. the library is produced from dna which is recovered without culturing of an organism, particularly where the dna is recovered from an environmental sample containing microorganisms which are not or cannot be cultured. preferably, dna is ligated into a vector, particularly wherein the vector further comprises expression regulatory sequences which can control and regulate the production of a detectable enzyme activity from the ligated dna. the f-factor (or fertility factor) in e. coli is a plasmid which effects high frequency transfer of itself during conjugation and less frequent transfer of the bacterial chromosome itself. to achieve and stably propagate large dna fragments from mixed microbial samples, a particularly preferred embodiment is to use a cloning vector containing an f-factor origin of replication to generate genomic libraries that can be replicated with a high degree of fidelity. when integrated with dna from a mixed uncultured environmental sample, this makes it possible to achieve large genomic fragments in the form of a stable "environmental dna library." preferably, double stranded dna obtained from the uncultivated dna population is selected by: converting the double stranded genomic dna into single stranded dna; recovering from the converted single stranded dna single stranded dna which specifically binds, such as by hybridization, to a probe dna sequence; and converting recovered single stranded dna to double stranded dna. the probe may be directly or indirectly bound to a solid phase by which it is separated from single stranded dna which is not hybridized or otherwise specifically bound to the probe.
the process can also include releasing single stranded dna from said probe after recovering said hybridized or otherwise bound single stranded dna and amplifying the single stranded dna so released prior to converting it to double stranded dna. also described herein is a process of screening clones having dna from an uncultivated microorganism for a specified protein, e.g. enzyme, activity which comprises screening for a specified gene cluster protein product activity in the library of clones prepared by: (i) recovering dna from a dna population derived from at least one uncultivated microorganism; and (ii) transforming a host with recovered dna to produce a library of clones which is screened for the specified protein, e.g. enzyme, activity. the library is produced from gene cluster dna which is recovered without culturing of an organism, particularly where the dna gene clusters are recovered from an environmental sample containing microorganisms which are not or cannot be cultured. alternatively, double-stranded gene cluster dna obtained from the uncultivated dna population is selected by converting the double-stranded genomic gene cluster dna into single-stranded dna; recovering from the converted single-stranded gene cluster polycistron dna, single-stranded dna which specifically binds, such as by hybridization, to a polynucleotide probe sequence; and converting recovered single-stranded gene cluster dna to double-stranded dna. these and other aspects of the present invention are described with respect to particular preferred embodiments and will be apparent to those skilled in the art from the teachings herein. brief description of the drawings figure 1 shows an overview of the procedures used to construct an environmental library from a mixed picoplankton sample as described in example 3.
figure 2 is a schematic representation of one embodiment of various tiers of chemical characteristics of an enzyme which may be employed in the present invention as described in example 4. figure 3 is a schematic representation of another embodiment of various tiers of chemical characteristics of an enzyme which may be employed in the present invention as described in example 4. figure 4 is a schematic representation of a further embodiment of various tiers of chemical characteristics of an enzyme which may be employed in the present invention as described in example 4. figure 5 is a schematic representation of a still further embodiment of various tiers of chemical characteristics of an enzyme which may be employed in the present invention as described in example 4. figure 6 shows the ph optima results produced by enzyme esl-001-01 in the experiments described in example 5. figure 7 shows the temperature optima results produced by enzyme esl-001-01 in the experiments described in example 5. figure 8 shows the organic solvent tolerance results produced by enzyme esl-001-01 in the experiments described in example 5. detailed description of preferred embodiments in accordance with a preferred aspect of the present invention, the recombinant enzymes are characterized by both physical and chemical characteristics and such chemical characteristics are preferably classified in a tiered manner such that recombinant enzymes having a chemical characteristic in common are then classified by other chemical characteristics, which may or may not be a more selective or specific chemical characteristic, and so on, as hereinafter indicated in more detail. as hereinabove indicated, the recombinant enzymes are also preferably classified by physical characteristics and one or more tiers of the enzymes which are classified by chemical characteristics may also be classified by physical characteristics or vice versa.
as used herein, the term "chemical characteristic" of a recombinant enzyme refers to the substrate or chemical functionality upon which the enzyme acts and/or the catalytic reaction performed by the enzyme; e.g., the catalytic reaction may be hydrolysis (hydrolases) and the chemical functionality may be the type of bond upon which the enzyme acts (esterases cleave ester bonds) or may be the particular type of structure upon which the enzyme acts (a glycosidase which acts on glycosidic bonds). thus, for example, a recombinant enzyme which acts on glycosidic bonds may, for example, be chemically classified in accordance with the tiered system as: tier 1: hydrolase; tier 2: acetal bonds; tier 3: glycosidase. as used herein, a "physical characteristic" with respect to a recombinant enzyme means a property (other than a chemical reaction) such as ph; temperature stability; optimum temperature for catalytic reaction; organic solvent tolerance; metal ion selectivity; detergent sensitivity, etc. in an embodiment of the invention, in which a tiered approach is employed for classifying the recombinant enzymes by chemical and/or physical characteristics, the enzymes at one or more of the chemical characteristic tiers may also be classified by one or more physical characteristics and vice versa. in a preferred embodiment, the enzymes are classified by both physical and chemical characteristics, e.g., the individual substrates upon which they act as well as physical characteristics. thus, for example, as a representative example of the manner in which a recombinant enzyme may be classified in accordance with the present invention, a recombinant enzyme which is a protease (in this illustration, tier 1 is hydrolase; tier 2 is amide (peptide bond)) may be further classified in tier 3 as to the ultimate site in the amino acid sequence where cleavage occurs, e.g., anion, cation, large hydrophobic, small hydrophobic.
each of the recombinant enzymes which has been classified by the side chain in tier 3 may also be further classified by physical characteristics of the type hereinabove indicated. in this manner, it is possible to select from the recombinant library enzymes which have a specified chemical characteristic in common, e.g., all endopeptidases (which act on internal peptide bonds), and which have a specified physical characteristic in common, e.g., all act optimally at a ph within a specified range. as hereinabove indicated, a recombinant enzyme library prepared from a microorganism is preferably classified by chemical characteristics in a tiered approach. this may be accomplished by initially testing the recombinant polypeptides generated by the library in a low selectivity screen, e.g., the catalytic reaction performed by the enzyme. this may be conveniently accomplished by screening for one or more of the six iub classes: oxidoreductases, transferases, hydrolases, lyases, isomerases, ligases. the recombinant enzymes which are determined to be positive for one or more of the iub classes may then be rescreened for a more specific enzyme activity. thus, for example, if the recombinant library is screened for hydrolase activity, then those recombinant clones which are positive for hydrolase activity may be rescreened for a more specialized hydrolase activity, i.e., the type of bond on which the hydrolase acts. thus, for example, the recombinant enzymes which are hydrolases may be rescreened to ascertain those hydrolases which act on one or more specified chemical functionalities, such as: (a) amide (peptide bonds), i.e., proteases; (b) ester bonds, i.e., esterases and lipases; (c) acetals, i.e., glycosidases, etc. the recombinant enzymes which have been classified by the chemical bond on which they act may then be rescreened to determine a more specialized activity therefor, such as the type of substrate on which they act.
thus, for example, those recombinant enzymes which have been classified as acting on ester bonds (lipases and esterases) may be rescreened to determine the ability thereof to generate optically active compounds, i.e., the ability to act on specified substrates, such as meso alcohols, meso diacids, chiral alcohols, chiral acids, etc. for example, the recombinant enzymes which have been classified as acting on acetals may be rescreened to classify such recombinant enzymes by a specific type of substrate upon which they act, e.g., (a) p1 sugar such as glucose, galactose, etc., (b) glucose polymer (exo-, endo- or both), etc. enzyme tiers thus, as a representative but not limiting example, the following are representative enzyme tiers: tier 1: divisions are based upon the catalytic reaction performed by the enzyme, e.g., hydrolysis, reduction, oxidation, etc. the six iub classes will be used: oxidoreductases, transferases, hydrolases, lyases, isomerases, ligases. tier 2: divisions are based upon the chemical functionality undergoing reaction, e.g., esters, amides, phosphate diesters, sulfate mono esters, aldehydes, ketones, alcohols, acetals, ketals, alkanes, olefins, aromatic rings, heteroaromatic rings, molecular oxygen, enols, etc. lipases and esterases both cleave the ester bond; the distinction comes in whether the natural substrate is aggregated into a membrane (lipases) or dispersed into solution (esterases). tier 3: divisions and subdivisions are based upon the differences between individual substrate structures which are covalently attached to the functionality undergoing reaction as defined in tier 2. for example, in acetal hydrolysis: is the acetal part of glucose or galactose, or is the acetal the α or β anomer? these are the types of distinctions made in tier 3.
the divisions based upon substrate specificity are unique to each particular enzyme reaction; there will be different substrate distinctions depending upon whether the enzyme is, for example, a protease or phosphatase. tier 4: divisions are based on which of the two possible enantiomeric products the enzyme produces. this is a measure of the ability of the enzyme to selectively react with one of the two enantiomers (kinetic resolution), or the ability of the enzyme to react with a meso difunctional compound to selectively generate one of the two enantiomeric reaction products. tier 5 (orthogonal tier/physical character tier): the fifth tier is orthogonal to the other tiers. it is based on the physical properties of the enzymes, rather than the chemical reactions per se; the fifth tier forms a second dimension with which to classify the enzymes. the fifth tier can be applied to any of the other tiers, but will most often be applied to the third tier. thus, in accordance with an aspect of the present invention, an expression library is randomly produced from the dna of a microorganism, in particular, the genomic dna or cdna of the microorganism, and the recombinant proteins or polypeptides produced by such expression library are screened to classify the recombinant enzymes by different enzyme characteristics. in a preferred embodiment, the recombinant proteins are screened for one or more particular chemical characteristics and the enzymes identified as having such characteristics are then rescreened for a more specific chemical characteristic, and this rescreening may be repeated one or more times. in addition, in a preferred embodiment, the recombinant enzymes are also screened to classify such enzymes by one or more physical characteristics.
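as a rough illustration, the tiered chemical classification together with the orthogonal physical tier described above could be modeled as records filtered tier by tier. this is a hypothetical sketch only: the tier values follow the hydrolase/acetal/glycosidase example in the text, while the clone names, field names, and ph values are invented for the example.

```python
# hypothetical sketch of the tiered classification scheme; the tier
# labels follow the examples in the text, the enzyme records are invented.

library = [
    {"clone": "a", "tier1": "hydrolase", "tier2": "acetal",
     "tier3": "glycosidase", "ph_optimum": 7.5},
    {"clone": "b", "tier1": "hydrolase", "tier2": "amide",
     "tier3": "endopeptidase", "ph_optimum": 9.0},
    {"clone": "c", "tier1": "oxidoreductase", "tier2": "alcohol",
     "tier3": None, "ph_optimum": 7.0},
]

def select(enzymes, physical=None, **tiers):
    """filter a library by chemical tiers, then by an orthogonal physical predicate."""
    hits = [e for e in enzymes
            if all(e.get(k) == v for k, v in tiers.items())]
    if physical is not None:
        hits = [e for e in hits if physical(e)]
    return [e["clone"] for e in hits]

# all hydrolases acting on acetal bonds with a near-neutral ph optimum:
print(select(library, tier1="hydrolase", tier2="acetal",
             physical=lambda e: 6.5 <= e["ph_optimum"] <= 8.0))
```

the physical predicate is passed separately from the chemical tier keywords to mirror the text's point that the fifth tier is a second, orthogonal dimension that can be applied at any chemical tier.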
in this manner, the recombinant enzymes generated from the dna of a microorganism are classified by both chemical and physical characteristics and it is therefore possible to select recombinant enzymes from one or more different organisms that have one or more common chemical characteristics and/or one or more common physical characteristics. moreover, since such enzymes are recombinant enzymes, it is possible to produce such enzymes in desired quantities and with a desired purity. the tiered approach of the present invention is not limited to a tiered approach in which, for example, the tiers are more restrictive. for example, the tiered approach is also applicable to using a tiered approach in which, for example, the first tier is "wood degrading" enzymes. the second chemical tier could then, for example, be the type of enzyme which is a "wood degrading" enzyme. similarly, the first tier or any other tier could be physical characteristics and the next tier could be specified chemical characteristics. thus, the present invention is generally applicable to providing recombinant enzymes and recombinant enzyme libraries wherein various enzymes are classified by different chemical and/or physical characteristics. the microorganisms from which the recombinant libraries may be prepared include prokaryotic microorganisms, such as eubacteria and archaebacteria, and lower eukaryotic microorganisms such as fungi, some algae and protozoa. the microorganisms may be cultured microorganisms or uncultured microorganisms obtained from environmental samples and such microorganisms may be extremophiles, such as thermophiles, hyperthermophiles, psychrophiles, psychrotrophs, etc. preferably, the library is produced from dna which is recovered without culturing of an organism, particularly where the dna is recovered from an environmental sample containing microorganisms which are not or cannot be cultured.
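the construction-and-screening route described above (recover dna without culturing, clone it into a vector, transform a host, then screen the expression products for a desired activity) might be outlined as follows. every function and the toy sample records here are invented stand-ins for the laboratory steps; the sketch only shows the order of operations.

```python
# hypothetical outline of the library construction and screening pipeline
# described above; all names and the toy "assay" are invented for illustration.

def recover_dna(sample):
    """recover dna fragments directly from the sample, without culturing."""
    return sample  # stand-in: the sample is already a list of fragments

def clone_and_express(fragment):
    """stand-in for ligation into a vector, transformation, and expression."""
    return {"insert": fragment, "product": fragment.get("encodes")}

def screen_library(sample, activity):
    """return the clones whose expression product shows the desired activity."""
    clones = [clone_and_express(f) for f in recover_dna(sample)]
    return [c for c in clones if c["product"] == activity]

# toy environmental sample: one fragment encodes a hydrolase, one encodes
# nothing detectable, one encodes a ligase.
sample = [{"encodes": "hydrolase"}, {"encodes": None}, {"encodes": "ligase"}]
print(len(screen_library(sample, "hydrolase")))
```

note that, as in the text, screening is performed on the expression products of the clones rather than on the source organisms, so no prescreening or culturing of the sampled microorganisms is assumed.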
sources of microorganism dna, as a starting material from which a dna library is obtained, are particularly contemplated to include environmental samples, such as microbial samples obtained from arctic and antarctic ice, water or permafrost sources, materials of volcanic origin, materials from soil or plant sources in tropical areas, etc. thus, for example, genomic dna may be recovered from either an uncultured or non-culturable organism and employed to produce an appropriate library of clones for subsequent determination of enzyme activity. bacteria and many eukaryotes have a coordinated mechanism for regulating genes whose products are involved in related processes. the genes are clustered, in structures referred to as "gene clusters," on a single chromosome and are transcribed together under the control of a single regulatory sequence, including a single promoter which initiates transcription of the entire cluster. the gene cluster, the promoter, and additional sequences that function in regulation are altogether referred to as an "operon" and can include up to 20 or more genes, usually from 2 to 6 genes. thus, a gene cluster is a group of adjacent genes that are either identical or related, usually as to their function. some gene families consist of identical members. clustering is a prerequisite for maintaining identity between genes, although clustered genes are not necessarily identical. gene clusters range from extremes where a duplication has generated adjacent related genes to cases where hundreds of identical genes lie in a tandem array. sometimes no significance is discernible in a repetition of a particular gene. a principal example of this is the expressed duplicate insulin genes in some species, whereas a single insulin gene is adequate in other mammalian species. it is important to further research gene clusters and the extent to which the full length of the cluster is necessary for the expression of the proteins resulting therefrom.
further, gene clusters undergo continual reorganization and, thus, the ability to create heterogeneous libraries of gene clusters from, for example, bacterial or other prokaryote sources is valuable in determining sources of novel proteins, particularly including proteins, e.g. enzymes, such as, for example, the polyketide synthases that are responsible for the synthesis of polyketides having a vast array of useful activities. other types of proteins that are the product(s) of gene clusters are also contemplated, including, for example, antibiotics, antivirals, antitumor agents and regulatory proteins, such as insulin. polyketides are molecules which are an extremely rich source of bioactivities, including antibiotics (such as tetracyclines and erythromycin), anti-cancer agents (daunomycin), immunosuppressants (fk506 and rapamycin), and veterinary products (monensin). many polyketides (produced by polyketide synthases) are valuable as therapeutic agents. polyketide synthases are multifunctional enzymes that catalyze the biosynthesis of a huge variety of carbon chains differing in length and patterns of functionality and cyclization. polyketide synthase genes fall into gene clusters, and at least one type (designated type i) of polyketide synthases have large size genes and enzymes, complicating genetic manipulation and in vitro studies of these genes/proteins. the ability to select and combine desired components from a library of polyketide and post-polyketide biosynthesis genes for generation of novel polyketides for study is appealing. using the method(s) of the present invention facilitates the cloning of novel polyketide synthases, particularly when one uses the f-factor based vectors, which facilitate cloning of gene clusters.
preferably, the gene cluster dna is ligated into a vector, particularly wherein the vector further comprises expression regulatory sequences which can control and regulate the production of a detectable protein or protein-related array activity from the ligated gene clusters. use of vectors which have an exceptionally large capacity for exogenous dna introduction are particularly appropriate for use with such gene clusters and are described by way of example herein to include the f-factor (or fertility factor) of e. coli. this f-factor of e. coli is a plasmid which effects high-frequency transfer of itself during conjugation and is ideal for achieving and stably propagating large dna fragments, such as gene clusters from mixed microbial samples. the term "derived" or "isolated" means that material is removed from its original environment (e.g., the natural environment if it is naturally occurring). for example, a naturally-occurring polynucleotide or polypeptide present in a living animal is not isolated, but the same polynucleotide or polypeptide separated from some or all of the coexisting materials in the natural system, is isolated. as hereinabove indicated, the expression library may be produced from environmental samples, in which case dna may be recovered without culturing of an organism, or the dna may be recovered from a cultured organism. in preparing the expression library, genomic dna may be recovered from either a cultured organism or an environmental sample (for example, soil) by various procedures. the recovered or isolated dna is then fragmented into a size suitable for producing an expression library and for providing a reasonable probability that desired genes will be expressed and screened without the necessity of screening an excessive number of clones.
thus, for example, if the average genome fragment produced by shearing is 4.5 kbp, for a 1.8 mbp genome about 2000 clones should be screened to achieve about a 90% probability of obtaining a particular gene. in some cases, in particular where the dna is recovered without culturing, the dna is amplified (for example by pcr) after shearing. the sized dna is cloned into an appropriate expression vector and transformed into an appropriate host, preferably a bacterial host and in particular e. coli. although e. coli is preferred, a wide variety of other hosts may be used for producing an expression library. the expression vector which is used is preferably one which includes a promoter which is known to function in the selected host in case the native genomic promoter does not function in the host. as representative examples of expression vectors which may be used for preparing an expression library, there may be mentioned phage, plasmids, phagemids, cosmids, phosmids, bacterial artificial chromosomes, p1-based artificial chromosomes, yeast artificial chromosomes, and any other vectors specific for specific hosts of interest (such as bacillus, aspergillus, yeast, etc.). the vector may also include a tag of a type known in the art to facilitate purification. the following outlines a general procedure for producing expression libraries from both culturable and non-culturable organisms.

culturable organisms:
obtain biomass
dna isolation (ctab)
shear dna (25 gauge needle)
blunt dna (mung bean nuclease)
methylate (ecori methylase)
ligate to ecori linkers (ggaattcc)
cut back linkers (ecori restriction endonuclease)
size fractionate (sucrose gradient)
ligate to lambda vector (lambda zap ii and gt11)
package (in vitro lambda packaging extract)
plate on e. coli host and amplify

unculturable organisms:
obtain cells
isolate dna (various methods)
blunt dna (mung bean nuclease)
ligate to adaptor containing a not i site and conjugated to magnetic beads
ligate unconjugated adaptor to the other end of the dna
amplify dna in a reaction which allows for high fidelity, and uses adaptor sequences as primers
cut dna with not i
size fractionate (sucrose gradient or sephacryl column)
ligate to lambda vector (lambda zap ii and gt11)
package (in vitro lambda packaging extract)
plate on e. coli host and amplify

the probe dna used for selectively recovering dna of interest from the dna derived from the at least one uncultured microorganism can be a full-length coding region sequence or a partial coding region sequence of dna for an enzyme of known activity, a phylogenetic marker or other identified dna sequence. the original dna library can preferably be probed using mixtures of probes comprising at least a portion of the dna sequence encoding the specified activity. these probes or probe libraries are preferably single-stranded, and the microbial dna which is probed has preferably been converted into single-stranded form. the probes that are particularly suitable are those derived from dna encoding enzymes having an activity similar or identical to the specified enzyme activity which is to be screened. the probe dna should be at least about 10 bases and preferably at least 15 bases. in one embodiment, the entire coding region may be employed as a probe. conditions for the hybridization in which dna is selectively isolated by the use of at least one dna probe will be designed to provide a hybridization stringency of at least about 50% sequence identity, more particularly a stringency providing for a sequence identity of at least about 70%.
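the clone-number estimate quoted earlier (screening enough random clones to reach a given probability of covering any particular gene) is conventionally computed with the clarke-carbon relation N = ln(1 - P) / ln(1 - f), where f is the insert-to-genome size ratio. the formula itself is not stated in the specification, so the sketch below is an assumption; it gives a lower bound, and screening more clones (as the ~2000 figure in the text does) only raises the coverage probability:

```python
import math

def clones_needed(p_coverage, insert_bp, genome_bp):
    """clarke-carbon estimate: number of random clones to screen so that any
    single-copy gene is represented with probability p_coverage."""
    f = insert_bp / genome_bp  # fraction of the genome carried by one clone
    return math.ceil(math.log(1.0 - p_coverage) / math.log(1.0 - f))

# 4.5 kbp average inserts, 1.8 mbp genome, 90% probability of coverage
n = clones_needed(0.90, 4_500, 1_800_000)
```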
hybridization techniques for probing a microbial dna library to isolate dna of potential interest are well known in the art, and any of those which are described in the literature are suitable for use herein, particularly those which use a solid phase-bound, directly or indirectly bound, probe dna for ease in separation from the remainder of the dna derived from the microorganisms. preferably the probe dna is "labeled" with one partner of a specific binding pair (i.e., a ligand) and the other partner of the pair is bound to a solid matrix to provide ease of separation of target from its source. the ligand and specific binding partner can be selected from, in either orientation, the following: (1) an antigen or hapten and an antibody or specific binding fragment thereof; (2) biotin or iminobiotin and avidin or streptavidin; (3) a sugar and a lectin specific therefor; (4) an enzyme and an inhibitor therefor; (5) an apoenzyme and cofactor; (6) complementary homopolymeric oligonucleotides; and (7) a hormone and a receptor therefor. the solid phase is preferably selected from: (1) a glass or polymeric surface; (2) a packed column of polymeric beads; and (3) magnetic or paramagnetic particles. the library of clones prepared as described above can be screened directly for a desired, e.g. enzymatic, activity without the need for culture expansion, amplification or other supplementary procedures. however, in one preferred embodiment, it is considered desirable to amplify the dna recovered from the individual clones, such as by pcr. further, it is optional but desirable to perform an amplification of the target dna that has been isolated. in this embodiment the selectively isolated dna is separated from the probe dna after isolation. it is then amplified before being used to transform hosts.
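the stringency thresholds above (selecting targets with at least about 50%, more particularly about 70%, sequence identity to the probe) can be illustrated with a toy identity scan. this is a simplified sketch only: the function names are assumptions, and real hybridization stringency depends on thermodynamics (temperature, salt), not on a bare identity count:

```python
def percent_identity(probe, window):
    """fraction of positions at which two equal-length sequences match."""
    assert len(probe) == len(window)
    return sum(a == b for a, b in zip(probe, window)) / len(probe)

def passes_stringency(probe, target, threshold=0.70):
    """true if any window of the target matches the probe at >= threshold identity."""
    n = len(probe)
    return any(percent_identity(probe, target[i:i + n]) >= threshold
               for i in range(len(target) - n + 1))
```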
the double stranded dna selected to include as at least a portion thereof a predetermined dna sequence can be rendered single stranded, subjected to amplification and reannealed to provide amplified numbers of selected double stranded dna. numerous amplification methodologies are now well known in the art. the selected dna is then used for preparing a library for screening by transforming a suitable organism. hosts, particularly those specifically identified herein as preferred, are transformed by artificial introduction of the vectors containing the target dna by inoculation under conditions conducive for such transformation. as representative examples of expression vectors which may be used there may be mentioned viral particles, baculovirus, phage, plasmids, phagemids, cosmids, phosmids, bacterial artificial chromosomes, viral dna (e.g., vaccinia, adenovirus, fowlpox virus, pseudorabies and derivatives of sv40), p1-based artificial chromosomes, yeast plasmids, yeast artificial chromosomes, and any other vectors specific for specific hosts of interest (such as bacillus, aspergillus, yeast, etc.). thus, for example, the dna may be included in any one of a variety of expression vectors for expressing a polypeptide. such vectors include chromosomal, nonchromosomal and synthetic dna sequences. large numbers of suitable vectors are known to those of skill in the art, and are commercially available. the following vectors are provided by way of example: bacterial: pqe70, pqe60, pqe-9 (qiagen), psix174, pbluescript sk, pbluescript ks, pnh8a, pnh16a, pnh18a, pnh46a (stratagene); ptrc99a, pkk223-3, pkk233-3, pdr540, prit5 (pharmacia); eukaryotic: pwlneo, psv2cat, pog44, pxt1, psg (stratagene), psvk3, pbpv, pmsg, psvl (pharmacia). however, any other plasmid or vector may be used as long as it is replicable and viable in the host. a particularly preferred type of vector for use in the present invention contains an f-factor origin of replication.
the f-factor (or fertility factor) in e. coli is a plasmid which effects high frequency transfer of itself during conjugation and less frequent transfer of the bacterial chromosome itself. a particularly preferred embodiment is to use cloning vectors referred to as "fosmids" or bacterial artificial chromosome (bac) vectors. these are derived from the e. coli f-factor and are able to stably integrate large segments of genomic dna. when integrated with dna from a mixed uncultured environmental sample, this makes it possible to achieve large genomic fragments in the form of a stable "environmental dna library." the dna derived from a microorganism(s) may be inserted into the vector by a variety of procedures. in general, the dna sequence is inserted into an appropriate restriction endonuclease site(s) by procedures known in the art. such procedures and others are deemed to be within the scope of those skilled in the art. the dna sequence in the expression vector is operatively linked to an appropriate expression control sequence(s) (promoter) to direct mrna synthesis. particular named bacterial promoters include laci, lacz, t3, t7, gpt, lambda pr, pl and trp. eukaryotic promoters include cmv immediate early, hsv thymidine kinase, early and late sv40, ltrs from retrovirus, and mouse metallothionein-i. selection of the appropriate vector and promoter is well within the level of ordinary skill in the art. the expression vector also contains a ribosome binding site for translation initiation and a transcription terminator. the vector may also include appropriate sequences for amplifying expression. promoter regions can be selected from any desired gene using cat (chloramphenicol acetyltransferase) vectors or other vectors with selectable markers.
in addition, the expression vectors preferably contain one or more selectable marker genes to provide a phenotypic trait for selection of transformed host cells, such as dihydrofolate reductase or neomycin resistance for eukaryotic cell culture, or such as tetracycline or ampicillin resistance in e. coli. generally, recombinant expression vectors will include origins of replication and selectable markers permitting transformation of the host cell, e.g., the ampicillin resistance gene of e. coli and the s. cerevisiae trp1 gene, and a promoter derived from a highly-expressed gene to direct transcription of a downstream structural sequence. such promoters can be derived from operons encoding glycolytic enzymes such as 3-phosphoglycerate kinase (pgk), α-factor, acid phosphatase, or heat shock proteins, among others. the heterologous structural sequence is assembled in appropriate phase with translation initiation and termination sequences, and preferably, a leader sequence capable of directing secretion of translated protein into the periplasmic space or extracellular medium. the dna selected and isolated as hereinabove described is introduced into a suitable host to prepare a library which is screened for the desired enzyme activity. the selected dna is preferably already in a vector which includes appropriate control sequences whereby selected dna which encodes for an enzyme may be expressed, for detection of the desired activity. the host cell can be a higher eukaryotic cell, such as a mammalian cell, or a lower eukaryotic cell, such as a yeast cell, or the host cell can be a prokaryotic cell, such as a bacterial cell. introduction of the construct into the host cell can be effected by transformation, calcium phosphate transfection, deae-dextran mediated transfection, dmso or electroporation (davis, l., dibner, m., battey, i., basic methods in molecular biology, (1986)). as representative examples of appropriate hosts, there may be mentioned: bacterial cells, such as e.
coli, bacillus, streptomyces, salmonella typhimurium; fungal cells, such as yeast; insect cells such as drosophila s2 and spodoptera sf9; animal cells such as cho, cos or bowes melanoma; adenoviruses; plant cells, etc. the selection of an appropriate host is deemed to be within the scope of those skilled in the art from the teachings herein. host cells are genetically engineered (transduced or transformed or transfected) with the vectors. the engineered host cells can be cultured in conventional nutrient media modified as appropriate for activating promoters, selecting transformants or amplifying genes. the culture conditions, such as temperature, ph and the like, are those previously used with the host cell selected for expression, and will be apparent to the ordinarily skilled artisan. the recombinant enzymes in the library which are classified as described herein may or may not be sequenced and may or may not be in a purified form. thus, in accordance with the present invention, it is possible to classify one or more of the recombinant enzymes before or after obtaining the sequence of the enzyme or before or after purifying the enzyme to essential homogeneity. the screening for chemical characteristics may be effected on individual expression clones or may be initially effected on a mixture of expression clones to ascertain whether or not the mixture has one or more specified enzyme activities. if the mixture has a specified enzyme activity, then the individual clones may be rescreened for such enzyme activity or for a more specific activity. thus, for example, if a clone mixture has hydrolase activity, then the individual clones may be recovered and screened to determine which of such clones has hydrolase activity. also described herein are enzyme kits for use in further screening and/or research.
thus, a reagent package or kit is prepared by placing in the kit or package, e.g., in suitable containers, at least three different recombinant enzymes, with each of the at least three different recombinant enzymes having at least two enzyme characteristics in common. preferably, one common characteristic is a chemical characteristic or property and the other common characteristic is a physical characteristic or property; however, it is possible to prepare kits which have two or more chemical characteristics or properties in common and no physical characteristic or property in common, and vice versa. since, in accordance with the present invention, it is possible to provide a recombinant enzyme library from one or more microorganisms which is classified by a multiplicity of chemical and/or physical properties, a variety of enzyme kits or packages can be prepared having a variety of selected chemical and/or physical characteristics, which can be formulated to contain three or more recombinant enzymes in which at least three and preferably all of the recombinant enzymes have in common at least one chemical characteristic and have in common at least one physical characteristic. the kit should contain an appropriate label specifying such common characteristics. for example, at least three recombinant enzymes in the kit have in common the most specific chemical characteristic specified on the label. the term "label" is used in its broadest sense and includes package inserts or literature associated or distributed in conjunction with the kit or package. thus, for example, if the kit is labeled for a specific substrate (one of the tier 3 examples above), then for example, at least three of the enzymes in the kit would act on such substrate.
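the kit-assembly criterion described above (at least three recombinant enzymes sharing at least one chemical and at least one physical characteristic) can be sketched as a selection filter over a classified library. the record layout and names are illustrative assumptions:

```python
def kit_candidates(library, chemical, physical, minimum=3):
    """return enzymes sharing the given chemical and physical characteristic,
    or an empty list if fewer than `minimum` qualify (i.e. no valid kit)."""
    shared = [e for e in library
              if chemical in e["chemical"] and physical in e["physical"]]
    return shared if len(shared) >= minimum else []
```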
the kits will preferably contain more than three enzymes, for example, five, six or more enzymes, and in a preferred embodiment at least three, and preferably a majority and in some cases all, of the recombinant enzymes in the kit will have at least two enzyme properties or characteristics in common, as hereinabove described. the recombinant enzymes in the kits may have two or more enzymes in a single container or individual enzymes in individual containers or various combinations thereof. the library may be screened for a specified enzyme activity by procedures known in the art. for example, the enzyme activity may be screened for one or more of the six iub classes: oxidoreductases, transferases, hydrolases, lyases, isomerases and ligases. the recombinant enzymes which are determined to be positive for one or more of the iub classes may then be rescreened for a more specific enzyme activity. alternatively, the library may be screened for a more specialized enzyme activity. for example, instead of generically screening for hydrolase activity, the library may be screened for a more specialized activity, i.e., the type of bond on which the hydrolase acts. thus, for example, the library may be screened to ascertain those hydrolases which act on one or more specified chemical functionalities, such as: (a) amide (peptide) bonds, i.e., proteases; (b) ester bonds, i.e., esterases and lipases; (c) acetals, i.e., glycosidases, etc. the clones which are identified as having the specified enzyme activity may then be sequenced to identify the dna sequence encoding an enzyme having the specified activity. thus, in accordance with the present invention it is possible to isolate and identify: (i) dna encoding an enzyme having a specified enzyme activity, (ii) enzymes having such activity (including the amino acid sequence thereof) and (iii) produce recombinant enzymes having such activity.
the screening for enzyme activity may be effected on individual expression clones or may be initially effected on a mixture of expression clones to ascertain whether or not the mixture has one or more specified enzyme activities. if the mixture has a specified enzyme activity, then the individual clones may be rescreened for such enzyme activity or for a more specific activity. thus, for example, if a clone mixture has hydrolase activity, then the individual clones may be recovered and screened to determine which of such clones has hydrolase activity. the expression libraries may be screened for one or more selected chemical characteristics. selected representative chemical characteristics are described below but such characteristics do not limit the present invention. moreover, the expression libraries may be screened for some or all of the characteristics. thus, some of the chemical characteristics specified herein may be determined in all of the libraries, none of the libraries or in only some of the libraries. the recombinant enzymes may also be tested and classified by physical properties, for example as follows:

ph optima: <3; 3-6; 6-9; 9-12; >12

temperature optima: >90°c; 75-90°c; 60-75°c; 45-60°c; 30-45°c; 15-30°c; 0-15°c

temperature stability (half-life at): 90°c; 75°c; 60°c; 45°c

organic solvent tolerance: water miscible (dmf) 90%, 75%, 45%, 30%; water immiscible: hexane, toluene

metal ion selectivity: edta - 10 mm; ca2+ - 1 mm; mg2+ - 100 µm; mn2+ - 10 µm; co3+ - 10 µm

detergent sensitivity: neutral (triton); anionic (deoxycholate); cationic (chaps)

the recombinant enzymes of the libraries of the present invention may be used for a variety of purposes, and the present invention, by providing a plurality of recombinant enzymes classified by a plurality of different enzyme characteristics, permits rapid screening of enzymes for a variety of applications.
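a binning scheme like the physical-property ranges above can be expressed as a small classifier. the bin edges follow the listed ranges, while the function names and the convention for boundary values are illustrative assumptions:

```python
def ph_bin(ph_optimum):
    """assign a measured ph optimum to one of the listed ph bins."""
    if ph_optimum < 3:   return "<3"
    if ph_optimum < 6:   return "3-6"
    if ph_optimum < 9:   return "6-9"
    if ph_optimum < 12:  return "9-12"
    return ">12"

def temperature_bin(t_optimum_c):
    """assign a temperature optimum (°c) to one of the listed temperature bins."""
    edges = [(90, ">90°c"), (75, "75-90°c"), (60, "60-75°c"),
             (45, "45-60°c"), (30, "30-45°c"), (15, "15-30°c"), (0, "0-15°c")]
    for lo, label in edges:
        if t_optimum_c > lo or (lo == 0 and t_optimum_c >= 0):
            return label
    return "below range"
```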
thus, for example, assembly of enzyme kits is described which contain a plurality of enzymes which are capable of operating on a specific bond or a specific substrate at specified conditions, to thereby enable screening of enzymes for a variety of applications. as representative examples of such applications, there may be mentioned:

1. lipase/esterase
a. enantioselective hydrolysis of esters (lipids)/thioesters
1) resolution of racemic mixtures
2) synthesis of optically active acids or alcohols from meso-diesters
b. selective syntheses
1) regiospecific hydrolysis of carbohydrate esters
2) selective hydrolysis of cyclic secondary alcohols
c. synthesis of optically active esters, lactones, acids, alcohols
1) transesterification of activated/nonactivated esters
2) interesterification
3) optically active lactones from hydroxyesters
4) regio- and enantioselective ring opening of anhydrides
d. detergents
e. fat/oil conversion
f. cheese ripening

2. protease
a. ester/amide synthesis
b. peptide synthesis
c. resolution of racemic mixtures of amino acid esters
d. synthesis of non-natural amino acids
e. detergents/protein hydrolysis

3. glycosidase/glycosyl transferase
a. sugar/polymer synthesis
b. cleavage of glycosidic linkages to form mono-, di- and oligosaccharides
c. synthesis of complex oligosaccharides
d. glycoside synthesis using udp-galactosyl transferase
e. transglycosylation of disaccharides, glycosyl fluorides, aryl galactosides
f. glycosyl transfer in oligosaccharide synthesis
g. diastereoselective cleavage of β-glucosylsulfoxides
h. asymmetric glycosylations
i. food processing
j. paper processing

4. phosphatase/kinase
a. synthesis/hydrolysis of phosphate esters
1) regio-, enantioselective phosphorylation
2) introduction of phosphate esters
3) synthesis of phospholipid precursors
4) controlled polynucleotide synthesis
b. activation of biological molecules
c. selective phosphate bond formation without protecting groups

5. mono/dioxygenase
a. direct oxyfunctionalization of unactivated organic substrates
b. hydroxylation of alkanes, aromatics, steroids
c. epoxidation of alkenes
d. enantioselective sulphoxidation
e. regio- and stereoselective baeyer-villiger oxidations

6. haloperoxidase
a. oxidative addition of halide ion to nucleophilic sites
b. addition of hypohalous acids to olefinic bonds
c. ring cleavage of cyclopropanes
d. activated aromatic substrates converted to ortho and para derivatives
e. 1,3-diketones converted to 2-halo-derivatives
f. heteroatom oxidation of sulfur and nitrogen containing substrates
g. oxidation of enol acetates, alkynes and activated aromatic rings

7. lignin peroxidase/diarylpropane peroxidase
a. oxidative cleavage of c-c bonds
b. oxidation of benzylic alcohols to aldehydes
c. hydroxylation of benzylic carbons
d. phenol dimerization
e. hydroxylation of double bonds to form diols
f. cleavage of lignin aldehydes

8. epoxide hydrolase
a. synthesis of enantiomerically pure bioactive compounds
b. regio- and enantioselective hydrolysis of epoxides
c. aromatic and olefinic epoxidation by monooxygenases to form epoxides
d. resolution of racemic epoxides
e. hydrolysis of steroid epoxides

9. nitrile hydratase/nitrilase
a. hydrolysis of aliphatic nitriles to carboxamides
b. hydrolysis of aromatic, heterocyclic, unsaturated aliphatic nitriles to corresponding acids
c. hydrolysis of acrylonitrile
d. production of aromatic and carboxamides, carboxylic acids (nicotinamide, picolinamide, isonicotinamide)
e. regioselective hydrolysis of acrylic dinitrile
f. α-amino acids from α-hydroxynitriles

10. transaminase
a. transfer of amino groups into oxo-acids

11. amidase/acylase
a. hydrolysis of amides, amidines, and other c-n bonds
b. non-natural amino acid resolution and synthesis

the invention will be further described with reference to the following examples; however, the scope of the present invention is not to be limited thereby. unless otherwise specified, all parts are by weight.
example 1
production of expression library

the following describes a representative procedure for preparing an expression library for screening by the tiered approach of the present invention. one gram of thermococcus gu5l5 cell pellet was lysed and the dna isolated by literature procedures (current protocols in molecular biology, 2.4.1, 1987). approximately 100 µg of the isolated dna was resuspended in te buffer and vigorously passed through a 25 gauge double-hubbed needle until the sheared fragments were in the size range of 0.5-10.0 kb (3.0 kb average). the dna ends were "polished" or blunted with mung bean nuclease (300 units, 37°c, 15 minutes), and ecori restriction sites in the target dna protected with ecori methylase (200 units, 37°c, 1 hour). ecori linkers [ggaattcc] were ligated to the blunted/protected dna using 10 pmole ends of linkers to 1 pmole end of target dna. the linkers were cut back with ecori restriction endonuclease (200 units, 37°c, 1.5 hours) and the dna size fractionated by sucrose gradient (maniatis, t., fritsch, e.f., and sambrook, j., molecular cloning, cold spring harbor press, new york, 1982). the prepared target dna was ligated to the lambda zap® ii vector (stratagene), packaged using in vitro lambda packaging extracts and grown on the xl1-blue mrf' e. coli strain according to the manufacturer. the pbluescript® phagemids were excised from the lambda library, and grown in e. coli dh10b f' kan, according to the method of hay and short (hay and short, strategies 5:16, 1992). the resulting colonies were picked with sterile toothpicks and used to singly inoculate each of the wells of 11 96-well microtiter plates (1056 clones in all). the wells contained 250 µl of lb media with 100 µg/ml ampicillin, 80 µg/ml methicillin, and 10% v/v glycerol (lb amp/meth, glycerol). the cells were grown overnight at 37°c without shaking.
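the plate arithmetic above (11 plates x 96 wells = 1056 clones) implies a simple mapping from a clone number to a plate and well coordinate. the row-major a1..h12 layout assumed below is illustrative only and is not stated in the example:

```python
def well_address(clone_index, wells_per_plate=96, columns=12):
    """map a zero-based clone index to (plate number, well label), assuming
    plates are filled in order and wells row-major: a1..a12, b1, ..., h12."""
    plate = clone_index // wells_per_plate + 1
    within = clone_index % wells_per_plate
    row = chr(ord("a") + within // columns)
    column = within % columns + 1
    return plate, f"{row}{column}"
```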
this constituted generation of the "source library"; each well of the source library thus contained a stock culture of e. coli cells, each of which contained a pbluescript phagemid with a unique dna insert.

example 2
preparation of a dna library

the following outlines the procedures used to generate a gene library from a sample of the exterior surface of a whale bone found at 1240 meters depth in the santa catalina basin during a dive expedition.

isolate dna
isoquick procedure as per manufacturer's instructions.

shear dna
1. vigorously push and pull dna through a 25g double-hub needle and 1-cc syringes about 500 times.
2. check a small amount (0.5 µg) on a 0.8% agarose gel to make sure the majority of the dna is in the desired size range (about 3-6 kb).

blunt dna
1. add:
h2o to a final volume of 405 µl
45 µl 10x mung bean buffer
2.0 µl mung bean nuclease (150 u/µl)
2. incubate 37°c, 15 minutes.
3. phenol/chloroform extract once.
4. chloroform extract once.
5. add 1 ml ice cold ethanol to precipitate.
6. place on ice for 10 minutes.
7. spin in microfuge, high speed, 30 minutes.
8. wash with 1 ml 70% ethanol.
9. spin in microfuge, high speed, 10 minutes and dry.

methylate dna
1. gently resuspend dna in 26 µl te.
2. add:
4.0 µl 10x ecori methylase buffer
0.5 µl sam (32 mm)
5.0 µl ecori methylase (40 u/µl)
3. incubate 37°c, 1 hour.

insure blunt ends
1. add to the methylation reaction:
5.0 µl 100 mm mgcl2
8.0 µl dntp mix (2.5 mm of each dgtp, datp, dttp, dctp)
4.0 µl klenow (5 u/µl)
2. incubate 12°c, 30 minutes.
3. add 450 µl 1x ste.
4. phenol/chloroform extract once.
5. chloroform extract once.
6. add 1 ml ice cold ethanol to precipitate and place on ice for 10 minutes.
7. spin in microfuge, high speed, 30 minutes.
8. wash with 1 ml 70% ethanol.
9. spin in microfuge, high speed, 10 minutes and dry.

linker ligation
1. gently resuspend dna in 7 µl tris-edta (te).
2.
add:
14 µl phosphorylated ecori linkers (200 ng/µl)
3.0 µl 10x ligation buffer
3.0 µl 10 mm ratp
3.0 µl t4 dna ligase (4 wu/µl)
3. incubate 4°c, overnight.

ecori cutback
1. heat kill ligation reaction 68°c, 10 minutes.
2. add:
237.9 µl h2o
30 µl 10x ecori buffer
2.1 µl ecori restriction enzyme (100 u/µl)
3. incubate 37°c, 1.5 hours.
4. add 1.5 µl 0.5 m edta.
5. place on ice.

sucrose gradient (2.2 ml) size fractionation
1. heat sample to 65°c, 10 minutes.
2. gently load on 2.2 ml sucrose gradient.
3. spin in mini-ultracentrifuge, 45k rpm, 20°c, 4 hours (no brake).
4. collect fractions by puncturing the bottom of the gradient tube with a 20g needle and allowing the sucrose to flow through the needle. collect the first 20 drops in a falcon 2059 tube, then collect 10 1-drop fractions (labelled 1-10). each drop is about 60 µl in volume.
5. run 5 µl of each fraction on a 0.8% agarose gel to check the size.
6. pool fractions 1-4 (about 10-1.5 kb) and, in a separate tube, pool fractions 5-7 (about 5-0.5 kb).
7. add 1 ml ice cold ethanol to precipitate and place on ice for 10 minutes.
8. spin in microfuge, high speed, 30 minutes.
9. wash with 1 ml 70% ethanol.
10. spin in microfuge, high speed, 10 minutes and dry.
11. resuspend each in 10 µl te buffer.

test ligation to lambda arms
1. plate assay to get an approximate concentration. spot 0.5 µl of the sample on agarose containing ethidium bromide along with standards (dna samples of known concentration). view in uv light and estimate concentration compared to the standards. fraction 1-4 = >1.0 µg/µl; fraction 5-7 = 500 ng/µl.
2. prepare the following ligation reactions (5 µl reactions) and incubate 4°c, overnight:

sample | h2o | 10x ligase buffer | 10 mm ratp | lambda arms (gt11 and zap) | insert dna | t4 dna ligase (4 wu/µl)
fraction 1-4 | 0.5 µl | 0.5 µl | 0.5 µl | 1.0 µl | 2.0 µl | 0.5 µl
fraction 5-7 | 0.5 µl | 0.5 µl | 0.5 µl | 1.0 µl | 2.0 µl | 0.5 µl

test package and plate
1.
package the ligation reactions following manufacturer's protocol. package 2.5 µl per packaging extract (2 extracts per ligation).
2. stop packaging reactions with 500 µl sm buffer and pool packagings that came from the same ligation.
3. titer 1.0 µl of each on the appropriate host (od600 = 1.0) [xl1-blue mrf' for zap and y1088 for gt11]:
   add 200 µl host (in 10 mm mgso4) to falcon 2059 tubes
   inoculate with 1 µl packaged phage
   incubate 37°c, 15 minutes
   add about 3 ml 48°c top agar [50 ml stock containing 150 µl iptg (0.5 m) and 300 µl x-gal (350 mg/ml)]
   plate on 100 mm plates and incubate 37°c, overnight.
4. efficiency results:
   gt11: 1.7 x 10^4 recombinants with 95% background
   zap ii: 4.2 x 10^4 recombinants with 66% background

contaminants in the dna sample may have inhibited the enzymatic reactions, though the sucrose gradient and organic extractions may have removed them. since the dna sample was precious, an effort was made to "fix" the ends for cloning:

re-blunt dna
1. pool all left over dna that was not ligated to the lambda arms (fractions 1-7) and add h2o to a final volume of 12 µl. then add:
   143 µl h2o
   20 µl 10x buffer 2 (from stratagene's cdna synthesis kit)
   23 µl blunting dntp (from stratagene's cdna synthesis kit)
   2.0 µl pfu (from stratagene's cdna synthesis kit)
2. incubate 72°c, 30 minutes.
3. phenol/chloroform extract once.
4. chloroform extract once.
5. add 20 µl 3 m naoac and 400 µl ice cold ethanol to precipitate.
6. place at -20°c, overnight.
7. spin in microfuge, high speed, 30 minutes.
8. wash with 1 ml 70% ethanol.
9. spin in microfuge, high speed, 10 minutes and dry. (do not methylate the dna since it was already methylated in the first round of processing.)

adaptor ligation
1. gently resuspend dna in 8 µl ecor i adaptors (from stratagene's cdna synthesis kit).
2. add:
   1.0 µl 10x ligation buffer
   1.0 µl 10 mm ratp
   1.0 µl t4 dna ligase (4 wu/µl)
3. incubate 4°c, 2 days. (do not cutback since using adaptors this time.
instead, need to phosphorylate.)

phosphorylate adaptors
1. heat kill ligation reaction 70°c, 30 minutes.
2. add:
   1.0 µl 10x ligation buffer
   2.0 µl 10 mm ratp
   6.0 µl h2o
   1.0 µl pnk (from stratagene's cdna synthesis kit)
3. incubate 37°c, 30 minutes.
4. add 31 µl h2o and 5 µl 10x ste.
5. size fractionate on a sephacryl s-500 spin column (pool fractions 1-3).
6. phenol/chloroform extract once.
7. chloroform extract once.
8. add ice cold ethanol to precipitate.
9. place on ice, 10 minutes.
10. spin in microfuge, high speed, 30 minutes.
11. wash with 1 ml 70% ethanol.
12. spin in microfuge, high speed, 10 minutes and dry.
13. resuspend in 10.5 µl te buffer.

do not plate assay. instead, ligate directly to the arms as above except use 2.5 µl of dna and no water. package and titer as above. efficiency results:
   gt11: 2.5 x 10^6 recombinants with 2.5% background
   zap ii: 9.6 x 10^5 recombinants with 0% background

amplification of libraries (5.0 x 10^5 recombinants from each library)
1. add 3.0 ml host cells (od660 = 1.0) to two 50 ml conical tubes.
2. inoculate with 2.5 x 10^5 pfu per conical tube.
3. incubate 37°c, 20 minutes.
4. add top agar to each tube to a final volume of 45 ml.
5. plate the tube across five 150 mm plates.
6. incubate 37°c, 6-8 hours or until plaques are about pin-head in size.
7. overlay with 8-10 ml sm buffer and place at 4°c overnight (with gentle rocking if possible).

harvest phage
1. recover phage suspension by pouring the sm buffer off each plate into a 50-ml conical tube.
2. add 3 ml chloroform, shake vigorously and incubate at room temperature, 15 minutes.
3. centrifuge at 2k rpm, 10 minutes to remove cell debris.
4. pour supernatant into a sterile flask, add 500 µl chloroform.
5. store at 4°c.

titer amplified library
1. make serial dilutions:
   10^-3 = 1 µl amplified phage in 1 ml sm buffer
   10^-6 = 1 µl of the 10^-3 dilution in 1 ml sm buffer
2. add 200 µl host (in 10 mm mgso4) to two tubes.
3.
inoculate one with 10 µl of the 10^-6 dilution (10^-5).
4. inoculate the other with 1 µl of the 10^-6 dilution (10^-6).
5. incubate 37°c, 15 minutes.
6. add about 3 ml 48°c top agar [50 ml stock containing 150 µl iptg (0.5 m) and 375 µl x-gal (350 mg/ml)].
7. plate on 100 mm plates and incubate 37°c, overnight.
8. results:
   gt11: 1.7 x 10^11/ml
   zap ii: 2.0 x 10^10/ml

excise the zap ii library to create the pbluescript library.

example 3
preparation of an uncultivated prokaryotic dna library

figure 1 shows an overview of the procedures used to construct an environmental library from a mixed picoplankton sample. the goal was to construct a stable, large insert dna library representing picoplankton genomic dna.

cell collection and preparation of dna. agarose plugs containing concentrated picoplankton cells were prepared from samples collected on an oceanographic cruise from newport, oregon to honolulu, hawaii. seawater (30 liters) was collected in niskin bottles, screened through 10 µm nitex, and concentrated by hollow fiber filtration (amicon dc10) through 30,000 mw cutoff polysulfone filters. the concentrated bacterioplankton cells were collected on a 0.22 µm, 47 mm durapore filter, and resuspended in 1 ml of 2x ste buffer (1 m nacl, 0.1 m edta, 10 mm tris, ph 8.0) to a final density of approximately 1 x 10^10 cells per ml. the cell suspension was mixed with one volume of 1% molten seaplaque lmp agarose (fmc) cooled to 40°c, and then immediately drawn into a 1 ml syringe. the syringe was sealed with parafilm and placed on ice for 10 min. the cell-containing agarose plug was extruded into 10 ml of lysis buffer (10 mm tris ph 8.0, 50 mm nacl, 0.1 m edta, 1% sarkosyl, 0.2% sodium deoxycholate, 1 mg/ml lysozyme) and incubated at 37°c for one hour. the agarose plug was then transferred to 40 ml of esp buffer (1% sarkosyl, 1 mg/ml proteinase-k, in 0.5 m edta), and incubated at 55°c for 16 hours.
the solution was decanted and replaced with fresh esp buffer, and incubated at 55°c for an additional hour. the agarose plugs were then placed in 50 mm edta and stored at 4°c shipboard for the duration of the oceanographic cruise. one slice of an agarose plug (72 µl) prepared from a sample collected off the oregon coast was dialyzed overnight at 4°c against 1 ml of buffer a (100 mm nacl, 10 mm bis tris propane-hcl, 100 µg/ml acetylated bsa; ph 7.0 @ 25°c) in a 2 ml microcentrifuge tube. the solution was replaced with 250 µl of fresh buffer a containing 10 mm mgcl2 and 1 mm dtt and incubated on a rocking platform for 1 hr at room temperature. the solution was then changed to 250 µl of the same buffer containing 4 u of sau3ai (neb), equilibrated to 37°c in a water bath, and then incubated on a rocking platform in a 37°c incubator for 45 min. the plug was transferred to a 1.5 ml microcentrifuge tube and incubated at 68°c for 30 min to inactivate the enzyme and to melt the agarose. the agarose was digested and the dna dephosphorylated using gelase and hk-phosphatase (epicentre), respectively, according to the manufacturer's recommendations. protein was removed by gentle phenol/chloroform extraction and the dna was ethanol precipitated, pelleted, and then washed with 70% ethanol. this partially digested dna was resuspended in sterile h2o to a concentration of 2.5 ng/µl for ligation to the pfos1 vector. pcr amplification results from several of the agarose plugs (data not shown) indicated the presence of significant amounts of archaeal dna. quantitative hybridization experiments using rrna extracted from one sample, collected at 200 m of depth off the oregon coast, indicated that planktonic archaea in this assemblage comprised approximately 4.7% of the total picoplankton biomass (this sample corresponds to "pac1"-200 m in table 1 of delong et al., high abundance of archaea in antarctic marine picoplankton, nature, 371:695-698, 1994).
results from archaeal-biased rdna pcr amplification performed on agarose plug lysates confirmed the presence of relatively large amounts of archaeal dna in this sample. agarose plugs prepared from this picoplankton sample were chosen for subsequent fosmid library preparation. each 1 ml agarose plug from this site contained approximately 7.5 x 10^5 cells; therefore, approximately 5.4 x 10^5 cells were present in the 72 µl slice used in the preparation of the partially digested dna. vector arms were prepared from pfos1 as described (kim et al., stable propagation of cosmid sized human dna inserts in an f factor based vector, nucl. acids res., 20:10832-10835, 1992). briefly, the plasmid was completely digested with astii, dephosphorylated with hk phosphatase, and then digested with bamhi to generate two arms, each of which contained a cos site in the proper orientation for cloning and packaging ligated dna between 35-45 kbp. the partially digested picoplankton dna was ligated overnight to the pfos1 arms in a 15 µl ligation reaction containing 25 ng each of vector and insert and 1 u of t4 dna ligase (boehringer-mannheim). the ligated dna in four microliters of this reaction was in vitro packaged using the gigapack xl packaging system (stratagene), the fosmid particles transfected to e. coli strain dh10b (brl), and the cells spread onto lb cm15 plates. the resultant fosmid clones were picked into 96-well microtiter dishes containing lb cm15 supplemented with 7% glycerol. recombinant fosmids, each containing ca. 40 kb of picoplankton dna insert, yielded a library of 3,552 fosmid clones, containing approximately 1.4 x 10^8 base pairs of cloned dna. all of the clones examined contained inserts ranging from 38 to 42 kbp. this library was stored frozen at -80°c for later analysis.

example 4
enzymatic activity assay

the following is a representative example of a procedure for screening an expression library prepared in accordance with example 2.
in the following, the chemical characteristic tiers are as follows:
tier 1: hydrolase.
tier 2: amide, ester and acetal.
tier 3: divisions and subdivisions are based upon the differences between individual substrates which are covalently attached to the functionality of tier 2 undergoing reaction, as well as substrate specificity.
tier 4: the two possible enantiomeric products which the enzyme may produce from a substrate.

although the following example is specifically directed to the above mentioned tiers, the general procedure for testing for various chemical characteristics is generally applicable to substrates other than those specifically referred to in this example.

screening for tier 1-hydrolase; tier 2-amide

the eleven plates of the source library were used to multiply inoculate a single plate (the "condensed plate") containing in each well 200 µl of lb amp/meth, glycerol. this step was performed using the high density replicating tool (hdrt) of the beckman biomek with a 1% bleach, water, isopropanol, air-dry sterilization cycle in between each inoculation. each well of the condensed plate thus contained 11 different pbluescript clones from each of the eleven source library plates. the condensed plate was grown for 2 h at 37°c and then used to inoculate two white 96-well dynatech microtiter daughter plates containing in each well 250 µl of lb amp/meth, glycerol. the original condensed plate was incubated at 37°c for 18 h, then stored at -80°c. the two condensed daughter plates were also incubated at 37°c for 18 h. the condensed daughter plates were then heated at 70°c for 45 min to kill the cells and inactivate the host e. coli enzymes. a stock solution of 5 mg/ml morphourea phenylalanyl-7-amino-4-trifluoromethyl coumarin (mupheafc, the "substrate") in dmso was diluted to 600 µm with 50 mm ph 7.5 hepes buffer containing 0.6 mg/ml of the detergent dodecyl maltoside.
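as a quick illustration (not part of the protocol), the dilution arithmetic behind the substrate addition is plain c1·v1 = c2·(v1 + v2) bookkeeping; the function name below is our own:

```python
# adding 50 µl of the 600 µm mupheafc working solution to a well already
# holding 250 µl of culture dilutes the substrate 6-fold.
def final_concentration_um(c_stock_um, v_added_ul, v_well_ul):
    """final concentration after adding v_added_ul of stock to v_well_ul."""
    return c_stock_um * v_added_ul / (v_added_ul + v_well_ul)

print(final_concentration_um(600.0, 50.0, 250.0))  # 100.0 µm, the ~100 µm used in the screen
```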
fifty µl of the 600 µm mupheafc solution was added to each of the wells of the white condensed plates with one 100 µl mix cycle using the biomek to yield a final concentration of substrate of ~100 µm. the fluorescence values were recorded (excitation = 400 nm, emission = 505 nm) on a plate-reading fluorometer immediately after addition of the substrate (t = 0). the plate was incubated at 70°c for 100 min, then allowed to cool to ambient temperature for 15 additional minutes. the fluorescence values were recorded again (t = 100). the values at t = 0 were subtracted from the values at t = 100 to determine if an active clone was present. these data indicated that one of the eleven clones in well g8 was hydrolyzing the substrate. in order to determine the individual clone which carried the activity, the eleven source library plates were thawed and the individual clones used to singly inoculate a new plate containing lb amp/meth, glycerol. as above, the plate was incubated at 37°c to grow the cells, heated at 70°c to inactivate the host enzymes, and 50 µl of 600 µm mupheafc added using the biomek. additionally, three other substrates were tested: the methyl umbelliferone heptanoate, the cbz-arginine rhodamine derivative, and fluorescein-conjugated casein (~3.2 mol fluorescein per mol of casein). the umbelliferone and rhodamine were added as 600 µm stock solutions in 50 µl of hepes buffer. the fluorescein-conjugated casein was also added in 50 µl at stock concentrations of 20 and 200 mg/ml. after addition of the substrates, the t = 0 fluorescence values were recorded, the plate incubated at 70°c, and the t = 100 min values recorded as above. these data indicated that the active clone was in plate 2. the arginine rhodamine derivative was also turned over by this activity, but the lipase substrate, methyl umbelliferone heptanoate, and the protein, fluorescein-conjugated casein, did not function as substrates.
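the active-well call described above is a simple difference of the two plate reads; a minimal sketch, in which the well ids, readings, and the threshold value are illustrative rather than taken from the text:

```python
# subtract the t=0 fluorescence reading from the t=100 min reading and
# flag wells whose increase exceeds a threshold as holding an active clone.
def find_active_wells(t0, t100, threshold=50.0):
    """return well ids whose fluorescence gain exceeds the threshold."""
    return [well for well in t0 if (t100[well] - t0[well]) > threshold]

t0 = {"G7": 102.0, "G8": 98.0, "H1": 105.0}     # illustrative t=0 reads
t100 = {"G7": 110.0, "G8": 640.0, "H1": 101.0}  # illustrative t=100 reads
print(find_active_wells(t0, t100))  # ['G8'] — the active well of the example
```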
based on the above data, the tier 1 classification is "hydrolase" and the tier 2 classification is amide bond. there is no cross reactivity with the tier 2-ester classification. as shown in figure 2, a recombinant clone from the library which has been characterized in tier 1 as hydrolase and in tier 2 as amide may then be tested in tier 3 for various specificities. in figure 2, the various classes of tier 3 are followed by a parenthetical code which identifies the substrates of table 1 which are used in identifying such specificities of tier 3. as shown in figures 3 and 4, a recombinant clone from the library which has been characterized in tier 1 as hydrolase and in tier 2 as ester may then be tested in tier 3 for various specificities. in figures 3 and 4, the various classes of tier 3 are followed by a parenthetical code which identifies the substrates of tables 2 and 3 which are used in identifying such specificities of tier 3. in figures 3 and 4, r2 represents the alcohol portion of the ester and r1 represents the acid portion of the ester. as shown in figure 5, a recombinant clone from the library which has been characterized in tier 1 as hydrolase and in tier 2 as acetal may then be tested in tier 3 for various specificities. in figure 5, the various classes of tier 3 are followed by a parenthetical code which identifies the substrates of table 4 which are used in identifying such specificities of tier 3. enzymes may be classified in tier 4 for the chirality of the product(s) produced by the enzyme. for example, chiral amino esters may be determined using at least the following substrates: for each substrate which is turned over, the enantioselectivity value, e, is determined according to the equation below:

   e = ln[1 - c(1 + ee_p)] / ln[1 - c(1 - ee_p)]

where ee_p = the enantiomeric excess (ee) of the hydrolyzed product and c = the percent conversion of the reaction. see wong and whitesides, enzymes in synthetic organic chemistry, 1994, elsevier, tarrytown, ny, pgs 9-12.
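the e-value calculation can be sketched directly from the equation of wong and whitesides cited above; ee_p and c are entered as fractions, and the numbers used below are the n-heptane transesterification results reported for enzyme esl-001-01 further on:

```python
import math

# enantioselectivity from product ee (ee_p) and fractional conversion (c):
#     e = ln[1 - c(1 + ee_p)] / ln[1 - c(1 - ee_p)]
def enantioselectivity(ee_p, c):
    """ee_p and c as fractions (e.g. 44.3 % -> 0.443)."""
    return math.log(1 - c * (1 + ee_p)) / math.log(1 - c * (1 - ee_p))

# n-heptane data from the transesterification table: ee_p = 44.3 %, c = 19.8 %
print(round(enantioselectivity(0.443, 0.198), 1))  # 2.9
```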
the enantiomeric excess is determined by either chiral high performance liquid chromatography (hplc) or chiral capillary electrophoresis (ce). assays are performed as follows: two hundred µl of the appropriate buffer is added to each well of a 96-well white microtiter plate, followed by 50 µl of partially or completely purified enzyme solution; 50 µl of substrate is added and the increase in fluorescence monitored versus time until 50% of the substrate is consumed or the reaction stops, whichever comes first. enantioselectivity was determined for one of the esterases identified as follows. for the reaction to form (transesterification) or break down (hydrolysis) α-methyl benzyl acetate, the enantioselectivity of the enzyme was obtained by determining: ee_s (the enantiomeric excess (ee) of the unreacted substrate), ee_p (the ee of the hydrolyzed product), and c (the percent conversion of the reaction). the enantiomeric excess was determined by chiral high performance gas chromatography (gc). chromatography conditions were as follows:

sample preparation: samples were filtered through a 0.2 µm, 13 mm diameter ptfe filter.
column: supelco β-dex 120, 0.25 mm id, 30 m, 0.25 µm df.
oven: 90°c for 1 min, then 90°c to 150°c at 5°c/min.
carrier gas: helium, 1 ml/min for 2 min, then 1 ml/min to 3 ml/min at 0.2 ml/min.
detector: fid, 300°c.
injection: 1 µl (1 mm substrate in reaction solvent), split (1:75), 200°c.

the transesterification reaction was performed according to the procedure described in: organic solvent tolerance, water immiscible solvents (see below). transesterification with enzyme esl-001-01 gave the following results:

   solvent     %ee_s   %ee_p   %c
   n-heptane   10.9    44.3    19.8
   toluene     3.2     100     3.1

the hydrolysis reaction was performed as follows: fifty µl of a 10 mm solution of α-methyl benzyl acetate in 10% aqueous dmso (v/v) was added to 200 µl of 100 mm, ph 6.9 phosphate buffer.
to this solution was added 250 µl of enzyme esl-001-01 (2 mg/ml in 100 mm, ph 6.9 phosphate buffer) and the reaction heated at 70°c for 15 min. the reaction was worked up according to the following procedure: remove 250 µl of hydrolysis reaction mixture and add to a 1 ml eppendorf tube. add 250 µl of ethyl acetate and shake vigorously for 30 seconds. allow the phases to separate for 15 minutes. pipette off 200 µl of the top organic phase and filter through a 0.2 µm, 4 mm diameter ptfe filter. analyze by chiral gc as above. hydrolysis with enzyme esl-001-01 gave the following results:

   %ee_s   %ee_p   %c
   100     0.7     99.3

example 5
testing for physical characteristics of a recombinant clone

this example describes procedures for testing for certain physical characteristics of a recombinant clone of a library.

ph optima. two hundred µl of 4-methyl-umbelliferyl-2,2-dimethyl-4-pentenoate was added to each well of a 96-well microtiter plate and serially diluted from column 1 to 12. fifty µl of the appropriate 5x ph buffer was added to each row of the plate so that reaction rates at eight different ph's were tested on a single plate. twenty µl of enzyme esl-001-01 (1:3000 dilution of a 1 mg/ml stock solution) was added to each well to initiate the reaction. the increase in absorbance at 370 nm at 70°c was monitored to determine the rate of reaction; the rate versus substrate concentration was fit to the michaelis-menten equation to determine v_max at each ph. enzyme esl-001-01 gave the results shown in figure 6.

temperature optima. to a one ml thermostatted cuvette was added 930 µl of 50 mm, ph 7.5 hepes buffer. after temperature equilibration, 50 µl of enzyme esl-001-01 (1:8000 dilution of a 1 mg/ml stock solution in hepes buffer) and 20 µl of 5 mm 4-methyl-umbelliferyl-heptanoate containing 30 mg/ml dodecyl maltoside were added. the rate of increase in absorbance at 370 nm was measured at 10, 20, 30, 40, 50, 60, 70, 80, and 90°c. enzyme esl-001-01 gave the results shown in figure 7.
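the v_max determination mentioned above (rate versus substrate concentration fit to the michaelis-menten equation) can be sketched with a hanes-woolf linearization; this is one of several possible fitting routes, and the data below are synthetic, not the measurements behind figure 6:

```python
# michaelis-menten: v = vmax*s / (km + s), which linearizes (hanes-woolf) to
#     s/v = s/vmax + km/vmax,
# so a straight-line fit of s/v versus s gives vmax = 1/slope, km = intercept*vmax.
def fit_vmax_km(S, v):
    x = S
    y = [s / r for s, r in zip(S, v)]  # s/v values
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
            sum((xi - mx) ** 2 for xi in x)
    intercept = my - slope * mx
    vmax = 1.0 / slope
    km = intercept * vmax
    return vmax, km

# synthetic data generated from vmax = 10, km = 50 (arbitrary units):
S = [10.0, 25.0, 50.0, 100.0, 200.0]
v = [10.0 * s / (50.0 + s) for s in S]
vmax, km = fit_vmax_km(S, v)
print(round(vmax, 2), round(km, 2))  # recovers 10.0 and 50.0
```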
temperature stability. one ml samples of enzyme esl-001-01 (1:4000 dilution of a 1 mg/ml stock solution in hepes buffer) were incubated at 70, 80, and 90°c. at selected time points, 25 µl aliquots were removed and assayed as above in a 96-well microtiter plate with 200 µl of 100 µm 4-methylumbelliferyl palmitate and 0.6 mg/ml dodecyl maltoside. these data were used to determine the half life for inactivation of the enzyme. enzyme esl-001-01 gave the following results:

   temperature   half life
   90°c          23 min
   80°c          32 min
   70°c          110 h

organic solvent tolerance. water miscible solvents (dimethylsulfoxide (dmso) and tetrahydrofuran (thf)). thirty µl of 1 mm 4-methyl-umbelliferyl-butyrate in the organic solvent was added to the wells of a 96-well microtiter plate. two hundred forty µl of buffer and organic solvent mixture (see table below) were added to the wells of the plate, followed by 30 µl of enzyme esl-001-01 (1:50,000 dilution of a 1 mg/ml stock solution in 50 mm, ph 6.9 mops buffer) and incubation at 70°c. the increase in fluorescence (ex = 360 nm, em = 440 nm) was monitored versus time to determine the relative activities.

   µl organic solvent   µl buffer   % organic solvent final
   240                  0           90
   195                  45          75
   150                  90          60
   120                  120         50
   90                   150         40
   60                   180         30
   30                   210         20
   0                    240         10

enzyme esl-001-01 gave the results shown in figure 8.

water immiscible solvents (n-heptane, toluene). one ml of the solvent was added to a vial containing 1 mg of lyophilized enzyme esl-001-01 and a stir bar. ten µl of 100 mm 1-phenethyl alcohol and 10 µl of 100 mm vinyl acetate were added to the vial and the vial stirred in a heating block at 70°c for 24 h. the sample was filtered through a 0.2 µm, 4 mm diameter ptfe filter and analyzed by chiral gc as above. see the previous section for data.

specific activity. the specific activity was determined using 100 µm 4-methyl umbelliferyl heptanoate at 90°c in ph 6.9 mops buffer.
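the half-life determination described above can be sketched assuming simple first-order inactivation, a(t) = a0·exp(-kt), so t_1/2 = ln(2)/k; the time points below are synthetic, generated to match the 23 min half-life reported at 90°c, not the measured aliquot data:

```python
import math

# fit ln(activity) versus time by least squares; the slope is -k, and the
# half-life follows as ln(2)/k.
def half_life(times, activities):
    x = times
    y = [math.log(a) for a in activities]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    k = -sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return math.log(2) / k

# synthetic residual-activity points consistent with t_1/2 = 23 min:
t = [0.0, 10.0, 20.0, 30.0]
A = [100.0 * 2 ** (-ti / 23.0) for ti in t]
print(round(half_life(t, A), 1))  # 23.0 min
```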
the specific activity obtained for enzyme esl-001-01 was 1662 µmol/min·mg.

example 6
testing for substrate specificity of a recombinant clone

this example describes procedures for testing for substrate specificity of a recombinant clone of a library.

substrate fingerprint. one and one quarter millimolar solutions, containing 1 mg/ml of dodecyl maltoside in 50 mm ph 6.9 mops buffer, of each of the following substrates were prepared:
4-methyl umbelliferyl acetate (a)
4-methyl umbelliferyl propanoate (b)
4-methyl umbelliferyl butyrate (c)
4-methyl umbelliferyl heptanoate (d)
4-methyl umbelliferyl α-methyl butyrate (e)
4-methyl umbelliferyl β-methylcrotonoate (f)
4-methyl umbelliferyl 2,2-dimethyl-4-pentenoate (g)
4-methyl umbelliferyl adipic acid monoester (h)
4-methyl umbelliferyl 1,4-cyclohexane dicarboxylate (i)
4-methyl umbelliferyl benzoate (m)
4-methyl umbelliferyl p-trimethyl ammonium cinnamate (n)
4-methyl umbelliferyl 4-guanidinobenzoate (o)
4-methyl umbelliferyl α-methyl phenyl acetate (p)
4-methyl umbelliferyl α-methoxy phenyl acetate (q)
4-methyl umbelliferyl palmitate (s)
4-methyl umbelliferyl stearate (t)
4-methyl umbelliferyl oleate (u)
4-methyl umbelliferyl elaidate (w)

two hundred µl of each of the above solutions were added to the wells of a 96-well microtiter plate, followed by 50 µl of enzyme esl-001-01 (1:2000 dilution of a 1 mg/ml stock solution in mops buffer) and incubation at 70°c for 20 min. the fluorescence (ex = 360 nm, em = 440 nm) was measured and the fluorescence due to nonenzymatic hydrolysis was subtracted. table 5 shows the relative fluorescence of each of the above substrates.

numerous modifications and variations of the present invention are possible in light of the above teachings; therefore, within the scope of the claims, the invention may be practiced other than as particularly described.
table 4
4-methyl umbelliferone wherein r =

   g2    β-d-galactose, β-d-glucose, β-d-glucuronide
   gb3   β-d-cellotrioside, β-d-cellobiopyranoside
   gc3   β-d-galactose, α-d-galactose
   gd3   β-d-glucose, α-d-glucose
   ge3   β-d-glucuronide
   gi3   β-d-n,n'-diacetylchitobiose
   gj3   β-d-fucose, α-l-fucose, β-l-fucose
   gk3   β-d-mannose, α-d-mannose

non-umbelliferyl substrates:
   ga3   amylose [polyglucan α1,4 linkages], amylopectin [polyglucan branching α1,6 linkages]
   gf3   xylan [poly 1,4-d-xylan]
   gg3   amylopectin, pullulan
   gh3   sucrose, fructofuranoside

table 5

   compound   relative fluorescence
   a          60.6
   b          73.6
   c          100.0
   d          84.2
   e          29.1
   f          5.4
   g          7.1
   h          0.9
   i          0.0
   m          9.4
   n          0.5
   o          0.5
   p          4.0
   q          11.3
   s          0.6
   t          0.1
   u          0.3
   w          0.2
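the relative fluorescence values of table 5 are background-corrected signals normalized to the best substrate (compound c = 100); a minimal sketch of that normalization, using illustrative raw numbers rather than the actual plate reads:

```python
# normalize each background-corrected fluorescence reading to the maximum,
# scaled to 100, as in the substrate-fingerprint table.
def relative_fluorescence(raw):
    top = max(raw.values())
    return {k: round(100.0 * v / top, 1) for k, v in raw.items()}

# illustrative raw values chosen so that compound c is the maximum:
raw = {"a": 606.0, "b": 736.0, "c": 1000.0}
print(relative_fluorescence(raw))  # {'a': 60.6, 'b': 73.6, 'c': 100.0}
```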
relevant_id: 140-381-209-008-336
earliest_claim_jurisdiction: JP
jurisdiction: KR, CN, EP, JP, US
ipcr_codes_str: A61M37/00, A61K9/00, A61K9/70, B29C39/02
earliest_claim_date: 2015-03-10
earliest_claim_year: 2015
classifications_ipcr_list_first_three_chars_list: A61, B29
title_en: process for producing sheet for percutaneous absorption
abstract_en: provided is a process for producing, with a simple device configuration, a sheet for percutaneous absorption which has a stable shape. the process for producing a sheet for percutaneous absorption comprises: a drug-solution filling step in which a drug solution 22 that is a polymer solution containing a drug is filled into the acicular recesses 15 of a mold 13; a drug-solution drying step in which the drug solution 22 filled into the acicular recesses 15 is dried to form a drug layer 120 containing the drug; a polymer-layer-forming-liquid supply step in which the mold 13 is provided with a dam part that surrounds the region 16 where the acicular recesses 15 have been formed and that extends above the region 16 where the acicular recesses 15 have been formed, and a polymer-layer-forming liquid 24 is supplied to the mold 13 so that the liquid 24 fills a space which extends to at least the height of the dam part and which, when viewed from above, at least reaches the dam part; and a polymer-layer-forming-liquid drying step in which the polymer-layer-forming liquid 24 supplied to the mold 13 is dried to form a polymer layer 122.
1. a method of producing a transdermal absorption sheet, comprising: a drug solution filling step of filling needle-like recessed portions of a mold having the needle-like recessed portions with a drug solution that is a polymer solution containing a drug, the mold having a first region in which the needle-like recessed portions are formed, and a second region in a periphery of the first region, the second region being provided with a step portion having a height that is higher than a height of the first region; a drug solution drying step of drying the drug solution filling the needle-like recessed portions to form a drug layer containing the drug; a polymer layer forming solution supply step of supplying a polymer layer forming solution to the mold, the supplied polymer layer forming solution being adjusted to have, over a range of greater than the step portion as seen from above, a height higher than a height of the step portion, and to have a first height in a vicinity of a center of the first region and a second height in a periphery of the center of the first region, the second height being higher than the first height, and then contracting the supplied polymer layer forming solution by surface tension, while a contact position of the supplied polymer layer forming solution and the mold is fixed to the step portion; and a polymer layer forming solution drying step of drying the polymer layer forming solution that has contracted, to form a polymer layer. 2. the method of producing a transdermal absorption sheet according to claim 1 , wherein the height of the step portion of the mold is 10 μm or more and 5,000 μm or less. 3. the method of producing a transdermal absorption sheet according to claim 1 , wherein in the polymer layer forming solution supply step, a thickness of the polymer layer forming solution is 5,000 μm or less. 4. 
the method of producing a transdermal absorption sheet according to claim 1, wherein the step portion is a frame that is installed to be separated from the mold. 5. the method of producing a transdermal absorption sheet according to claim 1, wherein the step portion has a step in the mold itself. 6. the method of producing a transdermal absorption sheet according to claim 1, wherein the step portion has a tapered shape widening in a direction from the first region to an upper side in a vertical direction. 7. the method of producing a transdermal absorption sheet according to claim 1, wherein a shape formed by the step portion in the periphery of the first region is a regular hexagonal or higher polygonal shape or a circular shape. 8. a method of producing a transdermal absorption sheet, comprising: a drug solution filling step of filling needle-like recessed portions of a mold having the needle-like recessed portions with a drug solution that is a polymer solution containing a drug, the mold having a first region in which the needle-like recessed portions are formed, and a second region in a periphery of the first region, the second region being provided with a step portion, the step portion having a height higher than a height of the first region and having a tapered shape widening in a direction from the first region to an upper side in a vertical direction; a drug solution drying step of drying the drug solution filling the needle-like recessed portions to form a drug layer containing the drug; a polymer layer forming solution supply step of supplying a polymer layer forming solution to the mold, the supplied polymer layer forming solution being adjusted to have, over a range of greater than the step portion as seen from above, a height higher than a height of the step portion, and then contracting the supplied polymer layer forming solution by surface tension, while a contact position of the supplied polymer layer forming solution and the mold is fixed to the
step portion; and a polymer layer forming solution drying step of drying the polymer layer forming solution that has contracted, to form a polymer layer. 9. the method of producing a transdermal absorption sheet according to claim 8 , wherein the height of the step portion of the mold is 10 μm or more and 5,000 μm or less. 10. the method of producing a transdermal absorption sheet according to claim 8 , wherein in the polymer layer forming solution supply step, a thickness of the polymer layer forming solution is 5,000 μm or less. 11. the method of producing a transdermal absorption sheet according to claim 8 , wherein the step portion is a frame that is installed to be separated from the mold. 12. the method of producing a transdermal absorption sheet according to claim 8 , wherein the step portion has a step in the mold itself. 13. the method of producing a transdermal absorption sheet according to claim 8 , wherein a shape formed by the step portion in the periphery of the first region is a regular hexagonal or higher polygonal shape or a circular shape.
cross-reference to related applications

this application is a continuation of pct international application no. pct/jp2016/057197 filed on mar. 8, 2016, which claims priorities under 35 u.s.c. § 119(a) to japanese patent application no. 2015-047622 filed on mar. 10, 2015 and japanese patent application no. 2016-040831 filed on mar. 3, 2016. each of the above applications is hereby expressly incorporated by reference, in its entirety, into the present application.

background of the invention

1. field of the invention

the present invention relates to a method of producing a transdermal absorption sheet and particularly relates to a method of producing a transdermal absorption sheet in which needle-like protruding portions containing a drug are arranged on a sheet portion thereof.

2. description of the related art

as a method for administering a drug or the like through a living body surface, that is, a skin, a mucous membrane, or the like, a drug injection method is used that employs a transdermal absorption sheet on which needle-like protruding portions having a high aspect ratio and containing a drug (hereinafter, also referred to as "microneedles") are formed, the microneedles being inserted into a skin. in order to use a sheet as a transdermal absorption sheet, a drug needs to be mixed into the sheet. however, many drugs are expensive, and thus the drug needs to be contained in the sheet so as to concentrate at the microneedles. as a method of producing a transdermal absorption sheet, a method is known in which a polymer solution or the like is poured into a mold on which needle-like recessed portions that are inverted shapes of needle-like protruding portions are formed, to transfer the shapes.
for example, there is a method including using a mold for microneedle sheets in which through-holes passing through a base material are made at the bottoms of recessed portions, first, coating a surface of the mold with a solution of a diluted drug, subsequently scraping off extra solution using a squeegee or the like, drying the drug solution, and then coating the dried drug solution with a needle raw material. in addition, there is a method of producing a sheet on which microneedle-like protrusion preparations are accumulated, in which the sheet is highly accurately produced in one step by filling a flexible substrate with a thick liquid consisting of a mixture of a target substance and a base utilizing a centrifugal force, while drying and hardening the liquid. in the formation of the microneedles using shape inversion by a needle-like recessed plate, regardless of whether a drug solution is contained or not, it is necessary to apply a polymer solution to the needle-like recessed plate by a certain method. for example, jp2009-082207a discloses a method of producing a functional film including applying a solution of a polymer resin to a form in which a recessed portion array is formed, and applying pressure with a pressurized fluid to fill the recessed portion array with the solution of the polymer resin. jp2011-224332a discloses use of a pressurizing filling apparatus for filling needle-like recessed portions with a polymer solution. in addition, jp2014-023698a discloses a method including supplying a needle-like body forming solution in a state in which a recessed plate is inclined, and moving the needle-like body forming solution from an upper side to a lower side to fill the recessed portions with the needle-like body forming solution.
summary of the invention in order to improve the releasability of formed needle-like protruding portions, a needle-like recessed plate is formed by using a material having a low surface tension or is subjected to a surface treatment in some cases. therefore, in the case of directly applying a polymer solution to the needle-like recessed plate, the liquid contracts due to the difference in surface tension between the solid and the liquid and due to poor wettability, and the liquid is repelled. thus, there is a problem in that a film cannot be formed. in order to prevent the liquid from being repelled, it is conceivable to increase the surface tension of the needle-like recessed plate portion or to decrease the surface tension of the polymer solution. however, from the viewpoint of releasability, it is difficult to increase the surface tension of the needle-like recessed plate portion. in addition, the addition of a surfactant into the polymer solution may not be desirable from the viewpoint of performance. further, as another approach, an increase in the coating film thickness of the polymer solution can be considered, but it is not preferable, from the viewpoint of production costs, to place a load on the drying step or, in terms of performance, to form an unnecessarily thick sheet. in the methods described in jp2009-082207a, jp2011-224332a, and jp2014-023698a, it is possible to maintain the shape of the frame by suppressing the surface tension. however, a large-scale apparatus enclosing the entire system is required and thus facility costs increase. in addition, coating and drying cannot be performed accurately in a static state.
the present invention has been made in consideration of such circumstances, and an object thereof is to provide a method of producing a transdermal absorption sheet capable of performing drying in a state in which each solid form of a polymer solution is stably maintained while suppressing an increase in facility costs by controlling a method of supplying the polymer solution into a frame that partitions the polymer solution. in order to achieve the above object, the present invention provides a method of producing a transdermal absorption sheet, comprising: a drug solution filling step of filling needle-like recessed portions of a mold having the needle-like recessed portions with a drug solution that is a polymer solution containing a drug; a drug solution drying step of drying the drug solution filling the needle-like recessed portions to form a drug layer containing the drug; a polymer layer forming solution supply step of supplying a polymer layer forming solution to the mold, the mold being provided with a step portion that is higher than a region in which the needle-like recessed portions are formed in a periphery of the region in which the needle-like recessed portions are formed, at a height equal to or higher than a height of the step portion in a range of equal to or greater than the step portion as seen from above; and a polymer layer forming solution drying step of drying the polymer layer forming solution supplied to the mold to form a polymer layer. 
according to the present invention, since the step portion that is higher than the region in which the needle-like recessed portions are formed is provided in the periphery of the region in which the needle-like recessed portions are formed on the mold on which the needle-like recessed portions are formed, and the polymer layer forming solution for forming a polymer layer is supplied at a height equal to or higher than the height of the step portion in a range of equal to or greater than the step portion, the polymer layer forming solution can be prevented from being repelled in the region in which the needle-like recessed portions are formed on the mold. accordingly, it is possible to stably produce a transdermal absorption sheet in a state in which the shape of the transdermal absorption sheet is maintained. the expression "the polymer layer forming solution is supplied at a height equal to or higher than the height of the step portion" means that the polymer layer forming solution is supplied at a height equal to or higher than a surface flush with the step portion in the periphery of the region in which the needle-like recessed portions are formed. in another aspect of the present invention, it is preferable that in the polymer layer forming solution supply step, the polymer layer forming solution is supplied at a height higher than the height of the step portion in a range of greater than the step portion as seen from above and then a contact position of the polymer layer forming solution and the mold is fixed to the step portion while reducing the polymer layer forming solution.
according to the aspect, since the polymer layer forming solution is supplied at a height higher than the height of the step portion of the mold in a range of greater than the step portion and then the supplied polymer layer forming solution is fixed to the step portion while reducing the polymer layer forming solution, the polymer layer forming solution can be prevented from being repelled in the region in which the needle-like recessed portions are formed on the mold. accordingly, it is possible to stably produce a transdermal absorption sheet in a state in which the shape of the transdermal absorption sheet is maintained. the expression "a contact position of the polymer layer forming solution and the mold" refers to the periphery of the liquid droplets of the polymer layer forming solution at the time of reduction of the polymer layer forming solution toward the region in which the needle-like recessed portions are formed, and the expression "the contact position is fixed to the step portion" means that the periphery of the liquid droplets of the polymer layer forming solution is fixed to the upper side of the step portion. in addition, the reduction of the polymer layer forming solution is preferably performed by using surface tension. in another aspect of the present invention, it is preferable that the height of the step portion of the mold is 10 μm or more and 5,000 μm or less. in the aspect, the height of the step portion of the mold is defined and the amount of the polymer layer forming solution to be supplied can be reduced by setting the height of the step portion to the above range. thus, the time for the polymer layer forming solution drying step can be shortened and production costs can be reduced. in another aspect of the present invention, it is preferable that in the polymer layer forming solution supply step, a thickness of the polymer layer forming solution is 5,000 μm or less.
in the aspect, the thickness of the polymer layer forming solution in the polymer layer forming solution supply step is defined. since the amount of the polymer layer forming solution to be supplied can be reduced by setting the thickness of the polymer layer forming solution to the above range, the time for the polymer layer forming solution drying step can be shortened and production costs can be reduced. the expression "the thickness of the polymer layer forming solution" is a thickness from the region in which the needle-like recessed portions are formed after the polymer layer forming solution supply step is performed and is a thickness of the thickest portion of the polymer layer forming solution. in another aspect of the present invention, it is preferable that the step portion is a frame that is installed to be separated from the mold. according to the aspect, the drug solution filling step and the drug solution drying step can be performed on a flat mold by forming the step portion by the frame, and thus each step can be effectively performed. in addition, a frame can be installed according to a transdermal absorption sheet to be produced. in another aspect of the present invention, it is preferable that the step portion has a step in the mold itself. according to the aspect, it is possible to stably supply the polymer layer forming solution by providing a step in the mold itself to form the step portion. in another aspect of the present invention, it is preferable that the step portion has a tapered shape widening in a direction from the region in which the needle-like recessed portions are formed to an upper side in a vertical direction. according to the aspect, the effect of defoaming bubbles mixed in the polymer layer forming solution is exhibited by forming the step portion in a tapered shape widening to the upper side.
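as a rough quantitative illustration of why limiting the supply thickness shortens the drying step, the sketch below (the editor's own illustration, not part of the invention; the area value and the linear-drying assumption are hypothetical) treats the supplied volume as simply the coated area times the thickness:

```python
# Hypothetical sketch: supplied volume vs. coating thickness.
# Assumption: volume ~= coated area x thickness, and drying time
# scales roughly linearly with the supplied volume.

def solution_volume_ul(region_area_mm2, thickness_um):
    """Approximate supplied volume in microliters (1 mm^3 == 1 uL)."""
    return region_area_mm2 * (thickness_um / 1000.0)  # thickness converted to mm

MAX_THICKNESS_UM = 5000  # upper limit stated in the text

v_max = solution_volume_ul(100.0, MAX_THICKNESS_UM)  # 500.0 uL for a 100 mm^2 region
v_thin = solution_volume_ul(100.0, 1000.0)           # 100.0 uL at one fifth the thickness
print(v_max / v_thin)  # 5.0: a thinner supply leaves proportionally less to dry
```

under these assumptions, halving the supply thickness roughly halves the volume that must be evaporated, which is the cost argument made above.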
it is possible to prevent defects of the needle-like protruding portions in the peeling-off step and damage of the needle-like protruding portions at the time of puncture by defoaming bubbles mixed in the polymer layer forming solution. in order to achieve the above object, the present invention provides a method of producing a transdermal absorption sheet, comprising: a drug solution filling step of filling needle-like recessed portions of a mold having the needle-like recessed portions with a drug solution that is a polymer solution containing a drug; a drug solution drying step of drying the drug solution filling the needle-like recessed portions to form a drug layer containing the drug; a polymer layer forming solution supply step of supplying a polymer layer forming solution to the mold, the mold being provided with a step portion that is lower than a region in which the needle-like recessed portions are formed in a periphery of the region in which the needle-like recessed portions are formed, in a range of equal to or greater than the step portion as seen from above, and then fixing a contact position of the polymer layer forming solution and the mold to the step portion while reducing the polymer layer forming solution; and a polymer layer forming solution drying step of drying the polymer layer forming solution supplied to the mold to form a polymer layer.
according to the present invention, since the polymer layer forming solution for forming a polymer layer is supplied in a range equal to or greater than the step portion by providing the step portion that is lower than the region in which the needle-like recessed portions are formed in the periphery of the region in which the needle-like recessed portions are formed in the mold in which the needle-like recessed portions are formed and is fixed to the step portion by reduction, the polymer layer forming solution can be prevented from being repelled in the region in which the needle-like recessed portions are formed on the mold. accordingly, it is possible to stably produce a transdermal absorption sheet in a state in which the shape of the transdermal absorption sheet is maintained. it is preferable that the polymer layer forming solution is reduced using the surface tension. in another aspect of the present invention, it is preferable that in the polymer layer forming solution supply step, the polymer layer forming solution is supplied to each needle-like recessed portion in which the step portion is provided. according to the aspect, it is possible to fix the supplied polymer layer forming solution to the respective step portions by supplying the polymer layer forming solution to each step portion. in another aspect of the present invention, it is preferable that in the polymer layer forming solution supply step, a thickness of the polymer layer forming solution is 5,000 μm or less. in the aspect, the thickness of the polymer layer forming solution in the polymer layer forming solution supply step is defined. since the amount of the polymer layer forming solution to be supplied can be reduced by setting the thickness of the polymer layer forming solution to the above range, the time for the polymer layer forming solution drying step can be shortened and production costs can be reduced. 
in another aspect of the present invention, it is preferable that the step portion has a step in the mold itself. according to the aspect, it is possible to stably supply the polymer layer forming solution by forming the step portion by providing a step in the mold itself. in another aspect of the present invention, in the case in which the polymer layer forming solution is supplied, in order to make a contractile force of the polymer layer forming solution which works on the step portion installed on the mold uniform, a shape formed by the step portion in the periphery of the region in which the needle-like recessed portions are formed is preferably a hexagonal or higher polygonal shape in which all corners are formed at an angle of 120° or greater as the step is viewed from above, and more preferably a regular hexagonal or higher polygonal shape or a circular shape. since the polymer layer forming solution is isotropically reduced, the shape formed by the step portion is formed in a hexagonal or higher polygonal shape in which all corners are formed at an angle of 120° or greater, and preferably in a regular hexagonal or higher polygonal shape or a circular shape, and then the polymer layer forming solution is reduced at the corner portions of the polygonal shape. thus, it is possible to prevent the liquid droplets of the polymer layer forming solution from being dropped into the region in which the needle-like recessed portions are formed without fixing the polymer layer forming solution to the step portion. in the case in which the liquid droplets of the polymer layer forming solution are dropped into the region in which the needle-like recessed portions are formed, the polymer layer forming solution is repelled and the shape of the transdermal absorption sheet is not stable. thus, this case is not preferable.
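the 120° condition can be checked with elementary geometry: a regular n-gon has interior angles of (n − 2)·180°/n, so the regular hexagon is the smallest regular polygon whose corners all reach 120°. a short sketch (the editor's own illustration, not part of the invention):

```python
# Interior angle of a regular n-gon in degrees: (n - 2) * 180 / n.
def interior_angle_deg(n):
    return (n - 2) * 180.0 / n

for n in range(3, 9):
    print(n, interior_angle_deg(n))
# n = 6 (regular hexagon) is the first regular polygon whose corners reach
# 120 degrees, which is why a regular hexagonal or higher polygon (or a
# circle, the large-n limit) keeps the isotropically contracting solution
# pinned evenly along the step portion.
```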
according to the method of producing a transdermal absorption sheet of the present invention, it is possible to stably maintain a liquid level even in a thin film with a simple apparatus configuration. in addition, it is possible to realize cost reduction by adopting a simple apparatus configuration. brief description of the drawings fig. 1 is a perspective view showing a transdermal absorption sheet having a needle-like protruding portion. fig. 2 is a perspective view showing a transdermal absorption sheet having a needle-like protruding portion of another shape. fig. 3 is a cross-sectional view showing the needle-like protruding portions of the transdermal absorption sheets shown in figs. 1 and 2 . fig. 4 is a perspective view showing a transdermal absorption sheet having a needle-like protruding portion of another shape. fig. 5 is a perspective view showing a transdermal absorption sheet having a needle-like protruding portion of another shape. fig. 6 is a cross-sectional view showing the needle-like protruding portions of the transdermal absorption sheets shown in figs. 4 and 5 . fig. 7a is a step view of a method of producing a mold. fig. 7b is a step view of the method of producing the mold. fig. 7c is a step view of the method of producing the mold. fig. 8a is a step view of a method of producing a mold having another shape. fig. 8b is a step view of the method of producing the mold having another shape. fig. 8c is a step view of the method of producing the mold having another shape. fig. 9a is a step view of a method of producing a mold having another shape. fig. 9b is a step view of the method of producing the mold having another shape. fig. 9c is a step view of the method of producing the mold having another shape. fig. 10 is a partially enlarged view showing a mold. fig. 11 is a partially enlarged view showing a mold. fig. 12 is a flowchart of a method of producing a transdermal absorption sheet. fig. 
13a is a schematic view showing a step of filling needle-like recessed portions of a mold with a drug solution. fig. 13b is a schematic view showing the step of filling the needle-like recessed portions of the mold with the drug solution. fig. 13c is a schematic view showing the step of filling the needle-like recessed portions of the mold with the drug solution. fig. 14 is a perspective view showing a tip end of a nozzle. fig. 15 is a perspective view showing a tip end of another nozzle. fig. 16 is a partially enlarged view showing the tip end of the nozzle and the mold during filling. fig. 17 is a partially enlarged view showing the tip end of the nozzle and the mold during scanning. fig. 18 is a schematic configuration view showing a drug solution filling apparatus. fig. 19 is an illustration showing a relationship between the liquid pressure in the nozzle and the supply of a drug-containing solution. fig. 20a is a schematic view showing a part of a step of producing a transdermal absorption sheet. fig. 20b is a schematic view showing a part of the step of producing the transdermal absorption sheet. fig. 20c is a schematic view showing a part of the step of producing the transdermal absorption sheet. fig. 20d is a schematic view showing a part of the step of producing the transdermal absorption sheet. fig. 21a is an illustration showing a polymer layer forming solution supply step according to a first embodiment. fig. 21b is an illustration showing the polymer layer forming solution supply step according to the first embodiment. fig. 21c is an illustration showing the polymer layer forming solution supply step according to the first embodiment. fig. 22a is an illustration showing an unpreferable example of the polymer layer forming solution supply step. fig. 22b is an illustration showing the unpreferable example of the polymer layer forming solution supply step. fig. 23a is an illustration showing a method of applying a polymer layer forming solution to a mold.
fig. 23b is an illustration showing the method of applying the polymer layer forming solution to the mold. fig. 24a is an illustration showing another method of applying a polymer layer forming solution to the mold. fig. 24b is an illustration showing the another method of applying the polymer layer forming solution to the mold. fig. 24c is an illustration showing the another method of applying the polymer layer forming solution to a mold. fig. 25a is an illustration showing reduction of the polymer layer forming solution according to a shape of a frame. fig. 25b is an illustration showing the reduction of the polymer layer forming solution according to the shape of the frame. fig. 25c is an illustration showing the reduction of the polymer layer forming solution according to the shape of the frame. fig. 26a is an illustration showing reduction of the polymer layer forming solution according to a shape of application. fig. 26b is an illustration showing the reduction of the polymer layer forming solution according to the shape of the application. fig. 26c is an illustration showing the reduction of the polymer layer forming solution according to the shape of the application. fig. 27a is an illustration showing the reduction of the polymer layer forming solution according to another shape of a frame. fig. 27b is an illustration showing the reduction of the polymer layer forming solution according to the another shape of the frame. fig. 27c is an illustration showing the reduction of the polymer layer forming solution according to the another shape of the frame. fig. 28a is an illustration showing a polymer layer forming solution supply step using a frame having another shape. fig. 28b is an illustration showing the polymer layer forming solution supply step using the frame having another shape. fig. 29a is an illustration showing a polymer layer forming solution supply step according to a modification example of the first embodiment. fig.
29b is an illustration showing the polymer layer forming solution supply step according to the modification example of the first embodiment. fig. 30a is an illustration showing a polymer layer forming solution supply step according to a second embodiment. fig. 30b is an illustration showing the polymer layer forming solution supply step according to the second embodiment. fig. 31a is an illustration showing an unpreferable example of the polymer layer forming solution supply step. fig. 31b is an illustration showing the unpreferable example of the polymer layer forming solution supply step. fig. 32a is a plan view showing an original plate used in examples. fig. 32b is a side view showing the original plate used in examples. description of the preferred embodiments hereinafter, a method of producing a transdermal absorption sheet of the present invention will be described with reference to the attached drawings. incidentally, in the specification, numerical values indicated using the expression "to" mean a range including the numerical values indicated before and after the expression "to" as the lower limit and the upper limit. (transdermal absorption sheet) a transdermal absorption sheet produced in the embodiment will be described. figs. 1 and 2 each show a needle-like protruding portion 110 (also referred to as a microneedle) that is a partially enlarged view of a transdermal absorption sheet 100 . the transdermal absorption sheet 100 delivers a drug into the skin by being attached to the skin. as shown in fig. 1 , the transdermal absorption sheet 100 has a tapered-shaped needle portion 112 , a frustum portion 114 connected to the needle portion 112 , and a plate-like sheet portion 116 connected to the frustum portion 114 . the tapered-shaped needle portion 112 and the frustum portion 114 configure the needle-like protruding portion 110 .
a plurality of frustum portions 114 is formed on the surface of the sheet portion 116 (only one frustum portion 114 is shown in fig. 1 ). out of the two end surfaces of the frustum portion 114 , an end surface (lower base) having a larger area is connected to the sheet portion 116 . out of the two end surfaces of the frustum portion 114 , an end surface (upper base) having a smaller area is connected to the needle portion 112 . that is, out of the two end surfaces of the frustum portion 114 , an end surface in a direction in which the end surface is separated from the sheet portion 116 has a smaller area. since the end surface of the needle portion 112 having a large area is connected to the end surface of the frustum portion 114 having a small area, the needle portion 112 has a gradually tapered shape in a direction in which the needle portion is separated from the frustum portion 114 . in fig. 1 , the frustum portion 114 has a truncated cone shape, and the needle portion 112 has a cone shape. the shape of a tip end of the needle portion 112 can be appropriately changed to a curved surface having a radius of curvature of 0.01 μm or more and 50 μm or less, a flat surface, or the like in accordance with the degree of insertion of the needle portion 112 into the skin. fig. 2 shows a needle-like protruding portion 110 having another shape. in fig. 2 , the frustum portion 114 has a truncated square pyramid shape and the needle portion 112 has a quadrangular pyramid shape. fig. 3 shows cross-sectional views of the transdermal absorption sheets 100 shown in figs. 1 and 2 , respectively. as shown in fig. 3 , the transdermal absorption sheet 100 is formed of a drug layer 120 containing a predetermined amount of a drug and a polymer layer 122 . 
here, the expression "containing a predetermined amount of a drug" means containing a drug in an amount large enough that the effect of the drug is exhibited in the case in which the transdermal absorption sheet 100 punctures the body surface. the drug layer 120 containing a drug is formed at the tip end of the needle-like protruding portion 110 (the tip end of the needle portion 112 ). the drug can be effectively delivered into the skin by forming the drug layer 120 at the tip end of the needle-like protruding portion 110 . hereinafter, the expression "containing a predetermined amount of a drug" is referred to as "containing a drug" if necessary. in the portion of the needle portion 112 excluding the drug layer 120 , the polymer layer 122 is formed. the frustum portion 114 is formed of the polymer layer 122 . the sheet portion 116 is formed of the polymer layer 122 . the distribution of the drug layer 120 and the polymer layer 122 forming the needle portion 112 , the frustum portion 114 , and the sheet portion 116 can be appropriately set. the thickness t of the sheet portion 116 is preferably in a range of 10 μm to 2,000 μm and more preferably in a range of 10 μm to 1,000 μm. a width w 1 of the bottom surface (lower base) in which the frustum portion 114 and the sheet portion 116 are in contact with each other is preferably in a range of 100 μm to 1,500 μm and more preferably in a range of 100 μm to 1,000 μm. a width w 2 of the bottom surface (upper base) in which the frustum portion 114 and the needle portion 112 are in contact with each other is preferably in a range of 100 μm to 1,500 μm and more preferably in a range of 100 μm to 1,000 μm. it is preferable that the width w 1 and the width w 2 satisfy the relationship of w 1 >w 2 in the above numerical value range. the height h of the needle-like protruding portion 110 is preferably in a range of 100 μm to 2,000 μm and more preferably in a range of 200 μm to 1,500 μm.
in addition, h 1 /h 2 that is a ratio between a height h 1 of the needle portion 112 and a height h 2 of the frustum portion 114 is preferably in a range of 1 to 10 and more preferably in a range of 1.5 to 8. in addition, the height h 2 of the frustum portion 114 is preferably in a range of 10 μm to 1,000 μm. an angle α formed between the side surface of the frustum portion 114 and a surface parallel with the surface of the sheet portion 116 is preferably in a range of 10° to 60° and more preferably in a range of 20° to 50°. in addition, an angle β formed between the side surface of the needle portion 112 and a surface parallel to the upper base of the frustum portion 114 is preferably in a range of 45° to 85° and more preferably in a range of 60° to 80°. the angle β is preferably equal to or greater than the angle α. this is because the needle-like protruding portion 110 easily punctures the skin. figs. 4 and 5 show needle-like protruding portions 110 having other shapes. in the transdermal absorption sheets 100 shown in figs. 1 and 4 and the transdermal absorption sheets 100 shown in figs. 2 and 5 , the frustum portions 114 have the same shape, and the needle portions 112 have different shapes. each needle portion 112 shown in figs. 4 and 5 has a tapered needle-like portion 112 a and a cylindrical body portion 112 b. the bottom surface of the needle-like portion 112 a is connected to an end surface of the body portion 112 b. the other end surface that is not connected to the bottom surface of the needle-like portion 112 a out of the end surfaces of the body portion 112 b is connected to the upper base of the frustum portion 114 . the needle-like portion 112 a shown in fig. 4 has a conical shape and the body portion 112 b has a columnar shape. the needle-like portion 112 a shown in fig. 5 has a quadrangular pyramid shape and the body portion 112 b has a quadrangular prism shape.
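the preferred ranges given above for the fig. 3 geometry can be collected into a single consistency check. the sketch below is the editor's own illustration (the "validator" framing and the sample values are hypothetical; the numeric ranges are the ones stated in the text):

```python
# Hypothetical check of a candidate needle geometry against the preferred
# ranges stated above for a Fig. 3 type sheet (all lengths in micrometers,
# all angles in degrees). Returns a list of violated conditions.

def check_geometry(t_um, w1_um, w2_um, h_um, h1_um, h2_um, alpha_deg, beta_deg):
    problems = []
    if not (10 <= t_um <= 2000):
        problems.append("sheet thickness T outside 10-2000 um")
    if not (100 <= w1_um <= 1500):
        problems.append("lower base W1 outside 100-1500 um")
    if not (100 <= w2_um <= 1500):
        problems.append("upper base W2 outside 100-1500 um")
    if not (w1_um > w2_um):
        problems.append("W1 must exceed W2")
    if not (100 <= h_um <= 2000):
        problems.append("needle height H outside 100-2000 um")
    if not (1 <= h1_um / h2_um <= 10):
        problems.append("H1/H2 outside 1-10")
    if not (10 <= alpha_deg <= 60):
        problems.append("alpha outside 10-60 degrees")
    if not (45 <= beta_deg <= 85):
        problems.append("beta outside 45-85 degrees")
    if beta_deg < alpha_deg:
        problems.append("beta should be >= alpha for easy puncture")
    return problems

# A sample geometry lying inside every preferred range:
print(check_geometry(500, 400, 200, 800, 600, 200, 40, 70))  # []
```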
since the needle portion 112 has the body portion 112 b, the needle portion 112 is formed to have a shape having a fixed cross-sectional area in a direction in which the needle portion is separated from the frustum portion 114 . the tapered needle-like portion 112 a of the needle portion 112 has a shape gradually tapered in a direction in which the needle portion is separated from the body portion 112 b. the cylindrical body portion 112 b has two facing end surfaces having almost the same area. the needle portion 112 has a tapered shape as a whole. according to a degree of insertion of the needle portion 112 into the skin, the shape of the tip end of the needle portion 112 can be appropriately changed to have a curved surface of a radius of curvature of 0.01 μm or more and 50 μm or less, a flat surface, or the like. fig. 6 is a cross-sectional view showing the transdermal absorption sheets 100 shown in figs. 4 and 5 . as shown in fig. 6 , each transdermal absorption sheet 100 is formed of a drug layer 120 containing a drug and a polymer layer 122 . the drug layer 120 containing a drug is formed at the tip end of the needle-like protruding portion 110 (the tip end of the needle portion 112 ). by forming the drug layer 120 at the tip end of the needle-like protruding portion 110 , the drug can be effectively delivered into the skin. in the portion of the needle portion 112 excluding the drug layer 120 , the polymer layer 122 is formed. the frustum portion 114 is formed of the polymer layer 122 . the sheet portion 116 is formed of the polymer layer 122 . the distribution of the drug layer 120 and the polymer layer 122 forming the needle portion 112 , the frustum portion 114 , and the sheet portion 116 can be appropriately set. 
the thickness t of the sheet portion 116 , the width w 1 of the lower base of the frustum portion 114 , the width w 2 of the upper base of the frustum portion 114 , the height h of the needle-like protruding portion 110 , and the height h 2 of the frustum portion 114 can be set to be the same as the lengths in the transdermal absorption sheets 100 shown in fig. 3 . h 1 /h 2 that is a ratio between the height h 1 of the needle portion 112 and the height h 2 of the frustum portion 114 can be set to be the same as the ratios in the transdermal absorption sheets 100 shown in fig. 3 . h 1 b/h 1 a that is a ratio between a height h 1 a of the needle-like portion 112 a and a height h 1 b of the body portion 112 b is in a range of 0.1 or more and 4 or less and preferably in a range of 0.3 or more and 2 or less. the angle α formed between the side surface of the frustum portion 114 and a surface parallel to the surface of the sheet portion 116 is in a range of 10° or greater and 60° or less and preferably in a range of 20° or greater and 50° or less. in addition, the angle β formed between the side surface of the needle-like portion 112 a and an end surface parallel to the bottom surface of the body portion 112 b is in a range of 45° or greater and 85° or less and preferably in a range of 60° or greater and 80° or less. the angle β is preferably equal to or greater than the angle α. this is because the needle-like protruding portion 110 is easily inserted into the skin. in the embodiment, the transdermal absorption sheets 100 having the needle portions 112 shown in figs. 1, 2, 4, and 5 are described but the shape of the transdermal absorption sheet 100 is not limited to these shapes. (mold) figs. 7a to 7c are step views showing a step of producing a mold (form). as shown in fig. 7a , first, an original plate for producing a mold for producing a transdermal absorption sheet is produced. there are two kinds of methods of producing the original plate 11 . 
the first method includes applying a photo resist to a si substrate, and exposing and developing the photo resist. then, etching by reactive ion etching (rie) or the like is performed to produce a plurality of protruding portions 12 , each having the same shape as the needle-like protruding portion of the transdermal absorption sheet, in arrays on the surface of the original plate 11 . in addition, in the case of performing etching such as rie to form the protruding portion 12 on the surface of the original plate 11 , the protruding portion 12 can be formed by performing etching from an oblique direction while rotating the si substrate. as the second method, there is a method including processing a metal substrate of stainless steel, an aluminum alloy, ni, or the like using a cutting tool such as a diamond bite to produce a plurality of protruding portions 12 in arrays on the surface of the original plate 11 . next, as shown in fig. 7b , a mold 13 is produced using the original plate 11 . in order to produce a normal mold 13 , a method using ni electroforming or the like is generally used. since the original plate 11 has the protruding portions 12 having a conical shape with a sharp tip end or a pyramid shape (for example, a quadrangular pyramid shape), the shape of the original plate 11 is accurately transferred to the mold 13 , and the mold 13 can be peeled off from the original plate 11 . four methods that make it possible to produce the mold 13 at a low cost are considered. the first method is a method in which a silicone resin obtained by adding a curing agent to polydimethylsiloxane (pdms, for example, sylgard (registered trademark) 184 , manufactured by dow corning corporation) is poured into the original plate 11 and cured by a heating treatment at 100° c., and then the mold 13 is peeled off from the original plate 11 .
the second method is a method in which a uv curable resin that is curable by ultraviolet irradiation is poured into the original plate 11 and irradiated with ultraviolet light in a nitrogen atmosphere, and then the mold 13 is peeled off from the original plate 11 . the third method is a method in which a material obtained by dissolving a plastic resin such as polystyrene or polymethylmethacrylate (pmma) in an organic solvent is poured into the original plate 11 which has been coated with a release agent, and is dried to volatilize the organic solvent for curing, and then the mold 13 is peeled off from the original plate 11 . the fourth method is a method in which an inverted article is made by ni electroforming. in this manner, the mold 13 in which the needle-like recessed portions 15 having an inverted shape of the protruding portion 12 of the original plate 11 are arranged two-dimensionally is produced. the mold 13 produced in this manner is shown in fig. 7c . in addition, in any of the above four methods, the mold 13 can be easily produced any number of times. in the case of a mold itself having a step portion, a step portion is provided on the original plate to produce a mold having an inverted shape thereof. figs. 8a to 8c are step views of production of a mold 73 having a step portion 74 in which the periphery of a region in which the needle-like recessed portions 15 are formed is higher than a region in which the needle-like recessed portions 15 are formed. similar to the case of forming a mold not having a step portion, as shown in fig. 8a , an original plate 71 for producing the mold 73 having the step portion 74 is produced. in the original plate 71 , the step portion 75 is formed to be lower than a region in which the protruding portions 12 are formed. the production of the original plate can be performed in the same manner as in fig. 7a . next, as shown in fig. 8b , the mold 73 is produced by using the original plate 71 .
the production of the mold 13 can be performed in the same manner as in fig. 7b . thus, as shown in fig. 8c , the mold 73 in which the needle-like recessed portions 15 which are inverted shapes of the protruding portions 12 and the step portion 75 of the original plate 71 are arranged two-dimensionally and the step portion 74 is provided in the periphery thereof is produced. in addition, figs. 9a to 9c are step views of production of a mold 83 having a step portion 84 that is formed to be lower than the region in which the needle-like recessed portions 15 are arranged two-dimensionally in the periphery of the region in which needle-like recessed portions 15 are formed. similar to figs. 7a and 8a , as shown in fig. 9a , an original plate 81 for producing the mold 83 having a step portion 84 is produced. in the original plate 81 , the step portion 85 is formed to be higher than a region in which the protruding portions 12 are formed. next, as shown in fig. 9b , the mold 83 is produced by using the original plate 81 . thus, as shown in fig. 9c , the mold 83 in which the needle-like recessed portions 15 which are inverted shapes of the protruding portions 12 and the step portion 85 of the original plate 81 are arranged two-dimensionally and the step portion 84 is provided in the periphery thereof is produced. the method of producing the original plate and the method of producing the mold can be performed in the same manner as in the production method of figs. 7a, 7b, 8a, and 8b . fig. 10 is a partially enlarged view showing the needle-like recessed portions 15 of the mold 13 . in the molds 73 and 83 , the needle-like recessed portions 15 have the same configuration. the needle-like recessed portion 15 is provided with a tapered inlet portion 15 a that is narrower in a depth direction from the surface of the mold 13 , and a tip end recessed portion 15 b that is tapered in the depth direction.
the angle α 1 of the taper of the inlet portion 15 a basically coincides with the angle α formed between the side surface of the frustum portion and the sheet portion of the transdermal absorption sheet. in addition, the angle β 1 of the taper of the tip end recessed portion 15 b basically coincides with the angle β formed between the side surface of the needle portion and the upper base of the frustum portion. fig. 11 shows a more preferred embodiment of a mold complex 18 in performing a method of producing a transdermal absorption sheet. as shown in fig. 11 , the mold complex 18 includes a mold 13 in which a through-hole 15 c is formed at the tip end of the needle-like recessed portion 15 and a gas permeable sheet 19 that is bonded to the through-hole 15 c side of the mold 13 and is made of a material that is gas permeable, but is not liquid permeable. through the through-hole 15 c, the tip end of the needle-like recessed portion 15 communicates with the atmosphere through the gas permeable sheet 19 . the expression “tip end of the needle-like recessed portion 15 ” means a side that is tapered in a depth direction of the mold 13 and is opposite to a side from which a drug solution and a polymer layer forming solution are poured. using such a mold complex 18 , only the air present in the needle-like recessed portion 15 can be removed from the needle-like recessed portion 15 via the through-hole 15 c without permeation of the transdermal absorption material solution filling the needle-like recessed portion 15 . the transferability in the case in which the shape of the needle-like recessed portion 15 is transferred to the transdermal absorption material is improved, and thus it is possible to form a sharper needle-like protruding portion. the diameter d of the through-hole 15 c is preferably in a range of 1 to 50 μm.
by adjusting the diameter within this range, air bleeding is easily performed, and the tip end portion of the needle-like protruding portion of the transdermal absorption sheet can be formed into a sharp shape. as the gas permeable sheet 19 made of a material that is gas permeable, but is not liquid permeable, for example, poreflon (product name, manufactured by sumitomo electric industries, ltd.) can be suitably used. as the material used for the mold 13 , a resin-based raw material and a metallic raw material can be used. of these, a resin-based raw material is preferable and a raw material with high gas permeability is more preferable. the oxygen permeability, which is representative of the gas permeability, is preferably more than 1×10 −12 (ml/s·m·pa) and more preferably more than 1×10 −10 (ml/s·m·pa). by setting the gas permeability to be in the above range, the air present in the needle-like recessed portion 15 of the mold 13 can be removed from the mold 13 . it is possible to produce a transdermal absorption sheet with few defects. as a resin-based raw material for such a material, general engineering plastics such as silicone resin, epoxy resin, polyethylene terephthalate (pet), polymethyl methacrylate (pmma), polystyrene (ps), polyethylene (pe), polyacetal or polyoxymethylene (pom), polytetrafluoroethylene (ptfe), uv (ultraviolet) curable resin, phenolic resin, urethane resin, and the like can be used. in addition, examples of the metallic raw material include ni, cu, cr, mo, w, ir, tr, fe, co, mgo, ti, zr, hf, v, nb, ta, α-aluminum oxide, stainless steel, and alloys thereof. in addition, since it is necessary to fix a polymer layer forming solution to the step portion in a polymer layer forming solution supply step as described later, the mold 13 is preferably formed by using a material with controlled water repellency and wettability.
for example, the contact angle between the mold and the polymer layer forming solution is preferably greater than 90° and close to 90°. (polymer solution) the polymer solution that is a solution of the polymer resin used in the embodiment is described. in the embodiment, the expression “polymer solution containing a predetermined amount of a drug” is referred to as a polymer solution containing a drug or a solution containing a drug, if necessary. also, the expression “polymer solution containing a predetermined amount of a drug” is referred to as a drug solution. whether or not a predetermined amount of a drug is contained in the solution can be determined based on whether or not the effect of the drug can be exhibited in the case in which the transdermal absorption sheet punctures the body surface. accordingly, the expression “containing a predetermined amount of drug” means containing the drug in such an amount that the effect of the drug is exhibited in the case in which the transdermal absorption sheet punctures the body surface. as the raw material for the resin polymer used for the polymer solution, a biocompatible resin is preferably used. it is preferable to use, as such a resin, sugar such as glucose, maltose, pullulan, chondroitin sulfate, sodium hyaluronate, or hydroxyethyl starch, protein such as gelatin, or a biodegradable polymer such as polylactic acid and a lactic acid-glycolic acid copolymer. among these, gelatin-based raw materials can be suitably used since the gelatin-based raw materials have adhesiveness with many base materials and have a high gel strength as materials to be gelated, and in the peeling-off step described later, the raw materials can be closely attached to the base material and a polymer sheet can be peeled off from the mold using the base material. 
the concentration of the resin is preferably such that 10% to 50% by mass of the resin polymer is contained in the polymer solution for forming the polymer layer 122 , while the concentration depends on the kind of the material. additionally, a solvent used for dissolution may be other than hot water as long as the solvent has volatility, and methyl ethyl ketone (mek), alcohol, or the like may be used. the drug to be supplied to the inside of the human body may concurrently be dissolved into the solution of the polymer resin in accordance with the application. the concentration of the polymer of the polymer solution containing a drug for forming the drug layer 120 (the concentration of the polymer excluding the drug in the case in which the drug itself is a polymer) is preferably 0% to 30% by mass. for a method for preparing the polymer solution, in the case in which a water-soluble polymer (gelatin or the like) is used, the solution may be prepared by dissolving water-soluble powder into water and, after the dissolution, adding a drug to the solution, or by adding and dissolving water-soluble polymer powder in a solution in which the drug has been dissolved. in the case in which the polymer resin is difficult to dissolve into water, the polymer resin may be dissolved on heating. the temperature can be appropriately selected as needed depending on the kind of the polymer material, but the material is preferably heated at about 60° c. or lower. regarding the viscosity of the solution of the polymer resin, the viscosity of the drug-containing solution for forming the drug layer 120 is preferably 100 pa·s or less and more preferably 10 pa·s or less. the viscosity of the solution for forming the polymer layer 122 is preferably 2,000 pa·s or less and more preferably 1,000 pa·s or less. appropriate adjustment of the viscosity of the solution of the polymer resin facilitates injection of the solution into the needle-like recessed portions of the mold.
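the preferred concentration and viscosity ranges above can be expressed as a small check, sketched below in python. the helper name and the 'drug'/'polymer' labels are hypothetical, and only the preferred (not the more preferred) limits are encoded.

```python
def solution_in_range(kind, polymer_mass_percent, viscosity_pa_s):
    """check the preferred ranges for the two polymer solutions.

    kind: 'drug' for the drug layer 120 solution,
          'polymer' for the polymer layer 122 forming solution.
    """
    if kind == "drug":
        # polymer concentration 0% to 30% by mass, viscosity 100 pa.s or less
        return 0 <= polymer_mass_percent <= 30 and viscosity_pa_s <= 100
    if kind == "polymer":
        # polymer concentration 10% to 50% by mass, viscosity 2,000 pa.s or less
        return 10 <= polymer_mass_percent <= 50 and viscosity_pa_s <= 2000
    raise ValueError("kind must be 'drug' or 'polymer'")
```

for example, a drug-containing solution at 10% by mass and 50 pa·s falls within the preferred ranges, while a polymer layer forming solution at 60% by mass does not.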
for example, the viscosity of the solution of the polymer resin can be measured with a capillary type viscometer, a falling ball type viscometer, a rotational type viscometer, or an oscillatory type viscometer. (drug) the drug that the polymer solution contains is not particularly limited as long as the drug is a substance having a function as a drug. particularly, the drug is preferably selected from peptide, protein, nucleic acid, polysaccharide, a vaccine, a medical compound, and a cosmetic component. in addition, it is preferable that the medical compound belongs to a water-soluble low-molecular-weight compound. (method of producing transdermal absorption sheet) the method of producing the transdermal absorption sheet of the embodiment includes at least five steps of a drug solution filling step, a drug solution drying step, a polymer layer forming solution supply step, a polymer layer forming solution drying step, and a peeling-off step in this order as shown in fig. 12 . (drug solution filling step) the method of producing the transdermal absorption sheet using the mold 13 will be described. as shown in fig. 13a , the mold 13 with the two-dimensionally arranged needle-like recessed portions 15 is placed on a base 20 . two sets of a plurality of needle-like recessed portions 15 , each set including 5×5 two-dimensionally arranged needle-like recessed portions 15 , are formed in the mold 13 . a liquid supply apparatus 36 which has a liquid feed tank 30 storing a drug solution 22 that is a polymer solution containing a predetermined amount of a drug, a pipe 32 connected to the liquid feed tank 30 , and a nozzle 34 connected to a tip end of the pipe 32 is prepared. the drug solution 22 is discharged from the tip end of the nozzle 34 . fig. 14 shows a schematic perspective view of the tip end portion of the nozzle. as shown in fig.
14 , the tip end of the nozzle 34 includes a lip portion 34 a that has a flat surface on the tip end side, a slit-shaped opening portion 34 b, and two inclined surfaces 34 c that are widened along the lip portion 34 a in a direction away from the opening portion 34 b. the slit-shaped opening portion 34 b, for example, allows a plurality of needle-like recessed portions 15 constituting one column to be simultaneously filled with the drug solution 22 . the size (length and width) of the opening portion 34 b is appropriately selected in accordance with the number of needle-like recessed portions 15 to be filled at a time. an increased length of the opening portion 34 b makes it possible to fill an increased number of needle-like recessed portions 15 with the drug solution 22 at a time. thus, productivity can be improved. fig. 15 shows a schematic perspective view of a tip end portion of another nozzle. as shown in fig. 15 , the nozzle 34 has a lip portion 34 a having a flat surface on the tip end side, two slit-shaped opening portions 34 b, and two inclined surfaces 34 c that are widened along the lip portion 34 a in a direction away from the opening portion 34 b. the two opening portions 34 b, for example, allow a plurality of needle-like recessed portions 15 constituting two columns to be simultaneously filled with the drug solution 22 containing a drug. as the material used for the nozzle 34 , an elastic raw material and a metallic raw material may be used. for example, teflon (registered trademark), stainless steel (sus), or titanium may be used. the filling step will be described with reference to fig. 13b . as shown in fig. 13b , the position of the opening portion 34 b in the nozzle 34 is adjusted on the needle-like recessed portions 15 . the lip portion 34 a of the nozzle 34 is in contact with the surface of the mold 13 since the nozzle 34 that discharges the drug solution 22 is pressed against the mold 13 . 
the drug solution 22 is supplied from the liquid supply apparatus 36 to the mold 13 , and the needle-like recessed portions 15 are filled with the drug solution 22 through the opening portion 34 b in the nozzle 34 . in the embodiment, the plurality of needle-like recessed portions 15 constituting one column are simultaneously filled with the drug solution 22 . however, the present invention is not limited to this configuration. the needle-like recessed portions 15 may be filled with the drug solution 22 one by one. in addition, by using the nozzle 34 shown in fig. 15 , the plurality of needle-like recessed portions 15 constituting the plurality of columns can be simultaneously filled with the drug solution 22 so that filling is performed on the plurality of columns at a time. in the case in which the mold 13 is formed of a raw material having gas permeability, the drug solution 22 can be drawn in by suction from the back surface of the mold 13 , thereby promoting filling of the inside of the needle-like recessed portions 15 with the drug solution 22 . as shown in fig. 13c , while bringing the lip portion 34 a of the nozzle 34 into contact with the surface of the mold 13 , the liquid supply apparatus 36 is relatively scanned in a direction perpendicular to a length direction of the opening portion 34 b subsequent to the filling step in fig. 13b . by scanning the surface of the mold 13 by the nozzle 34 , the nozzle 34 is moved to the needle-like recessed portion 15 not filled with the drug solution 22 . the position of the opening portion 34 b of the nozzle 34 is adjusted on the needle-like recessed portions 15 . the embodiment has been described with reference to the example in which the nozzle 34 is scanned. however, the mold 13 may be scanned.
since the nozzle 34 is scanned on the surface of the mold 13 while the lip portion 34 a of the nozzle 34 is brought into contact with the surface of the mold 13 , the nozzle 34 can scrape off the drug solution 22 remaining on the surface of the mold 13 excluding the needle-like recessed portions 15 . this enables the drug solution 22 containing a drug to be prevented from remaining on the surface of the mold 13 excluding the needle-like recessed portions 15 . in the embodiment, the inclined surfaces 34 c of the nozzle 34 are arranged at a position perpendicular to the scanning direction indicated by the arrow. accordingly, the nozzle 34 can be smoothly scanned on the surface of the mold 13 . in order to reduce damage to the mold 13 and to suppress deformation of the mold 13 due to compression as much as possible, the degree of pressurization of the nozzle 34 against the mold 13 in the case of scanning is preferably controlled. for example, the pressing force with which the nozzle 34 is pressed against the mold 13 or the pressing distance of the nozzle 34 against the mold 13 is preferably controlled. furthermore, in order to prevent the drug solution 22 from remaining on the mold 13 excluding the needle-like recessed portions 15 , at least one of the mold 13 or the nozzle 34 is desirably formed of a flexible, elastically deformable raw material. the filling step shown in fig. 13b and the scanning step shown in fig. 13c are repeated to fill the 5×5 two-dimensionally arranged needle-like recessed portions 15 with the drug solution 22 . in the case in which the 5×5 two-dimensionally arranged needle-like recessed portions 15 are filled with the drug solution 22 , the liquid supply apparatus 36 is moved to the adjacent 5×5 two-dimensionally arranged needle-like recessed portions 15 , and the filling step in fig. 13b and the scanning step in fig. 13c are repeated.
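since the discharged amount of the drug solution 22 is preferably matched to the total volume of the needle-like recessed portions 15 to be filled, a volume estimate is useful. the sketch below models each recess, as an assumption, as a square inlet frustum (the inlet portion 15 a) topped by a pyramidal tip recess (the tip end recessed portion 15 b); the function names and the square cross-section are illustrative, not taken from the description.

```python
def frustum_volume(w_lower, w_upper, height):
    """square frustum: v = h / 3 * (a1 + a2 + sqrt(a1 * a2))."""
    a1, a2 = w_lower ** 2, w_upper ** 2
    return height / 3.0 * (a1 + a2 + (a1 * a2) ** 0.5)

def recess_volume(w1, w2, h_frustum, h_tip):
    """inlet frustum (widths w1 -> w2) plus a pyramidal tip recess of base w2."""
    tip = h_tip / 3.0 * w2 ** 2   # pyramid: v = h / 3 * base area
    return frustum_volume(w1, w2, h_frustum) + tip

def total_discharge_volume(n_recesses, w1, w2, h_frustum, h_tip):
    """target discharge amount for filling n_recesses recesses (same units cubed)."""
    return n_recesses * recess_volume(w1, w2, h_frustum, h_tip)
```

for a 5×5 block, n_recesses would be 25; dimensions entered in μm give a volume in μm³.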
the adjacent 5×5 two-dimensionally arranged needle-like recessed portions 15 are also filled with the drug solution 22 . the above filling step and scanning step may be in (1) a form in which the needle-like recessed portions 15 are filled with the drug solution 22 while the nozzle 34 is being scanned or (2) a form in which, while the nozzle 34 is being scanned, the nozzle 34 is temporarily stopped above the needle-like recessed portions 15 to fill the needle-like recessed portions 15 with the drug solution 22 , and the nozzle 34 is scanned again after the filling. between the filling step and the scanning step, the lip portion 34 a of the nozzle 34 is pressed against the surface of the mold 13 . the amount of the drug solution 22 discharged from the liquid supply apparatus 36 is preferably equal to the total volume of the plurality of needle-like recessed portions 15 of the mold 13 to be filled. the drug solution 22 is prevented from remaining on the surface of the mold 13 excluding the needle-like recessed portions 15 , and thus waste of the drug can be reduced. fig. 16 is a partially enlarged view of the tip end of the nozzle 34 and the mold 13 during filling of the needle-like recessed portions 15 with the drug solution 22 . as shown in fig. 16 , filling of the inside of the needle-like recessed portions 15 with the drug solution 22 can be promoted by applying a pressuring force p 1 into the nozzle 34 . moreover, in the case in which the needle-like recessed portions 15 are filled with the drug solution 22 , a pressing force p 2 with which the nozzle 34 is brought into contact with the surface of the mold 13 is preferably set to be equal to or greater than the pressuring force p 1 in the nozzle 34 . setting the pressing force p 2 ≥the pressuring force p 1 enables the drug solution 22 to be restrained from leaking from the needle-like recessed portions 15 to the surface of the mold 13 . fig.
17 is a partially enlarged view of the tip end of the nozzle 34 and the mold 13 during movement of the nozzle 34 . in the case in which the nozzle 34 is scanned relative to the mold 13 , a pressing force p 3 with which the nozzle 34 is brought into contact with the surface of the mold 13 is preferably set to be smaller than the pressing force p 2 with which the nozzle 34 is brought into contact with the surface of the mold 13 while filling is performed. this is intended to reduce damage to the mold 13 and to suppress deformation of the mold 13 associated with compression. it is preferable that the lip portion 34 a of the nozzle 34 is parallel to the surface of the mold 13 . the posture of the nozzle 34 may be controlled by providing a joint driving mechanism at a mounting portion of the nozzle 34 . the pressing force and/or the pressing distance of the nozzle 34 to the mold 13 is/are preferably controlled by driving the nozzle 34 in a z-axis direction in accordance with the surface shape of the mold 13 . fig. 18 is a schematic configuration diagram of a drug solution filling apparatus 48 capable of controlling the pressing force and/or the pressing distance. the drug solution filling apparatus 48 has a liquid supply apparatus 36 that has a liquid feed tank 30 storing a drug solution and a nozzle 34 mounted on the liquid feed tank 30 , a z-axis driving unit 50 that drives the liquid feed tank 30 and the nozzle 34 in the z-axis direction, a suction base 52 for placing the mold 13 thereon, an x-axis driving unit 54 that drives the suction base 52 in an x-axis direction, a stand 56 that supports the apparatus, and a control system 58 . the case of controlling a pressing force to be constant will be described. the z-axis driving unit 50 brings the nozzle 34 close to the mold 13 up to z-axis coordinates in which a desired pressing force is obtained.
while the nozzle 34 brought into contact with the mold 13 is scanned by the x-axis driving unit 54 , the drug solution 22 is discharged while z-axis coordinate control is performed such that the pressing force becomes constant. the contact pressure measuring method is not particularly limited, but for example, various load cells can be used, for example, under the suction base 52 or in place of the suction base 52 . the load cell means a measuring instrument capable of measuring a force for compression in a thickness direction. the pressing force is an arbitrary pressure within a range of 1 to 1,000 kpa with respect to the mold 13 , and is preferably controlled to be constant. the case of controlling a pressing distance to be constant will be described. before contact with the nozzle 34 , the surface shape of the mold 13 is measured in advance. while the nozzle 34 brought into contact with the mold 13 is scanned by the x-axis driving unit 54 , the value obtained by performing z-axis coordinate offset such that a desired pressing distance is provided with respect to the surface shape of the mold 13 is fed back to the z-axis driving unit 50 by the control system 58 . the drug solution 22 is discharged while feeding back the value. the shape measuring method is not particularly limited. for example, an optical measuring instrument such as a non-contact-type laser displacement meter 60 or a contact-type probe-type step profiler can be used. furthermore, the posture of the nozzle 34 in a slit direction may be controlled in accordance with the surface shape of the mold 13 . the pressing distance is preferably controlled within a range of 1% to 15% with respect to the thickness of the mold 13 . 
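the constant-pressing-force case above amounts to a feedback loop: the drug solution is discharged while the z-axis coordinate is corrected so that the load-cell reading stays at the set point. the following python sketch of one proportional correction step is an assumption about how such control might behave, not a description of the actual control system 58 .

```python
def z_feedback_step(z, measured_force_kpa, target_force_kpa, gain=0.001):
    """one proportional z-axis correction from a load-cell reading.

    assumes (illustratively) that a smaller z presses the nozzle harder
    against the mold, so a positive error (too little force) lowers z.
    """
    error = target_force_kpa - measured_force_kpa
    return z - gain * error

def scan_with_constant_force(force_readings, z0=10.0, target=100.0):
    """simulate corrections over one scan; returns the z coordinate trace."""
    z = z0
    trace = []
    for f in force_readings:
        z = z_feedback_step(z, f, target)
        trace.append(round(z, 6))
    return trace
```

with readings already at the 100 kpa set point the z coordinate is left unchanged; a reading of 90 kpa lowers the nozzle slightly to restore the force.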
through the operation with the control of the distance between the nozzle 34 and the mold 13 in the z-axis direction by the z-axis driving unit 50 in accordance with the shape of the mold 13 , the compression deformation rate becomes uniform, and thus the accuracy of the filling amount can be improved. regarding the control of the pressing force and the pressing distance, the pressing force is preferably controlled in the case in which the pressing distance is small, and the pressing distance is preferably directly controlled in the case in which the pressing distance is large. fig. 19 is an illustration showing the relationship between the liquid pressure in the nozzle and the supply of the drug-containing solution. as illustrated in fig. 19 , the supply of the drug solution 22 is started before the nozzle 34 is positioned above the needle-like recessed portions 15 . the reason for this is to securely fill the needle-like recessed portions 15 with the drug solution 22 . until the filling of the plurality of needle-like recessed portions 15 of 5×5 is completed, the drug solution 22 is continuously supplied to the mold 13 . the supply of the drug solution 22 to the mold 13 is stopped before the nozzle 34 is positioned above needle-like recessed portions 15 in the fifth column. therefore, it is possible to prevent the drug solution 22 from overflowing from the needle-like recessed portions 15 . the liquid pressure in the nozzle 34 increases in a region where the nozzle 34 is not positioned above the needle-like recessed portions 15 in the case in which the supply of the drug solution 22 is started. meanwhile, in the case in which the nozzle 34 is positioned above the needle-like recessed portions 15 , the needle-like recessed portions 15 are filled with the drug solution 22 , and the liquid pressure in the nozzle 34 decreases. such a change in the liquid pressure is repeated. 
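the supply timing described for fig. 19 can be reduced to a simple rule: the supply is on from before the nozzle reaches the first column of a 5×5 block, and is switched off before the nozzle is positioned above the fifth column and while the nozzle moves between blocks. the function below is an illustrative model only; the 1-based column index and the exact cutoff are assumptions consistent with that description.

```python
def supply_on(column_in_block, moving_between_blocks, n_columns=5):
    """whether the drug solution supply should be on.

    column_in_block: 1-based index of the column the nozzle is approaching.
    moving_between_blocks: true while scanning to the next 5x5 block.
    """
    if moving_between_blocks:
        # stopping the supply here prevents an excessive rise in liquid
        # pressure and overflow onto the mold surface between blocks
        return False
    # the supply is stopped before the nozzle reaches the last column
    return column_in_block < n_columns
```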
in the case in which the filling of the plurality of needle-like recessed portions 15 of 5×5 is completed, the nozzle 34 is moved to a plurality of adjacent needle-like recessed portions 15 of 5×5. regarding the liquid supply, the supply of the drug solution 22 is preferably stopped in the case in which the nozzle is moved to the plurality of adjacent needle-like recessed portions 15 of 5×5. there is a distance between the needle-like recessed portions 15 in the fifth column and the needle-like recessed portions 15 in the next first column. in the case in which the drug solution 22 is continuously supplied therebetween during the scanning of the nozzle 34 , the liquid pressure in the nozzle 34 may excessively increase. as a result, the drug solution 22 may flow to a region of the mold 13 excluding the needle-like recessed portions 15 from the nozzle 34 . in order to suppress this problem, the supply of the drug solution 22 is preferably stopped. the tip end of the nozzle 34 is preferably used after being cleaned in the case of performing filling with the drug solution 22 . this is because the accuracy of the filling amount of the drug solution 22 is reduced in a case in which a substance adheres to the surface of the lip portion 34 a of the nozzle 34 before filling. in general, wiping using non-woven cloth is performed for cleaning. during wiping, the cleaning can be effectively performed in the case in which non-woven cloth is permeated with water, a solvent, or the like. after filling with the drug solution 22 , there is a possibility that the drug solution 22 may remain on the surface of the mold 13 in the case in which the nozzle 34 is separated from the mold 13 . 
by performing suck back control for suction of the drug solution 22 from the opening portion 34 b of the nozzle 34 after completion of the filling of the needle-like recessed portions 15 , an excessive amount of the drug solution 22 discharged can be sucked, and the liquid remaining on the surface of the mold 13 can thus be reduced. in the drug solution filling step, the drug solution can be sucked from the through-hole 15 c side using the mold complex 18 shown in fig. 11 to fill the needle-like recessed portions 15 with the drug solution 22 . in the case in which the filling of the needle-like recessed portions 15 with the drug solution 22 is completed, the process proceeds to the drug solution drying step, the polymer layer forming solution supply step, the polymer layer forming solution drying step, and the peeling-off step. as shown in fig. 20a , the needle-like recessed portions 15 of the mold 13 are filled with the drug solution 22 from the nozzle 34 in the drug solution filling step. the drug solution filling step is performed using the above-described method. (drug solution drying step) as illustrated in fig. 20b , in the drug solution drying step, the drug solution 22 is dried and solidified, and thus a drug layer 120 containing a drug is formed in the needle-like recessed portions 15 . the drug solution drying step is a step of drying the drug solution 22 filling the needle-like recessed portions 15 of the mold 13 and localizing the drug solution at the tip ends of the needle-like recessed portions 15 . the drug solution drying step is preferably performed in an environment at a temperature of 1° c. or higher and 10° c. or lower. by performing the drug solution drying step in the above range, the occurrence of an air bubble defect can be reduced. in addition, by controlling the temperature and humidity conditions for the drug solution drying step, the drying rate can be optimized.
thus, fixation of the drug solution 22 to the wall surface of the needle-like recessed portions 15 of the mold 13 can be reduced and solidification is performed while collecting the drug solution 22 at the tip ends of the needle-like recessed portions 15 by drying. the drug solution 22 is preferably dried in a windless state in the drug solution drying step. uneven drying occurs in the case in which the drug solution 22 is directly exposed to non-uniform wind. this is because, in a portion exposed to strong wind, the drying rate may be increased, the drug solution 22 may be fixed to the wall surface of the mold 13 , and thus the localization of the drug solution 22 at the tip ends of the needle-like recessed portions 15 may be disturbed. in order to realize the drying in a windless state, for example, a windshield is preferably installed. the windshield is installed so as not to directly expose the mold 13 to wind. as the windshield, a physical obstacle such as a lid, a hood, a screen, a fence, or the like is preferably installed since this is a simple method. in addition, in the case in which the windshield is installed, a vent hole or the like is preferably secured such that the installation space for the mold 13 is not in a sealed state. in the case in which the installation space is in a sealed state, water vapor in the sealed space may be saturated, and the drying of the drug solution 22 may not proceed. the vent hole is preferably formed such that the passage of vapor is possible, and is more preferably covered with a water vapor permeable film or the like to stabilize the air flow in the windshield. the drying time is appropriately adjusted in consideration of the shape of the needle-like recessed portion 15 , the arrangement of the needle-like recessed portions 15 , and the number of the needle-like recessed portions 15 , the kind of the drug, the filling amount and the concentration of the drug solution 22 , and the like. 
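the environmental conditions stated for the drug solution drying step (a temperature of 1° c. or higher and 10° c. or lower, and a windless state of 0.5 m/s or less) can be gathered into one check. the helper below is a hypothetical sketch, not part of the described apparatus.

```python
def drying_environment_ok(temp_c, wind_speed_m_s):
    """check the preferred drying environment for the drug solution 22."""
    temp_ok = 1.0 <= temp_c <= 10.0     # range that reduces air bubble defects
    wind_ok = wind_speed_m_s <= 0.5     # windless state: avoids uneven drying
    return temp_ok and wind_ok
```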
the windless state refers to the case in which the wind speed is 0.5 m/s or less, including a state in which there is no wind at all. the reason for setting the wind speed to be in this range is that uneven drying rarely occurs. in the drug solution drying step, the drug solution 22 is solidified by being dried, and its volume is reduced compared with that at the time of filling with the drug solution 22 . accordingly, in the peeling-off step, the drug layer 120 can be easily peeled off from the needle-like recessed portions 15 of the mold 13 .

(polymer layer forming solution supply step)

next, as shown in fig. 20c , the polymer layer forming solution 24 , which is a polymer solution for forming the polymer layer 122 , is supplied to the drug layer 120 containing a predetermined amount of a drug to fill the needle-like recessed portions 15 with the polymer layer forming solution 24 . for the supply of the polymer layer forming solution, coating using a dispenser, bar coating, spin coating, coating using a spray, or the like can be applied, but the method for the supply of the solution is not limited thereto. hereinafter, an embodiment in which the polymer layer forming solution 24 is supplied to the mold 13 by coating will be described. since the drug layer 120 containing a drug is solidified by being dried, diffusion of the drug contained in the drug layer 120 into the polymer layer forming solution 24 can be suppressed.

first embodiment

figs. 21a and 21b are views illustrating the polymer layer forming solution supply step. in the polymer layer forming solution supply step according to the first embodiment, the frame 14 is installed in the periphery of a region 16 in which the needle-like recessed portions 15 are formed, and the polymer layer forming solution 24 is applied to the mold 13 having a step portion higher than the region 16 in which the needle-like recessed portions 15 are formed. the frame 14 can also be installed separately from the mold 13 .
in the polymer layer forming solution supply step, as shown in fig. 21a , the polymer layer forming solution 24 is applied using coating means 92 at a height equal to or higher than the step portion formed by the frame 14 installed in the periphery of the needle-like recessed portions 15 , over a range equal to or wider than the step portion as seen from above. the application of the polymer layer forming solution at a height equal to or higher than the height of the frame 14 means that the height of the polymer layer forming solution 24 in a part in which the polymer layer forming solution 24 and the frame 14 are in contact with each other is equal to or higher than the height of the frame 14 . in order to easily peel off the produced transdermal absorption sheet, the frame 14 is formed of a material that easily repels the polymer layer forming solution, and after the application of the polymer layer forming solution 24 , the polymer layer forming solution 24 is repelled by the frame 14 to be reduced by the surface tension. as shown in fig. 21b , the contact position of the reduced polymer layer forming solution 24 and the mold 13 is fixed to the step portion of the frame 14 . in a state in which the position is fixed by the frame 14 , the shape of the polymer layer 122 of the transdermal absorption sheet (the shape of the sheet portion 116 ) can be stably formed by drying the polymer layer forming solution. figs. 22a and 22b are illustrations showing an unpreferable example of the polymer layer forming solution supply step. as shown in fig. 22a , the polymer layer forming solution 24 is applied inside the frame 14 at a height lower than the height of the frame 14 . after the solution is applied, as shown in fig. 22b , the polymer layer forming solution 24 is reduced by the surface tension in the region 16 in which the needle-like recessed portions 15 are formed. fig. 22b shows a state after the polymer layer forming solution supply step is performed, and since the volume of the polymer layer forming solution 24 is further reduced by the polymer layer forming solution drying step, a part in which the sheet portion 116 of the transdermal absorption sheet is unstably formed is generated. as the material for the frame 14 provided on the mold 13 , the same material as the material of the mold can be used. in addition, as the wettability with the polymer layer forming solution becomes higher, the liquid level at the time of drying can be made uniform and can be set to be in a gentle state. thus, a local change in the liquid surface shape of the polymer layer forming solution can be prevented. in the present invention, since the polymer layer forming solution is repelled by the frame 14 and is fixed to the step portion by reduction, the material for forming the frame is required to have water repellency with respect to the polymer layer forming solution. accordingly, a material for forming the frame of which the water repellency and wettability with respect to the polymer layer forming solution are controlled is preferably used, and the contact angle between the frame and the polymer layer forming solution is preferably greater than 90° and close to 90°. the shape immediately after application of the polymer layer forming solution can be stabilized by using a material having good wettability with the polymer layer forming solution as the material for the frame, and entrainment of bubbles can be prevented. in addition, a liquid level that is robust and stable against disturbances such as wind or temperature unevenness at the time of drying can be fixed.
on the other hand, in the case in which the wettability between the material for the frame and the polymer layer forming solution is poor, the liquid level of the coating liquid has a high curvature, and a significant difference in surface tension is generated even with slight surface unevenness, so that the shape of the coating liquid deteriorates and the coating liquid is repelled from the frame. thus, this case is not preferable. in order to improve wettability, it is effective to make the raw material for the frame hydrophilic or to add a raw material having a surface active performance, such as protein, to the polymer layer forming solution. further, the frame 14 may be installed from the drug solution filling step or may be installed before the polymer layer forming solution supply step. the height of the frame 14 is preferably 10 μm or more and 5,000 μm or less. in order to fix the polymer layer forming solution 24 to the step portion, it is necessary to apply the polymer layer forming solution 24 at a height equal to or higher than the height of the frame 14 over a range equal to or wider than the frame 14 . thus, setting the height of the frame 14 within this range allows a reduction in the amount of the polymer layer forming solution to be used. accordingly, the drying time can be shortened. in the case in which the height of the frame 14 is lower than 10 μm, the polymer layer forming solution 24 is not fixed in the frame 14 and the mold 13 repels the polymer layer forming solution, so that, for example, the sheet portion 116 is not formed. thus, there are cases in which a transdermal absorption sheet cannot be produced. in the polymer layer forming solution supply step, the thickness of the polymer layer forming solution at the time of coating, measured from the region 16 in which the needle-like recessed portions 15 are formed in the mold 13 , is preferably equal to or higher than the height of the frame 14 and 5,000 μm or less.
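the dimensional guidance above (a frame height of 10 μm to 5,000 μm, and a coating thickness at least the frame height and at most 5,000 μm) can be summarized in a small check. the following sketch is only an illustrative aid, not part of the described method; the function name and the reported messages are invented for illustration.

```python
def check_supply_parameters(frame_height_um, coating_thickness_um):
    """Check frame height and coating thickness against the preferred
    ranges stated above (illustrative sketch, not part of the method)."""
    problems = []
    # the height of the frame is preferably 10 um or more and 5,000 um or less
    if not 10 <= frame_height_um <= 5000:
        problems.append("frame height outside the preferred 10-5,000 um range")
    # the coating must be at least as high as the frame so that the
    # solution can be fixed to the step portion
    if coating_thickness_um < frame_height_um:
        problems.append("coating thinner than the frame: solution is not fixed")
    # a coating thicker than 5,000 um greatly lengthens the drying time
    if coating_thickness_um > 5000:
        problems.append("coating thicker than 5,000 um: long drying time")
    return problems

print(check_supply_parameters(100, 200))  # acceptable combination -> []
print(check_supply_parameters(100, 50))   # too thin: one problem reported
```

a combination that passes all three checks corresponds to the conditions under which, in the examples below, the liquid level is fixed to the step portion.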
in order to fix the polymer layer forming solution to the position of the frame 14 , it is necessary to set the thickness of the polymer layer forming solution to be equal to or higher than the height of the frame 14 . in the case in which the coating thickness of the polymer layer forming solution after the solution is applied is more than 5,000 μm, it takes some time for drying. while the liquid level is made thin as a whole to realize a reduction in drying load and a reduction in production costs, a film thickness distribution in which the film thickness in the vicinity of the frame is made thick and the film thickness in the vicinity of the center is made thin may be formed to stably fix the polymer layer to the frame. as the method of applying the polymer layer forming solution 24 to the mold 13 , as shown in figs. 23a and 23b , the polymer layer forming solution may be applied to each needle-like recessed portion 15 surrounded by the frame 14 , or as shown in figs. 24a to 24c , the polymer layer forming solution may be applied by covering the entire surface of the mold 13 with the frame 14 . as shown in fig. 23a , in the case in which the polymer layer forming solution 24 is applied to each frame 14 , the polymer layer forming solution can be fixed in the frame 14 after the polymer layer forming solution is applied, as shown in fig. 23b . the polymer layer forming solution can be applied to each frame 14 by intermittent stripe coating using a slit coater, coating using a dispenser, ink jetting, letterpress printing, lithographic printing, screen printing, or the like. in addition, as shown in fig. 24a , in the case in which the polymer layer forming solution is applied to the entire surface of the mold 13 and the amount of the polymer layer forming solution is large, the polymer layer forming solution drying step is performed in a state in which the polymer layer forming solution 24 is uniformly applied, as shown in fig. 24b .
in this case, the polymer layer 122 can be stably formed in the next polymer layer forming solution drying step. in the case in which the amount of the polymer layer forming solution 24 is small, as shown in fig. 24c , the polymer layer forming solution 24 is reduced toward the inside of the frame 14 (the region in which the needle-like recessed portions 15 are formed) and is fixed in the frame 14 . thus, a transdermal absorption sheet with a stable shape can be produced. as the method of uniformly applying the polymer layer forming solution, general coating methods using a slit coater, a slide coater, a blade coater, a hard coater, a roll coater, a gravure coater, a dip coater, a spray coater, and the like can be used. figs. 25a to 27c are illustrations showing the reduction of the polymer layer forming solution according to the shape of the frame. fig. 25a is a view showing the case in which the polymer layer forming solution 24 is applied in a circular shape using a circular frame 14 . since the polymer layer forming solution 24 is isotropically reduced as shown in fig. 25b , as shown in fig. 25c , the polymer layer forming solution 24 applied in a range wider than the frame 14 can be fixed to the position of the step portion formed by the frame 14 . fig. 26a is a view in which the polymer layer forming solution 24 is applied in a quadrangular shape by using the circular frame 14 . even in the case in which the polymer layer forming solution 24 is applied in a quadrangular shape, as shown in fig. 26b , the polymer layer forming solution 24 is isotropically reduced toward the step portion of the circular frame 14 . as shown in fig. 26c , while a repellent residue is present on the frame 14 , the polymer layer forming solution 24 can be fixed to the position of the step portion formed by the frame 14 . fig. 27a is a view in which the polymer layer forming solution 24 is applied in a circular shape by using a quadrangular frame 14 . in the case in which the shape of the frame 14 is a quadrangular shape, isotropic reduction of the polymer layer forming solution 24 causes the polymer layer forming solution 24 to be fixed to the corner portions of the quadrangular frame 14 as shown in fig. 27b , and the polymer layer forming solution 24 may then be detached from the frame 14 . as shown in fig. 27c , the polymer layer forming solution 24 detached from the frame 14 is further reduced on the mold, and a polymer layer is not formed in the region in which the needle-like recessed portions 15 are formed. thus, a transdermal absorption sheet may not be stably formed. as shown in figs. 24a to 24c , in the case in which the polymer layer forming solution is uniformly applied to the mold, the polymer layer forming solution supply step can be performed without detachment of the polymer layer forming solution from the inside of the frame even in the case of the frame 14 having a quadrangular shape. in order to fix the polymer layer forming solution to the step portion formed by the frame, the shape of the periphery of the region in which the needle-like recessed portions are formed, which is formed by providing the frame, is preferably a hexagonal or higher polygonal shape in which all corners are formed at an angle of 120° or greater as seen from above, and is more preferably a regular hexagonal or higher polygonal shape, or a circular shape. by forming the step portion of the frame in the above shape, in the case of applying the polymer layer forming solution, the contractile force that the surface tension of the polymer layer forming solution exerts on the step portion installed on the mold can be made uniform.
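for a regular polygon, the condition that all corners be 120° or greater is equivalent to the polygon being hexagonal or higher, since the interior angle of a regular n-gon is (n − 2) × 180°/n. the short sketch below is an illustrative check of this arithmetic only, not part of the described method.

```python
def interior_angle_deg(n):
    """Interior angle of a regular n-gon, in degrees."""
    return (n - 2) * 180.0 / n

# the square and the regular pentagon fall below 120 degrees; the regular
# hexagon is the first regular polygon that satisfies the condition
for n, name in [(4, "square"), (5, "pentagon"), (6, "hexagon"),
                (8, "octagon"), (12, "dodecagon")]:
    angle = interior_angle_deg(n)
    status = "meets" if angle >= 120.0 else "below"
    print(f"regular {name}: {angle:.0f} deg ({status} the 120-deg condition)")
```

the square (90°) and regular pentagon (108°) fail the condition while the hexagon (120°), octagon (135°), and dodecagon (150°) satisfy it, which matches the preference for hexagonal or higher polygonal shapes stated above.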
the expression “regular polygonal shape” preferably refers to a shape in which all sides forming the polygon are equal, but modification can be made within the range exhibiting the effect of the present invention. figs. 28a and 28b are illustrations showing a polymer layer forming solution supply step using a frame 17 having a tapered shape widening in a direction from the region in which the needle-like recessed portions are formed in the mold 13 to the upper side. even in the case of using the tapered frame 17 , the polymer layer forming solution 24 is applied over a range equal to or wider than the region of the upper side of the frame 17 ( fig. 28a ), and is then reduced by the surface tension. thus, the polymer layer forming solution 24 can be fixed to the step portion formed by the frame 17 ( fig. 28b ). in addition, by forming the frame 14 in a tapered shape widening in a direction toward the upper side in the vertical direction, an effect of defoaming bubbles mixed in the polymer layer forming solution 24 is exhibited. since the bubbles mixed in the polymer layer forming solution 24 are defoamed, defects of the needle-like protruding portions in the peeling-off step and damage of the needle-like protruding portions at the time of puncture can be prevented. as for the taper angle θ of the frame 17 , the angle formed between the frame and the mold 13 is preferably 45° or greater and 75° or less.

modification example

figs. 29a and 29b are views showing a modification example of the first embodiment. the mold 73 shown in figs. 29a and 29b is different from the mold of the first embodiment in that the step portion 74 is formed in the mold 73 itself. in the case in which the mold 73 has the step portion 74 , as described above, the polymer layer forming solution 24 can be fixed to the step portion 74 and a transdermal absorption sheet can be stably produced.
the height of the step portion 74 from the region 16 in which the needle-like recessed portions 15 are formed, the coating thickness of the polymer layer forming solution 24 , the shape of the periphery of the region in which the needle-like recessed portions are formed which is formed by the step portion as seen from above, and the like can be set as in the above embodiment.

second embodiment

figs. 30a and 30b are illustrations showing a polymer layer forming solution supply step according to a second embodiment of the present invention. the polymer layer forming solution supply step of the second embodiment is different from that of the first embodiment in that the step portion is provided in the periphery of the needle-like recessed portions 15 such that the step portion 84 of the mold 83 having the needle-like recessed portions 15 is set to be lower than the region 16 in which the needle-like recessed portions 15 are formed. in the second embodiment, as shown in fig. 30a , the step portion 84 is set to be lower than the region 16 of the mold 83 in which the needle-like recessed portions 15 are formed, and the polymer layer forming solution 24 is applied over a wide range equal to or wider than the region 16 in which the needle-like recessed portions 15 are formed, that is, up to the step portion 84 . after the polymer layer forming solution 24 is applied, the reduction of the polymer layer forming solution 24 by the surface tension starts, and as shown in fig. 30b , the polymer layer forming solution 24 is fixed to the boundary between the region 16 and the step portion 84 so as not to cause a further reduction. thus, a transdermal absorption sheet can be stably formed. at the time of application of the polymer layer forming solution, the thickness from the region in which the needle-like recessed portions 15 are formed is preferably 5,000 μm or less.
by setting the coating thickness of the polymer solution to 5,000 μm or less, the drying rate in the next polymer layer forming solution drying step can be improved. in the second embodiment, the polymer layer forming solution 24 is preferably applied to each region of the needle-like recessed portions 15 surrounded by the step portion 84 . fig. 31a is a view in which the polymer layer forming solution is uniformly applied to the entire surface of the mold 83 having the step portion 84 , which is lower than the region 16 in which the needle-like recessed portions 15 are formed, in the periphery of the needle-like recessed portions 15 . in the second embodiment, in the case in which the polymer layer forming solution is uniformly applied to the entire surface of the mold 83 , the polymer layer forming solution is reduced toward the step portion 84 , which is a recessed portion of the mold 83 . accordingly, in the case in which the polymer layer forming solution is uniformly applied to the entire surface of the mold 83 , as shown in fig. 31b , the polymer layer forming solution 24 is repelled from the region 16 in which the needle-like recessed portions 15 are formed, and a transdermal absorption sheet may not be stably formed. in the second embodiment, the shape formed by the step portion in the periphery of the region in which the needle-like recessed portions are formed can be the same shape as in the first embodiment.

(polymer layer forming solution drying step)

returning to figs. 20a to 20d , after the polymer layer forming solution supply step is performed, the polymer layer 122 is formed on the drug layer 120 by drying and solidifying the polymer layer forming solution 24 , as shown in fig. 20d . a polymer sheet 1 having the drug layer 120 and the polymer layer 122 is thus produced. in order to stably fix the polymer layer to the frame, quick drying of the polymer layer forming solution at the step portion is effective.
the polymer layer forming solution at the step portion is preferably dried and solidified using dry air blown perpendicular to the mold at a high temperature and a low humidity, with an increased wind speed. in the polymer layer forming solution drying step, the volume of the polymer layer forming solution 24 is reduced by drying. in the case in which the polymer layer forming solution 24 adheres to the mold 13 during drying, the reduction in volume occurs in the film thickness direction of the sheet, and thus the film thickness is reduced. in the case in which the polymer layer forming solution 24 is peeled off from the mold 13 during drying, the polymer sheet 1 shrinks in the plane direction, and thus the polymer sheet may be deformed or curled. in the case in which the polymer sheet 1 is peeled off from the mold 13 in a state in which the polymer layer forming solution 24 in the needle-like recessed portions 15 is not sufficiently dried, a defect in which the shape of the needle-like protruding portion of the polymer sheet 1 is broken or bent is easily generated. thus, it is preferable that the polymer sheet 1 not be peeled off from the mold 13 during drying. in addition, in order to suppress curling, a layer which shrinks to the same degree as the surface with the needle-like protruding portions may be formed on the back surface of the polymer sheet 1 (the surface opposite to the surface on which the needle-like protruding portions are formed). for example, a layer is formed on the back surface side by applying the same polymer solution as on the surface side, with a film thickness at which the effect of suppressing curling has been confirmed in advance.

(peeling-off step)

the method of peeling off the polymer sheet 1 from the mold 13 is not limited. it is desirable that the needle-like protruding portions not be bent or broken during peeling-off.
specifically, a sheet-like base material on which an adhesive layer having adhesive properties is formed is attached to the polymer sheet 1 , and then the base material can be peeled off so as to be turned over from an end portion. in addition, a method in which a sucker is installed on the back surface of the polymer sheet 1 and the polymer sheet is vertically lifted while being sucked by air can be applied. a transdermal absorption sheet 100 is produced by peeling off the polymer sheet 1 from the mold 13 .

(deaeration step)

the drug solution 22 and/or the polymer layer forming solution 24 is/are preferably subjected to deaeration before the drug solution filling step and/or before the polymer layer forming solution supply step. through deaeration, the air bubbles contained in the drug solution 22 and the polymer layer forming solution 24 can be removed before the filling of the needle-like recessed portions 15 of the mold 13 . for example, in the deaeration step, air bubbles having a diameter of 100 μm to several millimeters are removed. examples of the deaeration method include (1) a method of exposing the drug solution 22 to a reduced pressure environment for 1 to 15 minutes, (2) a method of subjecting a container storing the drug solution 22 to ultrasonic vibration for 5 to 10 minutes, (3) a method of applying ultrasonic waves while exposing the drug solution 22 to a reduced pressure environment, and (4) a method of substituting the dissolved gas with helium by sending helium gas into the drug solution 22 . any of the deaeration methods (1) to (4) can also be applied to the polymer layer forming solution 24 .

examples

hereinafter, the present invention will be described in more detail using examples of the present invention. the materials, amounts, ratios, treatment contents, treatment procedures, and the like shown in the following examples can be appropriately changed without departing from the gist of the present invention.
therefore, the scope of the present invention should not be interpreted in a limited manner based on the specific examples illustrated below.

(production of mold)

an original plate 11 was produced by forming, by grinding, protruding portions 12 each with a needle-like structure at a pitch l of 1,000 μm in a two-dimensional array with 10 columns and 10 rows on the surface of a smooth ni plate having one side of 40 mm. as shown in figs. 32a and 32b , each protruding portion 12 with a needle-like structure includes a truncated cone 12 a with a circular bottom surface having a diameter d 1 of 500 μm and a height h 1 of 150 μm, and a cone 12 b formed on the truncated cone 12 a and having a circular bottom surface having a diameter d 2 of 300 μm and a height h 2 of 500 μm. on the original plate 11 , a film with a thickness of 0.6 mm was formed using a silicone rubber (silastic (registered trademark) mdx4-4210, manufactured by dow corning corporation) as a material. the film was thermally cured in a state in which the conical tip end portions of the original plate 11 projected by 50 μm from the film surface, and then the cured film was peeled off. accordingly, an inverted article made of silicone rubber having through-holes with a diameter of about 30 μm was produced. the inverted article made of silicone rubber was trimmed so as to leave a planar portion with a side of 30 mm, in whose central portion needle-like recessed portions were formed, two-dimensionally arranged in 10 columns and 10 rows, and the obtained portion was used as a mold. the surface in which the needle-like recessed portions had wide opening portions served as the surface of the mold, and the surface having through-holes (air vent holes) having a diameter of 30 μm served as the back surface of the mold.

(preparation of polymer solution containing drug (drug solution))

hydroxyethyl starch (manufactured by fresenius kabi) was dissolved in water to prepare an aqueous solution of 8%.
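as an aside, the volume of each protruding portion of the original plate described above (a truncated cone with a 500 μm bottom diameter, 300 μm top diameter, and 150 μm height, topped by a cone with a 300 μm bottom diameter and 500 μm height) can be estimated with the standard frustum and cone formulas. the sketch below is our own back-of-the-envelope arithmetic, not a figure stated in the example.

```python
import math

def frustum_volume(d_bottom, d_top, h):
    """Volume of a truncated cone (frustum) from its two diameters and height."""
    rb, rt = d_bottom / 2.0, d_top / 2.0
    return math.pi * h / 3.0 * (rb * rb + rb * rt + rt * rt)

def cone_volume(d_bottom, h):
    """Volume of a cone from its bottom diameter and height."""
    r = d_bottom / 2.0
    return math.pi * r * r * h / 3.0

# dimensions in micrometres, taken from the original plate above
v_um3 = frustum_volume(500, 300, 150) + cone_volume(300, 500)
# 1 nL = 1e6 um^3
print(round(v_um3 / 1e6, 1), "nL per needle-like protruding portion")  # -> 31.0
```

the result, about 31 nl per protruding portion, gives a rough sense of the liquid volume each needle-like recessed portion of the mold can hold (the actual recess differs slightly because the 50 μm tip forms a through-hole).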
2% by mass of human serum albumin (manufactured by wako pure chemical industries, ltd.) was added to this aqueous solution as a drug to prepare a drug solution. after the solution was prepared, the solution was exposed to an environment of a reduced pressure of 3 kpa for 4 minutes and deaeration was performed.

(preparation of polymer solution (polymer layer forming solution))

chondroitin sulfate (manufactured by maruha nichiro corporation) was dissolved in water to prepare an aqueous solution of 40%. the prepared solution was used as a polymer layer forming solution. after the solution was prepared, the solution was exposed to an environment of a reduced pressure of 3 kpa for 4 minutes and sufficient deaeration was performed. hereinafter, the steps from the drug solution filling step to the polymer layer forming solution drying step were performed in an environment at a temperature of 5° c. and a relative humidity of 35% rh.

(drug solution filling step and drug solution drying step)

the drug solution filling apparatus is provided with a driving unit that has an x-axis driving unit and a z-axis driving unit controlling the relative position coordinates of the mold and the nozzle determined by an x axis and a z axis, a liquid supply apparatus (super small amount fixed-quantity dispenser smp-iii, manufactured by musashi engineering, inc.) on which the nozzle can be mounted, a suction base to which the mold is fixed, a laser displacement meter (hl-c201a, manufactured by panasonic corporation) that measures the surface shape of the mold, a load cell (lcx-a-500n, manufactured by kyowa electronic instruments co., ltd.) that measures the nozzle pressing pressure, and a control system that controls the z axis based on data of measured values of the surface shape and the pressing pressure. a gas permeable film (poreflon (registered trademark) fp-010, manufactured by sumitomo electric industries, ltd.)
having one side of 15 mm was placed on the flat suction base, and the mold was installed thereon such that the surface thereof was positioned on the upper side. the gas permeable film and the mold were fixed to the suction base by pressure reduction with a suction pressure of −90 kpa gauge pressure in the back surface direction of the mold. a sus (stainless steel) nozzle having the shape shown in fig. 14 was prepared, and a slit-shaped opening portion having a length of 12 mm and a width of 2 mm was formed at the center of a lip portion having a length of 20 mm and a width of 0.2 mm. this nozzle was connected to the drug solution tank. the drug solution tank and the nozzle were filled with 3 ml of the drug solution. the nozzle was adjusted such that the opening portion was parallel to the first column of the plurality of needle-like recessed portions formed in the surface of the mold. the nozzle was pressed against the mold at a pressure (pressing force) of 0.14 kgf/cm² (1.4 n/cm²) at a position apart from the first column with an interval of 2 mm therebetween in a direction opposite to the second column. while being pressed, the nozzle was moved at 1 mm/sec in a direction perpendicular to the length direction of the opening portion while the z axis was controlled such that the pressing force changed within ±0.05 kgf/cm² (±0.49 n/cm²). simultaneously, the drug solution was discharged from the opening portion for 10 seconds at 0.31 μl/sec by the liquid supply apparatus. the movement of the nozzle was stopped at a position apart from the tenth column of the plurality of needle-like recessed portions arranged two-dimensionally with an interval of 2 mm therebetween in a direction opposite to the ninth column, and the nozzle was separated from the mold. the mold filled with the drug solution was placed and dried in a windshield (25 cm³) with an opening portion having a diameter of 5 mm.
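for reference, the filling conditions above imply a few simple totals. the sketch below is our own back-of-the-envelope arithmetic (the per-needle figure assumes the whole discharge is shared evenly by the 10 × 10 array, which the example does not state explicitly), and the unit conversion uses 1 kgf = 9.80665 n.

```python
# back-of-the-envelope check of the filling conditions (illustrative only)
discharge_rate_ul_s = 0.31   # discharge rate of the drug solution [uL/s]
discharge_time_s = 10.0      # discharge duration while the nozzle scans [s]
needles = 10 * 10            # needle-like recessed portions, 10 columns x 10 rows

total_ul = discharge_rate_ul_s * discharge_time_s
per_needle_nl = total_ul / needles * 1000.0  # 1 uL = 1,000 nL
print(round(total_ul, 2), "uL discharged in total")        # -> 3.1
print(round(per_needle_nl, 1), "nL per recessed portion")  # -> 31.0

# pressing pressure: 0.14 kgf/cm^2 expressed in N/cm^2 (1 kgf = 9.80665 N),
# consistent with the rounded 1.4 N/cm^2 given in the text
print(round(0.14 * 9.80665, 2), "N/cm^2")                  # -> 1.37
```

the total of about 3.1 μl over 100 recessed portions corresponds to roughly 31 nl per needle, which is of the same order as the per-needle volume implied by the needle dimensions of the original plate.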
the windshield mentioned herein has a gas permeable film (poreflon (registered trademark) fp-010, manufactured by sumitomo electric industries, ltd.) mounted on the opening portion and is structured so as not to be directly exposed to wind.

(polymer layer forming solution supply step and polymer layer forming solution drying step)

on the mold filled with the drug solution, a frame made of stainless steel (sus304) was placed. here, while adjusting the discharge amount, the frame, and the clearance of the nozzle, the polymer layer forming solution was directly applied. then, after 12 hours had passed, whether or not the shape of the transdermal absorption sheet was maintained was confirmed. the evaluation was performed based on the following criteria.

<<liquid level fixation>>
a . . . the polymer layer is fixed to the step portion.
b . . . the polymer layer may not be fixed to the step portion in the case in which the procedure is repeated several times, but this is at a level not causing a problem.
c . . . the polymer layer is not fixed to the step portion, the polymer layer forming solution is repelled from the mold, and the shape of the transdermal absorption sheet is not stable.

in addition, for the samples of which the fixation of the liquid level was evaluated as “a”, peeling-off of the sheet was confirmed.

<<peeling-off>>
a . . . the sheet is peelable without any problem.
b . . . the sheet is not dried and is not peelable after 12 hours have passed from the start of drying, but is peelable after the drying proceeds and is completed.
c . . . the sheet is not peelable.

example 1

in the above method, coating was performed by using a circular frame made of sus at the time of the polymer layer forming solution supply step. the polymer layer forming solution was applied over a range larger than the frame by 1 mm in radius.
in the following table, the diameter of the frame refers to the diameter of the frame used in each test, and the height of the liquid level is the coating thickness of the polymer layer forming solution, that is, the height from the surface of the mold to the liquid level at the step portion formed by installing the frame. a test was performed using frames having diameters of 10, 20, and 30 mm, with the number and positions of the needle-like recessed portions kept the same. accordingly, the distance from the needle-like recessed portions to the frame can be changed by changing the diameter of the frame. in addition, the height of the liquid level with respect to the height of the frame was varied by changing the height of the frame in a range of 10 to 10,000 μm, and the polymer layer formed after the polymer layer forming solution had been applied was evaluated. the results are shown in table 1.

table 1

test no. | shape of mold | shape of step portion | diameter of frame [mm] | thickness of frame [μm] | height of liquid level [μm] | fixation of liquid level | peeling-off | example/comparative example
1 | none | none | 10, 20, 30 | not formed | 10 | c | — | comparative example
2 | none | none | 10, 20, 30 | not formed | 100 | c | — | comparative example
3 | none | none | 10, 20, 30 | not formed | 500 | c | — | comparative example
4 | none | none | 10, 20, 30 | not formed | 1,000 | c | — | comparative example
5 | none | none | 10, 20, 30 | not formed | 5,000 | c | — | comparative example
6 | none | none | 10, 20, 30 | not formed | 10,000 | c | — | comparative example
7 | recessed shape | circular shape | 10, 20, 30 | 10 | 10 | a | a | example
8 | recessed shape | circular shape | 10, 20, 30 | 10 | 100 | a | a | example
9 | recessed shape | circular shape | 10, 20, 30 | 10 | 500 | a | a | example
10 | recessed shape | circular shape | 10, 20, 30 | 10 | 1,000 | a | a | example
11 | recessed shape | circular shape | 10, 20, 30 | 10 | 5,000 | a | a | example
12 | recessed shape | circular shape | 10, 20, 30 | 10 | 10,000 | a | b | example
13 | recessed shape | circular shape | 10, 20, 30 | 100 | 10 | c | — | comparative example
14 | recessed shape | circular shape | 10, 20, 30 | 100 | 100 | a | a | example
15 | recessed shape | circular shape | 10, 20, 30 | 100 | 500 | a | a | example
16 | recessed shape | circular shape | 10, 20, 30 | 100 | 1,000 | a | a | example
17 | recessed shape | circular shape | 10, 20, 30 | 100 | 5,000 | a | a | example
18 | recessed shape | circular shape | 10, 20, 30 | 100 | 10,000 | a | b | example
19 | recessed shape | circular shape | 10, 20, 30 | 500 | 10 | c | — | comparative example
20 | recessed shape | circular shape | 10, 20, 30 | 500 | 100 | c | — | comparative example
21 | recessed shape | circular shape | 10, 20, 30 | 500 | 500 | a | a | example
22 | recessed shape | circular shape | 10, 20, 30 | 500 | 1,000 | a | a | example
23 | recessed shape | circular shape | 10, 20, 30 | 500 | 5,000 | a | a | example
24 | recessed shape | circular shape | 10, 20, 30 | 500 | 10,000 | a | b | example
25 | recessed shape | circular shape | 10, 20, 30 | 1,000 | 10 | c | — | comparative example
26 | recessed shape | circular shape | 10, 20, 30 | 1,000 | 100 | c | — | comparative example
27 | recessed shape | circular shape | 10, 20, 30 | 1,000 | 500 | c | — | comparative example
28 | recessed shape | circular shape | 10, 20, 30 | 1,000 | 1,000 | a | a | example
29 | recessed shape | circular shape | 10, 20, 30 | 1,000 | 5,000 | a | a | example
30 | recessed shape | circular shape | 10, 20, 30 | 1,000 | 10,000 | a | b | example
31 | recessed shape | circular shape | 10, 20, 30 | 5,000 | 10 | c | — | comparative example
32 | recessed shape | circular shape | 10, 20, 30 | 5,000 | 100 | c | — | comparative example
33 | recessed shape | circular shape | 10, 20, 30 | 5,000 | 500 | c | — | comparative example
34 | recessed shape | circular shape | 10, 20, 30 | 5,000 | 1,000 | c | — | comparative example
35 | recessed shape | circular shape | 10, 20, 30 | 5,000 | 5,000 | a | a | example
36 | recessed shape | circular shape | 10, 20, 30 | 5,000 | 10,000 | a | b | example
37 | recessed shape | circular shape | 10, 20, 30 | 10,000 | 10 | c | — | comparative example
38 | recessed shape | circular shape | 10, 20, 30 | 10,000 | 100 | c | — | comparative example
39 | recessed shape | circular shape | 10, 20, 30 | 10,000 | 500 | c | — | comparative example
40 | recessed shape | circular shape | 10, 20, 30 | 10,000 | 1,000 | c | — | comparative example
41 | recessed shape | circular shape | 10, 20, 30 | 10,000 | 5,000 | c | — | comparative example
42 | recessed shape | circular shape | 10, 20, 30 | 10,000 | 10,000 | a | b | example

in the cases in which the height from the mold to the liquid level was lower than the height of the frame, the liquid level could not be fixed to the step portion. in addition, in the examples in which the height of the liquid level was 10,000 μm, the liquid level of the polymer layer forming solution was fixed to the step portion; however, it took some time for drying, and the solution was not dried 12 hours after the start of drying. in the above table, the same results were obtained with the frames having diameters of 10 mm, 20 mm, and 30 mm, and thus the results are shown collectively.

example 2

the polymer layer forming solution was applied by using a mold having a step portion in which the region in which the needle-like recessed portions are formed was formed in a protruding shape and the periphery thereof was formed in a recessed shape, in contrast to example 1. the results are shown in table 2.

table 2

test no. | shape of mold | shape of step portion | diameter of frame [mm] | thickness of frame [μm] | height of liquid level [μm] | fixation of liquid level | peeling-off | example/comparative example
51 | protruding shape | circular shape | 10, 20, 30 | 10 | 10 | a | a | example
52 | protruding shape | circular shape | 10, 20, 30 | 10 | 100 | a | a | example
53 | protruding shape | circular shape | 10, 20, 30 | 10 | 500 | a | a | example
54 | protruding shape | circular shape | 10, 20, 30 | 10 | 1,000 | a | a | example
55 | protruding shape | circular shape | 10, 20, 30 | 10 | 5,000 | a | a | example
56 | protruding shape | circular shape | 10, 20, 30 | 10 | 10,000 | a | b | example

as shown in table 2, by setting the height of the liquid level to be equal to or higher than the thickness of the frame, the polymer layer could be fixed to the step portion. in addition, in the case in which the height of the liquid level was 10,000 μm, the polymer layer was fixed to the step portion as in example 1, but was not dried within 12 hours.
example 3

the shapes of the frames used were a square shape to a regular dodecagonal shape (the distance from the center to each apex was 10 mm) and a circular shape having a diameter of 20 mm, and the height of the liquid level was set to 100 μm. in addition, the step portion was formed such that the region in which the needle-like recessed portions of the mold were formed became a protruding portion, and molds in which the shapes of the step portions were a square shape to a regular dodecagonal shape (the distance from the center to each apex was 10 mm) and a circular shape having a diameter of 20 mm were used to perform a test. the polymer layer forming solution was applied to these molds and the fixation of the polymer layer to the step portion was confirmed. the results are shown in table 3.

table 3

test no. | shape of mold | shape of step portion | diameter of frame [mm] | thickness of frame [μm] | height of liquid level [μm] | fixation of liquid level | example/comparative example
61 | recessed shape | square shape | 20 | 100 | 200 | b | example
62 | recessed shape | regular pentagonal shape | 20 | 100 | 200 | b | example
63 | recessed shape | regular hexagonal shape | 20 | 100 | 200 | a | example
64 | recessed shape | regular octagonal shape | 20 | 100 | 200 | a | example
65 | recessed shape | regular dodecagonal shape | 20 | 100 | 200 | a | example
66 | recessed shape | circular shape | 20 | 100 | 200 | a | example
67 | protruding shape | square shape | 20 | 100 | 200 | b | example
68 | protruding shape | regular pentagonal shape | 20 | 100 | 200 | b | example
69 | protruding shape | regular hexagonal shape | 20 | 100 | 200 | a | example
70 | protruding shape | regular octagonal shape | 20 | 100 | 200 | a | example
71 | protruding shape | regular dodecagonal shape | 20 | 100 | 200 | a | example
72 | protruding shape | circular shape | 20 | 100 | 200 | a | example

both in the case of using the frame and in the case of using the mold having a step portion that forms the region of the needle-like recessed portions into a protruding portion, when the shape of the step portion was a square shape or a regular pentagonal shape the apex angle was relatively small; in these cases there were samples in which the polymer layer forming solution was repelled at the apex portions of the step portion and the polymer layer could not be fixed to the step portion.
example 4

the step portion was formed using the circular sus frame having a diameter of 20 mm and a thickness of 100 μm used in example 1, such that the region having the needle-like recessed portions was formed in a recessed shape. the height of the liquid level was set to 200 μm, and the amount of protrusion of the coating relative to the step portion (the step portion side being the outside and the needle-like recessed portion side being the inside, based on the position of the step portion) was changed to confirm the liquid level fixation of the polymer layer. the results are shown in table 4.

table 4

test no. | shape of mold | shape of step portion | amount of protrusion | diameter of frame [mm] | thickness of frame [μm] | height of liquid level [μm] | fixation of liquid level | example/comparative example
81 | recessed shape | circular shape | uniform coating | 20 | 100 | 200 | a | example
82 | recessed shape | circular shape | 5 mm outside | 20 | 100 | 200 | a | example
83 | recessed shape | circular shape | 1 mm outside | 20 | 100 | 200 | a | example
84 | recessed shape | circular shape | 0.1 mm outside | 20 | 100 | 200 | a | example
85 | recessed shape | circular shape | same position | 20 | 100 | 200 | a | example
86 | recessed shape | circular shape | 0.1 mm inside | 20 | 100 | 200 | c | comparative example
87 | recessed shape | circular shape | 1 mm inside | 20 | 100 | 200 | c | comparative example
88 | recessed shape | circular shape | 5 mm inside | 20 | 100 | 200 | c | comparative example

in test nos. 81 to 85, in which the polymer layer forming solution was applied at a position equal to the step portion or wider than the step portion, the polymer layer could be fixed to the step portion. in test nos. 86 to 88, in which the polymer layer forming solution was applied to the inside of the frame, the polymer layer could not be fixed to the step portion and was repelled in the region of the mold in which the needle-like recessed portions were formed. thus, a transdermal absorption sheet having a good shape could not be formed.
explanation of references

1: polymer sheet
11, 71, 81: original plate
12: protruding portion
13, 73, 83: mold
14, 17: frame
15: needle-like recessed portion
15a: inlet portion
15b: tip end recessed portion
15c: through-hole
16: region
18: mold complex
19: gas permeable sheet
20: base
22: drug solution
24: polymer layer forming solution
30: liquid feed tank
32: pipe
34: nozzle
34a: lip portion
34b: opening portion
34c: inclined surface
36: liquid supply apparatus
48: drug solution filling apparatus
50, 54: axis driving unit
52: suction base
56: stand
58: control system
60: displacement meter
74, 75, 84, 85: step portion
92: coating means
100: transdermal absorption sheet
110: needle-like protruding portion
112: needle portion
112a: needle-like portion
112b: body portion
114: frustum portion
116: sheet portion
120: drug layer
122: polymer layer
141-916-855-634-096
US
[ "JP", "EP", "CN", "US", "WO" ]
G05B13/04,G06Q10/06,G06F17/00
2016-07-28T00:00:00
2016
[ "G05", "G06" ]
mpc with unconstrained dependent variables for kpi performance analysis
a method (100) of key performance indicator (kpi) performance analysis is disclosed. a dynamic model predictive control (mpc) process model for an industrial process including measured variables (mvs) and controlled variables (cvs) for an mpc controller is provided (101). the mpc process model includes at least one kpi that is also included in a business kpi monitoring system for the industrial process. a future trajectory of the kpi and a steady-state (ss) value for the kpi are estimated (102). the future trajectory and ss value are used (103) for determining dynamic relationships between key plant operating variables selected from the cvs and mvs, and the kpi. a performance of the kpi is analyzed (104) including identifying at least one cause of a problem in the performance or exceeding the performance during operation of the industrial process from the dynamic relationships and a current value for at least a portion of the mvs.
1. a method of key performance indicator (kpi) performance analysis, comprising: providing a dynamic model predictive control (mpc) process model for an industrial process including a plurality of measured variables (mvs) and a plurality of controlled variables (cvs) for an mpc controller implemented by a processor having a memory storing said mpc process model, said mpc process model including at least one kpi that is also included in a business kpi monitoring system for said industrial process; estimating a future trajectory of said kpi and a steady-state (ss) value where said kpi will stabilize; using said future trajectory and said ss value, determining dynamic relationships between key plant operating variables selected from said plurality of cvs and said plurality of mvs, and said kpi, and analyzing a performance of said kpi including identifying at least one cause of a problem in said performance or exceeding said performance during operation of said industrial process from said dynamic relationships and a current value for at least a portion of said mvs.

2. the method of claim 1, wherein said kpi comprises a kpi unconstrained dependent variable (udv) having no upper or lower control limit.

3. the method of claim 2, wherein said kpi comprises a plurality of said kpi udvs.

4. the method of claim 1, wherein said identifying said cause of said problem comprises identifying which of said plurality of mvs are causing changes to said kpi.

5. the method of claim 1, further comprising providing results of said analyzing to said business kpi monitoring system, and a user of said business kpi monitoring system utilizing said results of said analyzing to troubleshoot said problem in said performance or said exceeding said performance.

6. the method of claim 1, further comprising updating said dynamic relationships based on said analyzing said performance.

7. the method of claim 1, further comprising using said future trajectory for automatic alerting and event detection.

8. the method of claim 1, wherein said mpc process model includes an optimizer, wherein said identifying said cause of said problem comprises identifying when said optimizer is causing said kpi to deviate from its target.

9. the method of claim 1, further comprising searching for at least one other variable in said industrial process that is not included in said mpc process model that impacts said kpi, and then adding said other variable to said mpc process model.

10. a model predictive control (mpc) controller, comprising: a processor having a memory storing at least one algorithm executed by said processor for implementing a dynamic mpc process model for an industrial process run in an industrial plant including a plurality of measured variables (mvs) and a plurality of controlled variables (cvs), said mpc process model including at least one kpi that is also included in a business kpi monitoring system for said industrial process, wherein said kpi comprises a kpi unconstrained dependent variable (udv) having no upper or lower control limit; said mpc process model: estimating a future trajectory of said kpi and a steady-state (ss) value where said kpi will stabilize, and using said future trajectory and said ss value, determining dynamic relationships between key plant operating variables selected from said plurality of cvs and said plurality of mvs, and said kpi.

11. the mpc controller of claim 10, wherein said mpc process model further provides analyzing a performance of said kpi including identifying at least one cause of a problem in said performance or exceeding said performance during operation of said industrial process from said dynamic relationships and a current value for at least a portion of said mvs.

12. the mpc controller of claim 10, wherein said kpi comprises a plurality of said kpi udvs.

13. the mpc controller of claim 10, wherein said mpc process model further provides using said future trajectory for automatic alerting and event detection.

14. the mpc controller of claim 10, wherein said mpc process model includes an optimizer, wherein said identifying said cause of a problem comprises identifying when said optimizer is causing said kpi to deviate from its target.

15. a model predictive control (mpc) controller, comprising: a processor having a memory storing at least one algorithm executed by said processor for implementing a dynamic mpc process model for an industrial process run in an industrial plant including a plurality of measured variables (mvs) and a plurality of controlled variables (cvs), said mpc process model including at least one kpi that is also included in a business kpi monitoring system for said industrial process, wherein said kpi comprises a kpi unconstrained dependent variable (udv) having no upper or lower control limit; said mpc process model: estimating a future trajectory of said kpi and a steady-state (ss) value where said kpi will stabilize; using said future trajectory and said ss value, determining dynamic relationships between key plant operating variables selected from said plurality of cvs and said plurality of mvs, and said kpi, and analyzing a performance of said kpi including identifying at least one cause of a problem in said performance or exceeding said performance during operation of said industrial process from said dynamic relationships and a current value for at least a portion of said mvs.
field

disclosed embodiments relate to model predictive control (mpc) including key performance indicators.

background

processing facilities which operate physical processes that process materials, such as manufacturing plants, chemical plants and oil refineries, are typically managed using process control systems. valves, pumps, motors, heating/cooling devices, and other industrial equipment typically perform actions needed to process the materials in the processing facilities. among other functions, the process control systems often manage the use of the industrial equipment in the processing facilities.

in conventional process control systems, controllers are often used to control the operation of the industrial equipment in the processing facilities. the controllers can monitor the operation of the industrial equipment, provide control signals to the industrial equipment, and/or generate alarms when malfunctions are detected.

process control systems typically include one or more process controllers and input/output (i/o) devices communicatively coupled to at least one workstation and to one or more field devices, such as through analog and/or digital buses. the field devices can include sensors (e.g., temperature, pressure and flow rate sensors), as well as other passive and/or active devices. the process controllers can receive process information, such as field measurements made by the field devices, in order to implement a control routine. control signals can then be generated and sent to the industrial equipment to control the operation of the process.

an industrial plant generally has a control room with displays for displaying process parameters such as key temperatures, pressures, fluid flow rates and flow levels, operating positions of key valves, pumps and other equipment, etc. operators in the control room can control various aspects of the plant operation, typically including overriding automatic control.
generally in a plant operation scenario, the operator desires operating conditions such that the plant always operates at its "optimal" operating point (i.e. where the profit associated with the process is at a maximum, which can correspond to the amount of product generated) and thus close to the alarm limits. based on changes in the feedstock composition for a chemical process, changing product requirements or economics, or other changes in constraints, the operating conditions may be changed to increase profit. however, there is an increased risk associated with operating the plant closer to the alarm limits due to variability in the process.

advanced process controllers implement multi-variable model predictive control (mpc), which is an advanced process control (apc) technique for controlling the operation of the equipment running an industrial process. the model is a set of generally linear dynamic relationships between several independent variables and several dependent variables. the model can have different forms, with laplace transforms and arx models being conventional model implementations. non-linear relationships between the variables are also possible. mpc control techniques typically involve using an empirically derived process model (i.e. based on historical process data) to analyze current input (e.g., sensor) data received, where the model identifies how the industrial equipment should be controlled (e.g., by changing actuator settings) and thus operated based on the input data received.

the control principle of mpc uses three (3) types of process variables: manipulated variables (mvs) and some measured disturbance variables (dvs) as the independent variables, and controlled variables (cvs) as the dependent variables. the model includes the response of each cv to mv/dv changes, and the model predicts future effects on the cvs from changes in the mvs and dvs.
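the open-loop prediction principle described above (each cv's future trajectory predicted from mv/dv moves through an empirically identified model) can be sketched roughly as follows. this is a minimal illustration, not the disclosed implementation; the first-order step response, gain and time constant are assumptions chosen for the example.

```python
# rough sketch of mpc prediction: a cv trajectory is the current measurement
# plus the superposed step responses of the planned mv moves.
import math

def step_response(gain, tau, horizon):
    """first-order step response samples: gain * (1 - exp(-k / tau))."""
    return [gain * (1.0 - math.exp(-k / tau)) for k in range(horizon)]

def predict_cv(response, mv_moves, cv_now, horizon):
    """predict a cv trajectory from planned mv moves.

    response : cv response to a unit mv step (len >= horizon)
    mv_moves : list of (time_index, delta_mv) moves within the horizon
    cv_now   : current cv measurement (the prediction is anchored here)
    """
    pred = [cv_now] * horizon
    for t0, dmv in mv_moves:
        for k in range(t0, horizon):
            pred[k] += dmv * response[k - t0]
    return pred

resp = step_response(gain=2.0, tau=5.0, horizon=30)
traj = predict_cv(resp, [(0, 1.5)], cv_now=10.0, horizon=30)
# traj settles near cv_now + gain * delta_mv = 10.0 + 3.0
```

the same superposition, with one response curve per mv/cv pair, is what lets mpc predict all cvs from all planned mv moves at once.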
in many industrial and commercial customer applications, key performance indicators (kpis) are used by a business kpi monitoring system to track whether a business or organization is performing to acceptable standards, for example in terms of compliance with the law, production rate, energy usage, maintaining product or service quality, and profitability. typically there is a wide range of types of kpis, for example from operators' working time lost due to accidents leading to injuries, maintenance shop performance, and environmental emissions through to production rate, quality variations, and energy and chemicals consumption.

some kpis used by the business kpi monitoring system are not related to the variables that are within the scope of an mpc controller (e.g., lost time injuries and maintenance shop performance kpis). however, kpis relating to production goals, such as feedrate to the process, production rates of various products, product 1 yield vs. product 2 yield, and energy consumption, etc., will typically overlap significantly with the mpc model's cvs, mvs or dvs. in some cases the same variables used to calculate kpis are also configured as mpc model cvs or mvs (because mpc controls and optimizes the important production variables). in other cases the kpis will be highly correlated with mpc mvs and dvs, and hence can be predicted/projected using the same mpc tools and workflows. this may include specific energy usage or product yields which can be used for performance monitoring.

summary

this summary is provided to introduce a brief selection of disclosed concepts in a simplified form that are further described below in the detailed description including the drawings provided. this summary is not intended to limit the claimed subject matter's scope.
disclosed embodiments recognize that while kpis and time-based trends of kpis are useful for tracking performance and helping to quickly identify changes in process performance in an industrial process, further detailed analysis is often needed to understand the root causes for a change in kpi performance in order to correct, mitigate or take advantage of a change in one or more kpis. data science and data analytics is a growing field and there are general-purpose toolsets commercially available to support practitioners in this field. however, analyzing the causes of poor kpi performance can be challenging because of the potential variety of root cause events that can impact a given kpi and the limited visibility or inference of those root cause events, which may be measured in disparate systems, or not measured at all.

another generally important factor is the influence of closed loop control within the system being measured, especially when the control can be switched between active (closed loop) and inactive (open loop) modes, is multivariate (i.e. has multiple inputs (cvs, dvs) and multiple outputs (mvs), which can be individually taken in or out of closed loop control), or is non-square (i.e. the number of cvs does not match the number of mvs). it is also recognized that for model predictive control (mpc) there is a problem caused by the correlations between the various process variables in the mpc process model for the industrial process changing over time, depending on whether the control system is active or partially active, and whether a particular set of cvs and mvs is within the active set of control.
disclosed embodiments moreover recognize that it is far simpler to implement kpi analytics as part of a closed loop process control application (compared to a traditional "generic" data analytics approach where statistical regression and clustering techniques are used to analyze a large set of historical time series data, but without a model of the process and the behavior of the control layers). although it is possible to perform kpi analytics on mpc applications for the purposes of evaluating whether the mpc controller is functioning well, is configured properly and is being used effectively by the operator, the user in that case is the mpc maintenance engineer/mpc team leader. moreover, tying mpc kpi analytics back to the business kpi monitoring system is not known, nor is the idea of including in the mpc model kpi unconstrained dependent variables (udvs) having no upper or lower control limit.

in many industrial sectors (e.g., the process and chemical industries) closed loop control, especially mpc control, is commonly used. the closed loop control is often configured to control and raise the profit for a plurality of cvs by adjustment of the mvs, which are directly used as kpis or generally strongly influence other kpis. the application has some type of process model that describes the system's behavior. modern mpc control applications are predictive, providing early, real-time indications of the future trajectory of the cvs and mvs and whether, with the available mvs and their configured high and low limits, the mpc will be able to control the cvs within the specified cv high and low limits, or whether these limits will be violated. the control applications execute relatively fast, monitoring the real-time constraints of the control system being monitored and reconciling the cv predictions to the current process measurements.
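the early-warning aspect described above (checking whether a predicted trajectory will leave its configured limits) can be sketched with a hypothetical helper; the trajectory values and limits below are made-up numbers for illustration, not values from the disclosure.

```python
# hypothetical helper: flag the future samples where a predicted cv/kpi
# trajectory leaves its configured high/low limit band.
def find_limit_violations(trajectory, low, high):
    """return (sample_index, value) pairs lying outside [low, high]."""
    return [(k, v) for k, v in enumerate(trajectory) if v < low or v > high]

pred = [99.0, 100.5, 102.0, 103.2, 104.1]   # made-up predicted trajectory
violations = find_limit_violations(pred, low=98.0, high=103.0)
# the first entry tells the operator how soon the high limit will be breached
```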
if kpis are already included in the mpc model as cvs or mvs, and/or are added as kpi udvs, mpc can be used to predict the future trajectory of the kpis and whether they will deviate from their targets configured in the business kpi system.

disclosed embodiments include a method of kpi performance analysis that includes providing a dynamic mpc process model for an industrial process including a plurality of mvs and a plurality of cvs for an mpc controller implemented by a processor having a memory storing the mpc process model. the mpc process model includes at least one kpi that is also included in a business kpi monitoring system for the industrial process. a future trajectory of the kpi and a steady-state (ss) value where the kpi will stabilize are estimated. the future trajectory and the ss value are used for determining dynamic relationships between key plant operating variables selected from the plurality of cvs and plurality of mvs, and the kpi. a performance of the kpi is analyzed including identifying at least one cause of a problem in the performance or exceeding the performance during operation of the industrial process from the dynamic relationships and a current value for at least a portion of the mvs.

brief description of the drawings

fig. 1 is a flow chart showing steps in an example method of kpi performance analysis, according to an example embodiment.

fig. 2 is a block diagram of an example process control system including an mpc controller that implements disclosed mpc control.

fig. 3 is an example simulation flowsheet for a debutanizer process, according to an example embodiment.

fig. 4 shows an example mpc control schematic for the debutanizer process showing an example mpc control strategy employed having a plurality of kpi udvs, according to an example embodiment.
detailed description

disclosed embodiments are described with reference to the attached figures, wherein like reference numerals are used throughout the figures to designate similar or equivalent elements. the figures are not drawn to scale and they are provided merely to illustrate certain disclosed aspects. several disclosed aspects are described below with reference to example applications for illustration. it should be understood that numerous specific details, relationships, and methods are set forth to provide a full understanding of the disclosed embodiments. one having ordinary skill in the relevant art, however, will readily recognize that the subject matter disclosed herein can be practiced without one or more of the specific details or with other methods. in other instances, well-known structures or operations are not shown in detail to avoid obscuring certain aspects. this disclosure is not limited by the illustrated ordering of acts or events, as some acts may occur in different orders and/or concurrently with other acts or events. furthermore, not all illustrated acts or events are required to implement a methodology in accordance with the embodiments disclosed herein.

disclosed embodiments implement certain types of operational kpis within an mpc control and optimization framework, enabled by alignment between the kpi management activity and mpc objectives, recognizing that it is difficult to analyze the causes of poor kpi performance unless the mpc performance and configuration are taken into account.

fig. 1 is a flow chart showing steps in an example method 100 of kpi performance analysis, according to an example embodiment. step 101 comprises providing a dynamic mpc process model for an industrial process including a plurality of mvs and a plurality of cvs for an mpc controller implemented by a processor having a memory storing the mpc process model.
the mpc process model includes at least one kpi that is also included in a business kpi monitoring system for the industrial process. the kpi can comprise a kpi udv that, as described above, has no upper or lower control limit. disturbance variables (dvs), although also lacking control limits, are in contrast independent variables. the mpc model can comprise a plurality of kpi udvs. known kpis include feed rate and product flowrates; disclosed kpi udvs can include key production and unit performance monitoring variables such as yields, energy and chemical consumption, and equipment efficiency.

the kpis are generally calculated externally. the calculation can be implemented in a variety of ways, such as within an mpc coded calculation or within the dcs system. the calculation can be a simple ratio of flows (e.g., a product yield), a heat balance, or more complex correlations. the kpi is often (but not limited to being) a calculated value. feed and product flows are examples of simple, directly measured kpis. a product yield, for example, is a calculated value: a product flow divided by feed flow. energy efficiency, specific energy consumption and equipment efficiency are all values that are calculated from other directly measured variables.

the kpi values will generally be calculated for the business kpi monitoring system. however, the kpi values also need to be calculated for the mpc application (the values of which can then be made available to the business kpi monitoring system to avoid duplication of effort). for the mpc, there are typically two choices for the implementation of the calculations: the mpc itself, or a calculation block in the distributed control system. as known in the art, the mpc model is defined in terms of the open loop process behavior, but the mpc uses this for closed loop control by "inverting" the model. the net effect is that the process behavior is changed when the mpc is switched on, to reflect closed loop behavior.
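the calculated kpis described above (a simple ratio of flows, or figures derived from directly measured variables) can be illustrated as follows; the function names, units and numbers are assumptions for the sketch, not taken from the disclosure.

```python
# illustrative calculated kpis: a product-yield ratio and a specific-energy
# figure computed from directly measured flows (all values are assumed).
def product_yield(product_flow, feed_flow):
    """simple ratio kpi: product flow divided by feed flow."""
    return product_flow / feed_flow

def specific_energy(fuel_flow, fuel_heating_value, feed_flow):
    """energy consumed per unit of feed, from directly measured variables."""
    return fuel_flow * fuel_heating_value / feed_flow

y = product_yield(product_flow=42.0, feed_flow=120.0)
e = specific_energy(fuel_flow=3.0, fuel_heating_value=50.0, feed_flow=120.0)
```

in practice such a calculation would live either in an mpc coded calculation or in a dcs calculation block, with the result shared with the business kpi monitoring system.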
step 102 comprises estimating a future trajectory of the kpi and a ss value where the kpi will stabilize. step 103 comprises using the future trajectory and the ss value to determine dynamic relationships between key plant operating variables selected from the plurality of cvs and plurality of mvs, and the kpi. step 104 comprises analyzing a performance of the kpi, including identifying at least one cause of a problem in the performance or exceeding the performance during operation of the industrial process, from the dynamic relationships and a current value for at least a portion of the mvs and optionally one or more of the cvs. the analyzing can be performed when the mpc controller is on, is off or is partially on. a significant advantage of disclosed embodiments is that the kpi analysis takes account of the mpc state and how that state influences the behavior of the industrial process. the analyzing of the performance of the kpi can be performed by the mpc controller or by a separate computing device implementing disclosed algorithms, including a cloud-based computer in one particular embodiment.

the identifying of the cause of the problem can comprise identifying which of the plurality of mvs are causing changes to the kpi. the method can further comprise providing results of the analyzing to the business kpi monitoring system, where a user of the business kpi monitoring system, generally viewing a dashboard on a display screen, utilizes the results of the analyzing to troubleshoot the problem in the performance or, in some cases, the reasons for exceeding the performance. the business kpi monitoring system user can suggest a change in settings for at least one mpc model parameter selected from the mvs and cvs, initiate a query of the process operator to find out why a given kpi is currently being limited, or trigger a workflow for another individual to investigate.
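the root-cause step above (identifying which mvs are causing changes to the kpi) can be sketched by attributing the predicted kpi change to individual mv moves through assumed steady-state gains; the gains, mv names and moves below are illustrative assumptions, not model values from the disclosure.

```python
# sketch: attribute a predicted kpi steady-state change to mv moves and
# rank the mvs by the magnitude of their contribution (assumed numbers).
def kpi_contributions(ss_gains, mv_moves):
    """contribution of each mv move to the predicted kpi steady-state change."""
    return {mv: ss_gains[mv] * delta for mv, delta in mv_moves.items()}

gains = {"feed_rate": 0.8, "reflux": -0.3, "reboiler_duty": 0.1}
moves = {"feed_rate": -2.0, "reflux": 1.0, "reboiler_duty": 0.5}
contrib = kpi_contributions(gains, moves)
ranked = sorted(contrib, key=lambda mv: abs(contrib[mv]), reverse=True)
# ranked[0] names the mv most responsible for the kpi change
```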
the method can further comprise updating the dynamic relationships based on the analyzing of the performance. the future trajectory can be used for automatic alerting of the operator and for event detection.

most commercial mpc software includes an optimizer to direct the mpc controller to an operating point that maximizes the profit. this optimum point is found by defining an economic value or cost for one or more cvs and mvs. the optimizer calculates an ideal operating point within the high and low bounds. this optimizer essentially gives the mpc a mind of its own and can cause the mpc to drive the process towards or away from the kpi targets, or to trade off one kpi vs. another. in the case where the mpc process model includes an optimizer, the method can further comprise identifying, as the cause of a problem, when the optimizer is causing the kpi to deviate from its target.

the kpi may be influenced by independent variables that are not currently within the mpc scope as mvs or dvs. however, mpc configuration tools can be used to search other process variables to determine if they have a measurable effect on the kpi udvs. if this is the case, they can be added to the scope of the mpc controller, so that the method can further comprise searching for at least one other variable in the industrial process that is not included in the mpc process model that impacts the kpi, and adding the other variable to the mpc process model.

the predictability of the kpi udvs based on the mpc model can be analyzed. a good (i.e. predictable) mpc model means one can pursue further analysis. a poor model means there are other significant factors that impact the kpi udv, which should generally be explored. if the model prediction is poor, one can trigger a workflow to explore what these factors might be, e.g. leveraging tools for historical model identification. the quality of the kpi prediction will be based on step 102 (estimating a future trajectory), where the future kpi values are predicted.
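the optimizer idea described above (finding an ideal operating point within the high and low bounds) can be illustrated with a deliberately tiny single-mv sketch: each cv limit pair maps back through the model gain to a feasible mv interval, and the economic direction picks the best endpoint. a commercial optimizer solves the multivariable lp/qp version of this; all gains, bounds and prices below are assumptions.

```python
# single-mv sketch of bound-constrained economic optimization: cv limits
# shrink the usable mv range, profit direction picks the endpoint.
def optimal_mv(mv0, mv_lo, mv_hi, cv0, gains, cv_lo, cv_hi, profit_per_mv):
    """return the most profitable mv value honoring mv and cv limits."""
    lo, hi = mv_lo, mv_hi
    for c0, g, cl, ch in zip(cv0, gains, cv_lo, cv_hi):
        a, b = (cl - c0) / g, (ch - c0) / g   # mv deltas hitting each cv limit
        if g < 0:
            a, b = b, a
        lo, hi = max(lo, mv0 + a), min(hi, mv0 + b)
    return hi if profit_per_mv > 0 else lo

best = optimal_mv(mv0=50.0, mv_lo=40.0, mv_hi=60.0,
                  cv0=[100.0, 5.0], gains=[2.0, -0.1],
                  cv_lo=[90.0, 4.4], cv_hi=[106.0, 6.0],
                  profit_per_mv=1.0)
# the cv limits shrink the usable mv range to [45, 53]; profit pushes high
```

with many mvs the feasible region is a polytope rather than an interval, which is why commercial tools use an lp or qp solver.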
these predictions (at a future timestamp) will then be stored and later compared against the measured values (when actual time has stepped forward to the corresponding timestamp). additional “test” models can be added to evaluate if the prediction improves. this helps to improve the understanding of the key contributors to the kpi. examples of the factors that lead to a poor kpi prediction are the existing mpc models being out of date and needing to be updated to reflect recent changes in the process behavior, and the mpc models need to be augmented to include additional plant information. both these effects can be overcome by a combination of commercially available plant step testing tools (e.g., the honeywell (profit stepper) and historic model identification tools. if the mpc model prediction is found to be reliable from the analyzing step, the predicted kpi value is generally useful as guidance for the process operator. the mpc model and mpc constraints can then be analyzed to understand most impactive controlled/uncontrolled variables on the kpi variables, and if the kpi is being “held back” (constrained) by the over-constraining of the process (e.g. conservative mv or cv limits) and how much extra could be achieved if specific limits were relaxed, which quantifies the incremental improvement in the kpi if the associated mpc cv and mv limits were relaxed marginally. a challenge is how to provide a good “line of sight” between the time aggregated kpis (e.g. over one shift or day of operation) and the actions the operator takes. for example, it is possible to evaluate which individual mv and cv constraints are holding back a given kpi from its target value (e.g. by relaxing the mpc controller limits to the ideal range limits specified by the process/reliability engineers). this can be significant work to execute on every controller iteration (execution cycle˜every minute) and for a kpi this generally makes more sense to do this less frequently, e.g. 
every controller time to steady state (ttss). one approach is to average the actual values of the mv and cv steady-state values over the time period together with the limits, use these as starting points for an “optimization” case in an optimization tool such as excel or the honeywell profit controller optimization solver, relax the average limits, and evaluate the impact on the kpi variables. one can also consider the effect of dvs. additional kpi variables not used in conventional mpc models are integrated into the mpc application as unconstrained dependent variables (udvs) that have no upper or lower control limit, so that they are not controlled by the mpc. as described above, there is often good alignment between the variables included in an mpc control application (or apc in the process industries) and a number of operational kpi variables such as production rate, product yield, product quality and energy usage. good alignment is provided because many of the important production variables will typically be monitored by the business kpi system and controlled and manipulated by the mpc controller. there are also other variables that are monitored by the business kpi system that are highly correlated with the mpc variables or may be correlated to combinations of the mpc variables. this is because the mpc is typically justified by improvements in the operational performance. however, some specific kpi variables are not included in conventional mpc because they may be considered as duplicate information or may not have specific control or optimization targets. provided that these kpi variables are influenced by at least one mv in the mpc model, it is recognized that they can be included as udvs in the mpc model run by the mpc controller to provide several significant benefits at a generally very small additional engineering cost.
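the limit-relaxation assessment described above can be sketched as follows, assuming a linear steady-state kpi model kpi = sum(g_i * mv_i); the gains and limits are illustrative, not from any real controller. because a linear kpi is monotone in each mv, its constrained optimum sits at a bound, so the incremental benefit of marginally relaxing each averaged limit can be evaluated directly:

```python
def max_kpi(gains, lo, hi):
    """steady-state maximum of a linear kpi = sum(g_i * mv_i) subject to
    lo_i <= mv_i <= hi_i; monotone in each mv, so the optimum is at a bound."""
    return sum(g * (h if g > 0 else l) for g, l, h in zip(gains, lo, hi))

def relaxation_benefit(gains, lo, hi, margin=0.05):
    """relax each averaged mv limit by a fractional `margin` of its span, in
    the kpi-improving direction, and report the incremental kpi gain per mv."""
    base = max_kpi(gains, lo, hi)
    benefits = []
    for i, g in enumerate(gains):
        lo2, hi2 = list(lo), list(hi)
        span = hi[i] - lo[i]
        if g > 0:
            hi2[i] = hi[i] + margin * span   # relax the binding high limit
        else:
            lo2[i] = lo[i] - margin * span   # relax the binding low limit
        benefits.append(max_kpi(gains, lo2, hi2) - base)
    return base, benefits
```

ranking the per-mv benefits identifies which averaged operator limit is "holding back" the kpi the most over the ttss period.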
benefits include calculation of kpi projections in mpc based on the mvs, root cause analysis including identification of which mv changes are causing kpi changes, and use of kpi projections for alerting and early event detection such as when the transient deviation is too large or the steady state prediction is considered a long way from its target(s). referring now to fig. 2 , a process control system 200 is shown in which a disclosed mpc process controller 211 having kpis including at least one kpi udv for kpi performance analysis is communicatively connected by a network 229 to an mpc server 228 , data historian 212 and to one or more host workstations or computers 213 (which may comprise personal computers (pcs), workstations, etc.), each having a display screen 214 . the mpc process controller 211 includes a processor 211 a and a memory 211 b. the control system 200 also includes a fourth level (l4) including a l4 network 235 having workstations or computers 241 , and a kpi business monitoring system comprising a business monitoring system server 240 . the business monitoring system server 240 is connected to the network 229 through a firewall 236 . the mpc controller 211 is also connected to field devices 215 to 222 via input/output (i/o) device(s) 226 . the data historian 212 may generally be any type of data collection unit having a memory and software, hardware or firmware for storing data, and may be separate from or a part of one of the workstations 213 . the mpc controller 211 is communicatively connected to the host computers 213 and the data historian 212 via, for example, an ethernet connection or other communication network 229 . the communication network 229 may be in the form of a local area network (lan), a wide area network (wan), a telecommunications network, etc. and may be implemented using hardwired or wireless technology.
the mpc controller 211 is communicatively connected to the field devices 215 to 222 using hardware and software associated with, for example, standard 4-20 ma devices and/or any smart communication protocol such as the foundation® fieldbus protocol (fieldbus), the hart® protocol, the wireless hart™ protocol, etc. the field devices 215 - 222 may be any types of devices, such as sensors, valves, transmitters, positioners, etc. while the i/o devices 226 may conform to generally any communication or controller protocol. the field devices 215 to 218 may be standard 4-20 ma devices or hart® devices that communicate over analog lines or combined analog/digital lines to the i/o device 226 , while the field devices 219 to 222 may be ‘smart’ field devices, such as fieldbus field devices, that communicate over a digital bus to the i/o device 226 using fieldbus protocol communications. the mpc controller 211 , which may be one of many distributed controllers within the plant 205 , has at least one processor therein that implements or oversees one or more process control routines, which may include control loops, stored therein or otherwise associated therewith. the mpc controller 211 also communicates with the devices 215 to 222 , the host computers 213 and the data historian 212 to control a process. a process control element can be any part or portion of a process control system including, for example, a routine, a block or a module stored on any computer readable medium so as to be executable by a processor, such as a cpu of a computer device. control routines, which may be modules or any part of a control procedure such as a subroutine, parts of a subroutine (such as lines of code), etc. may be implemented in generally any software format, such as using ladder logic, sequential function charts, function block diagrams, object oriented programming or any other software programming language or design paradigm. 
likewise, the control routines may be hard-coded into, for example, one or more eproms, eeproms, application specific integrated circuits (asics), or any other hardware or firmware elements. the control routines may be designed using a variety of design tools, including graphical design tools or other types of software, hardware, or firmware programming or design tools. thus, the mpc controller 211 may be generally configured to implement a control strategy or control routine in a desired manner. in one embodiment, the mpc controller 211 implements a control strategy using what are generally referred to as function blocks, wherein each function block is a part of or an object of an overall control routine and operates in conjunction with other function blocks (via communications generally called links) to implement process control loops within the process control system 200 . function blocks typically perform an input function, such as that associated with a transmitter, a sensor or other process parameter measurement device, a control function, such as that associated with a control routine that performs mpc, or an output function that controls the operation of some device (e.g., a valve) to perform some physical function within the process control system 200 . function blocks may be stored in and executed by the mpc controller 211 , which is typically the case when these function blocks are used for, or are associated with, standard 4-20 ma devices and some types of smart field devices such as hart® devices, or may be stored in and implemented by the field devices themselves, which may be the case with foundation® fieldbus devices. still further, function blocks which implement controller routines may be implemented in whole or in part in the host workstations or computers 213 or in any other computer device.
examples: disclosed embodiments are further illustrated by the following specific examples, which should not be construed as limiting the scope or content of this disclosure in any way. for example, consider the simulation flowsheet 300 for a debutanizer process depicted in fig. 3 . the debutanizer process is a distillation process common in many oil refineries which separates light liquefied petroleum gas (lpg) components from a mixed naphtha stream. fig. 4 shows an example mpc control schematic for the debutanizer process showing an example mpc control strategy employed having kpis including several kpi udvs. as can be seen, many of the key distillation kpis for the process are naturally included within the mpc model's control strategy, such as: the unit feedrate, as manipulated variable mv1; the overhead product quality, as controlled variable cv1; the bottoms product quality, as controlled variable cv2. however, as can be seen in fig. 4 , a number of kpi parameters not used as kpis in conventional mpc models have been added to the mpc controller, shown as kpi udvs 1-4 (being specific energy usage, reboiler duty-heat flow, lpg yield, and condensate yield), each of which lacks both a high and a low control limit. it can be seen within the mpc model that the kpi udvs each have only steady state (ss) and future values. implementing these kpi udvs 1-4 as additional kpis within the mpc model without high or low limits is recognized to provide at least two benefits: 1. mpc controller information (such as the values of the other controller mvs and cvs, operator entered limits, and the optimizer configuration (the controller state)) can be used to help diagnose (find the root cause of) why any of the respective kpis are performing above or below their target (limit or range); 2.
the future values of the kpis are predicted (trajectories), so that the operator can be automatically alerted in real time if the kpis are predicted (future values) to change significantly in the short term, to enable possible corrective action to be taken pre-emptively. analyzing the cause(s) of poor (or exceeding) kpi performance can be implemented as a multi-step process as described below. step 1 prediction quality: the first step can comprise the assessment of whether each kpi value is predicted well by the mpc model. the technology to analyze the prediction quality of a general mpc controlled variable is well established and has been reduced to practice in commercially available products such as honeywell's profit expert toolset. this approach involves an assessment of the mpc model prediction bias relative to the movement of variables within the scope of the mpc controller and other external variables. if the mpc model is poor (a large bias for the kpi), a model update workflow using step testing tools such as honeywell's profit stepper and profit suite engineering studio can be used to improve the performance of the mpc model. the mpc model may be poor because it does not reflect changes in the behavior of the process or is incomplete, i.e. does not include all the influencing factors. known tools can be used to search for influencing process variable(s) from historical data stored in a data historian 212 , which can then be refined with the step testing tools. step 2 determining the root cause(s) of kpi target deviation: there are a number of sequential steps that can be followed to determine the causes of a kpi deviation from its target value or range. step 2a: a check can be made to determine the percentage of time the kpi has been clamped against an operator or engineer entered limit and whether the mpc controller limits are consistent with the overall kpi limits. for example, in fig. 4 , the feedrate (mv1) and the two quality kpis (cv1 and cv2) have associated limits. in this real-time view (a snapshot of the mpc controller at one instant in time) they are not being limited by those limits. however, over the course of the kpi aggregation period they may reach their limits for some percentage of the time. if the kpis within the mpc model are clamped at the mpc limits, and the mpc limits are not consistent with (i.e. are more restrictive than) their kpi targets in the business kpi system, then this mismatch can be automatically flagged as a cause of poor kpi performance. for example, the poor kpi performance for the debutanizer feedrate can be reported as “debutanizer feedrate was below target during the kpi aggregation period. 67% of this deviation can be attributed to the fact that the mpc feedrate was clamped on average at 20 m³/hr by the process operators”. step 2b: when the kpis within the mpc controller are not constrained at the limits (including the kpi udvs, which do not have limits), then the mpc controller performance should be examined to determine what is holding back the kpi from moving in the direction to meet the aggregated kpi target. this can be due to: 1. the mpc controller economics (linear program and quadratic program weightings) having been configured to move the kpi in the wrong direction (away from the aggregated kpi target); 2. the mpc controller economics being such that its optimizer has calculated it can make more money by moving one or more kpis in the wrong direction, in order to move cvs and mvs (with greater economic value) towards their targets. a simple example is that when a processing plant is sold out, the incentive to produce more product (consuming more feedrate) is stronger than the incentive to reduce the total energy consumed or even the specific energy consumed.
note, however, that if the primary optimization handles (regarding the maximization of the production rate) hit their constraints so that the production rate cannot be increased further, secondary objectives such as reducing any incremental specific energy consumption can come into play. 3. another, related variable within the controller is constraining the mpc controller from moving the kpi(s) in a favorable direction. the intent of the mpc optimizer can be established in one of two ways: by comparing the steady state prediction for the kpis (e.g., a cv or mv in the mpc) with the current measured value, and by examination of the objective function economics. the optimizer can determine which way to move the free mvs by considering the direct mv economic weightings and the impact of a change in the mvs on the cvs with economic weightings. this can be calculated analytically for a given optimizer formulation, but in the general sense this can be expressed by an equation of the form: incentive i = ∂obj cost /∂mv i + Σ j (∂obj cost /∂cv j )·(∂cv j /∂mv i ), where: ∂ indicates a partial derivative; obj cost is the value of a cost-based objective function: obj cost = fn(cv i,n , mv j,m , economics). a cv value is predicted from past mv and dv changes, for example by a step-response model of the form cv j (t) = cv j,0 + Σ i Σ τ g j,i (τ)·Δmv i (t−τ) + Σ k Σ τ h j,k (τ)·Δdv k (t−τ), where g j,i and h j,k are step-response coefficients from mv i and dv k to cv j . if the incentive for mv i is positive, the optimizer will always seek to minimize the mv i value, subject to the mpc controller limits. if this is a kpi with an aggregated target to be maximized, then the process is relying on the process operator to always maintain the low limit at or above the aggregated kpi limit in order to achieve the aggregated kpi limit. otherwise there will always be a mismatch between the mv and the kpi. likewise, if a kpi udv will not be optimized in the right direction by any of the application mvs, then the aggregated kpi target can generally only be achieved by the correct setting of the operator limits, which can be verified from operating data. this covers case 1 described above.
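the incentive calculation above can be sketched numerically. this is an illustrative sketch with made-up weightings and steady-state gains, not the profit controller solver: a positive incentive (cost increases with the mv) pushes that mv towards its low limit, and the resulting drift of a kpi udv follows from its gains to those mvs.

```python
def mv_incentives(mv_weights, cv_weights, gains):
    """d(obj_cost)/d(mv_i) = w_mv_i + sum_j w_cv_j * (dcv_j/dmv_i), where
    gains[j][i] is the steady-state gain from mv_i to cv_j.  a positive
    incentive means the optimizer pushes mv_i to its low limit."""
    return [mv_weights[i] + sum(cv_weights[j] * gains[j][i]
                                for j in range(len(cv_weights)))
            for i in range(len(mv_weights))]

def kpi_drift(incentives, kpi_gains):
    """predicted optimizer effect on a kpi udv: each free mv moves opposite
    the sign of its incentive, and the kpi moves by kpi_gain times that
    direction (negative result: the optimizer drives the kpi downwards)."""
    return sum(-(1 if inc > 0 else -1 if inc < 0 else 0) * kg
               for inc, kg in zip(incentives, kpi_gains))
```

a negative drift for a kpi udv with a "maximize" aggregated target would flag case 1 (economics pointing the wrong way) for the reporting step.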
case 2 above can be determined by analyzing whether there is a mixture of mvs, some of which will move a given kpi in the right direction (towards the aggregated kpi target) and others that will move it in the wrong direction. in this scenario the optimizer can decide whether to move the kpi udv towards or away from the aggregated kpi target based on the freedom of the respective mvs to move. this case can be identified by analyzing the optimization direction, which mvs and cvs are constraints, and the economic weightings. case 3 above can be assessed by examining the solution returned from the optimization step, in terms of the active set of constraints. conceptually this is equivalent to sequentially relaxing the limits for the mpc variables predicted to be at their limits (at steady state) to determine if the optimizer then moves the kpi towards its aggregated kpi target. step 3 aggregation and thresholding: a number of the steps in the analysis use real-time information to assess the reasons for a kpi being constrained away from its aggregated target. generally, these reasons need to be aggregated over the kpi reporting period, ranked in terms of their percentage applicability, and the top reasons reported as causes of the kpi performance problem. while various disclosed embodiments have been described above, it should be understood that they have been presented by way of example only, and not limitation. numerous changes to the subject matter disclosed herein can be made in accordance with this disclosure without departing from the spirit or scope of this disclosure. in addition, while a particular feature may have been disclosed with respect to only one of several implementations, such feature may be combined with one or more other features of the other implementations as may be desired and advantageous for any given or particular application.
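steps 2a and 3 above can be sketched together: compute the fraction of the aggregation period a kpi spent clamped at an operator limit, then aggregate the per-execution cause flags and rank the top reasons by percentage applicability. the values and cause labels below are illustrative only.

```python
from collections import Counter

def clamped_fraction(values, lo, hi, tol=1e-6):
    """step 2a: fraction of samples in the aggregation period sitting at an
    operator or engineer entered limit (the '67% clamped' style statistic)."""
    at_limit = sum(1 for v in values if v <= lo + tol or v >= hi - tol)
    return at_limit / len(values)

def rank_causes(cause_flags, top_n=3):
    """step 3: cause_flags holds one cause label per controller execution in
    the reporting period; return the top causes with their % applicability."""
    counts = Counter(cause_flags)
    total = len(cause_flags)
    return [(cause, 100.0 * n / total) for cause, n in counts.most_common(top_n)]
```

the ranked output maps directly to the operator report, e.g. "67% of the deviation is attributed to the feedrate being clamped at its low limit".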
as will be appreciated by one skilled in the art, the subject matter disclosed herein may be embodied as a system, method or computer program product. accordingly, this disclosure can take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “circuit,” “module” or “system.” furthermore, this disclosure may take the form of a computer program product embodied in any tangible medium of expression having computer usable program code embodied in the medium.
142-559-018-174-428
CN
[ "KR", "EP", "BR", "WO", "US", "JP", "CN" ]
A61K47/32,A61K9/70,A61K47/38,A61K31/745,A61K31/74,A61L24/00,A61L26/00,A61P17/00,A61P17/02,A61P17/04,A61P17/10,A61K31/015,A61K31/045,A61K31/785,A61K36/23,A61K36/53,A61K36/61,A61K36/18,A61K45/00,A61K47/10,A61K47/22,A61K47/30,A61K47/36,A61P25/00
2011-11-08T00:00:00
2011
[ "A61" ]
a hydrocolloid composition and an article containing the same
the present invention provides a hydrocolloid composition which, based on 100% by weight of the hydrocolloid composition, comprises: 10-90% by weight of a polyisobutylene tackifier; 5-55% by weight of a hydrophilic absorbing substance; and 0.1-20% by weight of a functional ingredient. the invention further provides an article containing the hydrocolloid composition.
what is claimed is: 1. a hydrocolloid composition which, based on 100% by weight of the hydrocolloid composition, comprises: 10-90% by weight of a polyisobutylene tackifier; 5-55% by weight of a hydrophilic absorbing substance; and 0.1-20% by weight of a functional ingredient. 2. the hydrocolloid composition according to claim 1, wherein the polyisobutylene tackifier has a weight average molecular weight of 20000-150000 and a polydispersity index of 1-10. 3. the hydrocolloid composition according to claim 1 or 2, wherein the hydrocolloid composition comprises 30-75% by weight of the polyisobutylene tackifier, based on 100% by weight of the hydrocolloid composition. 4. the hydrocolloid composition according to claim 1, wherein the hydrophilic absorbing substance is selected from the group consisting of celluloses, starches, synthetic resins, and the mixtures thereof. 5. the hydrocolloid composition according to claim 1 or 4, wherein the hydrocolloid composition comprises 10-40% by weight of the hydrophilic absorbing substance, based on 100% by weight of the hydrocolloid composition. 6. the hydrocolloid composition according to claim 1, wherein the functional ingredient is selected from the group consisting of limonene, asiatic pennywort herb, tea tree oil, spike essential oil, water absorbent, antimicrobial, wound healing, and the mixtures thereof. 7. the hydrocolloid composition according to claim 1, wherein the hydrocolloid composition further comprises 0.1-50% by weight of a hydrophobic unsaturated elastomeric homopolymer, based on 100% by weight of the hydrocolloid composition. 8. the hydrocolloid composition according to claim 7, wherein the hydrophobic unsaturated elastomeric homopolymer is selected from the group consisting of polyisoprene, polybutadiene, and the mixtures thereof. 9.
the hydrocolloid composition according to claim 1, 7 or 8, wherein the hydrocolloid composition comprises 30-37% by weight of the hydrophobic unsaturated elastomeric homopolymer, based on 100% by weight of the hydrocolloid composition. 10. the hydrocolloid composition according to claim 1, wherein the hydrocolloid composition further comprises 0.1-55% by weight of a resin tackifier, based on 100% by weight of the hydrocolloid composition. 11. the hydrocolloid composition according to claim 10, wherein the resin tackifier has a weight average molecular weight of 200-3000 and a polydispersity index of 0.5-20. 12. the hydrocolloid composition according to claim 1, 10 or 11, wherein the hydrocolloid composition comprises 30-75% by weight of the resin tackifier, based on 100% by weight of the hydrocolloid composition. 13. the hydrocolloid composition according to claim 1, wherein the hydrocolloid composition further comprises 0.1-20% by weight of a penetration facilitator, based on 100% by weight of the hydrocolloid composition. 14. the hydrocolloid composition according to claim 13, wherein the penetration facilitator is selected from the group consisting of borneol, menthol, piperine, and the mixtures thereof. 15. the hydrocolloid composition according to claim 1, 13 or 14, wherein the hydrocolloid composition comprises 0.5-8% by weight of the penetration facilitator, based on 100% by weight of the hydrocolloid composition. 16. an article containing the hydrocolloid composition according to any one of claims 1-15. 17. the article according to claim 16, wherein the article is in a form of a patch. 18. the article according to claim 17, wherein the patch is a mind-refreshing patch, an anti-itching patch, an anti-acne patch, a super water-absorbing hydrocolloid patch, an antimicrobial patch, or a wound healing patch.
the description a hydrocolloid composition and an article containing the same technical field the present invention relates to a hydrocolloid composition and an article containing the same, and specifically, to a functional hydrocolloid composition used for personal care or medical treatment and an article containing the same. background art products containing natural plant ingredients are widely used in personal care or medical treatment and exhibit different efficacies, for example, mind-refreshment, mosquito-repellence, anti-acne, anti-itch or the like. the typical products are essential balm, cooling ointment, perfume or the like. besides these traditional products, there are also some natural plant products of adhesive plaster types or hydrogel types in the market. however, all of the above products have their respective disadvantages. - liquid-type or ointment-type products: generally, they are applied to respective portions of the skin by hand; for example, a product for mind-refreshment can be applied to the temples or the like. as to this application method, firstly, these products will contaminate fingers and clothes, and a portion of the products will be wasted during the application. some products will be removed by outside friction. additionally, these products volatilize easily after being applied to an affected part and the lasting time thereof is short. if there is skin damage in the affected part, the products will cause local irritation and pain to the skin. - traditional adhesive plaster-type products: most of the traditional adhesive plasters are based on small-molecule materials. the most important problem with these products is that the adhesive thereof will irritate the skin. furthermore, these products have poor air permeability and a large residual adhesive amount, which will cause itch and inflammation of the skin. some of the plasters will stain the skin due to adhesive residues.
- gel-type products: they have the disadvantages of lower strength and poor integrity. therefore, in order to realize the efficacy of natural plants, there is still a demand for a composition or an article thereof which not only has better strength and integrity, but also can avoid the disadvantages of the above products. contents of invention therefore, in the first aspect of the invention, it provides a hydrocolloid composition which, based on 100% by weight of the hydrocolloid composition, comprises: 10-90% by weight of a polyisobutylene tackifier; 5-55% by weight of a hydrophilic absorbing substance; and 0.1-20% by weight of a functional ingredient. preferably, the hydrocolloid composition further comprises 0.1-50% by weight of a hydrophobic unsaturated elastomeric homopolymer, based on 100% by weight of the hydrocolloid composition. preferably, the hydrocolloid composition further comprises 0.1-55% by weight of a resin tackifier, based on 100% by weight of the hydrocolloid composition. preferably, the hydrocolloid composition further comprises 0.1-20% by weight of a penetration facilitator, based on 100% by weight of the hydrocolloid composition. in the second aspect of the invention, it provides an article containing the hydrocolloid composition in the first aspect of the invention. the hydrocolloid composition of the invention has sufficient strength and integrity, and can provide various functions as demanded. for example, the hydrocolloid composition of the invention can be used for wound healing, or whelk care (reducing the infection from propionibacterium acnes), or mind-refreshment, or anti-itch, or anti-acne, or mosquito-repellence, or antimicrobial use or the like. specific mode for carrying out the invention the present invention relates to a gelling agent (a hydrophilic substance which can form a colloid upon being contacted with water).
because the gelling agent will form an amorphous colloid after being contacted with water, it is necessary for it to be used in combination with other materials with better strength. in the invention, unless specifically stated otherwise, the term "hydrocolloid" means this composition containing a gelling agent. the present invention provides a hydrocolloid composition which, based on 100% by weight of the hydrocolloid composition, comprises: 10-90% by weight of a polyisobutylene tackifier; 5-55% by weight of a hydrophilic absorbing substance; 0.1-20% by weight of a functional ingredient. preferably, the hydrocolloid composition further comprises 0.1-50% by weight of a hydrophobic unsaturated elastomeric homopolymer, based on 100% by weight of the hydrocolloid composition. preferably, the hydrocolloid composition further comprises 0.1-55% by weight of a resin tackifier, based on 100% by weight of the hydrocolloid composition. preferably, the hydrocolloid composition further comprises 0.1-20% by weight of a penetration facilitator, based on 100% by weight of the hydrocolloid composition. the polyisobutylene tackifier comprises 10-90% by weight, preferably 20-85% by weight, more preferably 25-80% by weight, and most preferably 30-75% by weight of the hydrocolloid composition. preferably, the polyisobutylene tackifier has a weight average molecular weight of 20,000-150,000 and a polydispersity index (pdi) of 1-10. the examples thereof include, but are not limited to, sdg-8650 from shunda, hangzhou, lm-mh from exxon, oppanol b-12 sfn from basf, as well as pib 6h from ritchem, wherein sdg-8650 from shunda, hangzhou is preferable, oppanol b-12 sfn from basf is more preferable, and pib 6h from ritchem is most preferable. the absorbing substance comprises 5-55% by weight, preferably 5-50% by weight, more preferably 7-45% by weight, and most preferably 10-40% by weight of the total weight of the hydrocolloid composition.
the absorbing substance means a polymer which can absorb a substance with a weight several times larger than the weight of the polymer itself. the examples thereof include, but are not limited to, celluloses, such as hs100000yp2 hydroxyethyl cellulose from clariant chemical, fh5000 carboxymethyl cellulose sodium from wei yi, suzhou, china, croscarmellose sodium from fmc company; starches such as a carboxymethyl starch from fuhua, henan; synthetic resins such as luquasorb®1030 super water-absorbing polymer from basf; or the mixtures of the above ingredients, wherein fh5000 carboxymethyl cellulose sodium from wei yi, suzhou is preferable, the combination of fh5000 carboxymethyl cellulose sodium from wei yi, suzhou and the croscarmellose sodium from fmc company is more preferable, and the combination of the croscarmellose sodium from fmc company and luquasorb®1030 super water-absorbing polymer from basf is most preferable. the functional ingredient comprises 0.1-20% by weight of the total weight of the hydrocolloid composition, and the addition ratio by weight is different with respect to different functions. the functional ingredient means an ingredient which can provide one of the following functions: accelerating wound healing, whelk care (reducing the infection from propionibacterium acnes), mind-refreshment, anti-itch, anti-acne, mosquito-repellence, antimicrobial and the like. the examples thereof include, but are not limited to, limonene, asiatic pennywort herb, tea tree oil, spike essential oil, and water absorbent. the hydrophobic unsaturated elastomeric homopolymer comprises 0.1-50% by weight, preferably 10-45% by weight, more preferably 15-40% by weight, and most preferably 30-37% by weight of the total weight of the hydrocolloid composition. the hydrophobic unsaturated elastomeric homopolymer is an elastic modified molecular material with a hydrophobic group and an unsaturated double bond, which is formed by polymerization of a monomer.
the examples thereof include, but are not limited to, polyisoprene (such as natsyn 2210 from goodyear, usa; ir2200 from zeon, japan) and polybutadiene (such as br9000 polybutadiene from qilu petrochemical, sinopec), wherein br9000 polybutadiene from qilu petrochemical is preferable, natsyn 2210 from goodyear, usa is more preferable, and ir2200 from zeon, japan is most preferable. the resin tackifier comprises 0.1-55% by weight, preferably 0.1-40% by weight, more preferably 5-30% by weight, and most preferably 5-20% by weight of the total weight of the hydrocolloid composition. the resin tackifier refers to a resin which can increase the viscosity of a rubber. the examples thereof include, but are not limited to, eastotac h-100r resin from eastman chemical, wingtack 95 from cray valley, and sylvalite re80hp rosin ester from arizona chemicals, wherein eastotac h-100r resin from eastman chemical is preferable, sylvalite re80hp rosin ester from arizona chemicals is more preferable, and wingtack 95 from cray valley is most preferable. preferably, the resin tackifier has a weight average molecular weight of 200-3000 and a polydispersity index of 0.5-20. the penetration facilitator comprises 0.1-20% by weight, preferably 0.2-15% by weight, more preferably 0.3-10% by weight, and most preferably 0.5-8% by weight of the total weight of the hydrocolloid composition. the penetration facilitator, also referred to as a skin-penetrating agent or penetration accelerator, is mainly used to facilitate the penetration of functional ingredients into the skin. the examples thereof include, but are not limited to, borneol from huaxin, jiangxi, china; natural menthol from jubang, anhui, china; and piperine from sabinsa company, usa, wherein the borneol from huaxin, jiangxi is preferable, the natural menthol from jubang, anhui is more preferable, and the piperine from sabinsa company, usa is most preferable. the hydrocolloid composition of the invention can be prepared simply by traditional mixing methods.
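as a worked check of the weight ranges above, the sketch below (a hypothetical helper, not part of the invention) verifies that a formulation sums to 100% by weight and that each ingredient class falls inside its claimed range; the sample values mirror example 1 of this description, with ingredient roles assigned per the text.

```python
# claimed weight-percent ranges per ingredient class (from the description)
RANGES = {
    "polyisobutylene_tackifier": (10.0, 90.0),
    "hydrophilic_absorbent": (5.0, 55.0),
    "functional_ingredient": (0.1, 20.0),
    "elastomeric_homopolymer": (0.1, 50.0),  # optional component
    "resin_tackifier": (0.1, 55.0),          # optional component
    "penetration_facilitator": (0.1, 20.0),  # optional component
}

def check_formulation(wt_pct):
    """verify the weights sum to 100% and each class is within its range."""
    total = sum(wt_pct.values())
    if abs(total - 100.0) > 1e-6:
        return False, f"weights sum to {total}%, not 100%"
    for role, pct in wt_pct.items():
        lo, hi = RANGES[role]
        if not (lo <= pct <= hi):
            return False, f"{role} at {pct}% is outside {lo}-{hi}%"
    return True, "ok"

# example 1 (mind-refreshing patch), roles assigned per the description
example1 = {
    "elastomeric_homopolymer": 32.5,    # br9000 polybutadiene
    "polyisobutylene_tackifier": 40.0,  # pib 6h
    "hydrophilic_absorbent": 22.0,      # hs100000yp2 hydroxyethyl cellulose
    "penetration_facilitator": 5.0,     # borneol
    "functional_ingredient": 0.5,       # limonene
}
```

running `check_formulation(example1)` confirms the example 1 recipe is consistent with the claimed ranges.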
the invention further provides an article containing the hydrocolloid composition as described above. the hydrocolloid composition of the invention can be used in various applications. for example, it can be used in products for mind-refreshment, such as a patch attached to the head during an examination, meeting, driving, or working. it can further be used for anti-itch and wound healing. in summer, bites from mosquitoes will cause itch and inflammation of the skin, and sometimes even result in a low-grade wound or skin ulceration. an anti-itching hydrocolloid patch prepared from the hydrocolloid composition of the invention has the following functions: persistently releasing anti-itching ingredients; absorbing exudate from the wound; forming an excellent barrier on the wound; protecting the bared wound from bacterial invasion; and accelerating wound healing. the hydrocolloid composition of the invention can further be used to protect heels. heels are liable to be hurt by shoes, especially high-heel shoes or new shoes, which often results in skin damage, blisters on the feet or the like. a hydrocolloid heel patch made of the hydrocolloid composition of the invention, with an added functional substance having relaxing efficacy, can relax the foot and remove fatigue; additionally, the soft hydrocolloid material can alleviate pressure on the foot and isolate sharp portions of a shoe from the foot. in the case of skin damage, it can also protect the bared wound and accelerate wound healing. the hydrocolloid composition of the invention can further be made into an antimicrobial hydrocolloid dressing. the invention is described in more detail with reference to the examples. it should be noted that these examples are illustrative and do not limit the invention in any way.
unless specifically stated otherwise, the percentages, contents, proportions or the like in the invention are all in terms of weight, and the temperatures used in the invention are in degrees centigrade. example the sources of the raw materials used in the following examples are listed in the following table 1. table 1 sources of the raw materials used in examples example 1 mind-refreshing patch 32.5% of br9000 polybutadiene, 40% of pib 6h polyisobutylene, 22% of hs100000yp2 hydroxyethyl cellulose, 5% of borneol, and 0.5% of limonene were added into a mixer, mixed for 2 hr, evacuated and remixed for 1 hr, and then brought out. the mixture was weighed and placed between two organosilicon release papers, and then pressed into a sheet with a thickness of 0.05±0.01 mm using a plate curemeter. subsequently, a release paper was peeled from the sheet, and the sheet was then combined with a transparent thin film. the sheet was cut into a certain shape as desired. example 2 anti-itching patch 50% of natsyn 2210 polyisoprene, 20% of sdg-8650 polyisobutylene, 22% of fh5000 carboxymethyl cellulose, 5% of asiatic pennywort herb, 2% of tea tree oil, and 1% of menthol were added into a mixer, mixed for 2 hr, evacuated and remixed for 1 hr, and then brought out. the mixture was weighed and extruded into a sheet with a thickness of 0.05±0.01 mm using a single screw extruder. the extruded sheet was combined with an organosilicon release paper on the bottom thereof and a transparent thin film on the top thereof, and then wound. the sheet was cut into a certain shape as desired. example 3 anti-acne patch 23.5% of ir2200 polyisoprene, 35% of oppanol b-12 sfn polyisobutylene, 15% of wingtack 95, 20% of sodium croscarmellose, 1.5% of spike essential oil, 2% of tea tree oil, and 3% of menthol were added into a mixer, mixed for 2 hr, evacuated and remixed for 1 hr, and then brought out. 
the mixture was weighed and extruded into a sheet with a thickness of 0.03±0.005 mm using a single screw extruder. the extruded sheet was combined with an organosilicon release paper on the bottom thereof and a transparent thin film on the top thereof, and then wound. the sheet was cut into a certain shape as desired. example 4 super water-absorbing hydrocolloid patch 37.5% of natsyn 2210 polyisoprene, 35% of oppanol b-12 sfn polyisobutylene, 8% of fh5000 carboxymethyl cellulose, 6% of sodium croscarmellose, and 13.5% of luquasorb®1030 were added into a mixer, mixed for 2 hr, evacuated and remixed for 1 hr, and then brought out. the mixture was weighed and extruded into a sheet with a thickness of 0.03±0.005 mm using a single screw extruder. the extruded sheet was combined with an organosilicon release paper on the bottom thereof and a transparent thin film on the top thereof, and then wound. the sheet was cut into a certain shape as desired. example 5 mind-refreshing patch 32.5% of br9000 polybutadiene, 40.0% of pib 6h polyisobutylene, 15.0% of fh5000 carboxymethyl cellulose, 6.5% of sodium croscarmellose, and 6.5% of wind medicated oil were added into a mixer, mixed for 2 hr, evacuated and remixed for 1 hr, and then brought out. the mixture was weighed and placed between two organosilicon release papers, and then pressed into a sheet with a thickness of 0.05±0.01 mm using a plate curemeter. subsequently, a release paper was peeled from the sheet, and the sheet was then combined with a transparent thin film. the sheet was cut into a certain shape as desired. example 6 60% of pib 6h polyisobutylene, 15.0% of fh5000 carboxymethyl cellulose, 9.5% of sodium croscarmellose, and 5% of polyhexamethylene biguanide (phmb) cosmocil™ cq were added into a mixer, mixed for 2 hr, evacuated and remixed for 1 hr, and then brought out. the mixture was weighed and extruded into a sheet with a thickness of 0.03±0.005 mm using a single screw extruder. 
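since the recipes above are given as weight percentages of the total composition, a quick mass-balance check is possible. the sketch below is not part of the original disclosure; the component names and values are transcribed from examples 1 and 4 above, and it simply verifies that each formulation sums to 100% by weight:

```python
# Mass-balance check: a hydrocolloid formulation specified in weight percent
# should total 100%. Values transcribed from examples 1 and 4 above.
formulations = {
    "example 1": {
        "br9000 polybutadiene": 32.5,
        "pib 6h polyisobutylene": 40.0,
        "hs100000yp2 hydroxyethyl cellulose": 22.0,
        "borneol": 5.0,
        "limonene": 0.5,
    },
    "example 4": {
        "natsyn 2210 polyisoprene": 37.5,
        "oppanol b-12 sfn polyisobutylene": 35.0,
        "fh5000 carboxymethyl cellulose": 8.0,
        "sodium croscarmellose": 6.0,
        "luquasorb 1030": 13.5,
    },
}

for name, recipe in formulations.items():
    total = sum(recipe.values())
    print(f"{name}: {total:.1f}% by weight")
    assert abs(total - 100.0) < 1e-9, f"{name} does not sum to 100%"
```

the same check applied to a draft recipe catches transcription errors before mixing.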
the extruded sheet was combined with an organosilicon release paper on the bottom thereof and a transparent thin film on the top thereof, and then wound. the sheet was cut into a certain shape as desired. example 7 one patch made in example 1 was taken. its weight was 0.2193g (a backing of the same size weighed 0.0116g, so the hydrocolloid had a weight of 0.2077g). the patch was soaked in 20g (approximately 25ml) of ethanol and stirred using a magnetic stirrer for 24h to obtain an extract solution. the extract solution was then tested via gc-ms. chromatographic conditions: column: db-1 30m*0.32mm*μm; injector temp.=250°c; split ratio=10:1; oven: initial temp.=120°c, hold 1 min; ramp=5°c/min; final temp.=135°c, hold 5 min. carrier gas: n2 (2ml/min); detector: fid; temp.=250°c. standard preparation: 0.0104g of menthol standard was weighed into a 10ml volumetric flask, dissolved with anhydrous ethanol, and diluted to the mark. test results: the menthol concentration in the extract solution was 0.0727mg/ml. its percentage in the patch was 0.88%. compared with the dosage of 5%, 17.5% of the menthol was released from the patch. example 8 one patch made in example 2 was taken. its weight was 0.2403g (a backing of the same size weighed 0.0116g, so the hydrocolloid had a weight of 0.2287g). the patch was soaked in 20g (approximately 25ml) of ethanol and stirred using a magnetic stirrer for 24h to obtain an extract solution. the extract solution was then tested via gc-ms. chromatographic conditions: column: db-1 30m*0.32mm*μm; injector temp.=250°c; split ratio=10:1; oven: initial temp.=120°c, hold 1 min; ramp=5°c/min; final temp.=135°c, hold 5 min. carrier gas: n2 (2ml/min); detector: fid; temp.=250°c. standard preparation: 0.0261g of borneol standard was weighed into a 25ml volumetric flask, dissolved with anhydrous ethanol, and diluted to the mark. test results: the menthol concentration in the extract solution was 0.0213mg/ml. its percentage in the patch was 0.23%. 
compared with the dosage of 1%, 23.3% of the borneol was released from the patch. examples 7 and 8 proved that the functional ingredients were indeed incorporated into the hydrocolloid matrix. example 9 this example uses an in-house panel test to show whether the patch works. 13 volunteers attended this test. they were asked to wear the patches made in example 5 and asked whether they could still feel them after 5, 10, 30 and 60 min. the results are listed in table 2. table 2 volunteers' answers on whether they had feelings after different times according to the results, 100% of them could still feel the patch after 30 min, and 2 of them still had feelings after 60 min, which is much longer than daubing wind medicated oil on the skin directly. example 10 one patch made in example 6 was taken. the zone of inhibition (zoi) toward staphylococcus aureus was tested according to the disinfection technical guidelines 2002. the diameter of the zoi was 4.5cm, which shows strong antimicrobial effectiveness against staphylococcus aureus.
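the release percentages reported in examples 7 and 8 follow from simple mass-balance arithmetic: the measured concentration times the extract volume gives the released mass, which is divided by the hydrocolloid mass for the percentage in the patch, and by the formulated dosage for the released fraction. a minimal sketch of that calculation (the function name is illustrative, not from the disclosure; the figures used are those of example 7):

```python
def release_stats(conc_mg_per_ml, extract_ml, hydrocolloid_g, dosage_pct):
    """Mass-balance arithmetic behind examples 7 and 8.

    conc_mg_per_ml : measured concentration in the ethanol extract (GC-FID)
    extract_ml     : approximate extract volume in ml
    hydrocolloid_g : patch weight minus backing weight, in grams
    dosage_pct     : formulated loading of the ingredient, % by weight
    """
    released_mg = conc_mg_per_ml * extract_ml             # mass released into ethanol
    pct_in_patch = released_mg / (hydrocolloid_g * 1000) * 100
    released_fraction = pct_in_patch / dosage_pct * 100   # % of the dosage released
    return pct_in_patch, released_fraction

# example 7: 0.0727 mg/ml in ~25 ml extract, 0.2077 g hydrocolloid, 5% dosage
pct, frac = release_stats(0.0727, 25, 0.2077, 5.0)
print(f"{pct:.2f}% of patch weight, {frac:.1f}% of dosage released")
```

running this reproduces the reported values of example 7 (about 0.88% of the patch and 17.5% of the dosage) to rounding.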
relevant_id: 143-403-849-686-913
earliest claim jurisdiction: US
jurisdictions: CN, EP, WO, US, CA, AU
IPC codes: A41B13/06, A61F5/37, A47D15/00, A47G9/08, A61M21/02
earliest claim date: 2018-02-21 (year 2018)
IPC subclasses: A41, A61, A47
infant sleep garment
an infant sleep garment comprises an enclosure having a first side and a second side. the first side and the second side define an enclosure volume. the enclosure includes a first portion, a second portion, a weight element, and a support element. the first portion is configured to accommodate a lower body of an infant within the enclosure volume. the second portion is located in a superior position relative to the first portion and is configured to accommodate an upper body of the infant. the weight element is couplable to the first side at a location corresponding to an abdominal area of the infant when the infant is enclosed. the support element is couplable to the second side at a location corresponding to the lower body of the infant. the support element is configured to elevate the hips and feet of the infant relative to the upper body of the infant.
claims what is claimed is: 1. an infant sleep garment comprising: an enclosure having a first side and a second side, the first side and the second side together defining an enclosure volume therebetween, wherein the enclosure comprises: a first portion configured to accommodate at least a portion of a lower body of an infant within the enclosure volume when the infant is enclosed therein; and a second portion located in a superior position relative to the first portion and configured to accommodate at least a portion of an upper body of the infant within the enclosure volume when the infant is enclosed therein; a weight element coupled or couplable to the first side along the second portion at a location corresponding to an abdominal or chest area of the infant when the infant is enclosed within the enclosure volume; a support element coupled or couplable to the second side along the first portion at a location corresponding to at least a portion of the lower body of the infant when the infant is enclosed within the enclosure volume, wherein the support element is configured to elevate legs and feet of the infant relative to the at least a portion of the upper body of the infant. 2. the infant sleep garment of claim 1, wherein the weight element weighs between 0.5 pounds and 2 pounds, and wherein the support element includes an arcuate cross-section that extends a distance into the enclosure volume of at least 4 inches. 3. 
an infant sleep garment comprising: an enclosure having a first side and a second side, the first side and the second side together defining an enclosure volume therebetween, wherein the enclosure comprises: a first portion configured to accommodate at least a portion of a lower body of an infant within the enclosure volume when the infant is enclosed therein; and a second portion located in a superior position relative to the first portion and configured to accommodate at least a portion of an upper body of the infant within the enclosure volume when the infant is enclosed therein; wherein the first side is configured to couple with a weight element along the second portion and the second side is configured to couple with a support element along the first portion, wherein, when coupled with the first side, the weight element is positioned to apply weight to at least a portion of the upper body of the infant, and wherein, when coupled with the second side, the support element is positioned to support the at least a portion of the lower body of the infant to elevate hips and feet of the infant relative to the at least a portion of the upper body of the infant. 4. the infant sleep garment of claim 3, wherein the first side is configured to couple with the weight element at a location configured to position the weight over a chest or abdominal area of the infant when the infant is enclosed in the enclosure volume to apply pressure to the same when the weight element is received therein. 5. the infant sleep garment of any one of claims 3 or 4, wherein the weight element weighs between 0.5 and 1.5 pounds. 6. the infant sleep garment of any one of claims 3-5, wherein the weight element weighs about 1 pound. 7. the infant sleep garment of any one of claims 3-6, wherein the first side includes a compartment configured to receive the weight element to thereby couple the weight at the first side. 8. 
the infant sleep garment of any one of claims 3-7, wherein, when coupled at the second side, the support element extends 4.5 to 5.5 inches into the enclosure volume. 9. the infant sleep garment of any one of claims 3-8, wherein the support element includes a cylindrical cross-section portion that, when coupled at the second side, extends into the enclosure volume. 10. the infant sleep garment of any one of claims 3-9, wherein, when coupled at the second side, an arcuate cross-section portion of the support element extends 4.5 to 5.5 inches into the enclosure volume. 11. the infant sleep garment of any one of claims 3-10, wherein the second side includes a compartment configured to receive the support element. 12. the infant sleep garment of claim 11, wherein the compartment configured to receive the support element is positioned at a location along the second side corresponding to at least thighs and feet of the infant to underlay the same when the infant is enclosed within the enclosure volume. 13. the infant sleep garment of claim 11, wherein the compartment configured to receive the support element is positioned at a location along the second side corresponding to at least hips of the infant to underlay the same when the infant is enclosed within the enclosure volume. 14. the infant sleep garment of any one of claims 3-13, wherein the first side and the second side are selectively couplable along their respective lateral peripheries extending between the first and second portions, and wherein adjacent lateral peripheries of the first and second sides include attachment members configured to couple the adjacent lateral peripheries. 15. the infant sleep garment of claim 14, wherein the attachment members comprise zipper halves that are matable with adjacent zipper halves to couple the lateral peripheries. 16. the infant sleep garment of any one of claims 3-15, wherein the first and second sides are configured to accommodate an infant sleep sack within the enclosure volume. 
17. the infant sleep garment of claim 16, wherein the enclosure includes laterally positioned openings to accommodate passage of a sleep sack securing mechanism from the enclosure volume to an exterior of the enclosure for attachment to a moveable sleep platform to thereby indirectly fix the enclosure to movement of the sleep platform. 18. the infant sleep garment of claim 3, wherein the first side includes a compartment configured to receive the weight element to thereby couple the weight at the first side and the second side includes a compartment configured to receive the support element, wherein the compartment configured to receive the weight element is positioned along the first side at a location corresponding to a chest or abdominal area of the infant when the infant is enclosed within the enclosure volume such that the weight element when received therein applies pressure to the same, wherein the compartment configured to receive the support element is positioned along the second side at a location to underlay the lower body of the infant when the infant is enclosed within the enclosure volume, and wherein, when received within the compartment, the support element extends a distance into the enclosure volume between 4.5 to 5.5 inches. 19. 
an infant sleep garment comprising: an enclosure having a first side and a second side, the first side and the second side together defining an enclosure volume therebetween, wherein the enclosure comprises: a first portion configured to accommodate at least a portion of a lower body of an infant within the enclosure volume when the infant is enclosed therein; and a second portion located in a superior position relative to the first portion and configured to accommodate at least a portion of an upper body of the infant within the enclosure volume when the infant is enclosed therein; and a weight element, a support element, or both; wherein the weight element is coupled or couplable to the first side along the second portion at a location corresponding to an abdominal or chest area of the infant when the infant is enclosed within the enclosure volume, wherein the support element is coupled or couplable to the second side along the first portion at a location corresponding to at least a portion of the lower body of the infant when the infant is enclosed within the enclosure volume, and wherein the support element is configured to elevate hips and feet of the infant relative to the at least a portion of the upper body of the infant. 20. the infant sleep garment of claim 19, wherein the weight element weighs between 0.5 pounds and 1.5 pounds, and wherein the support element includes an arcuate cross-section that extends a distance into the enclosure volume at least 4 inches.
infant sleep garment technology field [0001] the present disclosure relates to infant garments, and specifically to garments with infant calming and sleep-aid properties intended to assist in triggering a calming reflex in an infant. background [0002] persistent crying and poor infant sleep are perennial and ubiquitous causes of parent frustration. during the first months of life, babies fuss/cry an average of about two hours/day and wake two to three times a night. one in six infants is brought to a medical professional for evaluation for sleep/cry issues. [0003] infant crying and parental exhaustion are often demoralizing and directly linked to marital conflict, anger towards the baby, impaired job performance, and are primary triggers for a cascade of serious/fatal health sequelae, including postpartum depression (which affects about 15% of all mothers and 25- 50% of their partners), breastfeeding failure, child abuse and neglect, infanticide, suicide, unsafe sleeping practices, sids/suffocation, cigarette smoking, excessive doctor visits, overtreatment of infants with medication, automobile accidents, dysfunctional bonding, and perhaps maternal and infant obesity. thus, there is a need for improved sleep aids to promote sleep (by reducing sleep latency and increasing sleep efficiency). "sleep latency" may be defined as the length of time between going to bed and falling asleep. "sleep efficiency" may be defined as the ratio of time spent asleep (total sleep time) to the amount of time spent in bed. summary [0004] in one aspect, an infant sleep garment comprises an enclosure having a first side and a second side. the first side and the second side may together define an enclosure volume therebetween. the enclosure may include a first portion, a second portion, a weight element, and a support element. the first portion may be configured to accommodate at least a portion of a lower body of an infant within the enclosure volume when the infant is enclosed therein. 
the second portion may be located in a superior position relative to the first portion and be configured to accommodate at least a portion of an upper body of the infant within the enclosure volume when the infant is enclosed therein. the weight element may be coupled or couplable to the first side along the second portion at a location corresponding to an abdominal or chest area of the infant when the infant is enclosed within the enclosure volume. the support element may be coupled or couplable to the second side along the first portion at a location corresponding to at least a portion of the lower body of the infant when the infant is enclosed within the enclosure volume. the support element may be configured to elevate hips and feet of the infant relative to the at least a portion of the upper body of the infant. [0005] in one embodiment, the weight element weighs between 0.5 pounds and 1.5 pounds and the support element includes an arcuate cross-section that extends a distance into the enclosure volume of at least 4 inches. [0006] in another aspect, an infant sleep garment includes an enclosure having a first side and a second side. the first side and the second side may together define an enclosure volume therebetween. the enclosure may include a first portion and a second portion. the first portion may be configured to accommodate at least a portion of a lower body of an infant within the enclosure volume when the infant is enclosed therein. the second portion may be located in a superior position relative to the first portion and may be configured to accommodate at least a portion of an upper body of the infant within the enclosure volume when the infant is enclosed therein. the first side may be configured to couple with a weight element along the second portion. the second side may be configured to couple with a support element along the first portion. 
when coupled with the first side, the weight element may be positioned to apply weight to at least a portion of the upper body of the infant. when coupled with the second side, the support element may be positioned to support the at least a portion of the lower body of the infant to elevate hips and feet of the infant relative to the at least a portion of the upper body of the infant. [0007] in various embodiments, the first side includes a compartment configured to receive the weight element to thereby couple the weight at the first side. in one embodiment, the first side is configured to couple with the weight element at a location configured to position the weight over a chest or abdominal area of the infant when the infant is enclosed in the enclosure volume to apply pressure to the same when the weight element is received therein. in one example, the weight element weighs between 0.5 and 1.5 pounds. [0008] in some embodiments, the second side includes a compartment configured to receive the support element. the compartment may be configured to receive the support element and be positioned at a location along the second side corresponding to at least thighs and feet of the infant to underlay the same when the infant is enclosed within the enclosure volume. in one embodiment, the compartment configured to receive the support element is positioned at a location along the second side corresponding to at least hips of the infant to underlay the same when the infant is enclosed within the enclosure volume. [0009] in various embodiments, when coupled at the second side, the support element extends 4.5 to 5.5 inches into the enclosure volume. in one example, the support element includes a cylindrical cross-section portion that, when coupled at the second side, extends into the enclosure volume. 
in a further example, when coupled at the second side, an arcuate cross-section portion of the support element may extend a distance greater than 4 inches or between 4.5 to 5.5 inches into the enclosure volume. [0010] in an embodiment, the first side and the second side are selectively couplable along their respective lateral peripheries extending between the first and second portions. adjacent lateral peripheries of the first and second sides may include attachment members configured to couple the adjacent lateral peripheries. in one example, the attachment members comprise zipper halves that are matable with adjacent zipper halves to couple the lateral peripheries. [0011] in some embodiments, the first and second sides may be configured to accommodate an infant sleep sack within the enclosure volume. in one example, the enclosure includes laterally positioned openings to accommodate passage of a sleep sack securing mechanism from the enclosure volume to an exterior of the enclosure for attachment to a moveable sleep platform to thereby indirectly fix the enclosure to movement of the sleep platform. [0012] in one embodiment, the first side includes a compartment configured to receive the weight element to thereby couple the weight at the first side and the second side includes a compartment configured to receive the support element. the compartment configured to receive the weight element may be positioned along the first side at a location corresponding to a chest or abdominal area of the infant when the infant is enclosed within the enclosure volume such that the weight element when received therein applies pressure to the same. the compartment configured to receive the support element may be positioned along the second side at a location to underlay the lower body of the infant when the infant is enclosed within the enclosure volume. 
when received within the compartment, the support element may extend into the enclosure volume a distance greater than 4 inches or between 4.5 to 5.5 inches. [0013] in still another aspect, an infant sleep garment comprises an enclosure having a first side and a second side. the first side and the second side may together define an enclosure volume therebetween. the enclosure may include a first portion and a second portion. the first portion may be configured to accommodate at least a portion of a lower body of an infant within the enclosure volume when the infant is enclosed therein. the second portion may be located in a superior position relative to the first portion and be configured to accommodate at least a portion of an upper body of the infant within the enclosure volume when the infant is enclosed therein. the enclosure may further include a weight element, a support element, or both. the weight element may be coupled or couplable to the first side along the second portion at a location corresponding to an abdominal or chest area of the infant when the infant is enclosed within the enclosure volume. the support element may be coupled or couplable to the second side along the first portion at a location corresponding to at least a portion of the lower body of the infant when the infant is enclosed within the enclosure volume. the support element may be configured to elevate hips and feet of the infant relative to the at least a portion of the upper body of the infant. [0014] in one embodiment, the weight element weighs between 0.5 pounds and 1.5 pounds and the support element includes an arcuate cross-section that extends a distance into the enclosure volume of at least 4 inches. 
[0015] in still another aspect, an embodiment of the present disclosure includes an infant sleep garment comprising a body having a first side, and a second side opposite the first side, the first and second sides defining an internal volume therebetween, the internal volume comprising: a first portion configured to accommodate at least a portion of an infant's lower body; and a second portion located in a superior position relative to the first portion and configured to accommodate at least a portion of the infant's upper body. [0016] in an embodiment, the sleep garment may further comprise a first accommodation mechanism connected to the body for receiving a support element, the first accommodation mechanism being located proximate to and inferior relative to the first portion at the second side, and wherein the support element, upon being received at the first accommodation mechanism, is configured to support the infant's legs at an elevated angle relative to the infant's hips. [0017] in an embodiment, the sleep garment may further comprise a first securing mechanism, wherein the first securing mechanism is connected to the body and configured to fix the sleep garment to a sleep surface. [0018] in an embodiment, a surface of the support element may be inclined upwardly or downwardly from a proximal end to a distal end. in an embodiment, the surface of the support element may be level, partially inclined, or shaped to have any geometry that allows the infant to rest in a comfortable position. [0019] in an embodiment, the support element may comprise a flat portion and an inclined portion, the inclined portion further comprising the inclined surface. [0020] in an embodiment, the first side may comprise a first coupling mechanism coincident with the sagittal plane of the infant and allowing access to the internal volume. [0021] in an embodiment, a portion of the body may comprise a mesh fabric for allowing air to move therethrough. 
[0022] in an embodiment, the elevated angle relative to the infant's hips may be between 30 and 160 degrees. [0023] in an embodiment, the support element, upon being received at the first accommodation mechanism, may be located outside the interior volume. [0024] in an embodiment, the support element, upon being received at the first accommodation mechanism, may be located inside the interior volume. [0025] in an embodiment, the infant sleep garment may further comprise a second securing mechanism. the second securing mechanism may be attached to the body and configured to fix an infant within the interior volume. [0026] in an embodiment, the infant sleep garment may further comprise a second attachment mechanism. the second attachment mechanism may be configured to receive a weight element at a location proximate to the second portion at the first side of the body, thereby applying pressure to the infant's body. [0027] in an embodiment, the second attachment mechanism may be connected to the body at the location proximate to the second portion at the first side of the body. [0028] in an embodiment, the weight element may weigh between 1 ounce and 3 pounds. [0029] in an embodiment, the infant sleep garment may further comprise a first accommodation mechanism connected to a sleep surface for receiving a support element, the first accommodation mechanism being located proximate to and inferior relative to the first portion at the second side; wherein the support element, upon being received at the first accommodation mechanism, may be configured to support the infant's legs at an elevated angle relative to the infant's hips. [0030] in an embodiment of the present disclosure, an infant sleep garment may further comprise: a second attachment mechanism connected to the body. 
the second attachment mechanism may be configured to receive a weight element at a location proximate to the second portion at the first side of the body, thereby applying pressure to the infant's body. [0031] in an embodiment, the sleep garment may further comprise an outer enclosure configured to accommodate a portion of the body therein. [0032] in an embodiment, the sleep garment may further comprise a first accommodation mechanism connected to the outer enclosure for receiving a support element, the first accommodation mechanism being located proximate to and inferior relative to the first portion at the second side; wherein the support element, upon being received at the first accommodation mechanism, may be configured to support the infant's legs at an elevated angle relative to the infant's hips. [0033] in an embodiment, the outer enclosure may comprise a second coupling mechanism extending partially around the periphery of the outer enclosure and allowing accommodation of the body therein. [0034] in an embodiment, a portion of the outer enclosure may comprise a mesh fabric for allowing air to move therethrough. [0035] in an embodiment, the sleep garment may be configured to receive a weight element at a location proximate to the second portion at the first side of the body, thereby applying pressure to the infant's body. [0036] in an embodiment, the second portion of the internal volume may comprise a compartment to receive the weight element at the location proximate to the second portion at the first side of the body. [0037] in an embodiment, the weight element may be received at the location proximate to the second portion at the first side of the body by connecting the weight element to the second portion. 
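the embodiments above bound the key design parameters numerically: a weight element of roughly 0.5-1.5 pounds (1 ounce to 3 pounds in the broadest embodiment), a support element extending about 4.5-5.5 inches into the enclosure volume, and a leg elevation angle between 30 and 160 degrees. as a purely illustrative sketch (the dataclass and field names are hypothetical, not from the disclosure), a check of a candidate design against the narrower preferred ranges could read:

```python
# Illustrative check of a candidate garment design against the numeric ranges
# stated in the embodiments above. The class and field names are hypothetical.
from dataclasses import dataclass

@dataclass
class GarmentParams:
    weight_lb: float       # weight element, pounds
    support_in: float      # support element extension into the enclosure, inches
    elevation_deg: float   # leg elevation angle relative to the hips, degrees

def within_envelope(p: GarmentParams) -> bool:
    """True if all parameters fall in the preferred ranges given above."""
    return (0.5 <= p.weight_lb <= 1.5
            and 4.5 <= p.support_in <= 5.5
            and 30 <= p.elevation_deg <= 160)

# preferred embodiment: ~1 pound weight, ~5 inch support extension
print(within_envelope(GarmentParams(1.0, 5.0, 45)))
```

a 3-pound weight or a 2-inch support extension would fail this check, flagging a design outside the preferred embodiment.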
[0038] in an embodiment, the sleep garment may be configured to receive a weight element at a location proximate to the second portion at the first side of the body, thereby applying pressure to the infant's body; and the outer enclosure further may comprise a second compartment for accommodating the weight element at the location proximate to the second portion at the first side of the body. [0039] in an embodiment, the weight element may be received at the location proximate to the second portion at the first side of the body by connecting the weight element to the second portion. [0040] in an embodiment, the second portion of the internal volume may comprise a compartment to receive the weight element at the location proximate to the second portion at the first side of the body. [0041] in another aspect of the present disclosure, an enclosure for accommodating an infant may comprise: a first side and a second side opposite the first side; a first portion configured to accommodate an infant's legs between the first and second sides; a second portion for accommodating the infant's torso beneath the first side; wherein the first portion may comprise a first accommodation mechanism for receiving a support element configured to support the infant's legs at an elevated angle relative to the infant's hips; and wherein the second portion may comprise a second accommodation mechanism for receiving a weight element configured to apply pressure to the infant's body. [0042] in an embodiment, portions of the first and second side may comprise a mesh fabric for allowing air to move therethrough. [0043] in an embodiment, the first and second side may be at least partially connected by a coupling mechanism extending partially around the periphery of the first portion. [0044] in an embodiment, the first accommodation mechanism may comprise a compartment for receiving the support element therein. 
[0045] in an embodiment, the second accommodation mechanism may comprise a compartment for receiving the weight element therein. [0046] in an embodiment, the first and second sides may be configured to at least partially accommodate an infant sleep garment therebetween. brief description of the drawings [0047] the novel features of the described embodiments are set forth with particularity in the appended claims. the described embodiments, however, both as to organization and manner of operation, may be best understood by reference to the following description, taken in conjunction with the accompanying drawings in which: [0048] fig. 1 is a perspective view illustration of an infant sleep garment according to various embodiments described herein; [0049] fig. 2 is a top-down perspective view illustration of the first side of an infant sleep garment according to various embodiments described herein; [0050] fig. 3 is a bottom-up perspective view illustration of the second side of an infant sleep garment according to various embodiments described herein; [0051] fig. 4 is a left-to-right perspective view illustration of the left side of an infant sleep garment according to various embodiments described herein; [0052] fig. 5 is a right-to-left perspective view illustration of the right side of an infant sleep garment according to various embodiments described herein; [0053] fig. 6 is a perspective view illustration of a sleep surface according to various embodiments described herein; [0054] fig. 7 is a perspective view illustration of a first securing mechanism according to various embodiments described herein; [0055] fig. 8 is a perspective view of an outer enclosure according to various embodiments described herein; [0056] fig. 9 is a perspective view of an outer enclosure according to various embodiments described herein; [0057] fig. 10 is a bottom view of an enclosure for accommodating an infant according to various embodiments described herein; [0058] fig.
11 is a top view of an enclosure for accommodating an infant according to various embodiments described herein; [0059] fig. 12 is a top view of an opened enclosure for accommodating an infant according to various embodiments described herein; [0060] fig. 13 is a perspective view of a support element according to various embodiments described herein; [0061] fig. 14 is a top view of an enclosure including a first side according to various embodiments described herein; [0062] fig. 15 is a bottom view of an enclosure including a second side according to various embodiments described herein; [0063] fig. 16 is a top view of an enclosure wherein a first side has been folded down to reveal an enclosure volume according to various embodiments described herein; [0064] figs. 17a & 17b illustrate a zipper garage feature according to various embodiments described herein; [0065] fig. 18a is a top view of an infant positioned within an enclosure volume of an enclosure with a first side of the enclosure folded down according to various embodiments described herein; [0066] fig. 18b is a side view of the infant positioned within the enclosure volume of the enclosure shown in fig. 18a according to various embodiments described herein; [0067] fig. 19 is a top view of an infant positioned within an enclosure volume of an enclosure wherein a first side of the enclosure is positioned over the infant according to various embodiments described herein; [0068] fig. 20 is a top view of an infant secured within an enclosure volume of an enclosure according to various embodiments described herein; [0069] fig. 21a illustrates a securing mechanism pulled through an opening in a side of an enclosure according to various embodiments described herein; [0070] fig. 21b illustrates a securing mechanism being pulled from a pocket in a side of the enclosure of fig. 21a according to various embodiments described herein; [0071] fig.
22 is a top view of an infant positioned within an enclosure volume of an enclosure wherein the enclosure and infant are indirectly secured to a platform of a sleep device according to various embodiments described herein; and [0072] figs. 23a & 23b illustrate a neck opening adjustment mechanism according to various embodiments described herein. detailed description [0073] traditional parenting practices have utilized swaddling, rhythmic motion and certain sounds to soothe fussing infants and promote sleep by reducing sleep latency and increasing sleep efficiency. [0074] swaddling, rhythmic motion and certain positions and sounds may be utilized to imitate elements of the in utero sensory milieu and activate a suite of subcortical reflexes, called the "calming reflex," during the first 4-6 months of a baby's life. [0075] swaddling, for example, is a method of snug wrapping with the arms restrained at the sides of the baby. this imitates the confinement and continual touch a baby experienced in the womb. swaddling also inhibits startling and flailing, which often interrupts sleep and starts/exacerbates crying. [0076] rhythmic motions may include jiggling motions that replicate movement a baby experienced as a fetus when the mother was walking. this motion is believed to stimulate the vestibular apparatus in the inner ear. a specific rumbling noise may also be incorporated that imitates the sound created by the turbulence of the blood flowing through the uterine and umbilical arteries. in utero, the sound level a fetus hears has been measured at between 75 and 92 db. [0077] like the children and adults they will grow to become, each baby is an individual that may favor a specific and/or unique mix of motion and sound that most efficiently activates his or her calming reflex.
this preferred mix is believed to stay consistent through the first months of life (e.g., babies who respond best to swaddling plus jiggling continue to respond to those modalities over time and do not abruptly switch their preference to swaddling plus sound). in utero, babies are confined in a small space requiring them to flex their hips and knees with the knees pressing against the stomach and arms pressed against the chest. [0078] the calming reflex has several constant characteristics. it is triggered by a stereotypical sensory input; produces a stereotypical behavioral output; demonstrates a threshold phenomenon (i.e. stimuli that are too mild may not be sufficient to activate a response); has a threshold that varies between individuals (i.e. is higher or lower for any given child); the threshold varies by state (e.g. fussing and crying raise the level of stimulation required to exceed threshold and bring about reflex activation); the reflex is universally present and relatively obligatory at birth, but wanes after 3-4 months of age. [0079] in addition, crib death or suid (sudden unexplained infant death) is a leading cause of infant mortality. approximately 3700 us babies die each year from suid during the first year of life. the peak occurrence is from 2-4 months of age, with 80% of the victims being under 4 months and 90% being under 6 months of age. [0080] in the 1990s, a program to reduce suid deaths called "back to sleep" was introduced. at that time, it was discovered that sleeping on the stomach was a key triggering factor in many of the deaths, so caregivers were instructed to place babies on their backs for sleeping. within less than a decade, the rate of suid dropped almost in half; however, since 1999, the suid incidence has barely diminished. studies have indicated that stomach sleeping may indeed predispose babies to suid by causing suffocation or by reducing infant arousability and inhibiting breathing.
[0081] in addition, many babies fall out of their bassinet during the first 6 months of life. federal reports reveal that 69% of recent bassinet/cradle incidents have been attributed to falling. all falls resulted in head injury. alarmingly, 45% of falls occurred in infants five months old or less. [0082] therefore, a need exists for an infant garment that constrains the movement of the infant relative to a sleep surface while promoting the calming reflex of the infant. [0083] in an embodiment of the present disclosure, an infant sleep garment may comprise a body having a first side, and a second side opposite the first side, the first and second sides defining an internal volume therebetween, the internal volume comprising: a first portion configured to accommodate at least a portion of an infant's lower body; and a second portion located in a superior position relative to the first portion and configured to accommodate at least a portion of the infant's upper body. [0084] in an embodiment, the sleep garment may further comprise a first accommodation mechanism connected to the body for receiving a support element, the first accommodation mechanism being located proximate to and inferior relative to the first portion at the second side, and wherein the support element, upon being received at the first accommodation mechanism, is configured to support the infant's legs at an elevated angle relative to the infant's hips. [0085] fig. 1 illustrates an infant sleep garment 100 according to various embodiments described herein. the sleep garment 100 may include a body 102, which may also be referred to herein as a sleep sack and may be configured as a clothing article or an over garment to be worn by an infant. the body 102 may include an infant enclosure region comprising an interior volume. the body 102 may include a coupling mechanism to enclose the infant within the interior volume.
the body 102 may be configured to swaddle an infant when within the interior volume thereof. in some embodiments, the body 102 may comprise a sleep sack as described in u.s. patent application no. 15/055,077, filed february 26, 2016, titled infant calming/sleep-aid and sids prevention device with drive system, or u.s. patent application no. 15/336,519, filed october 27, 2016, titled infant calm/sleep-aid, sids prevention device, and method of use, the disclosures of both of which are hereby incorporated herein by reference. [0086] with reference to figs. 1-5, body 102 has first side 104 and second side 106 opposite first side 104, first side 104 and second side 106 defining internal volume 108 therebetween. internal volume 108 comprises first portion 110 configured to accommodate at least a portion of an infant's lower body. in an embodiment, a width of first portion 110 may be the average width of an infant's hips, or a circumference of first portion 110 may be the average waist circumference of an infant. in an embodiment, first portion 110 may be a material with elastic properties that shrinks or expands to accommodate at least a portion of an infant's lower body, or may be adjustable such that a user may set an appropriate dimension of first portion 110 to accommodate an infant's lower body. [0087] internal volume 108 may further define a second portion 112 located in a superior position relative to the first portion 110 and configured to accommodate the infant's upper body. the internal volume 108 may further define a third portion 113 located in an inferior position relative to the first portion 110 and configured to accommodate the infant's feet. in the context of the present disclosure, a superior position is one relatively further towards the head of an infant, while an inferior position is one relatively further towards the feet of an infant. [0088] fig. 2 is a top-down perspective view illustration of the first side 104 of infant sleep garment 100.
in the example shown in fig. 2, the first side 104 may comprise a first coupling mechanism 124 coincident with the sagittal plane of the infant and allowing access to the internal volume 108. a portion of the body 102 may comprise a mesh fabric 126 for allowing air to move therethrough. first coupling mechanism 124 may include but is not limited to any sealable and unsealable mechanism, such as a zipper mechanism, a hook and loop attachment mechanism, a push snap attachment, a magnetic attachment mechanism, fasteners, clips, buttons, or any similar mechanism. [0089] fig. 2 also shows an embodiment of a first securing mechanism 114. first securing mechanism 114 is connected to the body 102 and configured to fix the sleep garment to a sleep surface 116 (shown in fig. 6). in an embodiment, the first securing mechanism 114 may be any mechanism configured to secure the sleep garment 100 to sleep surface 116 to prevent an infant inside a sleep garment from rolling over or otherwise moving into an unsafe disposition. the first securing mechanism 114 may include but is not limited to any of the following: a strap configured to attach to a clip, an elastic strap, a hook and loop attachment mechanism, a push snap attachment, a zipper mechanism, a magnetic attachment mechanism, or any similar mechanism configured to secure sleep garment 100 to sleep surface 116. an embodiment of a securing mechanism 114 may comprise a sleeve 117 or clip 123 for engaging a corresponding clip or sleeve, which may be connected to the sleep surface 116, e.g., in a manner shown in fig. 7. in an embodiment, the first securing mechanism 114 of the sleep garment 100 may be configured to attach to another garment, or sleep garment 100 may be accommodated within another garment, wherein the other garment is attachable to sleep surface 116, thereby indirectly securing the sleep garment 100 to sleep surface 116. [0090] with particular reference to figs.
1-3, a support element 118 may be configured to be received at location 115 that is proximate to and inferior relative to the first portion 110 at the second side 106, thereby supporting the infant's legs at an elevated angle relative to the infant's hips. in an embodiment, support element 118 may be placed at location 115 by a user, without an attachment mechanism. in the embodiment shown in figs. 1-3, support element 118 may be received at first accommodation mechanism 121 located at location 115 that is proximate to and inferior relative to the first portion 110 at the second side 106. the first accommodation mechanism 121 may be connected to the body 102 of sleep garment 100 as shown in fig. 3, or may be connected to the sleep surface 116 of fig. 6, or may be connected to outer enclosure 120 which may receive support element 118, as shown in figs. 8-9. the first accommodation mechanism 121 may include but is not limited to any mechanism configured to receive support element 118, such as a pocket, an enclosure, a strap configured to attach to a clip, an elastic strap, a hook and loop attachment mechanism, a push snap attachment, a zipper mechanism, a magnetic attachment mechanism, or any similar mechanism configured to receive support element 118. [0091] it is to be appreciated that the support element 118 may be configured to have different shapes and dimensions in accordance with the principles disclosed herein. in an embodiment, a surface 119 of the support element 118 may be flat, inclined upwardly or downwardly, rounded, recessed, partially inclined or any combination thereof. in an embodiment, the surface 119 may be inclined from one end to another end, as shown in fig. 1. in an embodiment, a support element may comprise a flat surface. in an embodiment shown in fig. 13, support element 218 may comprise a first portion 230 and a second portion 232, first portion 230 comprising a flat surface and second portion 232 comprising an inclined surface.
the incline of the surfaces 119 and 232 may be configured to promote a desired elevated angle between the infant's legs and hips. in an embodiment, the elevated angle relative to the infant's hips may be between 30 and 160 degrees, which is a range that may be effective in comforting certain infants. the raising of an infant's legs to within this range may preferably relax the infant's abdomen muscles, promoting a calming reflex. in an embodiment of a support element, there may be multiple upwardly or downwardly inclined, flat, or otherwise shaped portions from a proximal to a distal end. a support element may comprise other shapes and geometries in accordance with the principles disclosed herein. for example, in an embodiment, a surface of the support element may be contoured to accommodate each leg of an infant separately. in an embodiment, the height of a distal end of the support element may be lower than that of the proximal end. [0092] in an embodiment, the support element 118, upon being received at the location 115 that is proximate to and inferior relative to the first portion 110 at the second side 106, may be located outside the interior volume 108. [0093] in an embodiment, the support element 118 may be received at first accommodation mechanism 121 located at location 115 outside the interior volume 108 by connecting the support element 118 to the sleep surface 116. it is to be appreciated that the support element 118 may be connected to the sleep surface 116 by any type of connectors (not shown) in accordance with the principles of the present disclosure. for example, the support element 118 may be connected to sleep surface 116 by a strap configured to attach to a clip, an elastic strap, a hook and loop attachment mechanism, a push snap attachment, a zipper mechanism, a magnetic attachment mechanism, or any similar mechanism configured to connect support element 118 to sleep surface 116.
in an embodiment, support element 118 may also be inserted into an accommodation space (not shown) of sleep surface 116. [0094] in an embodiment, the support element 118 may be received at the location 115 that is proximate to and inferior relative to the first portion 110 at the second side 106 by connecting the support element 118 to the body 102 of the garment 100. support element 118 may be connected to body 102 at the second side 106 by any type of connectors (not shown) in accordance with the principles of the present disclosure. for example, the support element 118 may be connected to the garment 100 by a strap configured to attach to a clip, an elastic strap, a hook and loop attachment mechanism, a push snap attachment, a zipper mechanism, a magnetic attachment mechanism, or any similar mechanism configured to attach support element 118 to second side 106. in an embodiment, support element 118 may also be inserted into an accommodation space (not shown) defined on the exterior or interior of the second side 106 of the garment 100. [0095] in an embodiment, the support element 118, upon being received at the location 115 that is proximate to and inferior relative to the first portion 110 at the second side 106, may be located inside the interior volume 108. for example, an accommodation space (not shown) may be defined by a compartment between the first and second sides 104, 106 of the garment 100 for receiving the support element 118 in the interior volume 108. [0096] in an embodiment, the support element 118, upon being connected to the garment 100, may be located outside the interior volume 108. for example, the garment 100 may further include an outer enclosure 120 for enclosing the support element 118 in a compartment 122 outside the body 102. [0097] fig. 8 is a perspective view of an embodiment of the outer enclosure 120. 
the outer enclosure 120 may be configured to accommodate a portion of the body 102 therein and define an internal compartment 122 for accommodating the support element 118 and the third portion 113 of the body 102. the support element 118, upon being accommodated in the internal compartment 122, may be received at the location 115 that is proximate to and inferior relative to the first portion 110 at the second side 106 and under the third portion 113. [0098] in fig. 8, the outer enclosure 120 may comprise a second coupling mechanism 128 extending at least partially around the periphery of outer enclosure 120. in an embodiment, the second coupling mechanism 128 may be a strap configured to attach to a clip, an elastic strap, a hook and loop attachment mechanism, a push snap attachment, a zipper mechanism, a magnetic attachment mechanism, or any similar mechanism configured to seal and unseal outer enclosure 120 to accommodate body 102 therein. at least a portion of the outer enclosure 120 may comprise mesh fabric 126 for allowing air to move therethrough. in an embodiment, at least a surface of the outer enclosure 120 proximate to the second side 106 may be comprised entirely of a mesh structure. in an embodiment, a majority of the outer enclosure 120 may comprise a mesh structure. [0099] referring back to fig. 1, the infant sleep garment 100 may further comprise a second securing mechanism 130; and the second securing mechanism 130 may be attached to the body 102 and configured to fix an infant within the interior volume 108. the second securing mechanism 130 may include but is not limited to any of the following: a strap configured to attach to a clip, an elastic strap, a hook and loop attachment mechanism, a push snap attachment, a zipper mechanism, a magnetic attachment mechanism, or any similar mechanism configured to fix an infant within the interior volume 108.
the second securing mechanism 130 may be located inside the interior volume 108 or may be located outside the interior volume 108, and may be proximate the first portion 110, second portion 112, or third portion 113. [00100] in fig. 9, the sleep garment 100 may be configured to receive a weight element 132 at a location 134 proximate to the second portion 112 at the first side 104 of the body 102, thereby applying pressure to the infant's body. in an embodiment, a sleep garment may be configured to receive a weight element at a location to thereby apply pressure to the infant's upper body, lower body, or both upper and lower body simultaneously. [00101] in an embodiment, the second portion 112 of the internal volume 108 may comprise a compartment (not shown) to receive the weight element 132 at the location 134 proximate to the second portion 112 at the first side 104 of the body 102. [00102] in an embodiment, the weight element 132 may be received at the location 134 proximate to the second portion 112 at the first side 104 of the body 102 by connecting the weight element 132 to the first side 104 of the body 102 by a connector (not shown) which may include but is not limited to any of the following: a strap configured to attach to a clip, an elastic strap, a hook and loop attachment mechanism, a push snap attachment, a zipper mechanism, a magnetic attachment mechanism, or any similar mechanism. [00103] in fig. 9, the outer enclosure 120 may further comprise a second compartment 136 for accommodating the weight element 132 at the location 134 proximate to the second portion 112 at the first side 104 of the body 102. [00104] in an embodiment, the weight element 132 may weigh between 1 ounce and 3 pounds, preferably between 0.5 and 1.5 pounds, between 1 and 1.5 pounds, or about 0.5 pounds, about 0.75 pounds, about 1 pound, about 1.25 pounds, about 1.5 pounds, or about 1.75 pounds.
by positioning the weight element 132 at location 134 proximate an infant's chest, the pressure applied by the weight element 132 may elicit a calming response from the infant, aiding in the sleep of the infant. further, upon being received at location 134, the weight element 132 may be fixed relative to an infant within the sleep garment, and may be prevented from interfering with the sleep of the infant. in an embodiment, the pressure applied by weight element 132 upon being received at location 134 may be distributed over the chest and stomach of the infant. in an embodiment, the pressure from the weight element may be at least partially distributed over the lower body of the infant as well. in an embodiment, the weight element may be received at an alternate location (not shown) to thereby distribute a portion of the weight over the upper body and a portion of the weight over the lower body. [00105] in an embodiment, the weight element 132 may be received at the location 134 proximate to the second portion 112 at the first side 104 of the body 102 by connecting the weight element 132 to the second portion 112 by a connector (not shown) which may include but is not limited to any of the following: a strap configured to attach to a clip, an elastic strap, a hook and loop attachment mechanism, a push snap attachment, a zipper mechanism, a magnetic attachment mechanism, or any similar mechanism. [00106] in an embodiment, the weight element 132 may comprise any weighted material which may include but is not limited to a metal, a plastic, a ceramic, a polymer, a gel, a liquid, a composite, a natural material, or an artificial material. furthermore, the weight may be flat, round, irregular or any other shape and further may be any size so as to be effective for its functions described herein. [00107] figs. 10-12 illustrate an embodiment of an enclosure 200 for accommodating an infant. fig. 11 shows a view of first side 204, while fig.
10 shows a view of second side 202, which is opposite first side 204. [00108] enclosure 200 comprises a first portion 206 configured to accommodate an infant's legs between the first side 204 and the second side 202. enclosure 200 may further comprise a second portion 208 configured to accommodate the infant's torso beneath the first side 204. [00109] the second side 202 is configured to couple with a support element 212. in fig. 12, first portion 206 comprises a first accommodation mechanism 210 for receiving a support element 212 configured to support the infant's legs at an elevated angle relative to the infant's hips. furthermore, second portion 208 comprises a second accommodation mechanism 214 for receiving a weight element (not shown) for applying pressure to an infant's torso. [00110] referring to figs. 10-12, portions of first side 204 and second side 202 comprise a mesh fabric 216 for allowing air to move therethrough. first side 204 and second side 202 are partially connected by coupling mechanism 218, which extends partially around the periphery of first portion 206. [00111] in fig. 12, the first accommodation mechanism 210 comprises a compartment 222 for receiving the support element 212 therein, and second accommodation mechanism 214 comprises compartment 220 for receiving the weight element (not shown) therein. furthermore, first side 204 and second side 202 are configured to at least partially accommodate an infant sleep garment, such as sleep garment 100, therebetween. when a sleep garment is located between first side 204 and second side 202, the enclosure 200 may accommodate the sleep garment by limiting the second side 202 to the first portion 206, as shown in fig. 10. in an embodiment, enclosure 200 may be configured to attach to a sleep surface, such as sleep surface 116.
in an embodiment, upon enclosure 200 accommodating a sleep garment such as sleep garment 100 therewithin, the sleep garment may be configured to attach to a sleep surface such as sleep surface 116, thereby fixing enclosure 200 to the sleep surface. in an embodiment, the enclosure 200 may alternately be configured to attach to another sleep garment through an attachment mechanism, and the other sleep garment may then be secured to a sleep surface, thereby indirectly securing enclosure 200 to the sleep surface. by attaching the enclosure 200 to a sleep surface, an infant accommodated within enclosure 200 may be prevented from rolling over or otherwise moving into an unsafe disposition. [00112] in an embodiment, the enclosure 200 may comprise a first securing mechanism for securing an infant within the enclosure according to the principles disclosed herein. in an embodiment, the enclosure 200 may comprise a second securing mechanism for securing the enclosure 200 to a sleep surface, such as sleep surface 116, according to the principles disclosed herein. the second securing mechanism may prevent an infant accommodated within the enclosure 200 from rolling over or otherwise moving into an unsafe disposition. furthermore, the attachment mechanisms disclosed herein may be configured to communicate with a control system to detect whether an attachment mechanism is properly secured, and may alert a user or cease some operational function upon detecting that an attachment mechanism is not properly secured. [00113] referring to figs. 10-12, an infant may be placed proximate to enclosure 200, with the infant's hips adjacent to support element 212 and legs positioned on a top surface of support element 212. a user may then operate coupling mechanism 218, partially sealing first side 204 on top of second side 202, accommodating the infant and support element 212 within compartment 222.
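the control-system behavior described in paragraph [00112] above, in which each attachment mechanism reports whether it is properly secured and an unsecured mechanism triggers a user alert or ceases an operational function, could be sketched in software as follows. this is a minimal illustrative sketch only; the class name, method names, and the boolean sensor interface are assumptions for illustration, not identifiers or an implementation from this disclosure.

```python
# hypothetical sketch of the attachment-status check of paragraph [00112]:
# a control system polls the secured/unsecured state of each attachment
# mechanism, records an alert for every unsecured mechanism, and disables
# an operational function (e.g., platform motion) until all are secured.
class AttachmentMonitor:
    def __init__(self, mechanisms):
        # mechanisms: dict mapping mechanism name -> bool (True = secured)
        self.mechanisms = dict(mechanisms)
        self.alerts = []
        self.motion_enabled = True

    def set_state(self, name, secured):
        # update the reported sensor state of one attachment mechanism
        self.mechanisms[name] = secured

    def check(self):
        # re-evaluate all mechanisms; alerts accumulate on each poll
        unsecured = [name for name, ok in self.mechanisms.items() if not ok]
        for name in unsecured:
            self.alerts.append(f"attachment '{name}' is not secured")
        # operation is permitted only when no mechanism is unsecured
        self.motion_enabled = not unsecured
        return self.motion_enabled
```

for example, a monitor constructed with one unsecured clip would return false from `check()` and log one alert; after `set_state` marks the clip secured, a subsequent `check()` re-enables operation.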
first side 204 will then be located above the infant, with compartment 220 positioned proximate to the infant's torso. [00114] fig. 13 illustrates an embodiment of support element 212. support element 212 comprises a first portion 230 and a second portion 232. first portion 230 is flat, while second portion 232 is inclined from a proximal end 236 to a distal end 234. support element 212 has a constant thickness at the first portion 230, and has a rounded, bullnose shape at the distal end 234 and a straight edge at the proximal end 236. in embodiments, a support element may include but is not limited to a foam material, a cushion, an air pocket, or any material configured to support an infant's legs at an elevated angle relative to the infant's torso. the support element may further comprise a fabric case (not shown) surrounding the supportive material. the support element may have a resistance to deformation configured to support the infant's legs in the elevated position according to the principles disclosed herein. the support element and/or the fabric case may also be resistant to liquid or biological materials. [00115] in an embodiment, sleep garment 100 may be configured to accommodate a pillow, gel pad or other type of support (not shown) beneath the head of an infant. in an embodiment, enclosure 200 may be configured to accommodate a pillow, gel pad or other type of support (not shown) beneath the head of an infant. [00116] figs. 14-17b illustrate an enclosure 300 according to various embodiments. the enclosure 300 may be configured to accommodate an infant, which, in some embodiments, may include all or a portion of a body 102 (see, e.g., fig. 1) further enclosing or swaddling the infant therein. [00117] the enclosure 300 includes a first side 304 (fig. 14) and a second side 302 (fig. 15) that together define an enclosure volume 305 (fig. 16) between their respective interior surfaces. fig.
16 illustrates the first side 304 partially separated or detached from the second side 302. the first and second sides 304, 302 may remain attached along a portion of their respective perimeters as shown or along another perimeter portion or may be completely separated (not shown). the enclosure 300 further comprises a first portion 306 configured to accommodate the lower body of an infant within the enclosure volume 305 between the first side 304 and the second side 302. the enclosure 300 may further comprise a second portion 308 configured to accommodate a torso of an infant within the enclosure volume 305 between the first side 304 and the second side 302. [00118] the enclosure 300 may include one or more areas comprising mesh fabric 316 to provide breathability and reduce overheating. for example, the first side 304, the second side 302, or both may include one or more areas comprising a mesh fabric 316. in the illustrated embodiment, the first side 304 includes an area comprising a mesh fabric 316 along the first portion 306 defining a region of the enclosure volume 305 corresponding to a region for enclosing the legs of an infant. the second side 302 includes one or more areas of mesh fabric 316 along the second portion 308 defining regions of the enclosure volume 305 corresponding to regions for enclosing the arms or shoulders of an infant. in an embodiment, at least a surface of the enclosure 300 proximate to the first portion 306 may be comprised entirely of a breathable mesh structure. in a further embodiment, a majority of the enclosure 300 may comprise a breathable mesh structure. [00119] the enclosure 300 may be configured to include or be associated with one or more accommodation mechanisms. the accommodation mechanisms may be similar to the accommodation mechanisms described above with respect to figs. 1-13 and elsewhere herein.
in the illustrated embodiment, the first side 304 is configured to couple with a weight element 332 and the second side 302 is configured to couple with a support element 312. for example, the enclosure 300 is configured to include or associate with an accommodation mechanism 332 comprising a weight element 332 and an accommodation mechanism 310 comprising a support element 312. as shown, the enclosure 300 is configured to receive a weight element 332 at a location 335 proximate to the second portion 308 at the first side 304. the weight element 332 may be positioned such that it applies weight to the upper body, chest, and/or abdominal area of an infant enclosed in the enclosure 300. in the illustrated embodiment, as best shown in fig. 16, a compartment 320 comprising a pocket is positioned on the first side 304 for receiving the weight element 332. the compartment 320 is accessible from the interior side of the first side 304. in one embodiment, the first side 304 includes a compartment 320 accessible from its exterior side or that is sewn, adhered, or otherwise closed, with the weight element 332 enclosed therein. in some embodiments, the first side 304 is configured to removably couple with the weight element via snaps, straps, clips, hook and loop, mating structures, or other coupling structures. [00120] in some embodiments, the weight element 332 may weigh between 1 ounce and 3 pounds, preferably between 0.5 and 1.5 pounds, between 1 and 1.5 pounds, or about 0.5 pounds, about 0.75 pounds, about 1 pound, about 1.25 pounds, about 1.5 pounds, or about 1.75 pounds. by positioning the weight element 332 at location 335, which corresponds to an upper body location proximate a chest of an infant, the pressure applied by the weight element 332 may be applied to a chest or abdominal area and elicit a calming response from the infant, aiding in the sleep of the infant. 
further, upon being received at location 335, the weight element 332 may be fixed relative to an infant within the enclosure 300, and may be prevented from interfering with the sleep of the infant. in an embodiment, the pressure applied by weight element 332 upon being received at location 335 may be distributed over the chest and stomach of the infant. in an embodiment, the pressure from the weight element 332 may be at least partially distributed over the lower body of the infant as well. in an embodiment, the weight element 332 may be received at an alternate location (not shown) to thereby distribute a portion of its weight over the upper body and a portion of the weight over the lower body of an infant enclosed in the enclosure 300. [00121] in some embodiments, the weight element 332 may comprise any weighted material which may include but is not limited to a metal, a plastic, a ceramic, a polymer, gel, liquid, a composite, a natural, or an artificial material. furthermore, the weight may be flat, round, irregular or any other shape and further may be any size so as to be effective for its functions described herein. [00122] as introduced above, enclosure 300 also includes a support element 312. the support element 312 may extend within the enclosure volume 305 and include a structure dimensioned to elevate the lower body, legs, and/or feet of an infant, e.g., by between 0 and 8 inches, such as between 3 and 6 inches, between 4 and 5.5 inches or between 4.5 and 5.5 inches, at least or greater than 4 inches, or approximately 5 inches +/- ¼ inch. the support element 312 may extend from the interior side of the second side 302 so as to underlay the legs of an infant when enclosed in the enclosure volume 305. 
the support element 312 may include an upper surface 313 positioned to underlay the lower body, legs, and/or feet of the infant and that extends a distance from the second side 302 corresponding to the elevation distance the support element 312 is configured to elevate the lower body, legs, and/or feet. the elevation and operable perimeter surface for contacting an infant's lower body, legs, and/or feet is preferably sufficient to produce a bend in the hips and elevate the feet of the infant. the support element 312 may be configured to support the infant's legs at an elevated angle relative to the infant's hips. in some instances, the hips may also contact the support element 312 or otherwise be elevated. [00123] the support element 312 may include various dimensions and cross-section shapes. as best shown in fig. 16, the illustrated support element 312 includes a generally cylindrical shape having an arcuate cross-section extending from the second side 302. in one embodiment, the support element 312 includes a planar exterior facing surface, an arcuate interior facing surface, and comprises a general "D"-shaped cross-section. in another embodiment, the support element 312 is arcuate around its entire or a majority of its perimeter. in other embodiments, the support element 312 comprises other dimensions and cross-section shapes. for example, the support element 312 may comprise geometric, non-geometric, or free form cross-section shapes. in some embodiments, the support element 312 includes inclined, declined, curved, planar, or undulating surfaces. in one embodiment, the top surface 313 comprises an edge formed by the convergence of two sides that form a peak along a length of the support element 312. in one configuration, the support element 312 is dimensioned and shaped as described above with respect to support element 212. 
in some embodiments, a support element may include but is not limited to a foam material, a cushion, an air pocket, or any material configured to support an infant's legs at an elevated angle relative to the infant's torso. in the illustrated embodiment, the support element 312 is configured to support legs at an elevated angle relative to the hips. the support element 312 comprises a generally cylindrical foam insert approximately 5 inches high configured to produce a bend in the hips and elevate the feet of an infant. the support element 312 may further comprise a fabric case (not shown) surrounding the supportive material. the support element 312 may have a resistance to deformation configured to support the infant's legs in the elevated position according to the principles disclosed herein. in various embodiments, the support element 312 may include contours such as indentations to nest the legs. for example, the top surface, distal facing surface, or proximal facing surface of the support element 312 may include a first indentation to nest a first leg and a second indentation to nest a second leg. thus, the surfaces within and/or adjacent to the indentations may rise above a back of a leg. in some embodiments, the surfaces within and/or adjacent to the indentations may contact sides of a leg or provide vertically extending obstructions to lateral leg movement. the indentations may be of a constant width or may taper to a reduced width within indentation valleys. the indentations may have planar bases or may include rounded or arcuate laterally extending base surfaces. the support element 312 and/or the fabric case may also be resistant to liquid or biological materials. it will be appreciated that the height of the support element 312, or the vertical distance the support element 312 extends, as described herein is in reference to the top surface upon which the back, legs, or feet are supported. 
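as a rough aside (not part of the specification), the relationship between the stated support heights and the resulting leg elevation angle can be sketched with elementary trigonometry; the function name and the 10-inch hip-to-contact distance below are illustrative assumptions:

```python
import math

def leg_elevation_deg(support_height_in, hip_to_contact_in):
    """Approximate angle (in degrees) of the legs above horizontal when
    the lower legs rest on a support of the given height, modeling the
    leg as a straight segment from the hip to the support's top surface.
    Both arguments are in inches; these names are illustrative only."""
    if not 0 <= support_height_in < hip_to_contact_in:
        raise ValueError("support height must be non-negative and less "
                         "than the hip-to-contact distance")
    return math.degrees(math.asin(support_height_in / hip_to_contact_in))

# With an assumed 10-inch hip-to-contact distance, the approximately
# 5-inch support described above gives asin(0.5), i.e. about 30 degrees.
angle = leg_elevation_deg(5.0, 10.0)
assert round(angle) == 30
```

the larger the support height relative to the hip-to-contact distance, the steeper the bend produced at the hips.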
[00124] the support element 312 may be integral or couplable with respect to the second side 302. for example, the enclosure 300 may be configured to receive the support element 312 at a location proximate to the first portion 306 along the second side 302. the support element 312 may be positioned along the first portion 306 such that it elevates the lower body, legs, and/or feet and produces a bend in the hips of an infant enclosed in the enclosure 300. in the illustrated embodiment, as best shown in fig. 16, a compartment 322 comprising a pocket is positioned on the second side 302 for receiving the support element 312. thus, the support element 312 may comprise a material insert. the pocket may comprise an envelope enclosure configuration as shown or another configuration. the compartment 322 is accessible from the exterior side of the second side 302. in one embodiment, the second side 302 includes a compartment 322 accessible from its interior side or that is sewn, adhered, or otherwise sealed closed with the support element 312 enclosed therein. in some embodiments, the second side 302 is configured to removably couple with the support element 312 via snaps, straps, clips, velcro or hook and loop, mating structures, or other coupling structures along the exterior or interior side of the second side 302. [00125] the enclosure 300 may also define one or more selectively openable apertures between the exterior of the enclosure and the enclosure volume 305. in the illustrated embodiment, the enclosure 300 includes a coupling mechanism comprising one or more attachment members 318a, 318b for coupling the first side 304 and the second side 302 and thereby selectively opening or closing first and second apertures 319a, 319b. the selectively openable apertures are located along lateral peripheries of the first and second sides 304, 302. 
attachment members 318a, 318b extend along the adjacent lateral peripheries for coupling the first and second sides 304, 302 to close or reveal the enclosure volume 305. as best shown in figs. 14 & 16, the attachment members 318a, 318b comprise zippers wherein adjacent portions of the first and second sides 304, 302 defining the apertures 319a, 319b include zipper halves. in other embodiments, adjacent portions of the first and second sides 304, 302 defining one or more apertures may include snaps, straps, clips, hook and loop, mating structures, or other coupling structures configured to interact to selectively reveal the enclosure volume 305. [00126] in other embodiments, the first side 304 couples to the second side 302 via closure of a single aperture. for example, a selectively openable or closable aperture may extend down a right, left, or middle portion of the first or second sides 304, 302. in a further example, a selectively openable or closable aperture extends diagonally across the first or second sides 304, 302. some embodiments of the enclosure 300 may include more than two selectively openable or closable apertures. [00127] an infant may be placed within the enclosure 300, with the infant's hips adjacent to support element 312 and legs positioned on upper surface 313 thereof. a user may then operate coupling members 318a, 318b, partially sealing first side 304 to the top of second side 302, accommodating the infant and support element 312 within the enclosure volume 305. first side 304 may then be located above the infant, with compartment 322 positioned proximate to the infant's torso. [00128] the illustrated apertures 319a, 319b extend through the first portion 306 and the second portion 308 to a bottom region or location corresponding to a region of the enclosure volume 305 configured to be at or beyond the feet of an enclosed infant. 
in other embodiments, one or more apertures may not be dimensioned to extend beyond a foot region of the enclosure volume 305. for example, an aperture may extend to an ankle or knee region of the enclosure volume 305. in some embodiments, the enclosure 300 may include multiple selectively openable or closable apertures wherein a length of a first aperture is less than a length of a second aperture. [00129] with particular reference to fig. 16, decoupling attachment members 318a, 318b allows the first side 304 to be pulled down and moved out of the way to reveal the enclosure volume 305. this configuration provides caregivers maximum visibility for proper positioning of an infant and ease of removal. in another embodiment, the first side 304 and second side 302 may be completely separated to reveal the enclosure volume 305 and may thereafter be coupled with attachment members as described herein. [00130] enclosure 300 further defines a neck opening 315 (fig. 14) between the first and second sides 304, 302. the first side 304 includes a first portion 315a defining a first side of the neck opening 315 and the second side 302 includes a second portion 315b defining a second side of the neck opening 315. when the attachment members 318a, 318b are coupled to close the apertures 319a, 319b, the first portion 315a and the second portion 315b define the neck opening to allow a neck of an infant to extend from the enclosure volume 305. [00131] to better protect the sensitive skin of an infant, all or a portion of one or more attachment members 318a, 318b may be covered interiorly. figs. 17a & 17b illustrate an embodiment of the enclosure 300 wherein a portion of the attachment member 318a is covered by a zipper garage 350 comprising a flap 351 configured to extend over the attachment member 318a along a portion thereof corresponding to the neck opening 315 and an adjacent region. 
in various embodiments, the flap 351 may include a reinforcement or biasing material configured to cover the attachment member 318a when coupled. in one embodiment, the flap 351 includes a magnet or magnetically attractive structure to attract to the attachment member 318a or a magnet or magnetically attractive structure adjacent to the attachment member 318a. in another example, the flap 351 may include an exteriorly facing attachment member such as a snap configured to mate with an adjacent interiorly facing snap to cover the attachment member 318a. while the zipper garage 350 is shown with respect to attachment member 318a, in some embodiments, a zipper garage is provided for attachment member 318b in addition to or instead of for attachment member 318a. in some embodiments, enclosure 300 does not include a zipper garage 350. [00132] the enclosure 300 may include additional features. for example, in an embodiment, enclosure 300 may be configured to accommodate a pillow, gel pad or other type of support (not shown) beneath the head of an infant. [00133] figs. 18a-22 illustrate a sleep garment 500 and methods of aiding sleep of an infant 450 utilizing a sleep garment 500. the sleep garment 500 includes an enclosure 300 as described above with respect to figs. 14-17b. the sleep garment 500 may also include or associate with a sleep sack 400. the sleep sack 400 may comprise a sleep sack as described in u.s. patent application no. 15/336,519, filed october 27, 2016, titled infant calm/sleep-aid, sids prevention device, and method of use. the sleep sack 400 may comprise a body 402, which may be configured similar to body 102 and its various embodiments described herein. [00134] in one embodiment, a method of aiding sleep of an infant 450 may include positioning an infant 450 within the enclosure volume 305 of the enclosure 300. 
in a further embodiment, the method may include positioning the infant 450 in a sleep sack 400 before positioning the infant in the enclosure 300. [00135] positioning the infant in the enclosure 300 may include laying the enclosure 300 on a surface with the exterior side of the second side 302 down. with the attachment members 318a, 318b decoupled, the first side 304 may be moved or folded down to reveal the enclosure volume 305. for example, in the illustrated enclosure 300, attachment member 318a, 318b zippers may be unzipped all the way down and the first side 304 and weight element 332 may be moved out of the way (e.g., as shown in fig. 16). the weight element 332 may be positioned in the compartment 320 prior to or after folding down the first side 304. [00136] with the enclosure 300 open, the infant 450 may be positioned on the interior side of the second side 302 within the enclosure volume 305. fig. 18a depicts a top view of the infant 450 positioned within the sleep sack 400 and further positioned on the interior side of the second side 302 within the enclosure volume. fig. 18b depicts a side view of the same. [00137] the support element 312 is preferably positioned in compartment 322 prior to positioning the infant within the enclosure. the infant 450 may be placed onto the interior side of the second side 302 with the lower body of the infant 450 up against the support element 312 and the lower legs, ankles, or feet of the infant extending over the top surface 313. in some examples, thighs may be over the top surface or the infant 450 may be placed with feet up against the support element 312. from the side view shown in fig. 18b, the hips and feet of the infant 450 are elevated up onto the support element 312 and top surface 313 thereof. the buttocks or thighs of the infant 450 may contact a proximal surface 347 of the support element 312. [00138] with reference to fig. 
19, the first side 304 may be pulled over the infant 450 with the weight element 332 positioned over the chest/abdomen region of the infant 450. the attachment members 318a, 318b may then be coupled, e.g., zipped up, to enclose the infant 450 within the enclosure volume 305 of the enclosure 300 in a manner similar to that shown in fig. 20. [00139] in embodiments wherein the sleep sack 400 includes one or more securing mechanisms 414, securing mechanism 414 may be extended out from the enclosure volume 305 through laterally positioned side openings 337 formed in the enclosure 300, as shown in fig. 21a, for example. with reference to fig. 21b, the enclosure 300 may also include a pocket 338 adjacent to opening 337 for tucking in the securing mechanism 414 when not in use. [00140] it should be noted that, in some embodiments, the enclosure 300 does not include side openings 337 or includes side openings 337 with attachment members (not shown) to selectively open and close the openings 337. in one embodiment, the enclosure 300 includes securing mechanisms similar to those described herein with respect to sleep sacks or bodies thereof for securing the enclosure 300 to a platform or bassinet. the securing mechanism may be in addition to or instead of securing mechanism 414. in one such embodiment, the enclosure 300 includes a pocket (not shown) for tucking the securing mechanism out of the way when not in use. [00141] to secure the sleep garment 500 to a platform or other structure, such as a bassinet, mattress, chair, or pad, the securing mechanism 414 may be pulled through the opening 337 or from the pocket 338 (figs. 21a & 21b) for coupling to or with respect to the platform or other structure. with reference to fig. 22, the enclosure may couple to a sleep platform 616 of a sleep device 600, which is a bassinet in the illustrated embodiment. for example, securing mechanism 414 may include clips or sleeves as described above with respect to securing mechanism 114. 
as shown, the securing mechanism 414 includes sleeves 415a, 415b for receiving clip arms 623a, 623b that are fixed relative to the sleep platform 616. in some embodiments, the attachment mechanism 414 or corresponding attachment mechanism to which attachment mechanism 414 couples may be configured to communicate with a control system to detect whether an attachment mechanism 414 is properly secured, and may alert a user or cease some operational function upon detecting that an attachment mechanism is not properly secured. by attaching the enclosure 300 to a sleep surface platform 616, the infant 450 accommodated within enclosure 300 may be prevented from rolling over or otherwise moving into an unsafe disposition. in various embodiments, attachment mechanism 414 and corresponding attachment mechanisms to which attachment mechanism 414 couples may correspond to attachment mechanisms described in u.s. patent application no. 15/336,519, filed october 27, 2016, titled infant calm/sleep-aid, sids prevention device, and method of use. in one embodiment, the sleep device comprises a sleep device or bassinet having a movable platform as described in u.s. patent application no. 15/336,519, filed october 27, 2016, titled infant calm/sleep-aid, sids prevention device, and method of use. [00142] it is to be appreciated that embodiments of the sleep garment 500 may include fewer or additional features. for example, in one embodiment, the sleep garment 500 does not include a sleep sack 400. in some such embodiments, the sleep garment 500 may include another sack or swaddling device or may not include an enclosure to enclose the infant in addition to that of the enclosure 300. in some embodiments, the infant may be enclosed in the enclosure 300 without being enclosed in a sleep sack or swaddle. as described above, the sleep garment 500 or enclosure 300 may also be configured with fewer or additional accommodation mechanisms. 
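paragraph [00141] above describes an attachment mechanism that may communicate with a control system, which can alert the user or cease an operational function when a mechanism is not properly secured. a minimal sketch of such check logic follows; every name here is invented for illustration, since the specification does not define a software interface:

```python
def check_attachments(sensor_states):
    """Given a mapping of attachment-mechanism IDs to a boolean secured
    state (True = properly secured), return the actions the control
    system should take, per the behavior described in the text:
    alert the user and cease operation if anything is unsecured."""
    unsecured = [name for name, secured in sensor_states.items() if not secured]
    return {
        "alert_user": bool(unsecured),
        "cease_operation": bool(unsecured),
        "unsecured": unsecured,
    }

# Both sleeves 415a/415b secured: normal operation continues.
assert check_attachments({"415a": True, "415b": True})["cease_operation"] is False
# One sleeve unsecured: the device alerts and stops its motion.
assert check_attachments({"415a": True, "415b": False})["unsecured"] == ["415b"]
```

the same shape of check would apply however many securing mechanisms a given embodiment uses, since the function simply collects every unsecured mechanism before deciding what to do.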
[00143] embodiments of the sleep garment 500 may include modifications to the sleep sack 400, enclosure 300, or both. the modifications may include any of those described herein, including those described with respect to sleep garment 100, body 102, enclosure 200, or outer enclosure 120. for example, the support element 312 may be incorporated under the lower body, legs, or feet of the infant within the interior of the sleep sack 400 or a compartment thereof. the support element 312 may be positioned loosely or unattached with respect to the enclosure 300. the weight element 332 may be attached to or within a compartment of the sleep sack 400. the weight element 332 may be positioned at other locations to apply weight to other regions of the body of the infant. [00144] in an embodiment, the enclosure 300 may be configured to receive a weight element 332 at a location to thereby apply pressure to the infant's upper body, lower body, or both upper and lower body simultaneously. in one embodiment, a sleep sack 400 or body 402 may include an accommodation mechanism, which may be in addition to or instead of one or both of accommodation mechanisms 332, 310. for example, a body 402 may include a weight element, support element, or both, which may be similar to weight element 332 and support element 312. such elements may be coupled or couplable to the body 402, e.g., elements may be attachable to surfaces via attachment members such as snaps, hook and loop, straps, or the like or may be received within compartments such as a pocket. in some embodiments, one or more accommodation mechanisms may be loosely positioned within the enclosure volume 305 or exterior thereto and located to elevate the lower body, legs, or feet or apply weight to the chest of an infant in a manner consistent with that described herein. [00145] as noted above, body 102 described with respect to figs. 
1-5, may be used to enclose an infant and subsequently be enclosed within the enclosure volume 305 of enclosure 300 (not shown). in one such embodiment, the weight element 332, rather than being received at location 335, may be received on body 102 at location 134 proximate to the second portion 112 at the first side 104 of the body 102 by connecting the weight element 332 to the second portion 112 by a connector which may include but is not limited to any of the following: a strap configured to attach to a clip, an elastic strap, a hook and loop attachment mechanism, a push snap attachment, a zipper mechanism, a magnetic attachment mechanism, or any similar mechanism. [00146] in an embodiment, sleep garment 500 may be configured to accommodate a pillow, gel pad or other type of support (not shown) beneath the head of an infant. [00147] in some embodiments, the enclosure 300 includes a neck opening adjustment mechanism operable to adjust the size of the neck opening 315. figs. 23a & 23b illustrate a neck opening adjustment mechanism 360 according to various embodiments. as shown in fig. 23a, adjacent sides of the neck opening 315 include attachment members 360a, 360b, which in this embodiment comprise snaps. the attachment members 360a, 360b may be coupled to decrease the size of the neck opening 315, as exemplified in fig. 23b. for infants with larger necks, attachment members 360a, 360b may be left decoupled to provide a larger neck opening 315. while the neck opening adjustment mechanism 360 is shown with respect to attachment members 360a, 360b comprising snaps, in some embodiments, attachment members 360a, 360b may comprise straps, clips, hook and loop, mating structures, or other coupling structures. 
additionally, while the neck opening adjustment mechanism 360 is shown with respect to a single side of the neck opening 315, in various embodiments, the neck opening adjustment mechanism 360 may include adjustment features on a second side of the opening comprising attachment members. [00148] in various embodiments, a method of aiding sleep with respect to an infant that prefers a side or stomach position, which is unsafe and known to increase the incidence of sids, includes enclosing the infant within a sleep garment or enclosure as described herein to cause the infant to feel as if they are in the fetal position, but while sleeping safely on the back. the method may similarly be effective to aid sleep of infants that have difficulty sleeping on their back or when not lying on a caregiver's body. the method may also be effective to aid sleep of infants that experience limited sleep duration or those that sleep well in a rock 'n play, swing, or bouncer, but not on their back. [00149] this specification has been written with reference to various non-limiting and non-exhaustive embodiments. however, it will be recognized by persons having ordinary skill in the art that various substitutions, modifications, or combinations of any of the disclosed embodiments (or portions thereof) may be made within the scope of this specification. thus, it is contemplated and understood that this specification supports additional embodiments not expressly set forth in this specification. such embodiments may be obtained, for example, by combining, modifying, or re-organizing any of the disclosed steps, components, elements, features, aspects, characteristics, limitations, and the like, of the various non-limiting and non-exhaustive embodiments described in this specification. [00150] various elements described herein have been described as alternatives or alternative combinations, e.g., in lists of selectable actives, ingredients, or compositions. 
it is to be appreciated that embodiments may include one, more, or all of any such elements. thus, this description includes embodiments of all such elements independently and embodiments including such elements in all combinations. [00151] the grammatical articles "one", "a", "an", and "the", as used in this specification, are intended to include "at least one" or "one or more", unless otherwise indicated. thus, the articles are used in this specification to refer to one or more than one (i.e., to "at least one") of the grammatical objects of the article. by way of example, "a component" means one or more components, and thus, possibly, more than one component is contemplated and may be employed or used in an application of the described embodiments. further, the use of a singular noun includes the plural, and the use of a plural noun includes the singular, unless the context of the usage requires otherwise. additionally, the grammatical conjunctions "and" and "or" are used herein according to accepted usage. by way of example, "x and y" refers to "x" and "y". on the other hand, "x or y" refers to "x", "y", or both "x" and "y", whereas "either x or y" refers to exclusivity. [00152] any numerical range recited herein includes all values and ranges from the lower value to the upper value. for example, if a range is stated as 1 to 50, it is intended that values such as 2 to 40, 10 to 30, 1 to 3, or 2, 25, 39 and the like, are expressly enumerated in this specification. these are only examples of what is specifically intended, and all possible combinations of numerical values and ranges between and including the lowest value and the highest value enumerated are to be considered to be expressly stated in this application. numbers modified by the term "approximately" are intended to include +/- 10% of the number modified. 
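the numerical-range and "approximately" conventions in paragraph [00152] can be expressed concretely; this is an illustrative sketch only, and the function names are not from the specification:

```python
def approximately(value, tolerance=0.10):
    """Interpret "approximately X" as X +/- 10%, per the convention
    stated above; returns the (low, high) endpoints of the interval."""
    return (value * (1 - tolerance), value * (1 + tolerance))

def within_recited_range(x, low, high):
    """A recited range includes all values and subranges from the lower
    value to the upper value, endpoints included."""
    return low <= x <= high

# "approximately 1 pound" covers 0.9 to 1.1 pounds.
lo, hi = approximately(1.0)
assert abs(lo - 0.9) < 1e-9 and abs(hi - 1.1) < 1e-9

# A range stated as 1 to 50 expressly includes values such as 2, 25, and 39.
assert all(within_recited_range(v, 1, 50) for v in (2, 25, 39))
```

any subrange, such as 2 to 40 or 10 to 30, is then simply a pair of values that both satisfy the outer range.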
[00153] the present disclosure may be embodied in other forms without departing from the spirit or essential attributes thereof and, accordingly, reference should be had to the following claims rather than the foregoing specification as indicating the scope of the invention. further, the illustrations of arrangements described herein are intended to provide a general understanding of the various embodiments, and they are not intended to serve as a complete description. many other arrangements will be apparent to those of skill in the art upon reviewing the above description. other arrangements may be utilized and derived therefrom, such that logical substitutions and changes may be made without departing from the scope of this disclosure.
143-698-342-319-197
DE
[ "EP", "US", "DE" ]
F25B41/06,B60H1/00,F16K1/12,F16K1/14,F16K1/36,F16K1/44,F16K27/02,F16K47/04,F16K47/00
2015-10-27T00:00:00
2015
[ "F25", "B60", "F16" ]
valve assembly, in particular for an expansion valve, for an air conditioner
a valve mechanism for an air conditioning circuit of an air conditioning system may include a valve housing enclosing a fluid duct for passing a fluid flow, a closure body arranged movably in the fluid duct between at least a closed position and an open position, an adjustment element operably connected to the closure body to move the closure body between the open position and the closed position, and a noise reduction device configured to facilitate a reduction of an operating noise when fluid flows through the fluid duct. the noise reduction device may be disposed at one or more of the closure body, a valve seat, and the adjustment element.
1 . a valve mechanism for an air conditioning circuit of an air conditioning system, comprising: a valve housing enclosing a fluid duct for passing a fluid flow; a closure body arranged movably in the fluid duct, wherein in a closed position the closure body closes a valve opening disposed in the fluid duct and enclosed by a valve seat and bears against the valve seat to close the valve opening, and wherein in an open position the closure body is arranged at a distance from the valve seat to release the valve opening; an adjustment element operably connected to the closure body to move the closure body between the open position and the closed position; and a noise reduction device configured to facilitate a reduction of an operating noise when fluid flows through the fluid duct, wherein the noise reduction device is disposed at one or more of the closure body, the valve seat and the adjustment element. 2 . the valve mechanism according to claim 1 , wherein at least one of the closure body is structured as a closure ball and the adjustment element is structured as a control tappet. 3 . the valve mechanism according to claim 1 , wherein the noise reduction device includes a surface structure, and wherein the surface structure includes at least one of a plurality of elevations protruding into the fluid duct and a plurality of depressions arranged on a surface of said one or more of the closure body, the valve seat and the adjustment element. 4 . the valve mechanism according to claim 3 , wherein at least one of the plurality of elevations has a height of 1 mm or less. 5 . the valve mechanism according to claim 3 , wherein at least one of the plurality of depressions has a depth of 1 mm or less. 6 . the valve mechanism according to claim 3 , wherein the surface structure has a lateral extension between 0.1 mm² and 1 mm². 7 . 
the valve mechanism according to claim 3 , wherein the surface structure is arranged in an axial end section on a circumferential side of the adjustment element facing the closure body. 8 . the valve mechanism according to claim 3 , wherein surface structures are disposed on the valve seat and the adjustment element, such that the surface structure disposed on the valve seat is positioned opposite the surface structure disposed on the adjustment element. 9 . the valve mechanism according to claim 3 , wherein the surface structure is disposed on the closure body and arranged at least partly in a surface region that touches the valve seat when the closure body is in the closed position. 10 . the valve mechanism according to claim 3 , wherein the surface structure is disposed on the valve seat and arranged at least partly in a surface region that touches the closure body when the closure body is in the closed position. 11 . the valve mechanism according to claim 3 , wherein: the valve seat includes a first seat section in a longitudinal cross section along a movement direction of the adjustment element that passes into a second seat section in the movement direction, wherein the second seat section of the valve seat tapers away from the first seat section and the closure body engages the second seat section when the closure body is in the closed position; wherein the second seat section passes along the movement direction into a third seat section; and wherein the surface structure is disposed on at least one of the first seat section, the second seat section and the third seat section on an internal circumferential side of the housing enclosing the fluid duct. 12 . the valve mechanism according to claim 3 , wherein the surface structure is configured as a roughened surface. 13 . the valve mechanism according to claim 12 , wherein the roughened surface has a roughness of more than 16 μm. 14 . 
the valve mechanism according to claim 3 , wherein the surface structure is provided on the closure body and configured as a ring and extends at least partially around the surface of the closure body along a direction perpendicular to the movement direction. 15 . the valve mechanism according to claim 3 , wherein the surface structure is disposed on one of a surface section of the closure body bearing against the valve seat when the closure body is in the closed position and a surface section of the valve seat against which the closure body bears when the closure body is in the closed position. 16 . the valve mechanism according to claim 3 , wherein at least one of the plurality of elevations and the plurality of depressions is arranged in a grid-like configuration on the surface. 17 . the valve mechanism according to claim 3 , wherein at least one of the plurality of elevations and the plurality of depressions is distributed irregularly on the surface. 18 . the valve mechanism according to claim 3 , wherein at least one of the plurality of elevations and the plurality of depressions is structured in at least one of a substantially round shape and a substantially elongate shape in a top view of the surface structure. 19 . the valve mechanism according to claim 3 , wherein the plurality of depressions are structured as at least one of craters and funnels. 20 .
an air conditioning system for a motor vehicle, comprising: an air conditioning circuit; and a valve mechanism arranged in the air conditioning circuit, the valve mechanism including: a valve housing enclosing a fluid duct for communicating a fluid flow, the fluid duct defining a valve opening; a valve seat enclosing the valve opening; a closure body arranged movably in the fluid duct between a closed position and an open position, wherein the closure body engages the valve seat in the closed position to close the valve opening, and the closure body is arranged at a distance from the valve seat to release the valve opening in the open position; an adjustment element operably connected to the closure body to move the closure body between the open position and the closed position; and a noise reduction device configured to facilitate a reduction of an operating noise when fluid flows through the fluid duct, wherein the noise reduction device is disposed at one or more of the closure body, the valve seat and the adjustment element.
cross-reference to related applications this application claims priority to german patent application no. de 10 2015 221 002.2, filed oct. 27, 2015, the contents of which are hereby incorporated by reference in their entirety. technical field the invention concerns a valve mechanism, especially an expansion valve, for an air conditioning circuit of an air conditioning system, as well as an air conditioning system with such a valve mechanism. background valve mechanisms which are used as expansion valves in air conditioning systems are known in a variety of forms from the prior art. the problem with such expansion valves is often that coherent vibrations can be produced in the flow of refrigerant when the refrigerant flows through the valve opening, resulting in an unwanted noise production in the valve mechanism. given this background, ep 1 764 568 a1 discusses a valve mechanism with turbulence-generating elements which ensure a turbulent flow of refrigerant at the valve opening. this is supposed to prevent gas bubbles in the liquid refrigerant for the most part or even entirely, so that the flow noise generated when passing through the valve opening should be lessened and homogenized. summary one problem which the present invention proposes to solve is to create an improved design for a valve mechanism which is distinguished in particular by a reduced operating noise. this problem is solved by the subject matter of the independent patent claims. preferred embodiments are the subject matter of the dependent patent claims. accordingly, the basic notion of the invention is to outfit a valve mechanism with a noise reduction device in the region of its valve opening, which is closable by a closure body.
the installation of such a noise reduction device in the region of the valve opening means that vibrations occurring there in the closure body and in the adjustment element for moving the closure body, which subsequently lead to said noise production, can be significantly reduced or even entirely prevented. as a result, such a vibration reduction also leads to a lessening of said noise production. a valve mechanism according to the invention, especially an expansion valve, for an air conditioning circuit of an air conditioning system comprises a valve housing, which encloses a fluid duct through which a fluid can flow, especially a coolant or refrigerant. in the fluid duct there is arranged a closure body which can move relative to the valve housing. in a closed position, the closure body closes a valve opening present in the fluid duct and enclosed by a valve seat and bears against the valve seat for this purpose. with suitable dimensioning of the fluid duct, especially a reduced diameter in the region of the valve opening, said valve opening can form an expansion valve together with the valve seat and the closure body. in an open position, the closure body is arranged to release the valve opening at a distance from the valve seat. by means of an adjustment element, the closure body can be moved between the open position and the closed position. for this, the adjustment element can be connected to a suitable driving device, especially an electric, hydraulic or pneumatic actuator. according to the invention, a noise reduction device is configured at the closure body and/or at the valve seat and/or at the adjustment element, and accomplishes a reduction of the operating noise of the valve mechanism when the fluid flows through it. advisedly, the closure body can be designed as a closure ball. alternatively or additionally, the adjustment element can be designed as a control tappet.
both measures, alone or in combination, facilitate a technical realization of the valve mechanism as an expansion valve with reduced fluid duct diameter in the region of the valve opening. in a preferred embodiment, the noise reduction device is designed as a surface structure with a plurality of elevations protruding into the fluid duct. alternatively or additionally, the surface structure has a plurality of depressions arranged on the surface. in one preferred variant, said elevations and depressions can be arranged in the same surface structure, especially preferably alternating next to each other. the mentioned steps, alone or in combination, accomplish an especially pronounced reduction of the operating noise generated by the valve mechanism. according to an advantageous modification, the individual elevations have a height of at most 1 mm, preferably at most 0.2 mm, especially preferably a height of essentially 0.1 mm. experimental investigations have shown that elevations with such a height provide an especially good reduction of the operating noise of the valve mechanism. according to another advantageous modification, the individual depressions have a depth of at most 1 mm, preferably at most 0.2 mm, especially preferably a depth of essentially 0.05 mm. experimental investigations have shown that depressions with such a depth provide an especially good reduction of the operating noise of the valve mechanism. in another advantageous modification, the surface structure has a lateral extension between 0.1 mm² and 1 mm². experimental investigations have shown that a lateral extension of the surface structures in the mentioned range is associated with an especially good reduction of the operating noise of the valve mechanism. according to an advantageous modification of the invention, the surface structure is arranged in an axial end section of the adjustment element facing the closure body, on its circumferential side.
this measure as well assists the desired noise reduction. moreover, simulation calculations have shown surprisingly that an additional noise reduction can be achieved when the surface structures essential to the invention are designed on both the valve seat and the adjustment element, and such that the two surface structures are positioned opposite each other. in this scenario, the fluid duct through which the fluid flows is therefore enclosed on both sides by the surface structure essential to the invention, so that an especially pronounced dampening effect can be achieved. in another preferred embodiment, the surface structure of the closure body is arranged at least partly in a surface region which touches the valve seat in the closed position of the closure body. in another preferred embodiment, which can be combined with the above explained preferred embodiment, the surface structure of the valve seat is arranged at least partly in a surface region which touches the closure body in the closed position of the closure body. in another preferred embodiment, the valve seat comprises a first seat section in a longitudinal cross section along a movement direction of the adjustment element that passes into a second seat section in the movement direction. in the second seat section, the valve seat tapers, especially in conical manner, away from the first seat section to produce the valve mechanism as an expansion valve. the closure body in the closed position bears against the second seat section, and the second seat section passes along the movement direction into a third seat section. in this variant, the surface structure is configured on at least one of the three seat sections on an internal circumferential side enclosing the fluid duct. especially preferably, the diameter of the fluid duct is substantially constant in the first and third seat sections. 
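for illustration only, the preferred dimensional ranges described above (elevation height and depression depth of at most 1 mm, lateral extension of the surface structure between 0.1 mm² and 1 mm²) can be collected into a small check; the function and parameter names below are hypothetical and not part of the patent.

```python
# illustrative sketch only: collects the preferred dimensional ranges from the
# text into one check. function and parameter names are hypothetical.

def spec_within_preferred_ranges(elevation_height_mm,
                                 depression_depth_mm,
                                 lateral_extension_mm2):
    """True if a surface-structure spec lies inside the stated ranges:
    elevation height <= 1 mm, depression depth <= 1 mm,
    lateral extension between 0.1 mm^2 and 1 mm^2."""
    return (elevation_height_mm <= 1.0
            and depression_depth_mm <= 1.0
            and 0.1 <= lateral_extension_mm2 <= 1.0)

# the especially preferred values mentioned in the text:
print(spec_within_preferred_ranges(0.1, 0.05, 0.5))  # prints: True
print(spec_within_preferred_ranges(1.5, 0.05, 0.5))  # prints: False
```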
laboratory investigations have further shown that an especially good noise reduction can be achieved when said surface structure of the noise reduction device is realized as a kind of a roughened surface. the effect of noise reduction by the noise reduction device can be realized in especially pronounced form if the roughened surface has a roughness rz of more than 16 μm. according to an advantageous modification, the surface structure is formed as a ring or a ring segment and extends around the surface of the closure body along a direction perpendicular to the movement direction. preferably, the surface structure runs entirely around it. to improve the sealing effect of the closure element in its closed position, in another preferred embodiment it is proposed that no surface structure is formed in a surface section of the closure body bearing against the valve seat in the closed position of the closure body. the same effect of an improved sealing action can be achieved by having no surface structure formed in a surface section of the valve seat against which the closure body bears in its closed position. in another preferred embodiment, the plurality of elevations and/or depressions is arranged gridlike on the surface. in an equally preferred alternative embodiment, the plurality of elevations and/or depressions is distributed irregularly on the surface. both variants, which can also be combined, accomplish an especially effective noise dampening. advisedly, the elevations and/or depressions can be formed in substantially round or substantially elongate form in a top view of the surface structure. this measure accomplishes an especially pronounced noise reduction. especially advisedly, the depressions can be formed as craters or funnels. this measure also accomplishes an especially pronounced noise reduction. the invention furthermore concerns an air conditioning system, especially for a motor vehicle. 
the air conditioning system comprises an air conditioning circuit, in which a previously explained valve mechanism is arranged. thus, the benefits of the valve mechanism according to the invention are transferred to the air conditioning system according to the invention. further important features and benefits of the invention will emerge from the subclaims, the drawings, and the corresponding description of the figures with the aid of the drawings. of course, the above mentioned features and those yet to be mentioned can be used not only in the particular indicated combination, but also in other combinations or alone, without leaving the scope of the present invention. preferred exemplary embodiments of the invention are represented in the drawings and shall be explained more closely in the following description, where the same reference numbers pertain to the same or similar or functionally equivalent components. brief description of the drawings in the figures, each time schematically, fig. 1 shows an example of a valve mechanism according to the invention in a longitudinal cross section, fig. 2 shows a detail representation of fig. 1 with the valve closed, fig. 3 shows a detail representation of fig. 1 with the valve opened, fig. 4 shows a detail representation of fig. 1 , in which the noise reduction device essential to the invention is shown, fig. 5 shows a representation showing possible configurations of the surface structure forming the noise reduction device. detailed description fig. 1 shows an example of a valve mechanism 1 according to the invention, which can be used as an expansion valve for an air conditioning system, in a longitudinal cross section. the valve mechanism 1 comprises a valve housing 2 , which encloses a fluid duct 3 through which a fluid 4 can flow. 
in the valve housing 2 there is present a fluid inlet 23 , through which the fluid 4 , typically a coolant or refrigerant of the air conditioning system, can be introduced into the fluid duct 3 . through a fluid outlet 24 provided on the valve housing 2 the fluid 4 is taken out from the fluid duct 3 once again. in the valve housing 2 or in the fluid duct 3 there is formed a valve seat 5 , which encloses a valve opening 8 . in the fluid duct 3 , moreover, there is arranged a closure body 6 which can move relative to the valve housing 2 , being preferably designed as a closure ball 7 . the closure body 6 or the closure ball 7 can be moved between a closed position and an open position. in the closed position, the closure body 6 bears against the valve seat 5 for the closing of the valve opening 8 , so that no fluid 4 can flow through the valve opening 8 . this situation is shown for clarity in a separate representation in fig. 2 . in the open position, the closure body 6 releases the valve opening 8 for the fluid 4 to flow through and for this it is arranged at a distance from the valve seat 5 . this situation is shown for clarity in a separate representation in fig. 3 . the moving of the closure body 6 between the open position and the closed position is performed with the aid of an adjustment element 9 , which can be designed as a control tappet 10 . the adjustment element 9 can move along a movement direction v in the fluid duct 3 , which can be a main flow direction h of the fluid 4 flowing through the valve opening 8 . the movement direction v can be identical to a longitudinal direction l of the fluid duct 3 . the adjustment element 9 , in turn, can be connected to a pneumatic or hydraulic or electrical actuator (not shown in the figures) for its drive. the adjustment element 9 moves the closure body 6 by mechanical contact between the open position and the closed position.
the closure body 6 can be biased by means of a tensioning element 11 , such as a kind of elastic spring element, against the adjustment element 9 . in the example of fig. 1 , the tensioning element 11 also biases the adjustment element 9 against the valve seat 5 and thus toward its closed position. the valve mechanism 1 moreover comprises a noise reduction device 12 , by means of which noises, especially vibration-like noises which are generated by vibrations of the closure body 6 as well as the adjustment element 9 when the fluid 4 flows through the valve opening 8 , are dampened. from fig. 4 , which shows the valve mechanism in analogous manner to fig. 2 in a detail representation in the region of the valve opening 8 , it can be observed that the noise reduction device 12 can be formed on the closure body 6 and/or on the valve seat 5 and/or on the control tappet 10 . in all three cases, the noise reduction device 12 is designed as a surface structure 13 a , 13 b , 13 c with a plurality of elevations 14 and, alternatively or additionally, with a plurality of depressions 25 . for clarity, fig. 5 shows in addition to the representation of fig. 4 a top view of the valve seat 5 along the movement direction v of the adjustment element 9 . as an example, three different surface structures 13 b are shown: the surface structure 13 b additionally designated as 15 comprises a plurality of elevations 14 , which are arranged in a grid on the surface 16 of the valve seat 5 and protrude from the surface 16 into the fluid duct 3 . the plurality of elevations 14 can, however, also be distributed irregularly on the surface 16 , as is shown for example for the surface structure 13 b additionally designated as 18 . the elevations 14 can be elongate in configuration, as shown for the surface structure 15 , or round, as shown for the surface structure 18 .
however, in place of irregularly arranged elevations 14 , there can also be depressions 25 arranged irregularly on the surface 16 , as is the case with the surface structure 13 b additionally designated as 17 . the depressions 25 can be in the form of craters or funnels. in one variant not shown in the figures, the depressions 25 can also be in a regular arrangement. the elevations 14 of the surface structure 17 differ from the elevations 14 of the surface structure 18 in that the former are in the form of craters or funnels with individual geometry and the latter are round, each with an identical geometry. a combination of the aforementioned examples is also conceivable, such as a combination of gridlike elevations 14 each with round or crater-like elevations 14 (not shown). instead of a round geometry, as shown for the elevations 14 of the surface structure 18 , an elongate configuration is possible, as is represented for the elevations 14 of the surface structure 15 . all of the above described examples for elevations 14 apply not only to elevations 14 which protrude from the surface 16 into the fluid duct 3 , but also for depressions 25 provided in the surface 16 , and vice versa. a combination of depressions 25 and elevations 14 is also conceivable. preferably, the individual elevations 14 have a height of at most 1 mm, preferably at most 0.2 mm, especially preferably a height of essentially 0.05 mm. in corresponding fashion, the depressions 25 have a depth of at most 1 mm, preferably at most 0.2 mm, especially preferably a depth of substantially 0.05 mm. the height or depth here is measured along a direction perpendicular to the surface 16 . the above explanations for possible configurations of the surface structure 13 b on the valve seat 5 also apply, mutatis mutandis, for the surface structures 13 a , 13 c on the closure body 6 and on the adjustment element 9 . 
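the gridlike and irregular arrangements of elevations and depressions described above can be sketched as coordinate generators; the dimensions, counts, and function names here are assumptions for illustration, not taken from the patent.

```python
import random

# illustrative sketch: two hypothetical ways of placing elevations or
# depressions on a rectangular surface patch, mirroring the gridlike and
# irregular arrangements described in the text.

def grid_positions(width_mm, height_mm, pitch_mm):
    """gridlike arrangement: regular rows and columns."""
    nx = int(width_mm / pitch_mm) + 1
    ny = int(height_mm / pitch_mm) + 1
    return [(i * pitch_mm, j * pitch_mm) for i in range(nx) for j in range(ny)]

def irregular_positions(width_mm, height_mm, count, seed=0):
    """irregular arrangement: pseudo-random placement with a fixed seed."""
    rng = random.Random(seed)
    return [(rng.uniform(0.0, width_mm), rng.uniform(0.0, height_mm))
            for _ in range(count)]

grid = grid_positions(1.0, 1.0, 0.25)        # 5 x 5 = 25 regular positions
scatter = irregular_positions(1.0, 1.0, 25)  # 25 irregular positions
```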
the surface structure 13 c provided on the surface of the adjustment element 9 or the control tappet 10 is arranged in an axial end section 19 of the adjustment element 9 or the control tappet 10 facing the closure body 6 , on its circumferential side 20 . in the exemplary scenario, the surface structure 13 a of the closure body 6 is arranged at least partially in a surface region of the closure body 6 which touches the valve seat 5 in the closed position of the closure body 6 . in analogous manner, the surface structure 13 b of the valve seat 5 is arranged at least partially in a surface region which touches the closure body 6 in the closed position of the closure body 6 . alternatively, no surface structures 13 a , 13 b can be formed on the closure body 6 and/or on the valve seat 5 in those surface sections touching each other in the closed position of the closure body 6 , so as to rule out any lessening of the sealing action of the closure body 6 connected with this. the valve seat 5 comprises a first seat section 21 a in the longitudinal cross section shown in the figures along the movement direction v of the adjustment element 9 , which passes into a second seat section 21 b in the movement direction v. the first seat section 21 a can have a constant diameter along the movement direction v. the second seat section 21 b tapers conically away from the first seat section 21 a . in its closed position, the closure body 6 or the closure ball 7 bears against the valve seat 5 . the second seat section 21 b passes along the movement direction v away from the first seat section 21 a into a third seat section 21 c , which can have a constant diameter along the movement direction v. the surface structure 13 b of the valve seat 5 forming the noise reduction device 12 can be formed on an internal circumferential side 22 enclosing the fluid duct 3 on at least one of the three seat sections 21 a , 21 b , 21 c . in the example of fig.
4 , the surface structure 13 b is shown as an example in the second and third seat sections 21 b , 21 c . the surface structure 13 b formed on the third seat section 21 c of the valve seat 5 at the internal circumferential side 22 of the fluid duct 3 lies opposite the surface structure 13 c formed on a circumferential side 20 of the adjustment element 9 . the surface structure 13 a formed on the closure body 6 can be formed as a ring or a ring segment and extend on the surface of the closure body 6 along a direction x perpendicular to the movement direction v, preferably entirely around the circumference. the surface structures 13 a , 13 b , 13 c with the elevations 14 and/or depressions 25 can be realized in the manner of a roughened surface, having a roughness rz of more than 16 μm.
system and methods for close queuing to support quality of service
certain embodiments of the present invention provide systems and methods for enqueuing transport protocol commands with data in a low-bandwidth network environment. the method may include receiving data for transmission via a network connection, enqueuing the data, enqueuing a transport protocol command related to the network connection, transmitting the data via the network connection, and transmitting the transport protocol command after transmission of the data. in certain embodiments, the data and the transport protocol command are enqueued based at least in part on manipulating a transport protocol layer of a communication network, such as a tactical data network. in certain embodiments, the data is prioritized based on at least one rule, such as a content-based rule and/or a protocol-based rule. in certain embodiments, the transport protocol command includes a close connection command, for example.
claims 1. a method for data communication, said method comprising: opening a connection between a first node and a second node in a network to communicate data between said first node and said second node; and holding a transport protocol command in relation to said data being communicated between said first node and said second node via said connection such that said transport protocol command is processed after communication of said data is complete. 2. the method of claim 1, wherein said step of holding further comprises enqueuing said transport protocol command behind said data such that said transport protocol command is executed with respect to said connection after said data has been communicated between said first node and said second node. 3. the method of claim 1, wherein said step of holding further comprises manipulating a transport protocol layer of said network to hold said transport protocol command in relation to said data. 4. the method of claim 1, further comprising holding said data at a transport protocol layer to prioritize communication of said data from said first node to said second node via said connection. 5. the method of claim 1, wherein said connection comprises a transmission control protocol ("tcp") connection. 6. the method of claim 1, wherein said network comprises a tactical data network having a bandwidth constrained by an environment in which said network operates. 7. the method of claim 1, wherein said transport protocol command comprises a close connection command. 8. 
a computer-readable medium having a set of instructions for execution on a processing device, said set of instructions comprising: a connection routine for establishing a transport connection between a first node and a second node to communicate data between said first node and said second node; and a hold routine operating at a network transport layer for holding a transport protocol command in relation to said data being communicated between said first node and said second node via said transport connection, wherein said transport protocol command is processed after communication of said data. 9. the set of instructions of claim 8, wherein said hold routine enqueues said transport protocol command behind said data such that said transport protocol command is executed with respect to said transport connection after said data has been communicated between said first node and said second node. 10. the set of instructions of claim 8, further comprising a queue routine for enqueuing said data and said transport protocol command in relation to said transport connection.
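the behavior claimed above, holding a close-connection command behind queued data so that it is executed only after the data has been communicated, can be sketched with a plain fifo queue; the class and method names below are illustrative assumptions, not the claimed implementation.

```python
from queue import Queue

# illustrative sketch of the claimed behavior: a close-connection command is
# enqueued behind pending data, so it is processed only after the data has
# been transmitted. class and method names are hypothetical assumptions.

class HoldingSender:
    def __init__(self):
        self.outbox = Queue()  # fifo queue standing in for the transport layer
        self.sent = []         # records what was "transmitted"
        self.closed = False

    def send(self, data):
        self.outbox.put(("data", data))

    def close(self):
        # hold the close command behind any queued data instead of closing now
        self.outbox.put(("close", None))

    def flush(self):
        # drain the queue in order: data first, then the held close command
        while not self.outbox.empty():
            kind, payload = self.outbox.get()
            if kind == "data":
                self.sent.append(payload)
            else:
                self.closed = True

sender = HoldingSender()
sender.send("telemetry-1")
sender.send("telemetry-2")
sender.close()              # enqueued, not executed yet
sender.flush()
print(sender.sent)          # prints: ['telemetry-1', 'telemetry-2']
print(sender.closed)        # prints: True
```

the point of the sketch is only the ordering guarantee: because the close command sits behind the data in the same fifo, no data queued before the close can be lost to an early teardown.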
systems and methods for close queuing to support quality of service the presently described technology generally relates to communications networks. more particularly, the presently described technology relates to systems and methods for protocol filtering for quality of service. communications networks are utilized in a variety of environments. communications networks typically include two or more nodes connected by one or more links. generally, a communications network is used to support communication between two or more participant nodes over the links and intermediate nodes in the communications network. there may be many kinds of nodes in the network. for example, a network may include nodes such as clients, servers, workstations, switches, and/or routers. links may be, for example, modem connections over phone lines, wires, ethernet links, asynchronous transfer mode (atm) circuits, satellite links, and/or fiber optic cables. a communications network may actually be composed of one or more smaller communications networks. for example, the internet is often described as a network of interconnected computer networks. each network may utilize a different architecture and/or topology. for example, one network may be a switched ethernet network with a star topology and another network may be a fiber-distributed data interface (fddi) ring. communications networks may carry a wide variety of data. for example, a network may carry bulk file transfers alongside data for interactive real-time conversations. the data sent on a network is often sent in packets, cells, or frames. alternatively, data may be sent as a stream. in some instances, a stream or flow of data may actually be a sequence of packets. networks such as the internet provide general purpose data paths between a range of nodes and carry a vast array of data with different requirements. communication over a network typically involves multiple levels of communication protocols.
a protocol stack, also referred to as a networking stack or protocol suite, refers to a collection of protocols used for communication. each protocol may be focused on a particular type of capability or form of communication. for example, one protocol may be concerned with the electrical signals needed to communicate with devices connected by a copper wire. other protocols may address ordering and reliable transmission between two nodes separated by many intermediate nodes, for example. protocols in a protocol stack typically exist in a hierarchy. often, protocols are classified into layers. one reference model for protocol layers is the open systems interconnection ("osi") model. the osi reference model includes seven layers: a physical layer, data link layer, network layer, transport layer, session layer, presentation layer, and application layer. the physical layer is the "lowest" layer, while the application layer is the "highest" layer. two well-known transport layer protocols are the transmission control protocol ("tcp") and user datagram protocol ("udp"). a well-known network layer protocol is the internet protocol ("ip"). at the transmitting node, data to be transmitted is passed down the layers of the protocol stack, from highest to lowest. conversely, at the receiving node, the data is passed up the layers, from lowest to highest. at each layer, the data may be manipulated by the protocol handling communication at that layer. for example, a transport layer protocol may add a header to the data that allows for ordering of packets upon arrival at a destination node. depending on the application, some layers may not be used, or even present, and data may just be passed through. one kind of communications network is a tactical data network. a tactical data network may also be referred to as a tactical communications network. a tactical data network may be utilized by units within an organization such as a military (e.g., army, navy, and/or air force).
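the layered encapsulation described above, where data is passed down the stack at the sender and back up at the receiver while each protocol adds or strips its own header, can be sketched as follows; the layer names and header strings are simplified assumptions for the example.

```python
# illustrative sketch of layered encapsulation: on the way down the stack each
# layer prepends its own header; on the way up each layer strips it again.
# layer names and header strings are simplified assumptions.

LAYERS = ["transport", "network", "link"]   # highest to lowest

def send_down(payload):
    """pass data down the stack; each layer adds its header."""
    for layer in LAYERS:
        payload = f"[{layer}-hdr]{payload}"
    return payload

def receive_up(frame):
    """pass a received frame up the stack; each layer strips its header."""
    for layer in reversed(LAYERS):          # lowest layer strips first
        header = f"[{layer}-hdr]"
        assert frame.startswith(header), f"missing {layer} header"
        frame = frame[len(header):]
    return frame

wire = send_down("hello")
print(wire)              # prints: [link-hdr][network-hdr][transport-hdr]hello
print(receive_up(wire))  # prints: hello
```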
nodes within a tactical data network may include, for example, individual soldiers, aircraft, command units, satellites, and/or radios. a tactical data network may be used for communicating data such as voice, position telemetry, sensor data, and/or real-time video. an example of how a tactical data network may be employed is as follows. a logistics convoy may be en route to provide supplies for a combat unit in the field. both the convoy and the combat unit may be providing position telemetry to a command post over satellite radio links. an unmanned aerial vehicle ("uav") may be patrolling along the road the convoy is taking and transmitting real-time video data to the command post over a satellite radio link also. at the command post, an analyst may be examining the video data while a controller is tasking the uav to provide video for a specific section of road. the analyst may then spot an improvised explosive device ("ied") that the convoy is approaching and send out an order over a direct radio link to the convoy for it to halt, alerting the convoy to the presence of the ied. the various networks that may exist within a tactical data network may have many different architectures and characteristics. for example, a network in a command unit may include a gigabit ethernet local area network ("lan") along with radio links to satellites and field units that operate with much lower throughput and higher latency. field units may communicate both via satellite and via direct path radio frequency ("rf"). data may be sent point-to-point, multicast, or broadcast, depending on the nature of the data and/or the specific physical characteristics of the network. a network may include radios, for example, set up to relay data. in addition, a network may include a high frequency ("hf") network which allows long-range communication. a microwave network may also be used, for example.
due to the diversity of the types of links and nodes, among other reasons, tactical networks often have overly complex network addressing schemes and routing tables. in addition, some networks, such as radio-based networks, may operate using bursts. that is, rather than continuously transmitting data, they send periodic bursts of data. this is useful because the radios are broadcasting on a particular channel that must be shared by all participants, and only one radio may transmit at a time. tactical data networks are generally bandwidth-constrained. that is, there is typically more data to be communicated than bandwidth available at any given point in time. these constraints may be due to either the demand for bandwidth exceeding the supply, and/or the available communications technology not supplying enough bandwidth to meet the user's needs, for example. for example, between some nodes, bandwidth may be on the order of kilobits/sec. in bandwidth-constrained tactical data networks, less important data can clog the network, preventing more important data from getting through in a timely fashion, or even arriving at a receiving node at all. in addition, portions of the networks may include internal buffering to compensate for unreliable links. this may cause additional delays. further, when the buffers get full, data may be dropped. in many instances the bandwidth available to a network cannot be increased. for example, the bandwidth available over a satellite communications link may be fixed and cannot effectively be increased without deploying another satellite. in these situations, bandwidth must be managed rather than simply expanded to handle demand. in large systems, network bandwidth is a critical resource. it is desirable for applications to utilize bandwidth as efficiently as possible. in addition, it is desirable that applications avoid "clogging the pipe," that is, overwhelming links with data, when bandwidth is limited.
when bandwidth allocation changes, applications should preferably react. bandwidth can change dynamically due to, for example, quality of service, jamming, signal obstruction, priority reallocation, and line-of-sight. networks can be highly volatile and available bandwidth can change dramatically and without notice. in addition to bandwidth constraints, tactical data networks may experience high latency. for example, a network involving communication over a satellite link may incur latency on the order of half a second or more. for some communications this may not be a problem, but for others, such as real-time, interactive communication (e.g., voice communications), it is highly desirable to minimize latency as much as possible. another characteristic common to many tactical data networks is data loss. data may be lost for a variety of reasons. for example, a node with data to send may be damaged or destroyed. as another example, a destination node may temporarily drop off of the network. this may occur because, for example, the node has moved out of range, the communication link is obstructed, and/or the node is being jammed. data may be lost because the destination node is not able to receive it and intermediate nodes lack sufficient capacity to buffer the data until the destination node becomes available. additionally, intermediate nodes may not buffer the data at all, instead leaving it to the sending node to determine if the data ever actually arrived at the destination. often, applications in a tactical data network are unaware of and/or do not account for the particular characteristics of the network. for example, an application may simply assume it has as much bandwidth available to it as it needs. as another example, an application may assume that data will not be lost in the network.
applications which do not take into consideration the specific characteristics of the underlying communications network may behave in ways that actually exacerbate problems. for example, an application may continuously send a stream of data that could just as effectively be sent less frequently in larger bundles. the continuous stream may incur much greater overhead in, for example, a broadcast radio network that effectively starves other nodes from communicating, whereas less frequent bursts would allow the shared bandwidth to be used more effectively. certain protocols do not work well over tactical data networks. for example, a protocol such as tcp may not function well over a radio-based tactical network because of the high loss rates and latency such a network may encounter. tcp requires several forms of handshaking and acknowledgments to occur in order to send data. high latency and loss may result in tcp hitting timeouts and not being able to send much, if any, meaningful data over such a network. information communicated with a tactical data network often has various levels of priority with respect to other data in the network. for example, threat warning receivers in an aircraft may have higher priority than position telemetry information for troops on the ground miles away. as another example, orders from headquarters regarding engagement may have higher priority than logistical communications behind friendly lines. the priority level may depend on the particular situation of the sender and/or receiver. for example, position telemetry data may be of much higher priority when a unit is actively engaged in combat as compared to when the unit is merely following a standard patrol route. similarly, real-time video data from a uav may have higher priority when it is over the target area as opposed to when it is merely en route. there are several approaches to delivering data over a network.
one approach, used by many communications networks, is a "best effort" approach. that is, data being communicated will be handled as well as the network can, given other demands, with regard to capacity, latency, reliability, ordering, and errors. thus, the network provides no guarantees that any given piece of data will reach its destination in a timely manner, or at all. additionally, no guarantees are made that data will arrive in the order sent or even without transmission errors changing one or more bits in the data. another approach is quality of service ("qos"). qos refers to one or more capabilities of a network to provide various forms of guarantees with regard to data that is carried. for example, a network supporting qos may guarantee a certain amount of bandwidth to a data stream. as another example, a network may guarantee that packets between two particular nodes have some maximum latency. such a guarantee may be useful in the case of a voice communication where the two nodes are two people having a conversation over the network. delays in data delivery in such a case may result in irritating gaps in communication and/or dead silence, for example. qos may be viewed as the capability of a network to provide better service to selected network traffic. the primary goal of qos is to provide priority including dedicated bandwidth, controlled jitter and latency (required by some real-time and interactive traffic), and improved loss characteristics. another important goal is making sure that providing priority for one flow does not make other flows fail. that is, guarantees made for subsequent flows must not break the guarantees made to existing flows. current approaches to qos often require every node in a network to support qos, or, at the very least, for every node in the network involved in a particular communication to support qos.
for example, in current systems, in order to provide a latency guarantee between two nodes, every node carrying the traffic between those two nodes must be aware of, agree to honor, and be capable of honoring the guarantee. there are several approaches to providing qos. one approach is integrated services, or "intserv." intserv provides a qos system wherein every node in the network supports the services and those services are reserved when a connection is set up. intserv does not scale well because of the large amount of state information that must be maintained at every node and the overhead associated with setting up such connections. another approach to providing qos is differentiated services, or "diffserv." diffserv is a class of service model that enhances the best-effort services of a network such as the internet. diffserv differentiates traffic by user, service requirements, and other criteria. then, diffserv marks packets so that network nodes can provide different levels of service via priority queuing or bandwidth allocation, or by choosing dedicated routes for specific traffic flows. typically, a node has a variety of queues for each class of service. the node then selects the next packet to send from those queues based on the class categories. existing qos solutions are often network specific and each network type or architecture may require a different qos configuration. due to the mechanisms existing qos solutions utilize, messages that look the same to current qos systems may actually have different priorities based on message content. however, data consumers may require access to high-priority data without being flooded by lower-priority data. existing qos systems cannot provide qos based on message content at the transport layer. as mentioned, existing qos solutions require at least the nodes involved in a particular communication to support qos.
however, the nodes at the "edge" of the network may be adapted to provide some improvement in qos, even if they are incapable of making total guarantees. nodes are considered to be at the edge of the network if they are the participating nodes in a communication (i.e., the transmitting and/or receiving nodes) and/or if they are located at chokepoints in the network. a chokepoint is a section of the network through which all traffic between one portion of the network and another must pass. for example, a router or gateway from a lan to a satellite link would be a chokepoint, since all traffic from the lan to any nodes not on the lan must pass through the gateway to the satellite link. if qos is provided for a tcp socket connection, for example, "open" and "close" commands are required for each connection. data may be queued for a connection in order to provide qos for that connection. when a tcp socket "close" is initiated by a communication application, any data that has been queued will be lost if the "close" is immediately honored. in current applications, the close is processed right away, and data may be lost if it is not processed prior to the close of the connection. thus, there is a need for systems and methods to minimize data loss with a tcp socket connection. thus, there is a need for systems and methods providing qos in a tactical data network. there is a need for systems and methods for providing qos on the edge of a tactical data network. additionally, there is a need for adaptive, configurable qos systems and methods in a tactical data network. embodiments of the present invention provide systems and methods for facilitating communication of data.
a method includes opening a connection between a first node and a second node in a network to communicate data between the first node and the second node and holding a transport protocol command in relation to the data being communicated between the first node and the second node via the connection such that the transport protocol command is processed after communication of the data is complete. holding may include enqueuing the transport protocol command behind the data such that the transport protocol command is executed with respect to the connection after the data has been communicated between the first and second nodes, for example. the transport protocol command may be held by manipulating a transport protocol layer of the network, for example. additionally, data may be enqueued at a transport protocol layer to prioritize communication of the data from the first node to the second node via the connection. the connection may include a transmission control protocol socket connection, for example. the transport protocol may include a transmission control protocol, for example. the network may be a tactical data network, for example, having a bandwidth constrained by an environment in which the network operates. the transport protocol command may include a close connection command, for example. certain embodiments provide a computer-readable medium having a set of instructions for execution on a processing device. the set of instructions includes a connection routine for establishing a transport connection between a first node and a second node to communicate data between the first node and the second node and a hold routine operating at a network transport layer for holding a transport protocol command in relation to the data being communicated between the first node and the second node via the transport connection, wherein the transport protocol command is processed after communication of the data. 
the hold routine may enqueue the transport protocol command in relation to the data being communicated between the first node and the second node via the transport connection, for example. the set of instructions may further include a queue routine, for example, for enqueuing the data and the transport protocol command in relation to the transport connection. the set of instructions may also include a prioritization routine, for example, for prioritizing communication of the data between the first node and the second node based on at least one rule. the transport connection may be established between the first node and the second node in a tactical data network, for example. certain embodiments provide a method for enqueuing transport protocol commands with data in a low-bandwidth network environment. the method may include receiving data for transmission via a network connection, enqueuing the data, enqueuing a transport protocol command related to the network connection, transmitting the data via the network connection, and transmitting the transport protocol command after transmission of the data. the data and the transport protocol command may be enqueued based at least in part on manipulating a transport protocol layer of a communication network, for example. the data may be prioritized based on at least one rule, such as a content-based rule and/or a protocol-based rule. the transport protocol command includes a close connection command, for example. fig. 1 illustrates a tactical communications network environment operating with an embodiment of the presently described technology. fig. 2 shows the positioning of the data communications system in the seven layer osi network model in accordance with an embodiment of the presently described technology. fig. 3 depicts an example of multiple networks facilitated using the data communications system in accordance with an embodiment of the presently described technology. fig. 
4 illustrates a data communication environment operating with an embodiment of the present invention. fig. 5 illustrates an example of a queue system for qos operating above the transport layer in accordance with an embodiment of the presently described technology. fig. 6 illustrates a flow diagram for a method for communicating data in accordance with an embodiment of the present invention. the foregoing summary, as well as the following detailed description of certain embodiments of the presently described technology, will be better understood when read in conjunction with the appended drawings. for the purpose of illustrating the presently described technology, certain embodiments are shown in the drawings. it should be understood, however, that the presently described technology is not limited to the arrangements and instrumentality shown in the attached drawings. fig. 1 illustrates a tactical communications network environment 100 operating with an embodiment of the presently described technology. the network environment 100 includes a plurality of communication nodes 110, one or more networks 120, one or more links 130 connecting the nodes and network(s), and one or more communication systems 150 facilitating communication over the components of the network environment 100. the following discussion assumes a network environment 100 including more than one network 120 and more than one link 130, but it should be understood that other environments are possible and anticipated. communication nodes 110 may be and/or include radios, transmitters, satellites, receivers, workstations, servers, and/or other computing or processing devices, for example. network(s) 120 may be hardware and/or software for transmitting data between nodes 110, for example. network(s) 120 may include one or more nodes 110, for example. link(s) 130 may be wired and/or wireless connections to allow transmissions between nodes 110 and/or network(s) 120.
the communications system 150 may include software, firmware, and/or hardware used to facilitate data transmission among the nodes 110, networks 120, and links 130, for example. as illustrated in fig. 1, communications system 150 may be implemented with respect to the nodes 110, network(s) 120, and/or links 130. in certain embodiments, every node 110 includes a communications system 150. in certain embodiments, one or more nodes 110 include a communications system 150. in certain embodiments, one or more nodes 110 may not include a communications system 150. the communication system 150 provides dynamic management of data to help assure communications on a tactical communications network, such as the network environment 100. as shown in fig. 2, in certain embodiments, the system 150 operates as part of and/or at the top of the transport layer in the osi seven layer protocol model. the system 150 may give precedence to higher priority data in the tactical network passed to the transport layer, for example. the system 150 may be used to facilitate communications in a single network, such as a local area network (lan) or wide area network (wan), or across multiple networks. an example of a multiple network system is shown in fig. 3. the system 150 may be used to manage available bandwidth rather than add additional bandwidth to the network, for example. in certain embodiments, the system 150 is a software system, although the system 150 may include both hardware and software components in various embodiments. the system 150 may be network hardware independent, for example. that is, the system 150 may be adapted to function on a variety of hardware and software platforms. in certain embodiments, the system 150 operates on the edge of the network rather than on nodes in the interior of the network. however, the system 150 may operate in the interior of the network as well, such as at "chokepoints" in the network.
the system 150 may use rules and modes or profiles to perform throughput management functions such as optimizing available bandwidth, setting information priority, and managing data links in the network. by "optimizing" bandwidth, it is meant, for example, that the presently described technology may be employed to increase an efficiency of bandwidth use to communicate data in one or more networks. optimizing bandwidth usage may include removing functionally redundant messages, message stream management or sequencing, and message compression, for example. setting information priority may include differentiating message types at a finer granularity than internet protocol (ip) based techniques and sequencing messages onto a data stream via a selected rule-based sequencing algorithm, for example. data link management may include rule-based analysis of network measurements to effect changes in rules, modes, and/or data transports, for example. a mode or profile may include a set of rules related to the operational needs for a particular network state of health or condition. the system 150 provides dynamic, "on-the-fly" reconfiguration of modes, including defining and switching to new modes on the fly. the communication system 150 may be configured to accommodate changing priorities and grades of service, for example, in a volatile, bandwidth-limited network. the system 150 may be configured to manage information for improved data flow to help increase response capabilities in the network and reduce communications latency. additionally, the system 150 may provide interoperability via a flexible architecture that is upgradeable and scalable to improve availability, survivability, and reliability of communications. the system 150 supports a data communications architecture that may be autonomously adaptable to dynamically changing environments while using predefined and predictable system resources and bandwidth, for example.
in certain embodiments, the system 150 provides throughput management to bandwidth-constrained tactical communications networks while remaining transparent to applications using the network. the system 150 provides throughput management across multiple users and environments at reduced complexity to the network. as mentioned above, in certain embodiments, the system 150 runs on a host node in and/or at the top of layer four (the transport layer) of the osi seven layer model and does not require specialized network hardware. the system 150 may operate transparently to the layer four interface. that is, an application may utilize a standard interface for the transport layer and be unaware of the operation of the system 150. for example, when an application opens a socket, the system 150 may filter data at this point in the protocol stack. the system 150 achieves transparency by allowing applications to use, for example, the tcp/ip socket interface that is provided by an operating system at a communication device on the network rather than an interface specific to the system 150. system 150 rules may be written in extensible markup language (xml) and/or provided via custom dynamic link libraries (dlls), for example. in certain embodiments, the system 150 provides quality of service (qos) on the edge of the network. the system's qos capability offers content-based, rule-based data prioritization on the edge of the network, for example. prioritization may include differentiation and/or sequencing, for example. the system 150 may differentiate messages into queues based on user-configurable differentiation rules, for example. the messages are sequenced into a data stream in an order dictated by the user-configured sequencing rule (e.g., starvation, round robin, relative frequency, etc.). using qos on the edge, data messages that are indistinguishable by traditional qos approaches may be differentiated based on message content, for example. 
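the content-based differentiation and sequencing behavior described above can be sketched in a few lines of code. this is an illustrative sketch only, not the system 150 implementation: the priority classes, the keyword-based differentiation rule, and the choice of round-robin sequencing are assumptions made for the example.

```python
from collections import deque

class EdgeQosQueues:
    # Hypothetical sketch: messages are differentiated into queues by a
    # content rule, then sequenced onto the outbound data stream.
    def __init__(self, priorities=("high", "medium", "low")):
        self.queues = {p: deque() for p in priorities}

    def differentiate(self, message):
        # Content-based differentiation rule (invented keywords): inspect
        # the message payload rather than only protocol headers.
        if b"THREAT" in message:
            return "high"
        if b"TELEMETRY" in message:
            return "medium"
        return "low"

    def enqueue(self, message):
        self.queues[self.differentiate(message)].append(message)

    def round_robin(self):
        # One possible sequencing rule: sample each queue front in turn,
        # producing the outbound data stream.
        stream = []
        while any(self.queues.values()):
            for q in self.queues.values():
                if q:
                    stream.append(q.popleft())
        return stream
```

other sequencing rules mentioned in the text (starvation, relative frequency) would simply replace the `round_robin` loop with a different sampling order over the same queues.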
rules may be implemented in xml, for example. in certain embodiments, to accommodate capabilities beyond xml and/or to support extremely low latency requirements, the system 150 allows dynamic link libraries to be provided with custom code, for example. inbound and/or outbound data on the network may be customized via the system 150. prioritization protects client applications from high-volume, low-priority data, for example. the system 150 helps to ensure that applications receive data to support a particular operational scenario or constraint. in certain embodiments, when a host is connected to a lan that includes a router as an interface to a bandwidth-constrained tactical network, the system may operate in a configuration known as qos by proxy. in this configuration, packets that are bound for the local lan bypass the system and immediately go to the lan. the system applies qos on the edge of the network to packets bound for the bandwidth-constrained tactical link. in certain embodiments, the system 150 offers dynamic support for multiple operational scenarios and/or network environments via commanded profile switching. a profile may include a name or other identifier that allows the user or system to change to the named profile. a profile may also include one or more identifiers, such as a functional redundancy rule identifier, a differentiation rule identifier, an archival interface identifier, a sequencing rule identifier, a pre-transmit interface identifier, a post-transmit interface identifier, a transport identifier, and/or other identifiers, for example. a functional redundancy rule identifier specifies a rule that detects functional redundancy, such as from stale data or substantially similar data, for example. a differentiation rule identifier specifies a rule that differentiates messages into queues for processing, for example. an archival interface identifier specifies an interface to an archival system, for example.
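a profile of the kind described above might be modeled, purely for illustration, as a named record of rule identifiers. the field names mirror the identifiers listed in the text, while the profile names and identifier values are invented for this sketch.

```python
from dataclasses import dataclass

@dataclass
class Profile:
    # Hypothetical profile record; fields follow the identifiers named
    # in the text, values are illustrative only.
    name: str
    differentiation_rule: str
    sequencing_rule: str
    functional_redundancy_rule: str
    transport: str

# Two invented profiles for different operational scenarios.
profiles = {
    "garrison": Profile("garrison", "by_type", "round_robin",
                        "drop_exact_duplicates", "lan"),
    "engaged": Profile("engaged", "by_threat", "starvation",
                       "supersede_position", "satcom"),
}

def switch_profile(name):
    # Commanded profile switching: activate the named rule set.
    return profiles[name]
```

switching profiles then amounts to looking up a different named rule set, which is one way the "on-the-fly" reconfiguration described earlier could be realized.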
a sequencing rule identifier identifies a sequencing algorithm that controls the sampling of queue fronts and, therefore, the sequencing of the data on the data stream. a pre-transmit interface identifier specifies the interface for pre-transmit processing, which provides for special processing such as encryption and compression, for example. a post-transmit interface identifier identifies an interface for post-transmit processing, which provides for processing such as decryption and decompression, for example. a transport identifier specifies a network interface for the selected transport. a profile may also include other information, such as queue sizing information, for example. queue sizing information identifies a number of queues and an amount of memory and secondary storage dedicated to each queue, for example. in certain embodiments, the system 150 provides a rules-based approach for optimizing bandwidth. for example, the system 150 may employ queue selection rules to differentiate messages into message queues so that messages may be assigned a priority and an appropriate relative frequency on the data stream. the system 150 may use functional redundancy rules to manage functionally redundant messages. a message is functionally redundant if it is not different enough (as defined by the rule) from a previous message that has not yet been sent on the network, for example. that is, if a new message is provided that is not sufficiently different from an older message that has already been scheduled to be sent, but has not yet been sent, the newer message may be dropped, since the older message will carry functionally equivalent information and is further ahead in the queue. in addition, functional redundancy may include actual duplicate messages and newer messages that arrive before an older message has been sent.
for example, a node may receive identical copies of a particular message due to characteristics of the underlying network, such as a message that was sent by two different paths for fault tolerance reasons. as another example, a new message may contain data that supersedes an older message that has not yet been sent. in this situation, the system 150 may drop the older message and send only the new message. the system 150 may also include priority sequencing rules to determine a priority-based message sequence of the data stream. additionally, the system 150 may include transmission processing rules to provide pre-transmission and post-transmission special processing, such as compression and/or encryption. in certain embodiments, the system 150 provides fault tolerance capability to help protect data integrity and reliability. for example, the system 150 may use user-defined queue selection rules to differentiate messages into queues. the queues are sized according to a user-defined configuration, for example. the configuration specifies a maximum amount of memory a queue may consume, for example. additionally, the configuration may allow the user to specify a location and amount of secondary storage that may be used for queue overflow. after the memory in the queues is filled, messages may be queued in secondary storage. when the secondary storage is also full, the system 150 may remove the oldest message in the queue, log an error message, and queue the newest message. if archiving is enabled for the operational mode, then the de-queued message may be archived with an indicator that the message was not sent on the network. memory and secondary storage for queues in the system 150 may be configured on a per-link basis for a specific application, for example. a longer time between periods of network availability may correspond to more memory and secondary storage to support network outages.
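the functional redundancy behavior described above, where a newer message supersedes an older, not-yet-sent one, can be sketched as follows. the subject-based redundancy test here is a made-up example rule, not one defined by the system; the text leaves the actual rule user-configurable.

```python
def enqueue_with_redundancy(queue, message, is_redundant):
    # Drop any queued (not-yet-sent) message that the new message
    # functionally supersedes, then enqueue the new message.
    queue[:] = [m for m in queue if not is_redundant(m, message)]
    queue.append(message)

# Invented example rule: two messages are functionally redundant if they
# share the same subject prefix (e.g., position reports for one unit).
subject = lambda m: m.split(":")[0]
rule = lambda old, new: subject(old) == subject(new)

queue = []
enqueue_with_redundancy(queue, "pos:unit1 at (10, 20)", rule)
enqueue_with_redundancy(queue, "order:halt convoy", rule)
enqueue_with_redundancy(queue, "pos:unit1 at (11, 21)", rule)  # supersedes the first
```

after the third call, only the order and the newest position report remain queued; the stale position report was dropped before ever consuming link bandwidth.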
the system 150 may be integrated with network modeling and simulation applications, for example, to help identify sizing, ensuring that queues are sized appropriately and that the time between outages is sufficient to achieve steady state and avoid eventual queue overflow. furthermore, in certain embodiments, the system 150 offers the capability to meter inbound ("shaping") and outbound ("policing") data. policing and shaping capabilities help address mismatches in timing in the network. shaping helps to prevent network buffers from flooding with high-priority data queued up behind lower-priority data. policing helps to prevent application data consumers from being overrun by low-priority data. policing and shaping are governed by two parameters: effective link speed and link proportion. the system 150 may form a data stream that is no more than the effective link speed multiplied by the link proportion, for example. the parameters may be modified dynamically as the network changes. the system may also provide access to detected link speed to support application level decisions on data metering. information provided by the system 150 may be combined with other network operations information to help decide what link speed is appropriate for a given network scenario. in certain embodiments, qos may be provided to a communication network above the transport layer of the osi protocol model. specifically, qos technology may be implemented just below the socket layer of a transport protocol connection. the transport protocol may include a transmission control protocol (tcp), user datagram protocol (udp), or stream control transmission protocol (sctp), for example. as another example, the protocol type may include internet protocol (ip), internetwork packet exchange (ipx), ethernet, asynchronous transfer mode (atm), file transfer protocol (ftp), and/or real-time transport protocol (rtp).
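the metering rule described above, a data stream held to no more than the effective link speed multiplied by the link proportion, could be realized with a token-bucket style budget. the mechanism and names below are illustrative assumptions, not the system 150 implementation.

```python
class Meter:
    # Sketch of rate metering: the sustained rate is bounded by
    # effective_link_speed * link_proportion (bytes per second).
    def __init__(self, effective_link_speed, link_proportion):
        self.rate = effective_link_speed * link_proportion
        self.tokens = 0.0

    def tick(self, seconds):
        # Accrue transmission budget as time passes; both parameters
        # could be updated dynamically as the network changes.
        self.tokens += self.rate * seconds

    def try_send(self, message):
        # Send only if enough budget has accrued; otherwise hold.
        if len(message) <= self.tokens:
            self.tokens -= len(message)
            return True
        return False
```

the same mechanism can face either direction: metering outbound data corresponds to one of the capabilities described, and metering inbound data to the other.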
for purposes of illustration, one or more examples will be provided using tcp. since tcp is connection-oriented, sockets are opened and closed via "open" and "close" commands to begin and end a data communication connection between nodes or other network elements. when a tcp socket is closed by an application that is utilizing qos, prioritized data queued for transmission should be sent before the close command is executed by the network system. otherwise, data that has been queued may be lost if the "close" is immediately honored by the system. to do this, the "close" is queued until relevant data is sent, and then the close is processed after the data has been transmitted via the connection. thus, unlike a traditional tcp connection, a close command may be queued with data to allow coordinated processing and transmission of data related to the open connection before the connection is terminated in response to the close command. existing qos solutions in other network environments are implemented below the network layer, which precludes "close" command queuing or holding. certain embodiments provide a mechanism to queue or otherwise hold the "close" commands so qos technology may be implemented above the transport layer, which allows for data inspection and discrimination, for example. for example, implementing qos solutions above the transport layer in tcp helps provide an ability to discriminate or differentiate data for qos processing that is unavailable below the network layer. in certain embodiments, a transport protocol is modified or otherwise manipulated so as to enable queuing or otherwise holding of system commands, such as close connection commands, in addition to data. for example, certain embodiments queue up or otherwise hold/store the transport protocol mechanism along with the data to maintain an order between the protocol mechanism and associated data.
for example, a tcp close command is identified and queued with associated data for a tcp socket connection so that the associated data is processed and transmitted via the connection before the close command is processed to terminate the connection. by operating above the transport layer, certain embodiments are able to identify protocol mechanisms, such as a close command, and manipulate the mechanisms. in contrast, protocol mechanisms and data below the transport layer are segmented and compacted, and it is difficult to apply rules to manipulate a protocol mechanism, such as a close command, in relation to data. fig. 4 illustrates a data communication environment 400 operating with an embodiment of the present invention. the environment 400 includes a data communication system 410, one or more source nodes 420, and one or more destination nodes 430. the data communication system 410 is in communication with the source node(s) 420 and the destination node(s) 430. the data communication system 410 may communicate with the source node(s) 420 and/or destination node(s) 430 over links, such as radio, satellite, network links, and/or through inter-process communication, for example. in certain embodiments, the data communication system 410 may communicate with one or more source nodes 420 and/or destination nodes 430 over one or more tactical data networks. the data communication system 410 may be similar to the communication system 150, described above, for example. in certain embodiments, the data communication system 410 is adapted to receive data from the one or more source nodes 420. in certain embodiments, the data communication system 410 may include one or more queues for holding, storing, organizing, and/or prioritizing the data. alternatively, other data structures may be used for holding, storing, organizing, and/or prioritizing the data. for example, a table, tree, or linked list may be used.
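the close-command queuing described above can be sketched as follows: the "close" is enqueued behind pending data at the qos layer, so the underlying socket is only closed after the queued data has been sent. the class and callback names here are assumptions for illustration, not the actual implementation.

```python
from collections import deque

CLOSE = object()  # sentinel representing the transport "close" command

class QosConnection:
    # Sketch: a qos layer sitting just below the socket interface that
    # holds both data and the close command in one ordered queue.
    def __init__(self, send, close):
        self._send, self._close = send, close
        self._queue = deque()

    def write(self, data):
        self._queue.append(data)

    def request_close(self):
        # Rather than honoring the close immediately (and losing queued
        # data), enqueue it behind the data already queued.
        self._queue.append(CLOSE)

    def flush(self):
        # Drain the queue in order: all data goes out before the close
        # command is finally executed against the connection.
        while self._queue:
            item = self._queue.popleft()
            if item is CLOSE:
                self._close()
            else:
                self._send(item)
```

in use, an application's close request simply becomes one more queue entry, which preserves the ordering guarantee the text describes: data first, termination last.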
in certain embodiments, the data communication system 410 is adapted to communicate data to the one or more destination nodes 430. the data received, stored, prioritized, processed, communicated, and/or otherwise transmitted by data communication system 410 may include a block of data. the block of data may be, for example, a packet, cell, frame, and/or stream of data. for example, the data communication system 410 may receive packets of data from a source node 420. as another example, the data communication system 410 may process a stream of data from a source node 420. in certain embodiments, data includes a header and a payload. the header may include protocol information and time stamp information, for example. in certain embodiments, protocol information, time stamp information, content, and other information may be included in the payload. in certain embodiments, the data may or may not be contiguous in memory. that is, one or more portions of the data may be located in different regions of memory. in certain embodiments, data may include a pointer to another location containing data, for example. source node(s) 420 provide and/or generate, at least in part, data handled by the data communication system 410. a source node 420 may include, for example, an application, radio, satellite, or network. the source node 420 may communicate with the data communication system 410 over a link, as discussed above. source node(s) 420 may generate a continuous stream of data or may burst data, for example. in certain embodiments, the source node 420 and the data communication system 410 are part of the same system. for example, the source node 420 may be an application running on the same computer system as the data communication system 410. destination node(s) 430 receive data handled by the data communication system 410. a destination node 430 may include, for example, an application, radio, satellite, or network. 
the destination node 430 may communicate with the data communication system 410 over a link, as discussed above. in certain embodiments, the destination node 430 and the data communication system 410 are part of the same system. for example, the destination node 430 may be an application running on the same computer system as the data communication system 410. the data communication system 410 may communicate with one or more source nodes 420 and/or destination nodes 430 over links, as discussed above. in certain embodiments, the one or more links may be part of a tactical data network. in certain embodiments, one or more links may be bandwidth constrained. in certain embodiments, one or more links may be unreliable and/or intermittently disconnected. in certain embodiments, a transport protocol, such as tcp, opens a connection between sockets at a source node 420 and a destination node 430 to transmit data on a link from the source node 420 to the destination node 430. in operation, data is provided and/or generated by one or more data sources 420. the data is received at the data communication system 410. the data may be received over one or more links, for example. for example, data may be received at the data communication system 410 from a radio over a tactical data network. as another example, data may be provided to the data communication system 410 by an application running on the same system by an inter-process communication mechanism. as discussed above, the data may be a block of data, for example. in certain embodiments, the data communication system 410 may organize and/or prioritize the data. in certain embodiments, the data communication system 410 may determine a priority for a block of data. for example, when a block of data is received by the data communication system 410, a prioritization component of the data communication system 410 may determine a priority for that block of data. 
as another example, a block of data may be stored in a queue in the data communication system 410 and a prioritization component may extract the block of data from the queue based on a priority determined for the block of data and/or for the queue. the prioritization of the data by the data communication system 410 may be used to provide qos, for example. for example, the data communication system 410 may determine a priority for data received over a tactical data network. the priority may be based on the source address of the data, for example. for example, a source ip address for the data from a radio of a member of the same platoon as the platoon the data communication system 410 belongs to may be given a higher priority than data originating from a unit in a different division in a different area of operations. the priority may be used to determine which of a plurality of queues the data should be placed into for subsequent communication by the data communication system 410. for example, higher priority data may be placed in a queue intended to hold higher priority data, and in turn, the data communication system 410, in determining what data to next communicate, may look first to the higher priority queue. the data may be prioritized based at least in part on one or more rules. as discussed above, the rules may be user defined. in certain embodiments, rules may be written in extensible markup language ("xml") and/or provided via custom dynamically linked libraries ("dlls"), for example. rules may be used to differentiate and/or sequence data on a network, for example. a rule may specify, for example, that data received using one protocol be favored over data utilizing another protocol. for example, command data may utilize a particular protocol that is given priority, via a rule, over position telemetry data sent using another protocol.
as another example, a rule may specify that position telemetry data coming from a first range of addresses may be given priority over position telemetry data coming from a second range of addresses. the first range of addresses may represent ip addresses of other aircraft in the same squadron as the aircraft with the data communication system 410 running on it, for example. the second range of addresses may then represent, for example, ip addresses for other aircraft that are in a different area of operations, and therefore of less interest to the aircraft on which the data communication system 410 is running. in certain embodiments, the data communication system 410 does not drop data. that is, although data may be low priority, it is not dropped by the data communication system 410. rather, the data may be delayed for a period of time, potentially dependent on the amount of higher priority data that is received. in certain embodiments, data may be queued or otherwise stored, for example, to help ensure that the data is not lost or dropped until bandwidth is available to send the data. in certain embodiments, the data communication system 410 includes a mode or profile indicator. the mode indicator may represent the current mode or state of the data communication system 410, for example. as discussed above, the data communications system 410 may use rules and modes or profiles to perform throughput management functions such as optimizing available bandwidth, setting information priority, and managing data links in the network. the different modes may affect changes in rules, modes, and/or data transports, for example. a mode or profile may include a set of rules related to the operational needs for a particular network state of health or condition. the data communication system 410 may provide dynamic reconfiguration of modes, including defining and switching to new modes "on-the-fly," for example.
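a rule of the kind described above can be sketched as a simple priority function. the address ranges, protocol labels, and priority values below are invented for illustration and are not taken from any real configuration:

```python
import ipaddress

# hypothetical address ranges, chosen only for this example
SQUADRON_NET = ipaddress.ip_network("10.1.0.0/16")  # own squadron
OTHER_AO_NET = ipaddress.ip_network("10.9.0.0/16")  # different area of operations

def priority_for(block):
    """Lower value = transmitted sooner. Encodes two example rules:
    command-protocol data beats telemetry, and telemetry from the first
    address range beats telemetry from the second."""
    if block["protocol"] == "command":
        return 0
    src = ipaddress.ip_address(block["src"])
    if src in SQUADRON_NET:
        return 1
    if src in OTHER_AO_NET:
        return 2
    return 3

blocks = [
    {"src": "10.9.3.4", "protocol": "telemetry"},
    {"src": "10.1.2.3", "protocol": "telemetry"},
    {"src": "10.1.2.9", "protocol": "command"},
]
# low-priority data is delayed (ordered later), never dropped
ordered = sorted(blocks, key=priority_for)
```

note that sorting defers low-priority blocks rather than discarding them, matching the no-drop behavior described above.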
in certain embodiments, the data communication system 410 is transparent to other applications. for example, the processing, organizing, and/or prioritization performed by the data communication system 410 may be transparent to one or more source nodes 420 or other applications or data sources. for example, an application running on the same system as data communication system 410, or on a source node 420 connected to the data communication system 410, may be unaware of the prioritization of data performed by the data communication system 410. data is communicated via the data communication system 410. the data may be communicated to one or more destination nodes 430, for example. the data may be communicated over one or more links, for example. for example, the data may be communicated by the data communication system 410 over a tactical data network to a radio. as another example, data may be provided by the data communication system 410 to an application running on the same system by an inter-process communication mechanism. as discussed above, the components, elements, and/or functionality of the data communication system 410 may be implemented alone or in combination in various forms in hardware, firmware, and/or as a set of instructions in software, for example. certain embodiments may be provided as a set of instructions residing on a computer-readable medium, such as a memory, hard disk, dvd, or cd, for execution on a general purpose computer or other processing device. fig. 5 illustrates an example of a queue system 500 for qos operating above the transport layer in accordance with an embodiment of the presently described technology. while fig. 5 is illustrated and described in terms of a queue, it is understood that alternative data structures may be used to hold data and protocol mechanisms similar to the queue system 500. the queue system 500 includes one or more queues 510-515.
the queue 510-515 includes an enqueue pointer 520 and a dequeue pointer 530. the queue 510-515 may also include data 540-541 and/or a close command 550, for example. in certain embodiments, the data 540-541 may include contiguous or noncontiguous portions of data. in certain embodiments, the data 540-541 may include one or more pointers to other locations containing data. as shown in fig. 5, queue 510 first illustrates an empty queue with no data enqueued. then, one block of data 540 is enqueued in the queue 511. next, queue 512 has two blocks of data 540-541 enqueued. then, a close command 550 has been enqueued along with two blocks of data 540-541 in queue 513. the data blocks 540-541 are processed in the network while the close command 550 remains behind them in the queue 514. in certain embodiments, the data blocks 540-541 may be processed and transmitted in an order other than the order in which the blocks 540-541 were enqueued. then, as shown in queue 515, the close command 550 is removed from the queue 515 and processed to close the data connection. thus, for example, a system, such as data communication system 410, may manage a connection opened between a source node 420 and a destination node 430. system 410 may enqueue data transmitted via the connection and also enqueue protocol commands, such as transport protocol commands (e.g., "open connection" commands and "close connection" commands). protocol commands may be associated with a certain connection between nodes and with certain data. the system 410 helps to ensure that data associated with the connection is transmitted and/or otherwise processed before the protocol command is processed. thus, for example, data being transmitted via a tcp socket connection between source node 420 and destination node 430 is transmitted via the socket connection before a close command is processed to terminate the connection.
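the queue progression of fig. 5 can be traced with a plain deque and a sentinel object standing in for the close command 550. this is a sketch of the ordering behavior only, with block labels invented to match the figure's reference numerals:

```python
from collections import deque

CLOSE = object()                 # sentinel for the close command 550

q = deque()                      # queue 510: empty
q.append("data block 540")       # queue 511: one block enqueued
q.append("data block 541")       # queue 512: two blocks enqueued
q.append(CLOSE)                  # queue 513: close enqueued behind the data

transmitted = []
connection_open = True
while q:
    item = q.popleft()
    if item is CLOSE:            # queue 515: close processed only after
        connection_open = False  # all data has left the queue
    else:
        transmitted.append(item) # queue 514: data blocks processed first
```

because the close sentinel sits behind the data in the same queue, both blocks are transmitted before the connection state flips to closed.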
the close command is enqueued behind the data for the connection, and, although the data for the connection may be processed and/or transmitted in varying orders depending upon priority and/or other rules, the close command is not processed until completion of the data processing. once data for the connection has been processed, the close command is processed to terminate the tcp socket connection. in one embodiment, for example, a bandwidth-constrained network, such as a tactical data network, includes at least two communication nodes, such as an aircraft radio and a ground troop radio. the aircraft may transmit a message to the ground radio by activating or opening a tcp socket connection, for example, between the aircraft radio and the ground radio. transmission of data between the aircraft radio and the ground radio then begins. data is enqueued or otherwise temporarily stored during the transmission process in order to prioritize the data based on content, protocol, and/or other criteria. when the aircraft radio generates a close connection command to end the communication, the close command is stored or enqueued after the data to ensure that the data is prioritized and transmitted to the ground radio before the close command. thus, the system helps to ensure that the communication connection is not prematurely ended, and data thereby lost, due to premature processing of the close command. however, other environment conditions may result in termination and/or interruption of the communication connection. thus, queues and/or other data storage may be used to buffer data for resumed transmission in the event of an interruption in the communication connection. fig. 6 illustrates a flow diagram for a method 600 for communicating data in accordance with an embodiment of the present invention. the method 600 includes the following steps, which will be described below in more detail. at step 610, a connection is opened. at step 620, data is received.
at step 630, data is enqueued. at step 640, a close command is enqueued. at step 650, data is dequeued and transmitted. at step 660, the close command is dequeued and executed. the method 600 is described with reference to elements of systems described above, but it should be understood that other implementations are possible. at step 610, a connection is opened. for example, a connection is opened between two nodes in a communications network. for example, a tcp connection may be opened between node sockets. at step 620, data is received. data may be received at the data communication system 410, for example. the data may be received over one or more links, for example. the data may be provided and/or generated by one or more data sources 420, for example. for example, data may be received at the data communication system 410 from a radio over a tactical data network. as another example, data may be provided to the data communication system 410 by an application running on the same system by an inter-process communication mechanism. as discussed above, the data may be a block of data, for example. in certain embodiments, the data communication system 410 may not receive all of the data. for example, some of the data may be stored in a buffer and the data communication system 410 may receive only header information and a pointer to the buffer. for example, the data communication system 410 may be hooked into the protocol stack of an operating system, and, when an application passes data to the operating system through a transport layer interface (e.g., sockets), the operating system may then provide access to the data to the data communication system 410. at step 630, data is enqueued. data may be enqueued by data communication system 410, for example. the data may be enqueued based on one or more rules or priorities established by the system 410, protocol used, and/or other mechanism, for example.
the data may be enqueued in the order in which it was received and/or in an alternate order, for example. in certain embodiments, data may be stored in one or more queues. the one or more queues may be assigned differing priorities and/or differing processing rules, for example. data in the one or more queues may be prioritized. the data may be prioritized and/or organized by data communication system 410, for example. the data to be prioritized may be the data that is received at step 620, for example. data may be prioritized before and/or after the data is enqueued, for example. in certain embodiments, the data communication system 410 may determine a priority for a block of data. for example, when a block of data is received by the data communication system 410, a prioritization component of the data communication system 410 may determine a priority for that block of data. as another example, a block of data may be stored in a queue in the data communication system 410 and a prioritization component may extract the block of data from the queue based on a priority determined for the block of data and/or for the queue. the priority of the block of data may be based at least in part on protocol information associated with and/or included in the block of data. the protocol information may be similar to the protocol information described above, for example. for example, the data communication system 410 may determine a priority for a block of data based on the source address of the block of data. as another example, the data communication system 410 may determine a priority for a block of data based on the transport protocol used to communicate the block of data. data priority may also be determined based at least in part on data content, for example. the prioritization of the data may be used to provide qos, for example. for example, the data communication system 410 may determine a priority for data received over a tactical data network.
the priority may be based on the source address of the data, for example. for example, a source ip address for the data from a radio of a member of the same platoon as the platoon the data communication system 410 belongs to may be given a higher priority than data originating from a unit in a different division in a different area of operations. the priority may be used to determine which of a plurality of queues the data should be placed into for subsequent communication by the data communication system 410. for example, higher priority data may be placed in a queue intended to hold higher priority data, and in turn, the data communication system 410, in determining what data to next communicate, may look first to the higher priority queue. the data may be prioritized based at least in part on one or more rules. as discussed above, the rules may be user defined and/or programmed based on system and/or operational constraints, for example. in certain embodiments, rules may be written in xml and/or provided via custom dlls, for example. a rule may specify, for example, that data received using one protocol be favored over data utilizing another protocol. for example, command data may utilize a particular protocol that is given priority, via a rule, over position telemetry data sent using another protocol. as another example, a rule may specify that position telemetry data coming from a first range of addresses may be given priority over position telemetry data coming from a second range of addresses. the first range of addresses may represent ip addresses of other aircraft in the same squadron as the aircraft with the data communication system 410 running on it, for example. the second range of addresses may then represent, for example, ip addresses for other aircraft that are in a different area of operations, and therefore of less interest to the aircraft on which the data communication system 410 is running. 
in certain embodiments, the data to be prioritized is not dropped. that is, although data may be low priority, it is not dropped by the data communication system 410. rather, the data may be delayed for a period of time, potentially dependent on the amount of higher priority data that is received. in certain embodiments, a mode or profile indicator may represent the current mode or state of the data communication system 410, for example. as discussed above, the rules and modes or profiles may be used to perform throughput management functions such as optimizing available bandwidth, setting information priority, and managing data links in the network. the different modes may affect changes in rules, modes, and/or data transports, for example. a mode or profile may include a set of rules related to the operational needs for a particular network state of health or condition. the data communication system 410 may provide dynamic reconfiguration of modes, including defining and switching to new modes "on-the-fly," for example. in certain embodiments, the prioritization of data is transparent to other applications. for example, the processing, organizing, and/or prioritization performed by the data communication system 410 may be transparent to one or more source nodes 420 or other applications or data sources. for example, an application running on the same system as data communication system 410, or on a source node 420 connected to the data communication system 410, may be unaware of the prioritization of data performed by the data communication system 410. at step 640, a system or protocol command, such as a transport protocol open or close command, is enqueued. thus, a protocol mechanism, such as a tcp close command, may be manipulated to be stored in one or more queues along with data. in certain embodiments, a close command for a connection may be stored in the same queue as data associated with the connection.
alternatively, the command may be stored in a different queue from associated data. at step 650, data is dequeued. the data may be dequeued and transmitted, for example. the data dequeued may be the data received at step 620, for example. the data dequeued may be the data enqueued at step 630, for example. data may be prioritized before and/or during transmission, as described above. data may be communicated from the data communication system 410, for example. the data may be transmitted to one or more destination nodes 430, for example. the data may be communicated over one or more links, for example. for example, the data may be communicated by the data communication system 410 over a tactical data network to a radio. as another example, data may be provided by the data communication system 410 to an application running on the same system by an inter-process communication mechanism. data may be transmitted via a tcp socket connection, for example. at step 660, a command is dequeued. the command may be the command enqueued at step 640, for example. in certain embodiments, the command is dequeued after data associated with the command and/or a connection associated with the command has been dequeued and transmitted. for example, a close connection command may be dequeued after data associated with the connection has been dequeued and transmitted via the connection. one or more of the steps of the method 600 may be implemented alone or in combination in hardware, firmware, and/or as a set of instructions in software, for example. certain embodiments may be provided as a set of instructions residing on a computer-readable medium, such as a memory, hard disk, dvd, or cd, for execution on a general purpose computer or other processing device. certain embodiments of the present invention may omit one or more of these steps and/or perform the steps in a different order than the order listed.
for example, some steps may not be performed in certain embodiments of the present invention. as a further example, certain steps may be performed in a different temporal order, including simultaneously, than listed above. thus, certain embodiments of the present invention provide systems and methods for queuing data and protocol mechanism commands for qos. certain embodiments provide a technical effect of helping to ensure that a connection is not prematurely closed by a protocol command before qos and data transmission is completed for that connection.
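steps 610-660 of method 600 can be sketched end to end with a priority queue, showing that data may leave in a priority order different from arrival order while the close command is still executed last. the function name, priorities, and block contents below are invented for illustration:

```python
import heapq

def run_method_600(arrivals):
    """arrivals: list of (priority, block) pairs in arrival order.
    Returns (transmitted, closed): data sent lowest-priority-value
    first, and the close command executed only after the queue drains."""
    heap = []
    for seq, (priority, block) in enumerate(arrivals):  # steps 620-630
        heapq.heappush(heap, (priority, seq, block))
    close_enqueued = True                               # step 640
    transmitted = []
    while heap:                                         # step 650
        _, _, block = heapq.heappop(heap)
        transmitted.append(block)
    closed = close_enqueued                             # step 660
    return transmitted, closed

# a later-arriving command block outranks earlier telemetry, yet the
# close still executes only after both have been transmitted
sent, closed = run_method_600([(2, "telemetry"), (1, "command")])
```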
145-907-584-149-494
US
[ "US", "WO" ]
A63B21/078,A47C3/36,A63B24/00,G05B15/02
2021-03-09T00:00:00
2021
[ "A63", "A47", "G05" ]
motor powered lifting rack system
a motorized lifting rack that selectively lifts and lowers a barbell through linear actuators is provided. the powered lifting rack can engage a barbell at floor level. the linear actuators can also selectively raise and lower safety bars designed to "spot" the barbell for a single user. the powered lifting rack is embodied in a system that includes a platform that the safety bars can nest in. the platform provides a plurality of actuating benches for a user of the barbell to utilize.
1 . a lifting rack system, comprising: a plurality of uprights; at least one linear actuator housed in each upright; a motor powering each linear actuator; for each upright, an interface extends along a length of the upright; and for each linear actuator, a support transition operatively associates a bar support and the at least one linear actuator so that the bar support is selectively movable along the interface. 2 . the lifting rack system of claim 1 , further comprising a platform, wherein a proximal end of each upright connects to the platform. 3 . the lifting rack system of claim 2 , the platform having a first actuating bench disposed between the plurality of uprights, wherein the motor is operatively associated with the first actuating bench in such a way that the first actuating bench is selectively movable between an extended position and a retracted position, wherein in the retracted position an upper surface of the first actuating bench is approximately flush with an upper surface of the platform. 4 . the lifting rack system of claim 3 , the platform having at least one second actuating bench disposed outside of the plurality of uprights, wherein the motor is operatively associated with the at least one second actuating bench in such a way that the second actuating bench is selectively movable between an extended position and a retracted position, wherein in the retracted position an upper surface of the second actuating bench is approximately flush with an upper surface of the platform. 5 . the lifting rack system of claim 4 , wherein the at least one linear actuator includes a support actuator and a safety actuator oriented in parallel, wherein the support actuator operates with the support transition, wherein the safety actuator is operatively associated with a safety bar so that the safety bar is selectively movable between a nested condition and an elevated condition. 6 .
the lifting rack system of claim 5 , wherein the platform provides a recess dimensioned to receive the safety bar in the nested condition so that an upper portion of the safety bar is flush with the upper surface of the platform. 7 . the lifting rack system of claim 6 , further comprising a safety transition operatively associating the safety actuator and a distal end of the safety bar. 8 . the lifting rack system of claim 7 , wherein the safety transition is u-shaped, and wherein the support transition is loop shaped. 9 . the lifting rack system of claim 7 , wherein a distal end of the safety bar has a cavity dimensioned to slidably receive the bar support so that the bar support is at least substantially received in the cavity when the safety transition and the support transition are at a shared elevation relative to the platform. 10 . the lifting rack system of claim 6 , wherein each bar support has a basket, and comprising a basket latch connected with the basket in such a way as to pivot between a closed position and an open position. 11 . the lifting rack system of claim 10 , wherein the safety bar has a notch, and wherein each notch has a notch latch connected with the notch in such a way as to pivot between a closed position and an open position. 12 . the lifting rack system of claim 11 , further comprising a plurality of cross members interconnecting each distal end of the plurality of uprights; and the motor disposed in the cross members. 13 . the lifting rack system of claim 12 , further comprising a plurality of cameras and at least one computer connected with the plurality of uprights or the plurality of cross members. 14 .
the lifting rack system of claim 4 , wherein each second actuating bench is operatively associated with a bench actuator in such a way as to be selectively movable across a plurality of tilted orientations, wherein an upper surface of the second actuating bench is lockable in each tilted orientation defining an angle of incidence relative to the platform, wherein the angle of incidence selectively ranges between zero and forty degrees. 15 . a lifting rack system, comprising: a platform; and at least one image capturing device coupled with at least one computer operatively associated with the lifting rack system, wherein the at least one computer is configured to provide a feedback regarding a user of the lifting rack system performing an exercise. 16 . the lifting rack system of claim 15 , wherein the feedback is a wireframe model of said user during the exercise. 17 . the lifting rack system of claim 16 , wherein the wireframe model includes a plurality of nodes, wherein each node represents a body portion of said user. 18 . the lifting rack system of claim 17 , wherein the at least one computer is configured to determine one or more reference angles between a respective body portion and the platform during the exercise. 19 .
the lifting rack system of claim 18 , further comprising: a plurality of uprights supported by the platform; at least one linear actuator housed in each upright; a motor powering each linear actuator; for each upright, an interface extends along a length of the upright; for each linear actuator, a support transition operatively associates a bar support and the at least one linear actuator so that the bar support is selectively movable along the interface; and a fitness tool operatively associated with one or more of the bar supports, wherein the computer is configured to access a database of exercise routines and, based in part on a first comparison between the database and the wireframe model, selectively activate the motor to move at least one linear actuator. 20 . the lifting rack system of claim 19 , further comprising: an actuating bench disposed between the plurality of uprights, wherein the motor is operatively associated with the actuating bench in such a way that the actuating bench is selectively movable between an extended position and a retracted position, wherein the computer is configured to, based in part on a second comparison between the database and the wireframe model, selectively activate the motor to move the actuating bench.
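claims 17 and 18 describe determining reference angles between a body portion (a segment joining two wireframe nodes) and the platform. a minimal sketch of that computation, with the node names and coordinates invented for illustration, might look like:

```python
import math

def reference_angle(node_a, node_b):
    """Angle in degrees between the body segment joining two wireframe
    nodes and a horizontal platform. Nodes are (x, y) image coordinates."""
    dx = node_b[0] - node_a[0]
    dy = node_b[1] - node_a[1]
    return math.degrees(math.atan2(abs(dy), abs(dx)))

# hypothetical hip and knee node positions during a squat
hip, knee = (0.0, 1.0), (1.0, 0.0)
angle = reference_angle(hip, knee)
```

a comparison of such angles against a database of exercise routines could then drive the motor and bench actuators, as claims 19 and 20 recite.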
cross-reference to related application this application claims the benefit of priority of u.s. provisional application no. 63/200,471, filed 9 mar. 2021, the contents of which are herein incorporated by reference. background of the invention every environment is a resource constrained environment, and the gym environment is no exception. examples of some but not all of the constrained resources at the gym are as follows. equipment resources such as barbells, weights for the barbells, squat racks, lifting platforms, power racks, benches, dumbbells, other exercise devices or machines and space needed for the equipment. personnel resources such as personal trainers and people who can "spot" you in the gym. additionally, the amount of time you have while at the gym (this includes setting up and taking down the equipment as well as waiting for the equipment to be available) and the money you have for your training programs and memberships are additional limited resources. most equipment available today is inefficient at maximizing those limited resources. most multi-purpose equipment such as the power rack is insufficient at maximizing those limited resources. some but not all the inefficiencies of the power rack are as follows. even though the power rack is multi-purpose, many power racks can only be used effectively by one user at a time. most power racks hold a barbell on two j-hooks (or similar devices that a barbell is placed on to facilitate use of the power rack). this barbell is not secured on the j-hooks, which are positioned in a series of vertical pinholes placed along spaced apart frame members. as a result, users of most power racks need to manually adjust the height of the j-hooks by first removing the barbell (if not already placed), then pulling the j-hooks out of the frame members, and then placing two j-hooks in one of a series of vertical pinholes in the spaced apart frame members so that they properly align with each other.
if the power rack has safety bars (or safety pins, straps or other similar devices), they too must be manually adjusted the same way as the j-hooks. additionally, since the barbell is not secured in place, completely loading one side of the barbell before loading the other side is difficult and dangerous because the barbell may tip over and fall off the j-hooks and land on someone when it is being loaded unevenly from high off the ground. in short, users cannot load large weights on one side of the barbell before loading the other side because nothing is securing the barbell on the j-hook and the barbell could tip over. accordingly, most power racks are inefficient when adjusting the height of the barbell on j-hooks, safety pins/bars and/or changing the weights on the barbell—which is amplified by the fact that each power rack in a gym will have dozens of users over the course of each day. therefore, different users must continuously change the height of the barbell and safety pins/bars in addition to changing the weight on the barbell before they can start just their first routine. then they may have to do it again for each following routine. additionally, most power racks may have users manually place a bench (or other similar device) for bench presses (or other similar movements) that must be moved into and out of position for those lifts so other users can use the power rack without the bench. this makes it additionally inefficient at training one or more people with the same barbell who need to use similar weight but for different lifts. additionally, most power racks may have users manually place lifting blocks (or other similar device) to perform barbell movements from higher starting positions on the power rack. 
this manual process of moving and repositioning the lifting blocks is inefficient at maximizing the number of people that can train with the equipment and/or minimizing the amount of time for people who need to do various barbell movements on the same power rack using the same barbell. additionally, most power racks may make it difficult to train multiple people even if they are using similar weight on the barbell. for example, the power rack may be configured for a user to squat two hundred and twenty-five pounds but even though a second user needs to deadlift two hundred and twenty-five pounds and a third user needs to bench two hundred and twenty-five pounds, there may be no way to effectively reconfigure the equipment to do those lifts on the same power rack within a few seconds of each other using the same barbell. the users in that example would have to manually unload/load the barbell and reconfigure the power rack for each movement, and that process is inefficient. furthermore, most power racks may not have a real time active “spotting” system (similar to how users would “spot” each other) for users during their sessions and/or train them how to lift the weight with proper technique. this increases the risk of injury to users who do not know how to lift weight with proper technique and/or know how to properly configure the equipment. additionally, most power racks are incapable of lifting an unloaded or loaded barbell (or other loads) to an operable height from floor level for a user who does not have the strength or mobility to do it themselves. there is a need for a system that may help minimize one's time in the gym, minimize the amount of space needed for training equipment, maximize the number of people that can train at the gym and reduce the risk of injury during training; a system that may help the gym be more profitable and cost effective for the gym owner and investor, as well as for users training with the present invention.
summary of the invention
broadly, an embodiment of the present invention provides a system that selectively lifts and lowers a barbell through linear actuators. the system can engage an unloaded barbell at floor level. the linear actuators can also selectively raise and lower safety bars to “spot” the barbell for a single user. the present invention is embodied in a system that includes a platform that the safety bars and actuating benches can nest in. the platform provides a plurality of actuating benches for a user of the barbell to utilize. the actuating benches on the outside of uprights can additionally “spot” the barbell and tilt to roll the barbell along the platform to reposition it for lifts on the floor or back to the bar supports and can deflect a dropped barbell away from the lifter. specifically, a system that selectively actuates the barbell on j-hooks, safety bars, side and central actuating benches between the floor and an operable height, wherein the j-hooks for the barbell to rest on and/or safety bars are provided with latches. the present invention relates to free weight training systems and, more particularly, to a motor-powered lifting rack system that selectively lifts and lowers a barbell through linear actuators. the motor-powered lifting rack system can engage an unloaded barbell at floor level. the linear actuators can also selectively raise and lower safety bars to “spot” the barbell for a single user. the present invention is embodied in a system that includes a platform that the safety bars and actuating benches can nest in. the platform provides a plurality of actuating benches for a user of the barbell to utilize. the actuating benches on the outside of uprights can additionally “spot” the barbell and tilt to roll the barbell along the platform to reposition it for lifts on the floor or back to the bar supports and can deflect a dropped barbell away from the lifter.
the motor-powered lifting rack system embodied in the present invention makes it more efficient and precise to adjust the height of the side and central actuating benches, the j-hooks, and safety bars with or without a barbell and with or without weight on the barbell. it also secures the barbell on the j-hooks or in a notch in the safety bars with bar locks/latches for more efficient loading and unloading of weight. the overall system enables a user to selectively lift, through linear actuators, the barbell from a ground level via the j-hooks, the safety bars or the side actuating benches. it is also more efficient at maximizing the number of people using the equipment because multiple people can perform different barbell exercises while using the same weight. for example, someone benches two hundred and twenty-five pounds and someone else needs to deadlift two hundred and twenty-five pounds. after the person benches two hundred and twenty-five pounds the present invention can lower the bench and lower the barbell that is on the j-hooks to floor level. then it can reposition the barbell to the middle of the platform by tilting the outside benches and rolling the barbell into place for the other person to deadlift two hundred and twenty-five pounds. the process is easily reversed after the deadlift so that the original person can bench two hundred and twenty-five pounds again. additionally, the motor-powered lifting rack system embodied in the present invention reduces the risk of injury by having a redundant spotting system to effectively “spot” users during their training sessions. this spotting system has a faster reaction time than human spotters and can “spot” far heavier weight than a normal human can.
the present invention further reduces the risk of injury by training users how to properly lift the weight and reduces the risk of injury to other bystanders by having a command-and-control system to keep the barbell on the system, even when the barbell is accidentally or intentionally dropped on the system. in one aspect of the present invention, a lifting rack includes a plurality of uprights; at least one linear actuator housed in each upright; a motor powering each linear actuator; for each upright, an interface extends along a length of the upright; and for each linear actuator, a support transition operatively associates a bar support and at least one linear actuator so that the bar support is selectively movable along the interface. in another aspect of the present invention, the motor powered lifting rack system includes the following: a platform, wherein a proximal end of each upright connects to the platform; the platform having a first actuating bench disposed between the plurality of uprights, wherein the motor is operatively associated with the first actuating bench in such a way that the first actuating bench is selectively movable between an extended position and a retracted position, wherein in the retracted position an upper surface of the first actuating bench is approximately flush with an upper surface of the platform; the platform having at least one second actuating bench disposed outside of the plurality of uprights, wherein the motor is operatively associated with at least one second actuating bench in such a way that the second actuating bench is selectively movable between an extended position and a retracted position, wherein in the retracted position an upper surface of the second actuating bench is approximately flush with an upper surface of the platform, wherein the at least one linear actuator includes a support actuator and a safety actuator oriented in parallel, wherein the support actuator operates with the support transition, wherein
the safety actuator is operatively associated with a safety bar so that the safety bar is selectively movable between a nested condition and an elevated condition, wherein the platform provides a recess dimensioned to receive the safety bar in the nested condition so that an upper portion of the safety bar is flush with the upper surface of the platform; a safety transition operatively associating the safety actuator and a distal end of the safety bar, wherein the safety transition is u-shaped at the distal end and loop shaped at the proximal end, and wherein the support transition is loop shaped, wherein a distal end of the safety bar has a cavity dimensioned to slidably receive the bar support so that the bar support is at least substantially received in the cavity when the safety transition and the support transition are at a shared elevation relative to the platform, wherein each bar support has a basket, and has a basket latch connected with the basket in such a way as to pivot between a closed position and an open position, wherein the safety bar has a notch, and wherein each notch has a notch latch connected with the notch in such a way as to pivot between a closed position and an open position; a plurality of cross members interconnecting each distal end of the plurality of uprights; and the motor disposed in the cross members; and a plurality of cameras and at least one computer connected with the plurality of uprights or the plurality of cross members. in yet another embodiment of the present invention, a lifting rack system includes each second actuating bench operatively associated with a bench actuator in such a way as to be selectively movable across a plurality of tilted orientations, wherein an upper surface of the second actuating bench is lockable in each tilted orientation defining an angle of incidence relative to the platform, wherein the angle of incidence selectively ranges between zero and forty degrees.
in an additional embodiment of the present invention, a lifting rack system includes the following: a platform; and at least one image capturing device coupled with at least one computer operatively associated with the lifting rack system, wherein the at least one computer is configured to provide a feedback regarding a user of the lifting rack system performing an exercise, wherein the feedback is a wireframe model of said user during the exercise, wherein the wireframe model includes a plurality of nodes, wherein each node represents a body portion of said user, wherein the at least one computer is configured to determine one or more reference angles between a respective body portion and the platform during the exercise; and further including the following: a plurality of uprights supported by the platform; at least one linear actuator housed in each upright; a motor powering each linear actuator; for each upright, an interface extends along a length of the upright; for each linear actuator, a support transition operatively associates a bar support and the at least one linear actuator so that the bar support is selectively movable along the interface; and a fitness tool operatively associated with one or more of the bar supports, wherein the computer is configured to access a database of exercise routines and, based in part on a first comparison between the database and the wireframe model, selectively activate the motor to move at least one linear actuator; and further including an actuating bench disposed between the plurality of uprights, wherein the motor is operatively associated with the actuating bench in such a way that the actuating bench is selectively movable between an extended position and a retracted position, wherein the computer is configured to, based in part on a second comparison between the database and the wireframe model, selectively activate the motor to move the actuating bench. 
these and other features, aspects and advantages of the present invention will become better understood with reference to the following drawings, description and claims.
brief description of the drawings
fig. 1 is a perspective view of an exemplary embodiment of the present invention.
fig. 2 is a perspective view of an exemplary embodiment of the present invention, illustrating deployment of a central actuating bench and safety bars.
fig. 3 is a perspective view of an exemplary embodiment of the present invention, illustrating deployment of the side actuating benches.
fig. 4a is a side elevation view of an exemplary embodiment of the present invention, with parts broken away for clarity.
figs. 4b-4c are detailed views of fig. 4a .
fig. 5 is a top plan view of an exemplary embodiment of the horizontal members of the present invention, with parts broken away for clarity.
fig. 6 is a detailed section view of an exemplary embodiment of the present invention, taken along line 6 - 6 in fig. 3 .
fig. 7 is a detailed section view of an exemplary embodiment of the present invention, taken along line 7 - 7 in fig. 3 .
fig. 8 is a section view of an exemplary embodiment of the present invention, taken along line 8 - 8 in fig. 3 , illustrating deployment of the tilt function of the side actuating benches, with parts removed for clarity.
fig. 9 is a perspective view of an alternative embodiment of the present invention.
fig. 10a is a side view of an exemplary embodiment of the present invention, illustrating a digital image of a user.
fig. 10b is a schematic view of an exemplary embodiment of the present invention, illustrating a wireframe overlaid onto the digital image of the user of fig. 10a .
fig. 10c is the schematic view of an exemplary embodiment of the present invention, illustrating the wireframe of fig. 10b used for analysis.
fig. 11 is a section view of an exemplary embodiment of the present invention, taken along line 11 - 11 in fig.
2 , illustrating deployment of the central actuating bench, with parts removed for clarity.
fig. 12 is a perspective view of an exemplary embodiment of the present invention.
fig. 13 is a schematic view of an exemplary embodiment of the present invention.
detailed description of the invention
the following detailed description is of the best currently contemplated modes of carrying out exemplary embodiments of the invention. the description is not to be taken in a limiting sense but is made merely for the purpose of illustrating the general principles of the invention, since the scope of the invention is best defined by the appended claims. referring now to figs. 1 through 13 , the following is an itemized reference number list for the figures. any assumed quantities and the naming convention used for the following references of the current embodiment of the invention are not limiting but are provided for the reader to understand the best currently contemplated modes of carrying out exemplary embodiments of the present invention. 20 motor powered lifting rack system (or “system”), 30 platform, 40 side actuating bench, 50 central bench, 55 adjustable bench, 60 frame, 62 a- 62 d uprights, 64 a- 64 d crossmembers, 66 camera, 68 computer, 70 bar support, 70 a bar support transition, 70 b actuator, 70 c worm gear, 70 d worm screw, 70 e worm screw shaft, 70 f drive shaft, 71 bar support latch, 72 basket, 73 interface, 74 motor, 80 safety bar, 80 a safety bar transition, 80 b actuator, 80 c bevel and worm gear, 80 d worm screw, 80 e worm screw shaft, 80 f drive shaft, 80 g bevel gear, 80 h bevel gear shaft, 80 i bevel gear, 80 j actuator, 80 k safety bar transition, 81 safety bar latch, 82 safety bar notch, 83 a-b interfaces, 84 motor, 86 safety bar cavity, 88 safety bar recess, 90 scissor lifting actuator, 100 camera frame, 110 lifter, 112 outline, 114 wire model, 116 reference angle, 120 barbell and 121 weight plate.
additionally, 80 a 1 - 2 are the distal and proximal ends of 80 a, 80 k 1 - 2 are the distal and proximal ends of 80 k, and 70 a 1 - 2 are the distal and proximal ends of 70 a. referring now to figs. 1 through 12 , the present invention may include a system 20 . the system 20 may include a frame 60 having four vertical uprights 62 a- 62 d that extend from a platform 30 . four horizontal members 64 a- 64 d may interconnect the distal ends of the vertical uprights 62 a- 62 d, as illustrated in figs. 1 through 5 and fig. 12 . the platform 30 may be dimensioned and adapted to secure the vertical uprights 62 a- 62 d along a supporting surface. the platform 30 may provide a safety bar recess 88 extending between each pair of longitudinal uprights (e.g., 62 a and 62 c is one pair of longitudinal uprights). each safety bar recess 88 is dimensioned to receive a safety bar 80 operatively associated with the respective pair of longitudinal vertical uprights in a nested condition. in the nested condition an uppermost portion of said safety bar 80 is approximately flush with an upper surface of the platform 30 . each safety bar 80 may provide a safety bar notch 82 for receiving a portion of a barbell 120 . a latch 81 may close off an upper portion of the safety bar notch 82 , thereby preventing a received portion of the barbell 120 from being lifted out of the notch 82 . in the nested condition, the safety bar notch 82 may occupy a space below the upper surface of the platform 30 . therefore, a barbell 120 being supported by the platform 30 and/or side actuating benches 40 may be engaged by the notch 82 as the safety bar 80 moves from the nested condition to an elevated condition between the platform 30 and/or side actuating benches 40 and the distal ends of the associated pairs of longitudinal uprights 62 a- 62 c and 62 b- 62 d, respectively.
the platform 30 may provide a central actuating bench 50 disposed between the two pairs of longitudinal uprights and disposed adjacent to a first pair of latitudinal uprights (e.g., 62 a and 62 b are a pair of latitudinal uprights). the central actuating bench 50 is movable between a retracted position (see fig. 1 ) and an extended position (see figs. 2 and 11 ). in the retracted position, an upper surface of the central actuating bench 50 is flush with an upper surface of the platform 30 . in the extended position the central actuating bench 50 is adapted to accommodate a recumbent human user. the central actuating bench 50 may have a scissor lift actuator 90 or other actuation mechanism for moving the central actuating bench 50 between the retracted and extended positions, wherein the actuation mechanics are powered by the present invention as shown in fig. 11 . the central actuating bench 50 may be adapted to be an adjustable bench 55 as shown in fig. 9 . these adaptations are for seated incline, decline or flat bench presses or other similar movements. it is understood that the central actuating bench 50 need not be located between the uprights 62 a-d (as shown in figs. 1-3 and 12 ); it may instead be mounted, for example but not limited to, on the outside edge of the platform 30 between the uprights 62 a-d, elsewhere on the system 20 or on a wall mounted device separate from and next to the system 20 , and may be lowered/raised/pivoted, etc. into position for bench presses or other similar movements so that, when stored away, it is not in the way of users doing other movements on the system 20 . the platform 30 may provide a side actuating bench 40 disposed to the outside of each pair of longitudinal uprights. each side actuating bench 40 is movable between a retracted position (see fig. 1 ) and an extended position (see fig. 3 and fig. 8 ). in the retracted position, an upper surface of each side actuating bench 40 is flush with an upper surface of the platform 30 .
in the extended position the side actuating bench 40 is adapted to accommodate one or more recumbent human users. each side actuating bench 40 may have a scissor lift actuator 90 or other actuation mechanism for moving between the retracted and extended positions, wherein the actuation mechanics are powered by the system 20 . the side actuating benches 40 may be adapted to lock in a tilted position, as illustrated in fig. 8 , providing an angle of incidence ‘a’ between the upper surface of the bench 40 and the platform 30 . the angle of incidence a may also be determined relative to a plane parallel with the platform 30 , wherein this parallel plane is associated with an initial, non-tilted orientation/position of the upper portion/surface of the bench, as illustrated in fig. 8 . the angle of incidence a can range from zero degrees (parallel with the platform) to any angle afforded by the upper portion of the actuating bench (at some point it may contact the platform 30 ). in some embodiments, the angle of incidence may be ninety degrees or more based on the topology of the platform and actuating bench. this is for rolling the barbell 120 up and down the platform 30 to reposition it for other lifts as well as deflecting a dropped barbell away from the user. two actuators 80 b-j and 70 b may be disposed in each vertical upright 62 a- 62 d. the actuators 80 b-j and 70 b may be vertically oriented and in a parallel relationship relative to each other as they extend a substantial length of the respective vertical upright (between the distal end and the proximal end, adjacent the platform 30 ). each actuator 80 b-j and 70 b operatively associates with an actuator interface 83 a-b and 73 , respectively, along an outer surface of the respective vertical upright, as illustrated in figs. 6 and 7 . for each pair of longitudinal uprights, the respective actuator interfaces 83 a-b and 73 face each other, as illustrated in fig. 4a .
the actuator interfaces 83 a-b and 73 also extend a substantial length of the respective vertical upright. in some, but not all, embodiments the actuators 80 b-j and 70 b may be worm screw and gear jacks with a translation nut or other forms of linear actuators. in some embodiments, the actuator interfaces 83 a-b and 73 may be slots in the vertical upright that communicate with the respective worm screw and gear jacks with a translation nut linear actuator. the actuator interfaces 83 a-b and 73 may be dimensioned and adapted to receive and operatively associate with a transition 80 a-k and 70 a, respectively. the safety transition 80 k may be u-shaped to be received and slide along safety bar actuator interface 83 a, and the safety transition 80 a may be loop-shaped to be received and slide along safety bar actuator interface 83 b. the support transition 70 a may be loop-shaped to be received and slide along a support actuator interface 73 . the u-shape and loop-shape complement each other and enable access to the respective actuators 80 b-j and 70 b that are spaced apart in a parallel orientation within the same vertical upright. each transition 80 a-k and 70 a may be received in its respective actuator interface 83 a-b and 73 by way of the distal end of the respective vertical upright. each transition 80 a/ 80 k and 70 a has a distal end 80 a 1 / 80 k 1 , 70 a 1 and a proximal end 80 a 2 / 80 k 2 and 70 a 2 , respectively. the distal ends 80 a 1 / 80 k 1 and 70 a 1 may have an engagement element or the like dimensioned and adapted to operatively associate with the respective engagement element of the actuators 80 b-j and 70 b.
in certain embodiments, wherein the actuators 80 b-j and 70 b are screw actuators, the distal ends 80 a 1 / 80 k 1 and 70 a 1 may provide a first gear arrangement that engages a second gear arrangement of the screw actuator so that rotation (clockwise or counterclockwise) of the non-travelling screw actuator causes the transition 80 a/ 80 k or 70 a to travel linearly up or down the length of the screw actuator. the proximal ends of the transitions 80 a 2 / 80 k 2 and 70 a 2 may be removably or fixedly attached to the safety bar 80 and a bar support 70 , respectively. referring to fig. 5 , the horizontal members 64 a- 64 d may house a motor 74 / 84 (electric, pneumatic, or the like) with drive shafts 70 f/ 80 f that couple with the worm screw shafts 70 e/ 80 e, worm screws 70 d/ 80 d, worm gears 70 c, worm/bevel gear 80 c, bevel gears 80 g, bevel shaft 80 h, bevel gears 80 i, and actuators 80 b-j and 70 b in each vertical upright 62 a- 62 d so that the actuators 80 b-j and 70 b rotate, which in turn selectively moves (i.e., causes travelling of) the respective transitions 80 a/ 80 k or 70 a. the present invention contemplates the actuators 80 b and 70 b (in a shared vertical upright) being independently rotatable relative to each other. it being understood that other methods to apply a force to lift the bar support 70 and safety bars 80 are contemplated by the present invention, such as block and tackle pulley systems, hydraulics, counterweights, other jack screw systems, linear actuators or belt systems. it is understood that the motor 74 / 84 , drive shafts 70 f/ 80 f, worm screw shafts 70 e/ 80 e, worm screws 70 d/ 80 d, worm gears 70 c, bevel shafts 80 h, bevel gears 80 g/ 80 i, worm/bevel gears 80 c need not be housed in the horizontal members 64 a- 64 d; they may be housed in the platform 30 as shown in fig. 9 , in the uprights 62 a-d or any combination of locations housed on or inside the system 20 .
additionally, the motor 74 / 84 and drive shafts 70 f/ 80 f could be separate from the system 20 , or a motor 74 / 84 could couple and directly engage the actuators 80 b-j, 70 b to reduce the number of components for the system 20 . one embodiment of the present invention may have two motors 74 / 84 that independently actuate the bar supports 70 and safety bars 80 relative to each other. one motor 74 may cause the translation of the bar supports 70 by engaging the drive shafts 70 f, that rotate the worm screw shafts 70 e, that rotate the worm screws 70 d, that engage the worm gears 70 c, which rotate 70 b clockwise or counterclockwise, which in turn selectively moves (i.e., causes travelling of) the respective transition 70 a. one motor 84 may cause the translation of the safety bars 80 by engaging the drive shafts 80 f, that rotate the worm screw shafts 80 e, that rotate the worm screws 80 d, that engage the worm/bevel gears 80 c, which rotate the bevel gears 80 g and bevel shaft 80 h clockwise or counterclockwise, which rotates the bevel gears 80 i clockwise or counterclockwise, to engage the rotation of 80 b-j, which in turn selectively moves (i.e., causes travelling of) the respective transitions 80 a and 80 k as illustrated in figs. 4a-c and fig. 5 . it is understood that one motor can power the actuators 80 b-j, 70 b or scissor lifting actuators 90 by use of a more complex gear box system (not shown) in the system 20 . referring to fig. 4a , the present invention may embody a bar support 70 that connects to the proximal end of each bar support transition 70 a. the bar support 70 may include but is not limited to j-hooks. the bar support 70 defines a basket portion 72 for supporting a portion of the barbell 120 . the basket portion 72 has a depth. a basket latch 71 may close an upper portion of the basket portion 72 , thereby preventing a received portion of the barbell 120 from being lifted out of the basket portion 72 .
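The screw-and-gear drive described above reduces to simple arithmetic: the motor's revolutions, divided by the worm-gear reduction, give the screw revolutions, and each screw revolution advances the translation nut by one screw lead. A minimal sketch of that relationship, noting that the function name and the example ratio and lead values are illustrative assumptions, not figures from this specification:

```python
def transition_travel_mm(motor_revs: float, gear_ratio: float, screw_lead_mm: float) -> float:
    """Linear travel of a transition (e.g., 70a or 80a/80k) along its screw actuator.

    motor_revs    -- revolutions delivered by the motor (74/84) via the drive shaft
    gear_ratio    -- worm-gear reduction between drive shaft and screw (30:1 -> 30.0)
    screw_lead_mm -- axial advance of the translation nut per screw revolution
    """
    screw_revs = motor_revs / gear_ratio   # worm gear reduces shaft speed
    return screw_revs * screw_lead_mm      # nut travel is revolutions times lead

# Example (assumed values): 300 motor revolutions through a 30:1 worm reduction
# on a 10 mm lead screw raise the bar support by 100 mm.
```

Reversing the motor direction (counterclockwise) simply negates `motor_revs`, which models lowering the bar support or safety bar.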
it should be clear that the bar support 70 may not be j-hooks, but can include any structure (e.g., flat, spherical, cylindrical, etc.) that can engage various fitness equipment (e.g., dumbbells, free weights, resistance bands, etc.) or portions of the human user themselves. thus, the bar support 70 can be “universal”. additionally, it should be clear that the safety bar 80 may not be rectangular bars, but can include any structure (e.g., flat, spherical, cylindrical, etc.) that can engage various fitness equipment (e.g., dumbbells, free weights, resistance bands, etc.) or portions of the human user themselves for “spotting” or safety purposes. thus, the safety bar 80 can be “universal”. the bar support 70 and the safety bar 80 vertically align (since they both connect to the same vertical uprights). the distal ends of each safety bar 80 may provide cavities 86 into which the depth of the basket portion 72 can nest. note that the safety bar 80 need not be in the nested condition for this to happen. though when this does happen in the nested condition, an upper portion of the basket portion 72 may be approximately flush with the upper surface of the platform 30 (as the basket portion 72 occupies space within the safety bar 80 so that, like the safety bar notch 82 , the basket portion 72 may receive/engage a portion of a barbell 120 that is supported on the upper surface of the platform 30 and/or side actuating benches 40 and/or the safety bar 80 ). the uprights 62 a-d may also serve as a stop for the barbell 120 should the barbell roll up or down the platform 30 and/or the side benches 40 and/or safety bar 80 . this may keep the barbell 120 from rolling off the system 20 . the uprights 62 a-d, safety bars 80 and the side actuating benches 40 may encompass (along with cameras 66 , a computer 68 , and the like, which are disclosed in more detail below) a synergistic system to control the location of the barbell 120 on the system 20 .
that system may keep the barbell 120 from rolling off the system 20 . the actuating side benches 40 and safety bars 80 may also assist the lifter with a “lift off” from the bar supports 70 or back to the bar supports 70 , should the lifter request it to do so. the central bench 50 may be used as a surface to squat on like a box for box squats. for that use of the system 20 , the user would have the barbell 120 placed in the notch 82 that is raised by the safety bar 80 to the user's height to begin the squat, and the central bench 50 actuated to the appropriate anthropometry of the user to squat to. the user would lift the weight off the notch 82 while facing the computer 68 to squat to the central bench 50 . during the squat the notch 82 would be lowered by the safety bars 80 so that it would not get in the way of the user squatting to the central bench 50 . then when the user squats to the central bench 50 , the user would stand back up while being spotted by the safety bars 80 and/or side actuating benches 40 until the barbell 120 is placed back in the notch 82 at the top of the squat. the frame 60 may support cameras 66 and electrically connected computers 68 to facilitate command and control of the selectively movable safety bars 80 , side actuating benches 40 , central bench 50 , adjustable bench 55 and bar supports 70 . the computer 68 may have a display and user interface for further enabling the command and control. for instance, the computers 68 may be configured, based on the pixels captured by the connected cameras 66 , to selectively move the bar support 70 to provide the required spacing for the barbell 120 relative to a person recumbently disposed on the central actuating bench 50 for bench presses or other similar movements. as a default, the latitudinally opposing bar supports 70 are kept in alignment. it is understood that the cameras 66 need not be mounted on the uprights 62 a-d, crossmembers 64 a-d or the frame 60 .
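The pixel-based positioning mentioned above could, after camera calibration, reduce to mapping a measured user landmark height to a bar-support command clamped to the actuator's travel. The following is a minimal sketch under assumed values; the function name, unrack offset, and travel limits are hypothetical illustrations, not part of this disclosure:

```python
def bar_support_target_mm(shoulder_height_mm: float,
                          unrack_offset_mm: float = 75.0,
                          min_travel_mm: float = 0.0,
                          max_travel_mm: float = 1800.0) -> float:
    """Command height for the basket (72) so the lifter can unrack the
    barbell (120) from slightly below shoulder height.

    shoulder_height_mm -- shoulder height above the platform (30), derived
                          from calibrated camera (66) pixels
    unrack_offset_mm   -- assumed drop below the shoulder for a clean lift-off
    """
    target = shoulder_height_mm - unrack_offset_mm
    # clamp the command to the actuator's assumed mechanical travel
    return max(min_travel_mm, min(max_travel_mm, target))
```

Because the latitudinally opposing bar supports are kept in alignment by default, the same target value would be commanded to both actuators of a longitudinal pair.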
the cameras 66 may be mounted on their own camera frame 100 as shown in fig. 9 and/or mounted separate from the system 20 . additionally, it is understood that the computer 68 may be mounted elsewhere on or inside the system 20 , such as but not limited to on the camera frame 100 as shown in fig. 9 , and/or mounted separate from the system 20 . it is understood that there may be a combination of alternative configurations of the system 20 , such as but not limited to keeping the crossmembers 64 a-d but moving the linear actuator motors, shafts, screws and gears to be housed inside the platform 30 or in the uprights 62 a-d. additionally, this includes changing the camera 66 placement locations, camera 66 angles that look up/down towards the lifter or platform 30 , where the cameras 66 are focused to look on the system 20 , camera mount 100 placement locations and the number of cameras 66 . it is understood that the side actuating benches 40 may be additionally modified to recess lower than the surface of the platform 30 to allow deficit movements such as a deficit deadlift and the like. it is understood that the notch latch 81 on the safety bar 80 and the basket latch 71 on the bar support 70 may be additionally modified to electronically open and close under control of the computer or other electronic systems. it is understood that all the linear actuating systems described in this application may be modified to be an all-manual system powered by the human user. it is understood that when the barbell 120 is placed on the bar support 70 in the basket 72 or the safety bar notch 82 and secured by the latches 71 or 81 , respectively, the barbell 120 may be prevented from moving out of those locations and/or from rotating while secured, allowing the barbell 120 to be used as a pull-up bar that is adjustable to the user's height by use of the actuating systems described in this application.
computer system command and control applications
referring to fig.
13 , the computer(s) 68 may assist the lifters in workout programming, exercise selection, and counting and verifying that repetitions of movement were properly executed in real time via use of the cameras 66 . the computer(s) 68 may assist the lifter(s) in the loading/unloading of a barbell 120 via use of the cameras 66 . the computer(s) 68 may assist in the transition, use, spotting, teaching, and coaching/technique correction of the following movements and variations with a loaded or unloaded barbell 120 in real time via use of the cameras 66 (including lifting/lowering and repositioning of a loaded or unloaded barbell 120 to or from the platform 30 or side actuating benches 40 or central bench 50 or adjustable bench 55 or bar supports 70 or safety bars 80 ): press; bench press; squat; deadlift; clean; jerk; snatch; variations of those movements and the like. the computer(s) 68 may execute voice commands and/or independently set the safety bars 80 , the bar supports 70 , the central actuating bench 50 , adjustable bench 55 and side actuating benches 40 to different heights for better use and safety for each lifter based on their anthropometry. the computer 68 may use the cameras 66 to help provide weight verification by line of sight of weights 121 and/or the barbell 120 . each weight 121 is of a different thickness, diameter and/or color, and the computer 68 knows which weight 121 weighs a certain amount. the computer 68 may use the cameras 66 to help assist the transition, use, spotting, teaching, and coaching/technique movement pattern correction by using the body's reference angles 116 based on anthropometry, as shown in fig. 10a-c , in real time. for example, the angle between the lifter's back 116 and the platform 30 can provide enough data to determine whether the lifter is setting their back correctly before the start of a deadlift. the computer 68 may use the cameras 66 to "see" the lifter/barbell, which are linked up to the computer 68 that controls the system 20 to better assist the lifter.
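the weight verification described above can be sketched as a simple lookup. the colour-to-weight mapping, the forty-five pound bar weight and the detected plate list are illustrative assumptions standing in for what the cameras 66 and computer 68 would actually produce:

```python
# illustrative sketch of camera-based load verification; the colour map,
# bar weight, and detected plate colours are hypothetical stand-ins.
PLATE_WEIGHTS_LBS = {
    "red": 55, "blue": 45, "yellow": 35, "green": 25, "white": 10,
}

BAR_WEIGHT_LBS = 45  # standard barbell weight, assumed

def total_load(detected_plate_colours):
    """Sum the bar weight plus every plate the cameras report."""
    return BAR_WEIGHT_LBS + sum(PLATE_WEIGHTS_LBS[c] for c in detected_plate_colours)

def verify_load(detected_plate_colours, programmed_weight):
    """True if the loaded weight matches the lifter's programmed weight."""
    return total_load(detected_plate_colours) == programmed_weight
```

for example, a bar loaded with two forty-five pound (blue) plates per side totals two hundred and twenty-five pounds, and any mismatch against the programmed weight flags a misload.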
the computer 68 may use cameras 66 to "see" whether the bar latches/locks 71 / 81 are in use or not, to help prevent the system 20 from operating if they are improperly used and thereby prevent damage to the system 20 . cameras 66 may be placed at the following locations relative to the frame 60 : one front middle; one rear middle; two on the sides in the middle; and one on each side, whereby 360-degree visual coverage of the lifter and barbell 120 is captured. cameras 66 may be hung down from mounts on the ceiling of the frame 60 . the cameras 66 may be disposed approximately eight feet off the surface of the platform 30 when hanging from the ceiling mounts that are approximately nine feet from the surface of the platform 30 . the cameras 66 may be in fixed and/or moveable positions. the cameras 66 may be oriented to look downward and towards the center of the platform 30 . the computer 68 may be configured to provide logistics support by knowing what load and position of the barbell 120 is on the system 20 as well as on other systems 20 in a network of systems 20 , wherein the computer 68 can communicate to lifter(s) where to go next for their current and future lift(s) and what weight to use to minimize the queue for the system 20 . for example, if a user had a plurality of systems 20 within a few feet of each other with different/similar loads on each barbell 120 on each system 20 , the computers 68 will calculate where each person should go and what to do based on their workouts and tell them that in real time to reduce the queue on the system 20 . the system 20 may "talk" to lifters via bluetooth or other wireless communication technology via earphones, speakers, or the like on the frame 60 , other software applications or "smart" devices. one system 20 may "talk" to other systems 20 or other "smart" devices via wifi, lan or bluetooth in the network of systems 20 .
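the logistics support described above amounts to an assignment problem. a minimal greedy sketch, in which the "loaded_weight" and "queue_len" fields are hypothetical summaries of each networked system 20 , might prefer a system already loaded to the lifter's required weight and fall back on the shortest queue:

```python
# greedy sketch of routing a lifter to the best system in the network;
# the per-system fields are hypothetical, not from the specification.
def assign_lifter(required_weight, systems):
    """Return the index of the best system for this lifter.

    systems: list of dicts with 'loaded_weight' (lbs on that barbell)
    and 'queue_len' (lifters waiting). A system already loaded to the
    required weight wins; ties and misses fall back to the shortest queue.
    """
    def cost(s):
        reload_penalty = 0 if s["loaded_weight"] == required_weight else 1
        return (reload_penalty, s["queue_len"])
    return min(range(len(systems)), key=lambda i: cost(systems[i]))
```

a real scheduler would also weigh the lifter's remaining sets and walking distance, but the tuple ordering above captures the stated goal of minimizing both reloading and queueing.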
the system 20 may be capable of lan, wifi and/or ethernet wiring and/or being connected to the internet for live coaching by trainers, updates to the system 20 and/or transmitting data to other computers 68 , a central computer or data storage and processing systems. the system 20 may be plugged into a power outlet, use batteries or other power storage and retrieval systems, and have usb outlets, antennas or receivers. the computer 68 may be configured to keep track of the wear and tear of the system 20 for engineering updates and spread the wear and tear amongst systems 20 in a network of systems 20 . the computer 68 may be configured to provide weight verification on the barbell 120 so that the lifter is using the correct weight, preventing misloading of the barbell 120 . the computer 68 may be configured to provide advanced lifting support by reducing the perceived load on the barbell 120 by providing an opposite force on the barbell 120 . for example, a barbell 120 may weigh forty-five pounds but a lifter can only lift and lower thirty-five pounds on the bench press. so, an upward force of ten pounds can be applied, via the linear actuators 80 b-j for the safety bars 80 and/or actuators 90 for the side actuating benches 40 , to make the barbell 120 "weigh" thirty-five pounds. the computer 68 may be configured to allow for the use of more advanced lifting techniques such as eccentric overload training. for example, the lifter puts three hundred and fifteen pounds on the barbell 120 for bench press but can only bench three hundred pounds. the lifter can lower the three hundred and fifteen pounds, but when pressing the weight back up the system 20 can provide the fifteen or more pounds of force, via the linear actuators 80 b-j for the safety bars 80 and/or the side actuating bench actuators 90 for the side actuating benches 40 , necessary to help the lifter rack the weight.
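the load-reduction arithmetic in the two examples above reduces to computing the upward force the actuators must supply. a minimal sketch:

```python
# sketch of the assist-force arithmetic from the examples above:
# the actuators supply whatever upward force closes the gap between
# the bar's actual weight and what the lifter can handle.
def assist_force(bar_weight_lbs, lifter_capacity_lbs):
    """Upward force (lbs) needed so the bar 'weighs' what the lifter
    can lift; zero when no help is needed."""
    return max(0, bar_weight_lbs - lifter_capacity_lbs)
```

with the numbers from the text, a forty-five pound bar against a thirty-five pound capacity yields a ten pound assist, and a three hundred and fifteen pound bar against a three hundred pound capacity yields a fifteen pound assist.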
the computer 68 may be configured to selectively move and lock the central actuating bench 50 , side actuating benches 40 , adjustable bench 55 , bar supports 70 and safety bars 80 for assisting the lifter in concentric, eccentric or isometric weight-lifting regimens. the computer(s) 68 may facilitate a tilt function of the side actuating benches 40 that may be used for repositioning the barbell 120 along the platform 30 or side actuating benches 40 or safety bars 80 , as shown in fig. 8 . by tilting the side actuating benches 40 clockwise or counterclockwise, the barbell 120 will roll in that direction with or without weight on the barbell 120 . this tilt function may be controlled by a computer 68 that knows the degree of tilt of the side actuating bench 40 . the degree of tilt may be changed by using one of the actuators 90 to raise or lower one part of the side actuating bench 40 more than the other part of the same side actuating bench 40 , and thus a tilt is created. a computer 68 may know the position of the barbell 120 via use of the cameras 66 and may tilt the side actuating benches 40 to control the location of the barbell 120 via use of the actuators 90 . each platform portion or associated bench 40 can be independently controlled to tilt, to better control the rolling of the barbell 120 into position. similarly, the platform 30 and side actuating benches 40 can also be used to help "catch" or "absorb" a dropped barbell 120 to help dampen the sound and keep the barbell 120 from bouncing away from or towards the lifter(s). the basket latch 71 and the notch latch 81 may be manually controlled by the lifter(s), who may visually verify their securement by use of the cameras 66 and the computer 68 display. the computer 68 may verify the use of the basket latch 71 and notch latch 81 by the cameras 66 so that no damage to the system 20 will occur if they are improperly used. the computer(s) 68 may control all motors and actuators of the system 20 and cameras 66 .
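the tilt function can be sketched from the geometry described above, assuming two actuators 90 under one side actuating bench 40 at a known spacing; the barbell rolls towards the lower end of the tilted bench. the actuator spacing and heights below are illustrative values:

```python
import math

# sketch of the differential-actuator tilt described above; the spacing
# and heights are illustrative, not dimensions from the specification.
def tilt_degrees(height_a, height_b, actuator_spacing):
    """Tilt angle of a bench whose two actuators sit actuator_spacing
    apart and are raised to height_a and height_b (same units)."""
    return math.degrees(math.atan2(height_a - height_b, actuator_spacing))

def roll_direction(height_a, height_b):
    """Which way a barbell rolls on the tilted bench: towards the lower end."""
    if height_a > height_b:
        return "towards_b"
    if height_b > height_a:
        return "towards_a"
    return "none"
```

the computer 68 would close the loop by watching the barbell's camera-tracked position and levelling the bench once the bar reaches its target.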
the computer(s) 68 may also process and relay data to other machines, computers, devices or a central computer in the network. the computer(s) 68 may collect data on every lifter in real time, including weights used, movements executed, time spent unloading/loading the barbell, resting, and time spent on each lift including warm up and work sets, as well as positioning of the equipment on the system 20 when the system 20 is in use. the computer(s) 68 may also collect data on how long each lifter is in queue and time spent entering, leaving, and getting prepared for the lift, or any other data wanted by trainers, researchers or the users themselves. the safety bars 80 with the notch 82 may give the system 20 the capability to perform as a mono-lift. for example, a person wants to squat two hundred and twenty-five pounds with the mono-lift function. they would load the barbell 120 to two hundred and twenty-five pounds while the barbell 120 is positioned in the notch 82 . they would position themselves under the barbell 120 as needed for the squat and then start the squat by standing up and moving the barbell 120 off the notch 82 ; the notch 82 may then be lowered by the safety bars 80 controlled by the camera 66 and computer 68 system. then the user would squat without having to move their feet into a new position. then at the bottom of the repetition of the squat the safety bar 80 and notch 82 may be raised by the camera 66 and computer 68 system so that the lifter can rack the weight back into the notch 82 at the end of the repetition. the safety bars 80 and side actuating benches 40 may complement each other. they may provide more lifting forces and different ways to spot/assist a lifter. the safety bars 80 may provide a "track" when raised slightly above the platform 30 or side actuating benches 40 , thus keeping the barbell 120 from rolling off the system 20 .
because self-locking worm screw and gear linear actuators 70 b and 80 b-j may be used for the uprights 62 a-d and actuators 90 , each height of the bar support 70 , safety bar 80 , side actuating benches 40 , central bench 50 and adjustable bench 55 is simultaneously self-locking. this makes the system safer in the event of power loss or weight dropping on the components, and extends the life of the motors 74 / 84 and actuators 90 powering the system 20 by putting less stress on the motors 74 / 84 and actuators 90 when loads are moved, lifted, lowered or dropped on the system 20 . the computer 68 and camera 66 system may use the lifter's anthropometry, by approximations of the user's body, to determine the correct reference angles and distances between joints and other parts of the human body for a lifter to configure themselves to lift the barbell 120 , other weights or devices. as illustrated in fig. 10a-c , the lifter 110 may have their image taken by the cameras 66 and simplified to an outline 112 and wire model 114 for analysis of and by use of the computer 68 to configure the user to lift the barbell 120 with proper technique or use other fitness tools. the points in fig. 10b are numbered to portray a simple example of where some key locations/nodes of the human body (but not all locations) are for calculating reference angles and proper technique. nodes 1 and 2 represent locations of the cervical spine c1 and c7 , respectively, and the rest of the numbers are odd numbered when viewed from the right to represent the right side of the user. the even numbers, not shown, represent the left side of the same location. node 3 is the right shoulder joint and node 4 would be the left shoulder joint. node 5 is the right elbow and node 6 would be the left elbow, etc. node 7 is the right wrist, node 9 is the center of the right hand, node 11 is the right hip joint, node 13 is the right knee, node 15 is the right ankle, node 17 is the right heel and node 19 is the right toes.
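the reference-angle calculation implied by the node numbering above can be sketched in two dimensions. the coordinates, the choice of nodes (right shoulder, node 3 ; right hip, node 11 ), and the tolerance band are illustrative assumptions, not values from the specification:

```python
import math

# sketch of a back-angle check from two wire-model nodes in a 2-d side
# view, with the platform along the x-axis; all numbers are illustrative.
def back_angle_degrees(shoulder_xy, hip_xy):
    """Angle between the lifter's back (hip -> shoulder) and the platform."""
    dx = shoulder_xy[0] - hip_xy[0]
    dy = shoulder_xy[1] - hip_xy[1]
    return math.degrees(math.atan2(dy, dx))

def back_set_correctly(shoulder_xy, hip_xy, lo=30.0, hi=60.0):
    """Illustrative tolerance band for the start of a deadlift; real
    bounds would come from the exercise database and the lifter's
    anthropometry."""
    return lo <= back_angle_degrees(shoulder_xy, hip_xy) <= hi
```

the same two-node pattern extends to any pair of adjacent joints (knee angle from hip-knee-ankle, elbow angle from shoulder-elbow-wrist, and so on).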
the computer 68 and camera 66 system may approximate these locations to determine the distances between them and each other, to finally calculate the lifter's anthropometry and reference angles for the lifts to be executed with proper technique in real time. the present invention contemplates a database of wireframe model exercise routines against which the computer(s) 68 may compare captured wireframe models in order to make a determination of a proper or improper positioning of one or more of a user's body portions. the voice commands by the computer 68 to the user may be in the voice of the user, a generic "robotic" voice or other voices, such as but not limited to their trainer's or a celebrity's. the cameras 66 and computer 68 system may record the movements of a trainer or a user performing a workout in real time, which it can then have users perform for their workout in real time for local or long-distance training on their own systems 20 . the system 20 may help the users of said workout routine with coaching of those movements in real time. the cameras 66 and computer 68 system may recognize other fitness tools such as dumbbells, exercise bikes, row machines, jump ropes or any other fitness tool and may train people how to use them the same way it would train people how to lift the barbell 120 . this includes bodyweight movements. the camera 66 and computer 68 system may "spot" the user via visual cues. for example, when the system 20 is configured for a user to bench press, the safety bars 80 and/or side actuating benches 40 may be raised to a position slightly below the user on the central bench 50 such that, if the barbell 120 is dropped, they may rise and contact the barbell 120 and not the user. for example, when the user is on the central bench 50 and takes the barbell 120 off the bar supports 70 , the computer 68 and camera 66 system may know that first movement and position is the start and end of the movement.
it may know the barbell 120 will touch the user's chest at the bottom of the movement before pressing the barbell 120 back to the initial position, because the computer 68 might have a database of exercises and knows what to expect with that lift or other lifts or movements. while the user is performing the bench press, if they drop the barbell 120 (intentionally, due to injury, because they cannot press the weight off their chest, because they experience muscle failure during any other part of the movement, or for other reasons), the camera 66 and computer 68 may "see" that and react by raising the safety bars 80 and/or side actuating benches 40 to contact the barbell 120 and assist the user to rack the weight back into the bar supports 70 . this process may be very similar to how another human would "spot" a lifter using visual cues or body language or voice commands. this includes the user using voice commands or body language, such as saying "help" or shaking their head "no", to get the system 20 to assist the lifter. this spotting process is not limited to the bench press but applies to any movement for which the safety bars 80 and/or side actuating benches 40 are needed to "spot" the user. it is understood that when the present invention is training users it may consider the user's limitations, such as but not limited to range of motion, previous or current injury(s), etc., for training purposes. it is understood that when the barbell 120 becomes wedged in-between the bar supports 70 and safety bars 80 the system 20 may "see" that and prevent damage to the system 20 . it is understood that the priority of the system 20 may be the health of the user and not damage to the system 20 .
platform, bar support, and safety bar specifications
the following dimensions and specifications of the system 20 are given so the reader has a general sense of the relative size of the system 20 . many aspects of the system 20 may change.
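the spotting reaction described above can be reduced to a simple trigger on the bar's tracked motion. the velocity and stall thresholds here are hypothetical placeholders for what the computer 68 would derive from its exercise database and the lifter's history:

```python
# minimal sketch of the visual "spotting" trigger: raise the safeties
# if the bar drops faster than a threshold, or stalls mid-repetition.
# both thresholds are assumed values, not from the specification.
DROP_VELOCITY_MPS = -1.5   # downward bar speed treated as a drop
STALL_TIME_S = 3.0         # seconds without upward progress = failure

def should_spot(bar_velocity_mps, seconds_stalled):
    """True when the safety bars / side benches should rise to the bar."""
    return bar_velocity_mps < DROP_VELOCITY_MPS or seconds_stalled > STALL_TIME_S
```

a voice command such as "help" or a head-shake recognized by the cameras would simply force the same trigger regardless of the bar's motion.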
the system 20 may be significantly larger or smaller than what is specified. platform 30 base dimensions may be approximately eight feet wide and approximately nine feet long; the height of the base is determined by the space needed for the scissor lifts and motors/actuators, drive shaft, support trusses, etc. for the scissor lifting actuators 90 or other actuating devices, but overall, the ceiling (top surface of the crossmembers 64 a-d) may be approximately nine feet from the surface of the platform 30 . the central actuating bench 50 supporting surface may be approximately ten inches wide and may be approximately forty-eight inches in length. the actuating bench 50 may extend to approximately twenty inches above the platform 30 . the side actuating benches 40 may rise approximately five feet from the platform 30 and their supporting surface may be approximately twenty-eight inches wide and may be approximately one hundred and four inches in length. the longitudinal spacing of the vertical uprights may be approximately one hundred inches. the latitudinal spacing of the vertical uprights may be approximately forty-eight and one-half inches. the vertical uprights 62 a- 62 d may be approximately nine feet in length. the safety bar 80 may be approximately two inches wide and ninety-six inches in length. the notches 82 may be approximately mid-length along the safety bar 80 . the motors 74 / 84 , actuators 90 , worm gears 70 c, worm screws 70 d/ 80 d, bevel gears 80 g/ 80 i, worm/bevel gear 80 c, drive shafts 70 f/ 80 f, worm screw shafts 70 e/ 80 e, bevel shafts 80 h and connecting mechanisms may also be housed in the platform 30 or reengineered to be in the uprights 62 a-d. it is understood that the motors 74 / 84 and drive shafts 70 f/ 80 f may be separate from the system 20 . the wiring for the cameras 66 and computers 68 may be inside the horizontal members 64 a- 64 d as well as the vertical uprights 62 a- 62 d or inside the platform 30 .
the system 20 may have a terminal where lifters can manually control aspects of the system 20 . the terminal may be located on the outward facing side of one vertical upright, approximately five feet off the platform 30 . a method of using the present invention may include the following. the system 20 disclosed may be provided. a lifter would place the barbell 120 on the bar supports 70 , in the basket 72 , without additional weight on the barbell 120 . the barbell 120 is secured with the basket latch 71 so the barbell 120 does not come off the bar supports 70 while adjusting the barbell 120 height or loading the barbell 120 with weight, by way of operating the linear actuators 70 b via the computer 68 command and control functionality. to adjust the barbell 120 height the user would selectively operate the motor 74 accordingly. after adjusting the barbell 120 height and loading weight on the barbell 120 , the basket latches 71 may be moved to an unlocked condition so the lifter can lift the weight. also, the lifter-user may lift, by way of the actuated bar supports 70 , the barbell 120 that is supported by the platform 30 and/or side actuating bench 40 , through utilizing the nested position of the safety bar 80 and its cavities 86 , which are occupied by the basket portion 72 of the bar support 70 . as used in this application, the term "about" or "approximately" refers to a range of values within plus or minus 10% of the specified number. and the term "substantially" refers to up to 90% or more of an entirety. recitation of ranges of values herein is not intended to be limiting, referring instead individually to any and all values falling within the range, unless otherwise indicated, and each separate value within such a range is incorporated into the specification as if it were individually recited herein.
the words “about,” “approximately,” or the like, when accompanying a numerical value, are to be construed as indicating a deviation as would be appreciated by one of ordinary skill in the art to operate satisfactorily for an intended purpose. ranges of values and/or numeric values are provided herein as examples only, and do not constitute a limitation on the scope of the described embodiments. the use of any and all examples, or exemplary language (“e.g.,” “such as,” or the like) provided herein, is intended merely to better illuminate the embodiments and does not pose a limitation on the scope of the embodiments or the claims. no language in the specification should be construed as indicating any unclaimed element as essential to the practice of the disclosed embodiments. in the following description, it is understood that terms such as “first,” “second,” “top,” “bottom,” “up,” “down,” and the like, are words of convenience and are not to be construed as limiting terms unless specifically stated to the contrary. it should be understood, of course, that the foregoing relates to exemplary embodiments of the invention and that modifications may be made without departing from the spirit and scope of the invention as set forth in the following claims.
147-071-959-525-739
AU
[ "WO", "AU", "US" ]
H04L9/00,G06N20/00,H04W12/00,H04L43/062,G16Y40/35,H04L43/04,H04L47/2441,H04L47/2483,H04L67/12,H04L67/30,G06F11/00,G16Y30/10,G16Y40/10,G16Y40/50,H04L43/026,H04W28/02
2018-12-14T00:00:00
2018
[ "H04", "G06", "G16" ]
apparatus and process for monitoring network behaviour of internet-of-things (iot) devices
a process for monitoring network behaviour of iot devices, which includes: monitoring communication network traffic to identify tcp and udp traffic flows to and from each of one or more iot devices; processing the identified traffic flows to generate a corresponding data structure representing the identified network traffic flows of the iot device in terms of, for each of local and internet networks, one or more identifiers of respective hosts and/or devices that had a network connection with the iot device, source and destination ports and network protocols; and comparing the generated data structure for each iot device to corresponding data structures representing predetermined manufacturer usage description (mud) specifications of known types of iot devices to generate quantitative measures of similarity of the traffic flows of the iot device to traffic flows defined by the predetermined mud specifications to identify the type of the iot device.
claims: 1. a process for monitoring network behaviour of internet of things (iot) devices, the process including the steps of: monitoring network traffic of a communications network to identify tcp and udp network traffic flows to and from each of one or more iot devices of the communications network; processing the identified network traffic flows of each iot device to generate a corresponding data structure for each iot device representing the identified network traffic flows of the iot device in terms of, for each of local and internet networks, one or more identifiers of respective hosts and/or devices that had a network connection with the iot device, source and destination ports and network protocols; and comparing the generated data structure for each iot device to corresponding data structures representing predetermined manufacturer usage description (mud) specifications of known types of iot devices to generate, for each iot device, quantitative measures of similarity of the traffic flows of the iot device to traffic flows defined by the predetermined mud specifications to identify the type of the iot device and/or to determine whether the traffic flows of the iot device conform to expected behaviour of the known types of iot devices. 2. the process of claim 1, wherein the data structure is a tree structure with branches respectively representing network traffic to the iot device and from the iot device, and for each branch of the tree structure, one or more sub-branches, each said sub branch representing a corresponding network address name, ethernet frame ethertype, internet protocol number, and port number. 3. the process of claim 2, wherein the tree structure branches respectively represent network traffic to internet, from internet, to local network and from local network. 4. 
the process of claim 2 or 3, including compacting the generated data structure for an iot device by combining branches of the tree structure of the generated data structure based on intersections between the branches and one or more corresponding branches of one or more corresponding data structures representing respective predetermined mud specifications of respective known types of iot devices. 5. the process of any one of claims 1 to 4, wherein the data structure is a tree structure with branches respectively representing network traffic to internet, from internet, to local network and from local network, and for each said branch, one or more sub branches, each said sub-branch representing a corresponding network address name, ethernet frame ethertype, internet protocol number, and port number. 6. the process of any one of claims 1 to 5, wherein the quantitative measures of similarity include dynamic similarity scores according to: where r represents the generated data structure for the iot device following removal of any redundant rules, and mi represents the corresponding data structures representing predetermined manufacturer usage description (mud) specifications of known types of iot devices. 7. the process of any one of claims 1 to 6, wherein the quantitative measures of similarity include static similarity scores according to: where r represents the generated data structure for the iot device following removal of any redundant rules, and mi represents the corresponding data structures representing predetermined manufacturer usage description (mud) specifications of known types of iot devices. 8. the process of any one of claims 1 to 7, including periodically repeating the steps of monitoring, processing and comparing to generate data representing the quantitative measures of similarity as a function of time. 9. the process of claim 8, including generating an alert if network traffic behaviour of an iot device changes substantially over time. 10. 
the process of any one of claims 1 to 9, wherein the processed network traffic flows of each iot device do not include ssdp flows. 11. the process of any one of claims 1 to 10, wherein the step of comparing includes independently generating the quantitative measures of similarity for the iot device for each of local network and internet channels to identify the type of the iot device, and only if the type of the iot device identified for the channels do not agree, then generating quantitative measures of similarity for the iot device for an aggregate of the local network channel and the internet channel to identify the type of the iot device. 12. an apparatus for monitoring network behaviour of internet of things (iot) devices configured to execute the device classification process of any one of claims 1 to 11. 13. at least one computer-readable storage medium having stored thereon executable instructions and/or fpga configuration data that, when the instructions are executed by at least one processor and/or when an fpga is configured in accordance with the fpga configuration data, cause the at least one processor and/or the fpga to execute the device classification process of any one of claims 1 to 11. 14. 
an apparatus for monitoring network behaviour of internet of things (iot) devices, including : a network traffic monitor to monitor network traffic of a communications network to identify tcp and udp network traffic flows to and from each of one or more iot devices of the communications network; an iot device identifier to process the identified network traffic flows of each iot device to generate a corresponding data structure for each iot device representing the identified network traffic flows of the iot device in terms of, for each of local and internet networks, one or more identifiers of respective hosts and/or devices that had a network connection with the iot device, source and destination ports and network protocols; and an anomaly detector to compare the generated data structure for each iot device to corresponding data structures representing predetermined manufacturer usage description (mud) specifications of known types of iot devices to generate, for each iot device, quantitative measures of similarity of the traffic flows of the iot device to traffic flows defined by the predetermined mud specifications to identify the type of the iot device and/or to determine whether the traffic flows of the iot device conform to expected behaviour of the known types of iot devices. 15. the apparatus of claim 14, wherein the data structure is a tree structure with branches respectively representing network traffic to the iot device and from the iot device, and for each branch of the tree structure, one or more sub-branches, each said sub-branch representing a corresponding network address name, ethernet frame ethertype, internet protocol number, and port number. 16. the apparatus of claim 15, wherein the tree structure branches respectively represent network traffic to internet, from internet, to local network and from local network. 17. 
the apparatus of claim 15 or 16, including a data structure compacting component configured to compact the generated data structure for an iot device by combining branches of the tree structure of the generated data structure based on intersections between the branches and one or more corresponding branches of one or more corresponding data structures representing respective predetermined mud specifications of respective known types of iot devices. 18. the apparatus of any one of claims 14 to 17, wherein the data structure is a tree structure with branches respectively representing network traffic to internet, from internet, to local network and from local network, and for each said branch, one or more sub-branches, each said sub-branch representing a corresponding network address name, ethernet frame ethertype, internet protocol number, and port number. 19. the apparatus of any one of claims 14 to 18, wherein the quantitative measures of similarity include dynamic similarity scores according to: where r represents the generated data structure for the iot device following removal of any redundant rules, and mi represents the corresponding data structures representing predetermined manufacturer usage description (mud) specifications of known types of iot devices. 20. the apparatus of any one of claims 14 to 19, wherein the quantitative measures of similarity include static similarity scores according to: where r represents the generated data structure for the iot device following removal of any redundant rules, and mi represents the corresponding data structures representing predetermined manufacturer usage description (mud) specifications of known types of iot devices. 21. the apparatus of any one of claims 14 to 20, wherein the apparatus is configured to periodically repeat the steps of monitoring, processing and comparing to generate data representing the quantitative measures of similarity as a function of time. 22. 
the apparatus of claim 21, including an alert generator configured to generate an alert if network traffic behaviour of an iot device changes substantially over time. 23. the apparatus of any one of claims 14 to 22, wherein the processed network traffic flows of each iot device do not include ssdp flows. 24. the apparatus of any one of claims 14 to 23, wherein the anomaly detector is configured to independently generate the quantitative measures of similarity for the iot device for each of local network and internet channels to identify the type of the iot device, and only if the type of the iot device identified for the channels do not agree, to generate quantitative measures of similarity for the iot device for an aggregate of the local network channel and the internet channel to identify the type of the iot device.
apparatus and process for monitoring network behaviour of internet-of-things (iot) devices technical field the present invention relates to network security, and in particular security of networks that include internet-of-things (iot) devices, and more particularly to an apparatus and process for monitoring network behaviour of iot devices. background networked devices continue to become increasingly ubiquitous in a wide variety of settings, including businesses and other organisations, and domestic settings. in particular, the addition of network connectivity to sensors and appliance-type devices generally dedicated to a specific task has created a new class of devices and interconnectivity, generally referred to as forming an 'internet-of-things', or simply 'iot'. thus examples of iot devices include lightbulbs, doorbells, power switches, weight scales, security cameras, air conditioning equipment, home automation and voice-activated internet interfaces in the general form of audio speakers (e.g., google home and amazon echo) and other 'smart' devices, including a wide variety of networked sensors most commonly used to sense environmental parameters such as temperature, humidity, motion, smoke and air quality. there are now so many such devices available that their management has become challenging, particularly from a security standpoint, for large networks such as those found in large enterprises and university campuses, for example. such networks may include literally thousands of such devices which largely remain unidentified and may pose significant security risks to the network. most iot devices are relatively simple, and cannot defend themselves from cyber attacks. many connected iot devices can be found on search engines such as shodan, and their vulnerabilities exploited at scale.
for example, a recent cyber attack on a casino relied upon compromised fish tank sensors, and a recent attack on a university campus network relied upon networked vending machines. dyn, a major dns provider, was subjected to a ddos attack originating from a large iot botnet comprising thousands of compromised ip-cameras. thus iot devices, exposing tcp/udp ports to arbitrary local endpoints within a home or enterprise, and to remote entities on the wider internet, can be used by inside and outside attackers to reflect/amplify attacks and to infiltrate otherwise secure networks. it is desired to overcome or alleviate one or more difficulties of the prior art, or to at least provide a useful alternative. summary in accordance with some embodiments of the present invention, there is provided a process for monitoring network behaviour of internet of things (iot) devices, the process including the steps of: monitoring network traffic of a communications network to identify tcp and udp network traffic flows to and from each of one or more iot devices of the communications network; processing the identified network traffic flows of each iot device to generate a corresponding data structure for each iot device representing the identified network traffic flows of the iot device in terms of, for each of local and internet networks, one or more identifiers of respective hosts and/or devices that had a network connection with the iot device, source and destination ports and network protocols; and comparing the generated data structure for each iot device to corresponding data structures representing predetermined manufacturer usage description (mud) specifications of known types of iot devices to generate, for each iot device, quantitative measures of similarity of the traffic flows of the iot device to traffic flows defined by the predetermined mud specifications to identify the type of the iot device and/or to determine whether the traffic flows of the iot device conform to 
expected behaviour of the known types of iot devices. in some embodiments, the data structure is a tree structure with branches respectively representing network traffic to the iot device and from the iot device, and for each branch of the tree structure, one or more sub-branches, each said sub-branch representing a corresponding network address name, ethernet frame ethertype, internet protocol number, and port number. in some embodiments, the tree structure branches respectively represent network traffic to internet, from internet, to local network and from local network. in some embodiments, the process includes compacting the generated data structure for an iot device by combining branches of the tree structure of the generated data structure based on intersections between the branches and one or more corresponding branches of one or more corresponding data structures representing respective predetermined mud specifications of respective known types of iot devices. in some embodiments, the data structure is a tree structure with branches respectively representing network traffic to internet, from internet, to local network and from local network, and for each said branch, one or more sub-branches, each said sub-branch representing a corresponding network address name, ethernet frame ethertype, internet protocol number, and port number. in some embodiments, the quantitative measures of similarity include dynamic similarity scores according to: |r ∩ mi| / |r|, where r represents the generated data structure for the iot device following removal of any redundant rules, and mi represents the corresponding data structures representing predetermined manufacturer usage description (mud) specifications of known types of iot devices.
in some embodiments, the quantitative measures of similarity include static similarity scores according to: |r ∩ mi| / |mi|, where r represents the generated data structure for the iot device following removal of any redundant rules, and mi represents the corresponding data structures representing predetermined manufacturer usage description (mud) specifications of known types of iot devices. in some embodiments, the process includes periodically repeating the steps of monitoring, processing and comparing to generate data representing the quantitative measures of similarity as a function of time. in some embodiments, the process includes generating an alert if network traffic behaviour of an iot device changes substantially over time. in some embodiments, the processed network traffic flows of each iot device do not include ssdp flows. in some embodiments, the step of comparing includes independently generating the quantitative measures of similarity for the iot device for each of local network and internet channels to identify the type of the iot device, and only if the type of the iot device identified for the channels do not agree, then generating quantitative measures of similarity for the iot device for an aggregate of the local network channel and the internet channel to identify the type of the iot device. in accordance with some embodiments of the present invention, there is provided an apparatus for monitoring network behaviour of internet of things (iot) devices configured to execute the process of any one of the above processes.
in accordance with some embodiments of the present invention, there is provided at least one computer-readable storage medium having stored thereon executable instructions and/or fpga configuration data that, when the instructions are executed by at least one processor and/or when an fpga is configured in accordance with the fpga configuration data, cause the at least one processor and/or the fpga to execute the device classification process of any one of the above processes. in accordance with some embodiments of the present invention, there is provided an apparatus for monitoring network behaviour of internet of things (iot) devices, including: a network traffic monitor to monitor network traffic of a communications network to identify tcp and udp network traffic flows to and from each of one or more iot devices of the communications network; an iot device identifier to process the identified network traffic flows of each iot device to generate a corresponding data structure for each iot device representing the identified network traffic flows of the iot device in terms of, for each of local and internet networks, one or more identifiers of respective hosts and/or devices that had a network connection with the iot device, source and destination ports and network protocols; and an anomaly detector to compare the generated data structure for each iot device to corresponding data structures representing predetermined manufacturer usage description (mud) specifications of known types of iot devices to generate, for each iot device, quantitative measures of similarity of the traffic flows of the iot device to traffic flows defined by the predetermined mud specifications to identify the type of the iot device and/or to determine whether the traffic flows of the iot device conform to expected behaviour of the known types of iot devices.
in some embodiments, the data structure is a tree structure with branches respectively representing network traffic to the iot device and from the iot device, and for each branch of the tree structure, one or more sub-branches, each said sub-branch representing a corresponding network address name, ethernet frame ethertype, internet protocol number, and port number. in some embodiments, the tree structure branches respectively represent network traffic to internet, from internet, to local network and from local network. in some embodiments, the apparatus includes a data structure compacting component configured to compact the generated data structure for an iot device by combining branches of the tree structure of the generated data structure based on intersections between the branches and one or more corresponding branches of one or more corresponding data structures representing respective predetermined mud specifications of respective known types of iot devices. in some embodiments, the data structure is a tree structure with branches respectively representing network traffic to internet, from internet, to local network and from local network, and for each said branch, one or more sub-branches, each said sub-branch representing a corresponding network address name, ethernet frame ethertype, internet protocol number, and port number. in some embodiments, the quantitative measures of similarity include dynamic similarity scores according to: |r ∩ mi| / |r|, where r represents the generated data structure for the iot device following removal of any redundant rules, and mi represents the corresponding data structures representing predetermined manufacturer usage description (mud) specifications of known types of iot devices.
in some embodiments, the quantitative measures of similarity include static similarity scores according to: |r ∩ mi| / |mi|, where r represents the generated data structure for the iot device following removal of any redundant rules, and mi represents the corresponding data structures representing predetermined manufacturer usage description (mud) specifications of known types of iot devices. in some embodiments, the apparatus is configured to periodically repeat the steps of monitoring, processing and comparing to generate data representing the quantitative measures of similarity as a function of time. in some embodiments, the apparatus includes an alert generator configured to generate an alert if network traffic behaviour of an iot device changes substantially over time. in some embodiments, the processed network traffic flows of each iot device do not include ssdp flows. in some embodiments, the anomaly detector is configured to independently generate the quantitative measures of similarity for the iot device for each of local network and internet channels to identify the type of the iot device, and only if the type of the iot device identified for the channels do not agree, to generate quantitative measures of similarity for the iot device for an aggregate of the local network channel and the internet channel to identify the type of the iot device.
brief description of the drawings

some embodiments of the present invention are hereinafter described, by way of example only, with reference to the accompanying drawings, wherein:

figure 1 is a schematic diagram of a communications network including internet of things (iot) devices and an apparatus for monitoring network behaviour of the iot devices;

figure 2 is a block diagram of a system for assessing network behaviour of internet of things (iot) devices in accordance with an embodiment of the present invention;

figure 3 is a flow diagram of a process for assessing network behaviour of internet of things (iot) devices in accordance with an embodiment of the present invention;

figure 4 is a flow diagram of a flow rule generation process of the process of figure 3;

figures 5 and 6 are sankey diagrams of mud profiles of, respectively, a tp-link camera iot device, and an amazon echo iot device;

figure 7 shows a meta-graph consisting of six variables, five sets, and three edges;

figure 8 is a meta-graph model of the mud policy of a lifx lightbulb iot device, representing its permitted traffic flow behaviour;

figures 9 to 11 are graphical representations of respective different rule sets defining the same mud policy, where each rectangular region represents the network packets allowed by a corresponding rule, and figure 11 represents a canonical set of rules generated by horizontal partitioning of the aggregate polygon defined by the rule sets of figures 9 and 10;

figures 12 and 13 are schematic representations of run-time profiles of a tp-link power plug iot device generated from network traffic collected over periods of 30 and 480 minutes, respectively;

figure 14 is a schematic diagram illustrating a comparison of a run-time profile against known mud profiles;

figures 15 to 17 are graphs of static and dynamic similarity scores generated for four different iot devices as a function of time during collection and analysis of network traffic flows;

figure 18 is a schematic representation of an ssdp run-time profile across all devices on a network;

figure 19 is a graph of the average number of winners and the average static similarity score as a function of time during collection and analysis of network traffic flows;

figure 20 is a confusion matrix showing the relationship between predicted and true labels of 28 different iot devices;

figure 21 is a schematic representation of dynamic similarity versus static similarity;

figure 22 is a partial confusion matrix showing the relationship between predicted labels of 28 different iot devices and true labels of three iot devices;

figures 23 and 24 are graphs showing the relationship between internet similarity scores and dynamic (figure 23) or static (figure 24) local similarity scores;

figure 25 is a schematic representation of a profile difference tree structure for an ihome iot device;

figure 26 is a schematic representation illustrating endpoint compaction for an hp printer iot device (for the "to internet" channel direction);

figure 27 is a partial confusion matrix showing the relationship between true and predicted labels of five different iot devices; and

figure 28 is a schematic representation of a profile difference for a constructed "sense-me" iot device infected by the mirai botnet virus.

detailed description

the security concerns described above have prompted standards bodies to provide guidelines for the internet community to build secure iot devices and services, and for regulatory bodies (such as the us fcc) to control their use. in particular, an ietf proposal named the "manufacturer usage description" (mud) specification (see https://datatracker.ietf.org/doc/draft-ietf-opsawg-mud/) provides the first formal framework for iot behaviour that can be rigorously enforced.
this framework requires manufacturers of iots to publish a behavioural profile of their device, as they are the ones with the best knowledge of how their device will behave when installed in a network. for example, an ip camera may need to use dns and dhcp on the local network, and communicate with ntp servers and a specific cloud-based controller in the internet, but nothing else. however, such requirements vary significantly across iots from different manufacturers. knowing each device's requirements would allow network operators to impose a tight set of access control list (acl) restrictions on each iot device in operation so as to reduce the potential attack surface of their networks. the ietf mud proposal provides a light-weight model to enforce effective baseline security for iot devices by allowing a network to auto-configure the required network access for the devices, so that they can perform their intended functions without having unrestricted network privileges. however, mud is a new and emerging paradigm, and the ietf mud specification is still evolving as a draft. accordingly, iot device manufacturers have not yet provided mud profiles for their devices, and moreover there is little collective wisdom today on how manufacturers should develop behavioural profiles of their iot devices, or how organizations should use mud profiles to secure their networks or monitor the runtime behaviour of iot devices. to address these difficulties, the inventors have developed apparatuses and processes for securing computer networks that include "internet-of-things" (iot) devices. 
as described herein, the apparatuses and processes process network traffic of a communications network to: (i) automatically generate a corresponding mud profile for each iot device in the network; and (ii) periodically assess whether the run-time network behaviour of each iot device is consistent with its corresponding mud profile, and to detect changes to its network behaviour that may be indicative of a security attack. in one example described below, the apparatuses and processes were applied to a testbed network including 28 distinct iot devices, capturing the network behaviour of the iot devices over a period of six months and processing the resulting data to identify, inter alia: (a) legacy iot devices without vendor mud support; (b) iot devices with outdated firmware; and (c) iot devices that are potentially compromised. in one aspect, described herein is an apparatus and process that help iot manufacturers generate and verify mud profiles, taking as input a network packet trace representing the operational behaviour of an iot device, and generating as output a mud profile for it. in another aspect, also described herein is an apparatus and process for monitoring network behaviour of iot devices, using observed traffic traces and known mud signatures to dynamically identify iot devices and monitor their behavioural changes within a network. as shown in figure 1, a communications network includes one or more interconnected network switches 102 and a gateway 104 that provides access to a wide area network 106 such as the internet. the switches 102 provide wired and wireless access to the network for network devices, including iot devices 110 and non-iot devices 112. the non-iot devices 112 typically include computing devices such as desktop and portable general purpose computers, tablet computers, smart phones and the like. 
in accordance with the described embodiments of the present invention, the communications network also includes an apparatus 200 for monitoring network behaviour of iot devices (also referred to herein as the "iot monitoring apparatus" 200), as shown in figure 2, that executes a process 300 for monitoring network behaviour of iot devices (also referred to herein as the "iot monitoring process" 300), as shown in figure 3, to dynamically identify network devices as being instances of known iot device types, and to monitor the network behaviour of these devices to detect any changes in network behaviour that may be indicative of an attack. in the described embodiments, the switches 102 are openflow switches under control of an sdn controller of the apparatus 200. however, it will be apparent to those skilled in the art that other embodiments of the present invention may be implemented using other types of network switches to identify and quantify network traffic flows of networked devices. as shown in figure 2, in the described embodiments the iot monitoring process 300 is implemented in the form of executable instructions of software components or modules 202 stored on a non-volatile storage medium 204 such as a solid-state memory drive (ssd) or hard disk drive (hdd). however, it will be apparent to those skilled in the art that at least parts of the process 300 can alternatively be implemented in other forms, for example as configuration data of a field-programmable gate array (fpga), and/or as one or more dedicated hardware components, such as application-specific integrated circuits (asics), or any combination of these forms. in the described embodiment, the iot monitoring process components 202 include a network traffic monitor, an iot device identifier, and an anomaly detector. the iot monitoring apparatus 200 includes random access memory (ram) 206, at least one processor 208, and external interfaces 210, 212, 214, all interconnected by at least one bus 216.
the external interfaces include a network interface connector (nic) 212 which connects the apparatus 200 to the network switches 102, and may include universal serial bus (usb) interfaces 210, at least one of which may be connected to a keyboard 218 and a pointing device such as a mouse 219, and a display adapter 214, which may be connected to a display device such as a panel display 222. the iot monitoring apparatus 200 also includes an operating system 224 such as linux or microsoft windows, and an sdn or 'flow rule' controller such as the ryu framework, available from http://osrg.github.io/ryu/. although the network device management components 202 and the flow rule controller are shown as being hosted on a single operating system 224 and hardware platform, it will be apparent to those skilled in the art that in other embodiments the flow rule controller may be hosted on a separate virtual machine or hardware platform with a separate operating system. i - mud profile generation the inventors have developed an apparatus or 'tool' named "mudgee" to automatically generate a mud profile for an iot device from its network traffic trace in order to make the creation of mud profiles faster, cheaper and more accurate. a valid mud profile contains a root object in the form of an "access-lists" container which includes several access control entries (aces) serialized in json (javascript object notation) format. the access-lists are explicit in describing the direction of communication, from-device and to-device. each ace matches traffic on source/destination port numbers for tcp/udp, and type and code for icmp. the mud specifications also distinguish local-networks traffic from internet communications. in one example described further below, traffic flows of each iot device were captured over a six month observation period, and the set of collected flows were then processed to automatically generate mud rules.
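to make the ace structure described above concrete, the following sketch builds a heavily simplified mud profile as a python dictionary and serializes it to json. the field names are simplified placeholders rather than the exact yang-modelled names of the mud specification, and the endpoint and port values are taken from the blipcare example discussed below:

```python
import json

# a heavily simplified sketch of a mud profile: an "access-lists" container
# holding access control entries (aces), with the direction of communication
# made explicit ("from-device" / "to-device").  field names are simplified
# placeholders, not the exact yang-modelled names of the mud specification.
mud_profile = {
    "access-lists": {
        "from-device": [
            {
                # allow the device to upload to its cloud server on tcp 8777
                "dst-dnsname": "tech.carematix.com",
                "protocol": 6,        # tcp
                "dst-port": 8777,
            }
        ],
        "to-device": [
            {
                # allow the corresponding return traffic from the server
                "src-dnsname": "tech.carematix.com",
                "protocol": 6,
                "src-port": 8777,
            }
        ],
    }
}

serialized = json.dumps(mud_profile, indent=2)   # json serialization
restored = json.loads(serialized)                 # round-trips losslessly
```

a real mud file expresses the same information through the ietf access-control-list yang model; this sketch only illustrates the whitelist-of-aces shape of the profile.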
the rules reflect an application whitelisting model (i.e., only 'allow' rules with a default action of 'drop'). having a combination of 'accept' and 'drop' rules requires a notion of rule priority (i.e., order), and is not supported by the current ietf mud draft. for example, table 1 below summarises the traffic flows observed for a blipcare blood pressure monitor, which only generates traffic whenever it is used. the blipcare blood pressure monitor first resolves its intended server at tech.carematix.com by exchanging a dns query/response with the default gateway (i.e., the top two flows). it then uploads its measurement to its server operating on tcp port 8777 (described by the bottom two rules).

table 1 - flow rules generated from a mud profile of a blipcare blood pressure monitor iot device

source             | destination        | proto | sport | dport
*                  | gateway            | 17    | *     | 53
gateway            | *                  | 17    | 53    | *
*                  | tech.carematix.com | 6     | *     | 8777
tech.carematix.com | *                  | 6     | 8777  | *

mudgee architecture mudgee implements a programmable virtual switch (vswitch) with a packet header inspection engine attached. it plays an input pcap trace (of an arbitrary iot device) into the switch. mudgee includes: (i) a flow rule generator that captures and tracks all tcp/udp flows to/from each device to generate corresponding flow rules, and (ii) a mud profile generator that generates a mud profile from the flow rules. network traffic flow capture consumer iot devices use services provided by remote servers in the cloud, and also expose services to local hosts (e.g., a mobile app). the flow rule generator tracks (intended) device activities using separate flow rules for remote and local communications. it is challenging to capture services (especially those operating on non-standard tcp/udp ports) that a device is either accessing or exposing. this is because local/remote services operate on static port numbers, whereas source port numbers are dynamic (and chosen randomly) for different flows of the same service.
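the whitelisting model above (only 'allow' rules, with a default action of 'drop') can be sketched as a simple matcher over rules of the kind shown in table 1. the packet tuples below are hypothetical, and '*' is treated as a wild card that also absorbs the dynamically chosen source ports just mentioned:

```python
# rules as (source, destination, proto, sport, dport) tuples, following the
# blipcare example; '*' is a wild card matching any value.
RULES = [
    ("*", "gateway", 17, "*", 53),                 # dns query to the gateway
    ("gateway", "*", 17, 53, "*"),                 # dns response
    ("*", "tech.carematix.com", 6, "*", 8777),     # upload to the server
    ("tech.carematix.com", "*", 6, 8777, "*"),     # the server's replies
]

def matches(rule, packet):
    """true if every rule field equals the packet field or is a wild card."""
    return all(r == "*" or r == p for r, p in zip(rule, packet))

def action(packet):
    """application whitelisting: 'allow' on any rule match, else default 'drop'."""
    return "allow" if any(matches(rule, packet) for rule in RULES) else "drop"

# the device uploads from a random source port; the wild card absorbs it.
print(action(("device", "tech.carematix.com", 6, 49731, 8777)))  # allow
# telnet to the device is not whitelisted, so it is dropped by default.
print(action(("attacker", "device", 6, 50000, 23)))              # drop
```

because there are only 'allow' rules and a single default action, no rule-priority ordering is needed, which is exactly the property the mud draft relies on.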
it is trivial to deduce the service for tcp flows by inspecting the syn flag, but not so easy for udp flows. figure 4 is a flow diagram of a flow capture process executed by the flow rule generator to capture bidirectional traffic flows of an iot device. the vswitch is first configured with a set of proactive rules, each with a specific action ("forward" or "mirror") and a priority, as shown in table 2 below.

table 2 - initial proactive rules

proactive rules with a 'mirror' action feed the header inspection engine with a copy of the matched packets. the flow capture process of figure 4 inserts a corresponding reactive rule into the vswitch. the flow capture process matches a dns reply packet to a top priority flow, and extracts and stores the domain name and its associated ip address into a dns cache table. this dns cache is dynamically updated upon arrival of a dns reply matching an existing request. the mud specification also requires the segregation of traffic to and from a device for both local and internet communications. hence the flow capture process assigns a unique priority to the reactive rules associated with each of the groups: from-local, to-local, from-internet and to-internet. a specific priority is used for flows that contain a tcp syn to identify whether the iot device or the remote entity initiated the communication. flow translation to mud the mud profile generator processes the flow rules generated by analysing the traffic flows to generate a corresponding mud profile for each device based on the considerations below. consideration 1: perform a reverse lookup of the ip address of the remote endpoint and identify the associated domain name (if any), using the dns cache. consideration 2: some consumer iots, especially ip cameras, typically use the stun protocol to verify that the user's mobile app can stream video directly from the camera over the internet.
if a device uses the stun protocol over udp, the profile must allow all udp traffic to/from internet servers, because the stun servers often require the client device to connect to different ip addresses or port numbers. consideration 3: it is observed that several smart ip cameras communicate with many remote servers operating on the same port (e.g., the belkin wemo switch). however, no dns responses were found corresponding to the server ip addresses. so, the device must obtain the ip address of its servers via a non-standard channel (e.g., the current server may instruct the device with the ip address of the subsequent server). if a device communicates with several remote ip addresses (in the described embodiment, more than a threshold value of five), all operating on the same port, then remote traffic to/from any ip addresses (i.e., *) is allowed on that specific port number. consideration 4: some devices (e.g., the tp-link plug) use the default gateway as the dns resolver, and others (e.g., the belkin wemo motion) continuously ping the default gateway. the draft mud specification maps local communication to fixed ip addresses through the controller construct. the local gateway is considered to act as the controller, and the name-space urn:ietf:params:mud:gateway is used for the gateway. in this way, mud profiles were generated for the 28 consumer iot devices listed in table 3 below. table 3 - list of 28 iot devices for which mud profiles were automatically generated. devices with purely static functionality are shown in bold. devices with static functionality that is loosely defined (e.g., due to the use of the stun protocol) are italicised. devices with complex and dynamic functionality are underlined. insights and challenges the blipcare bp monitor is an example of an iot device with static functionalities. it exchanges dns queries/responses with the local gateway, and communicates with a single domain name over tcp port 8777.
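the threshold heuristic of consideration 3 can be sketched as follows. the flow records below are hypothetical, and the threshold of five matches the described embodiment:

```python
from collections import defaultdict

THRESHOLD = 5  # distinct remote ips on one port, as in the described embodiment

def aggregate_rules(flows):
    """flows: iterable of (remote_ip, port) pairs with no known dns name.
    collapses to a wild-carded endpoint ('*') when more than THRESHOLD
    distinct ips communicate on the same port; otherwise keeps each ip."""
    ips_by_port = defaultdict(set)
    for ip, port in flows:
        ips_by_port[port].add(ip)
    rules = []
    for port, ips in sorted(ips_by_port.items()):
        if len(ips) > THRESHOLD:
            rules.append(("*", port))          # any remote endpoint on this port
        else:
            rules.extend((ip, port) for ip in sorted(ips))
    return rules

# six distinct servers on port 3478 exceed the threshold and are wild-carded;
# a single server on port 443 is kept as an explicit endpoint.
flows = [(f"10.0.0.{i}", 3478) for i in range(6)] + [("93.184.216.34", 443)]
print(aggregate_rules(flows))  # [('93.184.216.34', 443), ('*', 3478)]
```

the wild card trades precision for stability: the resulting rule remains valid even as the device is redirected among servers it learned about out-of-band.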
consequently, its behaviour can be locked down to a limited set of static flow rules. the majority of iot devices that were tested (i.e., 22 out of 28) fall into this category (listed in a bold typeface in table 3). figures 5 and 6 are sankey diagrams representing mud profiles in a human-friendly way. the second category of generated mud profiles is exemplified by figure 5. this sankey diagram shows how the tp-link camera accesses/exposes limited ports on the local network. the camera gets its dns queries resolved, discovers the local network using mdns over udp 5353, probes members of certain multicast groups using igmp, and exposes two tcp ports 80 (management console) and 8080 (unicast video streaming) to local devices. all these activities can be defined by a tight set of acls. but, over the internet, the camera communicates with its stun server, accessing an arbitrary range of ip addresses and port numbers shown by the second top flow. due to this communication, the functionality of this device can only be loosely defined. devices that fall into this category (i.e., due to the use of the stun protocol) are marked in italics in table 3. the functionality of these devices can be more tightly defined if the manufacturers of these devices configure their stun servers to operate on a specific set of endpoints and port numbers, instead of a broad and arbitrary range. the amazon echo is an example of an iot device with complex and dynamic functionality, augmentable using custom recipes or skills. such devices (underlined in table 3) can communicate with a growing range of endpoints on the internet, which the original manufacturer cannot define in advance. for example, in the testbed the amazon echo interacts with the hue lightbulb by communicating with meethue.com over tcp 443. it also contacts the news website abc.net.au when prompted by a user.
for these types of devices, the biggest challenge is how manufacturers can dynamically update their mud profiles to match the device capabilities. however, even the initial mud profile itself can help establish a minimum network-communication permissions set that can be updated over time. ii - checking run-time profiles of iot devices in a second aspect, the network behaviours of iot devices are tracked at run-time, mapping the behaviour of each device to one of a set of known mud profiles. this is needed for managing legacy iots that do not have support for the mud standard. to do so, a behavioural profile is automatically generated and updated at run-time (in the form of a tree) for an iot device, and a quantitative measure of its "similarity" to each of the known static mud profiles (e.g., provided by manufacturers) is calculated. it is noted that computing the similarity between two such profiles is a non-trivial task. profile structure a device profile has two main components, namely "internet" and "local" communication channels, as shown by the shaded areas in figures 12 and 13. each profile is organized into a tree-like structure containing a set of nodes with categorical attributes (i.e., end-point, protocol, port number over internet/local channels) connected through edges. following the root node in each tree, there are nodes representing the channel/direction of communication, endpoints with which the device communicates, and the flow characteristics (i.e., the leaf node). the run-time profile of a device (given a set of known mud profiles) is generated using a method similar to that described above, with minor modifications, as described below. the mudgee tool tracks the traffic volume exchanged in each direction of udp flows, distinguishing the server and the client. however, this would lead to a high consumption of memory for generating run-time profiles.
therefore, given a udp flow, all known mud profiles are searched for an overlapping region on either the iot side or the remote side (similar to the concept illustrated in figures 9 to 11). if an overlapping region is found, then the tree structure is updated with the intersecting port ranges— this can be seen in figures 12 and 13, where the leaf node shown in bold-and-italic text has been changed according to known mud profiles. if no overlap is found with the mud profiles, then the udp flow is split into two leaf nodes: two flows matching the udp source port (with a wild-carded destination) and the udp destination port (with a wild-carded source) separately. this helps to identify the server side by a subsequent packet matching either of these two flows. this ensures that the tree structure becomes bounded. in addition, there is an upper bound for the maximum number of nodes that can be in any branch of the tree, and this is used to protect the tree from becoming unbounded during attacks. the run-time profile of a device is generated through packet inspection. initially, dns packets are monitored to identify the corresponding dns bindings. then, the first packet of a flow in a specific direction is inspected. if the inspected packet is from a tcp flow and also contains either a syn or a syn-ack field, then a leaf node is inserted with entries for ethtype, proto and the server-side port identified through the tcp flags, whereas for udp packets all 4 entries are added to the leaf node. meanwhile, as the tree structure is being generated, its growth is also iteratively (every 15 mins in the described embodiment) limited by 'compacting' (i.e., combining) its branches, based on the intersections between the run-time profile and all known mud profiles. metrics the run-time and mud profiles are denoted respectively by sets r and mi, as shown in figure 14. each element of these two sets is represented by a branch of the tree structure shown in figures 12 and 13.
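the udp-flow handling above can be sketched as follows: if a new udp flow overlaps a port-range rule in some known mud profile, the intersecting range is kept; otherwise the flow is split into two wild-carded leaf nodes so that the server side can be identified by a later packet. the rule representation here is a simplified assumption for illustration only.

```python
# illustrative sketch of the udp-flow splitting logic (rule format assumed)

def intersect(range_a, range_b):
    """intersection of two inclusive port ranges, or None if disjoint."""
    lo, hi = max(range_a[0], range_b[0]), min(range_a[1], range_b[1])
    return (lo, hi) if lo <= hi else None

def udp_leaves(src_port, dst_port, mud_ranges):
    # mud_ranges: list of (side, (lo, hi)) rules from known mud profiles,
    # where side is "src" (iot side) or "dst" (remote side)
    for side, rng in mud_ranges:
        port = src_port if side == "src" else dst_port
        hit = intersect((port, port), rng)
        if hit:
            return [(side, hit)]        # keep the intersecting range
    # no overlap with any known mud profile: split into two
    # wild-carded flows, one per direction-specific port
    return [("src", (src_port, src_port)), ("dst", (dst_port, dst_port))]
```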
for a given iot device, the similarity of its r with a number of known mi's is calculated. there are a number of metrics for measuring the similarity of two sets. for example, the jaccard index has been used for comparing two sets of categorical values, and is defined by the ratio of the size of the intersection of two sets to the size of their union, i.e., |r ∩ mi| / |r ∪ mi|. inspired by the jaccard index, in the described apparatus and process, the following two metrics are calculated: » dynamic similarity score: d(r, mi) = |r ∩ mi| / |r| » static similarity score: s(r, mi) = |r ∩ mi| / |mi| these two metrics collectively represent the jaccard index. each of these metrics takes a value between 0 (i.e., dissimilar) and 1 (i.e., identical). similarity scores are computed every epoch time (e.g., 15 minutes). when computing |r ∩ mi|, redundant branches of the run-time profile are temporarily removed based on the mud profile that it is being checked against. this ensures that duplicate elements are pruned from r when checking against each mi. when calculating |r ∩ mi|, both r and mi must be free of redundancies to avoid duplicate elements in a set. removing redundant nodes from mi is straightforward— the redundancies can be removed from the tree structure by not having any leaf nodes inclusive to nodes from the same endpoint or with the wild-card endpoint from the same direction. r's redundant structure depends on mi. for example, if r contains communication to ports 8000 and 8002 of the internet server "abc.com", and if mi contains a rule with a port-number range from 8000 to 10000 and a wild-carded endpoint (i.e., *), then both flows from r can be captured by a single rule. now assume another mud profile, say m2, contains the two rules of r; then r contains no redundancies with respect to m2. therefore, before calculating similarities, it is important to remove the redundancies based on the structure of each mi. the resulting set is denoted r_mi.
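the two similarity metrics above map directly onto set operations. the sketch below writes them out for sets of profile branches; the element representation is arbitrary, but the formulas follow the definitions in the text.

```python
# dynamic and static similarity scores over sets of profile branches

def dynamic_similarity(r, mi):
    """|r ∩ mi| / |r| — fraction of observed run-time behaviour that is
    covered by the mud profile (1 means r is a subset of mi)."""
    return len(r & mi) / len(r) if r else 0.0

def static_similarity(r, mi):
    """|r ∩ mi| / |mi| — fraction of the mud profile observed so far
    at run-time (grows toward 1 as more device traffic is seen)."""
    return len(r & mi) / len(mi) if mi else 0.0
```

both scores lie in [0, 1], and multiplying them together recovers a jaccard-style measure, since |r ∪ mi| = |r| + |mi| − |r ∩ mi|.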
the run-time profile grows over time by accumulating nodes (and edges), as shown in figures 12 and 13, for example. it is seen in figure 12 that the run-time profile of a tp-link power plug consists of 8 elements (i.e., edges), 30 minutes after commencement of this profile generation. as shown in figure 13, the element count of the profile reaches 15 when more traffic (an additional 450 minutes) of the device is considered. at the end of each epoch, a device (or a group of devices) will be chosen as the "winner" that has the maximum similarity score with the iot device whose run-time profile is being checked. it is expected to have a group of winner devices when the dynamic similarity is considered, especially when only a small subset of the device behavioural profile has been observed— the number of winners will reduce as the run-time profile grows over time. figures 15 to 17 are graphs of the winner similarity scores as a function of time for selected iot devices, including the awair air quality sensor, the lifx bulb, the wemo switch, and the amazon echo. in these plots, the winner is correctly identified for all of these four iots. figure 15 shows that the static similarity score grows slowly over time, and in a non-decreasing fashion. the convergence time depends on the complexity of the device behavioural profile. for example, the static similarity of the awair air quality and lifx bulb devices converges to 1 (i.e., full score) within 1000 minutes. but for the amazon echo, it takes more time to gradually discover all flows, ultimately converging to the full score in about 12 days. also, there are iot devices for which the static similarity might never converge to 1. for example, the wemo switch and wemo motion devices use a list of hard-coded ip addresses (instead of domain names) for their ntp communications. these ip addresses, however, do not serve the ntp service anymore, and consequently no ntp reply flow is captured.
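the per-epoch winner selection described above (all mud profiles tied at the maximum score are winners, so several winners are expected early on) can be sketched as follows; the scoring function is passed in so either similarity metric can be used. names are illustrative.

```python
# sketch of per-epoch winner selection: score every known mud profile
# against the run-time profile and return all profiles tied at the
# maximum score (ties are expected early, before enough traffic is seen).

def winners(runtime, mud_profiles, score):
    # mud_profiles: {device_name: set_of_branches}
    scores = {name: score(runtime, mi) for name, mi in mud_profiles.items()}
    best = max(scores.values())
    return sorted(name for name, s in scores.items() if s == best)
```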
similarly, it was observed that the tplink plug uses the "sib.time.edu.cn" address for ntp communications, and this domain name also seems to be no longer operational. in addition, devices such as the august doorbell and dropcam contact public dns resolvers (e.g., 8.8.4.4) if the local gateway fails to respond to a dns query of the iot device, meaning that this specific flow will only be captured if there is an internet outage. on the other hand, in figure 16 the dynamic similarity score grows quickly (it may even reach a value of 1, meaning r ⊆ mi). it may stay at 1 if no variation is observed. the awair air quality sensor is an example of such behaviour, as shown by dashed black lines in figure 16 — 19 out of 28 iot devices in the testbed were found to behave similarly to the awair air quality sensor in their dynamic similarity score. in some other cases, this score may slightly fall and rise again. note that a fluctuating dynamic similarity never reaches 1, due to missing elements (i.e., variations). missing elements can arise for various reasons, including: (a) the mud profile is unknown or not well-defined by the manufacturer, (b) the device firmware is old and not up-to-date, and (c) the iot device is compromised or under attack. during testing, the inventors found that 9 of their lab iots had slight variations, for two reasons. firstly, responding to discovery requests in local communications, if they support the ssdp protocol — these responses cannot be tightly specified by the manufacturer in the mud profile, since such flows depend on the environment in which the iot device is deployed. the wemo switch is an example of this group, as shown by dashed-dotted lines in figure 16. to address this issue, all discovery communications were used to generate a separate profile (shown in figure 18) by inspecting ssdp packets exchanged over the local network.
the ssdp server port number on the device can change dynamically, thus inspection of the first packet in a new ssdp flow is required. the second reason is that missing dns packets lead to the emergence of a branch in the profile with an ip address as the end-point instead of a domain name. this rarely occurs in the testbed network: every midnight the process starts storing traffic traces into a new pcap file, and a few packets can be lost during this transition to a new pcap file. missing a dns packet was observed for the lifx bulb, as shown by dotted lines in figure 16. in view of the above, ssdp activity is excluded from local communications of iot devices to obtain a clear run-time profile. as shown in figure 17, without ssdp activity, the dynamic similarity score is able to correctly identify the correct winner for the wemo switch within a very short time interval. lastly, it is important to note that similarity scores (both static and dynamic) can be computed at an aggregate level (i.e., local and internet combined), or for individual channels, meaning one score for the local and one for the internet channel. the two scores might not converge in some cases, where the local channel similarity chooses a winner while the internet channel similarity finds a different winner device. per-channel similarity never results in a wrong winner, though it may result in no winner. however, the aggregate similarity may end up identifying an incorrect winner, especially when the local activity becomes dominant in the behavioural profile. this is because many iots have a significant profile overlap in their local communications (e.g., dhcp, arp, or ssdp). therefore, the per-channel similarity is checked first. if the two channels disagree, the process switches to aggregate similarity to identify the winner.
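the channel-handling decision above can be sketched as a small function: per-channel winner lists are compared first, and only on disagreement does the process fall back to the aggregate-level winners. treating "agreement" as a non-empty intersection of the two channel winner lists is an assumption made for this sketch.

```python
# sketch of per-channel-first winner selection with aggregate fallback

def pick_winner(local_winners, internet_winners, aggregate_winners):
    """channels agree -> use their common winners; otherwise fall back
    to the aggregate (local + internet combined) similarity winners."""
    common = set(local_winners) & set(internet_winners)
    if common:
        return sorted(common)          # channels agree
    return sorted(aggregate_winners)   # disagreement: use aggregate
```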
identifying iot devices at run-time packet traces (i.e., pcap files) were collected from the inventors' testbed network, including a gateway (a tp-link archer c7 flashed with openwrt firmware) that serves a number of iot devices. the tcpdump tool was used to capture and store all network traffic (local and internet) on usb storage connected to the gateway. the resulting traffic traces span three months starting from may 2018, containing traffic corresponding to the iot devices listed in table 3 (excluding the withings baby monitor). the mudgee tool was used to generate mud profiles for the iot devices in the testbed. as explained above, the dynamic similarity score converges faster than the static similarity score. the device identification process begins by tracking dynamic similarity at the channel level, and continues as long as the channels still agree (i.e., they both choose the same winner). depending on the diversity of observed traffic to/from the iot device (local versus internet), there can be multiple winners at the beginning of the process. in this case, the static similarity is fairly low, since only a small fraction of the expected profile is likely to be captured in a short time interval. this means that the process needs to see additional traffic from the device before it concludes. figure 19 shows the time evolution of the winners count and static similarity, averaged across all 27 iot devices in the testbed. focusing on the solid blue line (left y-axis), there were up to 6 winners on average at the beginning of the identification process. the winners count gradually comes down (in less than three hours) to a single winner, and stabilizes. even with a single winner, the static similarity, shown by dashed black lines (right y-axis), needs about ten hours on average to exceed a score of 0.8. note that the similarity may take a very long time to reach the full score of 1 (sometimes, it may never reach the full score, as explained above).
it is up to the operator to choose an appropriate threshold at which this process concludes— a higher threshold increases the confidence level of the device identification, but it comes at the cost of a longer convergence time. thus the dynamic similarity (starting with channel-level similarity, and possibly switching to aggregate level) is used to identify the winner iot at run-time. the static similarity, on the other hand, is used to track the confidence level— an indication of safe convergence if a dynamic similarity of full score is not reached. to evaluate the efficacy of iot device identification at run-time, the traces collected in 2018 (i.e., data 2018) were replayed into the packet simulator tool. figure 20 is a confusion matrix of the results, where the rows are true labels, the columns are the predicted labels, and the cell values are percentages. for example, the first row shows that the amazon echo is always predicted as the sole winner in each and every epoch of the identification process, thus 100% in the first column and 0% in the remaining columns— no other device is identified as the winner in any single epoch time. looking at the dropcam row, it is identified as multiple devices (i.e., more than one winner) for some epochs— non-zero values are seen against all columns. but, it is important to note that dropcam is always one of the winners, thus 100% against the dropcam column. further, it is also identified, for example, as the amazon echo in 0.4% of epochs. a 100% correct convergence was observed for all devices except the netatmo camera, which is not correctly identified in 2.3% of epochs. this mis-identification occurs due to missing dns packets, whereby some flows were incorrectly matched against stun-related flows (with wild-carded endpoints) of the samsung camera and the tp-link camera. however, this mis-identification occurred only during the first few epochs, after which the process converged to the correct winner.
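the convergence test described above (dynamic similarity picks the winner, static similarity acts as an operator-thresholded confidence level) reduces to a simple predicate. the default threshold of 0.8 below echoes the ten-hour figure from the text but is otherwise an illustrative operator choice.

```python
# sketch of the conclusion test for the identification process:
# conclude only when a single winner remains and the static-similarity
# confidence has exceeded the operator-chosen threshold.

def identification_concluded(winner_list, static_score, threshold=0.8):
    return len(winner_list) == 1 and static_score >= threshold
```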
monitoring behavioural changes of iots in a real environment, there are several challenges to correctly identifying an iot device at run-time: (a) there might be a device on the network for which no mud profile is known, (b) the device firmware might not be up-to-date (thus, the run-time profile would deviate from its intended known mud profile), and/or (c) the device might be under attack or even fully compromised. each of these three challenges and their impact on the similarity score (both dynamic and static) are discussed below. figure 21 depicts a simplified scatter plot of dynamic similarity versus static similarity, highlighting how these two metrics are interpreted. on the plot, states are labelled as 1, 2, 3, and 4. the ideal region is the quadrant highlighted for state-1, whereby both dynamic and static scores are high, and there is a single and correctly identified winner. considering state-2 in this figure, there is a high score of dynamic similarity, whereas the static similarity is fairly low. this score combination is typically expected when a small amount of traffic from the device is observed, and more traffic is needed to determine whether the dynamic similarity continues to maintain a high score and the static similarity possibly starts rising. in state-3, having a low dynamic similarity is alarming, given the high score in the static similarity— indicating high variations at run time. this score combination is observed when many flows observed in the device traffic are not listed in the intended mud profile, for two possible reasons: (a) the device firmware is not current, or (b) the device is under attack (or even compromised). lastly, having low scores in both dynamic and static similarity metrics highlights a significant difference (or small overlap) between the run-time and mud profiles. this scenario likely results in identification of an incorrect winner.
to summarize, iot network operators may need to set threshold values for both dynamic and static similarity scores to select the winner device. also, the identification process needs to begin with the channel-level similarity (for both dynamic and static metrics), avoiding a biased interpretation, and may switch to the aggregate level in the absence of convergence. the impact of three scenarios affecting iot behavioural changes is described below. unknown mud profile to investigate this scenario, the mud profile of each device was removed from the list of known muds. figure 22 shows the partial results for selected devices. unsurprisingly, devices on the rows are identified as others (i.e., one or multiple wrong winners are selected), since their intended mud profile is not present when checked at run-time. for example, the amazon echo converges to identification as a tp-link camera, and the awair air quality sensor is consistently identified as six other iot devices. ideally, there should not be any device identified as the winner. note that these results are obtained while no thresholding is applied to the similarity scores, and only the maximum score indicates the winner. figures 23 and 24 are scatter plots of channel-level scores for the dynamic and static similarity metrics, respectively. the 2018 dataset was used to generate two sets of results: one with mud profiles of the devices (shown by blue cross markers), and the other without their mud profiles (shown by red circle markers), across all 27 iot devices. for the dynamic similarity in figure 23, having two thresholds (i.e., about 0.60 on the internet channel and 0.75 on the local channel) would filter out incorrect instances. for the static similarity in figure 24, a threshold of 0.50 on the internet channel is sufficient to avoid incorrect identifications.
this single threshold suffices because the iot profile on the internet channel varies significantly for consumer devices (in the testbed setup), but enterprise iots may tend to be active on the local network — thus a different thresholding is generally required for each network. it is important to note that a high threshold increases the identification time, while a low threshold accelerates the process but may lead to identification of a wrong winner. it is therefore up to the network operator to set appropriate threshold values. one conservative approach would be to accept no variation in the dynamic similarity, requiring a full score of 1 along with a static similarity score of more than 0.50 for each of the local and internet channels. for example, the results were regenerated by setting the conservative thresholds mentioned above, and no winner was identified, due to low scores in both dynamic and static similarity metrics, as shown by the state-4 quadrant in figure 21. this indicates that iot devices, in the absence of their mud profiles, are consistently found in state-4, flagging possible issues. old firmware iot devices either upgrade their firmware automatically by directly communicating with a cloud server, or may require the user to confirm the upgrade (e.g., the wemo switch) via an app. for the latter, devices will remain behind the latest firmware until the user manually updates them. to illustrate the impact of old firmware, packet traces collected from the testbed over a duration of six months starting in october 2016 were used to generate run-time profiles against mud profiles generated from data 2018. table 4 below shows the results from data 2016. the column labeled "profile changed" indicates whether any changes in device behaviour were observed (i.e., verified manually) in the data 2016 dataset, compared to data 2018. these behavioural changes include endpoints and/or port numbers.
for example, the tp-link camera communicates with a server endpoint "devs.tplinkcloud.com" on tcp 50443 according to the data 2016. however, this camera communicates with the same endpoint on tcp 443 in the data 2018. additionally, in the data 2018 dataset, an endpoint "ipcserv.tplinkcloud.com" is observed, which did not exist in the data 2016. table 4 - run-time identification results using the "data 2016" dataset as described above. the "convergence" column in table 4 shows the performance of the device identification process (converging to a single winner) without thresholding, for two scenarios, namely known mud (i.e., the mud profile of the device is present) and unknown mud (i.e., the mud profile of the device is missing). when mud profiles of devices are known (i.e., present), all devices except the wemo switch converge to the correct winner. surprisingly, the wemo switch is consistently identified as the wemo motion— even the static similarity increases to 0.96. this is because both the wemo motion and the wemo switch share the same cloud-based endpoint for their internet communications in data 2016, but these endpoints have changed for the wemo switch (though not for the wemo motion) in data 2018. it is important to note that the primary objective is to secure iot devices by enforcing tight access-control rules in network elements. therefore, the wemo switch can be protected by the rules of the wemo motion until it is updated to the latest firmware. once the wemo switch is updated, the intrusion detection process may generate false alarms, indicating the need for re-identification. as discussed above, a threshold is required to improve the identification process, discovering unknown devices or problematic states. therefore, thresholds determined using the data 2018 were applied, and the results are shown in the column labeled "convergence with threshold" in table 4.
devices that did not have behavioural changes (from 2016 to 2018) converge correctly and appear in perfect state-1. looking into other devices, for example the amazon echo, only 65.7% of instances are correctly identified— it took a while for the identification process to meet the expected thresholds set for the similarity scores. it is observed that devices with profile changes are found in state-3 or state-4. in order to better understand the reason for a low score in dynamic similarity, the profile difference can be visualized in the form of a tree structure. for example, this difference (i.e., r − m) is shown in figure 25 for the ihome power plug iot device. it can be seen that this device (in data 2016) communicates over http with "api.evrythng.com", and serves http to the local network. however, these communications do not exist in the mud profile for the device (generated from data 2018). this difference may indicate to a network operator that a firmware upgrade is needed or that the mud profile (offered by the manufacturer) is not complete. some devices (e.g., the hp printer and the hue bulb) may be found consistently in state-4 throughout the identification process. structural variations in the profile can arise largely due to changes in the endpoints or port numbers. tracking changes in port numbers is non-trivial. however, for endpoints, fully qualified domain names can be compacted to primary domain names (i.e., removing sub-domain names). if the device is under attack or compromised, it likely communicates with a completely new primary domain. figure 26 illustrates endpoint compaction in an hp printer profile just for the "to internet" channel direction. for this channel direction and without endpoint compaction, the static and dynamic similarity scores are 0.28 and 0.25, respectively. applying endpoint compaction results in high scores of 1 and 0.83 for the static and dynamic similarities, respectively.
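endpoint compaction as described above reduces a fully qualified domain name to its primary domain by dropping sub-domain labels. the sketch below keeps the last two labels, which is a simplification: a faithful implementation would consult a public-suffix list so that, e.g., country-code second-level domains are not over-compacted.

```python
# sketch of endpoint compaction: drop sub-domain labels, keep the
# primary domain. keeping the last two labels is a simplification
# (a public-suffix list would be needed for full correctness).

def compact_endpoint(fqdn):
    labels = fqdn.rstrip(".").split(".")
    return ".".join(labels[-2:]) if len(labels) > 2 else fqdn

def compact_profile(endpoints):
    """compact every endpoint in a profile; distinct sub-domains of the
    same primary domain collapse into one node."""
    return {compact_endpoint(e) for e in endpoints}
```

after compaction, two profiles that differ only in sub-domain names (e.g., a firmware change moving a device from one cloud sub-domain to another) intersect again, which is why the similarity scores recover as described in the text.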
endpoint compaction was applied to all of the iot devices in the data 2016 dataset, and the results are shown under the column labelled "endpoint compacted" in table 4. interestingly, this technique has significantly enhanced the identification: all state-4 devices become state-1 devices. an interesting observation here is the unknown mud scenario for the wemo motion detector, where the rate of incorrect identification (as the wemo switch) is fairly high, at 27.3%. however, it is not at all surprising to see different iot devices from the same manufacturer identified as each other when compacting endpoints. to summarize, if the identification process does not converge (or evolves very slowly), then the difference visualization and endpoint compaction described above enable network operators to discover iot devices running old firmware. attacked or compromised device the efficacy of the process when iot devices are under direct/reflection attacks or compromised by a botnet was also evaluated, using traffic traces collected from the testbed in november 2017 ("data 2017"), and including a number of volumetric attacks spanning reflection-and-amplification (snmp, ssdp, tcp syn, and smurf), flooding (tcp syn, fraggle, and ping of death), arp spoofing, and port scanning, launched on four iot devices, namely the belkin netcam, the wemo motion sensor, the samsung smart-cam and the wemo switch (listed in table 5 below). these attacks were sourced from within the local network and from the internet. for the internet-sourced attacks, port forwarding was enabled (emulating malware behaviour) on the network gateway. since the iot devices in the testbed are all invulnerable to botnets, the inventors built a custom iot device named "senseme" using an arduino yun communicating with an open-source wso2 iot cloud platform. this device included a temperature sensor and a lightbulb.
the senseme device was configured to periodically publish the local temperature to the server, and its lightbulb was remotely controlled via the mqtt protocol. first the mud profile of this device was generated, and then it was deliberately infected by the mirai botnet. in order to avoid harming others on the internet, the injection module of the mirai code was disabled so that only its scanning module was used. a mirai-infected device scans random ip addresses on the internet to find open ports tcp 23 and tcp 2323 for telnet access. table 5 - list of attacks launched against iot devices (l: local, d: device, i: internet) the identification process with thresholding was applied to data 2017, and it identified all devices correctly, with high static similarity and low dynamic similarity (i.e., high variations). a partial confusion matrix of the identification is shown in figure 27. since the mud profile of senseme is fairly simple in terms of branch count, it quickly converges to the winner with a high static similarity score, whereas other devices require more time to converge. therefore, the success rate for identifying the senseme device is higher than for other devices. different attacks have different impacts on the run-time profiles of iot devices. for example, arp spoof and tcp syn attacks do not create a new branch in the tree structure of the device profile, and consequently no variation is captured. fraggle, icmp, smurf, ssdp, and snmp attacks result in only two additional flows, meaning a minor variation is captured. however, port scans (botnet included) cause a large variation, since an increasing number of endpoints emerge in the tree structure at run-time. for example, the mirai botnet scans 30 ip addresses per second, causing the dynamic similarity score to approach 0. figure 28 shows the profile difference (or variation) for the infected senseme device at run-time.
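the attack-detection effect described above can be illustrated numerically: branches of the run-time profile not covered by the mud profile (r − mi) are the "missing elements" that drive the dynamic similarity down, and a scan adds many such branches. the endpoint and branch values below are made up for illustration.

```python
# illustrative check: a port scan floods the run-time profile with new
# endpoint branches, so r - mi grows and the dynamic score collapses.

def variation(runtime, mud):
    """branches observed at run-time but absent from the mud profile."""
    return runtime - mud

def dynamic_score(runtime, mud):
    return len(runtime & mud) / len(runtime) if runtime else 0.0

# hypothetical profiles: one benign branch, plus 30 scanned addresses
# (mirai probes tcp 23 on random internet hosts)
mud = {("internet.to", "cloud.example", "tcp", 443)}
scan = {("internet.to", f"10.0.0.{i}", "tcp", 23) for i in range(30)}
runtime = mud | scan
```

with 30 scan branches against a single benign one, the dynamic score drops to 1/31 ≈ 0.03, matching the qualitative behaviour described for the infected senseme device.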
performance of monitoring profiles the performance of the process for real-time monitoring of iot behavioural profiles was quantified by four metrics, namely: convergence time, memory usage, inspected packets, and number of flows. convergence time: convergence time depends on user interaction with the device, the type of the device, and the similarity score thresholds. some devices do not communicate unless the user interacts with the device (e.g., the blipcare bp meter); devices like the awair air quality sensor and the wemo motion sensor do not require any user interaction; and devices such as cameras have many communication patterns, such as device to device, device to internet server, and remote communication. therefore convergence times will vary based on the types of devices in the deployment. table 6 below lists the iot devices and the times it took to converge to the correct device. all the devices in the 2018 dataset converged to the correct device within a day. one possible reason for this is that during the data collection, user interaction with the mobile application was programmed using a touch replay tool (i.e., turning on the hue lightbulb, checking the live camera view) on a samsung galaxy tab, and the user interaction was replayed every 6 hours. therefore a significant number of states of the device were captured due to these interactions, whereas with the 2017 dataset it took 2 days. the shaded cells for the 2016 dataset are the devices that converged due to endpoint compaction. other than the netatmo camera, all other devices only converged due to compaction. for the netatmo camera, it took 4410 minutes to converge when endpoint compaction was not applied; however, due to endpoint compaction it converged within 1650 minutes. the smart things, hue bulb and amazon echo iot devices took a considerable time to converge.
when the data was analysed, it was found that all 3 devices captured a few flows due to an interaction in the first few minutes, and then the profile was stale until close to the convergence time. three limits for the monitoring time were used, in chronological order: the first is a time limit for convergence with thresholding, then a time limit for convergence with compaction applied, and lastly a time limit to stop monitoring. table 6 - convergence times in minutes for each dataset system performance: in order to quantify the performance of the apparatus, the following four metrics were calculated: the average number of inspected packets, the average number of flows, the average number of nodes in the device profile tree, and the computation time for the compaction of the tree, redundancy removal and similarity score calculation. the average number of flows is an important metric for the operation of a hardware switch with limited tcam capacity, and the other 3 metrics are relevant to the scalability of the process. as shown in table 6, the average number of flows for each device is typically fewer than 10, with the largest flow count of about 20 for the august doorbell. this range of flow counts is easily manageable in an enterprise network setting with switches that are capable of handling millions of flow entries. however, in home networks with routers that can accommodate up to hundreds of flows, it may be necessary to limit the iot monitoring process to only a few devices at a time, in order to manage the tcam constraint. regarding the number of packets inspected, it is clear that the iot monitoring process is very effective, keeping the number of inspected packets to a minimum (e.g., mostly less than 10 packets per minute for each device). the computing time of the process solely depends on the number of nodes and the number of known mud profiles.
the time complexity of the process can be expressed as o(n · m · log n), where n is the number of branches in the profile tree and m is the number of mud profiles being checked against. the time complexity of the search space was reduced by employing standard hashing and binary search tree techniques known to those skilled in the art. for a chromecast device, as an example, the average computing time is 5.20 ms, where there are on average 346 nodes in its run-time profile. this can be further improved by using parallelization, whereby similarity scores are computed over individual branches. it is important to note that the computing time is upper-bounded by setting an upper limit on the number of tree branches generated at run-time. lastly, in terms of space, 40 bytes of memory is required for each node of a tree. this means that for chromecast, on average, less than 14 kb of memory is needed. additionally, all known mud profiles are present in memory. therefore, the space complexity heavily depends on the number of mud profiles being checked. table 7 - performance metrics for the 2018 data set. many modifications will be apparent to those skilled in the art without departing from the scope of the present invention.
147-178-312-256-339
US
[ "WO", "US", "AU" ]
G06F/,G06F13/00,G06F15/173,G06F21/00,H04H60/45,H04N7/10,H04N7/14,H04N7/16,H04N7/173,G06F17/00,G06N5/02
2001-04-06T00:00:00
2001
[ "G06", "H04" ]
method and apparatus for identifying unique client users from user behavioral data
a method and system are provided for identifying a current user of a terminal device (14). the method includes providing a database containing multiple user input pattern profiles of prior user inputs to the terminal device (14). each of the possible users of the group are associated with at least one user pattern profile. current input patterns from the use of the terminal device (14) are detected, combined and then dynamically matched with one of the user input pattern profiles, and the possible user associated with the matched user input pattern is selected as the current user. the system for identifying a current user of a terminal (14) from a group of possible users includes a database containing multiple user input pattern profiles. the system detects current input patterns, then combines the patterns and dynamically matches the patterns with one of the user input pattern profiles. the system selects the possible user associated with the matched user input pattern profiles as the current user.
1. a method of identifying a current user of a terminal device from a group of possible users, comprising: providing a database containing a plurality of user input pattern profiles of prior user inputs to said terminal device, each of said possible users being associated with at least one of said user input pattern profiles; detecting at least one current input pattern from use of said terminal device; and dynamically matching said at least one current input pattern with one of said user input pattern profiles, and selecting the possible user associated with the one of said user input pattern profiles as the current user. 2. the method of claim 1 wherein said at least one current input pattern comprises a plurality of different input patterns, and wherein dynamically matching said at least one current input pattern comprises combining said plurality of different patterns and matching a combination of said different input patterns with one of said user input pattern profiles. 3. the method of claim 1 further comprising retraining said plurality of user input pattern profiles in said database with said at least one current input pattern. 4. the method of claim 1 further comprising determining a personal user profile associated with the current user. 5. the method of claim 4 further comprising transmitting targeted content to said current user in accordance with said personal user profile. 6. the method of claim 1 wherein said current input pattern comprises user clickstream data. 7. the method of claim 6 wherein said clickstream data relates to particular web sites visited by the user or the duration of visits to the web sites. 8. the method of claim 1 wherein said current input pattern comprises user keystroke data. 9. the method of claim 8 wherein said keystroke data comprises digraph interval data. 10. the method of claim 1 wherein said current input pattern comprises user mouse usage data. 11. 
the method of claim 1 wherein said current input pattern comprises user remote control usage data. 12. the method of claim 1 wherein said terminal device comprises a computer. 13. the method of claim 1 wherein said terminal device comprises a television set top box. 14. the method of claim 1 wherein said steps are implemented in a computer, and said computer communicates with said terminal device over a network. 15. the method of claim 14 wherein said network comprises the internet. 16. the method of claim 14 wherein said network comprises a nodal television distribution network. 17. a system for identifying a current user of a terminal device from a group of possible users, comprising: a database containing a plurality of user input pattern profiles of prior user inputs to said terminal device, each of said possible users being associated with at least one of said user input pattern profiles; means for detecting at least one current input pattern from use of said terminal device; and means for dynamically matching said at least one current input pattern with one of said user input pattern profiles, and selecting the possible user associated with the one of said user input pattern profiles as the current user. 18. the system of claim 17 wherein said at least one current input pattern comprises a plurality of different input patterns, and wherein said means for dynamically matching said at least one current input pattern combines said plurality of different patterns and matches a combination of said different input patterns with one of said user input pattern profiles. 19. the system of claim 17 further comprising means for retraining said plurality of user input pattern profiles in said database with said at least one current input pattern. 20. the system of claim 17 further comprising means for determining a personal user profile associated with the current user. 21. 
the system of claim 20 further comprising means for transmitting targeted content to said current user in accordance with said personal user profile. 22. the system of claim 17 wherein said current input pattern comprises user clickstream data. 23. the system of claim 22 wherein said clickstream data relates to particular web sites visited by the user or the duration of visits to the web sites. 24. the system of claim 17 wherein said current input pattern comprises user keystroke data. 25. the system of claim 24 wherein said keystroke data comprises digraph interval data. 26. the system of claim 17 wherein said current input pattern comprises user mouse usage data. 27. the system of claim 17 wherein said current input pattern comprises user remote control usage data. 28. the system of claim 17 wherein said terminal device comprises a computer. 29. the system of claim 17 wherein said terminal device comprises a television set top box. 30. the system of claim 17 wherein said system is implemented in a computer, and said computer communicates with said terminal device over a network. 31. the system of claim 30 wherein said network comprises the internet. 32. the system of claim 30 wherein said network comprises a nodal television distribution network. 33. a computer system for identifying a current user of a terminal device from a group of possible users, comprising: memory for storing a program and a plurality of user input pattern profiles of prior user inputs to said terminal device, each of said possible users being associated with at least one of said user input pattern profiles; and a processor operative with the program to: (a) detect at least one current input pattern from use of said terminal device; and (b) dynamically match said at least one current input pattern with one of said user input pattern profiles, and selecting the possible user associated with the one of said user input pattern profiles as the current user. 34. 
a method of delivering targeted content to a current user of a terminal device used by a plurality of possible users, comprising: providing a database containing a plurality of user input pattern profiles of prior user inputs to said terminal device, each of said possible users being associated with at least one of said user input pattern profiles; detecting at least one current input pattern from use of said terminal device; dynamically matching said at least one current input pattern with one of said user input pattern profiles, and selecting the possible user associated with the one of said user input pattern profiles as the current user; determining a personal user profile associated with the current user; and transmitting targeted content to said current user in accordance with said personal user profile. 35. the method of claim 34 wherein said targeted content comprises targeted advertising. 36. the method of claim 34 wherein said targeted content comprises recommended program viewing choices. 37. the method of claim 34 wherein said personal profile includes demographic or preference data on said current user. 38. the method of claim 37 wherein said demographic or preference data includes data on at least one of user age, user sex, number of children, income, and geographic location. 39. the method of claim 34 wherein said steps are implemented in a computer server. 40. the method of claim 39 wherein said server comprises a video server. 41. the method of claim 39 wherein said server comprises a web server. 42. the method of claim 34 wherein said terminal device comprises a set top box and a television monitor. 43. the method of claim 34 wherein said terminal device comprises a personal computer. 44. 
a method of identifying a current user of a terminal device from a group of possible users, comprising: detecting a plurality of different types of current input patterns from use of said terminal device by a current user; performing a soft match of each of said plurality of different types of current input patterns with a plurality of stored input patterns for each of said types of input patterns, said stored patterns representing input patterns for the group of possible users of said terminal device, said soft matches generating scored possible matches for each of said different types of data; determining possible combinations of said scored possible matches; determining a score for each said combination; and for the combination having the highest score, selecting a possible user associated with said combination as the current user. 45. the method of claim 44 further comprising retraining said plurality of stored input patterns with said current input patterns. 46. the method of claim 44 further comprising determining a personal user profile associated with the current user. 47. the method of claim 46 further comprising transmitting targeted content to said current user in accordance with said personal user profile. 48. the method of claim 44 wherein said different types of current input patterns include a user clickstream pattern. 49. the method of claim 48 wherein said clickstream pattern relates to particular web sites visited by the user or the duration of visits to the web sites. 50. the method of claim 44 wherein said different types of current input patterns include a user keystroke pattern. 51. the method of claim 50 wherein said keystroke pattern includes digraph interval data. 52. the method of claim 44 wherein said different types of current input patterns include user mouse usage data. 53. the method of claim 44 wherein said different types of current input patterns include user remote control usage data. 54. 
the method of claim 44 wherein said terminal device comprises a computer. 55. the method of claim 44 wherein said terminal device comprises a television set top box. 56. the method of claim 44 wherein said steps are implemented in a computer, and said computer communicates with said terminal device over a network. 57. the method of claim 56 wherein said network comprises the internet. 58. the method of claim 56 wherein said network comprises a nodal television distribution network. 59. a system for identifying a current user of a terminal device from a group of possible users, comprising: means for detecting a plurality of different types of current input patterns from use of said terminal device by a current user; means for performing a soft match of each of said plurality of different types of current input patterns with a plurality of stored input patterns for each of said types of input patterns, said stored patterns representing input patterns for the group of possible users of said terminal device, said soft matches generating scored possible matches for each of said different types of data; means for determining possible combinations of said scored possible matches; means for determining a score for each said combination; and means for selecting a possible user associated with the combination having the highest score as the current user. 60. the system of claim 59 further comprising means for retraining said plurality of stored input patterns with said current input patterns. 61. the system of claim 59 further comprising means for determining a personal user profile associated with the current user. 62. the system of claim 59 further comprising means for transmitting targeted content to said current user in accordance with said personal user profile. 63. the system of claim 59 wherein said different types of current input patterns include a user clickstream pattern. 64. 
the system of claim 63 wherein said clickstream pattern relates to particular web sites visited by the user or the duration of visits to the web sites. 65. the system of claim 59 wherein said different types of current input patterns include a user keystroke pattern. 66. the system of claim 65 wherein said keystroke pattern includes digraph interval data. 67. the system of claim 59 wherein said different types of current input patterns include user mouse usage data. 68. the system of claim 59 wherein said different types of current input patterns include user remote control usage data. 69. the system of claim 59 wherein said terminal device comprises a computer. 70. the system of claim 59 wherein said terminal device comprises a television set top box. 71. the system of claim 59 wherein said system is implemented in a computer, and said computer communicates with said terminal device over a network. 72. the system of claim 71 wherein said network comprises the internet. 73. the system of claim 71 wherein said network comprises a nodal television distribution network. 74. the method of claim 6 wherein said clickstream pattern relates to particular programs or channels selected by the user or the duration of viewing of said programs or channels. 75. the system of claim 22 wherein said clickstream data relates to particular programs or channels selected by the user or the duration of viewing of said programs or channels. 76. the method of claim 48 wherein said clickstream pattern relates to particular programs or channels selected by the user or the duration of viewing of said programs or channels. 77. the system of claim 63 wherein said clickstream pattern relates to particular programs or channels selected by the user or the duration of viewing of said programs or channels. 78. 
a method of identifying a current subset of users of a terminal device from a set of possible users, comprising: providing a database containing a plurality of user input pattern profiles of prior user inputs to said terminal device, various subsets of said possible users being associated with at least one of said user input pattern profiles; detecting at least one current input pattern from use of said terminal device by a current subset of users; and dynamically matching said at least one current input pattern with one of said user input pattern profiles, and selecting the subset of users associated with the one of said user input pattern profiles as the current subset of users. 79. the method of claim 78 wherein said at least one current input pattern comprises a plurality of different input patterns, and wherein dynamically matching said at least one current input pattern comprises combining said plurality of different patterns and matching a combination of said different input patterns with one of said user input pattern profiles. 80. the method of claim 78 further comprising retraining said plurality of user input pattern profiles in said database with said at least one current input pattern. 81. the method of claim 78 further comprising determining a personal user profile associated with the current subset of users. 82. the method of claim 81 further comprising transmitting targeted content to said terminal device in accordance with said personal user profile. 83. the method of claim 78 wherein said current input pattern comprises user clickstream data. 84. the method of claim 83 wherein said clickstream data relates to particular web sites visited by the current subset of users or the duration of visits to the web sites. 85. the method of claim 83 wherein said clickstream data relates to particular programs or channels selected by the subset of users or the duration of viewing of said programs or channels. 86. 
the method of claim 78 wherein said current input pattern comprises user keystroke data. 87. the method of claim 86 wherein said keystroke data comprises digraph interval data. 88. the method of claim 78 wherein said current input pattern comprises user mouse usage data. 89. the method of claim 78 wherein said current input pattern comprises user remote control usage data. 90. the method of claim 78 wherein said terminal device comprises a computer. 91. the method of claim 78 wherein said terminal device comprises a television set top box. 92. the method of claim 78 wherein said steps are implemented in a computer, and said computer communicates with said terminal device over a network. 93. the method of claim 92 wherein said network comprises the internet. 94. the method of claim 92 wherein said network comprises a nodal television distribution network. 95. a method of identifying a current subset of users of a terminal device from a set of possible users, comprising: detecting a plurality of different types of current input patterns from use of said terminal device by the current subset of users; performing a soft match of each of said plurality of different types of current input patterns with a plurality of stored input patterns for each of said types of input patterns, said stored patterns representing input patterns for various subsets of possible users of said terminal device, said soft matches generating scored possible matches for each of said different types of data; determining possible combinations of said scored possible matches; determining a score for each said combination; and for the combination having a score indicating a substantial match, selecting a subset of users associated with said combination as the current subset of users. 96. the method of claim 95 further comprising retraining said plurality of stored input patterns with said current input patterns. 97. 
the method of claim 95 further comprising determining a personal profile associated with the current subset of users. 98. the method of claim 97 further comprising transmitting targeted content to said terminal device in accordance with said personal profile. 99. the method of claim 95 wherein said different types of current input patterns include a user clickstream pattern. 100. the method of claim 99 wherein said clickstream pattern relates to particular web sites visited by the current subset of users or the duration of visits to the web sites. 101. the method of claim 99 wherein said clickstream pattern relates to particular programs or channels selected by the subset of users or the duration of viewing of said programs or channels. 102. the method of claim 95 wherein said different types of current input patterns include a user keystroke pattern. 103. the method of claim 102 wherein said keystroke pattern includes digraph interval data. 104. the method of claim 95 wherein said different types of current input patterns include user mouse usage data. 105. the method of claim 95 wherein said different types of current input patterns include user remote control usage data. 106. the method of claim 95 wherein said terminal device comprises a computer. 107. the method of claim 95 wherein said terminal device comprises a television set top box. 108. the method of claim 95 wherein said steps are implemented in a computer, and said computer communicates with said terminal device over a network. 109. the method of claim 108 wherein said network comprises the internet. 110. the method of claim 108 wherein said network comprises a nodal television distribution network.
method and apparatus for identifying unique client users from user behavioral data related application the present application is based on and claims priority from u.s. provisional patent application serial no. 60/282,028 filed on april 6, 2001 and entitled "method and apparatus for identifying unique client users from clickstream, keystroke and/or mouse behavioral data." background of the invention field of the invention the present invention relates generally to monitoring the activity of users of client terminal devices and, more particularly, to a method and system for identifying unique users from user behavioral data. description of related art various systems are available for profiling users of client terminal devices. profiling typically involves determining demographic and interest information on users such as, e.g., age, gender, income, marital status, location, and interests. user profiles are commonly used in selecting targeted advertising and other content to be delivered to particular users. delivery of targeted content is advantageous. for example, targeted advertising has been found to be generally more effective in achieving user response (such as in click-through rates) than advertising that is generally distributed to all users. client terminal devices are commonly used by multiple individual users. for example, a home computer or a household television set can typically be expected to be used by various different family members at different times. each particular user is likely to have a very different profile from other possible users, making delivery of targeted content ineffective. a need accordingly exists for distinguishing between various possible users at a given client terminal device. brief summary of various embodiments of the invention certain embodiments of the present invention are directed to a method and system for identifying a current user of a terminal device from a group of possible users. 
the method in accordance with one embodiment includes providing a database containing multiple user input pattern profiles of prior user inputs to the terminal device. each of the possible users of the group are associated with at least one of the user input pattern profiles. current input patterns from use of the terminal device are detected. the current input patterns are combined and then dynamically matched with one of the user input pattern profiles, and the possible user associated with the matched user input pattern profile is selected as the current user. the system for identifying a current user of a terminal device from a group of possible users in accordance with another embodiment includes a database containing multiple user input pattern profiles of prior user inputs to the terminal device. each of the possible users is associated with at least one of the user input pattern profiles. the system detects current input patterns from use of the terminal device, and then combines the patterns and dynamically matches the patterns with one of the user input pattern profiles. the system selects the possible user associated with the matched user input pattern profiles as the current user. these and other features of embodiments of the present invention will become readily apparent from the following detailed description wherein embodiments of the invention are shown and described by way of illustration of the best mode of the invention. as will be realized, the invention is capable of other and different embodiments and its several details may be capable of modifications in various respects, all without departing from the invention. accordingly, the drawings and description are to be regarded as illustrative in nature and not in a restrictive or limiting sense with the scope of the application being indicated in the claims. 
brief description of the drawings for a fuller understanding of the nature and objects of various embodiments of the present invention, reference should be made to the following detailed description taken in connection with the accompanying drawings wherein: figure 1 is a schematic diagram illustrating a representative network in which a system in accordance with various embodiments of the invention can be implemented; figure 2 is a flowchart generally illustrating the algorithm for matching a current clickstream with a stored clickstream profile in accordance with one embodiment; and figure 3 is a flowchart generally illustrating the fusion algorithm for identifying a unique user from multiple sources of input data in accordance with another embodiment of the invention. detailed description of preferred embodiments the present invention is generally directed to a method and system for identifying a current user from a group of possible users of a client terminal device from user behavioral data. once the user is identified, targeted content (such as, e.g., targeted advertising or recommended programming) can be delivered to the terminal device. figure 1 schematically illustrates a representative network in which a system for identifying unique users can be implemented. in general, the system 10 includes a server system 12 for delivering content to a plurality of user terminals or client devices 14 over a network 16. each user terminal 14 has an associated display device 18 for displaying the delivered content. each terminal 14 also has a user input or interface interaction device 20 that enables the user to interact with a user interface on the terminal device 14. input devices 20 can include, but are not limited to, infrared remote controls, keyboards, and mice or other pointer devices. 
in some embodiments, the network 16 can comprise a television broadcast network (such as, e.g., digital cable television, direct broadcast satellite, and terrestrial transmission networks), and the client terminal devices 14 can comprise, e.g., consumer television set-top boxes. the display device 18 can be, e.g., a television monitor. in some embodiments, the network 16 can comprise a computer network such as, e.g., the internet (particularly the world wide web), intranets, or other networks. the server system 12 can comprise, e.g., a web server, the terminal device 14 can comprise, e.g., a personal computer (or other web client device), and the display device 18 can be, e.g., a computer monitor. television system embodiments in the television system embodiments, the server system 12 can comprise, e.g., a video server, which sends data to and receives data from a terminal device 14 such as a television set-top box such as a digital set-top box. the network 16 can comprise an interactive television network that provides two-way communications between the server 12 and various terminal devices 14 with individual addressability of the terminal devices 14. the network 16 can, e.g., comprise a television distribution system such as a cable television network comprising, e.g., a nodal television distribution network of branched fiber-optic and/or coaxial cable lines. other types of networked distribution systems are also possible including, e.g., direct broadcast satellite systems, off-air terrestrial wireless systems and others. the terminal device 14 (e.g., set-top box) can be operated by a user with a user interface interaction device 20, e.g., a remote control device such as an infrared remote control having a keypad. internet embodiments in the internet (or other computer network) embodiments, the client terminals 14 connect to multiple servers 12 via the network 16, which is preferably the internet, but can be an intranet or other known connections. 
in the case of the internet, the servers 12 are web servers that are selectively accessible by the client devices. the web servers 12 operate so-called "web sites" and support files in the form of documents and pages. a network path to a web site generated by the server is identified by a uniform resource locator (url). one example of a client terminal device 14 is a personal computer such as, e.g., a pentium-based desktop or notebook computer running a windows operating system. a representative computer includes a computer processing unit, memory, a keyboard, a pointing device such as a mouse, and a display unit. the screen of the display unit is used to present a graphical user interface (gui) for the user. the gui is supported by the operating system and allows the user to use a point and click method of input, e.g., by moving the mouse pointer on the display screen to an icon representing a data object at a particular location on the screen and pressing on the mouse buttons to perform a user command or selection. also, one or more "windows" may be opened up on the screen independently or concurrently as desired. the content delivered by the system to users is displayed on the screen. client terminals 14 typically include browsers, which are known software tools used to access the servers 12 of the network. representative browsers for personal computers include, among others, netscape navigator and microsoft internet explorer. client terminals 14 usually access the servers 12 through some internet service provider (isp) such as, e.g., america online. typically, multiple isp "point-of-presence" (pop) systems are provided in the network, each of which includes an isp pop server linked to a group of client devices 14 for providing access to the internet. each pop server is connected to a section of the isp pop local area network (lan) that contains the user-to-internet traffic. 
the isp pop server can capture url page requests and other data from individual client devices 14 for use in identifying unique users as will be described below, and also to distribute targeted content to users. as is well known, the world wide web is the internet's multimedia information retrieval system. in particular, it is a collection of servers of the internet that use the hypertext transfer protocol (http), which provides users access to files (which can be in different formats such as text, graphics, images, sound, video, etc.) using, e.g., a standard page description language known as hypertext markup language (html). html provides basic document formatting and allows developers to specify links to other servers and files. these links include "hyperlinks," which are text phrases or graphic objects that conceal the address of a site on the web. a user of a client machine having an html-compatible browser (e.g., netscape navigator) can retrieve a web page (namely, an html formatted document) of a web site by specifying a link via the url (e.g., www.yahoo.com/photography). upon such specification, the client device makes a transmission control protocol/internet protocol (tcp/ip) request to the server identified in the link and receives the web page in return. u.s. patent application serial no. 09/558,755 filed april 21, 2000 and entitled "method and system for web user profiling and selective content delivery" is expressly incorporated by reference herein. that application discloses a method and system for profiling online users based on their observed surfing habits and for selectively delivering content, e.g., advertising, to the users based on their individual profiles. user identification from behavioral data various embodiments of the invention are directed to identifying a current individual user of a client device from a group of possible users. 
such identification can be made from user behavioral data, particularly from input patterns detected in use of input devices (such as keyboards, mice, and remote control devices). as will be described in further detail below, the detected input patterns from a current user are compared with a set of input pattern profiles, which can be developed over time and stored in a database for the group of possible users. the current user is identified by substantially matching the current input pattern with one of the stored pattern profiles, each of which is associated with one of the possible users. the database of input pattern profiles and software for detecting and matching current input patterns can reside at the client terminal 14 or elsewhere in the network such as at the server 12 or at the isp pop server, or distributed at some combination of locations. different types of input data patterns can be used separately or in combination for identifying current users. the various types of input data patterns can include, e.g., (1) clickstream data; (2) keystroke data; (3) mouse usage data; and (4) remote control device usage data. briefly, in an internet implementation, clickstream data generally relates to the particular websites accessed by the user. this information can include the urls visited and the duration of each visit. in a television implementation, clickstream data generally relates to television surf stream data, which includes data on the particular television channels or programs selected by a user. keystroke data relates to keyboard usage behavior by a user. mouse data relates to mouse (or other pointer device) usage behavior by a user. remote control device data relates to usage behavior of a remote control device, particularly to operate a television set. for each of these types of user data, a sub-algorithm can be provided for detecting and tracking recurring patterns of user behavior. 
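the four behavioral data types listed above can be pictured as one stored profile record per possible user. the sketch below is illustrative only: the field names and value encodings (dwell seconds, digraph intervals, press frequencies) are assumptions, not taken from the patent text.

```python
from dataclasses import dataclass, field

@dataclass
class InputPatternProfile:
    """one stored input pattern profile for one possible user."""
    user_id: str
    clickstream: dict = field(default_factory=dict)  # url/channel -> total dwell seconds
    keystrokes: dict = field(default_factory=dict)   # digraph -> mean inter-key interval (ms)
    mouse: dict = field(default_factory=dict)        # e.g. mean pointer speed, click rate
    remote: dict = field(default_factory=dict)       # button -> press frequency

# a stored profile for one possible user of the terminal device
profile = InputPatternProfile(
    "user-a",
    clickstream={"www.yahoo.com/photography": 120.0},
    keystrokes={"th": 85.0},
)
```

a database of such records, one or more per possible user, is what the current input patterns are matched against; the record could live at the client terminal, the server, or the isp pop server as the text describes.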
user identification can be performed using any one of these types of user behavioral data or by combining two or more types of data. a so-called 'fusion' algorithm is accordingly provided for combining the outputs from two or more of the sub-algorithms to detect unique users. briefly, the fusion algorithm keeps track of which combinations of particular patterns from, e.g., three types of data (e.g., { click pattern "a," keystroke pattern "q," mouse pattern "f"}) recur most consistently, and associates these highly recurrent combinations with particular users. clickstream behavior tracking different web or television users have different web /television channel surfing styles and interests. the clickstream sub-algorithm described below extracts distinguishing features from raw clickstreams generated by users during web /online surfing or television viewing sessions. in general, recurrent patterns of behavior in various observed different clickstreams are detected and stored. incoming clickstreams from a user client device are compared with these stored patterns, and the set of patterns most similar to the incoming clickstream pattern is output, along with their corresponding similarity scores. clickstream statistics during an online session, the clickstream generated by the current user can be distilled into a set of statistics so that different clickstreams can be easily compared and their degree of similarity measured. this degree of similarity can be the basis for associating different clickstreams as possibly having been generated by the same person. 
the following are sets of example clickstream statistics:

1) total duration of visits to top-n urls or of viewing of top-n television programs or channels

one set of clickstream statistics can be the top n (n can be variable, but is usually 8-10) unique urls or channels/programs that appear in the current clickstream, selected according to total duration of visits at these urls or of viewing of the channels/programs. the total duration is computed and stored. in addition to the top-n unique urls or channels/programs, a catch-all category named 'other' can also be maintained.

2) transition frequencies among top-n urls or channels/programs

another set of clickstream statistics can be a matrix mapping 'from' urls to 'to' urls or 'from' channels/programs to 'to' channels/programs that captures the total number of all transitions from one url or channel/program to the next in the clickstream. transitions can be tracked among the top-n urls or channels/programs as well as those in the 'other' category. in addition, 'start' and 'end' urls or channels/programs can be used along with the 'from' and 'to' dimensions, respectively.

these statistics can be used to form a pattern of user surfing behavior. they can capture both the content of the clickstream (as represented by the content of the top-n urls or channels/programs), as well as some of the idiosyncratic surfing behavior of the user (as manifested, e.g., in transition behavior and proportion of sites or channels/programs that are 'other'). user profiling can take into account the possibility that user input patterns are dependent on time. for example, in a television implementation, user viewing behavior can vary, e.g., based on the time of day or the given hour of a week.

similarity metrics

the similarity of clickstreams can be measured by calculating the similarity of statistics such as those described above between two different clickstreams.
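the two sets of statistics above can be sketched in python as follows. this is a minimal illustration only: the representation of a clickstream as a list of (url, duration) pairs, and all function and variable names, are assumptions for the sketch, not taken from the application.

```python
from collections import Counter, defaultdict

def clickstream_stats(clickstream, n=8):
    """distill a clickstream into top-n duration and transition statistics.

    `clickstream` is a hypothetical list of (url, duration_seconds) pairs
    in visit order; `n` is the number of top urls to track individually.
    """
    # 1) total duration of visits to the top-n urls, plus an 'other' bucket
    totals = Counter()
    for url, duration in clickstream:
        totals[url] += duration
    top_n = {url for url, _ in totals.most_common(n)}
    durations = {url: t for url, t in totals.items() if url in top_n}
    durations["other"] = sum(t for url, t in totals.items() if url not in top_n)

    # 2) transition counts among top-n urls and 'other', with synthetic
    #    'start' and 'end' states on the from/to dimensions
    def bucket(url):
        return url if url in top_n else "other"

    transitions = defaultdict(int)
    previous = "start"
    for url, _ in clickstream:
        transitions[(previous, bucket(url))] += 1
        previous = bucket(url)
    transitions[(previous, "end")] += 1
    return durations, dict(transitions)
```

the returned duration dict corresponds to the 'duration' vector and the transition dict to the vectorized 'transition' matrix used by the similarity metrics that follow.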
there are several different possible similarity metrics that can be used in distinguishing or comparing different clickstreams. examples of such metrics include the following:

1) dot-product of 'duration' unit vectors: two 'duration' vectors are considered to be similar if they point in the same direction in url or channels/programs space, i.e., each clickstream visits many of the same top-n urls or channels/programs in similar proportions, regardless of the actual length or duration of the clickstream. this similarity is measured by computing the dot-product between the 'duration' unit vectors. perfect similarity returns a value of unity, no similarity returns zero. the similarity of 'other' values is preferably not included in this calculation since two clickstreams with identical 'other' values might have in fact no similarity at all.

2) dot-product of unit-vectorized 'transition' matrices: for similar reasons, the transition matrices can be compared using a dot-product metric. the matrices must first be vectorized (i.e., the elements are placed in a vector). transitions to and from 'other' are considered generally significant and can be included in the calculation.

3) similarity of 'other' duration: the proportion of time spent at 'other' urls or channels/programs relative to the total user session time can be compared. similarity is measured by computing for each of the two clickstreams to be compared the proportion of time spent at 'other' urls or channels/programs, then dividing the smaller of the two by the larger.

4) similarity of total duration: this is a measure of similarity in the total duration of the clickstreams.

5) similarity of total number of distinct urls or channels/programs: this is a measure of the similarity in the total number of distinct urls or channels/programs appearing in these clickstreams.

each of these similarity metrics can be computed separately. if multiple metrics are used, they can be combined using various techniques.
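the dot-product and ratio metrics above can be sketched in python as follows; the dict-based vector representation and the function names are illustrative assumptions.

```python
import math

def unit_dot(v1, v2):
    """dot-product of two vectors after normalizing each to unit length.

    `v1` and `v2` are dicts mapping keys (urls, or (from, to) transition
    pairs) to values; missing keys count as zero. identical directions
    return 1.0, no overlap returns 0.0.
    """
    dot = sum(v1[k] * v2.get(k, 0) for k in v1)
    norm1 = math.sqrt(sum(x * x for x in v1.values()))
    norm2 = math.sqrt(sum(x * x for x in v2.values()))
    if norm1 == 0 or norm2 == 0:
        return 0.0
    return dot / (norm1 * norm2)

def duration_similarity(d1, d2):
    # metric 1: exclude 'other' before comparing duration vectors
    v1 = {k: v for k, v in d1.items() if k != "other"}
    v2 = {k: v for k, v in d2.items() if k != "other"}
    return unit_dot(v1, v2)

def ratio_similarity(a, b):
    # metrics 3-5: divide the smaller value by the larger
    if a == 0 and b == 0:
        return 1.0
    return min(a, b) / max(a, b)
```

the same `unit_dot` serves metric 2 when applied to vectorized transition matrices (with 'other' transitions included).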
for example, a composite similarity metric can be computed by multiplying a select subset of these metrics together. trial and error on actual data can determine which subset is most suitable. however, the similarities between duration vectors and transition matrices are likely to be more useful. a good similarity metric will result in a good separation between users who have reasonably different surfing habits, and will not overly separate clickstreams generated by the same individual when that person is manifesting reasonably consistent surfing behavior.

matching clickstreams based on similarity

clickstreams that have high similarity values can be considered possibly to have been generated by the same person. there are many possible ways to compute similarity. for example, one way to match a clickstream to one of a set of candidates is to select the candidate that has the highest similarity value. this technique is called a 'hard match'. alternatively, a somewhat more conservative approach can be to select a small group of very similar candidates rather than a single match. this group of candidates can subsequently be narrowed using some other criteria. this technique can be called finding a 'soft match'. a similarity threshold can be specified for soft matching. soft matching is preferable when it is desired to match users according to multiple input pattern types such as keystroke and mouse dynamics in addition to clickstream behavior.

the tracking algorithm

it is desired to match incoming clickstreams with stored clickstream profiles representing recurrent clickstream patterns that have been observed over time. each user is preferably associated with a single clickstream pattern profile. however, because an individual may have multifaceted interests, he or she may alternatively be associated with multiple clickstream pattern profiles.
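the hard-match and soft-match techniques above can be sketched in python as follows; the profile representation, the threshold value, and the names are assumptions for illustration only.

```python
def soft_match(current, profiles, similarity, threshold=0.6):
    """return all candidate profiles whose similarity to `current`
    meets `threshold`, with their scores, most similar first.

    `profiles` maps profile names to stored patterns; `similarity`
    is any metric function taking (current, pattern).
    """
    scored = [(name, similarity(current, p)) for name, p in profiles.items()]
    matches = [(name, s) for name, s in scored if s >= threshold]
    return sorted(matches, key=lambda pair: pair[1], reverse=True)

def hard_match(current, profiles, similarity):
    """return only the single most similar profile name."""
    return max(profiles, key=lambda name: similarity(current, profiles[name]))
```

a soft match keeps a small group of candidates that downstream logic (e.g., the fusion algorithm described later) can narrow further.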
the process of matching incoming clickstreams with existing pattern profiles can be as follows, as generally illustrated in figure 2.

1) first, at step 50, a set of recurrent clickstream profiles is created and stored in a database. it is expected that for a single client terminal there will be multiple different observed clickstreams generated by usually a small set of individual users, each of whom may have several different strong areas of interest that manifest in their surfing behavior. these can be represented as a set of clickstream pattern profiles that summarize the content and surfing behavior of most or all the observed clickstreams. a clustering algorithm, e.g., can be used to generate a small set of clickstream pattern profiles to cover the space of observed clickstreams. new clickstream profiles can be added whenever a new (i.e., dissimilar to existing profiles) clickstream is observed. old profiles can be deleted if no similar incoming clickstreams have been observed for a given period of time. the growth/pruning behavior of the algorithm can be moderated by a similarity threshold value that determines how precisely the profiles are desired to match incoming clickstreams, and thus how many profiles will tend to be generated.

2) the next step 52 in the matching process is to dynamically (i.e., on-the-fly) match an incoming (i.e., current) clickstream to existing clickstream profiles. as a clickstream is being generated by a user, the partial clickstream can be compared on-the-fly at generally any time with the existing set of stored clickstream profiles. a hard or soft match can be made in order to determine the identity of the current user.

3) next, at step 54, the stored clickstream profiles are preferably retrained with data from the completed clickstream. upon termination of the current clickstream, the set of clickstream profiles is preferably retrained to reflect the latest clickstream observation.
clickstream profiles can be adjusted according to their similarity to the current clickstream.

keystroke behavior tracking

another type of distinguishing user input pattern relates to the typing styles or keystroke dynamics of different users. different users have different typing styles. a keystroke dynamics algorithm is accordingly provided to capture these different styles so that they may be associated with unique users. the keystroke algorithm can be similar to the clickstream algorithm described above. the process can generally include the following steps:

1) statistics on current keyboard activity occurring concurrently with the current clickstream are compiled.
2) a set of keystroke profiles based on past observations of keyboard activity for a given terminal device is created and stored in a database.
3) the current keyboard activity is compared to the set of keystroke profiles to predict the user identity.
4) the keystroke profiles are preferably updated with the current keyboard activity once it has terminated.

keystroke statistics can comprise a vector of average behavior that can be tested for similarity to other such vectors. the keystroke profiles for users can be created and trained in a similar manner as clickstream profiles. in addition, on-the-fly matching (hard or soft) of keystroke profiles to current keyboard input can be done in a similar manner as for clickstream matching. one type of keystroke statistic that is particularly efficient and useful for characterizing typing behavior is the "digraph" interval. this is the amount of time it takes a user to type a specific sequence of two keys. by tracking the average digraph interval for a small set of select digraphs, a profile of typing behavior can be constructed.
the following is a list of frequent digraphs used in the english language (with the numbers representing typical frequency of the digraphs per 200 letters):

th 50    at 25    st 20
er 40    en 25    io 18
on 39    es 25    le 18
an 38    of 25    is 17
re 36    or 25    ou 17
he 33    nt 24    ar 16
in 31    ea 22    as 16
ed 30    ti 22    de 16
nd 30    to 22    rt 16
ha 26    it 20    ve 16

several of the most frequent digraphs can be selected for use in each keystroke profile. it is preferable that the digraphs be selected such that substantially the entire keyboard is covered.

mouse dynamics tracking

there is generally an abundance of mouse (or other pointing device) activity during a typical web browsing session, making it useful to characterize user behavior according to mouse dynamics alone or in combination with other user input behavior. similar to keystrokes, the statistics collected for mouse dynamics can form a vector that can be compared for similarity to other such vectors and can be input to an algorithm such as a clustering algorithm to identify recurring patterns. mouse behavior can include pointer motion and clicking action. the following are some examples of possible mouse usage statistics that can be gathered. for clicking action, the average double-click interval can be determined. this can be the average time it takes a user to complete a double-click, which can be as personal as a keystroke digraph interval. also, for user clicking action, the ratio of double- to single-clicks can be determined. much web navigation requires only single-clicks, yet many web users have the habit of double-clicking very frequently, which can be a distinguishing factor. for user pointer motion behavior, the average mouse velocity and average mouse acceleration statistics can be distinctive characteristics of users. motion is preferably gauged as close to the hand of the person as possible since mouse ball motion is generally a more useful statistic than pixel motion.
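the digraph-interval statistic described in the keystroke section can be sketched in python as follows; the event representation as (character, timestamp) pairs and the names are illustrative assumptions, not from the application.

```python
from collections import defaultdict

def average_digraph_intervals(keystrokes, digraphs=("th", "er", "on", "an", "re")):
    """average time taken to type each tracked two-key sequence.

    `keystrokes` is a hypothetical list of (character, timestamp_seconds)
    pairs in the order typed; only the listed digraphs are tracked.
    """
    tracked = set(digraphs)
    intervals = defaultdict(list)
    for (k1, t1), (k2, t2) in zip(keystrokes, keystrokes[1:]):
        if k1 + k2 in tracked:
            intervals[k1 + k2].append(t2 - t1)
    return {pair: sum(ts) / len(ts) for pair, ts in intervals.items()}
```

the resulting averages form part of the keystroke vector compared for similarity in the same way as clickstream statistics; analogous per-session averages (double-click interval, mouse velocity, etc.) could fill out the mouse vector.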
furthermore, the ratio of mouse to keystroke activity can also be a useful distinguishing characteristic of users. some people prefer to navigate with the mouse, while others prefer use of a keyboard. the algorithm for matching current mouse dynamics statistics with stored mouse usage profiles can be similar to that described above with respect to the clickstream algorithm. other input device usage tracking various other user input behavior can be used for determining unique users. for example, in the television embodiments, user input patterns can be determined from usage of devices such as infrared remote control devices. the following are examples of various usage characterizing patterns for such devices. these include (1) the length of time a button on the remote control device is depressed to activate the button control; (2) the particular channels selected for viewing; (3) the digraphs for selecting multi-digit channels; (4) the frequency of use of particular control buttons such as the mute button; and (5) the frequency with which adjustments such as volume adjustments are made. the algorithms for matching statistics such as these to stored input profiles can be similar to those previously described. the fusion algorithm multiple independent sources of user information (clickstream, keystroke, mouse and any other input data) can be available, each having a corresponding algorithm that tracks recurring patterns in the input data. the existence of a set of unique users can be inferred from significant associations among these recurring input patterns. for example, a certain individual will tend to generate a particular keystroke pattern, a particular mouse pattern, and one or possibly several clickstream patterns. by detecting that these patterns tend to occur together and making an association, the existence of a unique user can be inferred. 
a 'fusion' algorithm, which is generally illustrated in figure 3, is provided to track associations among recurring patterns, to determine which patterns are significant, and to assign unique user status to those that are most significant. in addition, the fusion algorithm manages the addition and deletion of unique users from the system. as previously described, each individual algorithm (e.g., for clickstream, keystroke, and mouse usage data) can perform a soft match between the current input data and its set of tracked patterns, and returns a list of most similar patterns along with their respective similarity scores as indicated in step 80. for example, the clickstream algorithm might return the following set of matched pattern data for clickstream data: { (pattern "2," .9), (pattern "4," .7), (pattern "1," .65) }, where the first entry of each matched pattern data indicates the particular matched pattern, and the second entry indicates the similarity score for that match. the fusion algorithm tracks the frequency of recurrence of each possible combination of patterns among multiple individual algorithms. for example, a possible combination can have the form: { click pattern "c," key pattern "k," mouse pattern "m" }. if there are a total of c click patterns, k keystroke patterns, and m mouse patterns being tracked, then the total number of tracked combinations is c*k*m, which can be on the order of a few hundred, assuming the number of keystroke and mouse patterns is about 5, and the number of clickstream patterns is about 10. given a soft match from each of the tracking algorithms, a complete set of associations can then be enumerated at step 82 and scored at step 84. enumeration creates all the possible combinations. for each combination, a score is computed, which can be the product of the similarities of the members of the combination. 
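the enumeration and product-scoring steps above can be sketched in python as follows; the input format (one list of soft-matched (pattern, similarity) pairs per individual algorithm) is an assumed representation.

```python
from itertools import product

def score_combinations(soft_matches):
    """enumerate every combination of soft-matched patterns across the
    individual algorithms and score it as the product of similarities.

    `soft_matches` is a list with one entry per algorithm (clickstream,
    keystroke, mouse, ...), each a list of (pattern, similarity) pairs.
    """
    scores = {}
    for combo in product(*soft_matches):
        patterns = tuple(p for p, _ in combo)
        score = 1.0
        for _, similarity in combo:
            score *= similarity
        scores[patterns] = score
    return scores
```

these per-observation scores would then be accumulated (with decay) into the running combination scores that the fusion algorithm tracks.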
it is not required that the score be the product of all the similarities; the score can also be based on various other possible combinations. then at step 86, an on-the-fly unique user identification can be made. the individual matching algorithms generate on-the-fly soft matches to current input data, which is then used by the fusion algorithm to perform a hard match to its existing set of unique users to identify the current user. once the current user is identified, it is possible to effectively deliver to the user targeted content such as, e.g., targeted advertising or program viewing recommendations. the fusion algorithm can then update the frequencies of recurrence for the enumerated combinations. one possible way of doing this would be by adding the current score of each particular combination to its previous cumulative score at step 88. it is preferable to decay all existing scores prior to applying the updates, so that infrequent or inactive patterns are weighted less heavily. unique users can be associated with patterns whose scores stand out significantly from the rest. after every update of combination scores, the fusion algorithm can determine at step 90 if any additions or deletions from the current set of inferred unique users are indicated. a new user can be added if the score of some combination exceeds a given threshold. an existing user can be deleted if the score of the corresponding combination falls below a given threshold. these thresholds are relative to the magnitudes of the entire set of scores, and represent degrees of "standing out" among all the combinations. before an addition occurs (entailing the creation of a new profile), it is preferably determined whether or not the presumed new user in fact corresponds to an existing user.
since there is the possibility that an individual user could manifest more than one type of clickstream behavior, a new user having the same keystroke and mouse behavior of an existing user can be associated with the existing user, since keystroke and mouse behaviors are more likely to correlate strongly with individual users compared to clickstream behavior. if an addition and a deletion occur at about the same time, it is possible that a particular user has simply "drifted", in which case that profile should be reassigned rather than being deleted and a new personal profile created. while the embodiments described above generally relate to identifying or tracking individual users of client terminals, they can be applied as well to identifying recurring groups of such users. for example, television viewing at various times is performed by groups of individuals such as, e.g., by a husband and wife or by a group of children. these combinations of individuals could manifest distinct behavior. for cases in which the system is unable to identify a user (or group of users) with a sufficient degree of certainty, the user could be designated as 'unknown' and an average user profile for the terminal device could be assumed. having described preferred embodiments of the present invention, it should be apparent that modifications can be made without departing from the spirit and scope of the invention.
relevant_id: 149-139-159-846-770
earliest_claim_jurisdiction: US
jurisdiction: [ "US" ]
ipcr_codes: H04N7/18,H04N5/225,G06T7/20,H04N5/232,H05B37/02,H05B47/125,H05B47/19,G01J1/32,G08C19/12,G08G1/017
earliest_claim_date: 2007-06-29T00:00:00
earliest_claim_year: 2007
classifications_ipcr_first_three_chars: [ "H04", "G06", "H05", "G01", "G08" ]
outdoor lighting fixture and camera systems
one embodiment of the invention relates to an outdoor lighting fixture that includes a ballast for controlling the amount of current provided to a lamp. the lighting fixture also includes a fixture housing at least partially surrounding the ballast and the lamp and a mounting system for holding the fixture housing to at least one of a wall and a pole. the lighting fixture yet further includes a camera coupled to the housing and a control circuit wired to the camera. the lighting fixture also includes a radio frequency transceiver wired to the control circuit. the control circuit is configured to cause information from the camera to be wirelessly transmitted by the radio frequency transceiver.
1 . an outdoor lighting fixture comprising: a ballast for controlling the amount of current provided to a lamp; a fixture housing at least partially surrounding the ballast and the lamp; a mounting system for holding the fixture housing to at least one of a wall and a pole; a camera coupled to the housing; a control circuit wired to the camera; and a radio frequency transceiver wired to the control circuit, wherein the control circuit is configured to cause information from the camera to be wirelessly transmitted by the radio frequency transceiver. 2 . the outdoor lighting fixture of claim 1 , further comprising: a power supply providing power to the ballast, the camera, the control circuit and the radio frequency transceiver. 3 . the outdoor lighting fixture of claim 1 , further comprising: a motion sensor coupled to the outdoor lighting fixture and wired to the control circuit; wherein the control circuit changes an operational state associated with the camera in response to a determination of motion, wherein the control circuit makes the determination of motion using a signal from the motion sensor. 4 . the outdoor lighting fixture of claim 3 , wherein changing an operational state associated with the camera comprises at least one of powering-up the camera, storing video captured by the camera in a persistent memory device of the outdoor lighting fixture, marking the video, and transmitting the video to a remote source. 5 . the outdoor lighting fixture of claim 3 , wherein the control circuit is further configured to cause an indication of motion to be transmitted to a remote source in response to the determination of motion. 6 . the outdoor lighting fixture of claim 5 , wherein the control circuit and the radio frequency transceiver are configured to broadcast the indication of motion to a network of radio frequency transceivers associated with other outdoor lighting fixtures. 7 . 
the outdoor lighting fixture of claim 5 , wherein the control circuit and the radio frequency transceiver are configured to transmit the indication of motion to the remote source with at least one of an outdoor lighting fixture identifier and a zone identifier associated with the outdoor lighting fixture. 8 . the outdoor lighting fixture of claim 1 , wherein the control circuit comprises a video streaming module configured to use the radio frequency transceiver to stream video information to a remote source. 9 . the outdoor lighting fixture of claim 1 , wherein the radio frequency transceiver is configured for peer-to-peer communication with other radio frequency transceivers of other outdoor lighting fixtures and wherein the control circuit is configured to cause the information from the camera to be wirelessly transmitted to the remote source via the peer-to-peer communication with the other radio frequency transceivers of the other outdoor lighting fixtures. 10 . a kit for installing on an outdoor lighting fixture pole, comprising: an outdoor lighting fixture configured for mounting to the outdoor lighting fixture pole and having a ballast and at least one lamp; a radio frequency transceiver for wirelessly communicating lighting commands and lighting information to a remote source; a camera for mounting to at least one of the outdoor lighting fixture and the outdoor lighting fixture pole; and a control circuit for wiring to the camera and the radio frequency transceiver, the control circuit configured to cause video information from the camera to be transmitted by the radio frequency transceiver. 11 . the kit of claim 10 , further comprising: a power supply providing power to the ballast, the camera, the control circuit and the radio frequency transceiver. 12 . 
the kit of claim 10 , further comprising: a motion sensor coupled to the outdoor lighting fixture and wired to the control circuit; wherein the control circuit changes an operational state associated with the camera in response to a determination of motion, wherein the control circuit makes the determination of motion using a signal from the motion sensor. 13 . the kit of claim 12 , wherein changing an operational state associated with the camera comprises at least one of powering-up the camera, storing video captured by the camera in a persistent memory device of the outdoor lighting fixture, marking the video, and transmitting the video to a remote source. 14 . the kit of claim 12 , wherein the control circuit is further configured to cause an indication of motion to be transmitted to a remote source in response to the determination of motion. 15 . the kit of claim 12 , wherein the control circuit and the radio frequency transceiver are configured to broadcast the indication of motion to a network of radio frequency transceivers associated with other outdoor lighting fixtures. 16 . the kit of claim 12 , wherein the control circuit and the radio frequency transceiver are configured to transmit the indication of motion to the remote source with at least one of an outdoor lighting fixture identifier and a zone identifier associated with the outdoor lighting fixture. 17 . the kit of claim 10 , wherein the control circuit comprises a video streaming module configured to use the radio frequency transceiver to stream video information to a remote source. 18 .
the kit of claim 10 , wherein the radio frequency transceiver is configured for peer-to-peer communication with other radio frequency transceivers of other outdoor lighting fixtures and wherein the control circuit is configured to cause the information from the camera to be wirelessly transmitted to the remote source via the peer-to-peer communication with the other radio frequency transceivers of the other outdoor lighting fixtures. 19 . an outdoor lighting fixture having a radio frequency transceiver for communicating data information to a remote source, the outdoor lighting fixture comprising: a camera; a mount for holding the camera to at least one of the outdoor lighting fixture or a pole for the outdoor lighting fixture; a control circuit wired to the camera and including memory for storing video from the camera; and an interface for wiring the control circuit to the radio frequency transceiver of the outdoor lighting fixture; wherein the control circuit is configured to receive video information from the camera and to provide the video information to the radio frequency transceiver via the interface and for communication to the remote source. 20 . the outdoor lighting fixture of claim 19 , wherein the camera is operable for capturing images, video, or images and video; and wherein the control circuit is configured to cause the stored images, video, or images and video to be wirelessly transmitted by the radio frequency transceiver.
cross-reference to related patent applications this application claims the benefit of priority under 35 u.s.c. §119(e) of u.s. provisional application no. 61/380,128, filed on sep. 3, 2010, and titled “outdoor lighting fixtures.” this application also claims the benefit of priority as a continuation-in-part of u.s. application ser. no. 12/875,930, filed on sep. 3, 2010, which claims the benefit of priority of u.s. application no. 61/275,985, filed on sep. 4, 2009. this application also claims the benefit of priority as a continuation-in-part of u.s. application ser. no. 12/550,270, filed on aug. 28, 2009, which is a continuation-in-part of application ser. no. 11/771,317, filed jun. 29, 2007, and is also a continuation-in-part of u.s. ser. no. 12/240,805, filed on sep. 29, 2008, which is a continuation-in-part of u.s. application ser. no. 12/057,217, filed mar. 27, 2008. the subject matter of application ser. nos. 61/380,128, 61/275,985, 12/875,930, 12/550,270, 12/240,805, 12/057,217, and 11/771,317 are hereby incorporated herein by reference in their entirety. background the present invention relates generally to the field of outdoor lighting fixtures. observation cameras (e.g., security cameras, traffic cameras, etc.) are conventionally mounted to a high pole or side of a building and are either wired or wirelessly connected to a base station dedicated to the observation camera. it has conventionally been challenging to provide proper light, power, and data communications facilities for observation cameras. summary one embodiment of the invention relates to an outdoor lighting fixture that includes a ballast for controlling the amount of current provided to a lamp. the lighting fixture also includes a fixture housing at least partially surrounding the ballast and the lamp and a mounting system for holding the fixture housing to at least one of a wall and a pole. 
the lighting fixture yet further includes a camera coupled to the housing and a control circuit wired to the camera. the lighting fixture also includes a radio frequency transceiver wired to the control circuit. the control circuit is configured to cause information from the camera to be wirelessly transmitted by the radio frequency transceiver. another embodiment of the invention relates to a kit for installing on an outdoor lighting fixture pole. the kit includes an outdoor lighting fixture configured for mounting to the outdoor lighting fixture pole and having a ballast and at least one lamp. the kit further includes a radio frequency transceiver for wirelessly communicating lighting commands and lighting information to a remote source. the kit also includes a camera for mounting to at least one of the outdoor lighting fixture and the outdoor lighting fixture pole. the kit yet further includes a control circuit wired to the camera and the radio frequency transceiver and configured to cause video information from the camera to be transmitted by the radio frequency transceiver. another embodiment of the invention relates to a device for use with an outdoor lighting fixture having a radio frequency transceiver for communicating data information to a remote source. the device includes a camera and a mount for holding the camera to at least one of the outdoor lighting fixture or a pole for the outdoor lighting fixture. the device further includes a control circuit wired to the camera and including memory for storing video from the camera. the device also includes an interface for wiring the control circuit to the radio frequency transceiver of the outdoor lighting fixture. the control circuit is configured to receive video information from the camera and to provide the video information to the radio frequency transceiver via the interface and for communication to the remote source. 
another embodiment of the invention relates to a device for an outdoor lighting fixture. the lighting fixture has a radio frequency transceiver for wirelessly communicating information. the device includes a camera for capturing images, video, or images and video and a mount for holding the camera to at least one of the outdoor lighting fixture or a pole. the device further includes a control circuit having a wired interface to the camera and including memory for storing the captured images, video, or images and video received from the camera via the wired interface. the device also includes a radio frequency transceiver wired to the control circuit. the control circuit is configured to cause the stored images, video, or images and video to be wirelessly transmitted by the radio frequency transceiver. alternative exemplary embodiments relate to other features and combinations of features as may be generally recited in the claims. brief description of the figures the disclosure will become more fully understood from the following detailed description, taken in conjunction with the accompanying figures, wherein like reference numerals refer to like elements, in which: fig. 1 is a bottom perspective view of an outdoor fluorescent lighting fixture, according to an exemplary embodiment; fig. 2 is an illustration of an outdoor lighting fixture including a camera, according to an exemplary embodiment; fig. 3a is a more detailed block diagram of the lighting fixture of figs. 1-2, according to an exemplary embodiment; fig. 3b is a block diagram of a lighting fixture controller and circuit, according to an exemplary embodiment; fig. 3c is a block diagram of an accessory device including a camera for communicating with a lighting fixture via a wireless connection, according to an exemplary embodiment; fig. 3d is a block diagram of an accessory device including a camera for communicating with a lighting fixture via a wired connection, according to an exemplary embodiment; fig.
4a is a flow chart of a process for activating a camera based on a motion sensor indication, according to an exemplary embodiment; fig. 4b is a flow chart of a process for providing video information to a remote source, according to an exemplary embodiment; fig. 5 is a more detailed block diagram of the master controller of fig. 3a, according to an exemplary embodiment; and fig. 6 is a diagram of a zone system for a facility lighting system, according to an exemplary embodiment. detailed description referring generally to the figures, a camera is coupled to an outdoor lighting fixture configured for mounting to a building or high pole. the camera uses power from the power source for the outdoor lighting fixture and a communications interface associated with the outdoor lighting fixture to transmit video information back to a remote source for observation or analysis. the camera may be positioned to look down at an area illuminated by the outdoor lighting fixture. referring now to fig. 1, a bottom perspective view of an outdoor fluorescent lighting fixture 102 is shown, according to an exemplary embodiment. outdoor lighting fixture 102 includes a camera 40 for capturing video information (e.g., pictures, video streams, video recordings, etc.). outdoor lighting fixture 102 may be used for security purposes, traffic camera purposes, observational purposes or otherwise. for example, outdoor fluorescent lighting fixture 102 may be configured for applications such as a street lighting application or a parking lot lighting application. in some embodiments, outdoor fluorescent lighting fixture 102 is configured to include a mounting system 32 for coupling the fluorescent lighting fixture to high poles or masts (e.g., high poles for holding street lights, high poles for holding parking lot lights, etc.).
outdoor fluorescent lighting fixture 102 may also be configured to provide wired or wireless communications capabilities, one or more control algorithms (e.g., based on sensor feedback, received wireless commands or wireless messages, etc.), built-in redundancy, and venting. many of the outdoor lighting fixtures described herein may advantageously mount to existing street light poles or other outdoor structures for holding lighting fixtures such that no modification to the existing infrastructure (other than replacing the lighting fixture itself) is necessary. in some embodiments, the outdoor lighting fixtures include control circuits for providing energy saving control features to a group of lighting fixtures or a municipality without changing existing power wiring run from pole to pole. while many of the embodiments described herein are of a fluorescent lighting fixture, in other embodiments the lighting fixture may be configured for illuminating an area using other lamp technologies (e.g., high intensity discharge (hid), led, etc.). in fig. 1, outdoor lighting fixture 102 is configured for coupling to a pole and for directing light substantially toward the ground. such an orientation may be used to illuminate streets, sidewalks, bridges, parking lots, and other outdoor areas where ground illumination is desirable. such an orientation may also direct camera 40 generally toward the ground for capturing video information of activity on the ground. outdoor lighting fixture 102 is shown to include a mounting system 32 and a housing 30. mounting system 32 is configured to mount fixture 102 including housing 30 to a pole or mast. in an exemplary embodiment, housing 30 surrounds one or more fluorescent lamps 12 (e.g., fluorescent tubes) and includes a lens (e.g., a plastic sheet, a glass sheet, etc.) that allows light from the one or more fluorescent lamps 12 to be provided from housing 30. mounting system 32 is shown to include a mount 34 and a compression sleeve 36.
compression sleeve 36 is configured to receive the pole and to tighten around the pole (e.g., when a clamp is closed, when a bolt is tightened, etc.). compression sleeve 36 may be sized and shaped for attachment to existing outdoor poles such as street light poles, sidewalk poles, parking lot poles, and the like. as provided by mounting system 32, the coupling mechanism may be mechanically adaptable to different poles or masts. for example, compression sleeve 36 may include a taper or a tapered cut so that compression sleeve 36 need not match the exact diameter of the pole or mast to which it will be coupled. while lighting fixture 102 shown in fig. 1 utilizes a compression sleeve 36 as the mechanism for coupling the mounting system to a pole or mast, other coupling mechanisms may alternatively be used (e.g., a two-piece clamp, one or more arms that bolt to the pole, etc.). according to an exemplary embodiment, fixture 102 and housing 30 are elongated and mount 34 extends along the length of housing 30. mount 34 is preferably secured to housing 30 in at least one location beyond a lengthwise center point and at least one location before the lengthwise center point. in other exemplary embodiments, the axis of compression sleeve 36 also extends along the length of housing 30. in the embodiment shown in fig. 1, compression sleeve 36 is coupled to one end of mount 34 near a lengthwise end of housing 30. housing 30 is shown to include a fixture pan 50 and a door frame 52 that mates with fixture pan 50. in the embodiments shown in the figures, door frame 52 is mounted to fixture pan 50 via hinges 54 and latches 56. when latches 56 are released, door frame 52 swings away from fixture pan 50 to allow access to fluorescent lamps 12 within housing 30. latches 56 are shown as compression-type latches, although many alternative locking or latching mechanisms may be alternatively or additionally provided to secure the different sections of the housing.
in some embodiments the latches may be similar to those found on “nema 4” type junction boxes or other enclosures. further, many different hinge mechanisms may be used. yet further, in some embodiments door frame 52 and fixture pan 50 may not be joined by a hinge and may be secured together via latches 56 on all sides, any number of screws, bolts or other fasteners that do not allow hinging, or the like. in an exemplary embodiment, fixture pan 50 and door frame 52 are configured to sandwich a rubber gasket that provides some sealing of the interior of housing 30 from the outside environment. in some embodiments the entirety of the interior of the lighting fixture is sealed such that rain and other environmental moisture does not easily enter housing 30. housing 30 and its component pieces may be galvanized steel but may be any other metal (e.g., aluminum), plastic, and/or composite material. housing 30, mounting system 32 and/or the other metal structures of lighting fixture 102 may be powder coated or otherwise treated for durability of the metal. according to an exemplary embodiment housing 30 is powder coated on the interior and exterior surfaces to provide a hard, relatively abrasion resistant, and tough surface finish. housing 30, mounting system 32, compression sleeve 36, and the entirety of lighting fixture 102 are preferably extremely robust and able to withstand environmental abuses of outdoor lighting fixtures. the shape of housing 30 and mounting system 32 is preferably such that the effective projected area (epa) relative to strong horizontal winds is minimized, which correspondingly provides for minimized wind loading parameters of the lighting fixture. ballasts, structures for holding lamps, and the lamps themselves may be installed to the interior of fixture pan 50. further, a reflector may be installed between the lamp and the interior metal of fixture pan 50.
the reflector may be of a defined geometry and coated with a white reflective thermosetting powder coating applied to the light reflecting side of the body (i.e., a side of the reflector body that faces toward a fluorescent light bulb). the white reflective coating may have reflective properties which, in combination with the defined geometry of the reflector, provide high reflectivity. the reflective coating may be as described in u.s. prov. pat. app. no. 61/165,397, filed mar. 31, 2009. in other exemplary embodiments, different reflector geometries may be used and the reflector may be uncoated or coated with other coating materials. in yet other embodiments, the reflector may be a “miro 4” type reflector manufactured and sold by alanod gmbh & co kg. the shape and orientation of housing 30 relative to the reflector and/or the lamps is configured to provide a near full cut off such that light does not project above the plane of fixture pan 50. the lighting fixtures described herein are preferably “dark-sky” compliant or friendly. to provide further resistance to environmental variables such as moisture, housing 30 may include one or more vents configured to allow moisture and air to escape housing 30 while not allowing moisture to enter housing 30. moisture may enter enclosed lighting fixtures due to vacuums that can form during hot/cold cycling of the lamps. according to an exemplary embodiment, the vents include, are covered by, or are in front of one or more pieces of material that provide oleophobic and hydrophobic protection from water, washing products, dirt, dust and other air contaminants. according to an exemplary embodiment the vents may include a gore membrane sold and manufactured by w.l. gore & associates, inc. the vent may include a hole in the body of housing 30 that is plugged with a snap-fit (or otherwise fit) plug including an expanded polytetrafluoroethylene (eptfe) membrane with a polyester non-woven backing material.
while various figures of the present disclosure, including fig. 1, illustrate lighting fixtures for fluorescent lamps, it should be noted that embodiments of the present disclosure may be utilized with any type of lighting fixture and/or lamps. further, while housing 30 is shown as being fully enclosed (e.g., having a door and window covering the underside of the fixture), it should be noted that any variety of lighting fixture shapes, styles, or types may be utilized with embodiments of the present disclosure. the lighting fixture system includes controller 16. controller 16 is connected to lighting fixture 102 via wire 14. controller 16 is configured to control the switching between different states of lighting fixture 102 (e.g., all lamps on, all lamps off, some lamps on, etc.). while controller 16 is shown as having a housing that is exterior to housing 30 of lighting fixture 102, it should be appreciated that controller 16 may be physically integrated with housing 30. for example, one or more circuit boards or circuit elements of controller 16 may be housed within, on top of, or otherwise secured to housing 30. further, in other exemplary embodiments, controller 16 (including its housing) may be coupled directly to housing 30. for example, controller 16's housing may be latched, bolted, clipped, or otherwise coupled to the interior or exterior of housing 30. controller 16's housing may generally be shaped as a rectangle (as shown), may include one or more non-right angles or curves, or may be otherwise configured. in an exemplary embodiment, controller 16's housing is made of plastic and housing 30 for the lighting fixture 102 is made from metal. in other embodiments, other suitable materials may be used. according to various embodiments, controller 16 is further configured to log usage information for lighting fixture 102 in a memory device local to controller 16.
controller 16 may further be configured to use the logged usage information to affect control logic of controller 16. controller 16 may also or alternatively be configured to provide the logged usage information to another device for processing, storage, or display. controller 16 is shown to include a sensor 13 coupled to controller 16 (e.g., controller 16's exterior housing). controller 16 may be configured to use signals received from sensor 13 to affect control logic of controller 16. further, controller 16 may be configured to provide information relating to sensor 13 to another device. referring further to fig. 1, camera 40 is shown as mounted to the underside of frame 52. in other embodiments camera 40 is mounted to other structures of outdoor lighting fixture 102 (e.g., fixture pan 50, controller 16, etc.). in yet other embodiments camera 40 is not mounted directly to a structure of lighting fixture 102 and is instead coupled to a pole, a building, or another structure near outdoor lighting fixture 102 when outdoor lighting fixture 102 is mounted. in such embodiments camera 40 is connected to a control circuit of outdoor lighting fixture 102 (e.g., circuitry in controller 16) via a wired link. in the embodiment of fig. 1, camera 40 is shown as a small circular camera mounted on the corner of frame 52; according to various exemplary embodiments, camera 40 may be of a different size, shape, or configuration. camera 40 may be implemented using any suitable technology for capturing video information. for example, camera 40 may be or include a charge-coupled device (ccd), a video pick-up tube, a complementary metal-oxide-semiconductor (cmos), a passive pixel sensor, an active pixel sensor, a bayer sensor, or an image sensor of any other suitable technology. camera 40 is shown as a fixed-position camera configured to aim in the installed direction and for monitoring a specific area (e.g., the area illuminated by outdoor lighting fixture 102).
in other embodiments camera 40 may be configured to pan, tilt, zoom (e.g., a pan-tilt-zoom (ptz) camera) or otherwise move, adjust, or change positions. in fig. 2, an illustration of an outdoor lighting fixture system 100 is shown to include outdoor lighting fixtures 102, 106, according to an exemplary embodiment. outdoor lighting fixtures 102, 106 are mounted to street light poles via mounting systems and are aimed to illuminate the road. camera 40 is aimed to capture video of vehicles in the road such as vehicle 101. the video captured by camera 40 is provided to a control circuit of outdoor lighting fixture 102 that is wired to camera 40. outdoor lighting fixture 102 further includes a radio frequency transceiver wired to the control circuit. the control circuit causes video information from the camera 40 to be wirelessly transmitted by the radio frequency transceiver in outdoor lighting fixture 102. in the illustration of fig. 2, a user interface provided by client device 112 is configured to receive the video information captured by camera 40 for playback. the video information is relayed to client device 112 via outdoor lighting fixture 106, communications network 108, and server 110 prior to arriving at client device 112. outdoor lighting fixture 102, and more particularly the control circuit and radio frequency transceiver of outdoor lighting fixture 102, are configured to relay the video information to outdoor lighting fixture 106. outdoor lighting fixture 106 has a wired connection to a data communications network 108 (e.g., an internet service provider, a private wan, etc.). the video information can be relayed through data communications network 108 or other data communication links and arrive at server 110. server 110 may be configured to store video information from many outdoor lighting fixture cameras.
server 110 can include a web service, a video streamer, or another service for allowing client device 112 to access and playback the video information stored with server 110. to avoid running a high speed wired data communication link such as link 108 to each outdoor lighting fixture in an area, the outdoor lighting fixtures in an area can each be configured to wirelessly route information to a “base station” or, as shown in fig. 2, an outdoor lighting fixture 106 having the connection to the high speed wired data communication link. in some embodiments the radio frequency transceivers and control circuits of outdoor lighting fixture 102 transmit data addressed for outdoor lighting fixture 106. in other embodiments the radio frequency transceivers and control circuits of outdoor lighting fixture 102 broadcast video information with an address for server 110. in such embodiments, as long as at least one other outdoor lighting fixture is configured to receive and relay information in a network of outdoor lighting fixtures, the video information will be routed to outdoor lighting fixture 106 having the high speed data connection and thereafter routed to server 110 via data communications link 108. each radio frequency transceiver in the network can be configured to support such a rebroadcast capability. for example, the network of outdoor lighting fixtures can have a meshed networking topology such that the network is self-routing or self-healing. in other words, each node in the network can determine how to best transmit video information back to an intended recipient. in some cases and with some network conditions video information from an originating node can take a first path and in other cases and with other network conditions the video information from the same originating node may take a second path to the same recipient node.
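the rebroadcast behavior described above can be illustrated with a minimal sketch. the disclosure does not provide code; the class, field names (origin, seq, dest), and return values below are assumptions chosen only to show how a node might decide whether to consume, relay, or drop a packet while suppressing duplicates so the mesh does not loop:

```python
# hypothetical sketch of a mesh node's rebroadcast rule (names are
# illustrative, not from the disclosure)

class FixtureNode:
    def __init__(self, node_id):
        self.node_id = node_id
        self.seen = set()  # (origin, seq) pairs already handled

    def handle(self, packet):
        """return 'consume', 'relay', or 'drop' for an incoming packet."""
        key = (packet["origin"], packet["seq"])
        if key in self.seen:
            return "drop"  # duplicate already forwarded; avoid loops
        self.seen.add(key)
        if packet["dest"] == self.node_id:
            return "consume"  # video information addressed to this node
        return "relay"  # rebroadcast toward the destination
```

under this sketch, a fixture such as outdoor lighting fixture 106 would consume packets addressed to it and relay packets addressed to server 110, which is consistent with the self-routing behavior described above.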
the outdoor lighting fixtures of an outdoor lighting fixture network can be arranged in a point-to-point, master-slave, or other relationship. in an exemplary embodiment the radio frequency transceivers are configured for peer-to-peer communication with other radio frequency transceivers of other outdoor lighting fixtures and the control circuit is configured to cause the information from the camera to be wirelessly transmitted to the remote source (e.g., server 110 ) via the peer-to-peer communication with the other radio frequency transceivers of the other outdoor lighting fixtures (e.g., outdoor lighting fixture 106 ). outdoor lighting fixture 102 additionally includes a sensor 13 (shown in fig. 1 ) for detecting motion of an object (e.g., vehicle 101 , people, etc.). sensor 13 provides a sensor output to the control circuit of outdoor lighting fixture 102 (e.g., via a wired connection). the control circuit of outdoor lighting fixture 102 can process the sensor output to determine if the sensor output is representative of motion in the area. in response to a determination of motion in the area, the control circuit can change an operational state associated with camera 40 . for example, changing an operational state associated with camera 40 can include one or more of powering-up the camera, storing video captured by the camera in a persistent memory device of the outdoor lighting fixture, marking the video, and transmitting the video to a remote source. such logic can advantageously prevent camera 40 from recording at all times or can help distinguish video information of interest from video information with no significant activity. in some exemplary embodiments the control circuit is further configured to cause an indication of motion to be transmitted to a remote source in response to the determination of motion. 
for example, the control circuit and the radio frequency transceiver may broadcast the indication of motion to a network of radio frequency transceivers associated with other outdoor lighting fixtures. control circuits for those other outdoor lighting fixtures can also be configured to change an operational state of their cameras and to be ready to capture the motion. yet further, the other outdoor lighting fixtures can be configured to fully illuminate in response to receiving an indication of motion from a remote source. for example, an outdoor lighting fixture may be configured to switch from a dimmed or off state of operation to a brighter or fully illuminated state of operation. the control circuit for outdoor lighting fixture 102 can be configured to transmit the indication of motion to the other outdoor lighting fixtures or to another remote source with at least one of an outdoor lighting fixture identifier and a zone identifier associated with the outdoor lighting fixture. receiving devices can use the received identifier or identifiers to determine whether the motion relates to a nearby fixture or whether the received motion indication should be ignored. in an exemplary embodiment the control circuit of a receiving outdoor lighting fixture will compare the identifier (e.g., zone identifier) to a stored zone identifier of its own. if the motion occurred in the same zone, the control circuit will cause its local camera to begin recording and/or will fully illuminate a ballast of the fixture. client device 112 may be used to view the camera data or to provide camera 40 or the control circuit of outdoor lighting fixture 102 with commands. for example, client device 112 may provide a display of camera data (e.g., a slideshow of pictures, a near real-time view of streaming video from camera 40, motion information relating to vehicle 101 as detected or calculated by motion sensor 13, camera 40 and the control circuit, etc.).
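the zone-matching rule described above, in which a receiving control circuit compares a received zone identifier to its own stored zone identifier before acting, can be sketched as follows. this is a minimal illustration; the function name, message field name, and action strings are assumptions, not taken from the disclosure:

```python
# hypothetical sketch of the zone comparison performed by a receiving
# control circuit (names are illustrative, not from the disclosure)

def handle_motion_indication(own_zone, message):
    """return the actions a receiving fixture would take for a motion message."""
    if message.get("zone_id") != own_zone:
        return []  # motion occurred in another zone: ignore the indication
    # same zone: start the local camera recording and fully illuminate
    return ["start_camera", "full_illumination"]
```

in this sketch a fixture in the matching zone both begins recording and fully illuminates, while a fixture in a different zone ignores the indication, mirroring the behavior described above.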
client device 112 may further provide a user interface for allowing a user to provide control instructions or commands to the control circuit associated with sensor 13 or camera 40 . for example, client device 112 , via server 110 , data communications network 108 , and outdoor lighting fixture 106 may be configured to control outdoor lighting fixture 102 including camera 40 . a user may view the data for the camera on client device 112 and provide client device 112 with user input to create camera instructions (e.g., an instruction for the camera to take various photos of the area, an instruction to follow vehicle 101 for as long as possible, an instruction for the camera to stay focused on a specific area for a specific time period, etc.), lighting fixture instructions (e.g., an instruction for a lighting fixture to stay in an illuminated state for a fixed or variable length of time based on the presence of vehicle 101 , an instruction for a lighting fixture to turn off, etc.), or other outdoor lighting fixture system 100 instructions. camera instructions may further include changing the zoom of camera 40 (e.g., zooming in or out on vehicle 101 ), panning camera 40 across a specific area (e.g., the area surrounding vehicle 101 ), tilting camera 40 (e.g., such that camera 40 shows a different angle of vehicle 101 ), or otherwise changing the position or configuration of camera 40 . outdoor lighting fixture instructions may also include instructions to provide lighting (e.g., by a secondary ballast of outdoor lighting fixture 102 , by outdoor lighting fixture 106 , etc.) such that camera 40 may better record an event or object, instructions to change lighting fixture status between an on state, an off state, and a dimmed state, etc. referring further to fig. 2 , each outdoor lighting fixture, camera, or radio frequency transceiver in a network or area can be associated with a unique identifier. 
the unique identifier can be associated with a location (e.g., a longitude/latitude coordinate, a gps coordinate, a coordinate on a city grid, etc.) and stored in memory of a server 110 or master controller (e.g., master controller 202 shown in fig. 5 or 6 ). the identifier or location or the identifier/location association can also or alternatively be stored in memory of the outdoor lighting fixture 102 . using the identifiers and locations, the server 110 can generate a map for display on a graphical user interface shown on an electronic display system of client device 112 . the server 110 may cause the map graphic to include indicia for the outdoor lighting fixture or camera (e.g., an icon), to include indicia for whether the camera is active (e.g., a green icon, a highlighted icon, a text descriptor “camera active”, etc.), or to show the motion status for the motion sensor (e.g., “detecting motion”). the server 110 may also allow user selection of an outdoor lighting fixture or camera for viewing the camera's video information, or may allow for “still” or streaming video to be shown in small windows on a map. when stills or streaming video are shown on the map, the server 110 can allow for the user to select, playback or enlarge one or more video streams of interest. referring still to fig. 2 , server 110 may be configured to provide a graphical user interface to client device 112 for manipulating the camera in a way that the camera can be used to inspect structures of the outdoor lighting fixture 102 . for example, one or more pan, tilt, or zoom controls may be provided by server 110 to the graphical user interface for receiving user commands. using the controls, a technician may be able to change the camera from focusing on, e.g., a street, to focusing on the fixture's lamps, the ballasts, the mounting system, other lighting fixtures (e.g., a fixture across the street, etc.). 
using these views, the technician may be able to determine if the lighting fixture is responding properly to commands (e.g., turn on, turn off), has a burnt-out or otherwise expired lamp, or may be able to conduct other observation or testing (e.g., testing a time-out feature of the fixture). fig. 3a is a diagram of another outdoor lighting fixture 200 , according to an exemplary embodiment. outdoor lighting fixture 200 is shown to include housing 260 and mounting system 233 (e.g., these may be similar to or different from the housing and mounting system shown in figs. 1 and 2 ). control circuit 210 for lighting fixture 200 is shown inside mounting system 233 (as opposed to being housed within controller 16 as shown in fig. 1 ). in an exemplary embodiment control circuit 210 is user-accessible via an opening in the top of mounting system 233 . the diagram shown in fig. 3a illustrates two lamp sets 240 , 242 with two fluorescent lamps forming each lamp set 240 , 242 . each lamp set 240 , 242 may include one or any number of additional lamps. lighting fixture 200 further includes two ballasts 244 , 246 . further, while some embodiments described herein relate to providing redundant lamp sets and ballasts, it should be appreciated that many embodiments of the present disclosure may only include a single lamp set and a single ballast. in other embodiments more than two ballasts and lamp sets may be included in a single lighting fixture. while the fluorescent lamps are illustrated as tube lamps extending lengthwise relative to the lighting fixture, the fluorescent lamps may be compact fluorescent bulbs, run perpendicular to the length of the lighting fixture, lamps of a different technology, or may be otherwise oriented. control circuit 210 is coupled to ballasts 244 , 246 and is configured to provide control signals to ballasts 244 , 246 . 
control circuit 210 may operate by controllably switching the relay from providing power to ballasts 244 , 246 to restricting power to ballasts 244 , 246 and vice versa. control circuit 210 is further shown to include radio frequency transceiver 206 communicably connected to control circuit 210 . according to an exemplary embodiment, the system shown in fig. 3a is configured to receive control signals from a master controller 202 or a master transceiver 204 via radio frequency transceiver 206 . in other embodiments outdoor lighting fixture 200 shown in fig. 3a is also configured to provide information to one or more remote sources such as other outdoor lighting fixtures via radio frequency transceiver 206 . in an exemplary embodiment radio frequency transceiver 206 is a zigbee transceiver configured for wireless meshed networking. in other embodiments radio frequency transceiver 206 operates according to a wifi protocol, a bluetooth protocol, or any other suitable protocol for short or long range wireless data transmission. outdoor lighting fixture 200 is further shown to include a wired uplink interface 211 . wired uplink interface 211 may be or include a wire terminal, hardware for interpreting analog or digital signals received at the wire terminal, or one or more jacks, connectors, plugs, filters, or other hardware (or software) for receiving and interpreting signals received via the wire 212 from a remote source. radio frequency transceiver 206 may include an encoder, a modulator, an amplifier, a demodulator, a decoder, an antenna, one or more filters, one or more buffers, one or more logic modules for interpreting received transmissions, and/or one or more logic modules for appropriately formatting transmissions. control circuit 210 shown in fig. 3a is shown as being entirely enclosed within mounting system 233 and as a single unit (e.g., single pcb, flexible pcb, separate pcb's but closely coupled). 
in other embodiments, however, control circuit 210 may be distributed (e.g., having some components outside of the mounting system, having some components within the fixture housing, etc.). fig. 3a is further shown to include an environment sensor 208 . environment sensor 208 is shown as located at the top of the mounting system 233 . in other embodiments, environment sensor 208 may be installed within housing 260 , to the underside of housing 260 , or to any other part of outdoor lighting fixture 200 . in yet other embodiments, environment sensor 208 may be remote from the fixture itself (e.g., coupled to a lower location on the pole, coupled to a street sign, coupled to a stop light, etc.). it should further be mentioned that one environment sensor 208 may serve multiple fixtures. this may be accomplished by environment sensor 208 directly providing wired or wireless output signals to multiple fixtures or by the environment sensor providing output signals to a single fixture (e.g., fixture 200 ) which is configured to forward the signals (or a representation or message derived from the signals) to other fixtures or to a master controller 202 for action. environment sensor 208 may be an occupancy sensor, a motion sensor, a photocell, an infrared sensor, a temperature sensor, or any other type of sensor for supporting the activities described herein. control circuit 210 coupled to environment sensor 208 may be configured to cause lamps 240 , 242 to illuminate when movement is detected or based on some other logic determination using sensor input. in an exemplary embodiment, control circuit 210 may also be configured to cause signals to be transmitted by radio frequency transceiver 206 to a security monitor observed by security personnel. receipt of these signals may cause a system controlling a pan-tilt-zoom security camera (e.g., camera 270 ) to aim toward the area covered by a light. 
the signals (or other alerts) may also be sent to other locations such as a police station system for action. for example, if activity continues occurring in a parking lot after-hours, as detected by motion sensors on a system of outdoor lighting fixtures as described herein, the outdoor lighting fixtures can each communicate (wired, wirelessly, etc.) this activity to master transceiver 204 and master controller 202 may make a determination to send a request for inspection to security or police. control circuit 210 may also be configured to turn lighting fixture 102 on for a period of time prior to turning lighting fixture 102 off if no further occupancy or motion is detected. camera 270 is shown coupled to the bottom side of housing 260 and may be connected to control circuit 210 either via a wireless or wired connection. camera 270 may alternatively be coupled to housing 260 or elsewhere on lighting fixture 200 . camera 270 may provide control circuit 210 with video and/or still photos for transmission to other lighting fixtures 230 , to a master controller 202 via a master transceiver 204 , a data communications network 250 via interface 211 , or other devices 232 wirelessly connected to lighting fixture 200 . referring now to fig. 3b , a block diagram of another controller 300 for an outdoor lighting fixture is shown, according to an exemplary embodiment. controller 300 includes control circuit 350 , power relays 302 , camera circuit 330 , sensor 318 , wireless controller 305 , and radio frequency transceiver 306 . in some embodiments activities of circuit 350 are controlled or facilitated using one or more processors 352 (e.g., a programmable integrated circuit, a field programmable gate array, an application specific integrated circuit, a general purpose processor, a processor configured to execute instructions it receives from memory, etc.). 
in other embodiments, activities of circuit 350 are controlled and facilitated without the use of one or more processors and are implemented via a circuit of analog and/or digital electronics components. memory 354 of circuit 350 may be computer memory, semiconductor-based, volatile, non-volatile, random access memory, flash memory, magnetic core memory, or any other suitable memory for storing information. controller 300 is shown to include a camera circuit 330 for receiving camera data and video information from camera 309 and processing the camera data and video information. the video information or camera data may then be provided to circuit 350 for transmission via rf transceiver 306 to a remote source or another lighting fixture. circuit 350 may further receive the camera data and perform additional processing or analysis of the camera data. for example, circuit 350 may use the video information or camera data to determine whether to change a lighting fixture status (turning the lighting fixture on or off, activating an extra ballast or lamp, etc.), to determine whether to change a schedule of the lighting fixture, or to make other control determinations. camera circuit 330 includes a camera interface 338 for communicating with a camera 309 connected (either via a wired connection or wirelessly) to controller 300 . camera interface 338 receives video information or camera data such as camera settings data, the current tilt or zoom of the camera, or the like. camera interface 338 may be a wired interface such as an ethernet interface, a digital video jack, an optical video connection, a usb interface, or another suitable interface for receiving video information from camera 309 . in alternative embodiments, camera interface 338 is a wireless interface for receiving data from the camera via a wireless connection. in yet other embodiments, camera 309 is a part of camera circuit 330 (e.g., rigidly coupled to the circuit board of circuit 330 ).
camera circuit 330 further includes modules (e.g., integrated circuits, computer code modules in a memory device and for execution by a processor, etc.) for processing the camera data received by camera interface 338 . camera circuit 330 includes processor 332 for executing computer codes of the various modules of camera circuit 330 , processing video information received from camera 309 , or to complete the execution of other activities described herein. for example, processor 332 may remove noise from the video signal (e.g., denoising), increase or decrease the brightness or contrast of the video signal or images (e.g., to improve the view provided by the video signal), resize or rescale the video signal or images (e.g., increasing the size such that a particular object in the video signal is more easily seen, interpolating the image, etc.), or perform other processing techniques on the video signal and images (e.g., deinterlacing, deflicking, deblocking, color grading, etc.). processor 332 may then provide the processed video signal or images to circuit 350 for transmitting to a remote source via radio frequency transceiver 306 , may provide the video information to video logic 336 for video analysis, may store the video in memory 334 for later use, or may conduct another activity described herein using the processed video information. memory 334 may be configured to store all video information or camera data received by camera circuit 330 , some of the video information or camera data received by camera circuit 330 , relevant video information or camera data selected by video logic 336 , all video information or camera data for a given time frame, all video information or camera data associated with a particular object within the video, or otherwise. for example, memory 334 may be configured to store all camera data that has a timestamp within the past hour, past 24 hours, past week, or within any other time frame. 
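as a concrete illustration of the kind of frame processing described for processor 332 , the sketch below applies a simple brightness/contrast adjustment to a grayscale frame. the function name, parameter names, and the list-of-lists frame representation are assumptions made for the example, not part of the disclosure.

```python
# hypothetical sketch of one processing step processor 332 might perform:
# a brightness/contrast adjustment on a grayscale frame, clamped to 0-255.

def adjust_frame(frame, brightness=0, contrast=1.0):
    """Return a new frame with contrast scaling and a brightness offset,
    each pixel clamped to the valid 0-255 range."""
    return [
        [max(0, min(255, int(pixel * contrast + brightness))) for pixel in row]
        for row in frame
    ]

frame = [[10, 120], [250, 60]]
brightened = adjust_frame(frame, brightness=20, contrast=1.5)
```

comparable logic could apply to the other techniques the text lists (denoising, rescaling, deinterlacing), each as its own transformation over the frame data.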
in another example, video logic 336 may retain all video information or camera data associated with a particular vehicle recorded by the camera, retain all camera data with a specific timestamp range (e.g., all data with a timestamp within a period of time in which sensor 318 detected motion), etc. video logic 336 receives the video information or camera data from camera interface 338 or from processor 332 and analyzes the data. the analysis of the video information may include the detection of an object within the video (either stationary or moving) or the detection of an event occurring in the area captured by the video. for example, video logic 336 may be used to identify a vehicle or license plate, and may provide circuit 350 with data regarding the vehicle (e.g., how fast the vehicle was appearing to move, the direction in which the vehicle was traveling, etc.) or the license plate. video logic 336 may include logic for determining which portions of a video signal and/or which images best represent a tracked object. camera circuit 330 further includes remote control module 340 . remote control module 340 is configured to allow for remote control of camera 309 . remote control of camera 309 may include adjusting the positioning, tilt, or zoom of the camera, adjusting when a camera records video, adjusting a camera resolution, stopping recording, starting recording, or initiating or changing any other camera activity. remote control module 340 may be configured to serve or otherwise provide user interface controls or user interface options to a remote source for adjusting the camera settings. remote control module 340 may receive an input from the user at the user interface controls or options and interpret the input (e.g., determine an adjustment to be made to camera 309 ). 
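the timestamp-based retention rule described above can be sketched as follows; the window length, data shapes, and function name are illustrative assumptions.

```python
# illustrative sketch of retention by video logic 336: keep only camera
# data whose timestamp falls within a window around a detected motion event.

MOTION_WINDOW = 30  # seconds of video retained around each motion timestamp (assumed)

def retain(frames, motion_times, window=MOTION_WINDOW):
    """frames: list of (timestamp, data) pairs; keep frames near any motion event."""
    return [
        (ts, data) for ts, data in frames
        if any(abs(ts - m) <= window for m in motion_times)
    ]

frames = [(0, "a"), (40, "b"), (65, "c"), (200, "d")]
kept = retain(frames, motion_times=[50])
```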
remote control module 340 may then cause camera circuit 330 and camera interface 338 to adjust camera 309 or remote control module 340 can cause changes to be made via other modules of camera circuit 330 such as camera settings module 346 . camera circuit 330 further includes video streamer 342 configured to process the video information from camera 309 and to provide a stream of the video to a remote source communicating with controller 300 (e.g., communicating wirelessly). video streamer 342 may process or otherwise prepare the stream of video information for streaming to the remote source. for example, video streamer 342 may compress the video for streaming, packetize the video for streaming, and wrap the packetized video according to a video streaming protocol compatible with the remote source. video streamer 342 may further be configured to negotiate and maintain a data streaming connection with the remote source. camera circuit 330 further includes server module 344 for serving video information and/or related user interfaces to a remote source. server module 344 may be, for example, a web server or web service configured to respond to requests for video information or user interfaces using one or more world wide web communications protocols. for example, server module 344 may respond to http requests by providing http formatted responses. server module 344 may be used to establish the streaming connection or streaming service provided by video streamer 342 . camera circuit 330 is further shown to include camera settings module 346 . camera settings module 346 is configured to receive commands provided to controller 300 by a remote source and relating to camera settings. camera settings module 346 can update stored camera settings or change the “live” behavior of the camera in response to the received commands. 
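the compress-and-packetize step attributed to video streamer 342 might look like the following sketch; the header layout, payload size, and use of zlib as a stand-in for video compression are assumptions for illustration.

```python
# minimal sketch of the streaming preparation step: compress a byte stream,
# split it into fixed-size payloads, and wrap each payload with a
# sequence-numbered header. the 6-byte header format here is hypothetical.

import struct
import zlib

PAYLOAD_SIZE = 1024

def packetize(raw_video: bytes, payload_size=PAYLOAD_SIZE):
    compressed = zlib.compress(raw_video)  # stand-in for video compression
    packets = []
    for seq, offset in enumerate(range(0, len(compressed), payload_size)):
        payload = compressed[offset:offset + payload_size]
        header = struct.pack(">IH", seq, len(payload))  # sequence no., length
        packets.append(header + payload)
    return packets

packets = packetize(b"frame-data" * 500)
```

a real implementation would wrap these packets according to whatever streaming protocol the remote source negotiates, as the text notes.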
for example, radio frequency transceiver 306 can receive a command for the camera to change the default pan, tilt, and zoom settings of the camera from a remote source. radio frequency transceiver 306 and wireless controller 305 can provide the command to the control circuit 350 which may route the command to camera circuit 330 and more particularly camera settings module 346 . camera settings module 346 can parse the command and set the pan, tilt, and zoom parameters for the camera by updating variables stored in memory 334 and/or providing the new parameters to camera 309 via camera interface 338 . other adjustable camera settings may include a timeframe under which the camera should record video, video settings such as the resolution of the video, the desired frames per second (fps) of the video, the brightness, contrast, or color setting of the video, and/or a default position, tilt, and zoom set for the camera. camera settings module 346 can also automatically update settings for the camera in response to received user commands regarding other settings. for example, if the zoom of camera 309 is changed via user command, camera settings module 346 can include logic for determining that, for example, the brightness of the video at the new zoom setting should be adjusted. camera settings module 346 may be further used to adjust photo settings for the camera. photo settings may include a size or resolution of the photos, the brightness, contrast, or color settings of the photos, etc. photo settings further includes rules or logic for when to take photos or “stills” of video information. for example, photos may be taken by the camera on a scheduled interval, at specific pre-determined times, or when an object is detected and is in the view of the camera. such settings can be set, changed, and maintained by camera settings module 346 . 
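the parse-and-update behavior described for camera settings module 346 can be sketched as below; the command syntax and the particular setting names are assumptions, not defined by the source.

```python
# hedged sketch of camera settings module 346 parsing a received command
# and updating stored camera parameters. the "set key=value" command
# format is hypothetical.

stored_settings = {"pan": 0, "tilt": 0, "zoom": 1, "fps": 15}

def apply_command(command: str, settings: dict) -> dict:
    """Parse a command like 'set pan=90 tilt=30 zoom=4' and update settings."""
    verb, _, args = command.partition(" ")
    if verb != "set":
        raise ValueError("unknown command: " + verb)
    for pair in args.split():
        key, _, value = pair.partition("=")
        if key in settings:  # ignore keys the camera does not expose
            settings[key] = int(value)
    return settings

apply_command("set pan=90 tilt=30 zoom=4", stored_settings)
```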
circuit 350 is further shown to include a command and control module 356 , logging module 358 , an end of life module 360 , a scheduling module 362 , a timer 364 , an environment processing module 366 , and fixture data 368 . using signals received from communications electronics of the lighting fixture and/or signals received from one or more sensors (e.g., photocells, occupancy sensors, etc.), command and control module 356 is configured to control the ballasts and lamps of the lighting fixture. command and control module 356 may include the primary control algorithm/loop for operating the fixture and may call, initiate, pass values to, receive values from, or otherwise use the other modules of the circuit. for example, command and control module 356 may primarily operate the fixture using a schedule as described below with respect to scheduling module 362 , but may allow upstream or peer control (e.g., “override control”) to allow a remote source to cause the ballast/lamps to turn on or off. command and control module 356 may be used to control 2-way communication using communications electronics of the lighting fixture. command and control module 356 may further receive data from camera circuit 330 or from a user of a remote source connecting to controller 300 and may adjust the control of the ballasts and lamps (e.g., if camera data or a user command indicates a desire to turn on the lamps of the lighting fixture for the benefit of a camera recording video). for example, if camera data and/or sensor 318 indicate there is a vehicle approaching the lighting fixture, command and control module 356 may provide a command to change the lighting fixture state to a dimmed state or an “on” state. command and control module 356 may further change the lighting fixture state based on other camera data and/or sensor 318 data (e.g., other detected motion, an ambient light level, etc.). logging module 358 is configured to identify and store fixture event information. 
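the control priority described above (schedule-driven operation that an upstream override or a vehicle detection can pre-empt) can be sketched as a small decision function; the state names and decision order are illustrative assumptions.

```python
# sketch of command and control module 356's decision: a remote override
# wins, camera/sensor vehicle detection brightens the fixture, and the
# schedule is the default. names and ordering are assumed for the example.

def fixture_state(schedule_on: bool, override: str = None,
                  vehicle_detected: bool = False) -> str:
    """Return 'on' or 'off' for the fixture's lamps."""
    if override in ("on", "off"):   # upstream/peer override pre-empts all else
        return override
    if vehicle_detected:            # camera data or sensor 318 input
        return "on"
    return "on" if schedule_on else "off"
```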
for example, logging module 358 may be configured to identify (e.g., by receiving a signal from another component of the circuit) when the lamps of the fixture are being or have been turned off or turned on. these events may be recorded by logging module 358 with a date/time stamp and with any other data. for example, logging module 358 may record each event as a row in a two dimensional table (e.g., implemented as a part of a relational database, implemented as a flat file stored in memory, etc.) with the fields such as event name, event date/time, event cause, event source. one module that may utilize such information is end of life module 360 . end of life module 360 may be configured to compile a time of use total by querying or otherwise aggregating the data stored by logging module 358 . events logged by the system may be transmitted using the communications interfaces or other electronics to a remote source via a wired or wireless connection. messages transmitting logged events or data may include an identifier unique to the lighting fixture (e.g., lighting fixture's communication hardware) that identify the fixture specifically. in addition to the activities of end of life module 360 , command and control module 356 may be configured to cause communications electronics of the fixture to transmit messages from the log or other messages upon identifying a failure (e.g., a power supply failure, a control system failure, a ballast failure, a lamp failure, etc.). while logging module 358 may be primarily used to log on/off events, logging module 358 (or another module of the control system) may log energy draw (or some value derived from energy draw such as a carbon equivalent amount) by the lighting fixture. in an exemplary embodiment, logging module 358 logs information relating to camera circuit 330 . 
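the event rows described above, and the time-of-use aggregation attributed to end of life module 360 , might be sketched as follows; the field names follow the ones the text lists, while the timestamps and event values are assumed.

```python
# sketch: logging module 358 records each event as a row, and end of life
# module 360 aggregates a time-of-use total from on/off event pairs.

event_log = [
    {"name": "lamp_on",  "time": 1000, "cause": "schedule", "source": "local"},
    {"name": "lamp_off", "time": 4600, "cause": "schedule", "source": "local"},
    {"name": "lamp_on",  "time": 9000, "cause": "motion",   "source": "sensor"},
    {"name": "lamp_off", "time": 9900, "cause": "motion",   "source": "sensor"},
]

def time_of_use(log) -> int:
    """Sum seconds between each lamp_on and the following lamp_off."""
    total, last_on = 0, None
    for event in log:
        if event["name"] == "lamp_on":
            last_on = event["time"]
        elif event["name"] == "lamp_off" and last_on is not None:
            total += event["time"] - last_on
            last_on = None
    return total

usage_seconds = time_of_use(event_log)
```

an end-of-life determination could then compare this running total against a rated lamp life.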
for example, logging module 358 can log times when video logic 336 determined that motion was present in a captured scene, log the times when camera 309 was caused to be active based on motion detected using sensor 318 , or log other activities relating to camera circuit 330 or camera 309 . in an exemplary embodiment, controller 300 (e.g., via rf transceiver 306 ) is configured to transmit the logged usage information to remote devices such as master controller 202 of fig. 3a . wireless controller 305 may be configured to recall the logged usage information from memory 316 at periodic intervals (e.g., every hour, once a day, twice a day, etc.) and to provide the logged usage information to rf transceiver 306 at the periodic intervals for transmission back to master controller 202 . in other embodiments, master controller 202 (or another network device) transmits a request for the logged information to rf transceiver 306 and the request is responded to by wireless controller 305 by retrieving the logged usage information from memory 316 . in a preferred embodiment, a plurality of controllers such as controller 300 asynchronously collect usage information for their fixture and master controller 202 , via request or via periodic transmission of the information by the controllers, gathers the usage information for later use. fig. 3b is further shown to include a scheduling module 362 . scheduling module 362 may be used by the circuit to determine when the lamps of the lighting fixture should be turned on or off. scheduling module 362 may only consider time, or may also consider inputs received from environment sensor 318 (e.g., indicating that it is night out and that artificial light is necessary), a camera connected to controller 300 (e.g., a request from the camera to illuminate an area so that video of an area or event can be recorded), or from another source. scheduling module 362 may access a schedule stored in memory 354 of the circuit to carry out its tasks.
in some embodiments, schedule data may be user-updatable via a remote source and transmitted to the fixture via the circuit and a communications interface. while end of life module 360 may utilize an actual log of fixture events as described in the previous paragraph, in some embodiments end of life module 360 may utilize scheduling information to make an end of life determination. in yet other embodiments, logging module 358 may receive data from scheduling module 362 to create its log. controller 300 and circuit 350 are further shown to include a timer 364 that may be used by circuit 350 to maintain a date/time for use by or for checking against information of scheduling module 362 , end of life module 360 , or logging module 358 . environment processing module 366 may be configured to process signals received from one or more sensors such as environment sensor 318 . environment processing module 366 may be configured to, for example, keep the lamp of the lighting fixture turned off between the hours of one and five a.m. if there is no movement detected by a nearby environment sensor. in other embodiments, environment processing module 366 may interpret the signals received from sensors but may not make final fixture behavior determinations. in such embodiments, a main logic module for the circuit or logic included in processor 352 or memory 354 may make the fixture behavior determinations using input from, for example, environment processing module 366 , scheduling module 362 , timer 364 , and fixture data 368 . in an exemplary embodiment, scheduling module 362 can complete or initiate scheduled activities relating to camera circuit 330 and camera 309 . for example, scheduling module 362 may orient a ptz camera in a first direction for morning rush hour traffic and the ptz camera in a second direction for evening rush hour traffic. the directional switch may be scheduled to occur at, e.g., 3:30 p.m. and again at 3:30 a.m.
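the scheduled orientation switch just described can be sketched as a time-of-day lookup; the pan preset angles are assumptions, while the 3:30 boundaries come from the example in the text.

```python
# sketch of scheduling module 362's ptz orientation switch: one pan preset
# for morning rush-hour traffic, another for evening, changing at 3:30.

MORNING_PAN = 90    # faces inbound rush-hour traffic (hypothetical angle)
EVENING_PAN = 270   # faces outbound rush-hour traffic (hypothetical angle)

def scheduled_pan(hour: float) -> int:
    """Return the pan preset for a fractional hour of day (0-24)."""
    # the morning orientation holds from 3:30 a.m. until 3:30 p.m.
    return MORNING_PAN if 3.5 <= hour < 15.5 else EVENING_PAN
```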
in another example, the scheduling module 362 may schedule transmissions of video information from camera circuit 330 to a remote source via radio frequency transceiver 306 . in an outdoor lighting fixture network with many cameras and radio frequency transceivers, such transmissions may be scheduled in a staggered manner by a master controller or master transceiver and the particular schedules for each individual outdoor lighting fixture may be enforced by each outdoor lighting fixture's scheduling module 362 . controller 300 is shown to include power relays 302 configured to controllably switch on or off high voltage power outputs that may be provided to first ballast 244 and second ballast 246 of fig. 3a via wires 320 , 321 . it should be noted that in other exemplary embodiments, power relays 302 may be configured to provide a low voltage control signal, optical signal, or otherwise to the lighting fixture which may cause one or more ballasts, lamps, and/or circuits of the fluorescent lighting fixture that the controller serves to turn on and off. while power relays 302 are configured to provide high voltage power outputs to ballasts 244 , 246 , it should be appreciated that controller 300 may include a port, terminal, receiver, or other input for receiving power from a high voltage power source. in embodiments where a relatively low voltage or no voltage control signal is provided by relays 302 , power for circuitry of controller 300 may be received from a power source provided to the lighting fixtures or from another source. in any embodiment of controller 300 , appropriate power supply circuitry (e.g., filtering circuitry, stabilizing circuitry, etc.) may be included with controller 300 to provide power to the components of controller 300 (e.g., relays 302 ). when sensor 318 experiences an environmental condition, logic module 314 may determine whether or not circuit 350 should change “on/off” states of the lighting fixture. 
for example, if a high ambient lighting level is detected by sensor 318 , logic module 314 may determine that circuit 350 should change states such that power relays 302 are “off.” conversely, if a low ambient lighting level is detected by sensor 318 , logic module 314 may cause circuit 350 to turn power relays 302 “on.” other control decisions, logic and activities provided by circuit 350 and wireless controller 305 and the components thereof are described herein and with reference to other figures. referring still to fig. 3b , controller 300 is shown to include wireless controller 305 and rf transceiver 306 which receives and provides data or control signals from/to circuit 350 . a command to turn the lighting fixture “off” may be received at rf transceiver 306 and interpreted by wireless controller 305 . upon recognizing the “off” command, wireless controller 305 provides an appropriate control signal to circuit 350 which causes one or more of power relays 302 to switch off. wireless controller 305 may also be configured to resolve transmission failures, reception failures, and the like. wireless controller 305 may respond to such failures by, for example, operating according to a retransmission scheme or another transmit failure mitigation scheme. wireless controller 305 may also control any other modulating, demodulating, coding, decoding, routing, or other activities of rf transceiver 306 . for example, controller 300 's control logic (e.g., controlled by logic module 314 ) may periodically include making transmissions to other controllers in a zone, making transmissions to particular controllers, or otherwise. such transmissions can be controlled by wireless controller 305 and such control may include, for example, maintaining a token-based transmission system, synchronizing clocks of the various rf transceivers or controllers, operating under a slot-based transmission/reception protocol, or otherwise.
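the command path just described, where a received "off" command is interpreted and the power relays are switched, can be sketched as follows; the relay data structure and command strings are illustrative assumptions.

```python
# minimal sketch: an on/off command arrives at the transceiver, the
# wireless controller interprets it, and the circuit switches the relays.

relays = {"relay_1": True, "relay_2": True}  # True = relay closed / power on

def handle_command(command: str, relays: dict) -> dict:
    """Interpret an on/off command and switch every relay accordingly."""
    if command == "off":
        for name in relays:
            relays[name] = False
    elif command == "on":
        for name in relays:
            relays[name] = True
    return relays

handle_command("off", relays)
```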
in the present disclosure, the term transceiver may refer to an integrated transmitter and receiver pair or a separate transmitter and receiver. referring still to fig. 3b , sensor 318 may be an infrared sensor, an optical sensor, a camera, a temperature sensor, a photodiode, a carbon dioxide sensor, or any other sensor configured to sense environmental conditions such as motion, lighting level or human occupancy of a space. in one exemplary embodiment, sensor 318 is a motion sensor and logic module 314 is configured to determine whether to change states of the lighting fixture based on whether sensor 318 indicates motion (e.g., signals from sensor 318 reach or exceed a threshold value for a period of time). logic module 314 may also or alternatively be configured to use the signal from sensor 318 to determine an ambient lighting level for an area. logic module 314 may then determine whether to change states based on the ambient lighting level. for example, logic module 314 may use a condition such as time of day in addition to ambient lighting level to determine whether to turn the lighting fixture off or on. during a critical time of the day (e.g., when a staffed assembly line is moving), even if the ambient lighting level is high, logic module 314 may refrain from turning the lighting fixture off. in another embodiment, by way of further example, logic module 314 is configured to provide a command to command and control module 356 that is configured to cause circuit 350 to turn the one or more lamps of the fluorescent lighting fixture on when logic module 314 detects motion via the signal from sensor 318 and when logic module 314 determines that the ambient lighting level is below a threshold setpoint. logic module 314 may also provide the determination of motion to camera circuit 330 for action. camera circuit 330 may respond to the receipt of an indication of motion by changing an operating state of camera circuit 330 or camera 309 .
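the motion-plus-ambient-light decision described above can be sketched as a single predicate; both threshold values are assumed for the example.

```python
# sketch of logic module 314's rule: turn the lamps on when motion is
# detected and the ambient light level is below a setpoint.

MOTION_THRESHOLD = 0.6     # sensor level that counts as motion (assumed)
AMBIENT_SETPOINT = 300.0   # lux below which artificial light is wanted (assumed)

def should_turn_on(motion_level: float, ambient_lux: float) -> bool:
    return motion_level >= MOTION_THRESHOLD and ambient_lux < AMBIENT_SETPOINT
```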
for example, camera circuit 330 may designate incoming video information as relating to motion, recording “start motion” and “stop motion” metadata in memory 334 . sensor interface 312 may be configured to receive signals from environment sensor 318 . sensor interface 312 may include any number of jacks, terminals, solder points or other connectors for receiving a wire or lead from environment sensor 318 . sensor interface 312 may also or alternatively be a radio frequency transceiver or receiver for receiving signals from wireless sensors. for example, sensor interface 312 may be a bluetooth protocol compatible transceiver, a zigbee transceiver, or any other standard or proprietary transceiver. regardless of the communication medium used, sensor interface 312 may include filters, analog to digital converters, buffers, or other components configured to handle signals received from environment sensor 318 . sensor interface 312 may be configured to provide the result of any signal transformation (or the raw signal) to circuit 350 for further processing. referring further to fig. 3b , logic module 314 may include a restrike violation module (e.g., in memory 316 ) that is configured to prevent logic module 314 from commanding circuit 350 to cause the fluorescent lamps to turn on while a restrike time is counted down. the restrike time may correspond with a maximum cool-down time for the lamp, allowing the lamp to experience its preferred strike-up cycle even if a command to turn the lamp back on is received at rf transceiver 306 . in other embodiments, logic module 314 may be configured to prevent rapid on/off switching due to sensed motion, another environmental condition, or a sensor or controller error. logic module 314 may be configured to, for example, entirely discontinue the on/off switching based on inputs received from the sensor by analyzing the behavior of the sensor, the switching, and logged usage information.
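the restrike rule described above, where "on" commands are ignored until the cool-down time has elapsed, might be sketched as follows; the 90-second figure is an assumption for the example.

```python
# hedged sketch of the restrike violation rule: after a lamp turns off,
# strike-up is not allowed again until the restrike window has elapsed.

RESTRIKE_SECONDS = 90  # assumed maximum cool-down time for the lamp

def allow_strike(off_time: float, now: float,
                 restrike: float = RESTRIKE_SECONDS) -> bool:
    """True once the cool-down window since the last turn-off has passed."""
    return (now - off_time) >= restrike
```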
by way of further example, logic module 314 may be configured to discontinue the on/off switching based on a determination that switching based on the inputs from the sensor has occurred too frequently (e.g., exceeding a threshold number of “on” switches within a predetermined amount of time, undesired switching based on the time of day or night, etc.). logic module 314 may be configured to log or communicate such a determination. using such configurations, logic module 314 is configured to self-diagnose and correct undesirable behavior that would otherwise continue occurring based on the default, user, or system-configured settings. referring now to fig. 3c , an accessory device 370 is shown, according to an exemplary embodiment. accessory device 370 is for use with an outdoor lighting fixture 390 having a radio frequency transceiver 396 for communicating data to a remote source. the outdoor lighting fixture, in such embodiments, does not include a camera. accessory device 370 includes a camera 372 , a control circuit 374 , and an rf transceiver 378 . accessory device 370 can also include a mount 375 for holding camera 372 to outdoor lighting fixture 390 or a pole for the outdoor lighting fixture. control circuit 374 is wired to camera 372 via interface 371 and includes memory 376 for storing video from the camera 372 . camera 372 may have the same functionality as described in the present disclosure. camera 372 is configured to capture images and video and provide the images and video to control circuit 374 . control circuit 374 stores the images and video in memory 376 . control circuit 374 further provides the images and video to rf transceiver 378 . rf transceiver 378 is wired to control circuit 374 and wirelessly transmits the images and video to rf transceiver 396 of lighting fixture 390 . control circuit 392 of lighting fixture 390 may then receive and process the images and video or continue transmitting the video information to a remote source.
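the too-frequent-switching self-diagnosis described above can be sketched as a sliding-window count; the window length and switch limit are illustrative assumptions.

```python
# sketch of the self-diagnosis: discontinue sensor-driven switching when
# "on" switches within a look-back window exceed an allowed limit.

WINDOW_SECONDS = 600   # look-back window (assumed)
MAX_ON_SWITCHES = 5    # allowed "on" switches inside the window (assumed)

def switching_too_frequent(on_times, now,
                           window=WINDOW_SECONDS, limit=MAX_ON_SWITCHES):
    """True when sensor-driven 'on' events inside the window exceed the limit."""
    recent = [t for t in on_times if now - t <= window]
    return len(recent) > limit
```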
referring now to fig. 3d , an accessory device 380 for use with an outdoor lighting fixture 391 is shown, according to an exemplary embodiment. accessory device 380 , as opposed to accessory device 370 shown in fig. 3c , includes a wired interface 388 for wiring the accessory device's control circuit 384 to radio frequency transceiver 397 of outdoor lighting fixture 391 . accessory device 380 includes a camera 382 for capturing images, video, or images and video. control circuit 384 includes a wired interface 381 to camera 382 and includes memory 386 for storing the captured images, video or images and video received from the camera 382 via wired interface 381 . accessory device 380 further includes a mount 373 for holding camera 382 to outdoor lighting fixture 391 or a pole for the outdoor lighting fixture. wired interface 388 provides the images and video received at control circuit 384 to rf transceiver 397 of lighting fixture 391 . control circuit 395 of lighting fixture 391 is coupled to rf transceiver 397 . control circuit 395 causes the transmission of the received video information by rf transceiver 397 . referring now to fig. 4a , a flow chart of a process 400 for activating a camera based on a motion sensor is shown, according to an exemplary embodiment. process 400 includes receiving a signal from a motion sensor of the lighting fixture (step 402 ). process 400 further includes analyzing the received signal to determine whether motion exists (step 404 ). process 400 further includes initiating a camera activity in response to motion detection (step 406 ). the camera activity may be turning the camera on, beginning recording with the camera, tracking an object in motion, or recording video for the duration of time the object is in view of the camera. process 400 further includes providing a motion indication to a local lighting fixture control circuit (step 408 ). 
the lighting fixture control circuit may use the information to turn on a ballast or lamp for illuminating an outdoor area. process 400 further includes transmitting the motion information to another lighting fixture (step 410 ). the next lighting fixture can further transmit the indication of motion or can use the indication of motion to determine whether to change lighting states (e.g., turn on one or more ballasts, brighten from a dimmed state, etc.). referring now to fig. 4b , a flow chart of a process 420 for providing video information to a remote source is shown, according to an exemplary embodiment. process 420 includes receiving a request from the remote source to serve video (step 422 ). the request may originate from a user interface, from an automated process for periodically serving video, or may be based on a condition sensed by the outdoor lighting fixture (e.g., in response to detected motion). process 420 further includes authenticating the remote source (step 424 ). the authentication may include verifying that the remote source or a user of the remote source has permission to view the video (e.g., via a user id or other identification method) or verifying security settings of the remote source. process 420 further includes providing available video information to the remote source (step 426 ). process 420 further includes providing a user interface to the remote source (step 428 ). the user interface may be used to provide a display for a user of the remote source to view the video. process 420 further includes receiving a selection of video information from the remote source (step 430 ). the selection of video information may include a request to view a specific video, specific portions of a video, meta information (e.g., a timestamp or timeframe) of the selected video, or other video-related requests. the selected video information is streamed to the remote source (step 432 ) in response to the selection. 
step 432 may include various pre-processing tasks. for example, pre-processing tasks may include compressing the video for streaming, packetizing the video for streaming, and wrapping the packetized video according to a video streaming protocol compatible with the remote source. process 420 further includes receiving setting information from the remote source (step 434 ). setting information may include various camera settings (e.g., video recording settings such as a resolution of the video, brightness or color settings, instructions for recording an object in the view of the camera, etc.). in response to the received setting information, settings in the camera are updated (step 436 ). process 420 further includes receiving ptz commands from the remote source (step 438 ) and adjusting ptz parameters of the camera based on the received commands (step 440 ). ptz commands may include an adjustment of the panning of the camera, the tilt of the camera, or the zoom level of the camera. the user interface of process 420 may include various controls for a user for providing a selection. for example, buttons that a user may click to change the tilt or zoom of the camera may be provided on the user interface, the user interface may show multiple camera views such that a user can select a specific camera view, etc. referring now to fig. 5 , a more detailed block diagram of master controller 202 is shown, according to an exemplary embodiment. master controller 202 (e.g., a control computer) may be configured as the “master controller” described in u.s. application ser. no. 12/240,805, filed sep. 29, 2008, and incorporated herein by reference in its entirety. master controller 202 is generally configured to receive user inputs (e.g., via touchscreen display 530 ) and to set or change settings of the camera or lighting system based on the user inputs. referring further to fig. 
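the request flow of process 420 (authenticate the remote source, then serve the selected video) can be sketched as below; the user ids, the in-memory video store, and the slice-based selection are assumptions standing in for a real streaming implementation.

```python
# illustrative sketch of process 420: authenticate, select, and serve.

authorized_users = {"operator-1"}                      # hypothetical user ids
video_store = {"clip-42": ["frame0", "frame1", "frame2"]}

def serve_video(user_id, clip_id, start=0, end=None):
    if user_id not in authorized_users:   # step 424: authenticate the source
        raise PermissionError("not authorized")
    frames = video_store[clip_id]         # steps 426/430: available/selected video
    return frames[start:end]              # step 432: stream the selected portion

clip = serve_video("operator-1", "clip-42", start=1)
```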
5 , master controller 202 is shown to include processing circuit 502 including memory 504 and processor 506 . in an exemplary embodiment, master controller 202 and more particularly processing circuit 502 are configured to run a microsoft windows operating system (e.g., xp, vista, etc.) and are configured to include a software suite configured to provide the features described herein. the software suite may include a variety of modules (e.g., modules 508 - 514 ) configured to complete various activities of master controller 202 . modules 508 - 514 may be or include computer code, analog circuitry, one or more integrated circuits, or another collection of logic circuitry. in various exemplary embodiments, processor 506 may be a general purpose processor, a specific purpose processor, a programmable logic controller (plc), a field programmable gate array, a combination thereof, or otherwise and configured to complete, cause the completion of, and/or facilitate the completion of the activities of master controller 202 described herein. memory 504 may be configured to store historical data received from lighting fixture controllers or other building devices, configuration information, schedule information, setting information, zone information, or other temporary or archived information. memory 504 may also be configured to store computer code for execution by processor 506 . when executed, such computer code (e.g., stored in memory 504 or otherwise, script code, object code, etc.) configures processing circuit 502 , processor 506 or more generally master controller 202 for the activities described herein. touch screen display 530 and more particularly user interface module 508 are configured to allow and facilitate user interaction (e.g., input and output) with master controller 202 . 
it should be appreciated that in alternative embodiments of master controller 202 , the display associated with master controller 202 may not be a touch screen, may be separated from the casing housing the control computer, and/or may be distributed from the control computer and connected via a network connection (e.g., internet connection, lan connection, wan connection, etc.). further, it should be appreciated that master controller 202 may be connected to a mouse, keyboard, or any other input device or devices for providing user input to master controller 202 . control computer is shown to include a communications interface 532 configured to connect to a wire associated with master transceiver 204 . communications interface 532 may be a proprietary circuit for communicating with master transceiver 204 via a proprietary communications protocol. in other embodiments, communications interface 532 may be configured to communicate with master transceiver 204 via a standard communications protocol. for example, communications interface 532 may include ethernet communications electronics (e.g., an ethernet card) and an appropriate port (e.g., an rj45 port configured for cat5 cabling) to which an ethernet cable is run from master controller 202 to master transceiver 204 . master transceiver 204 may be as described in u.s. application ser. nos. 12/240,805, 12/057,217, or 11/771,317 which are each incorporated herein by reference. communications interface 532 and more generally master transceiver 204 are controlled by logic of wireless interface module 512 . wireless interface module 512 may include drivers, control software, configuration software, or other logic configured to facilitate communications activities of master controller 202 with lighting fixture controllers. for example, wireless interface module 512 may package, address format, or otherwise prepare messages for transmission to and reception by particular controllers or zones. 
wireless interface module 512 may also interpret, route, decode, or otherwise handle communications received at master transceiver 204 and communications interface 532 . referring still to fig. 5 , user interface module 508 may include the software and other resources for the display and the handling of automatic or user inputs received at the graphical user interfaces of master controller 202 . while user interface module 508 is executing and receiving user input, user interface module 508 may interpret user input and cause various other modules, algorithms, routines, or sub-processes to be called, initiated, or otherwise affected. for example, control logic module 514 and/or a plurality of control sub-processes thereof may be called by user interface module 508 upon receiving certain user input events. user interface module 508 may also include server software (e.g., web server software, remote desktop software, etc.) configured to allow remote access to the display. user interface module 508 may be configured to complete some of the control activities described herein rather than control logic module 514 . in other embodiments, user interface module 508 merely drives the graphical user interfaces and handles user input/output events while control logic module 514 controls the majority of the actual control logic. control logic module 514 may be the primary logic module for master controller 202 and may be the main routine that calls, for example, modules 508 , 510 , etc. control logic module 514 may generally be configured to provide lighting control, energy savings calculations, demand/response-based control, load shedding, load submetering, hvac control, building automation control, workstation control, advertisement control, power strip control, “sleep mode” control, or any other types of control. 
in an exemplary embodiment, control logic module 514 operates based off of information stored in one or more databases of master controller 202 and stored in memory 504 or another memory device in communication with master controller 202 . the database may be populated with information based on user input received at graphical user interfaces and control logic module 514 may continuously draw on the database information to make control decisions. for example, a user may establish any number of zones, set schedules for each zone, create ambient lighting parameters for each zone or fixture, etc. this information is stored in the database, related (e.g., via a relational database scheme, xml sets for zones or fixtures, or otherwise) and recalled by control logic module 514 as control logic module 514 proceeds through its various control algorithms. control logic module 514 may include any number of functions or sub-processes. for example, a scheduling sub-process of control logic module 514 may check at regular intervals to determine if an event is scheduled to take place. when events are determined to take place, the scheduling sub-process or another routine of control logic module 514 may call or otherwise use another module or routine to initiate the event. for example, if the schedule indicates that a zone should be turned off at 5:00 pm, then when 5:00 pm arrives the scheduling sub-process may call a routine (e.g., of wireless interface module) that causes an “off” signal to be transmitted by master transceiver 204 . control logic module 514 may also be configured to conduct or facilitate the completion of any other process, sub-process, or process steps conducted by master controller 202 described herein. referring further to fig. 5 , device interface module 510 facilitates the connection of one or more field devices, sensors, or other inputs not associated with master transceiver 204 . 
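the scheduling sub-process described above (checking at regular intervals whether an event is due, e.g. turning a zone off at 5:00 pm) can be sketched as follows. the schedule contents and function names are illustrative assumptions; the returned commands stand in for the signals that would be handed to the wireless interface module for transmission by master transceiver 204.

```python
# Sketch of the scheduling sub-process of control logic module 514:
# at each check interval, find events that became due since the last
# check and return the (zone, command) pairs to be transmitted.

from datetime import time

schedule = {
    "zone i": [(time(17, 0), "off")],   # turn zone i off at 5:00 pm
    "zone ii": [(time(6, 30), "on")],
}

def due_commands(now, last_check):
    """Return (zone, command) pairs scheduled after last_check, up to now."""
    commands = []
    for zone, events in schedule.items():
        for t, cmd in events:
            if last_check < t <= now:
                commands.append((zone, cmd))
    return commands

# when 5:00 pm arrives, the scheduler picks up the zone i "off" event
print(due_commands(time(17, 0), time(16, 55)))
```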
for example, fieldbus interfaces 516 , 520 may be configured to communicate with any number of monitored devices 518 , 522 . the communication may be according to a communications protocol which may be standard or proprietary and/or serial or parallel. fieldbus interfaces 516 , 520 can be or include circuit cards for connection to processing circuit 502 , jacks or terminals for physically receiving connectors from wires coupling monitored devices 518 , 522 , logic circuitry or software for translating communications between processing circuit 502 and monitored devices 518 , 522 , or otherwise. in an exemplary embodiment, device interface module 510 handles and interprets data input from the monitored devices and controls the output activities of fieldbus interfaces 516 , 520 to monitored devices 518 , 522 . fieldbus interfaces 516 , 520 and device interface module 510 may also be used in concert with user interface module 508 and control logic module 514 to provide control to the monitored devices 518 , 522 . for example, monitored devices 518 , 522 may be mechanical devices configured to operate a motor, one or more electronic valves, one or more workstations, machinery stations, a solenoid or valve, or otherwise. such devices may be assigned to zones similar to the lighting fixtures described above and below or controlled independently. user interface module 508 may allow schedules and conditions to be established for each of devices 518 , 522 so that master controller 202 may be used as a comprehensive energy management system for a facility. for example, a motor that controls the movement of a spinning advertisement may be coupled to the power output or relays of a controller similar to controller 300 of fig. 3b or otherwise. this controller may be assigned to a zone (e.g., via user interfaces at touchscreen display 530 ) and provided a schedule for turning on and off during the day. 
in another embodiment, the electrical relays of the controller may be coupled to other building devices such as video monitors for informational display, exterior signs, task lighting, audio systems, or other electrically operated devices. referring further to fig. 5 , power monitor 550 is shown as coupled to fieldbus interfaces 516 in an exemplary embodiment. however, power monitor 550 may also or alternatively be coupled to its own controller or rf transceiver 551 for communicating with master transceiver 204 . power monitor 550 may generally be configured to couple to building power resources (e.g., building mains input, building power meter, etc.) and to receive or calculate an indication of power utilized by the building or a portion of the building. this input may be received in a variety of different ways according to varying embodiments. for example, power monitor 550 may include a current transformer (ct) configured to measure the current in the mains inlet to a building, may be coupled to or include a pulse monitor, may be configured to monitor voltage, or may monitor power in other ways. power monitor 550 is intended to provide “real time” or “near real time” monitoring of power and to provide the result of such monitoring to master controller 202 for use or reporting. when used with power monitor 550 , control logic module 514 may be configured to include logic that sheds loads (e.g., sends off signals to lighting fixtures via a lighting fixture controller network, sends off signals to monitored devices 518 , 522 , adjusts ambient light setpoints, adjusts schedules, shuts lights off according to a priority tier, etc.) to maintain a setpoint power meter level or threshold. in other exemplary embodiments, control logic module 514 may store or receive pricing information from a utility and shed loads if the metered power usage multiplied by the pricing rate is greater than certain absolute thresholds or tiered thresholds. 
for example, if daily energy cost is expected to exceed $500 for a building, control logic module 514 may be configured to change the ambient light setpoints for the lighting fixtures in the building until daily energy cost is expected to fall beneath $500. in an exemplary embodiment, user interface module 508 is configured to cause a screen to be displayed that allows a user to associate different zones or lighting fixtures with different demand/response priority levels. accordingly, when a utility provider or internal calculation determines that a load should be shed, control logic module 514 will check the zone or lighting fixture database to shed loads of the lowest priority first while leaving higher priority loads unaffected. referring further to fig. 5 , master controller 202 and memory 504 include various modules 560 - 568 for camera operation. memory 504 includes camera system client 560 . camera system client 560 is configured to manage the various cameras that wirelessly communicate with master controller 202 . for example, the functions of camera system client 560 may include identifying cameras (e.g., a name or id of the camera, the type of camera), identifying a zone or area associated with the cameras (e.g., grouping cameras together based on the location and functionality of the cameras), identifying a function of the cameras (e.g., identifying cameras configured to record video, cameras configured to record specific events, etc.), or otherwise. for example, camera system client 560 may group all cameras in a zone and provide camera information for each camera in the zone to the other modules of master controller 202 or to a remote source via master transceiver 204 . further, camera system client 560 may be used to sort cameras such that a user of touch screen display 530 may find and view all cameras in a specific zone, all cameras with a specific functionality, etc. master controller 202 further includes mass video processor 562 .
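the priority-tier load shedding described above (shed the lowest-priority loads first until metered power falls back under the setpoint) can be sketched as follows. the load names, wattages, and priority values are made-up example data.

```python
# Sketch of tiered load shedding: given metered power above a setpoint,
# select loads to turn off starting from the lowest priority tier.

def shed_loads(loads, metered_kw, setpoint_kw):
    """loads: list of (name, kw, priority); lower priority sheds first.
    Returns the names of the loads to shed, in shedding order."""
    to_shed = []
    for name, kw, _priority in sorted(loads, key=lambda load: load[2]):
        if metered_kw <= setpoint_kw:
            break
        to_shed.append(name)
        metered_kw -= kw
    return to_shed

loads = [("parking lot", 5.0, 1), ("walkway", 2.0, 2), ("entrance", 3.0, 9)]
# 12 kW metered vs. a 6 kW setpoint: shed the two lowest-priority loads
print(shed_loads(loads, metered_kw=12.0, setpoint_kw=6.0))
```

the same structure extends naturally to the pricing-based variant: compute metered power multiplied by the utility rate and shed until the projected cost falls under the threshold.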
mass video processor 562 processes video or video information provided by the cameras wirelessly communicating with master controller 202 . mass video processor 562 may include processing the video for playback on a user interface, for display as part of a display (e.g., a display provided by touch screen display 530 ), or other video processing for providing video or video information to a device or user wirelessly communicating with master controller 202 . master controller 202 further includes video storage 564 . video storage 564 stores various camera data (e.g., video or photos) received by master controller 202 or camera data to be transmitted wirelessly to cameras communicating with master controller 202 . video storage 564 may include storage of videos, photos, camera configuration information, a history of usage of the cameras, etc. master controller 202 further includes camera system configuration information 566 . camera system configuration information 566 provides configurations for the various cameras that wirelessly communicate with master controller 202 . configuration information may include camera positioning (e.g., adjusting the tilt or zoom of a ptz camera), resolution or other video quality properties, or other configuration information as described in the present disclosure. master controller 202 further includes camera system command module 568 . camera system command module 568 is configured to provide commands to various cameras that may wirelessly communicate with master controller 202 . commands provided to the cameras may include instructions for the camera to record an event, instructions relating to the time and duration of the recording, or other camera instructions as described in the present disclosure. referring now to fig. 6 , a diagram of a zone system for a facility lighting system 600 is shown, according to an exemplary embodiment. 
facility lighting system 600 is shown to include master controller 202 that is configured to conduct or coordinate control activities as described in fig. 5 . master controller 202 is preferably configured to provide a graphical user interface to a local or remote electronic display screen for allowing a user to adjust control parameters, turn lighting fixtures on or off, or to otherwise affect the operation of lighting fixtures in a facility. for example, master controller 202 includes touch screen display 530 for displaying such a graphical user interface and for allowing user interaction (e.g., input and output) with master controller 202 . touch screen display 530 is configured to provide a user with a display for viewing and managing lighting fixture and camera settings. for example, referring also to fig. 3b , master controller 202 may receive data from camera circuit 330 and may provide the data to touch screen display 530 . touch screen display 530 may then be configured to provide a user interface for a user to provide camera settings and commands as described in the embodiment of fig. 3b . it should be noted that while master controller 202 is shown in fig. 6 as housed in a wall-mounted panel it may be housed in or coupled to any other suitable computer casing or frame. the user interfaces are intended to provide an easily configurable lighting system and/or camera system for an environment such as the environment shown in fig. 2 . the user interfaces are intended to allow even untrained users to reconfigure or reset a lighting system or camera system using relatively few clicks. in an exemplary embodiment, the user interfaces do not require a keyboard for entering values. advantageously, users other than building managers may be able to setup, interact with, or reconfigure the systems using the provided user interfaces. referring further to fig. 6 , master controller is shown as connected to master transceiver 204 via communications interface 532 . 
master transceiver 204 may be a radio frequency transceiver configured to provide wireless signals to a network of controllers. in fig. 6 , master transceiver 204 is shown in bi-directional wireless communication with a plurality of lighting fixture controllers 602 , 604 , 606 , 608 . fig. 6 further illustrates controllers 602 , 604 forming a first logical group 610 identified as "zone i" and controllers 606 , 608 forming a second logical group 612 identified as "zone ii." master controller 202 may be configured to provide different processing or different commands for zones 610 , 612 . while master controller 202 is configured to complete a variety of control activities for lighting fixture controllers 602 , 604 , 606 , 608 , in many exemplary embodiments of the present disclosure, each controller associated with a lighting fixture (e.g., controllers 602 , 604 , 606 , 608 ) includes circuitry configured to provide a variety of "smart" or "intelligent features" that are either independent of master controller 202 or operate in concert with master controller 202 . in the embodiment of fig. 6 , each lighting fixture may include or be coupled to a camera and may provide commands received from master controller 202 to its associated camera, or each zone may include a camera with which master controller 202 communicates instead of a lighting fixture. according to various exemplary embodiments, any number of lighting fixtures and/or cameras may be included in a particular zone. according to an exemplary embodiment, different camera and lighting fixture settings may be provided to zones 610 , 612 .
for example, one set of camera and lighting fixture settings may be provided to zone 610 in response to a vehicle traveling through zone 610 (e.g., instructions for recording vehicle movement and providing light for the vehicle) while a second set of camera settings may be provided to zone 612 (e.g., instructions for turning lighting fixtures 606 , 608 on to a dimmed state while positioning cameras to detect and pick up the vehicle if the vehicle enters zone 612 ). according to various exemplary embodiments, master controller 202 may provide the same camera and lighting fixture settings to each lighting fixture and camera in a zone, may provide different camera settings for different cameras and lighting fixtures of the zone, or otherwise. the construction and arrangement of the systems and methods as shown in the various exemplary embodiments are illustrative only. although only a few embodiments have been described in detail in this disclosure, many modifications are possible (e.g., variations in sizes, dimensions, structures, shapes and proportions of the various elements, values of parameters, mounting arrangements, use of materials, colors, orientations, etc.). for example, the position of elements may be reversed or otherwise varied and the nature or number of discrete elements or positions may be altered or varied. accordingly, all such modifications are intended to be included within the scope of the present disclosure. the order or sequence of any process or method steps may be varied or re-sequenced according to alternative embodiments. other substitutions, modifications, changes, and omissions may be made in the design, operating conditions and arrangement of the exemplary embodiments without departing from the scope of the present disclosure. in alternative exemplary embodiments the lighting fixtures shown and described throughout this application may be configured or modified for indoor use. 
for example, rather than including a mounting system for coupling the lighting fixture to a street pole, the lighting fixtures in alternative embodiments may include a mounting system for coupling the lighting fixture to an indoor ceiling mount or an indoor wall mount. such camera-integrated indoor lighting fixtures may be used in warehouses, manufacturing facilities, sporting arenas, airports, or other environments. the present disclosure contemplates methods, systems and program products on any machine-readable media for accomplishing various operations. the embodiments of the present disclosure may be implemented using existing computer processors, or by a special purpose computer processor for an appropriate system, incorporated for this or another purpose, or by a hardwired system. embodiments within the scope of the present disclosure include program products comprising machine-readable media for carrying or having machine-executable instructions or data structures stored thereon. such machine-readable media can be any available media that can be accessed by a general purpose or special purpose computer or other machine with a processor. by way of example, such machine-readable media can comprise ram, rom, eprom, eeprom, cd-rom or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to carry or store desired program code in the form of machine-executable instructions or data structures and which can be accessed by a general purpose or special purpose computer or other machine with a processor. combinations of the above are also included within the scope of machine-readable media. machine-executable instructions include, for example, instructions and data which cause a general purpose computer, special purpose computer, or special purpose processing machines to perform a certain function or group of functions.
although the figures may show a specific order of method steps, the order of the steps may differ from what is depicted. also two or more steps may be performed concurrently or with partial concurrence. such variation will depend on the software and hardware systems chosen and on designer choice. all such variations are within the scope of the disclosure. likewise, software implementations could be accomplished with standard programming techniques with rule based logic and other logic to accomplish the various connection steps, processing steps, comparison steps and decision steps.
record 149-641-166-154-848; jurisdictions: US, EP (earliest claim filed at the EP); earliest claim date: 2010-09-17; IPC codes: G01K15/00, C23C2/38, B05D5/00, B32B5/00, B32B7/02, C23C4/02, C23C4/12, G01N25/72, C23C4/00, C23C28/00, G01N21/88
method for testing a thermography apparatus, designed for carrying out a thermography method, for its correct operation, test component therefor and method for its production
a test component coated with a coating system is provided. in at least one defined region of the test component, there is a delamination having defined properties deliberately introduced into the coating system. the test component may be employed in a method for testing a thermography apparatus that is designed for carrying out a thermography method, for its correct operation with a view to the detection of delaminations. in order to test the thermography apparatus, the at least one defined delamination of the coating system is detected on the test component by using the thermography method employed in the thermography apparatus.
1. a method for producing a test component having a metallic substrate that is to be coated with a coating system, the coating system including a bond coat and a thermal barrier coating, the method comprising: applying a localized auxiliary layer directly on the metallic substrate, wherein the auxiliary layer is a first ceramic layer, applying the bond coat as a metallic layer after having applied the localized auxiliary layer, and applying the thermal barrier coating as a second ceramic layer over the metallic bond coat, wherein applying the localized auxiliary layer deliberately produces, on a surface of the test component, at least one region comprising a defined delamination of the coating system. 2. the method as claimed in claim 1 , wherein the production of the defined delamination comprises: covering an uncoated component, wherein at least one defined region of the surface of the component remains uncovered; applying the auxiliary layer onto the at least one uncovered region of the surface, the auxiliary layer being selected so that the coating system has a lower adhesion or a different thermal conductivity compared with the surface of the component itself; removing the covering; and coating the component with the coating system. 3. the method as claimed in claim 2 , wherein the auxiliary layer is applied via a thermal spraying method. 4. the method as claimed in claim 2 , wherein the auxiliary layer is applied via atmospheric plasma spraying. 5. the method as claimed in claim 2 , wherein the auxiliary layer has a layer thickness of at least 10 μm. 6. the method as claimed in claim 1 , wherein the auxiliary layer comprises a layer of zirconium oxide, which is at least partially stabilized with yttrium oxide. 7. 
a test component, comprising: a metallic substrate, a localized auxiliary layer applied directly on the metallic substrate, wherein the auxiliary layer comprises a first ceramic layer, and a coating system, comprising: a bond coat comprising a metallic layer which is applied after having applied the localized auxiliary layer, and a thermal barrier coating comprising a second ceramic layer which is applied over the metallic bond coat, wherein a delamination is introduced into the coating system by way of the localized auxiliary layer, with defined properties in at least one defined region of the test component. 8. the test component as claimed in claim 7 , wherein the auxiliary layer for the coating system has a lower adhesion or a different thermal conductivity relative to the surface of the component. 9. the test component as claimed in claim 7 , wherein the auxiliary layer comprises a layer of zirconium oxide, which is at least partially stabilized with yttrium oxide. 10. the test component as claimed in claim 7 , wherein the auxiliary layer has a layer thickness of at least 10 μm.
cross reference to related applications this application claims priority of european patent office application no. 10177354.7 ep filed sep. 17, 2010, which is incorporated by reference herein in its entirety. field of invention the present invention relates to a method for testing a thermography apparatus, designed for carrying out a thermography method, for its correct operation. the invention furthermore relates to a test component, which is used in the test method, and to a method for producing such a test component. background of invention components which are exposed to high thermal stresses during their use, for instance turbine guide vanes or rotor blades (both referred to below as turbine blades for brevity), are generally made of refractory nickel- or cobalt-based alloys. although such alloys have a high thermal load-bearing capacity, the components generally also need to be provided with a corrosion- and/or oxidation-inhibiting layer in order to extend their lifetime during the conditions prevailing during operation. in addition, a thermal barrier coating is generally also employed, which is applied onto the oxidation- and/or corrosion-inhibiting layer in order to reduce the temperature which this layer experiences and thus further extend the lifetime of the component. in this case, good bonding of the layer in question onto the underlying substrate is of great importance, since local disbonding of the layer increases the risk of flaking, so that the underlying substrate material is directly exposed to the thermally highly stressful ambient conditions, which necessitates premature replacement of the corresponding component. highly stressed coated components such as turbine blades are therefore examined for qualification of the coating by means of random sampling or examined alongside manufacture to one hundred percent nondestructively by means of thermography in order to ensure defect-free bonding of the coating. 
summary of invention to this end, thermography apparatuses are used in which the delaminations, i.e. layer disbonding, are detected by means of a thermography method. the thermography apparatuses are checked from time to time for their correct operation and for calibration, in order to ensure that delaminations can be detected reliably by the thermography methods. it is an object of the present invention to provide a test method with which a thermography apparatus can be tested reliably for its capacity to detect delaminations. it is another object of the present invention to provide a test component which allows reliable testing of a thermography apparatus for its capacity to detect delaminations. lastly, it is an object of the present invention to provide a method for producing such a test component. these objects are respectively achieved by the features of the independent claims. the dependent claims contain advantageous configurations of the invention. in the method according to the invention for producing a test component, which has a surface coated with a coating system, at least one region which comprises a defined delamination of the coating system, i.e. defined layer disbonding, is deliberately produced on the surface. the test component may in particular be configured as a turbine blade, i.e. as a guide vane or rotor blade of a turbine. with a test component, which comprises one or more known sites with defined layer disbonding, a thermography apparatus designed for detecting such delaminations by means of thermography methods can be tested reliably for whether the defined delaminations can be detected sufficiently accurately by means of the thermography method. by the deliberate production of delaminations, their size and the degree of residual adhesion of the coating on the underlying substrate material can be adjusted in a defined way. 
further parameters, which can be adjusted in a defined way with the method according to the invention for producing a test component, are the position of the delamination on the component and its geometrical shape. in particular, components which comprise a plurality of defined delaminations that differ at least by one of the features: position, size, shape or residual adhesion on the underlying substrate, can also be produced by the method. in this way, the test component thus produced can be used to test the thermography apparatus over a full range of delaminations having different properties. the production of the defined delamination may in particular comprise the steps: covering an uncoated component, at least one defined region of the surface of the component remaining uncovered; applying an auxiliary layer onto the at least one uncovered region of the surface, the auxiliary layer being selected so that the coating system has a lower adhesion compared with the surface of the component itself; removing the covering; coating the component with the coating system. in this case, a solid or liquid material may be used as the auxiliary layer. by using masks comprising openings of defined shape and size for covering the turbine component, auxiliary layers with a defined size and shape can thus be produced. furthermore, when producing the auxiliary layers, it is also possible to generate different layer thicknesses which influence the degree of residual adhesion of the delaminated coating system on the auxiliary layer. both thermal spraying methods such as high velocity flame spraying (hvof, high velocity oxygen fuel), cold gas spraying or plasma spraying, in particular atmospheric plasma spraying (aps), and physical vapor deposition methods, such as electron beam physical vapor deposition (eb-pvd), may in principle be used for producing auxiliary layers.
particularly with atmospheric plasma spraying as a thermal spraying method, the auxiliary layers can be produced economically and with sufficient precision. by suitable selection of the method parameters in the chosen methods for applying the auxiliary layer, it is possible to adjust the surface roughness of the auxiliary layer in such a way that different roughnesses can be generated by the same method, so that the residual adhesion of the coating system can be influenced. a ceramic layer may in particular be employed as the auxiliary layer, for example a layer of zirconium oxide which is at least partially stabilized in its lattice structure with yttrium oxide (ysz, yttria-stabilized zirconia). nevertheless, other materials may also be used for forming the auxiliary layer so long as the coating system adheres less well on the material of the auxiliary layer than on the substrate material of the component, or the auxiliary layer has a different thermal conductivity. the residual adhesion of the coating system may in this case be influenced by the material selection for the auxiliary layer. the auxiliary layer should have a thickness of at least 10 μm, in order to induce the delaminations sufficiently reliably. in general, layer thicknesses in the range of between 10 and 100 μm are suitable, particularly in the range of between 10 and 50 μm, preferably between 30 and 40 μm. layer thicknesses of more than 100 μm are nevertheless also possible in principle. the coating system, with which the test component is coated, preferably comprises at least one bond coat which furthermore has an oxidation- and/or corrosion-inhibiting effect, for instance a so-called mcraly layer, where m stands for iron (fe), cobalt (co) or nickel (ni) and y stands for a rare earth element, in particular yttrium and/or hafnium. 
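the thickness ranges stated above (at least 10 μm; generally 10 to 100 μm; preferably 30 to 40 μm; more than 100 μm possible in principle) can be captured in a small classification helper. the function name and category labels are illustrative assumptions, not from the patent.

```python
# hedged helper reflecting the auxiliary-layer thickness ranges stated
# in the text (function name and labels are hypothetical)

def classify_auxiliary_thickness(d_um):
    """classify an auxiliary-layer thickness given in micrometers."""
    if d_um < 10:
        return "too thin"              # below the stated 10 um minimum
    if 30 <= d_um <= 40:
        return "preferred"             # preferred 30-40 um window
    if d_um <= 100:
        return "suitable"              # general 10-100 um range
    return "possible in principle"     # >100 um also possible in principle

print(classify_auxiliary_thickness(35))  # preferred
```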
as an alternative or in addition, the coating system comprises a thermal barrier coating (tbc) which may be formed in particular as a zirconium oxide layer at least partially stabilized in its structure with yttrium oxide (ysz). advantageous test components can be produced by the method according to the invention for producing a test component. in particular, any desired number of defects, of which the size, shape and intensity of the delamination can furthermore be adjusted in a controlled way, can be introduced into the test component at any desired position. the production of the deliberate delaminations can also be carried out on any component type and with any coating. lastly, fewer components need to be reserved for examinations/tests than in the prior art, in which, for testing a thermography apparatus, operation is typically carried out with blades which have suffered from an adhesion deficit owing to the preceding processes, but without the adhesion deficit having been deliberately introduced. in this case, there is usually only one delamination at a single position with an as yet unknown size and intensity of the delamination, which reduces the reliability during the testing or first necessitates determination of the delamination with a thermography apparatus which is known to function correctly. furthermore, the blades used in the prior art are taken from the production process at a relatively late stage, so that unnecessary costs are incurred. in contrast to this, in the scope of the method according to the invention for producing a test component it is possible to use a component which drops out of the production chain early and would therefore have been classified early on as a reject component. as such, it no longer represents any significant value and, if it was not used as a test component, would be sent for scrap. 
overall, the production costs and the subsequent test unreliability when testing the thermography apparatus are reduced when producing a test component by the method according to the invention. the invention furthermore provides a test component coated with a coating system. in at least one defined region, the test component comprises a deliberately introduced delamination with defined properties in the coating system. here, in particular, defined properties are intended to mean the position, size and shape of the delamination, as well as the degree of disbonding. with the component according to the invention, which may be configured in particular as a turbine blade, a thermography apparatus in which delaminations are intended to be detected by a thermography method can be tested precisely for its correct operation since, owing to the defined properties of the delaminations in the test component, thermography signals can be recorded and evaluated for different degrees of delamination, positions, sizes and shapes. in the prior art, a plurality of turbine blades have generally been necessary for this, the delaminations of which had furthermore not been specified sufficiently before the testing so that only less accurate results have been possible in comparison with the test component according to the invention. in the at least one region of the test component comprising the deliberately introduced delamination, there may be an auxiliary layer between the surface of the component and the coating system, which has a lower adhesion to the coating system or a different thermal conductivity than the surface of the component. as such an auxiliary layer, it is for example possible to provide a ceramic layer, for instance a layer of zirconium oxide which is at least partially stabilized with yttrium oxide (ysz). 
the thickness of the auxiliary layer is preferably at least 10 μm and may lie in the range of between 10 μm and 100 μm, preferably in the range of between 10 and 50 μm, particularly in the range of between 30 and 40 μm, although greater layer thicknesses are also possible in principle. the use of an auxiliary layer greatly assists controlled production of the delamination. the size and shape of the delamination can then be influenced by the size and shape of the auxiliary layer, and the degree of delamination by the thickness, roughness and material of the auxiliary layer. the test component may in particular have a coating system which comprises at least one bond coat that furthermore has oxidation- and/or corrosion-inhibiting properties, for instance an mcraly layer, and/or at least one thermal barrier coating, for example a zirconium oxide layer at least partially stabilized in its structure with yttrium oxide (ysz). such coating systems are employed in particular for turbine blades, so that a test component having such a coating is suitable in particular for testing thermography apparatuses which are used to detect delaminations in turbine components. in principle, however, the test component may also have different coating systems which are selected specially with a view to the coating systems, in which delaminations are intended to be detected in the thermography apparatus. the invention furthermore provides a method for testing a thermography apparatus, designed for carrying out a thermography method, for its correct operation with a view to the detection of delaminations. in the method, in order to test the thermography apparatus for its correct operation, a test component according to the invention is employed, which may in particular be configured as a turbine blade. with the test component, the at least one defined delamination of the coating system is detected by means of the thermography method employed in the thermography apparatus. 
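the defined properties that characterize each deliberately introduced delamination (position, size, shape, degree of residual adhesion) can be modeled as a small record type, so that a test component is simply a collection of such records. the field names and example values below are hypothetical illustrations.

```python
# illustrative record of the defined properties of one deliberately
# introduced delamination (field names and values are hypothetical)
from dataclasses import dataclass

@dataclass(frozen=True)
class DefinedDelamination:
    position: tuple            # location on the component, e.g. (x, y)
    size_mm: float             # lateral extent of the delaminated region
    shape: str                 # e.g. "circle" or "rectangle"
    residual_adhesion: float   # 0.0 = fully disbonded, 1.0 = fully bonded

# a test component may carry several delaminations that differ in at
# least one of: position, size, shape, residual adhesion
test_component = [
    DefinedDelamination(position=(10, 5), size_mm=4.0,
                        shape="circle", residual_adhesion=0.2),
    DefinedDelamination(position=(25, 8), size_mm=8.0,
                        shape="rectangle", residual_adhesion=0.5),
]
```

because every record is fully specified before the test, the thermography signal measured at each region can be compared against known ground truth rather than against an unknown defect.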
as has already been explained with reference to the test component itself, or the method for producing such a test component, the use of the test component offers the advantage that a wide range of different delaminations can be tested by means of a single test component, and that the accuracy of the test method is increased compared with the prior art since the properties of the delamination are already known sufficiently accurately before the test. brief description of the drawings other features, properties and advantages of the present invention may be found in the following description of exemplary embodiments with reference to the appended figures. fig. 1 shows a gas turbine by way of example in a partial longitudinal section. fig. 2 shows a perspective view of a turbine blade. fig. 3 shows an example of a gas turbine combustion chamber in a perspective representation. fig. 4 shows a test component according to the invention configured as a turbine blade. fig. 5 shows the test component of fig. 4 in a schematic sectional representation. fig. 6 shows the test component of fig. 5 in a first stage of the production method. fig. 7 shows the test component of fig. 5 in a second stage of the production method. fig. 8 shows the test component of fig. 5 in a third stage of the production method. fig. 9 shows the test component of fig. 5 in a fourth stage of the production method. detailed description of invention fig. 1 shows a gas turbine 100 by way of example in a partial longitudinal section. the gas turbine 100 internally comprises a rotor 103 , which will also be referred to as the turbine rotor, mounted so as to rotate about a rotation axis 102 and having a shaft 101 . successively along the rotor 103 , there are an intake manifold 104 , a compressor 105 , an e.g. toroidal combustion chamber 110 , in particular a ring combustion chamber, having a plurality of burners 107 arranged coaxially, a turbine 108 and the exhaust manifold 109 . 
the ring combustion chamber 110 communicates with an e.g. annular hot gas channel 111 . there, for example, four successively connected turbine stages 112 form the turbine 108 . each turbine stage 112 is formed for example by two blade rings. as seen in the flow direction of a working medium 113 , a guide vane row 115 is followed in the hot gas channel 111 by a row 125 formed by rotor blades 120 . the guide vanes 130 are fastened on an inner housing 138 of a stator 143 while the rotor blades 120 of a row 125 are fastened on the rotor 103 , for example by means of a turbine disk 133 . coupled to the rotor 103 , there is a generator or a work engine (not shown). during operation of the gas turbine 100 , air 135 is taken in and compressed by the compressor 105 through the intake manifold 104 . the compressed air provided at the turbine-side end of the compressor 105 is delivered to the burners 107 and mixed there with a fuel. the mixture is then burnt to form the working medium 113 in the combustion chamber 110 . from there, the working medium 113 flows along the hot gas channel 111 past the guide vanes 130 and the rotor blades 120 . at the rotor blades 120 , the working medium 113 expands by imparting momentum, so that the rotor blades 120 drive the rotor 103 which drives the work engine coupled to it. the components exposed to the hot working medium 113 become heated during operation of the gas turbine 100 . apart from the heat shield elements lining the ring combustion chamber 110 , the guide vanes 130 and rotor blades 120 of the first turbine stage 112 , as seen in the flow direction of the working medium 113 , are heated the most. in order to withstand the temperatures prevailing there, they may be cooled by means of a coolant. substrates of the components may likewise comprise a directional structure, i.e. they comprise a single crystal (sx structure) or only longitudinally directed grains (ds structure). 
iron-, nickel- or cobalt-based superalloys are for example used as material for the components, in particular for the turbine blades 120 , 130 and components of the combustion chamber 110 . such superalloys are known for example from ep 1 204 776 b1, ep 1 306 454, ep 1 319 729 a1, wo 99/67435 or wo 00/44949. the blades 120 , 130 may likewise have coatings against corrosion (mcralx; m is at least one element from the group iron (fe), cobalt (co), nickel (ni), x is an active element and stands for yttrium (y) and/or silicon, scandium (sc) and/or at least one rare earth element, or hafnium). such alloys are known from ep 0 486 489 b1, ep 0 786 017 b1, ep 0 412 397 b1 or ep 1 306 454 a1. on the mcralx, there may furthermore be a thermal barrier coating which consists for example of zro 2 , y 2 o 3 —zro 2 , i.e. it is not stabilized or is partially or fully stabilized by yttrium oxide and/or calcium oxide and/or magnesium oxide. rod-shaped grains are produced in the thermal barrier coating by suitable coating methods, for example electron beam evaporation (eb-pvd). the guide vane 130 comprises a guide vane root (not shown here) facing the inner housing 138 of the turbine 108 , and a guide vane head lying opposite the guide vane root. the guide vane head faces the rotor 103 and is fixed on a fastening ring 140 of the stator 143 . fig. 2 shows a perspective view of a rotor blade 120 or guide vane 130 of a turbomachine, which extends along a longitudinal axis 121 . the turbomachine may be a gas turbine of an aircraft or of a power plant for electricity generation, a steam turbine or a compressor. the blade 120 , 130 comprises, successively along the longitudinal axis 121 , a fastening region 400 , a blade platform 403 adjacent thereto as well as a blade surface 406 and a blade tip 415 . as a guide vane 130 , the vane 130 may have a further platform (not shown) at its vane tip 415 . 
a blade root 183 which is used to fasten the rotor blades 120 , 130 on a shaft or a disk (not shown) is formed in the fastening region 400 . the blade root 183 is configured, for example, as a hammerhead. other configurations as a firtree or dovetail root are possible. the blade 120 , 130 comprises a leading edge 409 and a trailing edge 412 for a medium which flows past the blade surface 406 . in conventional blades 120 , 130 , for example solid metallic materials, in particular superalloys, are used in all regions 400 , 403 , 406 of the blade 120 , 130 . such superalloys are known for example from ep 1 204 776 b1, ep 1 306 454, ep 1 319 729 a1, wo 99/67435 or wo 00/44949. the blade 120 , 130 may in this case be manufactured by a casting method, also by means of directional solidification, by a forging method, by a machining method or combinations thereof. workpieces with a single-crystal structure or single-crystal structures are used as components for machines which are exposed to heavy mechanical, thermal and/or chemical stresses during operation. such single-crystal workpieces are manufactured, for example, by directional solidification from the melt. these are casting methods in which the liquid metal alloy is solidified to form a single-crystal structure, i.e. to form the single-crystal workpiece, or is directionally solidified. dendritic crystals are in this case aligned along the heat flux and form either a rod crystalline grain structure (columnar, i.e. grains which extend over the entire length of the workpiece and in this case, according to general terminology usage, are referred to as directionally solidified) or a single-crystal structure, i.e. the entire workpiece consists of a single crystal. 
it is necessary to avoid the transition to globulitic (polycrystalline) solidification in these methods, since nondirectional growth will necessarily form transverse and longitudinal grain boundaries which negate the beneficial properties of the directionally solidified or single-crystal component. when directionally solidified structures are referred to in general, this is intended to mean both single crystals which have no grain boundaries or at most small-angle grain boundaries, and also rod crystal structures which, although they do have grain boundaries extending in the longitudinal direction, do not have any transverse grain boundaries. these latter crystalline structures are also referred to as directionally solidified structures. such methods are known from u.s. pat. no. 6,024,792 and ep 0 892 090 a1. the blades 120 , 130 may likewise have coatings against corrosion or oxidation, for example mcralx (m is at least one element from the group iron (fe), cobalt (co), nickel (ni), x is an active element and stands for yttrium (y) and/or silicon and/or at least one rare earth element, or hafnium (hf)). such alloys are known from ep 0 486 489 b1, ep 0 786 017 b1, ep 0 412 397 b1 or ep 1 306 454 a1. the density is preferably 95% of the theoretical density. a protective aluminum oxide layer (tgo=thermally grown oxide layer) is formed on the mcralx coating (as an interlayer or as the outermost coat). the coating composition preferably comprises co-30ni-28cr-8al-0.6y-0.7si or co-28ni-24cr-10al-0.6y. besides these cobalt-based protective coatings, it is also preferable to use nickel-based protective coatings such as ni-10cr-12al-0.6y-3re or ni-12co-21cr-11al-0.4y-2re or ni-25co-17cr-10al-0.4y-1.5re. on the mcralx, there may furthermore be a thermal barrier coating, which is preferably the outermost coat and consists for example of zro 2 , y 2 o 3 —zro 2 , i.e. 
it is not stabilized or is partially or fully stabilized by yttrium oxide and/or calcium oxide and/or magnesium oxide. the thermal barrier coating covers the entire mcralx coating. rod-shaped grains are produced in the thermal barrier coating by suitable coating methods, for example electron beam evaporation (eb-pvd). other coating methods may be envisaged, for example atmospheric plasma spraying (aps), lpps, vps or cvd. the thermal barrier coating may comprise porous, micro- or macro-cracked grains for better thermal shock resistance. the thermal barrier coating is thus preferably more porous than the mcralx coating. refurbishment means that components 120 , 130 may need to be stripped of protective coatings (for example by sandblasting) after their use. the corrosion and/or oxidation layers or products are then removed. optionally, cracks in the component 120 , 130 are also repaired. the component 120 , 130 is then recoated and the component 120 , 130 is used again. the blade 120 , 130 may be designed to be hollow or solid. if the blade 120 , 130 is intended to be cooled, it will be hollow and optionally also comprise film cooling holes 418 (indicated by dashes). fig. 3 shows a combustion chamber 110 of a gas turbine. the combustion chamber 110 is designed for example as a so-called ring combustion chamber in which a multiplicity of burners 107 , which produce flames 156 and are arranged in the circumferential direction around a rotation axis 102 , open into a common combustion chamber space 154 . to this end, the combustion chamber 110 as a whole is designed as an annular structure which is positioned around the rotation axis 102 . in order to achieve a comparatively high efficiency, the combustion chamber 110 is designed for a relatively high temperature of the working medium m, i.e. about 1000° c. to 1600° c. 
in order to permit a comparatively long operating time even under these operating parameters which are unfavorable for the materials, the combustion chamber wall 153 is provided with an inner lining formed by heat shield elements 155 on its side facing the working medium m. each heat shield element 155 made of an alloy is equipped with a particularly heat-resistant protective coating (mcralx coating and/or ceramic coating) on the working medium side, or is made of refractory material (solid ceramic blocks). these protective coatings may be similar to the turbine blades, i.e. for example mcralx means: m is at least one element from the group iron (fe), cobalt (co), nickel (ni), x is an active element and stands for yttrium (y) and/or silicon and/or at least one rare earth element, or hafnium (hf). such alloys are known from ep 0 486 489 b1, ep 0 786 017 b1, ep 0 412 397 b1 or ep 1 306 454 a1. on the mcralx, there may furthermore be an e.g. ceramic thermal barrier coating which consists for example of zro 2 , y 2 o 3 —zro 2 , i.e. it is not stabilized or is partially or fully stabilized by yttrium oxide and/or calcium oxide and/or magnesium oxide. rod-shaped grains are produced in the thermal barrier coating by suitable coating methods, for example electron beam evaporation (eb-pvd). other coating methods may be envisaged, for example atmospheric plasma spraying (aps), lpps, vps or cvd. the thermal barrier coating may comprise porous, micro- or macro-cracked grains for better thermal shock resistance. refurbishment means that heat shield elements 155 may need to be stripped of protective coatings (for example by sandblasting) after their use. the corrosion and/or oxidation layers or products are then removed. optionally, cracks in the heat shield element 155 are also repaired. the heat shield elements 155 are then recoated and the heat shield elements 155 are used again. 
owing to the high temperatures inside the combustion chamber 110 , a cooling system may also be provided for the heat shield elements 155 or for their retaining elements. the heat shield elements 155 are then hollow, for example, and optionally also have cooling holes (not shown) opening into the combustion chamber space 154 . fig. 4 shows a coated turbine blade 1 as an exemplary embodiment of a test component according to the invention. a coating system, which comprises a corrosion- and/or oxidation-inhibiting bond coat and a ceramic thermal barrier coating applied onto the bond coat, is provided as the coating. the coating system is in particular applied onto the blade surface 3 , although it may also be applied on at least a part of the blade platform 5 . the blade root 7 is generally not provided with such a coating system. the coating system applied onto the blade surface 3 comprises a number of defined regions 9 a to 9 f, in which there are delaminations deliberately introduced into the coating system. the delaminations provided in the regions 9 a to 9 f are well defined particularly in respect of their position, their size, their shape and their signal intensities, which they generate in a thermography method. a schematic cross section through the blade surface 3 provided with the coating system is represented in fig. 5 . the cross section shows the substrate 13 , as well as the coating system 11 applied thereon and comprising the bond coat 15 and the thermal barrier coating 17 . a region 9 comprising a deliberately introduced delamination 19 is furthermore shown. in the region of the delamination 19 , there is an auxiliary layer 21 which offers less adhesion for the coating system 11 , in particular the bond coat 15 , or a different thermal conductivity, than the substrate surface, i.e. the actual surface of the component. in the present exemplary embodiment, the auxiliary layer is formed as a ceramic layer. 
it has a thickness d of at least 10 μm and preferably has a thickness in the range of between 10 μm and 100 μm, particularly in the range of between 30 and 40 μm. the conditions for the adhesion of the coating system on the auxiliary layer may be influenced by a plurality of factors, in particular by the roughness of the surface of the auxiliary layer 21 ; less roughness reduces the adhesion and therefore increases the delamination. however, the thickness of the auxiliary layer 21 also plays a role, with thicker auxiliary layers leading to lower adhesion and therefore more pronounced delaminations. furthermore, the adhesion also depends on the material used for the auxiliary layer 21 . besides a defined thickness and a defined surface roughness, the auxiliary layer 21 also has a defined shape and a defined size. the size and shape of the deliberately introduced delamination can thereby be established. in a specific embodiment of the coating system 11 and the auxiliary layer 21 , the bond coat 15 is formed as an mcraly layer and the thermal barrier coating 17 as a zirconium oxide layer at least partially stabilized with yttrium. a zirconium oxide layer at least partially stabilized with yttrium is likewise employed as the auxiliary layer 21 . the mcraly layer adheres less well on its surface than on the surface of the nickel-, cobalt- or iron-based alloy of which the substrate consists in the present exemplary embodiment, so that the delamination 19 is formed at the position of the auxiliary layer 21 when the coating system 11 is applied. the production of a test component, comprising regions in which there are deliberately introduced delaminations 19 , will be described below with reference to figs. 6 to 9 . at least one region which comprises a defined delamination of a coating system is produced by the method on the surface of a test component, which is represented in fig. 6 by the substrate 13 . 
to this end, in a first step, the uncoated substrate 13 is for example covered by means of a mask 23 so that a defined region 25 of the substrate surface remains uncovered. this uncovered region 25 corresponds after the end of the method to a region 9 comprising a deliberately introduced delamination. after the substrate 13 has been covered, the uncovered region 25 is coated with the auxiliary layer 21 , for example a zirconium oxide layer at least partially stabilized with yttrium. the auxiliary layer 21 may in this case be applied particularly by means of a thermal spraying method, for instance by means of atmospheric plasma spraying. by varying the spraying parameters, the surface roughness of the auxiliary layer 21 can in this case be influenced. the auxiliary layer 21 is applied until the desired layer thickness d is reached, which is at least 10 μm and in the present exemplary embodiment lies in the range of from 30 to 40 μm. the test component after application of the auxiliary layer 21 in the uncovered region 25 is represented in fig. 7 . after the auxiliary layer 21 has been applied, the covering 23 is removed so that the surface of the substrate is now uncovered outside the surface sections provided with the auxiliary layer 21 . this state is represented in fig. 8 . the coating system 11 is then applied onto the substrate 13 which is in this state. in the present exemplary embodiment, the bond coat 15 is applied first, the adhesion of the bond coat on the auxiliary layer 21 being less than on the free substrate surface. a delamination 19 is therefore formed over the auxiliary layer 21 , as shown schematically in fig. 9 . in the present exemplary embodiment, an mcraly layer is employed as the bond coat 15 , which may be applied either by means of a thermal spraying method or by means of vapor deposition. a thermal barrier coating 17 is subsequently deposited onto the substrate 13 provided with the bond coat 15 . 
in the present exemplary embodiment, a zirconium oxide layer at least partially stabilized with yttrium oxide is employed as the thermal barrier coating 17 . like the application of the bond coat, the application of the ceramic thermal barrier coating may be carried out by means of a thermal spraying method or by means of vapor deposition. in particular, atmospheric plasma spraying may be envisaged as a thermal spraying method both for the deposition of the bond coat and for the deposition of the thermal barrier coating. after the ceramic thermal barrier coating has been applied, the final state of the test component as represented in fig. 5 is reached. the test component produced with the aid of the method according to the invention therefore comprises at least one delamination whose position on the test component, i.e. in the present exemplary embodiment on the turbine blade, is likewise previously known together with the size and shape of the delamination. since the degree of adhesion of the coating system 11 on the auxiliary layer 21 can also be influenced during production of the test component, the delamination can also be produced in a controlled way with a view to the signal to be expected, which the delamination provides in a thermography method. such a test component can then be used to test a thermography apparatus, designed for carrying out a thermography method, for its correct operation with a view to the detection of delaminations. in order to test the thermography apparatus for its correct operation, the test component is introduced into the thermography apparatus and the delaminations existing in the coating system are detected by means of the thermography method employed in the thermography apparatus. the properties of the delamination which are detected by means of the thermography method, i.e. 
its position, size and shape, and the degree of disbonding, can then be compared with the previously known properties of the delaminations existing in the test component, which were deliberately introduced. with the aid of the correspondence of the measured properties with the previously known properties, the quality of the detection by means of the thermography method can be deduced. optionally, parameters of the thermography apparatus may be readjusted on the basis of the test results, in order to improve the detection accuracy. with the aid of the invention as described with reference to exemplary embodiments, it is possible to produce test components which can advantageously be used in order to test thermography apparatuses. furthermore, components which have dropped out of the production chain early on and therefore represent reject ware, which could no longer be used for anything else, can be used to produce the test components.
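the comparison of measured delamination properties against the previously known properties of the test component can be sketched as a simple pass/fail check. the tolerance values and dictionary keys below are illustrative assumptions, not values from the patent.

```python
# sketch of comparing thermography measurements against the previously
# known delaminations of a test component; tolerances are hypothetical

def apparatus_passes(known, measured, pos_tol=1.0, size_tol=0.5):
    """return True if every known delamination is matched by at least one
    measurement within the given positional and size tolerances."""
    for k in known:
        matched = any(
            abs(m["x"] - k["x"]) <= pos_tol
            and abs(m["y"] - k["y"]) <= pos_tol
            and abs(m["size"] - k["size"]) <= size_tol
            for m in measured
        )
        if not matched:
            return False
    return True

known = [{"x": 10.0, "y": 5.0, "size": 4.0}]   # deliberately introduced
good = [{"x": 10.3, "y": 4.8, "size": 4.2}]    # close match -> apparatus ok
bad = []                                        # delamination missed entirely

print(apparatus_passes(known, good))  # True
print(apparatus_passes(known, bad))   # False
```

in practice the tolerances would be chosen from the required detection accuracy, and a failed check could trigger the readjustment of apparatus parameters mentioned above.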
150-681-687-898-465
US
[ "US" ]
F21K99/00,F21V7/00,H01L29/227
2003-10-27T00:00:00
2003
[ "F21", "H01" ]
device of white light-emitting diode
a white led device includes a member, a plurality of leds, fixed on the member, the leds further comprising blue gan leds, a reflector, in parabolic shape, to encase the member and the plurality of leds, yellow phosphor, coated on the surface of the reflector facing the leds, and a supporting component, for connecting the member and the reflector in order to connect the leds, the member and the reflector together. the main feature of the present invention includes that the leds emit blue light when positively biased. the blue light triggers the yellow phosphor to generate a yellow light, and the blue light mixes with the yellow light to become a white light. the white light is reflected by the reflector to project onto target objects.
1. a white led device, comprising: a member, a plurality of leds, fixed on the member, the leds further comprising blue gan leds, a reflector, in parabolic shape, to encase the member and the plurality of leds, yellow phosphor, coated on the surface of the reflector facing the leds, and a supporting component, for connecting the member and the reflector in order to connect the leds, the member and the reflector together, wherein the leds emit blue light when positively biased, the blue light stimulating the yellow phosphor to generate yellow light, the blue light mixed with the yellow light to become a white light, the white light reflected by the reflector to project onto target objects. 2. the device as claimed in claim 1 , wherein the leds further comprise blue gan leds, green gan leds, and red algaas leds. 3. the device as claimed in claim 1 , wherein the yellow phosphor is coated on a transparent film attached to the reflector. 4. the device as claimed in claim 1 , wherein the reflector is coated with the yellow phosphor by spin-coating, sputtering, printing, or other similar methods.
field of the invention the present invention relates to a device of white light-emitting diode (led) and, more particularly, to a lighting device manufactured with gan leds for emitting white light. background of the invention light-emitting diodes (led) have been one of the most important inventions in the history of technological advancement. as widely known, an led is a device that is able to emit light when a forward bias (voltage) is imposed on the semiconductor pn junction. the leds have the advantages of low energy-consumption, low heat-generation, high light-emitting stability, and long life-span, so that they are widely used in many industrial applications, such as an advertising billboard. the red, green, and blue leds are arranged in various shapes of arrays in an advertising billboard to display dynamic images. because of their high efficiency and stability, leds are also widely used to replace conventional small light bulbs as indicators in equipment to display the operating status, such as on, off, pause, or standby, and corresponding options to each status. leds are also used to manufacture lighting devices, such as torches, headlights for cars or bicycles. furthermore, leds are also used as light source in communication, such as local-area network (lan). with light emitted from the leds into its end, the multi-mode optical fiber is able to transmit the light inside it for a long distance. although the performance of leds in aforementioned applications is superior and stable, there exist some obstacles to overcome. for example, when using red, green and blue leds in lighting devices, it is difficult to arrange and mix the red, green and blue lights to generate the white light that is commonly used in lighting. as the related technology is still under development, the white light generated by current technologies is usually uneven, and sometimes mixed with lights of other colors. therefore, the overall lighting effect needs improvement. fig. 
1 shows a structure of a conventional white led device, comprising a bell-shaped cover 1 made of epoxy for focusing light and protecting the internal components, such as led 3 , from external damage. the power for the led is fed from the power line, through conducting line supports 4 , to the electrodes 5 . when the power is on, the led emits the light, which is focused and redirected by bell-shaped cover 1 , serving as a convex lens. the light then travels straight forwards. however, the white light generated by the aforementioned technology is usually uneven, and forms a beam that appears yellowish on the side, and bluish at the center. it is because conventional led devices are manufactured by directly applying yellow phosphor on blue leds, so that the generated white light is uneven. furthermore, as the heat generated by the blue led damages the yellow phosphor, the life span of the device is shortened. these are known disadvantages and restrictions of the conventional led devices. the inventor of the present invention, based on years of experience and research, provides the present invention to solve the aforementioned obstacles. summary of the invention an object of the present invention is to provide a white led device that, unlike the conventional technologies, generates an even beam of white light. in general there are three different approaches to manufacture white led devices. the first approach is to use red, green, and blue leds to generate a white light. the second approach is to grow blue leds on a yellow substrate, such as a substrate made of znse. the mixture of the yellow light and the blue light becomes the white light. the third approach is to apply a layer of yellow phosphor on the blue leds to result in a white light. the second and third approaches are the market mainstream because of their small size and flexible application. in particular, the third approach, which uses the gan material and lasts longer than znse-substrate-based leds, is the more popular. 
even so, the life span of white led devices manufactured with the third approach is still short because of the damage to the yellow phosphor by the heat generated by the leds. therefore, the present invention uses a reflective approach to extend the life span of the led device, as well as to reduce the manufacturing cost. even more important, the resulting device can generate an even white light that avoids the yellow-blue beam generated by the conventional techniques. unlike conventional techniques, the reflective approach does not apply the yellow phosphor directly on the blue leds. instead, a layer of the yellow phosphor is applied on the reflector, or alternatively, on a transparent film, which is then attached to the reflector, as in the embodiments shown in figs. 2-4 . when the blue light emitted from the blue leds is mixed with the yellow light generated by the yellow phosphor stimulated by the blue light, a white light is generated. then, the reflector reflects the white light to light the area or object. as the yellow phosphor is not directly applied on the leds, it is not damaged by the heat generated by the leds. these and other objects, features and advantages of the invention will be apparent to those skilled in the art, from a reading of the following brief description of the drawings, the detailed description of the preferred embodiment, and the appended claims. brief description of the drawings fig. 1 shows a cross-sectional view of a conventional white led device. fig. 2 shows a cross-sectional view of a first embodiment of a white led device of the present invention. fig. 3 shows a cross-sectional view of a second embodiment of a white led device of the present invention. fig. 4 shows a cross-sectional view of a third embodiment of a white led device of the present invention. fig. 5 shows a schematic view of another embodiment of a white led device of the present invention, where a transparent film with phosphor is attached to a blue light conducting board. fig. 
6 shows the comparison of the light decay in embodiments using blue leds and white leds. detailed description of the preferred embodiments fig. 2 shows a cross-sectional view of a first embodiment of a white led device of the present invention. the white led device comprises a supporting component 7 , connecting a reflector 6 , and a member 9 with a plurality of leds 10 . as shown in fig. 2 , leds 10 are fixed on member 9 , and the yellow phosphor layer 8 is coated on reflector 6 . the coating methods can be spin-coating, sputtering, printing, or other similar methods. the gan blue leds are used in this embodiment. as shown in fig. 2 , the blue light emitted from the gan leds is mixed with the yellow light from the phosphor, when stimulated by the blue light, to generate a white light. reflector 6 reflects the mixed white light to the target surface, and lights up the target object. the reflective approach used in the present invention is different from that of a conventional white led device. a conventional white led device projects the light directly onto the target object; therefore, the mixed light is less even, and usually appears yellow-blue. on the other hand, the reflective approach used in the present invention reflects the mixed light with a reflector, and the reflected light is then projected onto the target object. therefore, the present invention can effectively eliminate the uneven mixture of the light. fig. 3 shows a cross-sectional view of a second embodiment of a white led device of the present invention. the white led device of the second embodiment is similar to that of the first embodiment shown in fig. 2 . the difference is that blue gan leds 11 , green gan leds 11 , and red algaas leds 11 are used in the second embodiment. fig. 4 shows a cross-sectional view of a third embodiment of a white led device of the present invention. the white led device of the third embodiment is similar to that of the first embodiment shown in fig. 2 . 
the difference is that yellow phosphor 8 is directly coated on reflector 6 in the first embodiment, while yellow phosphor 8 is coated on a transparent film 12 in the third embodiment. transparent film 12 is then attached to reflector 6 to achieve the same effect as in the previous embodiments. fig. 5 shows an embodiment of attaching the transparent film 14 coated with yellow phosphor 16 to a blue light-conducting board 13 . this design separates the short life-span stimulated object (i.e., yellow phosphor 16 ) from the long life-span light source (i.e., blue led 15 ), so that the decay of the former does not affect the life-span of the latter. this provides a variation of the coating used in the reflective approach. fig. 6 shows the comparison of the light decay in embodiments using blue leds and white leds. fig. 6 shows that the embodiment using blue leds suffers a smaller decay rate than the embodiment using white leds. another advantage of the present invention is that the design is modularized. the components, including the reflector, the leds, and the yellow phosphor, can be separately replaced when necessary without affecting the other components. while the invention has been described in connection with what is presently considered to be the most practical and preferred embodiment, it is to be understood that the invention is not to be limited to the disclosed embodiment; on the contrary, it should be clear to those skilled in the art that the description of the embodiment is intended to cover various modifications and equivalent arrangements included within the spirit and scope of the appended claims.
151-358-046-955-104
US
[ "US" ]
G06Q40/00
2004-03-12T00:00:00
2004
[ "G06" ]
commissions and sales/mis reporting method and system
in one example, a method of generating commissions documents comprises calculating commissions earned by a party based, at least in part, on stored data related to the party, generating at least one file comprising at least one of the calculated commissions and the stored data, and converting the at least one file into at least one read-only file. the read-only file may be a pdf file or a read-only html file, for example. access may be selectively provided to the file via a network, based on login information. the file may be generated based on a position of the user in a hierarchy table defining relationships between users, such as sales representatives and supervisors. selection links to information may also be displayed to a user based, at least in part, on the login information. the file may be e-mailed to users. systems are also disclosed.
1 . a method of generating commissions documents, comprising: calculating commissions earned by a party based, at least in part, on stored data related to the party; generating at least one file comprising at least one of the calculated commissions and the stored data; and converting the at least one file into at least one read-only file. 2 . the method of claim 1 , further comprising: selectively providing access to the at least one read-only file via a network. 3 . the method of claim 1 , further comprising: receiving login information from a user; and selectively providing access to the at least one read-only file based, at least in part, on the login information. 4 . the method of claim 3 , wherein the user is a sales representative and the party and the user are the same, the method comprising: providing access to at least one read-only file to the sales representative. 5 . the method of claim 3 , wherein the user is a supervisor of at least one party and the at least one read-only file comprises stored data related to activities of the at least one party, the method comprising: providing the supervisor access to the read-only file. 6 . the method of claim 3 , wherein the user is a supervisor of at least one supervisor of at least one party and the at least one read-only file comprises stored data related to activities of the at least one party, the method comprising: providing the supervisor access to the read-only file. 7 . the method of claim 4 , wherein the user is a third party, at least one sales representative represents the third party, and the read-only file comprises data related to the activities of the at least one sales representative representing the third party, the method comprising: providing the third party access to the at least one read-only file. 8 . the method of claim 4 , comprising: generating the at least one file based, at least in part, on a position of the user in a hierarchy table defining relationships between users. 9 . 
the method of claim 8 , comprising: loading the stored data into respective tables based, at least in part, on the hierarchy table; and retrieving the information from the respective tables to generate the at least one file. 10 . the method of claim 1 , further comprising: sending the read-only file to the user by e-mail. 11 . the method of claim 1 , comprising: calculating the commissions for a party based, at least in part, on the stored data and stored business rules. 12 . the method of claim 11 , wherein the business rules are stored in tables. 13 . the method of claim 1 , comprising: selectively displaying selection links to information available to the user based, at least in part, on the login information. 14 . the method of claim 1 , wherein the read-only file is a pdf file or a read-only html file. 15 . the method of claim 1 , further comprising: receiving login information from a user; and generating the at least one file based, at least in part, on the login information. 16 . a method of generating financial documents, comprising: collecting data related to activities of a plurality of sales representatives; loading the collected data into respective tables based, at least in part, on a hierarchy table defining the relationships between the sales representatives and supervisors of the sales representatives; and generating at least one file based, at least in part, on data in at least one respective table. 17 . the method of claim 16 , comprising: loading the data for sales representatives supervised by a first respective supervisor into a first respective table. 18 . the method of claim 17 , comprising: loading the data for sales representatives supervised by first respective supervisors supervised by a second respective supervisor, into a second respective table. 19 . the method of claim 17 , further comprising: loading the data for sales representatives representing a respective third party into a third respective table. 20 . 
the method of claim 16 , wherein the activities relate to transactions concerning products or services. 21 . the method of claim 20 , wherein the products or services are a third party's products or services. 22 . the method of claim 20 , further comprising: loading data related to respective sales representatives into respective tables; and generating a file comprising calculated commissions for each respective sales representative. 23 . the method of claim 16 , further comprising: receiving login information from a user; and selectively providing access to a respective file based, at least in part, on the login information. 24 . the method of claim 23 , further comprising: converting the file into a read-only file; and providing access to the read-only file. 25 . the method of claim 23 , wherein the user is a sales representative, the method comprising: providing the sales representative access to the at least one file related to the sales representative. 26 . the method of claim 23 , wherein the user is a supervisor of sales representatives, the method comprising: providing access to the at least one file related to the sales representatives supervised by the user. 27 . the method of claim 23 , wherein the user is a supervisor of supervisors of sales representatives, the method comprising: providing access to the at least one file related to the sales representatives supervised by the supervisors. 28 . the method of claim 23 , wherein the user is a third party, the method comprising: providing access to the at least one file related to the sales representatives representing the third party. 29 . the method of claim 23 , comprising: selectively displaying selection links to information based, at least in part, on the login information, to the user. 30 . the method of claim 16 , further comprising: e-mailing the at least one file to an authorized user. 31 . 
a method of generating financial documents, comprising: collecting data related to activities of a first plurality of parties; loading the collected data into respective tables based, at least in part, on a hierarchy table defining the relationships between the first plurality of parties and a second plurality of parties; and generating at least one file comprising data in at least one respective table. 32 . the method of claim 31 , wherein the second plurality of parties comprises supervisors of the first plurality of parties. 33 . the method of claim 31 , wherein the second plurality of parties comprises third parties represented by respective ones of the first plurality of parties. 34 . a method of generating commissions statements, comprising: calculating commissions for a party based, at least in part, on stored data; generating at least one file comprising the calculated commissions and data related to the calculated commissions; and e-mailing the at least one file to the party. 35 . the method of claim 34 , further comprising: converting the file into a read-only file prior to e-mailing; and e-mailing the read-only file to the party. 36 . the method of claim 35 , further comprising: storing the file; and selectively providing access to the file to a user based, at least in part, on login information provided by the user. 37 . a system to generate commissions statements, the system comprising: memory; and a processor configured to: collect data related to activities of a plurality of sales representatives; load the collected data into respective tables in the memory based, at least in part, on a hierarchy table defining the relationships between the sales representatives and supervisors of the sales representatives; and generate at least one file based, at least in part, on data in at least one respective table. 38 . 
the system of claim 37 , wherein the processor is configured to: load the data for sales representatives supervised by a first respective supervisor into a first respective table. 39 . the system of claim 38 , wherein the processor is configured to: load the data for sales representatives supervised by first respective supervisors supervised by a second respective supervisor, into a second respective table. 40 . the system of claim 39 , wherein the processor is configured to: load the data for sales representatives representing a respective third party into a third respective table. 41 . the system of claim 37 , wherein the processor is configured to: load data related to respective sales representatives into respective tables; and generate a file comprising calculated commissions for each respective sales representative. 42 . the system of claim 37 , wherein the processor is configured to: selectively provide access to a respective file based, at least in part, on login information provided by a user. 43 . the system of claim 42 , wherein the processor is configured to: convert the file into a read-only file; and provide access to the read-only file. 44 . the system of claim 42 , wherein the processor is configured to: selectively display selection links to information based, at least in part, on the login information, to the user. 45 . the system of claim 37 , wherein the processor is configured to: e-mail the at least one file to an authorized user. 46 . a system for generating commissions statements, the system comprising: memory; a processor configured to: calculate commissions for a party based, at least in part, on stored data; generate at least one file comprising the calculated commissions and data related to the calculated commissions; store the at least one file in the memory; and e-mail the at least one file to the party. 47 . 
the system of claim 46 , wherein the processor is further configured to: selectively provide access to the file to a user based, at least in part, on login information provided by the user.
the present application claims the benefit of u.s. application no. 60/625,742, filed on nov. 5, 2004, which is assigned to the assignee of the present invention and is incorporated by reference herein. the present application is also a continuation-in-part of u.s. application ser. no. 10/799,253, filed on mar. 12, 2004, which is also assigned to the assignee of the present invention and is also incorporated by reference herein. field of the invention methods and systems for selectively providing access to calculated commissions and sales/management information system information. background of the invention fig. 1 is a schematic diagram of an example of a credit and debit card transaction system 10 in the united states. when a credit or debit card transaction is processed, data required to effectuate (or settle) the transaction is entered in a terminal, a request for authorization to complete the transaction (based on the transaction data) is generated, an authorization is either granted or denied, and if authorization is granted, necessary funds to effectuate the transaction are transferred. such a transaction typically involves multiple parties including a card holder 12 , an acquiring bank 14 , a merchant 16 , a bank card association 18 , and an issuing bank 20 . while only one of each party is shown for ease of illustration, it is understood that there may be a plurality of each type of party in the credit card transaction system 10 . the card holder 12 is an entity, such as a person or business, that purchases goods or services from the merchant 16 using a card, such as a credit card or debit card, issued by the issuing bank 20 . the merchant 16 is an entity, such as a business or person, that sells goods or services and is able to accept credit and/or debit cards to complete the sale. the merchant 16 may be a point of sale (“pos”) merchant, for example. 
the bank card association 18 is a card payment service association (such as visa, mastercard, discover and american express) that is made up of member financial institutions. the bank card association 18 , among other things, sets and enforces rules governing their cards and conducts clearing and settlement processing. the bank card association 18 neither issues cards nor signs merchants. instead, it licenses financial institutions, such as the issuing bank 20 , to issue cards, and licenses the acquiring bank 14 to acquire merchants' sales slips under the association's brand name. the bank card association 18 then manages the transfer of transaction data and funds between the issuing bank 20 and the acquiring bank 14 . in addition, the bank card association 18 maintains national and international networks through which data and funds are moved between the card holder 12 , the merchant 16 , the acquiring banks 14 and the issuing bank 20 . the acquiring bank 14 is an entity that owns the legal relationship with the merchant 16 . the acquiring bank 14 provides services and products to the merchant 16 , and buys (acquires) the rights to the sales slips of the merchant 16 . the acquiring bank 14 credits the value of the sales slip to the merchant's account at the acquiring bank. the acquiring bank 14 effectuates payment to the merchant 16 upon authorization of a card transaction and charges the merchant 16 a fee for handling each transaction. the acquiring bank 14 may have one or more partners 15 that specialize in processing card transactions and/or offer additional services and products. the partner 15 may be a bigger bank, such as j.p. morgan chase & co., new york, n.y., or a processor of transactions, such as first data merchant services (“fdms”), melville, n.y., for example. the combination of the acquiring bank 14 and one or more partners 15 is referred to as an “alliance” 17 . 
the issuing bank 20 issues cards to approved card holders, such as card holder 12 , and sends bills to and collects payment from the card holder 12 . a platform 22 serves as the liaison between the merchant 16 and the bank card association 18 . the platform 22 seeks authorization for the credit card transaction and conveys the authorization or rejection to the merchant 16 . the platform 22 also computes the interchange fees associated with each credit card transaction processed by the merchants 16 in accordance with predetermined business rules established by the bank card associations 18 . the platform 22 may be fdms, for example. thus, suppose the issuing bank 20 issues a credit card to the credit card holder 12 (a). the credit card holder makes a $50.00 purchase at a merchant 16 (b). upon inputting transaction data, the merchant 16 requests authorization from the platform 22 (c). the platform requests authorization from a bank card association 18 (d) and ultimately the issuing bank 20 (e). the request for authorization is transmitted from the merchant 16 to the issuing bank 20 through the platform 22 and bank card association 18 . the resulting authorization (or rejection) (f) is then issued by the issuing bank 20 and transmitted back to the merchant 16 through the bank card association 18 (g) and the platform 22 (h). upon completion of the transaction, the merchant 16 , at some subsequent point in time, is paid the transaction price by the acquiring bank 14 (i) that has purchased the rights to the merchant's sales slips (j). the acquiring bank 14 then receives payment from the issuing bank 20 (k). the acquiring bank 14 and the issuing bank 20 typically have their own clearing networks to effectuate their payments. for example, the partner 15 of the acquiring bank 14 may provide a clearing network. 
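the authorization path just described (merchant → platform → bank card association → issuing bank, with the approval or rejection returning along the same chain) can be sketched as a simple chain of handlers. this is only an illustration of the routing; the function names, field names, and the credit-limit check are invented here and are not part of the actual system:

```python
# hypothetical sketch of the authorization path described above:
# merchant -> platform -> bank card association -> issuing bank,
# with the approval (or rejection) returning along the same chain.

def issuing_bank(request):
    # the issuer approves if the amount is within the card holder's limit
    return {"approved": request["amount"] <= request["credit_limit"]}

def bank_card_association(request):
    # the association relays the request to the issuing bank
    return issuing_bank(request)

def platform(request):
    # the platform relays the request to the bank card association
    return bank_card_association(request)

def merchant_authorize(amount, credit_limit):
    # the merchant enters transaction data and requests authorization
    request = {"amount": amount, "credit_limit": credit_limit}
    return platform(request)

print(merchant_authorize(50.00, 1000.00))  # a $50.00 purchase within the limit
print(merchant_authorize(50.00, 25.00))    # the same purchase over the limit
```

in practice each hop would also log the transaction and apply its own rules; the point of the sketch is only that the response retraces the request path.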
alliances 17 typically offer a range of credit and debit related services and products to the merchants 16 , such as credit cards, debit cards, electronic check processing, point of sale terminals, software, etc. the alliances 17 may hire sales representatives (“sales reps”) to offer the services and products to the merchants 16 . the sales reps are typically paid a commission for the sale and continued use of an alliance's services and products. the commission may be based on many factors, such as net sales, net revenues, processing volume, the length of the relationship with the merchant, meeting targets, etc. net sales, net revenues, etc., are offset by returns. sales reps may be compensated based on different compensation plans in which commissions may be calculated in different ways. alliances 17 may also offer many different promotional programs to encourage the sale of their services and products. as an added incentive to sales reps to sell particular services and products, an alliance 17 may offer higher commissions for the period of time that the promotion is taking place. the computation of commissions may therefore be complex. this is particularly true if there are a large number of sales reps and multiple, different payment plans, which may be changed over time. alliances 17 may use custom designed software to calculate commissions for their sales reps. commercially available software may be used, as well. commissions calculation software is typically not flexible enough to handle more than a few payment plans that may change over time. changes in payment plans may require months of rewriting of code and troubleshooting for successful implementation. whether the sales reps are hired by each financial institution or by a third party, the sales reps are typically paid by check through the mail. an itemized commissions statement summarizing the calculation of their commissions is typically included. 
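the commission arithmetic described above — net figures offset by returns, different compensation plans, and promotional rates that override a plan's normal rate — can be sketched as follows. the plan names, rates, and the flat-rate model are invented for illustration; actual plans may also depend on processing volume, targets, and the length of the merchant relationship, as noted above:

```python
# minimal commissions sketch: net sales are gross sales minus returns,
# and the commission rate comes from the rep's compensation plan, with
# an optional promotional rate overriding the plan rate. all plan names
# and rates here are invented for illustration.

PLAN_RATES = {"standard": 0.05, "premium": 0.08}

def commission(gross_sales, returns, plan, promo_rate=None):
    net_sales = gross_sales - returns
    rate = promo_rate if promo_rate is not None else PLAN_RATES[plan]
    return round(net_sales * rate, 2)

print(commission(10000.00, 500.00, "standard"))        # plan rate on net sales
print(commission(10000.00, 500.00, "standard", 0.10))  # promo rate overrides plan
```

keeping the rates in a table rather than in code is what lets payment plans change over time without the months of rewriting mentioned above.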
summary of the invention first data merchant services (“fdms”), melville, n.y., employs sales representatives (“sales reps”) on behalf of multiple alliances 17 , to offer the services and products of the alliances to others. fdms pays the sales reps commissions based on the compensation plans of the alliances represented by the sales rep, and provides the sales reps with employment benefits, such as health insurance. the alliances 17 are thereby relieved of the administrative costs of employing and paying the sales reps. fdms charges the alliances 17 a fee for this service. the sales reps are assigned to offer the services and products of one alliance 17 at a time and, in some cases, are assigned to merchants of particular sizes or types. fdms sales reps are informed of their earned commissions by statements mailed by the u.s. postal service. it is expensive to mail commissions statements to a large number of sales reps every month, due to postage costs, as well as supply and handling costs. an efficient, cost-effective method of providing sales reps access to their commissions payment statements is needed. an efficient way to provide summary information concerning the activities of the sales reps to managers is also needed. methods and systems for automatically preparing, providing selective access to, and/or e-mailing documents, such as sales commissions reports and/or sales/management information systems (“mis”) reports based on predetermined business rules, are disclosed. access and e-mail of documents may be provided via a network in communication with a user's personal computer, for example. the business rules may relate to the respective information or type of information that may be accessed by different users of the system. users of the system may include sales reps selling products and services and the levels of supervisors, also referred to as managers, supervising the sales reps and other managers or supervisors. 
for example, sales reps may have access to their accrued commissions and the underlying information that contribute to the calculation of their commissions. managers may have access to and/or receive commissions information and summary sales/mis information related to the activities of the sales reps they supervise or who are within their geographical or other area of responsibility. managers who supervise other managers may have access to summary information related to the activities of the sales reps reporting to those managers, for example. where a system pays commissions to sales reps representing third parties, such as alliances, the third party may have access to and/or receive sales/mis information, as well. for example, account executives employed by the third party and overseeing the representation of the alliance 17 by the system may have such access. the rules determining the information users have access to may be defined by the third party or by the system. the system may then identify the information that a particular user of the system has access to based on login information provided by the user. a commissions statement for a sales rep may be converted into a readily accessible, secure, read-only (non-alterable) file format, such as portable document format (“pdf”) or a read-only html file, for example, for display to sales reps and other authorized users on a display device, such as a display of a pc, coupled to the network. the formatted reports may be displayed in a separate window, facilitating concurrent display of multiple reports. the formatted reports may also be stored on a hard disk or other such storage device by a user. the user may also have an option of viewing information in other formats, such as a spreadsheet. microsoft® excel is an example of such a spreadsheet. use of a spreadsheet may facilitate record keeping by users. for example, a user may add new reports to a spreadsheet each month. 
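the idea that the system identifies the information a user may access from login information can be sketched as a lookup from a role to a list of selection links. the roles and document names below are invented placeholders, not the actual business rules defined by the third party or the system:

```python
# hypothetical sketch of role-based access to documents: a user's role
# (derived from login information) selects which document links are
# displayed. roles and document names are invented for illustration.

ACCESS_RULES = {
    "sales_rep":  ["own statement", "own rep reports"],
    "supervisor": ["statements of supervised reps", "summary sales/mis reports"],
    "alliance":   ["sales/mis reports for reps representing the alliance"],
}

def documents_for(role):
    # unknown roles get no selection links at all
    return ACCESS_RULES.get(role, [])

print(documents_for("sales_rep"))
print(documents_for("visitor"))
```

a table-driven rule set like this mirrors the text's point that access rules may be defined (and redefined) by the third party without changing code.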
the reporting methods and systems of the embodiments described herein may be used with the methods and systems for automatically calculating sales commissions based on predetermined business rules disclosed in u.s. application ser. no. 10/799,253 (“the '253 application”), filed on mar. 12, 2004, which is assigned to the assignee of the present invention and is incorporated by reference herein, for example. the commissions reporting systems and methods of the present invention may be used with other commissions calculating systems and methods, as well. in accordance with one embodiment of the invention, a method of generating commissions documents is disclosed comprising calculating commissions earned by a party based, at least in part, on stored data related to the party, generating at least one file comprising at least one of the calculated commissions and the stored data, and converting the at least one file into at least one read-only file. the read-only file may be a pdf file or a read-only html file, for example. the method may further comprise selectively providing access to the at least one read-only file via a network. access may be selectively provided based, at least in part, on login information. different parties may have access to different files. for example, sales representatives (“sales reps”), supervisors of sales reps, supervisors of sales rep supervisors, and third parties may all have access to different files. files may be generated based, at least in part, on a position of a user in a hierarchy table defining relationships between users. the method may further comprise loading stored data into respective tables based, at least in part, on the hierarchy table and retrieving the information from the respective tables to generate the at least one file. the read-only file may also be sent to the user by e-mail. 
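the three steps of the method just described — calculate commissions, generate a file, convert it to a read-only file — might be sketched as below. a plain html statement made non-writable via file permissions stands in for the pdf / read-only-html conversion; the file layout and function names are invented here:

```python
# hypothetical sketch of: generate a statement file for calculated
# commissions, then convert it to a read-only file. an html file with
# write permission bits stripped stands in for the pdf / read-only-html
# conversion described in the text.

import os
import stat
import tempfile

def generate_statement(rep, amount, directory):
    path = os.path.join(directory, f"{rep}_statement.html")
    with open(path, "w") as f:
        f.write(f"<html><body>{rep}: ${amount:.2f}</body></html>")
    # strip all write permission bits so the file is read-only
    os.chmod(path, stat.S_IRUSR | stat.S_IRGRP | stat.S_IROTH)
    return path

with tempfile.TemporaryDirectory() as d:
    p = generate_statement("rep_a", 475.00, d)
    mode = stat.S_IMODE(os.stat(p).st_mode)
    print(p, oct(mode))
```

the read-only conversion matters here because the statement is a financial record: authorized users may view, store, or e-mail it, but not alter it.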
in accordance with another embodiment of the invention, a method of generating financial documents is disclosed comprising collecting data related to activities of a plurality of sales reps, loading the collected data into respective tables based, at least in part, on a hierarchy table defining the relationships between the sales reps and supervisors of the sales reps, and generating at least one file based, at least in part, on data in at least one respective table. the activities may relate to transactions concerning products or services, for example. the products or services may be a third party's products or services. selection links to information may be displayed to a user based, at least in part, on the login information. in accordance with another embodiment of the invention, a method of generating financial documents is disclosed comprising collecting data related to activities of a first plurality of parties, loading the collected data into respective tables based, at least in part, on a hierarchy table defining the relationships between the first plurality of parties and a second plurality of parties, and generating at least one file comprising data in at least one respective table. in accordance with another embodiment of the invention, a method of generating commissions statements is disclosed comprising calculating commissions for a party based, at least in part, on stored data, generating at least one file comprising the calculated commissions and data related to the calculated commissions, and e-mailing the at least one file to the party. the file may be converted into a read-only file prior to e-mailing. in accordance with another embodiment of the invention, a system to generate commissions statements is disclosed comprising memory and a processor. 
the processor is configured to collect data related to activities of a plurality of sales representatives, load the collected data into respective tables in the memory based, at least in part, on a hierarchy table defining the relationships between the sales representatives and supervisors of the sales representatives, and generate at least one file based, at least in part, on data in at least one respective table. in accordance with another embodiment, a system for generating commissions statements is disclosed comprising memory and a processor. the processor is configured to calculate commissions for a party based, at least in part, on stored data, generate at least one file comprising the calculated commissions and data related to the calculated commissions, store the at least one file in the memory, and e-mail the at least one file to the party.

brief description of the drawings

fig. 1 is a schematic diagram of an example of a prior art credit card transaction system 10 in the united states;
fig. 2 is a block diagram of an example of a commissions, sales/mis reporting system in accordance with an embodiment;
fig. 3 is a more detailed schematic diagram of the data source of fig. 2;
fig. 4 is a more detailed schematic diagram of the commissions calculator of fig. 2;
fig. 5 is a summary of an example of a method of calculating commissions;
fig. 6 is a more detailed schematic diagram of the sales/mis calculator of fig. 2;
fig. 7 is a functional diagram of a user hierarchy table used by the sales/mis calculator of fig. 6;
fig. 8 is a functional diagram of an example of the data flow among tables based on the hierarchy in the user hierarchy table of fig. 7;
fig. 9 is a more detailed schematic diagram of an example of the reporting manager of fig. 2;
figs. 10a-10f are examples of graphical user interfaces that may be used in an embodiment of the present invention;
fig. 11 is an example of a method of reporting commissions and sales/mis information in accordance with an embodiment;
fig. 12 is an example of a method for selecting and displaying an appropriate graphical user interface (gui) including available document types based on the login information; and
fig. 13 is an example of a method for generating documents containing commissions and sales/mis information, for e-mailing to users.

detailed description of the preferred embodiments

in accordance with one embodiment of the invention, methods and systems are disclosed to automatically generate and selectively make available documents related to activities of parties, such as sales representatives (“reps”) employed by a company and other parties, such as their managers. the sales rep activities may be related to the sale, lease, installation, etc., of products and services, for example. if the sales reps represent third parties, the third parties may have access to certain information, as well. in one example, one class of documents that are generated and accessible by authorized users includes statements, which relate to the calculation of commissions earned by sales reps. statements may include a document summarizing the commissions earned in a time period, such as one month, as well as documents itemizing the components contributing to the earned commissions, such as the sales of equipment in the time period, the parties to the sales, and the dates of each sale, for example. another available class of documents is rep reports, which are cumulative summaries of particular commissions components over a different time period than a corresponding statement for the same component. rep reports may also be accessed by sales reps. for example, a statement may cover a monthly time period while a report may cover a daily, quarterly, or year-to-date time period.
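The role-based availability of document classes described above (statements and rep reports for sales reps, sales/mis reports for supervising managers and their managers) can be sketched as a simple lookup keyed on the role derived from login information. The role names and the exact mapping below are illustrative assumptions, not part of the disclosed system:

```python
# Hypothetical mapping from a user's role (derived from login information)
# to the document classes that user may access.
ACCESS_BY_ROLE = {
    "sales_rep":      ["statements", "rep_reports"],
    "manager":        ["sales_mis_reports"],
    "senior_manager": ["sales_mis_reports"],
    "alliance":       ["sales_mis_reports"],  # third party whose reps are covered
}

def document_classes_for(role):
    """Return the document classes a logged-in user may access;
    unknown roles get no access."""
    return ACCESS_BY_ROLE.get(role, [])

print(document_classes_for("sales_rep"))  # ['statements', 'rep_reports']
print(document_classes_for("manager"))    # ['sales_mis_reports']
```

In the embodiments described here the authorization decision would also consult the hierarchy table, so that a manager sees only documents for the reps reporting to that manager.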
another class of available documents is sales/mis reports, which may be accessed by managers who supervise sales reps and managers of the managers who supervise the sales reps. sales/mis reports provide summaries of activities of the sales reps reporting to the particular managers. if the sales reps represent an alliance or other such third party, the alliance or third party may also access sales/mis documents related to those sales reps. the particular document or documents accessible to particular authorized users may be obtained by logging in to a website, for example. particular documents may also be e-mailed to authorized recipients. other or different reports may also be provided. fig. 2 is a block diagram of an example of a commissions, sales/mis reporting system (“system”) 100 in accordance with an embodiment of the invention. in this example, the system 100 comprises a calculating system (“cs”) 105 including a data source 110 , a commissions calculator 120 , and a sales/mis calculator 125 . the data source 110 comprises a processor 112 and memory 114 . the data source 110 accumulates data needed to calculate commissions and to prepare reports. data may be received from individual merchant terminals or from processing centers that process transaction data from merchant terminals, such as the platform 22 in fig. 1 , for example. data may be conveyed via a network 160 , such as the internet, an intranet, a wide area network, etc., through a web server 150 , for example. sales reps may also provide certain data to the data source 110 , as discussed further below. the data source 110 provides the data to the commissions calculator 120 and the sales/mis calculator 125 , both of which also comprise processors 122 , 126 and memories 124 , 128 , respectively. the commissions calculator 120 calculates commissions based on the data provided by the data source 110 and business rules stored in the memory 124 , for example, as described in the '253 application.
the sales/mis calculator 125 calculates sales and management information, also based on the data provided by the data source 110 and business rules stored in memory 128 , for example, as well as the commissions calculated by the commissions calculator 120 . the cs 105 provides the calculated commissions to a commissions payment manager 130 , which causes the calculated commissions to be paid to the sales reps. the commissions payment manager 130 may be part of the payroll department of the cs 105 , or an outside payroll payment company, for example. the commissions payment manager 130 may comprise a processor 132 and memory 134 , for example. in accordance with an embodiment of the invention, a reporting manager 140 , which also comprises a processor 142 and memory 144 , for example, also receives data from the cs 105 to prepare documents for selective access and/or e-mailing to users. the reporting manager 140 is also coupled to the web server 150 to receive log-in information from the users and to cause display of requested information on a user's display device. the reporting manager 140 determines which documents a user is authorized to access based on the log-in information. the reporting manager 140 may also be coupled to an e-mail server 155 , to automatically e-mail documents to a pc of an authorized user. the e-mail server may be a lotus notes server, for example, available from ibm corporation, armonk, n.y. a microsoft® outlook server or other such servers may be used instead. the web server 150 may also communicate with the e-mail server 155 . users may access the system 100 via personal computers (“pc”) pc 1 through pcn, for example. the personal computers pc 1 through pcn include respective display devices (not shown). the personal computers pc 1 through pcn include operating systems, such as windows 95, windows 98 or windows nt, for example, and internet browsers, such as internet explorer.
the personal computers pc 1 through pcn also preferably include adobe acrobat reader, available from adobe systems incorporated, san jose, calif., via a free download at www.adobe.com, for example. as discussed above, one party, such as fdms, may hire sales reps to market the products and services of one or more third parties, such as alliances 17 , and pay commissions to the sales reps acting on behalf of the alliances. a billing manager 140 may be provided to bill the alliances 17 for the commissions paid by fdms to respective sales reps. the billing manager 140 may also compute a service charge to be included in the bill. in this example, the billing manager 140 comprises a processor 142 and a memory 144 . it is noted that the third party may also be the transaction processor or platform 22 for the alliances 17 , as discussed above with respect to fig. 1 , but that is not required. when a merchant agrees to purchase a new service or product promoted by a sales rep, the terms of the transaction are typically memorialized in and defined by a contract, referred to as a merchant's agreement. some of the information required by the commissions calculator 120 is derived from the merchant agreements. the information may be entered into the data source 110 to be stored in the memory 114 by the cs 105 , or by the respective sales rep, via one of the personal computers pc 1 through pcn, for example. the personal computers pc 1 through pcn may be connected to the data source 110 via the network 160 , for example. the merchant agreement may also be scanned into the data source 110 (or another such data source), to be stored in memory 114 as an image. while one data source 110 is shown comprising one processor 112 and memory 114 , a plurality of data sources 110 may be provided and/or a plurality of processors 112 and memory 114 may be provided in each data source 110 .
one or more data sources 110 may be at different locations and/or may be dedicated to different classes of sales reps servicing different classes of merchants. multiple processors and memories may be provided in the other components of the cs 105 , as well. the data source 110 may be a mainframe computer, such as an ibm mainframe, for example. the commissions calculator 120 and the sales/mis calculator 125 may each be a server, such as a server from dell corporation, round rock, tex. the memory 124 may comprise a database, such as an oracle database server, available from oracle corporation, redwood shores, calif. multiple web servers 150 may also be provided to couple different components of the system 100 to the network 160 . the memory 114 of the data source 110 may comprise one or more databases to store information needed to conduct the commissions calculations. some of the required information is obtained from the merchant's agreements, as mentioned above. information related to transactions conducted by the merchants 16 may be provided by the platform 22 of fig. 1 to the data source 110 . the transaction information may be itemized for each merchant 16 . for example, the total dollar value of the transactions conducted by a merchant 16 in a time period and the total dollar value of revenues generated by the merchant in the time period may be provided. the cs 105 may also sell and lease equipment on behalf of respective alliances 17 . data concerning sales, leases, and installation of equipment is therefore readily available to the data source 110 , as well. in one example, commissions may be calculated based on business rules stored in tables in the memory 124 , facilitating the incorporation of new rules and changes in existing rules, as described in the '253 application.
business rules may comprise associations between variables upon which commissions or components of commissions are calculated and conditions that determine the applicability of particular variables in particular circumstances. the business rules may vary among different compensation plans. in one example, a commissions calculation may be a function of the product of sales generated by an account and basis points. in this function, the value of the basis points is a variable. the particular basis points applied may depend, at least in part, on the age of an account generating the sales and a date the account was approved. the age and date are conditions. values for the variables and the conditions may populate fields in the table. in one example of a table, fields in a row are associated and define one or more business rules. a business rule may also only comprise conditions. for example, whether commissions paid in one period for enrolling an account need to be debited against commissions earned in a subsequent period (recoup) may depend on whether the account has been activated by conducting a sufficient level of transactions within a predetermined period of time of being approved (by passing a credit check, for example). in this example, the time period and the level of transactions are conditions of business rules that may be stored in tables and may vary among compensation plans. functions or equations used to calculate commissions using the values of the variables may be written in software. the commissions paid may be a sum of a variety of types of commissions, referred to as “commissions components,” which may be offset by commissions adjustments. the values of the commissions components are functions of a variety of inputs related to net sales, net revenues, etc., attributable to the sales rep. as mentioned above, net sales and revenues are offset by returns.
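The table-driven association of conditions with variables described above can be sketched as follows. Each row pairs conditions (here, a compensation plan and an account-age range) with a variable (basis points); changing a rule means editing table data, not code. The plan names, age ranges, and basis-point values are hypothetical:

```python
# Illustrative business-rules table: each row associates conditions
# (plan, account-age range in months) with a variable (basis points).
RULES = [
    {"plan": "PLAN_A", "min_age": 0,  "max_age": 12, "basis_points": 30},
    {"plan": "PLAN_A", "min_age": 13, "max_age": 60, "basis_points": 20},
    {"plan": "PLAN_B", "min_age": 0,  "max_age": 60, "basis_points": 25},
]

def basis_points_for(plan, account_age_months):
    """Return the basis points whose conditions match, or None when no
    rule applies (e.g. an account older than five years earns no residual)."""
    for rule in RULES:
        if rule["plan"] == plan and rule["min_age"] <= account_age_months <= rule["max_age"]:
            return rule["basis_points"]
    return None

print(basis_points_for("PLAN_A", 6))   # 30
print(basis_points_for("PLAN_A", 24))  # 20
print(basis_points_for("PLAN_A", 72))  # None
```

The same shape extends naturally to rows keyed on sales rep and alliance, matching the tables described in the '253 application.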
in the implementations described herein, reference to sales, revenues, or other values upon which commissions are calculated means net sales, net revenues, etc., but that is not required to practice the invention. in one example, sales reps represent one party, such as one alliance 17 . sales reps may represent multiple parties as well. the alliance 17 establishes one or more compensation plans defining business rules for use in calculating commissions. a sales rep may be compensated in accordance with one or more of the alliance's compensation plans. the sales reps have portfolios of merchants 16 that they have enrolled into an alliance's program. sales attributable to a sales rep include direct sales of services or products to merchants 16 and sales generated by merchants in a sales rep's portfolio, through card transactions. such transactions include credit card and debit card sales transactions with customers for the sales of products and services. revenues attributed to a sales rep are the fees paid by merchants 16 in a sales rep's portfolio, typically per transaction, sale, lease, installation, etc. the applicable commissions components, the functions for calculating the commissions components, and the business rules determining the variables of the functions are defined by the alliance 17 represented by the sales rep and the applicable compensation plan. an alliance 17 may determine that the commissions paid are based on only one component, as well. in general, sales reps are paid commissions for establishing new merchant accounts by enrolling new merchants into a program sponsored by an alliance 17 and for the transactions conducted by the accounts in their portfolio. sales generated in a time period for each merchant account are typically a significant factor in calculating commissions for a sales rep representing that account. revenues earned by an alliance 17 based on transaction fees may be considered along with or instead of sales.
for example, one commissions component, referred to as “residuals” or “residual payments,” is based on a percentage of a value of transactions generated by each account represented by the sales rep, in a time period. the value of the transactions may be the net sales, for example, which is multiplied by predetermined basis points to yield the residual payment commissions component. residual payments in a time period may be capped. a minimum value of sales may have to be met in the time period before any residuals are paid. the predetermined basis points applied to calculate the residual payment may be based on the length of time the merchant has been enrolled, approved (passed credit check) or activated (met a predetermined level of activity) in a particular program, referred to as the “age” of the account, and may decrease as the age increases, for example. there may be limits on the length of time that residuals are paid for an account, such as for five years, for example. there may also be minimum levels that must be met in a time period before a residual payment is made for that time period. the values of the basis points and the associated conditions are determined by and may differ among the alliances 17 and among the alliance's compensation plans. the variables, these conditions, and identifications of a sales rep servicing the account, the alliance 17 represented by the sales rep, and the compensation plan applicable to the sales rep, may be associated in one or more tables for calculation of this commissions component. since the basis points are also dependent on the sales rep and alliance compensation plan, they may also be conditions in the business rules for calculating this and other commissions components discussed below, depending on how the tables are organized. other business rules may be used instead of or along with these rules. 
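The residuals component described above (net sales multiplied by predetermined basis points, subject to an optional minimum-sales condition and a per-period cap) can be sketched as a single function. The threshold, cap, and basis-point values in the usage lines are illustrative, not taken from any actual compensation plan:

```python
def residual_payment(net_sales, basis_points, minimum_sales=0.0, cap=None):
    """Residuals sketch: net sales times basis points (1 bp = 0.01%),
    honoring an optional minimum-sales condition and a per-period cap.
    The actual thresholds come from the applicable plan's rules table."""
    if net_sales < minimum_sales:
        return 0.0  # minimum not met in the period: no residual is paid
    payment = net_sales * basis_points / 10000.0
    if cap is not None:
        payment = min(payment, cap)
    return round(payment, 2)

print(residual_payment(50000.0, 30))                       # 150.0
print(residual_payment(2000.0, 30, minimum_sales=5000.0))  # 0.0
print(residual_payment(1000000.0, 30, cap=2000.0))         # 2000.0 (capped)
```

The age-dependent choice of basis points, and the five-year limit on residuals, would be handled by the rules-table lookup before this function is called.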
another commissions component in this example, referred to as “revenue performance,” is based on revenues earned from the fees paid by merchant accounts in a sales rep's portfolio to the alliance 17 . the revenues attributable to a sales rep for each merchant account may be a function of the product of the revenues generated by that account and a particular value of basis points. in one example, the particular basis points, which is a variable, depends on a level of revenues achieved, which is a condition, as defined by business rules. in particular, an expected level of revenues is established by the alliance 17 for each merchant account in the sales rep's portfolio. the business rules may associate basis points values with ranges defining the relation between the actual revenues and the expected level. for example, if the actual revenues are from 65% to 75% of the expected revenues, a first basis points is selected from the table. if the actual revenues are greater than 75% but less than or equal to 100%, a second basis points greater than the first basis points is selected from the table. if the actual revenues are greater than 100% of the expected revenues, a third, higher basis points is selected from the table. additional ranges and basis points may be provided, as well. these variables and conditions may be associated with the applicable sales rep, alliance, and compensation plans in a table, as discussed above. the alliances 17 and compensation plans may vary the ranges and associated basis points in their business rules. other business rules may be used, as well. another commissions component may be earned for the approval of new merchants 16 in an alliance's programs. as mentioned above, the approval process typically involves a credit check of the merchant 16 . a fixed amount may be paid for each account approved in a time period.
the fixed amount paid, which is the variable, may vary based on the actual number of new accounts approved, which is a condition, in the business rules. different fixed amounts may be paid for each of the first 10 accounts signed in a time period, and each of the next 10 accounts signed in that time period, for example. such ranges are also conditions. another condition may be that a minimum number of new accounts must be met before any commissions are paid. in one example, a commission component, referred to as a “masters club bonus,” is earned for enrolling more than a particular number of new accounts in a time period, such as one month. the variables and conditions are determined by the business rules of the compensation plan of the respective alliance 17 applicable to a particular sales rep. the fixed amounts and ranges may be correlated with the sales rep servicing the account, the alliance 17 represented by the sales rep and the applicable compensation plan, in one or more tables. other business rules may be used as well. commissions are also paid for sales or revenues generated from special promotions. special promotions may be established by alliances 17 , typically for limited periods of time, to provide added incentives for the sales reps to sell particular services or products. such a bonus may be instituted to invigorate sales of lagging offerings, for example. in this implementation, a commissions component based on the sales of particular products and services is referred to as a product emphasis bonus (“peb”). the functions for calculating this commissions component and the business rules defining the variables and conditions may be similar to the calculation of the residual commissions component or the revenue performance commissions component, discussed above, depending on whether the commissions are based on sales or revenues, respectively. similar associations may be provided in one or more tables. 
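The tiered selection of basis points for the revenue-performance component described above (65%-75% of expected revenues, greater than 75% up to 100%, and greater than 100%) can be sketched as follows. The specific basis-point values, and treating anything below 65% as earning no commission, are illustrative assumptions:

```python
def revenue_performance(actual, expected, bp_low, bp_mid, bp_high):
    """Tiered sketch of the revenue-performance component:
    65%-75% of expected -> bp_low, >75%-100% -> bp_mid, >100% -> bp_high.
    Below 65% nothing is paid in this illustration; the real cutoff and
    any additional ranges come from the plan's business rules."""
    ratio = actual / expected
    if ratio < 0.65:
        return 0.0
    if ratio <= 0.75:
        bp = bp_low
    elif ratio <= 1.00:
        bp = bp_mid
    else:
        bp = bp_high
    return round(actual * bp / 10000.0, 2)

print(revenue_performance(70.0, 100.0, 10, 20, 30))   # 0.07  (70 at 10 bp)
print(revenue_performance(90.0, 100.0, 10, 20, 30))   # 0.18  (90 at 20 bp)
print(revenue_performance(120.0, 100.0, 10, 20, 30))  # 0.36  (120 at 30 bp)
```

In a table-driven implementation the three ranges and their basis points would be rows keyed on alliance and compensation plan, like the residuals rules.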
the commissions may be calculated based on other business rules, as well. one or more commissions components may relate to equipment and products sold for an alliance 17 , such as credit and debit card validation terminals, printers, software, and internet services. for example, a product sale commissions component and a product lease commissions component may be calculated for the sale and/or lease of such products, respectively. the commissions may be a function of a product of the net sales, net revenues (from fees associated with the transaction) and/or net number of units sold, for example, and applicable basis points. conditions may include a minimum level of sales or value of sales or revenue, and/or unit sales or revenue growth targets that need to be met before any commissions are paid. if this commissions component for a particular alliance compensation plan for a particular sales rep is based on sales or revenues, the functions and business rules may be similar to the residual or revenue components, respectively, which are discussed above. if this commissions component for an alliance compensation plan for a particular sales rep is based on units sold, the functions and business rules may be similar to the master club bonus component. as above, the business rules are stored in one or more tables. the commissions for any or all of the sales reps may be calculated based on other business rules, as well. another commissions component may relate to the installation of equipment or software, for example. installation may include setting up a pos terminal, for example. if this commissions component for an alliance compensation plan is based on units sold, the business rules may be similar to the master club bonus component. if this commissions component for an alliance compensation plan is based on sales or revenues generated by the installed equipment, the business rules may be similar to the residual or revenue components, respectively. 
in either case, the business rules may be stored in one or more tables. the commissions may be calculated based on other business rules, as well. the commissions components above are examples. alliances 17 may have compensation plans based on any or all of these commissions components, which may be calculated based on similar or different business rules. other commissions components may be used instead of or along with the commissions components discussed above. the same commissions components may be calculated differently for different sales reps, dependent upon the alliance they represent and the applicable compensation plan. as mentioned above, the applicable commissions component or the sum of applicable commissions components may be offset by one or more commissions adjustments to determine the commissions paid. one common commissions adjustment is referred to as “recoup.” recoup may be performed when an alliance 17 authorizes the payment of commissions at the time of account approval, if the merchant 16 does not start processing transactions or does not meet a minimum level or value of transactions, in a predetermined time period, for example. use or sufficient use is referred to as “activation.” if the merchant account is not activated within the predetermined time period, the commissions payment, which may have been earned in the master club bonus, for example, is returned in a subsequent time period. typically, the amount of the advance commission is subtracted from the sum of the earned commissions in the subsequent time period. business rules related to calculating recoup may define a time period after payment to the sales rep within which the merchant account must activate or commissions are recouped, as determined by the alliance compensation plan. the time period may be 60 or 90 days, for example. in this case, the time period and minimal value of activity are conditions.
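The recoup condition just described (an advance is debited if the account has not activated within, say, 60 or 90 days of approval) can be sketched as a date comparison. The function name and the specific dates in the usage lines are illustrative:

```python
from datetime import date

def recoup_amount(advance_paid, approval_date, activation_date, window_days, as_of):
    """Recoup sketch: if the activation window defined by the plan's
    rules (e.g. 60 or 90 days after approval) has expired without the
    account activating, the advance commission is debited this period."""
    deadline_passed = (as_of - approval_date).days > window_days
    activated_in_time = (
        activation_date is not None
        and (activation_date - approval_date).days <= window_days
    )
    if deadline_passed and not activated_in_time:
        return advance_paid
    return 0.0

# Approved Jan 1, never activated, checked Apr 1 with a 60-day window:
print(recoup_amount(100.0, date(2004, 1, 1), None, 60, date(2004, 4, 1)))              # 100.0
# Activated within the window: nothing to recoup.
print(recoup_amount(100.0, date(2004, 1, 1), date(2004, 2, 1), 60, date(2004, 4, 1)))  # 0.0
```

In the table-driven scheme described next, the approval date, activation flag, and activation date are fields of the account's row, and this check runs once per pay cycle.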
after the master club bonus or other such commissions component is calculated based on approved accounts, the accounts may be stored in a table including fields to indicate the account approval date, whether the account has been activated, the activation date, if any, the sales rep servicing the account, the alliance 17 represented by the sales rep, and the applicable compensation plan. when sufficient activity is met, a flag is set in the appropriate field, and the date is entered in the date field, for example. if the field for the date does not indicate that activation has taken place by the number of days after approval defined by the business rules, then the commissions paid for that account are recouped in a current time period. the commissions paid are typically recouped by debiting the commissions earned in the current time period. the recoup adjustment is optional for each sales rep or all sales reps. if recoup is not allowed in a jurisdiction, for example, it need not be implemented. whether recoup is to be applied to a particular sales rep may be indicated in a table. another type of adjustment that may need to be made is referred to as miscellaneous adjustments, which compensate for commissions paid outside of a regular pay cycle. operational discrepancies or adjustments caused by the late receipt of data required to compute a commissions component in a proper time period, for example, may require such adjustments. other adjustments may be provided along with or instead of either or both of these adjustments. fig. 3 is a more detailed schematic diagram of the data source 110 . a plurality of databases 202 through 216 are provided, in which data required for the commissions and sales/mis calculations is stored.
a sales rep database 202 may be provided to store identifying information about the sales reps, such as their name, mailing address, hire date, termination date (if any), the alliance or alliances 17 they currently represent and have represented in the past, associated start and end dates for those representations, the applicable compensation plan or plans of each alliance, the applicable time periods for the applicable plans, the merchant accounts serviced, etc. a merchant master file database 204 stores accumulated transactional information related to each merchant account, such as total sales and revenues in a time period, to be used to calculate the commissions in the time period, as mentioned above. other information stored in this database may include an identification of the sales rep servicing the account, the merchant approval date, the merchant activation date, and the accumulated number of enrolled, approved and/or activated accounts in the sales rep's portfolio in a current time period. residual, revenue, and master club bonus commissions components, for example, may be calculated based, at least in part, on the information in this database. data concerning individual transactions related to each merchant account may be provided from the platform 22 to be accumulated in the merchant master file database 204 , or the data may be accumulated by the platform 22 or the processor 112 prior to storage in the database. a financial history database 206 stores accumulated sales/revenue information for each merchant from prior time periods. this information may be provided by the merchant master file database 204 . past information may be needed to calculate residuals, recoup, and sales/mis information, for example. a pebs database 208 is provided to store accumulated information related to the product emphasis bonus commissions component. the information may include total sales of the particular services or products subject to the pebs bonus per merchant account.
the responsible sales rep may be identified here, as well. if the sales reps and their merchant accounts are correlated in the sales rep database 202 , then it is not necessary to include the information about the sales rep here and in other databases dedicated to commissions components. an equipment database 210 contains information about equipment sales, such as credit card validation terminals, printers, software, internet services, etc., and the merchant making the purchase, which is used to calculate the product sale commissions component and sales/mis information. a lease database 212 contains information about leased equipment and the merchant the equipment is being leased to, which is used to calculate the product lease commissions component. installation information may be stored in the equipment database 210 , the lease database 212 , or in another database (not shown), for example. the equipment database 210 and the lease database 212 may be readily combined. the scanning database 214 contains images of the scanned merchant agreements, discussed above. storing the agreements enables the agreements to be checked, if necessary. a global fee database 216 stores the accumulated fees charged each merchant 16 by the alliance 17 and card associations 18 to conduct each type of transaction, throughout the world. the accumulated fees include the discount rate charged by the sales rep servicing the account for the alliances 17 , minus the interchange fee charged by the card associations 18 , plus assessments charged by clearing networks 13 . revenues in the calculation of commissions components may be based on fees, such as the discount rate, charged to the merchant account for conducting transactions. many different types of fees may be charged per transaction for different contracted services requested by the merchant. contracted services include the type of statement to be received and account credit, for example.
fees may be shown and summarized in the sales/mis reports, as well. the databases 202 through 216 are merely examples of ways to organize and store information used by the commissions calculator 120 and the sales/mis calculator 125 . the information may be organized and stored in other ways, as well. fig. 4 is a more detailed schematic diagram of the commissions calculator 120 , which comprises a data tables database 230 , a business rules tables database 232 , and a processor. the processor comprises a calculator 234 and memory 236 . the processor loads the data received from the data source 110 into tables in the data tables database 230 . the variables and conditions in the business rules are stored in tables in the business rules tables database 232 . in the tables, applicable business rules are associated with the sales rep, the represented alliance 17 , the applicable compensation plan of each alliance, the merchant account, etc., as appropriate. information in the financial history database 206 is provided to the commissions calculator 120 upon the request of the processor, when needed to perform certain calculations. examples of tables are provided in the '253 application, which is incorporated by reference herein and is identified further above. the calculator 234 calculates the commissions for each sales rep, based on the data in the data tables database 230 and the business rules in the business rules tables database 232 . the commissions components described above are calculated by the calculator 234 and stored in the memory 236 . the stored values for the calculated commissions components are summed to yield a gross calculated commissions for each sales rep, which are also stored in the memory 236 . miscellaneous adjustments are calculated to determine payments made to some sales reps outside of the regular pay cycles. operational discrepancies or adjustments due to late receipt of data, for example, may cause such adjustments.
the miscellaneous adjustments for each sales rep, if any, are stored in the memory 236 , as well. recoup may then be optionally calculated for each sales rep, to determine how much, if any, money needs to be returned by a sales rep. the recoup value, if any, is stored in the memory 236 , as well. the adjustments may be calculated in any order. the gross calculated commissions for each sales rep is offset by the miscellaneous adjustments and the recoup, if any, to yield a net calculated commissions for each sales rep. this is done by subtracting the miscellaneous adjustments and the recoup from the gross calculated commissions, for each sales rep. the sales rep is paid the net calculated commissions. the commissions calculator 120 may comprise a plurality of modules to accommodate different business rules. different modules may be dedicated to different classes of sales reps or merchants. for example, business rules may vary based on the size of merchant account or location of the merchant, for example. recoup may be an adjustment for certain sales reps, but not others. providing separate modules dedicated to particular classes may facilitate processing, because only the applicable business rules need to be stored in that module. in another example, sales reps that work with merchants above a particular revenue range, such as 2.5 million dollars, may be handled by one module. in another example, sales reps working with smaller sized merchants may be handled by another. sales reps working with overseas (non-us) merchants may be handled by an overseas module. sales reps may work with different merchant accounts that may be handled by different modules. a module may be provided to handle all sales reps, as well. accumulated data for each month may be provided from the data source 110 to the commissions calculator 120 on the 1st day of the following month, for example. commissions may be calculated and paid by the end of every month. 
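The gross-to-net offset described above (sum the commissions components, then subtract miscellaneous adjustments and recoup) can be sketched as follows. The component names and dollar amounts are hypothetical.

```python
def net_commissions(components, misc_adjustments=0.0, recoup=0.0):
    """Sum the commissions components to a gross figure, then offset
    miscellaneous adjustments and recoup to yield the net commissions."""
    gross = sum(components.values())
    return gross - misc_adjustments - recoup

# hypothetical components for one sales rep
rep_components = {
    "residuals": 1200.00,
    "revenue_performance": 300.00,
    "equipment": 150.00,
}
net = net_commissions(rep_components, misc_adjustments=50.00, recoup=100.00)
# the sales rep is paid the net figure (here, 1500.00)
```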
the commissions calculator 120 also comprises a commissions database 240 and a commissions history database 242 . the net calculated commissions for each sales rep, as well as the commissions components and adjustments for a first time period, are stored in the commissions database 240 . the first time period may be 13 months, for example. all the paid commissions and the underlying components and adjustments no longer stored in the commissions database 240 may be stored in the commissions history database 242 for the life of the cs 105 or some other time period. the payment manager 130 ( fig. 2 ) effectuates payment to the sales rep based on the information stored in the commissions database 240 . the data from the databases 202 through 216 is provided to the commissions calculator 120 in the form of one or more ascii files via an sql feed, for example. an ascii file may be provided from each database, for example. fig. 5 is an example of a method 300 of calculating commissions, as described above and in the '253 application. data necessary for the calculations is imported, in step 302 . commissions components are calculated for each sales rep based on business rules stored in tables, in step 304 . the commissions components are summed to yield a gross calculated commissions for each sales rep, in step 306 . miscellaneous adjustments are calculated for each sales rep, in step 308 . recoup is calculated for each sales rep, in step 310 , if applicable. the summed commissions components, referred to above as the gross calculated commissions, is offset by the calculated adjustments and recoup for each sales rep, if any, to yield a net calculated commissions for each sales rep, in step 312 . the commissions components, miscellaneous adjustments and recoup for each sales rep are stored, in step 314 . in the example described above, the information is stored to the commissions database 240 . 
the data is provided to the payroll department and the billing department of the cs 105 . fig. 6 is a more detailed schematic diagram of the sales/mis calculator 125 , which may have a similar structure to the commissions calculator 120 . as above, a data tables database 352 and a business rules tables database 354 are provided. the processor 125 comprises a calculator 350 and memory 358 . the sales/mis calculator 125 loads the data received from the data source 110 into tables in the data tables database 352 . the processor 125 organizes the sales/mis data for the various reports, based on the data in the data tables database 352 and the business rules in the business rules tables database 354 , and the calculator 350 sums the data, for example. the sales/mis calculator 125 , in this example, also comprises a sales/mis database 360 and a sales/mis history database 362 . the calculated sales/mis data for a predetermined period of time, such as the prior 30-90 days, for example, may be stored in the sales/mis database 360 . calculated sales/mis data no longer stored in the sales/mis database 360 is stored in the sales/mis history database 362 , for the life of the system 100 or some other time period. data may be provided from the data source 110 to the sales/mis calculator 125 daily, for example. as above, the data may be provided in the form of one or more ascii files via an sql feed. the data is loaded into tables in the data tables database 352 . the types of data may include: new accounts signed; signed accounts approved; approved accounts activated; transactions processed (in dollars); equipment sold, installed, and leased; and fees per sales rep and per merchant account, for example. the data is loaded in appropriate tables in association with an identification of the sales rep responsible for the related merchant account and an identification of the merchant account. other associations may be provided in the tables, as appropriate. 
the loaded data of each type may be summed by the calculator 350 , for example, to facilitate generation of summary reports. the business rules tables database 354 includes rules for calculating and organizing the sales/mis data. one such table may be a user hierarchy table, which associates the users of the system 100 with respect to other users based on the employee reporting structure (hierarchy) of the system 100 . in one example, sales reps representing a particular alliance are supervised by territorial managers, territorial managers are supervised by district managers, and district managers are supervised by regional managers. the regional managers report to one or more alliance managers of the alliance 17 . fig. 7 is a functional diagram of a user hierarchy table reflecting such relationships. in fig. 7 , sales rep 1 and sales rep 2 are supervised by a territorial manager 1 . sales rep 3 and sales rep 4 , sales rep 5 and sales rep 6 , and sales rep 7 and sales rep 8 are supervised by territorial managers 2 , 3 , and 4 respectively. territorial managers 1 and 2 are supervised by a district manager 1 and territorial managers 3 and 4 are supervised by a district manager 2 . a regional manager 1 supervises the district managers 1 and 2 . the regional manager 1 is supervised by the alliance manager 1 . all these parties represent the same alliance 17 . additional sales reps, territorial managers, district managers, regional managers, and/or alliances 17 may be provided and organized in a similar hierarchy. each party may access sales/mis data associated with parties beneath them in the user hierarchy table. the sales/mis data is organized in tables by the processor 125 , in accordance with this hierarchy, based on the associations in the user hierarchy table. fig. 8 is a functional diagram of an example of the data flow among tables based on the hierarchy in the user hierarchy table of fig. 7 . in fig. 8 , sales rep tables sr 1 through sr 8 are shown. 
each sales rep table sr 1 through sr 8 is associated with a respective sales rep 1 through sales rep 8 , in fig. 7 . data from the sr 1 table and the sr 2 table, which correspond to sales rep 1 and sales rep 2 in fig. 7 , respectively, is loaded into the territorial manager 1 table, which corresponds to the territorial manager 1 in fig. 7 . all the data and/or the calculated sums of the data may be loaded. similarly, the data from the sr 3 and sr 4 tables, the sr 5 and sr 6 tables, and the sr 7 and sr 8 tables are stored in the territorial manager 2 , territorial manager 3 , and the territorial manager 4 tables, respectively. the data from the territorial manager tables 1 and 2 , which correspond to territorial managers 1 and 2 in fig. 7 , and from the territorial manager tables 3 and 4 , which correspond to the territorial managers 3 and 4 in fig. 7 , is loaded into the district manager tables 1 and 2 , respectively. the district manager tables 1 and 2 correspond to the district managers 1 and 2 in fig. 7 . the data from the district manager tables 1 and 2 is loaded into the regional manager table 1 , which corresponds to the regional manager 1 in fig. 7 . finally, in this example, the data from the regional manager table 1 is loaded into the alliance manager table 1 , which corresponds to the alliance manager 1 of fig. 7 . associations with the party to which the data is related may be provided from table to table, as well. in one example, data relating to approved accounts by all the sales reps, provided by the data source 110 to the sales/mis calculator 125 , is initially loaded into data input tables in the data tables 352 , as the data is received. the data may then be loaded into separate sales rep tables, or different portions of one or more sales rep tables, for each sales rep. the data may be summed for each sales rep and/or each account, in each table. summed data may be stored in the table or in another location. 
in this example, it will be assumed that summed data is also in the table. the data and the summed data is then loaded in the appropriate tables higher up the hierarchy, in accordance with the user hierarchy table of fig. 7 and the functional diagram of fig. 8 . other data types are similarly organized and summed in accordance with the user hierarchy table. separate tables may be provided for each data type, or multiple data types may be combined in the same table. the tables may be in the data tables database 352 , in the memory 358 of the processor 125 , or in another location. a manager table may be provided in the sales/mis calculator to correlate identifying information for all managers with the alliance 17 they represent, who they report to, and who reports to them, etc. such a table may be provided in the data source 110 , instead. the reporting manager 140 prepares multiple types of documents for selective display to authorized users. fig. 9 is a more detailed schematic diagram of an example of the reporting manager 140 . the processor 142 and the memory 144 are shown. the processor 142 comprises an authentication and report selection module 382 , a report generator module 384 , a pdf converter 386 , and a spreadsheet converter 388 . a reporting rules database 390 is also shown, coupled to the processor 142 . the reporting rules database 390 comprises one or more tables correlating users with their login information and the documents each user is authorized to view. the user may be correlated with specific document types or with an access level. access levels may be correlated to specific document types in one or more other tables, for example. all login information may be stored in a single table or separate tables may be provided for each user type. in one example, separate tables are provided for sales reps, territorial managers, district managers, regional managers and alliance managers. 
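The loading of each rep's summed data into the tables higher up the hierarchy, per the user hierarchy table of fig. 7 and the data flow of fig. 8, can be sketched as below. The parent map mirrors one branch of fig. 7; the data values are made up for illustration.

```python
# One branch of the fig. 7 hierarchy, expressed as a child -> supervisor map.
parent = {
    "sr1": "tm1", "sr2": "tm1", "sr3": "tm2", "sr4": "tm2",
    "tm1": "dm1", "tm2": "dm1", "dm1": "rm1", "rm1": "am1",
}

def roll_up(rep_totals):
    """Propagate each rep's summed data into every table above it."""
    totals = dict(rep_totals)
    for rep, value in rep_totals.items():
        node = parent.get(rep)
        while node is not None:
            totals[node] = totals.get(node, 0) + value
            node = parent.get(node)
    return totals

totals = roll_up({"sr1": 10, "sr2": 20, "sr3": 5, "sr4": 15})
# tm1 sees 30, tm2 sees 20, and dm1 (and everyone above) sees 50
```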
alternatively, certain digits of a user id, such as the first two (2) digits, may be the same for all users of a certain type. for example, the first two digits of all sales reps may be 11 ; the first two digits of all territorial managers may be 22 ; the first two digits for all district managers may be 33 ; the first two digits of all regional managers may be 44 ; and the first two digits of all alliance managers may be 55 . the available document types may then be correlated with those first two digits. the report generator 384 retrieves the data required to prepare a particular report from the tables in the commissions database 240 and/or the sales/mis database 360 , depending on the document type and the user. data may also be retrieved from the commissions history database 242 and the sales/mis history database 362 , depending on the time period covered by available reports. the report generator 384 may be programmed to access the proper table locations for the data for particular reports. the table locations may also be stored in a table in the reporting manager 140 . continuing the example above, if the territorial manager 1 desires a sales/mis document containing information relating to approved accounts in the manager's territory, the report generator 384 will retrieve the data from the territorial manager table 1 in fig. 8 . if the district manager 1 requests the same information, the data will be retrieved from the district manager table 1 in fig. 8 . the retrieved data may be stored in memory, such as the memory 144 . the report generator 384 processes the data into a predetermined format having a desired appearance, as a first file for a particular document. the pdf converter 386 converts the first file into a second, pdf file, which is identical or nearly identical in appearance to a display of the first file. the pdf file is made available to the user for viewing on the display device of their pc, via the network 160 and the web server 150 . 
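The user-id prefix scheme at the start of this passage (11 for sales reps through 55 for alliance managers) amounts to a simple prefix lookup. A minimal sketch, with a hypothetical user id:

```python
# Prefix-to-role table from the example above; the user ids are invented.
ROLE_PREFIXES = {
    "11": "sales rep",
    "22": "territorial manager",
    "33": "district manager",
    "44": "regional manager",
    "55": "alliance manager",
}

def role_for(user_id):
    """Derive the user type from the first two digits of the user id."""
    return ROLE_PREFIXES.get(user_id[:2])

role = role_for("2204317")  # a hypothetical territorial manager id
```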
the spreadsheet converter 388 may convert the first file into a spreadsheet file, such as a microsoft® excel spreadsheet. in one example, that conversion is performed upon the request of the user. rich text format, microsoft® word, or wordperfect® may also be used, in which case other appropriate converters would be provided. the report generator 384 , pdf converter 386 , and the spreadsheet converter 388 may use a reporting tool, such as crystal reports 8.5, developer edition, available from business objects sa, san jose, calif., for example, to create the first file document and then to convert the first file document into a pdf or spreadsheet file document. asp server side scripting and java script client side scripting may also be used. crystal reports 8.5, developer edition, includes an option to convert a document into a desired format, such as pdf, microsoft® word, rich text format (“rtf”), and excel. the conversion may be performed with different front end programming languages, such as visual basic or active server pages (“asp”) for developing web applications. exporting to pdf in crystal reports 8.5 retains the format and appearance of the original report, to a high degree. generating a crystal report by the report generator 384 and converting the crystal report to pdf or excel for viewing on a user's display device of their pc, may be provided by crystal report controls, written in an asp page, for example. first, the report is converted into a crystal report. then, the crystal report is exported into a pdf format, or spreadsheet format by the pdf converter 386 or the spreadsheet converter 388 , respectively. the read-only file can also be a read-only html file. a suitable converter may be readily provided. in an example of the operation of an aspect of the system 100 , documents are generated automatically on a regular basis by the report generator 384 , converted into a pdf document by the pdf converter 386 , and stored in the memory 144 , for example. 
the documents are then available for retrieval. different types of documents may be generated at different times as defined by the business rules, for example. a user logs in to the system 100 by accessing the system's website at a web address with the user's pc. a login graphical user interface (“gui”) may be presented on the user's display device, through which the user provides login information, such as the user's user id and password, to the system 100 . the authentication and report selection module 382 authenticates the user by comparing the user id and password to a table in the reporting rules database 390 correlating user id with passwords of authorized users. then the authentication and report selection module 382 correlates the user id with the documents or access level correlated with that user id in a table in the reporting rules database 390 . another gui may then be displayed on the user's display device, presenting the available document types for selection. alternatively, all available document types may be displayed but only the authorized document types may be selected. gui templates may be stored in the memory 144 , for example. examples of guis are discussed below with respect to figs. 10 a - 10 f . the gui may be modified, as necessary, by the processor 142 , for the particular user. for example, the options available on the gui may vary based on the document types available to the particular user. alternatively, the same gui may be presented to each user, but selection of only certain options will cause retrieval of a corresponding document. the gui may comprise options to select dropdown menus to display lists of categories of document types, for example. document categories are discussed below. once a document type is selected, then additional options may be presented in another gui, such as a date or time period for which a document is desired. after a selection is made, the selected document is retrieved and displayed. 
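The two-step lookup performed by the authentication and report selection module 382 (credential check, then document authorization) might be sketched as below; the table contents are invented for illustration.

```python
# Hypothetical stand-ins for the reporting rules database 390 tables.
CREDENTIALS = {"1101": "s3cret", "2201": "pa55word"}
AUTHORIZED_DOCS = {
    "1101": ["statements", "rep reports"],
    "2201": ["sales/mis reports"],
}

def login(user_id, password):
    """Return the document types the user may view, or None on failure."""
    if CREDENTIALS.get(user_id) != password:
        return None  # no match: the user would be disconnected
    return AUTHORIZED_DOCS.get(user_id, [])
```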
if the selected document has not been generated yet, it will be generated by the report generator 384 , converted into a pdf document by the pdf converter 386 , and displayed, as discussed above. alternatively, all available documents may be generated, converted into a pdf document, and stored prior to selection by a user. selection by a user then causes retrieval of the stored pdf file. the displayed document may be saved by the user on the user's pc. the gui may also present an option to convert the document into a spreadsheet, in which case the document is converted by the spreadsheet converter 388 and displayed, as is also discussed above. the displayed spreadsheet may also be saved on the user's pc. as discussed above, in one example, three categories of document types may be accessed by authorized users of the system 100 : 1) statements, which may be accessed by sales reps; 2) rep reports, which may also be accessed by sales reps; and 3) sales/mis reports, which may be accessed by managers, including alliance managers. data for statement reports and rep reports may be retrieved from the commissions database 240 . data for the sales/mis reports may be retrieved from the sales/mis database 360 . other or different reports may also be provided. statements relate to the calculation of commissions. the statements may cover a current time period, such as one week or one month, for example. multiple types of statements may be provided. one or more statements may be generated for each type of commissions component and adjustment, discussed above, as well as for the calculated earned commissions. for example, a statement may be provided for the residuals, revenue performance, masterclub bonus, pebs, and equipment (sold, leased, installed) commissions components, and the recoup and miscellaneous adjustments, etc., which were discussed above, for a given time period. 
the underlying data contributing to the commissions calculation may also be provided in the report, itemized by merchant account, for example. transaction dates may also be included. a recap summary statement may also be provided to summarize the commissions earned in a prior month and year to date, for example. underlying data may include sales and/or revenues, new accounts signed, equipment sold, etc. rep reports are cumulative summaries of particular commissions components over a time period different than a corresponding statement for the same component, such as daily, quarterly, yearly, and/or year to date, for example. another rep report that may be provided is a recap summary report, which provides a summary of the commissions received by the sales rep for a selected period, a year-to-date summary of the commissions earned for each commission component, and adjustments, for example. sales/mis reports provide summaries of activity of sales reps and managers below the manager requesting the report in the user hierarchy table of fig. 7 . for example, a territorial manager, such as the territorial manager 1 , may request a sales/mis report summarizing the activities of the sales reps reporting to that manager, in this case the sales rep 1 and sales rep 2 . the report may include summary information for each sales rep and/or a total for all the sales reps. a district manager, such as the district manager 2 , may request a report providing a total of some or all of the activities of all the sales reps in that manager's district, in this case, sales rep 5 , sales rep 6 , sales rep 7 , and sales rep 8 . the district manager may also request a report providing a total of some or all of the activities of the sales reps for any or all of the territorial managers in the district, in this case territorial manager 3 and territorial manager 4 . 
the same is true for regional and alliance managers, who may obtain the same types of reports, in accordance with the user hierarchy table. the activities may include any or all of: signing of new accounts; approval of signed accounts; transactions conducted by active accounts; sales, lease, and installation of equipment; etc. the particular sales/mis reports that are available may be determined by the system 100 , the alliance 17 and/or the manager. these are merely examples of types of sales/mis reports that may be provided. other reports may be provided, as well, or instead of these examples. fig. 10 a is an example of a login screen 400 that may be presented by the system 100 for the input of login information. the login screen 400 may be accessed by authorized users via a pc, such as pc 1 through pcn, for example, by entering a web address. fields 402 , 404 are provided for input of a user id and a password by a user interface device of a user's pc, such as pc 1 , respectively. the user interface device can be a keyboard (not shown), for example. after entry of the user id and password, clicking on a login button 406 causes processing that determines whether the password for that user is correct, in a manner known in the art. if the user id and password match, further processing is performed to identify the information accessible by that user via one or more tables, for example, as discussed above. if the user id is that of a sales rep, a sales rep reporting screen 410 is displayed, as shown in fig. 10 b . the sales rep reporting screen 410 displays a statements button 412 , a reports button 414 , and a display button 416 . placing the cursor of a user's interface device, such as a mouse, over the statements button 412 causes display of a drop down menu 418 listing statements relevant to that sales rep, as shown in fig. 10 c . 
separate statements are available for each of the components and adjustments of the commissions calculation for that sales rep for a time period, such as one month, based on the compensation plan of the sales rep. alternatively, a list of all types of statements available on the scrs 100 may be displayed but only statements available to that sales rep may be opened. the available statement reports in this example are residuals, revenue performance, masterclub bonus, pebs, equipment, recoup, and miscellaneous adjustments. highlighting the particular report by clicking once on a report name, for example, and then clicking on the display button 416 causes generation of a selected report. alternatively, the gui 410 may be arranged so that double clicking on a report name causes generation of a report. a display button may not then be required. placing the cursor over the reports button 414 causes display of the drop down menu 420 listing reports available to that sales rep, as shown in fig. 10 d . in the example of fig. 10 d , the available reports are the same as the available statements: residuals, revenue performance, masterclub bonus, pebs, equipment, recoup and miscellaneous adjustments. a recap summary is also available, in this example. if a report may be prepared for multiple time periods, such as monthly, yearly, and/or year to date, a window may be displayed to present this option. a window may be presented to enter a time period, or a button or buttons may be presented to select a time period, for example. in one example, after selection of a particular statement or report, an appropriate document is retrieved and displayed. if the document has not been generated yet, it may be generated after selection and then displayed. if a supervisor or third party, such as an alliance manager, has logged in, a different window 450 is retrieved and provided to the user's pc 1 , as shown in fig. 10 e . 
the selection window 450 provides a sales/mis reports button 452 and an alliance reports button 454 for selection. fig. 10 f shows a drop down menu 456 displayed upon placing a cursor over the sales/mis reports button 452 . the options include summary reports of new signed accounts, new approved accounts, transactions, and equipment, organized by sales rep and territory, for example. an equipment report summarizes equipment sold, such as terminals. the transactions report is a summary of activity for all accounts. other or different reports may be provided, as well. separate windows may be presented for supervisors and third parties. the same report options may be provided if the user's cursor is placed over the alliance reports button 454 . the content of each report may be different, however. for example, the reports may include additional sales reps, and the report may not be organized by sales rep. necessary data for each statement or report is collected by the reporting manager 140 , formatted, and converted into a pdf document or other document format for display on the user's pc and/or e-mail to the user's pc. the reports may be generated with a reporting tool, such as crystal reports 8.5, developer edition, available from business objects sa, san jose, calif., for example, as discussed above. asp server side scripting and java script client side scripting may also be used. some or all of the organized and summed data from the tables of fig. 8 may be stored in the sales/mis database 360 as needed for subsequent processing into reports for selective access by users by the reporting manager 140 . for example, if a district manager is only to have access to summary information for certain territorial managers, then only summed data for the sales reps reporting to those territorial managers need be stored in the sales/mis database 360 , in association with that district manager. 
if the district manager is to have access to information related to each sales rep, then such data would also be stored in the sales/mis database 360 . if raw data is provided to the sales/mis calculator 125 daily, for example, then updated, organized, and summed data may be transferred to the sales/mis database 360 daily, as well. after a predetermined period of time, such as from 30 to 90 days, for example, old data may be transferred to the sales/mis history database 362 , where it may be stored for a longer period of time. as mentioned above, after the document is formed, it may be e-mailed to a user via the e-mail server 155 in fig. 2 . e-mail statements may be provided to sales reps to avoid the need and cost of mailing commissions statements. e-mail reports may also be provided to managers, to facilitate their receipt of sales/mis information. the document, prepared as described above, may be sent as an attachment in the e-mail, to users. users may provide their e-mail address during a registration procedure. the e-mail address may be stored in a table in the reporting rules database 390 of the reporting manager, for example. collaborative data objects for windows nt (“cdonts”) or message application program interface (“mapi”) may be used to e-mail the document. visual basic and asp contain a reference to cdonts, which provides various methods and properties that may be used to automate the e-mail process. the e-mails may be sent by simple mail transfer protocol (“smtp”), for example. the smtp is configured to communicate to the e-mail server 155 to route the e-mails to the authorized users. fig. 11 is an example of a method 1000 of reporting commissions and sales/mis information in accordance with an embodiment of the invention. user login information is received, in step 1010 . 
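The source names CDONTS/MAPI with Visual Basic/ASP for sending the e-mails; purely as an illustrative stand-in, the same attach-and-send flow looks like this with Python's standard library. Addresses, subject, and filename are placeholders, not details from the source.

```python
import smtplib
from email.message import EmailMessage

def build_statement_email(pdf_bytes, to_addr,
                          from_addr="commissions@example.com"):
    """Assemble a statement e-mail with the generated pdf attached."""
    msg = EmailMessage()
    msg["Subject"] = "Monthly commissions statement"
    msg["From"] = from_addr
    msg["To"] = to_addr
    msg.set_content("Your commissions statement is attached.")
    msg.add_attachment(pdf_bytes, maintype="application",
                       subtype="pdf", filename="statement.pdf")
    return msg

def send_statement(msg, smtp_host="localhost"):
    """Route the message through the e-mail server over smtp."""
    with smtplib.SMTP(smtp_host) as server:
        server.send_message(msg)
```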
the login information may be entered by a user on a pc and conveyed to the processor 142 of the reporting manager 140 via the network 160 , such as the internet, for example. the user is authenticated and available document types are identified for that user, in step 1020 , as described above with respect to the authentication and report selection module 382 . the available document types are provided to the user for display, in step 1030 , by the reporting manager 140 and the web server 150 , for example. a selection is received from the user, in step 1040 , by the reporting manager 140 , via the user's pc, the network 160 , and the web server 150 . if the document has been generated already, the selected document is retrieved, in step 1050 , and displayed in step 1060 . if the document has not already been generated, it is generated by the report generator 384 and the converters 386 and/or 388 , as described above, and then displayed. fig. 12 is an example of a method 2000 for selecting and displaying an appropriate gui including available document types and/or categories, corresponding to steps 1020 and 1030 of fig. 11 . the selection is based on the login information. the proper login information for each type of user is stored in a separate table or tables. in step 2010 , it is determined whether the login information matches the identification of a valid sales rep. this may be done by comparing the login information to entries in a table correlating login information with valid sales reps. if there is a match, a sales rep selection page is displayed, in step 2020 . the sales rep selection page (gui) may be retrieved from memory 144 of the reporting manager 140 , for example, and conveyed to a user's display device via the internet, for example. if there is not a match in step 2010 , then it is determined whether the login information matches that of a valid territorial, district, regional, or alliance manager, in step 2030 . 
this may be done by comparing the login information to entries in one or more tables correlating login information with valid managers. if there is a match, then an appropriate gui for that manager, including the available document types, is displayed, in step 2040 . the gui may be retrieved from memory 144 , for example, and conveyed to the user's display device via the internet or other such network, for example. if there is no match in step 2030 , then the user is disconnected, in step 2050 . after display of the appropriate selection page in step 2020 or 2040 , the method 1000 returns to fig. 11 , step 1040 , to receive a selection through the displayed gui. it is noted that the method of fig. 12 is merely an example and the tables may be accessed in any order. fig. 13 is an example of a method 3000 for generating documents for e-mailing to users. in this example, documents are generated automatically and periodically. stored data is retrieved, in step 3010 . if the data is sales/mis data, it may be retrieved daily, for example. if the data is commissions data, it may be retrieved monthly, for example. the particular document or documents to be e-mailed are predetermined by the system, the alliance, and/or the managers, based on the user, and the appropriate data is retrieved for that document by the report generator 384 , for example. the data is formatted, in step 3020 . the formatted data is exported into a file, in step 3030 . in this example, the file may be a pdf or spreadsheet file. the user may have the option to select the file type, or the system 100 may determine the file type. the file is then e-mailed to authorized users, in step 3040 . the generated documents may be stored by the system 100 , in step 3050 , for later retrieval after a request by a user, as well. the documents may be stored in the memory 144 of the reporting manager 140 , for example. 
other systems may use some or all of this type of information and other information, depending upon the environment in which the system is used (the type of business, for example) and the particular details of the commission payment plans being implemented. the same or different documents may be generated. while the system 100 has been described above in the context of the credit and debit card industry, such a system may be used in any industry where commissions are paid. in addition, while described above with respect to a system hiring sales reps to represent alliances, the methods and systems described herein are applicable to systems hiring sales reps to represent any third party or to represent the system itself. the system and third parties may be involved in card processing, other financial transactions, or the sale of any types of products and services, whether financial or not. the embodiments above are examples of implementations of the invention. modifications may be made without departing from the spirit and scope of the invention, which is defined by the claims, below.
151-639-308-014-12X
GB
[ "RU", "CA", "WO", "US", "CN", "MX", "GB", "BR", "EP" ]
A23L27/40,A23D7/005,A23D7/01,A23L27/00,A23L29/10,A23P10/35,A23P10/30,A23P20/10,A23L1/00,A23L33/10,A23P20/18,A23D7/00,A23L5/00
2014-05-01T00:00:00
2014
[ "A23" ]
seasoning for snack products
field: food industry. substance: the presented method comprises topically applying said seasoning to a snack product. the seasoning is in the form of an emulsion and contains a plurality of seasoning particles and a continuous oil phase. the plurality of seasoning particles comprise a particulate phase within the continuous oil phase. the seasoning particles contain a shell surrounding an encapsulated core. the shell contains a matrix comprising at least one solid lipid and the core contains an aqueous solution of table salt with a salt concentration of from 0.1 m to a saturated aqueous solution of table salt. effect: more pronounced salty taste in a surface seasoning and a lower salt content in snack products. 38 cl, 2 ex
1. a topical seasoning for snack foods, the seasoning comprising a plurality of seasoning particles and a continuous oil phase, wherein the topical seasoning is in the form of an emulsion, the plurality of seasoning particles comprising a particulate phase within the continuous oil phase, the seasoning particles comprising a shell surrounding an encapsulated central core, wherein the shell comprises a matrix including at least one solid lipid and the core comprises an aqueous solution of sodium chloride which has a sodium chloride concentration of from 0.1m to a saturated aqueous solution of sodium chloride. 2. a topical seasoning according to claim 1 wherein the aqueous solution of sodium chloride has a sodium chloride concentration of from 1 to 6m. 3. a topical seasoning according to claim 2 wherein the aqueous solution of sodium chloride has a sodium chloride concentration of from 3 to 5.5m. 4. a topical seasoning according to any foregoing claim wherein the seasoning particle comprises 3 to 30 wt% sodium chloride based on the weight of the seasoning particle. 5. a topical seasoning according to any foregoing claim wherein the core comprises at least one other flavouring component. 6. a topical seasoning according to claim 5 wherein the at least one other flavouring component is in aqueous solution. 7. a topical seasoning according to any foregoing claim wherein the core is substantially spherical. 8. a topical seasoning according to any foregoing claim wherein the core has a maximum width dimension of from 2 to 100 μm. 9. a topical seasoning according to any foregoing claim wherein the core has a diameter which is from 15 to 96% of a diameter of the particle. 10. a topical seasoning according to any foregoing claim wherein the at least one solid lipid comprises at least one crystalline fat. 11. a topical seasoning according to any foregoing claim wherein the at least one solid lipid comprises at least one triglyceride. 12. 
a topical seasoning according to claim 11 wherein the at least one solid lipid comprises a mixture of at least one monoglyceride and at least one triglyceride. 13. a topical seasoning according to claim 12 wherein the mixture of at least one monoglyceride and at least one triglyceride comprises from 0.25 to 5 wt% monoglyceride(s) and from 95 to 99.75 wt% triglyceride(s), each amount being based on the total weight of the mixture of monoglyceride(s) and triglyceride(s). 14. a topical seasoning according to claim 13 wherein the mixture of at least one monoglyceride and at least one triglyceride comprises from 0.25 to 1 wt% monoglyceride(s) and from 99 to 99.75 wt% triglyceride(s), each amount being based on the total weight of the mixture of monoglyceride(s) and triglyceride(s). 15. a topical seasoning according to claim 13 or claim 14 wherein the monoglyceride comprises a saturated monoglyceride, which is a fatty acid glyceryl monoester, with the fatty acid chain having an average of from 10 to 20 carbon atoms. 16. a topical seasoning according to claim 15 wherein the monoglyceride comprises at least one of glycerol monolaurate, glycerol monostearate or a mixture thereof. 17. a topical seasoning according to any foregoing claim wherein the at least one solid lipid comprises a vegetable-based triglyceride. 18. a topical seasoning according to claim 17 wherein the vegetable-based triglyceride is an unsaturated fat derived from at least one of sunflower oil and cottonseed oil. 19. a topical seasoning according to claim 18 wherein the sunflower oil comprises at least 80 wt% oleic acid based on the total weight of fatty acids in the sunflower oil. 20. a topical seasoning according to any foregoing claim wherein the at least one solid lipid has a melting point of from 30 to 95 °c. 21. a topical seasoning according to any foregoing claim wherein the shell comprises at least one other flavouring component. 22. 
a topical seasoning according to claim 21 wherein the at least one other flavouring component is in a dispersion, suspension or solution in the matrix. 23. a topical seasoning according to any foregoing claim wherein the shell is substantially spherical. 24. a topical seasoning according to any foregoing claim wherein the shell has a maximum width dimension of from 10 to 150 μm and/or a wall thickness of from 1 to 75 μm. 25. a topical seasoning according to any foregoing claim wherein the shell has a wall thickness which is from 2 to 42.5% of a diameter of the particle. 26. a topical seasoning according to any foregoing claim wherein the at least one solid lipid comprises from 3 to 15 wt% of the total weight of the seasoning particle. 27. a topical seasoning according to any foregoing claim wherein the core comprises from 15 to 96 wt% of the total weight of the seasoning particle. 28. a topical seasoning according to any foregoing claim wherein the oil phase comprises at least one of a monoglyceride, a triglyceride or a mixture thereof. 29. a topical seasoning according to any foregoing claim wherein the oil phase comprises a vegetable-based oil. 30. a topical seasoning according to claim 29 wherein the vegetable-based oil is an unsaturated oil derived from at least one of sunflower oil and cottonseed oil. 31. a topical seasoning according to claim 30 wherein the sunflower oil comprises at least 80 wt% oleic acid based on the total weight of fatty acids in the sunflower oil. 32. a topical seasoning according to any foregoing claim wherein the at least one solid lipid comprises from 1 to 30 wt% of the total oil and fat content of the topical seasoning and the continuous oil phase comprises from 70 to 99 wt% of the total oil and fat content of the topical seasoning. 33. a topical seasoning according to any foregoing claim wherein the total oil and fat content of the topical seasoning is from 40 to 80 wt% of the total weight of the topical seasoning. 34. 
a topical seasoning according to any foregoing claim wherein sodium chloride comprises from 1 to 12 wt% of the total weight of the topical seasoning. 35. a method of topically seasoning a snack food, the method comprising topically applying the topical seasoning of any foregoing claim to a snack food. 36. a method according to claim 35 wherein the topical application is by spraying. 37. a snack food seasoned with the topical seasoning of any one of claims 1 to 34 or produced by the method of claim 35 or claim 36. 38. a snack food according to claim 37 wherein the application dosage of the topical seasoning provides a sodium chloride concentration of from 0.05 to 0.15 wt% sodium chloride based on the weight of the unseasoned snack food. 39. a snack food according to claim 38 wherein the core comprises an aqueous solution of sodium chloride which has a sodium chloride concentration of from 1 to 6m.
snack food seasoning the present invention relates to a topical seasoning for a snack food, such as a starch-based snack food, for example a potato chip or an expanded snack food produced from a starch-based snack food pellet. the present invention also relates to a snack food seasoned with a topical seasoning. it is well known to employ topical seasonings for flavouring snack foods, for example starch-based snack foods, typically in the form of snack chips, such as potato chips or expanded snack foods produced from a starch-based snack food pellet. on subsequent cooking, the pellet expands to produce an expanded low density porous snack food. known topical seasonings include sodium chloride, since many snack foods require a salty seasoning to meet the taste demands of the consumer. such topical seasonings include sodium chloride crystals, in micronised or larger particle dimension, mixed with other seasoning ingredients. there is a general desire to reduce the salt content of many foods, including processed foods such as snack foods. however, for snack foods, there is a problem of achieving a reduced sodium chloride content of the topical seasoning, and of the resultant snack food product, while also achieving the desired taste sensation required by consumers. the present invention aims to solve this problem with known topical seasonings for snack foods.
accordingly, the present invention provides a topical seasoning for snack foods, the seasoning comprising a plurality of seasoning particles and a continuous oil phase, wherein the topical seasoning is in the form of an emulsion, the plurality of seasoning particles comprising a particulate phase within the continuous oil phase, the seasoning particles comprising a shell surrounding an encapsulated central core, wherein the shell comprises a matrix including at least one solid lipid and the core comprises an aqueous solution of sodium chloride which has a sodium chloride concentration of from 0.1m to a saturated aqueous solution of sodium chloride. the present invention further provides a snack food seasoned with the topical seasoning of the present invention. the application dosage of the topical seasoning may provide a sodium chloride concentration of from 0.05 to 0.15 wt% sodium chloride based on the weight of the unseasoned snack food. the core may comprise an aqueous solution of sodium chloride which has a sodium chloride concentration of from 1 to 6m. the snack food may be composed of a cut vegetable piece, such as a slice, or may have been prepared from a dough which has been shaped into a desired shape. the snack food may be an expanded snack food prepared from a pellet. the snack food has been cooked, for example fried, baked, microwaved, directly extruded or popped. preferred features of all of these aspects of the present invention are defined in the dependent claims. embodiments of the present invention will now be described by way of example only, with reference to the accompanying drawings, in which: figure 1 is a schematic view of a seasoning particle in a topical seasoning for snack foods according to an embodiment of the present invention; and figure 2 is a flow chart showing steps in the production of the seasoning particle of figure 1.
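the application dosage above can be turned into a simple sizing calculation: given the salt content of the seasoning, how much seasoning must be applied per kilogram of unseasoned snack food. a minimal python sketch, assuming for illustration a seasoning containing 8 wt% sodium chloride (an invented value within the 1 to 12 wt% range recited in the claims):

```python
def seasoning_dose_g_per_kg(target_nacl_wt_pct, seasoning_nacl_wt_pct):
    """Grams of topical seasoning needed per kg of unseasoned snack food to
    reach a target NaCl level expressed as wt% of the unseasoned snack weight."""
    nacl_needed_g = target_nacl_wt_pct / 100.0 * 1000.0  # g NaCl per kg of snack
    return nacl_needed_g / (seasoning_nacl_wt_pct / 100.0)

# mid-range target of 0.1 wt% NaCl, with an assumed 8 wt% NaCl seasoning:
print(round(seasoning_dose_g_per_kg(0.1, 8.0), 2))  # 12.5 (g of seasoning per kg)
```

the same relation also shows why a more salt-potent seasoning permits a lower overall application dosage for the same perceived saltiness.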
the present invention is at least partly predicated on the finding by the present inventors that when sodium chloride is present in an encapsulated central core surrounded by a shell comprising a matrix including at least one solid lipid, this can form a seasoning particle which has particular application as a topical seasoning for snack foods. surprisingly it has been found that encapsulating the sodium chloride as an aqueous solution, typically having high molarity, within a lipid shell can provide a high salt taste in topical seasoning, thereby permitting lower salt content of snack food products to be achieved for an equivalent salty taste as compared to conventional crystalline salt seasonings. the lipid shell can rapidly dissolve in the mouth, as a result of elevated temperature and the aqueous environment provided by saliva, and then the sodium chloride is released into the mouth to provide an instant salt flavour which is desired by consumers. the seasoning particles are produced, and provided for the seasoning operation, in a liquid oil, for example sunflower oil, which acts as a sprayable liquid vehicle for controllably dispensing the seasoning particles onto the surface of the snack food. the use of a topical oil seasoning is well known in the snack food industry. the conventional spraying equipment can be used to dispense, in a single operation, both topical oil and seasoning particles containing sodium chloride. according to the present invention, there is provided a topical seasoning for snack foods. the seasoning comprises a plurality of seasoning particles. referring to figure 1 which shows a schematic view of a seasoning particle in a topical seasoning for snack foods according to an embodiment of the present invention, the seasoning particles 2 comprise a shell 4 surrounding an encapsulated central core 6, the shell 4 comprises a matrix 8 including at least one solid lipid and the core 6 comprises sodium chloride.
the core 6 comprises an aqueous solution of sodium chloride. the aqueous solution of sodium chloride has a sodium chloride concentration of from 0.1 m to a saturated aqueous solution of sodium chloride, optionally from 1 to 6m, further optionally from 3 to 5.5m. typically, the seasoning particle 2 comprises 3 to 30 wt% sodium chloride based on the weight of the seasoning particle 2. the core 6 may comprise at least one other flavouring ingredient, which may optionally be in aqueous solution. the shell 4 may comprise at least one other flavouring ingredient, which may optionally be in a dispersion, suspension or solution in the matrix 8. such additional ingredients are conventional in the snack food industry. typically, the core 6 is substantially spherical. the core 6 may have a maximum width dimension of from 2 to 100 μm. the core 6 may have a diameter which is from 15 to 96% of a diameter of the particle 2. typically, the shell 4 is substantially spherical. the shell 4 may have a maximum width dimension of from 10 to 150 μm and/or a wall thickness of from 1 to 75 μm. the shell 4 may have a wall thickness which is from 2 to 42.5% of a diameter of the particle 2. typically, the at least one solid lipid comprises at least one crystalline fat, for example the lipid comprises at least one triglyceride. most preferably, the at least one solid lipid comprises a mixture of at least one monoglyceride and at least one triglyceride. in such a mixture, the at least one solid lipid comprises from 0.25 to 5 wt%, optionally from 0.25 to 1 wt%, monoglyceride(s) and from 95 to 99.75 wt%, optionally from 99 to 99.75 wt%, triglyceride(s), each amount being based on the total weight of the mixture of monoglyceride(s) and triglyceride(s). as described hereinbelow with respect to the manufacture of the particle, the shell is stabilised by a monoglyceride emulsifier to form a pickering particle in an oil phase.
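for a concentric spherical particle, the wall-thickness and core-diameter ranges quoted above are mutually consistent: the wall occupies half of the diameter fraction not taken up by the core. a minimal sketch of that relationship (the concentric-geometry assumption is an inference from the figures, not an explicit statement in the source):

```python
def wall_thickness_fraction(core_diameter_fraction):
    """For a concentric spherical shell, the wall thickness as a fraction of the
    particle diameter is half of the diameter fraction not occupied by the core."""
    return (1.0 - core_diameter_fraction) / 2.0

# the end-points of the quoted ranges line up:
print(round(wall_thickness_fraction(0.15), 3))  # 0.425 -> wall is 42.5% of diameter
print(round(wall_thickness_fraction(0.96), 3))  # 0.02  -> wall is 2% of diameter
```

this matches the stated wall thickness range of 2 to 42.5% of the particle diameter against the core diameter range of 15 to 96%.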
the pickering particle comprises the monoglyceride- and triglyceride-containing lipid shell surrounding the core which comprises a sodium chloride-containing aqueous phase, or solid sodium chloride if water has been permitted to evaporate or leach from the core, thereby forming a water-in-oil emulsion with the shell surrounding the aqueous phase. the monoglyceride surfactant functions to form seed crystals during formation of the shell when the lipid mixture is cooled from the melt to below the melting temperature of the lipid components. these seed crystals are formed at, or migrate to, the interface of the aqueous phase and the oil phase in the water-in-oil emulsion. the seed crystals agglomerate together to form a shell surrounding an aqueous phase droplet. after the monoglyceride seed crystals have been formed, the triglyceride crystallises onto the seed crystals, thereby forming a coherent shell surrounding the aqueous phase core. although the presence of the monoglyceride surfactant, which acts as an emulsifier, causes fat crystals preferentially to agglomerate at the interface between the oil phase and the aqueous phase and thereby form a shell, some solid fat crystals may remain within the continuous oil phase of the emulsion and are not incorporated into a shell. preferably, the at least one solid lipid comprises a vegetable-based triglyceride, for example an unsaturated triglyceride derived from at least one of sunflower oil and cottonseed oil. preferably, a high oleic acid sunflower oil is employed. typically, the sunflower oil comprises at least 80 wt% oleic acid based on the total weight of fatty acids in the sunflower oil. in preferred embodiments, the at least one solid lipid has a melting point of from 30 to 95 °c. preferably, the at least one solid lipid also comprises a saturated monoglyceride, which is a fatty acid glyceryl monoester, with the fatty acid chain having an average of from 10 to 20 carbon atoms, typically an average of from 12 to 18 carbon atoms.
typically the monoglyceride has been distilled to provide the desired purity of the selected carbon chain length monoglyceride. the monoglyceride is typically derived from sunflower, rapeseed, palm and/or soya bean oil. alternatively a synthetic monoglyceride is employed. a typical monoglyceride is an emulsifier comprising glycerol monolaurate, glycerol monostearate or an emulsifier available in commerce from danisco, uk, under the trade name dimodan hp or dimodan p, or any mixture thereof. in some preferred embodiments, the at least one solid lipid comprises from 3 to 15 wt% of the total weight of the seasoning particle. in some preferred embodiments, the core comprises from 15 to 96 wt% of the total weight of the seasoning particle. the topical seasoning is in the form of an emulsion, with the plurality of seasoning particles comprising a particulate phase within a continuous oil phase. the oil phase may comprise at least one of a monoglyceride, a triglyceride or a mixture thereof, for example a vegetable-based oil, typically an unsaturated oil derived from at least one of sunflower oil and cottonseed oil. preferably, a high oleic acid sunflower oil is employed. preferably, the sunflower oil comprises at least 80 wt% oleic acid based on the total weight of fatty acids in the sunflower oil. the at least one solid lipid may comprise from 1 to 30 wt% of the total oil and fat content of the topical seasoning and the continuous oil phase comprises from 70 to 99 wt% of the total oil and fat content of the topical seasoning. typically, the total oil and fat content of the topical seasoning is from 40 to 80 wt% of the total weight of the topical seasoning. typically, sodium chloride comprises from 1 to 12 wt% of the total weight of the topical seasoning. a snack food may be seasoned with the topical seasoning of the present invention using any known seasoning technique or apparatus.
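the composition limits collected in this passage can be expressed as a simple consistency check for a candidate formulation. the sketch below is illustrative; the sample formulation values are invented, not taken from the source:

```python
# quoted composition limits (all values in wt%); the dict keys are invented
# shorthand names for this sketch.
LIMITS = {
    "solid_lipid_pct_of_fat": (1, 30),   # solid lipid, wt% of total oil + fat
    "oil_phase_pct_of_fat": (70, 99),    # continuous oil phase, wt% of total oil + fat
    "total_fat_pct": (40, 80),           # total oil + fat, wt% of the seasoning
    "nacl_pct": (1, 12),                 # sodium chloride, wt% of the seasoning
}

def check_formulation(values):
    """Return the names of any quantities outside their quoted range."""
    return [k for k, (lo, hi) in LIMITS.items() if not lo <= values[k] <= hi]

sample = {"solid_lipid_pct_of_fat": 10, "oil_phase_pct_of_fat": 90,
          "total_fat_pct": 60, "nacl_pct": 5}
print(check_formulation(sample))  # []
```

an empty list means the candidate formulation falls inside every quoted range.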
for example, since the topical seasoning is in the form of an emulsion, the emulsion may be sprayed from a spray head onto the cooked snack food prior to packaging. the topical seasoning of the present invention, as described above, may be made according to the following method, with reference to figure 2. aspects of this method for forming fat-crystal stabilised water-in-oil emulsions are disclosed in "fat-crystal stabilised w/o emulsions for controlled salt release", sarah frasch-melnik et al, journal of food engineering 98 (2010) 437-442, "w1/o/w2 double emulsions stabilised by fat crystals - formulation, stability and salt release", sarah frasch-melnik et al, journal of colloid and interface science 350 (2010) 178-185 and "fat-crystal stabilised water-in-oil emulsions as controlled release systems", maxime nadin et al, lwt-food science and technology 56 (2014) 248-255. although these publications disclose the production of stabilised water-in-oil emulsions for releasing salt, there is no disclosure or hint of the use of such emulsions as topical seasoning for snack food, or of the unexpected effect of the enhanced salt delivery of such topical seasonings. in a first step, at least one lipid, to form the shell, is heated so as to be liquefied. the at least one lipid comprises the composition described above with respect to figure 1. typically, the at least one lipid is heated to a temperature of at least 10 °c above the melting point of the lipid mixture, ideally at least 80 °c. then, as illustrated in figure 2a, an aqueous solution of sodium chloride, preferably preheated to the same temperature as that of the at least one lipid, is mixed with the liquefied at least one lipid to form a water-in-oil pre-emulsion 20 comprising aqueous phase particles or droplets 22, containing sodium chloride, in a continuous oil phase 24. the aqueous solution of sodium chloride has the concentration as described above with respect to figure 1.
typically the weight ratio of the aqueous solution to the at least one lipid is from 30:70 to 70:30, typically about 60:40. the pre-emulsion 20 is subjected to additional emulsification of the initial mixture to form a better dispersed water-in-oil emulsion comprising aqueous phase particles or droplets in a continuous oil phase, whilst simultaneously cooling the emulsion to form a plurality of seasoning particles in the oil phase. as illustrated in figure 2b, the emulsification and cooling form solid crystalline glyceride particles 26 in the oil phase. these constitute seed crystals 26. when the oil phase includes a mixture of a saturated monoglyceride and a triglyceride, as discussed above with respect to figure 1, the monoglyceride forms the seed crystals 26. these seed crystals 26 are formed at, or migrate to, the interface of the aqueous phase and the oil phase in the water-in-oil emulsion. as illustrated in figure 2c, the seed crystals 26 agglomerate together at the interface to form a shell 28 surrounding aqueous phase droplets 22. as illustrated in figure 2d, after the monoglyceride seed crystals 26 have agglomerated at the interface, the triglyceride crystallises onto the seed crystals 26, thereby forming a coherent shell 30 surrounding the aqueous phase core 32. the resultant seasoning particles 34 comprise a shell 30 surrounding an encapsulated central core 32. the shell 30 comprises a matrix 36 including the at least one solid lipid and the core 32 comprises the aqueous solution of sodium chloride. as described above, during the additional emulsification the cooling forms solid lipid crystals in the oil phase which migrate to an interface between the aqueous phase particles and the continuous oil phase, and the crystals agglomerate to form the shell. during the additional emulsification step, the emulsion is cooled to a temperature below 30 °c, for example below 25 °c.
typically, the additional emulsification is at least partially carried out in a cooling unit which is a scraped-surface heat exchanger having at least one surface cooled to a temperature of from 5 to 30 °c, optionally from 5 to 25 °c. the cooling unit may comprise an outer housing defining a cooling chamber and a rotating scraper mechanism within the chamber which scrapes solid lipid material off an inner surface of the outer housing. typically, the rotating scraper mechanism has a rotational speed of from 500 to 2000 rpm. after the additional emulsification, the plurality of seasoning particles in the oil phase may be passed, in a second mixing step, through a mixer to cause breakage of solid lipid linkages between seasoning particles. this may also reduce the particle size. the mixer may comprise a rotating pin stirrer. after the second mixing step, the resultant mixture is typically recycled through the additional emulsification step, which again simultaneously cools the emulsion. after the additional emulsification, the resultant mixture is recycled through the second mixing step. such recycling steps may cause further breakage of any solid lipid linkages between seasoning particles and may also further reduce the particle size. the product of the final mixing step may be an emulsion ready to use as a topical seasoning according to the present invention. examples the present invention is further illustrated with reference to the following non-limiting examples. comparative example 1 potato chips were coated in crystalline salt particles having a particle dimension representative of sea salt crystals typically used to season potato chips. the particles were dispensed onto the potato chips to provide a topical seasoning. the application dosage was selected to provide 0.3 wt% sodium chloride based on the weight of the unseasoned potato chips. the seasoned potato chips were subjected to a taste test.
the texture of the seasoned potato chips was determined on a scale of from 0 to 10, a score of 0 representing poor texture (for example softness, excessive oiliness, staleness, variable texture) and a score of 10 representing good texture (for example crispiness, low oiliness, freshness, consistent texture). the saltiness of the seasoned potato chips was also determined on a scale of from 0 to 10, a score of 0 representing an absence of a salty flavour and a score of 10 representing a highly salty flavour. the seasoned potato chips of comparative example 1 exhibited a texture score of 9 and a saltiness score of 7. there was a crunchy texture with an instant salt flavour sensation (i.e. a flavour "hit"). comparative example 2 potato chips were coated in crystalline salt particles having a particle dimension representative of micronised salt crystals typically used to season potato chips. the particles were dispensed onto the potato chips to provide a topical seasoning. the application dosage was selected to provide 0.3 wt% sodium chloride based on the weight of the unseasoned potato chips. the seasoned potato chips were subjected to the same taste test as for comparative example 1. the seasoned potato chips of comparative example 2 exhibited a texture score of 7 and a saltiness score of 2. there was good texture with a low salt flavour. example 1 a water-in-oil emulsion was formed comprising 60 wt% aqueous phase and 40 wt% oil phase. the aqueous phase comprised sodium chloride having a concentration of 5m. the oil phase comprised 99.5 wt% high oleic acid sunflower oil and 0.5 wt% dimodan hp, an emulsifier comprising a distilled saturated monoglyceride. the water-in-oil emulsion was heated to a temperature of 80 °c and mixed in a high shear mixer to form a pre-emulsion. 
the pre-emulsion was then passed through a scraped-surface heat exchanger (called an "a unit"), and subsequently through a pin stirrer (called a "c unit"), both apparatus being known in the art for making fat-containing emulsions in the food industry. the scraped-surface heat exchanger and the pin stirrer were both cooled by water at a temperature of 5 °c. the oil phase in the water-in-oil emulsion rapidly crystallised in the scraped-surface heat exchanger to form fat crystals, as discussed above, and the pin stirrer applied shear to cause phase inversion under cooling to prevent the fat crystals from melting. after exiting the pin stirrer, the water-in-oil emulsion was recycled back, in a second pass, through the scraped-surface heat exchanger and the pin stirrer to reduce droplet size and remove fat linkages between the droplets. the resulting product comprised water-in-oil emulsion seasoning particles in a continuous oil phase, the particles comprising a lipid shell surrounding an encapsulated central core comprising an aqueous solution of sodium chloride. the emulsion was sprayed onto potato chips to provide a topical seasoning. the application dosage was selected to provide 0.15 wt% sodium chloride based on the weight of the unseasoned potato chips. the seasoned potato chips were subjected to the same taste test. the seasoned potato chips of example 1 exhibited a texture score of 9 and a saltiness score of 7. there was good texture, and an instant salt flavour hit which faded. example 2 example 1 was repeated, but modified so as to use 0.5 wt% glycerol monolaurate as the saturated monoglyceride emulsifier. the emulsion was sprayed onto potato chips to provide a topical seasoning, and the application dosage was also modified to provide 0.06 wt% sodium chloride based on the weight of the unseasoned potato chips. the seasoned potato chips were subjected to the same taste test. the seasoned potato chips exhibited a texture score of 6 and a saltiness score of 6.
there was good texture and a good salt flavour. a comparison of examples 1 and 2 with comparative examples 1 and 2 shows that the topical seasoning of the present invention can readily be used to control the salty taste and texture of snack foods. moreover, the topical seasoning of the present invention can also achieve a similar salty taste to conventional crystalline salt, yet at significantly lower salt content. example 1 achieved an instant salt flavour hit, providing a higher salt flavour than the standard crystal salt used in comparative example 1, yet requiring only 50% of the salt content of the resultant seasoned snack food which was used in comparative example 1. example 2 achieved a good salt flavour, providing a higher salt flavour than the standard micronised salt used in comparative example 2, yet requiring only 20% of the salt content of the resultant seasoned snack food which was used in comparative example 2. surprisingly it has therefore been found that encapsulating the sodium chloride as an aqueous solution, typically having high molarity, within a lipid shell can provide a high salt taste in topical seasoning, thereby permitting lower salt content of snack food products to be achieved. the lipid shell can rapidly dissolve in the mouth, as a result of elevated temperature and the aqueous environment provided by saliva, and then the sodium chloride is released into the mouth to provide an instant salt flavour which is desired by consumers. various modifications to the present invention will be readily apparent to those skilled in the art.
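the salt economy shown in these examples can be bounded with a rough estimate of the salt loading of the example 1 emulsion (60 wt% aqueous phase at 5m sodium chloride). in the sketch below, the molar mass of nacl (58.44 g/mol) is a standard value, while the assumed density of a 5 m nacl solution (about 1.18 g/ml) is a typical literature-style figure, not stated in the source:

```python
NACL_MOLAR_MASS_G_PER_MOL = 58.44

def nacl_wt_pct(molarity, solution_density_g_per_ml):
    """Approximate wt% NaCl in an aqueous solution of the given molarity."""
    g_nacl_per_litre = molarity * NACL_MOLAR_MASS_G_PER_MOL
    g_solution_per_litre = solution_density_g_per_ml * 1000.0
    return 100.0 * g_nacl_per_litre / g_solution_per_litre

aq = nacl_wt_pct(5.0, 1.18)  # assumed density of ~1.18 g/ml for a 5 M solution
print(round(aq, 1))          # ~24.8 wt% NaCl in the aqueous phase
print(round(0.60 * aq, 1))   # ~14.9 wt% NaCl across the 60:40 emulsion
```

since some water may later evaporate or leach from the cores, this is an upper estimate of the finished seasoning's salt content.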
152-176-348-032-70X
KR
[ "US" ]
C23C16/455,H01J37/32,H01L21/67,H01L21/673,C23C16/458,H01L21/687
2018-11-02T00:00:00
2018
[ "C23", "H01" ]
substrate supporting unit and a substrate processing device including the same
a substrate processing device capable of preventing deformation of a substrate during a process includes a substrate supporting unit having a contact surface that comes into contact with an edge of a substrate to be processed, wherein the substrate supporting unit includes a protruding (e.g. embossed) structure protruding from a base to support deformation from the inside of the edge of the substrate to be processed.
1. a substrate processing device having a substrate supporting unit configured to accommodate a substrate, the substrate supporting unit comprising: a base; a first protrusion protruding from the base to a first height, the first protrusion defining, in part, a recess within the substrate supporting unit; and a second protrusion within the recess and protruding from the base to a second height less than the first height, wherein the first protrusion comprises a contact surface to receive an edge of a substrate and a sealing surface configured to contact a reactor wall, wherein the contact surface defines a top of the recess, wherein the substrate is received above the recess, wherein a height of the contact surface is greater than the second height, and wherein the first protrusion surrounds the second protrusion. 2. the substrate processing device of claim 1 , wherein the base comprises a first region corresponding to the edge of the substrate, and the first protrusion is adjacent to the first region. 3. the substrate processing device of claim 2 , wherein the base further comprises a second region corresponding to a center of the substrate, and the second protrusion is adjacent to the second region. 4. the substrate processing device of claim 1 , wherein the second protrusion is electrically connected to a radio frequency (rf) power supply. 5. the substrate processing device of claim 1 , wherein a height of the sealing surface is different than the height of the contact surface. 6. 
the substrate processing device of claim 1 , wherein the substrate supporting unit is configured to: support the substrate to be processed through the contact surface of the first protrusion during a first operation of the substrate processing device, and support the substrate to be processed through both of the contact surface of the first protrusion and the upper surface of the second protrusion during a second operation of the substrate processing device, wherein active species are formed when an electric power is supplied between a gas supply unit and the substrate supporting unit, and wherein the second protrusion is configured to relocate the active species adjacent to the second protrusion such that the active species are arranged around a center of the substrate to be processed, wherein the substrate processing device further comprises a suction force generator generating a suction force, wherein, when the substrate to be processed is deformed due to the suction force, the deformed substrate is in line contact with the first protrusion and at the same time in contact with the second protrusion so that the line contact of the substrate by the first protrusion is maintained to prevent flow of reactive gas into the backside of the substrate to be processed and the second protrusion supports the substrate to be processed during the deformation. 7. the substrate processing device of claim 1 , wherein the contact surface and the sealing surface are about the same height. 8. the substrate processing device of claim 1 , wherein the substrate supporting unit is configured to support the substrate to be processed through the contact surface of the first protrusion during a first operation of the substrate processing device. 9. 
the substrate processing device of claim 8 , wherein the substrate supporting unit is configured to support the substrate to be processed through both of the contact surface of the first protrusion and the upper surface of the second protrusion during a second operation of the substrate processing device. 10. the substrate processing device of claim 1 wherein active species are formed when an electric power is supplied between a gas supply unit and the substrate supporting unit, wherein the second protrusion is configured to relocate the active species adjacent to the second protrusion such that the active species are arranged around a center of the substrate to be processed. 11. the substrate processing device of claim 1 , wherein the substrate processing device further comprises a suction force generator generating a suction force, wherein, when the substrate to be processed is deformed due to the suction force, the deformed substrate is in line contact with the first protrusion and at the same time in contact with the second protrusion. 12. the substrate processing device of claim 11 , wherein the line contact of the substrate by the first protrusion is maintained to prevent flow of reactive gas into the backside of the substrate to be processed and the second protrusion supports the substrate to be processed during the deformation. 13. the substrate processing device of claim 1 , wherein the second protrusion comprises a conductive material. 14. the substrate processing device of claim 1 , further comprising: a heater block below the substrate supporting unit, wherein a positioning hole is formed in the center of the base, and a position fixing pin is inserted into the positioning hole and a position of the base with respect to the heater block is fixed, wherein a plurality of second protrusions are symmetrically distributed with respect to the positioning hole. 15. 
the substrate processing device of claim 1 , wherein at least a part of an upper surface of the second protrusion is rounded. 16. the substrate processing device of claim 1 , wherein the substrate supporting unit is an edge-contact susceptor (ecs). 17. the substrate processing device of claim 1 , wherein the second protrusion is located at a center of the substrate supporting unit. 18. the substrate processing device of claim 1 , wherein the substrate supporting unit comprises positioning holes that are symmetrically distributed with respect to the second protrusion. 19. a substrate processing device having a substrate supporting unit configured to accommodate a substrate and a gas supply unit above the substrate supporting unit, the substrate supporting unit comprising: a base; a first protrusion protruding from the base to a first height; and a second protrusion protruding from the base to a second height less than the first height, wherein the first protrusion surrounds the second protrusion, wherein the first protrusion extends toward a peripheral area of the gas supply unit, and wherein the second protrusion extends toward a central area of the gas supply unit. 20. 
a substrate processing device having a substrate supporting unit configured to accommodate a substrate and a gas supply unit above the substrate supporting unit, the substrate supporting unit comprising: a base; a first protrusion adjacent to a periphery of the base and protruding from the base to a first height; and a second protrusion adjacent to a center of the base and protruding from the base to a second height less than the first height, wherein the first protrusion surrounds the second protrusion, wherein an upper surface of the first protrusion comprises a contact surface that comes into contact with an edge of a substrate to be processed, wherein the second protrusion has an upper surface lower than the contact surface of the first protrusion, wherein the substrate supporting unit is configured to: support the substrate to be processed through the contact surface of the first protrusion during a first operation of the substrate processing device, and support the substrate to be processed through both of the contact surface of the first protrusion and the upper surface of the second protrusion during a second operation of the substrate processing device, wherein active species are formed when an electric power is supplied between the gas supply unit and the substrate supporting unit, and wherein the second protrusion is configured to relocate the active species adjacent to the second protrusion such that the active species are arranged around a center of the substrate to be processed, wherein the substrate processing device further comprises a suction force generator generating a suction force, wherein, when the substrate to be processed is deformed due to the suction force, the deformed substrate is in line contact with the first protrusion and at the same time in contact with the second protrusion so that the line contact of the substrate by the first protrusion is maintained to prevent flow of reactive gas into the backside of the substrate to be 
processed and the second protrusion supports the substrate to be processed during the deformation.
cross-reference to related application this application is a continuation of, and claims priority to, u.s. patent application ser. no. 16/671,847 filed nov. 1, 2019 titled substrate supporting unit and a substrate processing device including the same; which claims the benefit of korean patent application no. 10-2018-0133838, filed on nov. 2, 2018, the disclosures of which are hereby incorporated by reference in their entirety. background 1. field one or more embodiments relate to a substrate supporting unit and a substrate processing device including the same, and more particularly, to a substrate supporting unit capable of preventing deformation of a substrate and realizing a symmetrical thin-film profile, and a substrate processing device including the substrate supporting unit. 2. description of the related art the size of a semiconductor device is continuously shrinking, and accordingly, the importance of precise control of a thin film processed (e.g., deposited) on a substrate is also increasing. as an example of the precise control, atomic layer deposition (ald) has been used as a technique to realize the precise control of the thin film in which atomic layer-sized thin films are formed layer-by-layer by sequentially and alternately supplying two or more reactive gases onto the substrate. through such an atomic layer deposition process, thin films may be uniformly and precisely deposited on the surface of a substrate having a complicated step structure. further, by applying a plasma atomic layer deposition process in which at least one reactive gas is excited by plasma, a thin film may be deposited at a lower temperature, thereby improving the reliability of a semiconductor device. meanwhile, in a plasma process, it is very important to generate plasma uniformly on a substrate.
in order to generate uniform plasma in a reaction space on the substrate, it is preferable to arrange a radio frequency (rf) rod for supplying an rf current at the center of an upper electrode, for example, an upper surface of a showerhead. however, due to mutual physical interference by a gas supply port at the center of the upper surface of the showerhead, the arrangement of such an rf rod is substantially difficult. summary one or more embodiments include a device capable of overcoming the difficulty of disposing the rf rod at the center portion of the upper electrode described above to create uniform plasma on a substrate and deposit a uniform thin film. one or more embodiments include a device capable of preventing excessive deformation of a substrate that may occur during use of an edge-contact susceptor (ecs). additional aspects will be set forth in part in the description which follows and, in part, will be apparent from the description, or may be learned by practice of the presented embodiments. according to one or more embodiments, a substrate processing device includes: a gas supply unit; and a substrate supporting unit below the gas supply unit, wherein the substrate supporting unit includes: a base; a first protrusion adjacent to a periphery of the base and protruding to a first height; and a second protrusion adjacent to a center of the base and protruding to a second height less than the first height. the second protrusion may include a conductive material. the second protrusion may be electrically connected to ground. the second protrusion may be electrically connected to a radio frequency (rf) power supply. the substrate processing device may further include: a heater block below the substrate supporting unit, wherein a positioning hole may be formed in the center of the base, and a position fixing pin may be inserted into the positioning hole and a position of the base with respect to the heater block is fixed. 
a plurality of second protrusions may be symmetrically distributed with respect to the positioning hole. the first protrusion may include a contact surface that comes into contact with an edge of a substrate to be processed. the substrate processing device may further include: a suction force generator generating a suction force such that a backside of the substrate to be processed faces the base. the substrate to be processed may be deformed due to the suction force, and the second protrusion may support the substrate to be processed during the deformation. the substrate to be processed may be deformed due to a temperature change of the substrate processing device, and the second protrusion may support the substrate to be processed during the deformation. active species may be formed when an rf power is supplied between the gas supply unit and the substrate supporting unit, and the active species may be distributed adjacent to the second protrusion by an electric field concentrated on the second protrusion. according to one or more embodiments, a substrate supporting unit configured to accommodate a substrate includes: a base; a first protrusion protruding from the base to a first height; and a second protrusion protruding from the base to a second height less than the first height, wherein the first protrusion surrounds the second protrusion. the base may include a first region corresponding to an edge of the substrate, and the first protrusion may be adjacent to the first region. the base may further include a second region corresponding to a center of the substrate, and the second protrusion may be adjacent to the second region. according to one or more embodiments, a substrate processing device includes: a substrate supporting unit having a contact surface that comes into contact with an edge of a substrate to be processed, wherein the substrate supporting unit is configured to support deformation inside the edge of the substrate to be processed. 
the substrate supporting unit may be an ecs. the substrate supporting unit may include: a first protrusion having the contact surface; and a second protrusion supporting the deformation. the second protrusion may have an upper surface lower than an upper surface of the first protrusion. the substrate supporting unit may be configured such that active species arranged on the substrate to be processed are arranged around a center of the substrate to be processed. the substrate supporting unit may include: a first protrusion having the contact surface; and a second protrusion affecting an arrangement of the active species. brief description of the drawings these and/or other aspects will become apparent and more readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings in which: figs. 1 and 2 are views of substrate processing devices according to embodiments; figs. 3a and 3b are views of substrate supporting units, fig. 3a being a view of a conventional supporting unit and fig. 3b being a view of a substrate supporting unit according to embodiments; figs. 4a and 4b are views of substrate supporting units according to embodiments; figs. 5a and 5b show a thin-film profile when a sio 2 thin film is deposited on a substrate mounted on the substrate processing device of fig. 1 , by a plasma-enhanced atomic layer deposition (peald) method; figs. 6a and 6b are views showing the density of charges around embossings according to the polarity of electrodes when the second protrusion is grounded; and figs. 7a and 7b are views showing the density of charges around embossings according to the polarity of electrodes when the second protrusion is connected to a radio frequency (rf) power supply. detailed description hereinafter, one or more embodiments will be described more fully with reference to the accompanying drawings. in this regard, the present embodiments may have different forms and should not be construed as being limited to the descriptions set forth herein.
rather, these embodiments are provided so that the present disclosure will be thorough and complete, and will fully convey the scope of the present disclosure to one of ordinary skill in the art. the terminology used herein is for the purpose of describing particular embodiments and is not intended to limit the present disclosure. as used herein, the singular forms “a”, “an”, and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. it will be further understood that the terms “includes”, “comprises” and/or “including”, “comprising” used herein specify the presence of stated features, integers, steps, operations, members, components, and/or groups thereof, but do not preclude the presence or addition of one or more other features, integers, steps, operations, members, components, and/or groups thereof. as used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items. expressions such as “at least one of,” when preceding a list of elements, modify the entire list of elements and do not modify the individual elements of the list. it will be understood that, although the terms first, second, etc. may be used herein to describe various members, components, regions, layers, and/or sections, these members, components, regions, layers, and/or sections should not be limited by these terms. these terms do not denote any order, quantity, or importance, but rather are only used to distinguish one component, region, layer, and/or section from another component, region, layer, and/or section. thus, a first member, component, region, layer, or section discussed below could be termed a second member, component, region, layer, or section without departing from the teachings of embodiments. in the present disclosure, “gas” may include evaporated solids and/or liquids and may include a single gas or a mixture of gases.
in the present disclosure, the process gas introduced into a reaction chamber through a gas supply unit may include a precursor gas and an additive gas. the precursor gas and the additive gas may typically be introduced as a mixed gas or may be separately introduced into a reaction space. the precursor gas may be introduced together with a carrier gas such as an inert gas. the additive gas may include a reactant gas and a dilution gas such as an inert gas. the reactant gas and the dilution gas may be mixedly or separately introduced into the reaction space. the precursor may include two or more precursors, and the reactant gas may include two or more reactant gases. the precursor may be a gas that is chemisorbed onto a substrate and typically contains metalloid or metal elements constituting a main structure of a matrix of a dielectric film, and the reactant gas for deposition may be a gas that reacts with the precursor chemisorbed onto the substrate when excited to fix an atomic layer or a monolayer on the substrate. the term “chemisorption” may refer to chemical saturation adsorption. a gas other than the process gas, that is, a gas introduced without passing through the gas supply unit, may be used to seal the reaction space, and it may include a seal gas such as an inert gas. in some embodiments, the term “film” may refer to a layer that extends continuously in a direction perpendicular to a thickness direction without substantially having pinholes to cover an entire target or a relevant surface, or may refer to a layer that simply covers a target or a relevant surface. in some embodiments, the term “layer” may refer to a structure, or a synonym of a film, or a non-film structure having any thickness formed on a surface.
the film or layer may include a discrete single film or layer or multiple films or layers having some characteristics, and the boundary between adjacent films or layers may be clear or unclear and may be set based on physical, chemical, and/or some other characteristics, formation processes or sequences, and/or functions or purposes of the adjacent films or layers. in the present disclosure, the expression “same material” should be interpreted as meaning that main components (constituents) are the same. for example, when a first layer and a second layer are both silicon nitride layers and are formed of the same material, the first layer may be selected from the group consisting of si2n, sin, si3n4, and si2n3 and the second layer may also be selected from the above group but a particular film quality thereof may be different from that of the first layer. additionally, in the present disclosure, any two values of a variable may constitute a workable range of that variable, as a workable range may be determined based on routine work, and any indicated range may include or exclude its end points. additionally, the values of any indicated variables may refer to exact values or approximate values (regardless of whether they are indicated as “about”), may include equivalents, and may refer to an average value, a median value, a representative value, a majority value, or the like. in the present disclosure where conditions and/or structures are not specified, those of ordinary skill in the art may easily provide these conditions and/or structures as a matter of customary experiment in the light of the present disclosure. in all described embodiments, any component used in an embodiment may be replaced with any equivalent component thereof, including those explicitly, necessarily, or essentially described herein, for intended purposes, and in addition, the present disclosure may be similarly applied to devices and methods.
hereinafter, embodiments of the present disclosure will be described with reference to the accompanying drawings. in the drawings, variations from the illustrated shapes may be expected as a result of, for example, manufacturing techniques and/or tolerances. thus, the embodiments of the present disclosure should not be construed as being limited to the particular shapes of regions illustrated herein but may include deviations in shapes that result, for example, from manufacturing processes. fig. 1 schematically shows a substrate processing device according to embodiments. although a deposition device or an etching device of a semiconductor or a display substrate is described herein as the substrate processing device, it is to be understood that the present disclosure is not limited thereto. the substrate processing device may be any device necessary for processing a substrate. referring to fig. 1 , the substrate processing device may include a reactor wall 110 , a gas supply unit 120 , a substrate supporting unit 130 , a heater block h, an exhaust passage 140 , and an rf rod r. the reactor wall 110 may be a component of a reactor in the substrate processing device. in other words, a reaction space for deposition of the substrate may be formed by the reactor wall 110 . for example, the reactor wall 110 may include a sidewall and/or upper wall of the reactor. the upper wall of the reactor in the reactor wall 110 may provide a gas supply channel 150 through which source gas, purge gas, and/or reaction gas may be supplied. the gas supply unit 120 may be on the substrate supporting unit 130 . the gas supply unit 120 may be connected to the gas supply channel 150 . the gas supply unit 120 may be fixed to the reactor. for example, the gas supply unit 120 may be fixed to the reactor wall 110 via a fixing member (not shown). the gas supply unit 120 may be configured to supply gas to an object to be processed in a reaction space 160 .
for example, the gas supply unit 120 may be a showerhead assembly. a gas flow channel 170 communicating with the gas supply channel 150 may be formed in the gas supply unit 120 . the gas flow channel 170 may be formed between a gas channel 125 (upper portion) of the gas supply unit 120 and a gas supply plate 127 (lower portion) of the gas supply unit 120 . although the gas channel 125 and the gas supply plate 127 are shown as separate structures in the drawings, the gas channel 125 and the gas supply plate 127 may be formed in an integrated structure. the substrate supporting unit 130 may be under the gas supply unit 120 . the substrate supporting unit 130 may perform a function of supporting a substrate s to be processed. further, the substrate supporting unit 130 may function as an electrode. for example, an rf power may be transferred to the reaction space 160 through the substrate supporting unit 130 , thereby forming plasma in the reaction space. by the rf power supplied through the substrate supporting unit 130 , a potential (e.g., a negative potential) may be formed on a substrate exposed in the reaction space. for example, the substrate supporting unit 130 may be connected to a plasma generation unit (not shown), and an rf power generated by the plasma generation unit may be supplied to the substrate s to be processed in the reaction space by the substrate supporting unit 130 . as a result, plasma may be formed in the reaction space between the substrate supporting unit 130 and the gas supply unit 120 . the substrate supporting unit 130 may be configured to contact a lower surface of the reactor wall 110 to form a reaction space. to this end, the substrate supporting unit 130 may include a sealing surface c 1 that is in contact with the lower surface of the reactor wall 110 . furthermore, the substrate supporting unit 130 may be configured to provide a space to which the substrate s to be processed is stably loaded. 
to this end, the substrate supporting unit 130 may include a contact surface c 2 that comes into contact with an edge of the substrate s to be processed. in an alternative embodiment, the sealing surface c 1 and the contact surface c 2 may be formed at different levels. in some embodiments, the substrate supporting unit 130 may be an edge-contact susceptor (ecs). in some embodiments, the substrate supporting unit 130 may include a first protrusion p 1 that provides at least one of the sealing surface c 1 and the contact surface c 2 . the first protrusion p 1 may protrude adjacent to the periphery of a base b of the substrate supporting unit 130 . the first protrusion p 1 may protrude to a first height. the first height may be a height from the base b to the contact surface c 2 . in other words, the first height may be defined as the same height as a rear surface of the substrate s to be processed which is loaded on the substrate supporting unit 130 . the first protrusion p 1 may include the same material as that of the base b. for example, when the base b includes a metal (e.g., aluminum), the first protrusion p 1 may also include a metal (e.g., aluminum). in another embodiment, the first protrusion p 1 may include a material different from that of the base b. for example, the base b may include a metal, while the first protrusion p 1 may include ceramics. a first portion of the first protrusion p 1 may include the contact surface c 2 that comes into contact with the edge of the substrate s to be processed. a surface of the substrate s to be processed and a surface of the first protrusion p 1 may contact each other through the contact surface c 2 . in an alternative embodiment, when a width of the contact surface contacting the substrate s is less than or equal to a certain threshold value, the contact surface c 2 contacting the substrate s may also be referred to as a contact line. 
when such a contact line is formed, the two faces are defined to be in line contact. such a contact line formed by the line contact may have the form of a ring corresponding to the thin substrate s to be processed (e.g., a continuous or non-continuous ring). alternatively, the line contact may occur at a corner portion of the first protrusion p 1 . a second portion of the first protrusion p 1 may include the sealing surface c 1 that comes into contact with the lower surface of the reactor wall 110 . a reaction space may be formed by coupling the reactor wall 110 and the first protrusion p 1 through the sealing surface c 1 . in an embodiment, the sealing surface c 1 and the contact surface c 2 may be formed at an identical level. that is, the sealing surface c 1 and the contact surface c 2 may be formed on an identical plane. in another embodiment, the sealing surface c 1 and the contact surface c 2 may be formed at different levels. that is, the sealing surface c 1 and the contact surface c 2 may be formed on different planes. for example, as shown in fig. 1 , the sealing surface c 1 may be formed at a level higher than the contact surface c 2 , or the sealing surface c 1 may be formed at a level lower than the contact surface c 2 . the substrate supporting unit 130 may be configured to support deformation from the inside of the edge (e.g., a center of the substrate s) of the substrate s to be processed. to this end, the substrate supporting unit 130 may further include a second protrusion p 2 different from the first protrusion p 1 . the second protrusion p 2 may be disposed in the inner portion of the substrate supporting unit 130 compared to the first protrusion p 1 . in other words, the first protrusion p 1 may surround the second protrusion p 2 in a horizontal direction (i.e., a direction in which the base b extends). the second protrusion p 2 may be adjacent to the center of the base b.
that is, the second protrusion p 2 may be closer to the center of the base b than to an edge of the base b. for example, when the base b includes a first region corresponding to an edge of the substrate and a second region corresponding to the center of the substrate, the first protrusion p 1 may be adjacent to the first region and the second protrusion p 2 may be adjacent to the second region. the second protrusion p 2 may include the same material as that of the base b or may include a material different from that of the base b. for example, when the base b is a metal (e.g., aluminum), the second protrusion p 2 may also include a metal (e.g., aluminum). in another embodiment, the base b may include a metal, while the second protrusion p 2 may include ceramics. in another embodiment, the base b may include ceramics, while the second protrusion p 2 may include a metal. the second protrusion p 2 may be configured to support deformation of the substrate (e.g., deformation of the substrate in a direction of the base b). to this end, the second protrusion p 2 may protrude to have a lower height than the first protrusion p 1 . this height may be defined as a height lower than the rear surface of the substrate s to be processed. since the second protrusion p 2 has an upper surface lower than an upper surface of the first protrusion p 1 (i.e., the contact surface c 2 ), the substrate may be supported by the second protrusion p 2 when the substrate is bent downward due to a temperature change, gravity, and/or suction force. in more detail, the substrate s to be processed may be deformed due to a temperature change of the substrate processing device. in this case, the second protrusion p 2 may support the substrate s to be processed during deformation due to the temperature change.
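as a purely illustrative aside (not part of the disclosure), the scale of such downward bowing can be estimated with the sagitta formula for a spherically bowed wafer, sag ≈ r²/(2R), which in turn suggests how far below the contact surface c 2 the top of the second protrusion could sit. all numbers and names below are hypothetical, chosen only to show the calculation:

```python
import math

def center_sag(curvature_radius_m, wafer_radius_m):
    """Deflection at the wafer center for a spherical bow (exact sagitta)."""
    R, r = curvature_radius_m, wafer_radius_m
    return R - math.sqrt(R * R - r * r)

# Hypothetical example: a 300 mm wafer (half-width r = 0.15 m) bowed to a
# 50 m radius of curvature sags roughly 0.225 mm at its center.
sag_m = center_sag(50.0, 0.15)

# For the second protrusion to catch the wafer before the bow exceeds an
# allowed limit, its top must sit no further below the contact surface c2
# than that limit (all heights hypothetical, in millimeters):
first_height_mm = 0.5          # contact surface c2 above the base b
max_allowed_sag_mm = 0.15      # deformation budget for the wafer
second_height_mm = first_height_mm - max_allowed_sag_mm
```

for shallow bows the small-angle approximation r²/(2R) agrees with the exact sagitta to well below a micrometer, so either form could be used when picking the protrusion heights.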
in another example, suction force may be generated by a suction force generator included in the substrate processing device, thereby causing deformation of the substrate s to be processed. for example, the suction force may be generated such that a backside of the substrate s to be processed faces the base b, and in this case, the substrate s to be processed may be bent in the direction of the base b. in this case, the second protrusion p 2 may support the substrate s to be processed during deformation due to the suction force. as such, when the substrate s to be processed is deformed to have a certain curvature toward the base b under a certain temperature and/or suction force, the deformed substrate may be in line contact with the first protrusion p 1 and at the same time in contact with the second protrusion p 2 . accordingly, the line contact of the substrate s to be processed by the first protrusion p 1 is maintained to prevent flow of reactive gas into the rear surface of the substrate s, and excessive bending (deformation) of the substrate may be prevented by the second protrusion p 2 . in addition to the above-described support function, the second protrusion p 2 may also function to relocate active species in the reaction space. to this end, the second protrusion p 2 may include a conductive material. in an embodiment, the second protrusion p 2 may be electrically connected to ground (see fig. 6 ). in another embodiment, the second protrusion p 2 may be electrically connected to a radio frequency (rf) power supply (see fig. 7 ). during a plasma process, an electric power (e.g., rf power) may be supplied between the gas supply unit 120 and the substrate supporting unit 130 , and active species may be formed by the power. meanwhile, in a case of a substrate supporting unit 130 to which electric power (e.g., rf power) is supplied, an electric field may be concentrated on the second protrusion p 2 including a conductive material.
due to the concentration of such an electric field, the active species arranged on the substrate s to be processed may be distributed adjacent to the second protrusion p 2 . the second protrusion p 2 is adjacent to the center of the base b so that the active species may be arranged around the center of the substrate s to be processed. thus, substrate processing by the active species may be performed symmetrically around the center of the substrate s to be processed. as such, the second protrusion p 2 may affect the arrangement of the active species. in an alternative embodiment, a plurality of second protrusions p 2 may be arranged symmetrically around the center of the substrate supporting unit 130 , for symmetrical arrangement of the active species. the second protrusions p 2 may be in a non-continuous form (e.g., in the form of embossing) or may be in a continuous form (e.g., in the form of a ring). although not shown in the drawings, the second protrusions p 2 may be arranged at the center of the base b. that is, the second protrusions p 2 may be arranged such that the center of symmetry of the second protrusions p 2 and the center of the base b coincide with each other. as such, according to embodiments of the present disclosure, excessive deformation of the substrate may be prevented in vacuum suction and high temperature processes by forming a protrusion in a central portion of an ecs pedestal. further, a thin-film processing process having a more symmetrical thin-film profile in the plasma process may be performed. referring again to fig. 1 , the substrate supporting unit 130 may be supported by a body 200 , and the body 200 may be moved up and down and rotated. the substrate supporting unit 130 is separated from the reactor wall 110 or brought into contact with the reactor wall 110 by the up and down movement of the body 200 so that the reaction space 160 may be opened or closed. processing (e.g., deposition, etching, etc.)
on the substrate may be performed in the reaction space 160 . gas and/or reaction residues and the like supplied through the gas supply unit 120 for treatment may be exhausted through the exhaust passage 140 . for example, the exhaust passage 140 may be connected to an exhaust pump (not shown), and the gas and/or reaction residues may be exhausted by the exhaust pump. it should be noted that although the exhaust passage 140 of an upstream exhaust structure is shown in the drawings, the present disclosure is not limited thereto. in other words, an exhaust structure of the substrate processing device may be configured as a downstream exhaust structure. the substrate supporting unit 130 may further include the heater block h below the substrate supporting unit 130 . that is, the substrate supporting unit 130 may be between the substrate s and the heater block h. in some embodiments, an insulating material may be disposed between the substrate supporting unit 130 and the heater block h. in an alternative embodiment, the insulating material may include aluminum nitride. in another alternative embodiment, the insulating material may be a low dielectric constant material such as air. in a further embodiment, a positioning hole x may be formed in the center of the base b of the substrate supporting unit 130 . a position fixing pin (not shown) may be inserted into the positioning hole x and a position of the base b with respect to the heater block h may be fixed by the position fixing pin. in this case, the second protrusions p 2 may be symmetrically distributed with respect to the positioning hole x (see fig. 4 ). an rf rod r may be connected to the gas supply unit 120 through at least a portion of the reactor wall 110 . the rf rod r may be connected to an external plasma supply (not shown). although two rf rods r are shown in fig.
1 , the present disclosure is not limited thereto, and two or more rf rods r may be symmetrically installed to improve uniformity of plasma power supplied to the reaction space 160 . furthermore, although not shown in the drawings, an insulator (not shown) may be between the rf rod r and the reactor wall 110 to block electrical connection between the rf rod r and the reactor wall 110 . fig. 2 schematically shows a substrate processing device according to embodiments. the substrate processing device according to the embodiments may be a variation of the above-described substrate processing device according to the embodiments. hereinafter, repeated descriptions of the embodiments will not be given herein. referring to fig. 2 , fig. 2 is different from fig. 1 in that there is no rf rod r. this may occur in a configuration for supplying rf power through a lower electrode instead of supplying the rf power through the gas supply unit 120 which is an upper electrode. for example, the rf power may be supplied from the bottom of a reactor through the base b of the substrate supporting unit 130 , the first protrusion p 1 , and/or the second protrusion p 2 . by exciting reactive gas by the rf power, plasma is generated in the reaction space, in more detail, on the substrate s to be processed. by supplying the rf power through the substrate supporting unit 130 from the bottom of the reactor as described above, radicals in the reaction space may be accelerated toward the bottom (i.e., the substrate s to be processed) rather than the top (i.e., the gas supply unit 120 ) of the reactor. furthermore, according to some embodiments, the second protrusion p 2 may be formed at the center of the base b of the substrate supporting unit 130 , as shown in fig. 2 . in a further embodiment, symmetrically disposed positioning holes x may be formed around the second protrusion p 2 . 
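the symmetric placement just described — features whose center of symmetry coincides with the center of the base — can be sketched numerically. `symmetric_positions` is a hypothetical helper, and the count and radius below are purely illustrative, not dimensions stated in the disclosure.

```python
import math

def symmetric_positions(n, radius, center=(0.0, 0.0)):
    """Place n features evenly on a circle about the given center so
    that their center of symmetry coincides with that center."""
    cx, cy = center
    step = 2 * math.pi / n
    return [(cx + radius * math.cos(k * step),
             cy + radius * math.sin(k * step)) for k in range(n)]

# e.g. four positioning holes around a central second protrusion;
# their centroid falls back on the center of the base
holes = symmetric_positions(4, 5.0)
centroid = (sum(x for x, _ in holes) / len(holes),
            sum(y for _, y in holes) / len(holes))
```

any such evenly spaced ring layout has its centroid at the base center, which is the geometric condition the passage relies on for a symmetrical distribution of active species.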
by inserting the position fixing pin into each of the positioning holes x, a position of the base b with respect to the heater block h may be fixed. in this case, the positioning holes x may be symmetrically distributed with respect to the second protrusion p 2 . in addition, according to some other embodiments, the sealing surface c 1 of the substrate supporting unit 130 may be formed lower than the contact surface c 2 of the substrate supporting unit 130 , as shown in fig. 2 . thus, in some embodiments, an upper surface of the second protrusion p 2 may be higher than the sealing surface c 1 and may be lower than the contact surface c 2 . in this case, the first protrusion p 1 protruding to a first height higher than a second height of the second protrusion p 2 may be defined as a component providing only the contact surface c 2 . fig. 3 schematically shows substrate supporting units according to embodiments. fig. 3a shows deformation in a conventional substrate supporting unit, and fig. 3b shows deformation in the substrate supporting unit according to embodiments. the substrate supporting unit according to embodiments may be a variation of the substrate supporting unit included in the substrate processing device according to the above-described embodiments. hereinafter, repeated descriptions of the embodiments will not be given herein. as described above, disclosed herein is a substrate supporting unit capable of generating uniform plasma on a substrate to form a uniform thin film on the substrate. in more detail, the substrate supporting unit may include a susceptor to which the substrate is loaded, and a plurality of protrusions (e.g. embossings) arranged on an upper surface of the susceptor, and the protrusions may be selectively arranged at the center of the upper surface of the susceptor. the embodiments of fig. 3 all represent the ecs. 
an ecs 1 includes a pad 3 as a first protrusion and a concave portion 4 as a base, and an edge of the substrate 2 is stably loaded to a step formed in the middle of the pad 3 . the ecs may be made of a metal material, for example, an aluminum material. referring to fig. 3a , the degree of contact between an edge of the substrate 2 and a stepped portion of the pad 3 is increased by a vacuum force (indicated by an arrow) applied to the substrate 2 . thus, flow of reactive gas into a rear surface of the substrate 2 is prevented. however, the substrate 2 may be bent due to a vacuum suction force, and the substrate 2 may be excessively bent due to a temperature effect in a high temperature process. in this case, a structure on the substrate 2 may be deformed, and the thin-film characteristics may become non-uniform across respective portions of the substrate 2 . referring to fig. 3b , embossings 5 , which are a second protrusion, are arranged at the center of the concave portion 4 of the susceptor. the height of the protrusion 5 is not higher than the height of the pad 3 . for example, an upper surface of a contact surface of the pad 3 may be located about 0.1 mm to about 0.3 mm higher than an upper surface of the protrusion 5 . in a more specific example, when the height of the pad 3 is 0.3 mm with respect to the bottom of the concave portion 4 as a base, the height of the protrusion 5 may be lower than 0.3 mm, for example, 0.2 mm. in some embodiments, a width of the protrusions 5 may be 0.1 mm to 0.3 mm (e.g., 0.2 mm). the protrusions 5 may support the substrate 2 when the substrate 2 is bent downward by the heat and vacuum suction force, and consequently, excessive bending or deformation of the substrate 2 may be prevented. fig. 4 schematically shows a substrate supporting unit according to embodiments. the substrate supporting unit according to the embodiments may be a variation of the substrate supporting unit according to the above-described embodiments.
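the height relations quoted above for fig. 3b are mutually consistent, as a quick check shows. the values are taken directly from the passage; the 0.1 mm to 0.3 mm band is the clearance range stated there.

```python
# dimensions quoted in the passage, in mm, measured from the
# bottom of the concave portion 4 (the base)
pad_height = 0.3          # contact surface of pad 3
protrusion_height = 0.2   # upper surface of protrusion 5

# how far the pad's contact surface sits above the protrusion's top
clearance = round(pad_height - protrusion_height, 3)

# the passage states this clearance should be about 0.1 mm to 0.3 mm
assert 0.1 <= clearance <= 0.3
```

with the example figures of 0.3 mm and 0.2 mm, the clearance is 0.1 mm, the lower end of the stated range, so a substrate bent downward is caught by the protrusion before the stated maximum deflection is reached.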
hereinafter, repeated descriptions of the embodiments will not be given herein. fig. 4a is a perspective view of the substrate supporting unit. a positioning hole 6 for fixing a position of the ecs 1 provided on a heater block (not shown) is arranged at the center of the concave portion 4 of the ecs 1 . a position fixing pin (not shown) is inserted into the positioning hole 6 to fix the position of the ecs 1 on the heater block. the protrusions 5 are symmetrically distributed with respect to the positioning hole 6 . fig. 4b shows dimensions of the protrusion 5 and the positioning hole 6 . as described above, the protrusion 5 is not higher than the ecs pad 3 (see fig. 3 ) and prevents excessive deformation of a substrate by supporting the substrate when the substrate is deformed downward under the influence of a vacuum suction force and a high temperature. fig. 5 shows a thin-film profile when a sio 2 thin film is deposited on a substrate mounted on the substrate processing device of fig. 1 , by a plasma-enhanced atomic layer deposition (peald) method. fig. 5a shows a thin-film profile in an existing substrate supporting unit without embossings and fig. 5b shows a thin-film profile in a substrate supporting unit with embossings according to embodiments of the present disclosure. as shown in fig. 5 , when there are no protrusions (e.g. embossings) at the center of the concave portion of an ecs, an asymmetrical thin-film profile is shown, but when protrusions are present, the shape is symmetrical. this results in a symmetrical profile (a concave film profile) of a thin film formed by active species because the active species are redistributed around the protrusions when an rf power is applied. that is, plasma active species are uniformly circularly distributed around the protrusions, and the symmetrical thin-film profile is achieved by the active species. figs.
6 and 7 are views showing the density of charges around embossings according to the polarity of electrodes. fig. 6 shows the density of charges around the embossings according to the polarity of electrodes when an rf power is supplied through a gas supply unit and a substrate supporting unit is grounded. fig. 6a shows the distribution of active species on an existing substrate supporting unit without embossings and fig. 6b shows the distribution of active species on a substrate supporting unit with embossings according to embodiments of the present disclosure. referring to fig. 6 , it can be seen that the distribution of radicals on a substrate may be controlled by introducing protrusions (e.g. embossings) at the bottom of the substrate, that is, the base of an ecs. that is, since an electric field is concentrated on the protrusions (e.g. embossings), radicals in a reaction space may be concentrated in a space above the protrusions (e.g. embossings) corresponding to the electric field. in an alternative embodiment, an upper surface of the protrusions (e.g. embossings) may have a curvature of less than a certain value so that the electric field may be more concentrated. fig. 7 shows the density of charges around the embossings according to the polarity of electrodes when rf power is supplied through the substrate supporting unit and the gas supply unit is grounded. fig. 7a shows the distribution of active species in an existing substrate supporting unit without embossings and fig. 7b shows the distribution of active species in a substrate supporting unit with embossings according to embodiments of the present disclosure. referring to fig. 7 , it can be seen that the distribution of radicals on a substrate may be controlled by introducing protrusions (e.g. embossings) at the bottom of the substrate, that is, the base of an ecs. 
in particular, in the case of the present embodiment, since the substrate supporting unit functions as an rf power supply electrode, it can be advantageous that an upper gas supply unit does not need to have a separate rf rod. in this case, symmetrically arranged protrusions (e.g. embossings) may perform the function of an rf rod. shapes of each portion of the accompanying drawings, provided for a clear understanding of the present disclosure, should be considered in a descriptive sense and may be modified into various shapes other than those shown. it should be understood that embodiments described herein should be considered in a descriptive sense only and not for purposes of limitation. descriptions of features or aspects within each embodiment should typically be considered as available for other similar features or aspects in other embodiments. while one or more embodiments have been described with reference to the figures, it will be understood by those of ordinary skill in the art that various changes in form and details may be made therein without departing from the spirit and scope of the disclosure as defined by the following claims.
153-265-466-499-539
US
[ "US" ]
F16K27/07
1986-10-31T00:00:00
1986
[ "F16" ]
steam jacketed outlet reducer nozzle
a railway tank car having a wafer-type valve arrangement is provided with a nozzle member. the nozzle member has a mounting portion mounted on the valve arrangement. the mounting portion has an inlet opening therein which is large enough to receive a portion of the wafer valve when the valve is opened. a taper portion is formed integral with the mounting portion, and an outlet portion is formed integral with the taper portion. the outlet portion is smaller than the inlet opening and has securing means thereon for attachment to standard-sized connectors for unloading liquid or semi-liquid cargo. a breakage groove is provided in the nozzle member to protect the valve arrangement from impacts. a steam heating member with steam inlet and outlet means may be placed over the smaller outlet portion and secured to the nozzle member for heating cargo passing therethrough. the taper portion has an indentation therein to facilitate heating of cargo in the nozzle member. access detents are provided in the steam heating member to allow insertion of bolts in the mounting portion for mounting the nozzle member on the valve arrangement.
1. an outlet for a railway tank car body having an interior adapted to receive and carry cargo, said tank car body having an opening therein communicating with the interior thereof, said outlet comprising: a valve structure mounted on the tank car body for selectively covering and uncovering the opening; the valve structure including a valve member movably supported on the valve structure for movement with respect to the opening for the selective covering and uncovering thereof; said valve member extending outwardly of the car beyond the valve structure when moved to uncover the opening; a nozzle member supported on the valve structure and comprising: a mounting portion mounted on the valve structure; said mounting portion having an inlet opening therein communicating with the opening, said inlet opening being large enough to receive the valve member therein when the valve member is moved to uncover the opening; a taper portion formed integral with the mounting portion and extending away from the car, said taper portion having a space therein communicating with the inlet opening; an outlet portion formed integral with the taper portion and having an outlet opening smaller than the inlet opening, said outlet opening communicating with the space in the taper portion whereby the contents of the railway car pass through the valve means and the nozzle member to be removed from the car; said outlet portion including securing means for connecting the outlet portion to a receiving means for receiving the contents of the car passing through the nozzle member whereby standard-sized receiving means adapted for connection to an outlet smaller than the valve member may be used to receive cargo from said car through said valve structure; and said taper portion including a first taper segment portion formed integral with the mounting portion and a second taper segment portion connected with the first taper segment portion; said first and second taper segment portions being
disposed with respect to each other to produce an indentation inwardly of the nozzle member in the outer surface portion of the nozzle member; and a steam heating member connected with the nozzle member, said steam heating member having an inner surface portion and each of the taper segment portions having an outer surface portion, said inner surface portion of said steam heating member and said outer surface portions of said first and second taper segment portions defining a steam heating space therebetween; steam inlet means and steam outlet means being connected with the steam heating member for introducing steam into said steam heating space to facilitate passage of congealable contents of the car through said nozzle member by heating said contents in the nozzle member. 2. the invention according to claim 1 and the valve member being pivotally supported on the valve structure. 3. the invention according to claim 1 and the valve member being generally circular in shape and having a diameter of approximately six inches. 4. the invention according to claim 3 and said inlet opening being generally circular in shape and having a diameter of approximately six inches for receiving a portion of said valve member when the valve member is moved to uncover the opening. 5. the invention according to claim 4 and said outlet portion being generally cylindrical and having an outer diameter of approximately 5.2 inches for connection to standard-sized fittings. 6. the invention according to claim 1 and the nozzle member having a breakage groove therein to permit a portion of the nozzle member to break away in the event of an impact thereon. 7. the invention according to claim 6 and said breakage groove being located approximately six-and-one-half inches from the surface of the railway tank car. 8.
the invention according to claim 1 and said inlet opening, said space within the taper portion, and the outlet opening being substantially circular in cross sections taken perpendicular to the direction of extension of the nozzle member. 9. the invention according to claim 1 and said mounting portion having attachment aperture means; attachment means extending through said attachment aperture means and engaging said valve structure for securing the mounting portion thereto; said attachment means being adapted to permit the nozzle member to break away from the valve structure in the event of an impact. 10. an outlet for a railway tank car having an interior adapted to receive and carry lading, said tank car having an opening therein communicating with the interior, said outlet comprising: a valve structure mounted on the tank car for selectively covering and uncovering the opening; the valve structure including a valve member movably supported on the valve structure for movement with respect to the opening for the selective covering and uncovering thereof; said valve member extending outwardly of the car beyond the valve structure when moved to uncover the opening; a nozzle member supported on the valve structure and comprising: a mounting portion mounted on the valve structure; said mounting portion having an inlet opening therein communicating with the opening, said inlet opening being large enough to receive the valve member therein when the valve member is moved to uncover the opening; a taper portion formed integral with the mounting portion and extending away from the car, said taper portion having a space therein communicating with the inlet opening; an outlet portion formed integral with the taper portion and having an outlet opening smaller than the inlet opening, said outlet opening communicating with the space in the taper portion whereby the contents of the railway car pass through the valve means and the nozzle member to be removed from the car; said outlet
portion including securing means for connecting the outlet portion to a receiving means for receiving the contents of the car passing through the nozzle member; and a steam heating member connected with the nozzle member, said steam heating member having an inner surface portion and said nozzle member having an outer surface portion, said inner surface portion of said steam heating member and said outer surface of said nozzle member defining a steam heating space therebetween; steam inlet means and steam outlet means being connected with the steam heating member for introducing steam into said steam heating space to facilitate passage of congealable contents of the car through said nozzle member by heating said contents in the nozzle member; and said steam heating member having first and second aperture means therein adjacent the inner surface portion thereof, said aperture means receiving the nozzle member therein whereby said steam heating space is closed to the surrounding environment, and the only access to said steam heating space is through said steam inlet and outlet means; said outlet portion being small enough to pass through said first and second aperture means; and means for securing said steam heating member to said nozzle member whereby the nozzle member may be provided with steam heating by passing the steam heating member over the outlet portion and securing said steam heating member to said nozzle member. 11. the invention according to claim 10 and said mounting portion having attachment aperture means; attachment means extending through said attachment aperture means and engaging said valve structure for securing the mounting portion thereto. 12. the invention according to claim 11 and said steam heating member having a detent therein substantially aligned with the attachment aperture means to provide access to the attachment aperture means for placement and removal of the attachment means therein. 13.
the invention according to claim 10 and said taper portion including a first taper segment portion formed integral with the mounting portion and a second taper segment portion connected with the first taper segment portion; said first and second taper segment portions being disposed with respect to each other to produce an indentation inwardly of the nozzle member in the outer surface portion of the nozzle member; said indentation providing increased volume of the steam heating space to facilitate circulation of steam within said steam heating space for heating said nozzle member, and said indentation and relative disposition of said first and second taper segment portions providing added surface area in said taper portion for enhancing heating of the contents of the car passing through the nozzle member. 14. the invention according to claim 10 and the nozzle member having a breakage groove therein to permit a portion of the nozzle member to break away in the event of an impact thereon; and said steam heating member being located between said mounting portion and said breakage groove. 15. the invention according to claim 14 and said breakage groove being located approximately six-and-one-half inches from the surface of the railway tank car. 16. the invention according to claim 10 and said nozzle member having a shoulder portion thereon for welding the steam heating member to the nozzle member. 17. the invention according to claim 10 and said inlet opening, said space within the taper portion, and said outlet opening being substantially circular in cross sections taken perpendicular to the direction of extension of the nozzle member. 18. 
the invention according to claim 10 and the valve member being generally circular in shape and having a diameter of approximately six inches; said inlet opening being generally circular in shape and having a diameter of approximately six inches for receiving a portion of said valve member when the valve member is moved to uncover the opening. 19. the invention according to claim 18 and said outlet portion being generally cylindrical and having an outer diameter of approximately 5.2 inches for connection to standard-sized fittings. 20. the invention according to claim 10, and said nozzle member and said steam heating member each being formed by extrusion. 21. an outlet for a railway tank car having an interior adapted to receive and carry lading, said tank car having an opening therein communicating with the interior, said outlet comprising: a valve structure mounted on the tank car body for selectively covering and uncovering the opening; the valve structure including a valve member movably supported on the valve structure for movement with respect to the opening for the selective covering and uncovering thereof; said valve member extending outwardly of the car beyond the valve structure when moved to uncover the opening; a nozzle member supported on the valve structure and comprising: a mounting portion mounted on the valve structure; said mounting portion having an inlet opening therein communicating with the opening, said inlet opening being large enough to receive the valve member therein when the valve member is moved to uncover the opening; a taper portion formed integral with the mounting portion and extending away from the car, said taper portion having a space therein communicating with the inlet opening; an outlet portion formed integral with the taper portion and having an outlet opening smaller than the inlet opening, said outlet opening communicating with the space in the taper portion whereby the contents of the railway car pass through the valve means
and the nozzle member to be removed from the car; said outlet portion including securing means for connecting the outlet portion to a receiving means for receiving the contents of the car passing through the nozzle member; and a steam heating member connected with the nozzle member, said steam heating member having an inner surface portion and said nozzle member having an outer surface portion, said inner surface portion of said steam heating member and said outer surface of said nozzle member defining a steam heating space therebetween; steam inlet means and steam outlet means being connected with the steam heating member for introducing steam into said steam heating space to facilitate passage of congealable contents of the car through said nozzle member by heating said contents in the nozzle member; and said steam inlet means and said steam outlet means being disposed symmetrically on the steam heating member whereby said steam inlet and outlet means may be used interchangeably. 22. the invention according to claim 21, and the steam inlet and outlet means each being angulated generally away from the tank car to facilitate operator connection of steam supply lines thereto.
field of the invention this invention relates to valve structures on railway tank cars for carrying various types of fluid or semi-fluid cargo. more particularly, this invention relates to an outlet reducer nozzle for connection to a valve structure on a railway tank car for improved unloading of the contents of the railway tank car. description of the prior art it is known in the prior art to provide a railway tank car with a valve structure selectively sealing and opening an outlet opening in the bottom of the railway car. a number of types of valves have been developed. one design of valve is the wafer type valve wherein a valve structure has a pivotally-mounted generally circular valve plate. the valve plate is connected with a handle, allowing an operator to manually rotate the valve plate from a horizontal closed position to a vertical open position. an advantage of this design is that it provides a low profile valve when the valve is closed because the valve plate is horizontal adjacent the bottom of the railway tank car. a disadvantage of this design is that to provide for adequate flow of lading from within the tank car, the valve opening must be relatively large, necessitating a correspondingly large wafer valve plate. this produces difficulty in connections to smaller standard-sized connections and cargo withdrawing mechanisms which are sized in smaller diameters. the usual approach to make a connection between the relatively larger wafer type valve and a smaller standard-sized off-loading device has been to attach a cylindrical spacer to the bottom of the valve. this spacer had an inner radius large enough to allow the wafer valve plate to pivot and uncover the valve opening.
a standard-sized connector was bolted to the bottom of the spacer, allowing withdrawal of the lading through the valve but providing an arrangement which was difficult to install, was not particularly suited to provide good flow of cargo through the nozzle, and was generally considerably longer than required by the american association of railroads (aar) requirements. the spacer was supplied with steam to warm the cargo passing through the valve, but the cylindrical spacer had contact with cargo only at its inner diameter, which was large to accommodate the valve plate, and therefore was not as efficient as possible for heating the contents of the car passing through the spacer. summary of the invention a railway tank car has an interior and is provided with an opening communicating with the interior. a valve structure is mounted on the tank car for selectively covering and uncovering the opening for withdrawing lading from within the railway tank car. the valve structure has a valve member or disc which is movably supported for movement with respect to the opening to cover and uncover the opening. when the valve member is moved to uncover the opening, it extends outwardly of the car beyond the valve structure. a nozzle member is supported on the valve structure and includes a mounting portion mounted on the valve structure. the mounting portion has an inlet opening communicating with the valve opening. the inlet opening is large enough to receive the valve member when the valve member is moved into the valve open position. a taper portion is formed integral with the mounting portion and extends away from the car. the taper portion has a space therein communicating with the inlet opening. an outlet portion is formed integral with the taper portion and has an outlet opening smaller than the inlet opening. 
the outlet opening communicates with the space in the taper portion whereby the contents of the railway car pass through the valve means and the nozzle member to be removed from the car. the outlet portion has securing means for connecting the outlet portion to a receiving means for receiving the contents of the car passing through the nozzle member. a steam heating coil is connected with the nozzle member and forms a steam heating space around the nozzle member for heating the lading passing through the nozzle member to facilitate flow of the lading. the mounting portion of the nozzle member has attachment apertures, and bolts extend through the attachment apertures to secure the mounting portion to the valve structure. the steam coil has detents therein substantially aligned with the attachment apertures which provide access to the apertures for placement of the bolts. the taper portion of the nozzle member includes two taper segment portions formed integral with the mounting portion. the taper segment portions are angled with respect to each other to produce an indentation in the outer surface of the nozzle member. the indentation provides for increased volume of the heating space defined by the heating coil and produces a heating space of a shape which facilitates circulation of steam about the nozzle member for heating the lading therein. a breakage groove is provided in the nozzle member which permits the lower portion of the nozzle member to break away in the event of an impact. the breakage groove is located approximately 6 1/2 inches from the surface of the railway tank car. the valve member used in the preferred embodiment is a wafer type valve structure. the wafer valve member is generally circular in shape and has a diameter of approximately 6 inches.
the inlet opening of the nozzle member is also circular in shape and has an inside diameter of approximately 6 inches for receiving a portion of the valve member when the valve member is pivoted to uncover the opening. the outlet portion is generally cylindrical and has an outer diameter of approximately 5 1/4 inches for connection to standard size fittings. brief description of the drawings fig. 1 is a cross sectional view through the longitudinal centerline of a wafer valve structure with the outlet reducer nozzle member of this invention. fig. 2 shows an elevational view of the nozzle member. fig. 3 shows a view taken along line 3--3 of fig. 2 showing the mounting portion of the nozzle. fig. 4 shows an elevational view of a nozzle member of this invention having a steam coil mounted thereon. fig. 5 is a section view taken along line 5--5 of fig. 4. fig. 6 is a view taken along line 6--6 of fig. 4. detailed description of the disclosure as best shown in fig. 1 a railroad tank car body 3 has a bottom 7. the bottom 7 has an opening therein generally designated at 9. the opening 9 is covered by a valve arrangement generally designated at 11. the valve arrangement 11 includes a protective flange 15 surrounding a valve structure 17. the protective flange 15 protects the valve arrangement 11 down to the shear plane indicated at a, by deflecting impacts which would otherwise potentially damage the valve structure 17. in the event of an impact, nozzle member 25 will shear away bolts 27 and separate from the bottom surface of the valve arrangement 11 at a shear plane slightly above shear plane a. the valve structure 17 has a valve opening therein generally designated at 19 which communicates with the opening 9 in the bottom 7 of the railway car 3. this valve opening 19 is selectively covered and uncovered by valve member 21 which is supported on operating rod 23.
operating rod 23 is equipped with a handle (not shown) which may be rotated by an operator to rotate the valve member 21 from the valve closed position shown in fig. 1 to the valve open position shown in phantom in fig. 1. nozzle member 25 is mounted to the undersurface of valve structure 17 by attachment members in the form of bolts 27. as best shown in figs. 1, 2, and 3, the nozzle member 25 has a mounting portion 31. mounting portion 31 is generally circular and has a plurality of attachment aperture means in the form of generally circular openings 33 in the mounting portion 31. bolts 27 (fig. 1) extend generally vertically through openings 33 to be secured in the valve structure 17. the mounting portion 31 has an inlet opening 37 therein. the inlet opening 37 is substantially the same diameter as the valve member 21 and is therefore large enough to receive a portion of the valve member 21 therein when the valve member 21 is turned to the valve open position, shown in phantom in fig. 1. a taper portion 41 is formed integral with the mounting portion 31 and extends away from the valve structure 17. the taper portion 41 includes a first taper segment portion 43 which is formed integral with the mounting portion 31. first taper segment portion 43 has an inner taper surface 45 which defines a first generally conical interior tapering space. a second taper segment portion 47 is formed integral with the first taper segment portion 43 and extends downwardly therefrom. second taper segment portion 47 has an inner surface 49 which defines a second generally conical interior space within the nozzle member 25. first taper segment portion 43 has an exterior surface 51 and second taper segment portion 47 has an exterior surface 53. the conical interior space defined by surface 45 of first taper segment portion 43 communicates at its widest point with inlet opening 37.
at this point the interior surface 45 is substantially the same size as the opening 37 and the taper of inner taper surface 45 is gradual enough to permit valve member 21 to project into the first interior space. at the lower end of the space defined by surface 45, the space communicates with the space defined by surface 49 and at this point the surfaces 45 and 49 are substantially the same size to provide for smooth flow of lading through the nozzle member 25. conical surface 45 is angled away from the centerline which is the center of the cone circumscribed by surface 45 at an angle of 17-1/2 degrees. conical surface 49 is angled back from the centerline of the cone circumscribed thereby at an angle of 7-1/2 degrees, producing a two-stage taper which narrows the size of the space within the nozzle member 25 from a diameter of approximately 6 inches in opening 37 to a diameter of approximately 3-3/4 inches at the lower end of conical surface 49. taper segment portion 47 is connected at its lower end with outlet portion 55. outlet portion 55 has an inner surface 57 defining a generally cylindrical space therein communicating with the spaces formed by conical surfaces 49 and 45 and inlet opening 37. the space defined by surface 57 also communicates with the external environment, providing an outlet opening generally indicated at 61 in outlet portion 55. the space defined by surface 57 is approximately 3-3/4 inches in diameter. the outer surface of outlet portion 55 is provided with securing means in the form of threaded engaging means 63 for securing the nozzle portion to a cap 65 as shown in fig. 1 for additionally securing the seal on the tank car, or to receiving means in the form of conduits which are removably connected to the nozzle member 25 for unloading cargo from the interior of the tank car through the valve structure 17, through the nozzle member, and to the receiving means.
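the two-stage taper described above can be checked with elementary trigonometry: a conical segment with a given half-angle from the centerline needs an axial length of (r_in - r_out) / tan(half-angle) to achieve a given reduction in radius. a minimal sketch, assuming a hypothetical 5-inch diameter at the junction between the two segments (the specification gives only the 6-inch inlet and approximately 3-3/4-inch outlet diameters):

```python
import math

def taper_length(d_in, d_out, half_angle_deg):
    """axial length (inches) of a conical taper narrowing from d_in to
    d_out at the given half-angle measured from the centerline."""
    return (d_in - d_out) / 2 / math.tan(math.radians(half_angle_deg))

# inlet and outlet diameters from the specification; the 5-inch
# intermediate diameter where the 17-1/2 degree segment meets the
# 7-1/2 degree segment is a hypothetical value chosen for illustration
D_INLET, D_MID, D_OUTLET = 6.0, 5.0, 3.75

l1 = taper_length(D_INLET, D_MID, 17.5)   # first, steeper segment
l2 = taper_length(D_MID, D_OUTLET, 7.5)   # second, shallower segment
print(f"segment 1: {l1:.2f} in, segment 2: {l2:.2f} in, total: {l1 + l2:.2f} in")
```

note the trade-off the two angles express: the steep first segment sheds diameter quickly near the valve, while the shallow second segment keeps the overall nozzle short enough that the breakage groove can sit close to the car.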
the outer diameter of outlet portion 55 is approximately 5-1/4 inches and this is provided with threading of 4 threads per inch. this is a size compatible with standard sizes of conduit connections of receiving means used throughout the industry. nozzle member 25 is provided with a breakage groove 67 which extends around the nozzle member 25 at a distance of approximately 6-1/2 inches from the valve structure 17. the breakage groove 67 is provided in accordance with the aar specifications to provide for structure below the breakage groove 67 to break away in the event of an impact on the structure below the breakage groove 67. this serves to protect the valve arrangement 11 from being damaged in case of an impact. with reference to figs. 4, 5, and 6, the nozzle member 25 is shown equipped with a steam heating member or steam coil member 70. as shown in fig. 5, steam heating member 70 defines a space generally indicated at 73 between the outer surfaces 51 and 53 of first and second taper segment portions 43 and 47. the steam heating member 70 is provided with steam inlet means 75 and steam outlet means 77 which are connected to an external source of heated steam. when steam is introduced through inlet 75 into space 73, the steam circulates through the space 73 surrounding the nozzle member 25, transferring heat to the lading passing through the nozzle member 25. the inlet and outlet means 75 and 77 are situated towards the lower end of the steam heating member 70 to facilitate the draining of any condensed water accumulating in the space 73. the outer surfaces 51 and 53 of taper segment portions 43 and 47 are angularly disposed with respect to each other to provide an indentation or concavity generally indicated at 81 extending around the nozzle member.
this indentation produces increased volume inside the space 73 within the heating member 70, and also results in greater heated surface area in contact with the cargo in the interior of the nozzle member 25 for transferring heat from the steam heating space 73 to the cargo as it passes through the nozzle member 25. the indentation allows for a relatively compact steam jacketed nozzle arrangement with efficient transfer of heat from the steam coil to the lading within the nozzle member 25. as best shown in fig. 5, steam heating member 70 is welded to an upper portion of the nozzle taper portion 41, and to an abutment shoulder 85 on the second taper segment portion 47. as best shown in fig. 6, the steam heating member 70 is provided with detents or detent portions 89 which provide clearance spaces substantially vertically aligned below the attachment openings 33 in the mounting portion 31. the presence of detent portions 89 permits ready installation and removal of attachment members such as bolts 27 which extend through openings 33 to secure the mounting portion 31 to the valve structure 17. the foregoing description and drawings merely explain and illustrate the invention and the invention is not limited thereto except insofar as the appended claims are so limited, as those skilled in the art who have the disclosure before them will be able to make modifications and variations therein without departing from the scope of the invention.
155-393-670-592-035
US
[ "EP", "KR", "CN", "AU", "JP", "IL", "CA", "WO", "NZ" ]
G06F3/01,G06K9/00,A61B90/00,F21V8/00,G02B6/27,G06F3/14,G09G3/00,G06V40/20,G06T19/00,H04N21/258,H04N21/266,G06F3/147,G06V20/20,G06V40/10,G06V40/19,G02B27/02,G06F3/04815,G16Z99/00,A63F13/00,A63F13/25,G02B27/01,G06K9/46,G06Q30/00,G16H10/60,G16H20/30,G06K9/36,G06T13/00,G06T13/20,G06T13/40,G06T15/00
2014-06-14T00:00:00
2014
[ "G06", "A61", "F21", "G02", "G09", "H04", "G16", "A63" ]
methods and systems for creating virtual and augmented reality
configurations are disclosed for presenting virtual reality and augmented reality experiences to users. the system may comprise an image capturing device to capture one or more images, the one or more images corresponding to a field of view of a user of a head-mounted augmented reality device, and a processor communicatively coupled to the image capturing device to extract a set of map points from the set of images, to identify a set of sparse points and a set of dense points from the extracted set of map points, and to perform a normalization on the set of map points.
an augmented reality display system (10), comprising: an image capturing device (32, 2528, 2532, 5114, 5116, 5120, 6216, 7202, 7204, 11502, 11503, 11804) of the augmented reality display system (10) to capture at least one image, wherein the image capturing device comprises one or more image capturing sensors, at least a portion of the at least one image is perceived within a field of view of a user, and the at least one image captures at least one gesture (5112, 8416, 8816, 8946, 9726, 9728, 10320-10332, 12502, 12506, 12508, 12606, 12702, 12802, 12810, 12814, 12902, 12908, 12910, 12912, 12914, 13206, 13014) that is created by the user and interacts with virtual content projected by the augmented reality display system to the user; and a processor (38) communicatively coupled to the image capturing device (32, 2528, 2532, 5114, 5116, 5120, 6216, 7202, 7204, 11502, 11503, 11804) to recognize the at least one gesture as at least one recognized gesture, the processor configured to recognize the at least one gesture as the at least one recognized gesture is further configured to: determine whether the at least one image includes one or more identifiable depth points by performing a line search for the at least one image; determine an order of processing in which a plurality of analysis nodes is executed to perform respective gesture identification processes on the at least one gesture with respect to a plurality of candidate gestures based at least in part upon a plurality of computational resource utilization or expense requirements; applying a first computationally less-expensive algorithm at an earlier first analysis node of the plurality of analysis nodes (13544) to eliminate a first candidate of the plurality of candidate gestures to form a reduced plurality of candidate gestures; and applying a second computationally more-expensive algorithm at a later second analysis node of the plurality of analysis nodes (13544) to eliminate a second candidate of the reduced 
plurality of candidate gestures, wherein the second computationally more-expensive algorithm is configured to consume a larger amount of processing power than the first computationally less-expensive algorithm; and the processor (38) further configured to determine a user input based at least in part on the at least one recognized gesture. the augmented reality display system (10) of claim 1, wherein the processor is configured to generate a scoring value for a set of points identified for the at least one gesture based at least in part on comparison between the set of points and predetermined gestures and to recognize the at least one gesture when the scoring value exceeds a threshold value. the augmented reality display system (10) of claims 1 or 2, further comprising a database to store predetermined gestures, wherein the computational resource utilization or expense requirement includes reducing or minimizing computational resource utilization for the gesture recognition for the at least one gesture, the first computational resource or expense requirement corresponds to a relatively lower computation resource utilization, and the second computational resource or expense requirement corresponds to a relatively higher computational resource utilization when compared to the first computational resource or expense requirement. the augmented reality display system (10) of claim 3, further comprising a networked memory to access the database of predetermined gestures. the augmented reality display system (10) of any of claims 1-4, wherein the processor is further configured to recognize the at least one gesture that comprises a hand gesture or motion or a finger gesture or a finger motion. the augmented reality display system (10) of any of claims 1-5, wherein the augmented reality display system comprises a user wearable apparatus to display a virtual world as well as at least a portion of a physical environment in which the user is located. 
the augmented reality display system (10) of any of claims 1-6, where the processor is further configured to recognize the at least one gesture that comprises an inter-finger interaction. the augmented reality display system (10) of any of claims 1-7, wherein the processor is further configured to recognize the at least one gesture comprising at least one of inter-finger interactions, pointing, tapping, or rubbing. the augmented reality display system (10) of any of claims 1-8, further comprising a spatial light modulator that is communicatively coupled to the processor, and the processor is configured to control the spatial light modulator in a manner such that one or more virtual objects are displayed to the user based at least in part on the user input. the augmented reality display system (10) of claim 9, further comprising a virtual user interface to receive the user input or a user interaction with the virtual user interface or with the one or more virtual objects. a method for determining user input, comprising: capturing an image corresponding to a field of view of a user through an augmented reality system (10), wherein the image comprises a gesture image of at least one gesture (5112, 8416, 8816, 8946, 9726, 9728, 10320-10332, 12502, 12506, 12508, 12606, 12702, 12802, 12810, 12814, 12902, 12908, 12910, 12912, 12914, 13206, 13014) that is created by the user and interacts with virtual content projected by the augmented reality system to the user, and at least a portion of the image is perceived by the user within the field of view provided by the augmented reality system; determining whether the image includes one or more identifiable depth points by performing a line search for the image; determining an order of processing in which a plurality of analysis nodes is executed to perform respective gesture identification processes on the at least one gesture with respect to a plurality of candidate gestures based at least in part upon a plurality of computational
resource utilization or expense requirements; applying a first computationally less-expensive algorithm at an earlier first analysis node of the plurality of analysis nodes (13544) to eliminate a first candidate of the plurality of candidate gestures to form a reduced plurality of candidate gestures; and applying a second computationally more-expensive algorithm at a later second analysis node of the plurality of analysis nodes (13544) to eliminate a second candidate of the reduced plurality of candidate gestures, wherein the second computationally more-expensive algorithm is configured to consume a larger amount of processing power than the first computationally less-expensive algorithm; and determining a user input based in part or in whole upon the at least one recognized gesture. the method of claim 11, further comprising generating a scoring value for a set of points for the at least one gesture based in part or in whole on results of comparing the set of points to a first set of points associated with a database including predetermined gestures. the method of claim 12, further comprising recognizing the at least one gesture when the scoring value exceeds a threshold value. the method of any of claims 11-13, further comprising overlaying a virtual world with at least a portion of a physical environment in which the user is located. the method of claim 14, further comprising accessing a networked memory to access a database including predetermined gestures.
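the claimed ordering of analysis nodes is, in effect, a rejection cascade: the candidate-gesture set is pruned by a computationally cheap test before a more expensive test runs on the survivors, so the expensive algorithm consumes processing power only when the cheap one cannot decide. a minimal sketch under assumed data structures (the node names, costs, and predicate functions are illustrative, not taken from the claims):

```python
from dataclasses import dataclass
from typing import Callable, Iterable

@dataclass
class AnalysisNode:
    name: str
    cost: float                             # relative computational expense
    survives: Callable[[str, dict], bool]   # does a candidate gesture survive this node?

def recognize_gesture(features, candidates, nodes):
    """run analysis nodes in order of increasing cost, pruning the
    candidate-gesture set at each stage, as in a rejection cascade."""
    for node in sorted(nodes, key=lambda n: n.cost):
        candidates = {c for c in candidates if node.survives(c, features)}
        if len(candidates) <= 1:            # nothing left to disambiguate
            break
    return candidates

# illustrative nodes: a cheap finger-count test runs before an
# expensive template match
nodes = [
    AnalysisNode("template_match", cost=10.0,
                 survives=lambda c, f: f.get("template") == c),
    AnalysisNode("finger_count", cost=1.0,
                 survives=lambda c, f: f["fingers"] == {"point": 1, "tap": 1, "pinch": 2}[c]),
]
print(recognize_gesture({"fingers": 2, "template": "pinch"},
                        {"point", "tap", "pinch"}, nodes))
```

in this sketch a two-finger input is resolved by the cheap finger-count node alone, so the costly template match is never invoked; a one-finger input survives the first node twice over and only then pays for the second node.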
background modern computing and display technologies have facilitated the development of systems for so called "virtual reality" or "augmented reality" experiences, wherein digitally reproduced images or portions thereof are presented to a user in a manner wherein they seem to be, or may be perceived as, real. a virtual reality, or "vr", scenario typically involves presentation of digital or virtual image information without transparency to other actual real-world visual input; an augmented reality, or "ar", scenario typically involves presentation of digital or virtual image information as an augmentation to visualization of the actual world around the user. for example, an augmented reality scene may allow a user of ar technology to see one or more virtual objects super-imposed on or amidst real world objects (e.g., a real-world park-like setting featuring people, trees, buildings in the background, etc.). the human visual perception system is very complex, and producing a vr or ar technology that facilitates a comfortable, natural-feeling, rich presentation of virtual image elements amongst other virtual or real-world imagery elements is challenging. traditional stereoscopic wearable glasses generally feature two displays that are configured to display images with slightly different element presentation such that a three-dimensional perspective is perceived by the human visual system. such configurations have been found to be uncomfortable for many users due to a mismatch between vergence and accommodation which must be overcome to perceive the images in three dimensions. indeed, some users are not able to tolerate stereoscopic configurations.
although a few optical configurations (e.g., head-mounted glasses) are available (e.g., googleglass ® , oculus rift ® , etc.), none of these configurations is optimally suited for presenting a rich, binocular, three-dimensional augmented reality experience in a manner that will be comfortable and maximally useful to the user, in part because prior systems fail to address some of the fundamental aspects of the human perception system, including the photoreceptors of the retina and their interoperation with the brain to produce the perception of visualization to the user. the human eye is an exceedingly complex organ, and typically comprises a cornea, an iris, a lens, macula, retina, and optic nerve pathways to the brain. the macula is the center of the retina, which is utilized to see moderate detail. at the center of the macula is a portion of the retina that is referred to as the "fovea", which is utilized for seeing the finest details of a scene, and which contains more photoreceptors (approximately 120 cones per visual degree) than any other portion of the retina. the human visual system is not a passive sensor type of system; it actively scans the environment. in a manner somewhat akin to use of a flatbed scanner to capture an image, or use of a finger to read braille from a paper, the photoreceptors of the eye fire in response to changes in stimulation, rather than constantly responding to a constant state of stimulation. thus, motion is required to present photoreceptor information to the brain. indeed, experiments with substances such as cobra venom, which has been utilized to paralyze the muscles of the eye, have shown that a human subject will experience blindness if positioned with eyes open, viewing a static scene with venom-induced paralysis of the eyes. in other words, without changes in stimulation, the photoreceptors do not provide input to the brain and blindness is experienced.
it is believed that this is at least one reason that the eyes of normal humans have been observed to move back and forth, or dither, in side-to-side motion, also known as "microsaccades". as noted above, the fovea of the retina contains the greatest density of photoreceptors. while it is typically perceived that humans have high-resolution visualization capabilities throughout a field of view, in actuality humans have only a small high-resolution center that is mechanically swept around almost constantly, along with a persistent memory of the high-resolution information recently captured with the fovea. in a somewhat similar manner, the focal distance control mechanism of the eye (e.g., ciliary muscles operatively coupled to the crystalline lens in a manner wherein ciliary relaxation causes taut ciliary connective fibers to flatten out the lens for more distant focal lengths; ciliary contraction causes loose ciliary connective fibers, which allow the lens to assume a more rounded geometry for more close-in focal lengths) dithers back and forth by approximately ¼ to ½ diopter to cyclically induce a small amount of "dioptric blur" on both the close side and far side of the targeted focal length. this is utilized by the accommodation control circuits of the brain as cyclical negative feedback that helps to constantly correct course and keep the retinal image of a fixated object approximately in focus. the visualization center of the brain also gains valuable perception information from the motion of both eyes and components thereof relative to each other. vergence movements (e.g., rolling movements of the pupils toward or away from each other to converge the lines of sight of the eyes to fixate upon an object) of the two eyes relative to each other are closely associated with focusing (or "accommodation") of the lenses of the eyes.
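the quarter- to half-diopter accommodation dither described above maps to a concrete range of focal distances, because dioptric power is simply the reciprocal of focal distance in meters. a minimal sketch (the 1 m fixation distance is an illustrative choice, not taken from the text):

```python
def focal_range(fixation_m, dither_diopters):
    """near and far focal distances (meters) swept when accommodation
    dithers by +/- dither_diopters about fixation at fixation_m
    (dioptric power = 1 / distance in meters)."""
    power = 1.0 / fixation_m
    return 1.0 / (power + dither_diopters), 1.0 / (power - dither_diopters)

# fixating at 1 m with a 1/4-diopter dither
near, far = focal_range(1.0, 0.25)
print(f"focus sweeps from {near:.2f} m to {far:.2f} m")
```

the asymmetry of the result (the sweep extends further on the far side than the near side) is a direct consequence of the reciprocal relationship between diopters and distance.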
under normal conditions, changing the focus of the lenses of the eyes, or accommodating the eyes, to focus upon an object at a different distance will automatically cause a matching change in vergence to the same distance, under a relationship known as the "accommodation-vergence reflex." likewise, a change in vergence will trigger a matching change in accommodation, under normal conditions. working against this reflex (as is the case with most conventional stereoscopic ar or vr configurations) is known to produce eye fatigue, headaches, or other forms of discomfort in users. movement of the head, which houses the eyes, also has a key impact upon visualization of objects. humans tend to move their heads to visualize the world around them, and are often in a fairly constant state of repositioning and reorienting the head relative to an object of interest. further, most people prefer to move their heads when their eye gaze needs to move more than about 20 degrees off center to focus on a particular object (e.g., people do not typically like to look at things "from the corner of the eye"). humans also typically scan or move their heads in relation to sounds - to improve audio signal capture and utilize the geometry of the ears relative to the head. the human visual system gains powerful depth cues from what is called "head motion parallax", which is related to the relative motion of objects at different distances as a function of head motion and eye vergence distance. in other words, if a person moves his head from side to side and maintains fixation on an object, items farther out from that object will move in the same direction as the head, and items in front of that object will move opposite the head motion. these may be very salient cues for where objects are spatially located in the environment relative to the person. head motion also is utilized to look around objects, of course.
further, head and eye motion are coordinated with the "vestibulo-ocular reflex", which stabilizes image information relative to the retina during head rotations, thus keeping the object image information approximately centered on the retina. in response to a head rotation, the eyes are reflexively and proportionately rotated in the opposite direction to maintain stable fixation on an object. as a result of this compensatory relationship, many humans can read a book while shaking their head back and forth. interestingly, if the book is panned back and forth at the same speed with the head approximately stationary, the same generally is not true - the person is not likely to be able to read the moving book. the vestibulo-ocular reflex is one of head and eye motion coordination, and is generally not developed for hand motion. this paradigm may be important for ar systems, because head motions of the user may be associated relatively directly with eye motions, and an ideal system preferably will be ready to work with this relationship. indeed, given these various relationships, when placing digital content (e.g., 3-d content such as a virtual chandelier object presented to augment a real-world view of a room; or 2-d content such as a planar/flat virtual oil painting object presented to augment a real-world view of a room), design choices may be made to control behavior of the objects. for example, a 2-d oil painting object may be head-centric, in which case the object moves around along with the user's head (e.g., as in a googleglass ® approach). in another example, an object may be world-centric, in which case it may be presented as though it is part of the real world coordinate system, such that the user may move his head or eyes without moving the position of the object relative to the real world. 
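the head-centric versus world-centric design choice described above amounts to picking the reference frame in which an object's offset is expressed before it is rendered. a minimal sketch, with poses reduced to pure translations for brevity (the function name and api shape are assumptions, not from the text; a real system would compose full six-degree-of-freedom transforms):

```python
def render_position(anchor, offset, head_pos, body_pos):
    """resolve a virtual object's displayed position from its anchoring
    convention; offset and poses are (x, y, z) translations in meters."""
    if anchor == "world":   # fixed relative to real-world coordinates
        return offset
    if anchor == "body":    # slaved to torso movements
        return tuple(b + o for b, o in zip(body_pos, offset))
    if anchor == "head":    # follows head movements, as with a head-worn display
        return tuple(h + o for h, o in zip(head_pos, offset))
    raise ValueError(f"unknown anchor: {anchor}")

# a head-anchored object 2 m in front of the user follows a 1 m head translation,
# while a world-anchored object at the same offset stays put
print(render_position("head", (0.0, 0.0, -2.0), (1.0, 0.0, 0.0), (0.0, 0.0, 0.0)))
print(render_position("world", (0.0, 0.0, -2.0), (1.0, 0.0, 0.0), (0.0, 0.0, 0.0)))
```

the sketch makes the text's point concrete: world-centric content ignores head pose entirely, which is why accurate head-pose measurement and low-latency rendering become the critical inputs for that configuration.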
thus when placing virtual content into the augmented reality world presented with an ar system, choices are made as to whether the object should be presented as world-centric, body-centric, head-centric, or eye-centric. in world-centric approaches, the virtual object stays in position in the real world so that the user may move his body, head, eyes around it without changing its position relative to the real world objects surrounding it, such as a real world wall. in body-centric approaches, a virtual element may be fixed relative to the user's torso, so that the user can move his head or eyes without moving the object, but that is slaved to torso movements. in head-centric approaches, the displayed object (and/or display itself) may be moved along with head movements, as described above in reference to googleglass ®. in eye-centric approaches, as in a "foveated display" configuration, as is described below, content is slewed around as a function of the eye position. with world-centric configurations, it may be desirable to have inputs such as accurate head pose measurement, accurate representation and/or measurement of real world objects and geometries around the user, low-latency dynamic rendering in the augmented reality display as a function of head pose, and a generally low-latency display. document us 2013/342671 a1 discloses a system for detecting a gesture with an image capturing device. gestures are detected by detecting the presence of a finger in a portion of the image and by using a skin color classifier to identify a gesture. such a technique is not efficient in terms of power consumption. summary the invention relates to a system according to claim 1 and a method according to claim 11. specifically, the invention relates to the embodiments described in figures 135a through to 135j and the corresponding passages of the present description (i.e., the section entitled "gestures").
other embodiments are not encompassed by the wording of the claims but are considered as useful for understanding the invention. according to the invention, an augmented reality display system comprises an image capturing device to capture one or more images, the one or more images corresponding to a field of view of a user, wherein the image captures at least one gesture created by the user, and a processor communicatively coupled to the image capturing device configured to identify a set of points as associated with the gesture, and to compare the set of points against a database of predetermined gestures, and to recognize the gesture based at least in part on the comparison, and to determine a user input based at least in part on the recognized gesture. in one or more embodiments, the processor generates a scoring value for the set of identified points based on the comparison. in one or more embodiments, the processor recognizes the gesture when the scoring value exceeds a threshold value. in one or more embodiments, the augmented reality display system comprises a database to store the set of predetermined gestures. in one or more embodiments, the system further comprises a networked memory to access the database of predetermined gestures. in one or more embodiments, the gesture is a hand gesture. in one or more embodiments, the gesture is a finger gesture. in one or more embodiments, the gesture is an inter-finger interaction. in one or more embodiments, the gesture is selected from the group consisting of inter-finger interactions, pointing, tapping and rubbing. in one or more embodiments, the augmented reality display system further comprises a spatial light modulator, the spatial light modulator communicatively coupled to the processor, wherein the processor controls the spatial light modulator in a manner such that one or more virtual objects are displayed to the user based at least in part on the determined user input.
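the score-and-threshold recognition in these embodiments can be sketched as follows. the distance-based scoring metric and the two-point template format are assumptions for illustration; the text only requires that identified points be compared against a database of predetermined gestures and that recognition occur when the scoring value exceeds a threshold:

```python
import math

def gesture_score(points, template):
    """similarity between captured points and a stored template:
    1 / (1 + mean point-to-point distance), so identical point sets
    score 1.0 and the score falls toward 0 as they diverge."""
    d = sum(math.dist(p, t) for p, t in zip(points, template)) / len(template)
    return 1.0 / (1.0 + d)

def match_gesture(points, database, threshold=0.8):
    """return the best-scoring predetermined gesture, or None when no
    scoring value exceeds the threshold (gesture left unrecognized)."""
    name, score = max(((n, gesture_score(points, t))
                       for n, t in database.items()), key=lambda x: x[1])
    return name if score > threshold else None

# hypothetical two-point gesture templates in normalized image coordinates
database = {"tap": [(0.0, 0.0), (0.0, 0.1)], "swipe": [(0.0, 0.0), (1.0, 0.0)]}
print(match_gesture([(0.0, 0.0), (0.02, 0.1)], database))  # near the "tap" template
```

the threshold plays the role described in the embodiments: a point set close to a stored template clears it and is recognized, while an ambiguous point set scores below it and produces no user input.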
in one or more embodiments, the one or more virtual objects comprises a virtual user interface. additional and other objects, features, and advantages of the invention are described in the detailed description, figures and claims. brief description of the drawings the drawings illustrate the design and utility of various embodiments. the invention is specifically illustrated with figures 135a-135j. it should be noted that the figures are not drawn to scale and that elements of similar structures or functions are represented by like reference numerals throughout the figures. in order to better appreciate how to obtain the above-recited and other advantages and objects of various embodiments, a more detailed description briefly described above will be rendered by reference to specific embodiments thereof, which are illustrated in the accompanying drawings. fig. 1 illustrates a system architecture of an augmented reality (ar) system interacting with one or more servers, according to one illustrated embodiment. fig. 2 illustrates a detailed view of a cell phone used as an ar device interacting with one or more servers, according to one illustrated embodiment. fig. 3 illustrates a plan view of an example ar device mounted on a user's head, according to one illustrated embodiment. figs. 4a-4d illustrate one or more embodiments of various internal processing components of the wearable ar device. figs. 5a-5h illustrate embodiments of transmitting focused light to a user through a transmissive beamsplitter substrate. figs. 6a and 6b illustrate embodiments of coupling a lens element with the transmissive beamsplitter substrate of figs. 5a-5h. figs. 7a and 7b illustrate embodiments of using one or more waveguides to transmit light to a user. figs. 8a-8q illustrate embodiments of a diffractive optical element (doe). figs. 9a and 9b illustrate a wavefront produced from a light projector, according to one illustrated embodiment. fig.
10 illustrates an embodiment of a stacked configuration of multiple transmissive beamsplitter substrates coupled with optical elements, according to one illustrated embodiment. figs. 11a-11c illustrate a set of beamlets projected into a user's pupil, according to the illustrated embodiments. figs. 12a and 12b illustrate configurations of an array of microprojectors, according to the illustrated embodiments. figs. 13a-13m illustrate embodiments of coupling microprojectors with optical elements, according to the illustrated embodiments. figs. 14a-14f illustrate embodiments of spatial light modulators coupled with optical elements, according to the illustrated embodiments. figs. 15a-15c illustrate the use of wedge-type waveguides along with a plurality of light sources, according to the illustrated embodiments. figs. 16a-16o illustrate embodiments of coupling optical elements to optical fibers, according to the illustrated embodiments. fig. 17 illustrates a notch filter, according to one illustrated embodiment. fig. 18 illustrates a spiral pattern of a fiber scanning display, according to one illustrated embodiment. figs. 19a-19n illustrate occlusion effects in presenting a darkfield to a user, according to the illustrated embodiments. figs. 20a-20o illustrate embodiments of various waveguide assemblies, according to the illustrated embodiments. figs. 21a-21n illustrate various configurations of does coupled to other optical elements, according to the illustrated embodiments. figs. 22a-22y illustrate various configurations of freeform optics, according to the illustrated embodiments. fig. 23 illustrates a top view of components of a simplified individual ar device. fig. 24 illustrates an example embodiment of the optics of the individual ar system. fig. 25 illustrates a system architecture of the individual ar system, according to one embodiment. fig. 26 illustrates a room based sensor system, according to one embodiment. fig.
27 illustrates a communication architecture of the augmented reality system and the interaction of the augmented reality systems of many users with the cloud.
fig. 28 illustrates a simplified view of the passable world model, according to one embodiment.
fig. 29 illustrates an example method of rendering using the passable world model, according to one embodiment.
fig. 30 illustrates a high level flow diagram for a process of recognizing an object, according to one embodiment.
fig. 31 illustrates a ring buffer approach employed by object recognizers to recognize objects in the passable world, according to one embodiment.
fig. 32 illustrates an example topological map, according to one embodiment.
fig. 33 illustrates a high level flow diagram for a process of localization using the topological map, according to one embodiment.
fig. 34 illustrates a geometric map as a connection between various keyframes, according to one embodiment.
fig. 35 illustrates an example embodiment of the topological map layered on top of the geometric map, according to one embodiment.
fig. 36 illustrates a high level flow diagram for a process of performing a wave propagation bundle adjust, according to one embodiment.
fig. 37 illustrates map points and render lines from the map points to the keyframes as seen through a virtual keyframe, according to one embodiment.
fig. 38 illustrates a high level flow diagram for a process of finding map points based on render rather than search, according to one embodiment.
fig. 39 illustrates a high level flow diagram for a process of rendering a virtual object based on a light map, according to one embodiment.
fig. 40 illustrates a high level flow diagram for a process of creating a light map, according to one embodiment.
fig. 41 depicts a user-centric light map, according to one embodiment.
fig. 42 depicts an object-centric light map, according to one embodiment.
fig.
43 illustrates a high level flow diagram for a process of transforming a light map, according to one embodiment.
fig. 44 illustrates a library of autonomous navigation definitions or objects, according to one embodiment.
fig. 45 illustrates an interaction of various autonomous navigation objects, according to one embodiment.
fig. 46 illustrates a stack of autonomous navigation definitions or objects, according to one embodiment.
figs. 47a-47b illustrate using the autonomous navigation definitions to identify emotional states, according to one embodiment.
fig. 48 illustrates a correlation threshold graph to be used to define an autonomous navigation definition or object, according to one embodiment.
fig. 49 illustrates a system view of the passable world model, according to one embodiment.
fig. 50 illustrates an example method of displaying a virtual scene, according to one embodiment.
fig. 51 illustrates a plan view of various modules of the ar system, according to one illustrated embodiment.
fig. 52 illustrates an example of objects viewed by a user when the ar device is operated in an augmented reality mode, according to one illustrated embodiment.
fig. 53 illustrates an example of objects viewed by a user when the ar device is operated in a virtual mode, according to one illustrated embodiment.
fig. 54 illustrates an example of objects viewed by a user when the ar device is operated in a blended virtual interface mode, according to one illustrated embodiment.
fig. 55 illustrates an embodiment wherein two users located in different geographical locations each interact with the other user and a common virtual world through their respective user devices, according to one embodiment.
fig. 56 illustrates an embodiment wherein the embodiment of fig. 55 is expanded to include the use of a haptic device, according to one embodiment.
figs. 57a-57b illustrate an example of mixed mode interfacing, according to one or more embodiments.
fig.
58 illustrates an example illustration of a user's view when interfacing the ar system, according to one embodiment.
fig. 59 illustrates an example illustration of a user's view showing a virtual object triggered by a physical object when the user is interfacing the system in an augmented reality mode, according to one embodiment.
fig. 60 illustrates one embodiment of an augmented and virtual reality integration configuration wherein one user in an augmented reality experience visualizes the presence of another user in a virtual reality experience.
fig. 61 illustrates one embodiment of a time and/or contingency event based augmented reality experience configuration.
fig. 62 illustrates one embodiment of a user display configuration suitable for virtual and/or augmented reality experiences.
fig. 63 illustrates one embodiment of local and cloud-based computing coordination.
fig. 64 illustrates various aspects of registration configurations, according to one illustrated embodiment.
fig. 65 illustrates an example scenario of interacting with the ar system, according to one embodiment.
fig. 66 illustrates another perspective of the example scenario of fig. 65, according to another embodiment.
fig. 67 illustrates yet another perspective view of the example scenario of fig. 65, according to another embodiment.
fig. 68 illustrates a top view of the example scenario, according to one embodiment.
fig. 69 illustrates a game view of the example scenario of figs. 65-68, according to one embodiment.
fig. 70 illustrates a top view of the example scenario of figs. 65-68, according to one embodiment.
fig. 71 illustrates an augmented reality scenario including multiple users, according to one embodiment.
figs. 72a-72b illustrate using a smartphone or tablet as an ar device, according to one embodiment.
fig. 73 illustrates an example method of using localization to communicate between users of the ar system, according to one embodiment.
figs.
74a-74b illustrate an example office scenario of interacting with the ar system, according to one embodiment.
fig. 75 illustrates an example scenario of interacting with the ar system in a house, according to one embodiment.
fig. 76 illustrates another example scenario of interacting with the ar system in a house, according to one embodiment.
fig. 77 illustrates another example scenario of interacting with the ar system in a house, according to one embodiment.
figs. 78a-78b illustrate yet another example scenario of interacting with the ar system in a house, according to one embodiment.
figs. 79a-79e illustrate another example scenario of interacting with the ar system in a house, according to one embodiment.
figs. 80a-80c illustrate another example scenario of interacting with the ar system in a virtual room, according to one embodiment.
fig. 81 illustrates another example user interaction scenario, according to one embodiment.
fig. 82 illustrates another example user interaction scenario, according to one embodiment.
figs. 83a-83b illustrate yet another example user interaction scenario, according to one or more embodiments.
figs. 84a-84c illustrate the user interacting with the ar system in a virtual space, according to one or more embodiments.
figs. 85a-85c illustrate various user interface embodiments.
figs. 86a-86c illustrate other embodiments to create a user interface, according to one or more embodiments.
figs. 87a-87c illustrate other embodiments to create and move a user interface, according to one or more embodiments.
figs. 88a-88c illustrate user interfaces created on the user's hand, according to one or more embodiments.
figs. 89a-89j illustrate an example user shopping experience with the ar system, according to one or more embodiments.
fig. 90 illustrates an example library experience with the ar system, according to one or more embodiments.
figs.
91a-91f illustrate an example healthcare experience with the ar system, according to one or more embodiments.
fig. 92 illustrates an example labor experience with the ar system, according to one or more embodiments.
figs. 93a-93l illustrate an example workspace experience with the ar system, according to one or more embodiments.
fig. 94 illustrates another example workspace experience with the ar system, according to one or more embodiments.
figs. 95a-95e illustrate another ar experience, according to one or more embodiments.
figs. 96a-96d illustrate yet another ar experience, according to one or more embodiments.
figs. 97a-97h illustrate a gaming experience with the ar system, according to one or more embodiments.
figs. 98a-98d illustrate a web shopping experience with the ar system, according to one or more embodiments.
fig. 99 illustrates a block diagram of various games in a gaming platform, according to one or more embodiments.
fig. 100 illustrates a variety of user inputs to communicate with the augmented reality system, according to one embodiment.
fig. 101 illustrates led lights and diodes tracking a movement of the user's eyes, according to one embodiment.
fig. 102 illustrates a purkinje image, according to one embodiment.
fig. 103 illustrates a variety of hand gestures that may be used to communicate with the augmented reality system, according to one embodiment.
fig. 104 illustrates an example totem, according to one embodiment.
figs. 105a-105c illustrate other example totems, according to one or more embodiments.
figs. 106a-106c illustrate other totems that may be used to communicate with the augmented reality system.
figs. 107a-107d illustrate other example totems, according to one or more embodiments.
figs. 108a-108c illustrate example embodiments of ring and bracelet totems, according to one or more embodiments.
figs. 109a-109c illustrate more example totems, according to one or more embodiments.
figs.
110a-110b illustrate a charms totem and a keychain totem, according to one or more embodiments.
fig. 111 illustrates a high level flow diagram for a process of determining user input through a totem, according to one embodiment.
fig. 112 illustrates a high level flow diagram for a process of producing a sound wavefront, according to one embodiment.
fig. 113 is a block diagram of components used to produce a sound wavefront, according to one embodiment.
fig. 114 is an example method of determining sparse and dense points, according to one embodiment.
fig. 115 is a block diagram of projecting textured light, according to one embodiment.
fig. 116 is an example block diagram of data processing, according to one embodiment.
fig. 117 is a schematic of an eye for gaze tracking, according to one embodiment.
fig. 118 shows another perspective of the eye and one or more cameras for gaze tracking, according to one embodiment.
fig. 119 shows yet another perspective of the eye and one or more cameras for gaze tracking, according to one embodiment.
fig. 120 shows yet another perspective of the eye and one or more cameras for gaze tracking, according to one embodiment.
fig. 121 shows a translational matrix view for gaze tracking, according to one embodiment.
fig. 122 illustrates an example method of gaze tracking, according to one embodiment.
figs. 123a-123d illustrate a series of example user interface flows using avatars, according to one embodiment.
figs. 124a-124m illustrate a series of example user interface flows using extrusion, according to one embodiment.
figs. 125a-125m illustrate a series of example user interface flows using gauntlet, according to one embodiment.
figs. 126a-126l illustrate a series of example user interface flows using grow, according to one embodiment.
figs. 127a-127e illustrate a series of example user interface flows using brush, according to one embodiment.
figs.
128a-128p illustrate a series of example user interface flows using fingerbrush, according to one embodiment.
figs. 129a-129m illustrate a series of example user interface flows using pivot, according to one embodiment.
figs. 130a-130i illustrate a series of example user interface flows using strings, according to one embodiment.
figs. 131a-131i illustrate a series of example user interface flows using spiderweb, according to one embodiment.
fig. 132 is a plan view of various mechanisms by which a virtual object relates to one or more physical objects.
fig. 133 is a plan view of various types of ar rendering, according to one or more embodiments.
fig. 134 illustrates various types of user input in an ar system, according to one or more embodiments.
figs. 135a-135j illustrate various embodiments pertaining to using gestures in an ar system, according to one or more embodiments of the invention.
fig. 136 illustrates a plan view of various components for a calibration mechanism of the ar system, according to one or more embodiments.
fig. 137 illustrates a view of an ar device on a user's face, the ar device having eye tracking cameras, according to one or more embodiments.
fig. 138 illustrates an eye identification image of the ar system, according to one or more embodiments.
fig. 139 illustrates a retinal image taken with an ar system, according to one or more embodiments.
fig. 140 is a process flow diagram of an example method of generating a virtual user interface, according to one illustrated embodiment.
fig. 141 is another process flow diagram of an example method of generating a virtual user interface based on a coordinate frame, according to one illustrated embodiment.
fig. 142 is a process flow diagram of an example method of constructing a customized user interface, according to one illustrated embodiment.
fig.
143 is a process flow diagram of an example method of retrieving information from the passable world model and interacting with other users of the ar system, according to one illustrated embodiment.
fig. 144 is a process flow diagram of an example method of retrieving information from a knowledge base in the cloud based on received input, according to one illustrated embodiment.
fig. 145 is a process flow diagram of an example method of calibrating the ar system, according to one illustrated embodiment.

detailed description

various embodiments will now be described in detail with reference to the drawings, which are provided as illustrative examples so as to enable those skilled in the art to practice the invention. notably, the figures and the examples below are not meant to limit the scope of the present invention. where certain elements of the present invention may be partially or fully implemented using known components (or methods or processes), only those portions of such known components (or methods or processes) that are necessary for an understanding of the present invention will be described, and the detailed descriptions of other portions of such known components (or methods or processes) will be omitted so as not to obscure the invention. further, various embodiments encompass present and future known equivalents to the components referred to herein by way of illustration. in the foregoing specification, the invention has been described with reference to specific embodiments thereof. it will, however, be evident that various modifications and changes may be made thereto without departing from the scope of the claims. the specification and drawings are, accordingly, to be regarded in an illustrative rather than restrictive sense. disclosed are methods and systems for generating virtual and/or augmented reality.
in order to provide a realistic and enjoyable virtual reality (vr) or augmented reality (ar) experience, virtual content may be strategically delivered to the user's eyes in a manner that is respectful of the human eye's physiology and limitations. the following disclosure will provide various embodiments of such optical systems that may be integrated into an ar system. although most of the disclosures herein will be discussed in the context of ar systems, it should be appreciated that the same technologies may also be used for vr systems, and the following embodiments should not be read as limiting. the following disclosure will provide details on various types of systems in which ar users may interact with each other through the creation of a map that comprises comprehensive information about the physical objects of the real world in real-time. the map may be advantageously consulted in order to project virtual images in relation to known real objects. the following disclosure will provide various approaches to understanding information about the real world, and using this information to provide a more realistic and enjoyable ar experience. additionally, this disclosure will provide various user scenarios and applications in which ar systems such as the ones described herein may be realized.

system overview

in one or more embodiments, the ar system 10 comprises a computing network 5, comprised of one or more computer servers 11 connected through one or more high bandwidth interfaces 15. the servers 11 in the computing network may or may not be co-located. the one or more servers 11 each comprise one or more processors for executing program instructions. the servers may also include memory for storing the program instructions and data that is used and/or generated by processes being carried out by the servers 11 under direction of the program instructions.
the computing network 5 communicates data between the servers 11 and between the servers and one or more user devices 12 over one or more data network connections 13. examples of such data networks include, without limitation, any and all types of public and private data networks, both mobile and wired, including for example the interconnection of many of such networks commonly referred to as the internet. no particular media, topology or protocol is intended to be implied by the figure. user devices are configured for communicating directly with computing network 5, or any of the servers 11. alternatively, user devices 12 communicate with the remote servers 11, and, optionally, with other user devices locally, through a specially programmed, local gateway 14 for processing data and/or for communicating data between the network 5 and one or more local user devices 12. as illustrated, gateway 14 is implemented as a separate hardware component, which includes a processor for executing software instructions and memory for storing software instructions and data. the gateway has its own wired and/or wireless connection to data networks for communicating with the servers 11 comprising computing network 5. alternatively, gateway 14 can be integrated with a user device 12, which is worn or carried by a user. for example, the gateway 14 may be implemented as a downloadable software application installed and running on a processor included in the user device 12. the gateway 14 provides, in one embodiment, one or more users access to the computing network 5 via the data network 13. servers 11 each include, for example, working memory and storage for storing data and software programs, microprocessors for executing program instructions, graphics processors and other special processors for rendering and generating graphics, images, video, audio and multi-media files. 
computing network 5 may also comprise devices for storing data that is accessed, used or created by the servers 11. software programs running on the servers, and optionally on user devices 12 and gateways 14, are used to generate digital worlds (also referred to herein as virtual worlds) with which users interact using user devices 12. a digital world (or map) (as will be described in further detail below) is represented by data and processes that describe and/or define virtual, non-existent entities, environments, and conditions that can be presented to a user through a user device 12 for users to experience and interact with. for example, some type of object, entity or item that will appear to be physically present when instantiated in a scene being viewed or experienced by a user may include a description of its appearance, its behavior, how a user is permitted to interact with it, and other characteristics. data used to create an environment of a virtual world (including virtual objects) may include, for example, atmospheric data, terrain data, weather data, temperature data, location data, and other data used to define and/or describe a virtual environment. additionally, data defining various conditions that govern the operation of a virtual world may include, for example, laws of physics, time, spatial relationships and other data that may be used to define and/or create the conditions under which a virtual world (including its virtual objects) operates. the entity, object, condition, characteristic, behavior or other feature of a digital world will be generically referred to herein, unless the context indicates otherwise, as an object (e.g., digital object, virtual object, rendered physical object, etc.). objects may be any type of animate or inanimate object, including but not limited to, buildings, plants, vehicles, people, animals, creatures, machines, data, video, text, pictures, and other users.
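As a rough illustration of the object and environment data described above, the following sketch records an object's appearance, behaviors, and permitted interactions alongside the environment data of a virtual world. All class and field names here are hypothetical illustrations; the actual data model used by the system is not specified in this description.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of "object data": a record describing a virtual
# object's appearance, behavior, and the interactions a user is permitted
# to have with it. Field names are illustrative only.
@dataclass
class VirtualObjectData:
    object_id: str
    appearance: dict                                   # e.g. mesh/texture references
    behaviors: list = field(default_factory=list)      # named behaviors
    interactions: list = field(default_factory=list)   # permitted user interactions

# Hypothetical sketch of environment data for a virtual world: terrain,
# weather, location, and the objects instantiated within it.
@dataclass
class EnvironmentData:
    terrain: dict
    weather: dict
    location: tuple                                    # world coordinates
    objects: list = field(default_factory=list)

    def add_object(self, obj: VirtualObjectData):
        # Instantiating an object makes it part of the environment.
        self.objects.append(obj)

env = EnvironmentData(terrain={"type": "room"}, weather={}, location=(0, 0))
env.add_object(VirtualObjectData("creature-1",
                                 appearance={"mesh": "creature.obj"},
                                 interactions=["grab", "look"]))
print(len(env.objects))  # 1
```

A server (or, in some implementations, a gateway or user device) would process records like these to instantiate and render each object for the user.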
objects may also be defined in a digital world for storing information about items, behaviors, or conditions actually present in the physical world. the data that describes or defines the entity, object or item, or that stores its current state, is generally referred to herein as object data. this data is processed by the servers 11 or, depending on the implementation, by a gateway 14 or user device 12, to instantiate an instance of the object and render the object in an appropriate manner for the user to experience through a user device. programmers who develop and/or curate a digital world create or define objects, and the conditions under which they are instantiated. however, a digital world can allow for others to create or modify objects. once an object is instantiated, the state of the object may be permitted to be altered, controlled or manipulated by one or more users experiencing a digital world. for example, in one embodiment, development, production, and administration of a digital world are generally provided by one or more system administrative programmers. in some embodiments, this may include development, design, and/or execution of story lines, themes, and events in the digital worlds as well as distribution of narratives through various forms of events and media such as, for example, film, digital, network, mobile, augmented reality, and live entertainment. the system administrative programmers may also handle technical administration, moderation, and curation of the digital worlds and user communities associated therewith, as well as other tasks typically performed by network administrative personnel. users interact with one or more digital worlds using some type of a local computing device, which is generally designated as a user device 12. 
examples of such user devices include, but are not limited to, a smart phone, tablet device, head-mounted display (hmd), gaming console, or any other device capable of communicating data and providing an interface or display to the user, as well as combinations of such devices. in some embodiments, the user device 12 may include, or communicate with, local peripheral or input/output components such as, for example, a keyboard, mouse, joystick, gaming controller, haptic interface device, motion capture controller, an optical tracking device, audio equipment, voice equipment, projector system, 3d display, and/or holographic 3d contact lens. an example of a user device 12 for interacting with the system 10 is illustrated in fig. 2. in the example embodiment shown in fig. 2, a user 21 may interface one or more digital worlds through a smart phone 22. the gateway is implemented by a software application 23 stored on and running on the smart phone 22. in this particular example, the data network 13 includes a wireless mobile network connecting the user device (e.g., smart phone 22) to the computer network 5. in one implementation of a preferred embodiment, system 10 is capable of supporting a large number of simultaneous users (e.g., millions of users), each interfacing with the same digital world, or with multiple digital worlds, using some type of user device 12. the user device provides to the user an interface for enabling a visual, audible, and/or physical interaction between the user and a digital world generated by the servers 11, including other users and objects (real or virtual) presented to the user. the interface provides the user with a rendered scene that can be viewed, heard or otherwise sensed, and the ability to interact with the scene in real-time. the manner in which the user interacts with the rendered scene may be dictated by the capabilities of the user device.
for example, if the user device is a smart phone, the user interaction may be implemented by a user contacting a touch screen. in another example, if the user device is a computer or gaming console, the user interaction may be implemented using a keyboard or gaming controller. user devices may include additional components that enable user interaction such as sensors, wherein the objects and information (including gestures) detected by the sensors may be provided as input representing user interaction with the virtual world using the user device. the rendered scene can be presented in various formats such as, for example, two-dimensional or three-dimensional visual displays (including projections), sound, and haptic or tactile feedback. the rendered scene may be interfaced by the user in one or more modes including, for example, augmented reality, virtual reality, and combinations thereof. the format of the rendered scene, as well as the interface modes, may be dictated by one or more of the following: user device, data processing capability, user device connectivity, network capacity and system workload. having a large number of users simultaneously interacting with the digital worlds, and the real-time nature of the data exchange, is enabled by the computing network 5, servers 11, the gateway component 14 (optionally), and the user device 12. in one example, the computing network 5 is comprised of a large-scale computing system having single and/or multi-core servers (e.g., servers 11) connected through high-speed connections (e.g., high bandwidth interfaces 15). the computing network 5 may form a cloud or grid network. each of the servers includes memory, or is coupled with computer readable memory for storing software for implementing data to create, design, alter, or process objects of a digital world. these objects and their instantiations may be dynamic, come in and out of existence, change over time, and change in response to other conditions. 
examples of dynamic capabilities of the objects are generally discussed herein with respect to various embodiments. in some embodiments, each user interfacing the system 10 may also be represented as an object, and/or a collection of objects, within one or more digital worlds. the servers 11 within the computing network 5 also store computational state data for each of the digital worlds. the computational state data (also referred to herein as state data) may be a component of the object data, and generally defines the state of an instance of an object at a given instance in time. thus, the computational state data may change over time and may be impacted by the actions of one or more users and/or programmers maintaining the system 10. as a user impacts the computational state data (or other data comprising the digital worlds), the user directly alters or otherwise manipulates the digital world. if the digital world is shared with, or interfaced by, other users, the actions of the user may affect what is experienced by other users interacting with the digital world. thus, in some embodiments, changes to the digital world made by a user will be experienced by other users interfacing with the system 10. the data stored in one or more servers 11 within the computing network 5 is, in one embodiment, transmitted or deployed at a high-speed, and with low latency, to one or more user devices 12 and/or gateway components 14. in one embodiment, object data shared by servers may be complete or may be compressed, and contain instructions for recreating the full object data on the user side, rendered and visualized by the user's local computing device (e.g., gateway 14 and/or user device 12). 
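The role of the computational state data described above can be illustrated with a minimal sketch: a shared store holds the state of each object instance at a given instance in time, and an update made by one user is propagated so that other users interfacing the same digital world experience the change. The class and callback names are hypothetical illustrations, not part of the actual system.

```python
# Hypothetical sketch of computational state data for a shared digital
# world. One user's change to an object instance updates the stored state
# and is pushed to every other subscribed user device.
class SharedWorldState:
    def __init__(self):
        self.state = {}        # instance_id -> state dict
        self.subscribers = []  # callbacks standing in for other users' devices

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def update(self, instance_id, **changes):
        # A user altering an object updates the shared state...
        self.state.setdefault(instance_id, {}).update(changes)
        # ...and the change is experienced by every other user.
        for notify in self.subscribers:
            notify(instance_id, dict(self.state[instance_id]))

seen = []
world = SharedWorldState()
world.subscribe(lambda oid, s: seen.append((oid, s)))
world.update("door-7", open=True)
print(seen)  # [('door-7', {'open': True})]
```

In the actual system the propagation would of course travel over the data network with the low-latency delivery described above, rather than through in-process callbacks.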
software running on the servers 11 of the computing network 5 may, in some embodiments, adapt the data it generates and sends to a particular user's device 12 for objects within the digital world (or any other data exchanged by the computing network 5) as a function of the user's specific device and bandwidth. for example, when a user interacts with the digital world or map through a user device 12, a server 11 may recognize the specific type of device being used by the user, the device's connectivity and/or available bandwidth between the user device and server, and appropriately size and balance the data being delivered to the device to optimize the user interaction. an example of this may include reducing the size of the transmitted data to a low resolution quality, such that the data may be displayed on a particular user device having a low resolution display. in a preferred embodiment, the computing network 5 and/or gateway component 14 deliver data to the user device 12 at a rate sufficient to present an interface operating at 15 frames/second or higher, and at a resolution that is high definition quality or greater. the gateway 14 provides local connection to the computing network 5 for one or more users. in some embodiments, it may be implemented by a downloadable software application that runs on the user device 12 or another local device, such as that shown in fig. 2. in other embodiments, it may be implemented by a hardware component (with appropriate software/firmware stored on the component, the component having a processor) that is either in communication with, but not incorporated with or attached to, the user device 12, or incorporated with the user device 12. the gateway 14 communicates with the computing network 5 via the data network 13, and provides data exchange between the computing network 5 and one or more local user devices 12.
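The sizing and balancing behavior described above might be sketched as a simple server-side selection function that picks a delivery quality from the device's reported bandwidth and display resolution. The thresholds and tier names below are invented for illustration; the actual adaptation logic is not specified here.

```python
# Hypothetical sketch: choose a delivery quality tier from the reported
# device bandwidth and display size, so a low-resolution or poorly
# connected device receives smaller data. Thresholds are illustrative.
def choose_quality(bandwidth_mbps: float, display_height_px: int) -> str:
    if bandwidth_mbps < 2 or display_height_px < 720:
        return "low"
    if bandwidth_mbps < 10 or display_height_px < 1080:
        return "medium"
    return "high"

print(choose_quality(1.5, 1080))  # low  (bandwidth-limited)
print(choose_quality(50, 480))    # low  (display-limited)
print(choose_quality(50, 1440))   # high
```

The point of the sketch is that either constraint alone (connectivity or display) is enough to force a smaller payload, matching the low-resolution example in the text.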
as discussed in greater detail below, the gateway component 14 may include software, firmware, memory, and processing circuitry, and may be capable of processing data communicated between the network 5 and one or more local user devices 12. in some embodiments, the gateway component 14 monitors and regulates the rate of the data exchanged between the user device 12 and the computer network 5 to allow optimum data processing capabilities for the particular user device 12. for example, in some embodiments, the gateway 14 buffers and downloads both static and dynamic aspects of a digital world, even those that are beyond the field of view presented to the user through an interface connected with the user device. in such an embodiment, instances of static objects (structured data, software implemented methods, or both) may be stored in memory (local to the gateway component 14, the user device 12, or both) and are referenced against the local user's current position, as indicated by data provided by the computing network 5 and/or the user's device 12. instances of dynamic objects, which may include, for example, intelligent software agents and objects controlled by other users and/or the local user, are stored in a high-speed memory buffer. dynamic objects representing a two-dimensional or three-dimensional object within the scene presented to a user can be, for example, broken down into component shapes, such as a static shape that is moving but is not changing, and a dynamic shape that is changing. the part of the dynamic object that is changing can be updated by a real-time, threaded high priority data stream from a server 11, through computing network 5, managed by the gateway component 14. as one example of a prioritized threaded data stream, data that is within a 60 degree field-of-view of the user's eye may be given higher priority than data that is more peripheral. 
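The prioritized threaded data stream described above can be sketched as a scoring rule: updates within the 60 degree field-of-view cone around the user's gaze outrank more peripheral data, and dynamic objects outrank static background. The numeric scores below are invented for illustration.

```python
# Hypothetical sketch of stream prioritization: data within a 60-degree
# field of view (i.e. within 30 degrees of the gaze direction) gets
# higher priority, and dynamic objects rank above static ones.
def stream_priority(angle_from_gaze_deg: float, is_dynamic: bool) -> int:
    priority = 0
    if angle_from_gaze_deg <= 30:   # inside the 60-degree cone
        priority += 2
    if is_dynamic:                  # dynamic objects over static background
        priority += 1
    return priority

# (name, angle from gaze in degrees, dynamic?)
updates = [("tree", 80, False), ("avatar", 10, True), ("wall", 15, False)]
ordered = sorted(updates, key=lambda u: stream_priority(u[1], u[2]), reverse=True)
print([name for name, _, _ in ordered])  # ['avatar', 'wall', 'tree']
```

A gateway managing the stream could use such a score to decide which object updates to thread through first when bandwidth is constrained.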
another example includes prioritizing dynamic characters and/or objects within the user's field-of-view over static objects in the background. in addition to managing a data connection between the computing network 5 and a user device 12, the gateway component 14 may store and/or process data that may be presented to the user device 12. for example, the gateway component 14 may, in some embodiments, receive compressed data describing, for example, graphical objects to be rendered for viewing by a user, from the computing network 5 and perform advanced rendering techniques to alleviate the data load transmitted to the user device 12 from the computing network 5. in another example, in which gateway 14 is a separate device, the gateway 14 may store and/or process data for a local instance of an object rather than transmitting the data to the computing network 5 for processing. referring now to fig. 3 , virtual worlds may be experienced by one or more users in various formats that may depend upon the capabilities of the user's device. in some embodiments, the user device 12 may include, for example, a smart phone, tablet device, head-mounted display (hmd), gaming console, or a wearable device. generally, the user device will include a processor for executing program code stored in memory on the device, coupled with a display, and a communications interface. an example embodiment of a user device is illustrated in fig. 3 , wherein the user device comprises a mobile, wearable device, namely a head-mounted display system 30. in accordance with an embodiment of the present disclosure, the head-mounted display system 30 includes a user interface 37, user-sensing system 34, environment-sensing system 36, and a processor 38. although the processor 38 is shown in fig. 
3 as an isolated component separate from the head-mounted system 30, in an alternate embodiment, the processor 38 may be integrated with one or more components of the head-mounted system 30, or may be integrated into other system 10 components such as, for example, the gateway 14, as shown in fig. 1 and fig. 2 . the user device 30 presents to the user an interface 37 for interacting with and experiencing a digital world. such interaction may involve the user and the digital world, one or more other users interfacing the system 10, and objects within the digital world. the interface 37 generally provides image and/or audio sensory input (and in some embodiments, physical sensory input) to the user. thus, the interface 37 may include speakers (not shown) and a display component 33 capable, in some embodiments, of enabling stereoscopic 3d viewing and/or 3d viewing which embodies more natural characteristics of the human vision system. in some embodiments, the display component 33 may comprise a transparent interface (such as a clear oled) which, when in an "off" setting, enables an optically correct view of the physical environment around the user with little-to-no optical distortion or computing overlay. as discussed in greater detail below, the interface 37 may include additional settings that allow for a variety of visual/interface performance and functionality. the user-sensing system 34 may include, in some embodiments, one or more sensors 31 operable to detect certain features, characteristics, or information related to the individual user wearing the system 30. for example, in some embodiments, the sensors 31 may include a camera or optical detection/scanning circuitry capable of detecting real-time optical characteristics/measurements of the user. 
the real-time optical characteristics/measurements of the user may, for example, be one or more of the following: pupil constriction/dilation, angular measurement/positioning of each pupil, sphericity, eye shape (as eye shape changes over time) and other anatomic data. this data may provide, or be used to calculate, information (e.g., the user's visual focal point) that may be used by the head-mounted system 30 and/or interface system 10 to optimize the user's viewing experience. for example, in one embodiment, the sensors 31 may each measure a rate of pupil contraction for each of the user's eyes. this data may be transmitted to the processor 38 (or the gateway component 14 or to a server 11), wherein the data is used to determine, for example, the user's reaction to a brightness setting of the interface display 33. the interface 37 may be adjusted in accordance with the user's reaction by, for example, dimming the display 33 if the user's reaction indicates that the brightness level of the display 33 is too high. the user-sensing system 34 may include components other than those discussed above or illustrated in fig. 3 . for example, in some embodiments, the user-sensing system 34 may include a microphone for receiving voice input from the user. the user sensing system 34 may also include one or more infrared camera sensors, one or more visible spectrum camera sensors, structured light emitters and/or sensors, infrared light emitters, coherent light emitters and/or sensors, gyros, accelerometers, magnetometers, proximity sensors, gps sensors, ultrasonic emitters and detectors and haptic interfaces. the environment-sensing system 36 includes one or more sensors 32 for obtaining data from the physical environment around a user. objects or information detected by the sensors may be provided as input to the user device. in some embodiments, this input may represent user interaction with the virtual world. 
for example, a user viewing a virtual keyboard on a desk may gesture with fingers as if typing on the virtual keyboard. the motion of the fingers moving may be captured by the sensors 32 and provided to the user device or system as input, wherein the input may be used to change the virtual world or create new virtual objects. for example, the motion of the fingers may be recognized (e.g., using a software program of the processor, etc.) as typing, and the recognized gesture of typing may be combined with the known location of the virtual keys on the virtual keyboard. the system may then render a virtual monitor displayed to the user (or other users interfacing the system) wherein the virtual monitor displays the text being typed by the user. the sensors 32 may include, for example, a generally outward-facing camera or a scanner for interpreting scene information, for example, through continuously and/or intermittently projected infrared structured light. the environment-sensing system (36) may be used for mapping one or more elements of the physical environment around the user by detecting and registering the local environment, including static objects, dynamic objects, people, gestures and various lighting, atmospheric and acoustic conditions. thus, in some embodiments, the environment-sensing system (36) may include image-based 3d reconstruction software embedded in a local computing system (e.g., gateway component 14 or processor 38) and operable to digitally reconstruct one or more objects or information detected by the sensors 32. 
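the virtual-keyboard example above amounts to combining a recognized typing gesture with the known locations of the virtual keys. the keyboard layout, hit radius, and helper names below are assumptions made up for this illustration.

```python
# Illustrative sketch: map a detected fingertip position on the desk plane
# to the nearest virtual key, if the touch lands close enough to a key
# center. Layout coordinates (in centimeters) and the hit radius are
# invented for the example.

KEYS = {"a": (0, 0), "s": (2, 0), "d": (4, 0), "f": (6, 0)}

def key_for_touch(x, y, radius=1.0):
    """Return the key whose center is nearest the fingertip, or None on a miss."""
    best, best_d = None, radius
    for key, (kx, ky) in KEYS.items():
        d = ((x - kx) ** 2 + (y - ky) ** 2) ** 0.5
        if d <= best_d:
            best, best_d = key, d
    return best

touches = [(0.2, 0.1), (3.9, -0.3), (9, 9)]
print([key_for_touch(*p) for p in touches])  # → ['a', 'd', None]
```

the resulting key stream would then feed the rendered virtual monitor described in the text.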
in one example embodiment, the environment-sensing system 36 provides one or more of the following: motion capture data (including gesture recognition), depth sensing, facial recognition, object recognition, unique object feature recognition, voice/audio recognition and processing, acoustic source localization, noise reduction, infrared or similar laser projection, as well as monochrome and/or color cmos sensors (or other similar sensors), field-of-view sensors, and a variety of other optical-enhancing sensors. it should be appreciated that the environment-sensing system 36 may include components other than those discussed above or illustrated in fig. 3 . for example, in some embodiments, the environment-sensing system 36 may include a microphone for receiving audio from the local environment. the environment-sensing system 36 may also include one or more infrared camera sensors, one or more visible spectrum camera sensors, structured light emitters and/or sensors, infrared light emitters, coherent light emitters and/or sensors, gyros, accelerometers, magnetometers, proximity sensors, gps sensors, ultrasonic emitters and detectors and haptic interfaces. as discussed above, the processor 38 may, in some embodiments, be integrated with other components of the head-mounted system 30, integrated with other components of the interface system 10, or may be an isolated device (wearable or separate from the user) as shown in fig. 3 . the processor 38 may be connected to various components of the head-mounted system 30 and/or components of the interface system 10 through a physical, wired connection, or through a wireless connection such as, for example, mobile network connections (including cellular telephone and data networks), wi-fi or bluetooth. 
in one or more embodiments, the processor 38 may include a memory module, integrated and/or additional graphics processing unit, wireless and/or wired internet connectivity, and codec and/or firmware capable of transforming data from a source (e.g., the computing network 5, the user-sensing system 34, the environment-sensing system 36, or the gateway component 14) into image and audio data, wherein the images/video and audio may be presented to the user via the interface 37. in one or more embodiments, the processor 38 handles data processing for the various components of the head-mounted system 30 as well as data exchange between the head-mounted system 30 and the gateway component 14 and, in some embodiments, the computing network 5. for example, the processor 38 may be used to buffer and process data streaming between the user and the computing network 5, thereby enabling a smooth, continuous and high fidelity user experience. in some embodiments, the processor 38 may process data at a rate sufficient to achieve anywhere from 8 frames/second at 320x240 resolution to 24 frames/second at high definition resolution (1280x720), or greater, such as 60-120 frames/second and 4k resolution and higher (10k+ resolution and 50,000 frames/second). additionally, the processor 38 may store and/or process data that may be presented to the user, rather than streamed in real-time from the computing network 5. for example, the processor 38 may, in some embodiments, receive compressed data from the computing network 5 and perform advanced rendering techniques (such as lighting or shading) to alleviate the data load transmitted to the user device 12 from the computing network 5. in another example, the processor 38 may store and/or process local object data rather than transmitting the data to the gateway component 14 or to the computing network 5. 
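the processing rates quoted above reduce to a simple pixel-throughput product. as a back-of-envelope check (illustrative only, the helper name is invented):

```python
# Back-of-envelope for the quoted processing rates:
# pixel throughput = width * height * frames per second.

def pixel_throughput(width, height, fps):
    return width * height * fps

print(pixel_throughput(320, 240, 8))     # low end: 614,400 pixels/s
print(pixel_throughput(1280, 720, 24))   # hd case: 22,118,400 pixels/s
print(pixel_throughput(1920, 1080, 60))  # 1080p at 60 fps: 124,416,000 pixels/s
```

the last figure matches the "around 125 million pixels per second" cited later in the text for pixel-by-pixel focus modulation of a 1080p/60 display.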
the head-mounted system 30 may, in some embodiments, include various settings, or modes, that allow for a variety of visual/interface performance and functionality. the modes may be selected manually by the user, or automatically by components of the head-mounted system 30 or the gateway component 14. as previously described, one example mode of the head-mounted system 30 includes an "off" mode, wherein the interface 37 provides substantially no digital or virtual content. in the off mode, the display component 33 may be transparent, thereby enabling an optically correct view of the physical environment around the user with little-to-no optical distortion or computing overlay. in one example embodiment, the head-mounted system 30 includes an "augmented" mode, wherein the interface 37 provides an augmented reality interface. in the augmented mode, the interface display 33 may be substantially transparent, thereby allowing the user to view the local, physical environment. at the same time, virtual object data provided by the computing network 5, the processor 38, and/or the gateway component 14 is presented on the display 33 in combination with the physical, local environment. the following section will go through various embodiments of example head-mounted user systems that may be used for virtual and augmented reality purposes. user systems referring to figs. 4a-4d , some general componentry options are illustrated. in the portions of the detailed description which follow the discussion of figs. 4a-4d , various systems, subsystems, and components are presented for addressing the objectives of providing a high-quality, comfortably-perceived display system for human vr and/or ar. as shown in fig. 4a , a user 60 of a head-mounted augmented reality system ("ar system") is depicted wearing a frame 64 structure coupled to a display system 62 positioned in front of the eyes of the user. 
a speaker 66 is coupled to the frame 64 in the depicted configuration and positioned adjacent the ear canal of the user 60 (in one embodiment, another speaker, not shown, is positioned adjacent the other ear canal of the user to provide for stereo / shapeable sound control). the display 62 is operatively coupled 68, such as by a wired lead or wireless connectivity, to a local processing and data module 70 which may be mounted in a variety of configurations, such as fixedly attached to the frame 64, fixedly attached to a helmet or hat 80 as shown in the embodiment of fig. 4b , embedded in headphones, removably attached to the torso 82 of the user 60 in a configuration (e.g., placed in a backpack (not shown)) as shown in the embodiment of fig. 4c , or removably attached to the hip 84 of the user 60 in a belt-coupling style configuration as shown in the embodiment of fig. 4d . the local processing and data module 70 may comprise a power-efficient processor or controller, as well as digital memory, such as flash memory, both of which may be utilized to assist in the processing, caching, and storage of data (a) captured from sensors which may be operatively coupled to the frame 64, such as image capture devices (such as cameras), microphones, inertial measurement units, accelerometers, compasses, gps units, radio devices, and/or gyros; and/or (b) acquired and/or processed using the remote processing module 72 and/or remote data repository 74, possibly for passage to the display 62 after such processing or retrieval. the local processing and data module 70 may be operatively coupled (76, 78), such as via wired or wireless communication links, to the remote processing module 72 and remote data repository 74 such that these remote modules (72, 74) are operatively coupled to each other and available as resources to the local processing and data module 70. 
the processing module 70 may control the optical systems and other systems of the ar system, and execute one or more computing tasks, including retrieving data from the memory or one or more databases (e.g., a cloud-based server) in order to provide virtual content to the user. in one embodiment, the remote processing module 72 may comprise one or more relatively powerful processors or controllers configured to analyze and process data and/or image information. in one embodiment, the remote data repository 74 may comprise a relatively large-scale digital data storage facility, which may be available through the internet or other networking configuration in a "cloud" resource configuration. in one embodiment, all data is stored and all computation is performed in the local processing and data module, allowing fully autonomous use from any remote modules. optical embodiments it should be appreciated that there may be many approaches in presenting 3d virtual content to the user's eyes through optical elements of the head-mounted user device. the following example embodiments may be used in combination with other approaches, and should not be read in a restrictive sense. the following example embodiments represent some example optical systems that may be integrated with the head-mounted user device (30) to allow the user to view virtual content in a comfortable and accommodation-friendly manner. referring to figs. 5a through 22y , various display configurations are presented that are designed to present the human eyes with photon-based radiation patterns that can be comfortably perceived as augmentations to physical reality, with high-levels of image quality and three-dimensional perception, as well as being capable of presenting two-dimensional content. referring to fig. 
5a , in a simplified example, a transmissive beamsplitter substrate 104 with a 45-degree reflecting surface 102 directs incoming radiation 106, which may be output from a lens (not shown), through the pupil 45 of the eye 58 and to the retina 54. the field of view for such a system is limited by the geometry of the beamsplitter 104. to accommodate comfortable viewing with minimal hardware, in one embodiment, a larger field of view can be created by aggregating the outputs/reflections of various different reflective and/or diffractive surfaces. this may be achieved by using, e.g., a frame-sequential configuration in which the eye 58 is presented with a sequence of frames at high frequency that provides the perception of a single coherent scene. as an alternative to, or in addition to, presenting different image data via different reflectors in a time-sequential fashion, the reflectors may separate content by other means, such as polarization selectivity or wavelength selectivity. in addition to being capable of relaying two-dimensional images, the reflectors may also relay the three-dimensional wavefronts associated with true-three-dimensional viewing of actual physical objects. referring to fig. 5b , a substrate 108 comprising a plurality of reflectors at a plurality of angles 110 is shown, with each reflector actively reflecting in the depicted configuration for illustrative purposes. the reflectors may comprise switchable elements to facilitate temporal selectivity. in one embodiment, the reflective surfaces may be intentionally and sequentially activated with frame-sequential input information 106, in which each reflective surface presents a narrow field of view sub-image which is tiled with other narrow field of view sub-images presented by the other reflective surfaces to form a composite wide field of view image. for example, referring to figs. 
5c , 5d , and 5e , surface 110 (e.g., at the middle of substrate 108), is switched "on" to a reflecting state, such that it reflects incoming image information 106 to present a relatively narrow field of view sub-image in the middle of a larger field of view, while the other potential reflective surfaces are in a transmissive state. referring to fig. 5c , incoming image information 106 coming from the right of the narrow field of view sub-image (as shown by the angle of incoming beams 106 relative to the substrate 108 at the input interface 112, and the resultant angle at which they exit the substrate 108) is reflected toward the eye 58 from reflective surface 110. fig. 5d illustrates the same reflector 110 as being active, with image information coming from the middle of the narrow field of view sub-image, as shown by the angle of the input information 106 at the input interface 112 and its angle as it exits substrate 108. fig. 5e illustrates the same reflector 110 active, with image information coming from the left of the field of view, as shown by the angle of the input information 106 at the input interface 112 and the resultant exit angle at the surface of the substrate 108. fig. 5f illustrates a configuration wherein the bottom reflector 110 is active, with image information 106 coming in from the far right of the overall field of view. for example, figs. 5c , 5d , and 5e can illustrate one frame representing the center of a frame-sequential tiled image, and fig. 5f can illustrate a second frame representing the far right of that tiled image. in one embodiment, the light carrying the image information 106 may strike the reflective surface 110 directly after entering substrate 108 at input interface 112, without first reflecting from the surfaces of substrate 108. 
in one embodiment, the light carrying the image information 106 may reflect from one or more surfaces of substrate 108 after entering at input interface 112 and before striking the reflective surface 110. for instance, substrate 108 may act as a planar waveguide, propagating the light carrying image information 106 by total internal reflection. light may also reflect from one or more surfaces of the substrate 108 from a partially reflective coating, a wavelength-selective coating, an angle-selective coating, and/or a polarization-selective coating. in one embodiment, the angled reflectors may be constructed using an electro-active material, such that upon application of a voltage and/or current to a particular reflector, the refractive index of the material comprising such reflector changes from an index substantially matched to the rest of the substrate 108. when the refractive index matches that of the rest of the substrate 108, the reflector is in a transmissive configuration. when the refractive index does not match that of the rest of the substrate 108, the reflector is in reflective configuration such that a reflection effect is created. example electro-active materials include lithium niobate and electro-active polymers. suitable substantially transparent electrodes for controlling a plurality of such reflectors may comprise materials such as indium tin oxide, which is utilized in liquid crystal displays. in one embodiment, the electro-active reflectors 110 may comprise liquid crystal, embedded in a substrate 108 host medium such as glass or plastic. in some variations, liquid crystal may be selected that changes refractive index as a function of an applied electric signal, so that more analog changes may be accomplished as opposed to binary (from one transmissive state to one reflective state). 
in an embodiment wherein 6 sub-images are to be presented to the eye frame-sequentially to form a large tiled image with an overall refresh rate of 60 frames per second, it is desirable to have an input display that can refresh at the rate of about 360 hz, with an electro-active reflector array that can keep up with such frequency. in one embodiment, lithium niobate may be utilized as an electro-active reflective material as opposed to liquid crystal: lithium niobate is utilized in the photonics industry for high-speed switches and fiber optic networks and has the capability to switch refractive index in response to an applied voltage at a very high frequency. this high frequency may be used to steer line-sequential or pixel-sequential sub-image information, especially if the input display is a scanned light display, such as a fiber-scanned display or scanning mirror-based display. in another embodiment, a variable switchable angled mirror configuration may comprise one or more high-speed mechanically repositionable reflective surfaces, such as a mems (micro-electromechanical system) device. a mems device may include what is known as a "digital mirror device", or "dmd", (often part of a "digital light processing", or "dlp" system, such as those available from texas instruments, inc.). in another electromechanical embodiment, a plurality of air-gapped (or in vacuum) reflective surfaces could be mechanically moved in and out of place at high frequency. in another electromechanical embodiment, a single reflective surface may be moved up and down and re-pitched at very high frequency. referring to fig. 5g , it is notable that the switchable variable angle reflector configurations described herein are capable of passing not only collimated or flat wavefront information to the retina 54 of the eye 58, but also curved wavefront 122 image information, as shown in the illustration of fig. 5g . 
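the refresh-rate requirement stated at the start of this passage is a simple product of the tile count and the overall frame rate. a trivial check (the helper name is invented for illustration):

```python
# If n sub-images are tiled frame-sequentially at an overall rate of f
# frames per second, the input display and reflector array must cycle at
# n * f hz.

def required_input_rate_hz(sub_images, overall_fps):
    return sub_images * overall_fps

print(required_input_rate_hz(6, 60))  # → 360
```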
this generally is not the case with other waveguide-based configurations, wherein total internal reflection of curved wavefront information causes undesirable complications, and therefore the inputs generally must be collimated. the ability to pass curved wavefront information facilitates the ability of configurations such as those shown in figs. 5b-5h to provide the retina 54 with input perceived as focused at various distances from the eye 58, not just optical infinity (which would be the interpretation of collimated light absent other cues). referring to fig. 5h , in another embodiment, an array of static partially reflective surfaces 116 (e.g., always in a reflective mode; in another embodiment, they may be electro-active, as above) may be embedded in a substrate 114 with a high-frequency gating layer 118 controlling outputs to the eye 58. the high-frequency gating layer 118 may only allow transmission through an aperture 120 which is controllably movable. in other words, everything may be selectively blocked except for transmissions through the aperture 120. the gating layer 118 may comprise a liquid crystal array, a lithium niobate array, an array of mems shutter elements, an array of dlp dmd elements, or an array of other mems devices configured to pass or transmit with relatively high-frequency switching and high transmissibility upon being switched to transmission mode. referring to figs. 6a-6b , other embodiments are depicted wherein arrayed optical elements may be combined with exit pupil expansion configurations to assist with the comfort of the virtual or augmented reality experience of the user. with a larger "exit pupil" for the optics configuration, the user's eye positioning relative to the display (which, as in figs. 
4a-4d , may be mounted on the user's head in an eyeglasses sort of configuration) is not as likely to disrupt his experience, because, due to the larger exit pupil of the system, there is a larger acceptable area wherein the user's anatomical pupil may be located to still receive the information from the display system as desired. in other words, with a larger exit pupil, the system is less likely to be sensitive to slight misalignments of the display relative to the user's anatomical pupil, and greater comfort for the user may be achieved through less geometric constraint on his or her relationship with the display/glasses. referring now to figs. 6a and 6b , an alternate approach is illustrated. as shown in fig. 6a , the display 140 on the left feeds a set of parallel rays into the substrate 124. in one embodiment, the display may be a scanned fiber display scanning a narrow beam of light back and forth at an angle as shown to project an image through the lens or other optical element 142, which may be utilized to collect the angularly-scanned light and convert it to a parallel bundle of rays. the rays may be reflected from a series of reflective surfaces (126, 128, 130, 132, 134, 136) which may partially reflect and partially transmit incoming light so that the light may be shared across the group of reflective surfaces (126, 128, 130, 132, 134, 136) approximately equally. with a small lens 138 placed at each exit point from the waveguide 124, the exiting light rays may be steered through a nodal point and scanned out toward the eye 58 to provide an array of exit pupils, or the functional equivalent of one large exit pupil that is usable by the user as he or she gazes toward the display system. 
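one way the partially reflective surfaces could share the incoming light "approximately equally", as described above, is to grade their reflectivities along the waveguide. the 1/(n-k+1) rule and the lossless assumption below are illustrative, not taken from the specification:

```python
# Sketch (assumption, not from the source): the k-th of n partial reflectors
# reflects 1/(n-k+1) of the light reaching it, so every exit point sends
# 1/n of the input toward the eye. Coating losses are ignored.

def exit_powers(n):
    remaining, outputs = 1.0, []
    for k in range(1, n + 1):
        r = 1.0 / (n - k + 1)          # reflectivity of the k-th surface
        outputs.append(remaining * r)  # fraction sent toward the eye
        remaining *= (1.0 - r)         # fraction transmitted onward
    return outputs

print([round(p, 3) for p in exit_powers(6)])  # six equal shares of ~0.167
```

note the last surface reflects everything that remains, so no light is wasted out the end of the waveguide under this idealization.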
for virtual reality configurations wherein it is desirable to also be able to see through the waveguide to the real world 144, a similar set of lenses 139 may be presented on the opposite side of the waveguide 124 to compensate for the lower set of lenses; thus creating the equivalent of a zero-magnification telescope. the reflective surfaces (126, 128, 130, 132, 134, 136) each may be aligned at approximately 45 degrees as shown, or may have different alignments (akin to the configurations of figs. 5b-5h , for example). the reflective surfaces (126, 128, 130, 132, 134, 136) may comprise wavelength-selective reflectors, band pass reflectors, half silvered mirrors, or other reflective configurations. the lenses (138, 139) shown are refractive lenses, but diffractive lens elements may also be utilized. referring to fig. 6b , a somewhat similar configuration is depicted wherein a plurality of curved reflective surfaces (148, 150, 152, 154, 156, 158) may be utilized to effectively combine the lens (element 138 of fig. 6a ) and reflector (elements 126, 128, 130, 132, 134, 136 of fig. 6a ) functionality of the embodiment of fig. 6a , thereby obviating the need for the two groups of lenses (element 138 of fig. 6a ). the curved reflective surfaces (148, 150, 152, 154, 156, 158) may be various curved configurations selected to both reflect and impart angular change, such as parabolic or elliptical curved surfaces. with a parabolic shape, a parallel set of incoming rays will be collected into a single output point; with an elliptical configuration, a set of rays diverging from a single point of origin are collected to a single output point. as with the configuration of fig. 6a , the curved reflective surfaces (148, 150, 152, 154, 156, 158) preferably partially reflect and partially transmit so that the incoming light is shared across the length of the waveguide 146. 
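the parabolic-mirror property invoked above (parallel rays collected to a single point) can be verified numerically: rays coming straight down onto the parabola y = x^2 / (4f) all reflect through the focus (0, f). the following check is a self-contained illustration, not code from the source:

```python
import math

# Numerical check of the parabolic-reflector property: a vertical ray
# hitting y = x^2 / (4f) at abscissa x (x != 0) reflects through (0, f).

def reflected_hits_focus(x, f=1.0):
    y = x * x / (4 * f)
    # Surface normal of y - x^2/(4f) = 0 is proportional to (-x/(2f), 1).
    nx, ny = -x / (2 * f), 1.0
    norm = math.hypot(nx, ny)
    nx, ny = nx / norm, ny / norm
    # Incoming ray travels straight down: d = (0, -1). Reflect: d - 2(d.n)n.
    dx, dy = 0.0, -1.0
    dot = dx * nx + dy * ny
    rx, ry = dx - 2 * dot * nx, dy - 2 * dot * ny
    # Follow the reflected ray to the axis x = 0 and report the crossing height.
    t = -x / rx
    return y + t * ry   # equals f for every x

print(round(reflected_hits_focus(0.5), 6), round(reflected_hits_focus(2.0), 6))
```

an elliptical surface behaves analogously with its two foci, which is the point-source-to-point property the text mentions.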
the curved reflective surfaces (148, 150, 152, 154, 156, 158) may comprise wavelength-selective notch reflectors, half silvered mirrors, or other reflective configurations. in another embodiment, the curved reflective surfaces (148, 150, 152, 154, 156, 158) may be replaced with diffractive reflectors that reflect and also deflect. referring to fig. 7a , perceptions of z-axis difference (e.g., distance straight out from the eye along the optical axis) may be facilitated by using a waveguide in conjunction with a variable focus optical element configuration. as shown in fig. 7a , image information from a display 160 may be collimated and injected into a waveguide 164 and distributed in a large exit pupil manner using, e.g., configurations such as those described in reference to figs. 6a and 6b , or other substrate-guided optics methods known to those skilled in the art - and then variable focus optical element capability may be utilized to change the focus of the wavefront of light emerging from the waveguide and provide the eye with the perception that the light coming from the waveguide 164 is from a particular focal distance. in other words, since the incoming light has been collimated to avoid challenges in total internal reflection waveguide configurations, it will exit in collimated fashion, requiring a viewer's eye to accommodate to the far point to bring it into focus on the retina, and naturally be interpreted as being from optical infinity - unless some other intervention causes the light to be refocused and perceived as from a different viewing distance; one suitable such intervention is a variable focus lens. in the embodiment of fig. 7a , collimated image information from a display 160 is injected into a piece of glass 162 or other material at an angle such that it totally internally reflects and is passed into the adjacent waveguide 164. the waveguide 164 may be configured akin to the waveguides of figs. 
6a or 6b (124, 146, respectively) so that the collimated light from the display is distributed to exit somewhat uniformly across the distribution of reflectors or diffractive features along the length of the waveguide. upon exiting toward the eye 58, in the depicted configuration the exiting light is passed through a variable focus lens element 166 wherein, depending upon the controlled focus of the variable focus lens element 166, the light exiting the variable focus lens element 166 and entering the eye 58 will have various levels of focus (a collimated flat wavefront to represent optical infinity, more and more beam divergence / wavefront curvature to represent closer viewing distance relative to the eye 58). to compensate for the variable focus lens element 166 between the eye 58 and the waveguide 164, another similar variable focus lens element 167 is placed on the opposite side of the waveguide 164 to cancel out the optical effects of the lenses 166 for light coming from the world 144 for augmented reality (e.g., as described above, one lens compensates for the other, producing the functional equivalent of a zero-magnification telescope). the variable focus lens element 166 may be a refractive element, such as a liquid crystal lens, an electro-active lens, a conventional refractive lens with moving elements, a mechanical-deformation-based lens (such as a fluid-filled membrane lens, or a lens akin to the human crystalline lens wherein a flexible element is flexed and relaxed by actuators), an electrowetting lens, or a plurality of fluids with different refractive indices. 
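the bookkeeping behind the variable focus element and its compensating partner can be sketched in diopters (power = 1 / focal distance in meters): the eye-side element adds the curvature for the desired viewing distance, and the world-side element carries the opposite power so real-world light passes through the pair unchanged. the helper names and numbers below are illustrative assumptions:

```python
# Sketch of the focus/compensation bookkeeping in diopters.

def lens_power_for_distance(viewing_distance_m):
    """Power (diopters) that makes collimated light appear to come from the
    given distance; float('inf') means optical infinity (leave collimated)."""
    return 0.0 if viewing_distance_m == float("inf") else 1.0 / viewing_distance_m

def compensating_power(eye_side_power):
    # The world-side element cancels the eye-side element, giving the
    # zero-magnification-telescope behavior for real-world light.
    return -eye_side_power

p = lens_power_for_distance(0.5)   # virtual content placed at 50 cm
print(p, compensating_power(p))    # → 2.0 -2.0 (net 0 D for world light)
```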
the variable focus lens element 166 may also comprise a switchable diffractive optical element (such as one featuring a polymer dispersed liquid crystal approach wherein a host medium, such as a polymeric material, has microdroplets of liquid crystal dispersed within the material; when a voltage is applied, the molecules reorient so that their refractive indices no longer match that of the host medium, thereby creating a high-frequency switchable diffraction pattern). one embodiment includes a host medium in which microdroplets of a kerr effect-based electro-active material, such as lithium niobate, are dispersed within the host medium, enabling refocusing of image information on a pixel-by-pixel or line-by-line basis, when coupled with a scanning light display, such as a fiber-scanned display or scanning-mirror-based display. in a variable focus lens element 166 configuration wherein liquid crystal, lithium niobate, or other technology is utilized to present a pattern, the pattern spacing may be modulated to not only change the focal power of the variable focus lens element 166, but also to change the focal power of the overall optical system - for a zoom lens type of functionality. in one embodiment, the lenses 166 could be telecentric, in that focus of the display imagery can be altered while keeping magnification constant - in the same way that a photography zoom lens may be configured to decouple focus from zoom position. in another embodiment, the lenses 166 may be nontelecentric, so that focus changes will also slave zoom changes. with such a configuration, such magnification changes may be compensated for in software with dynamic scaling of the output from the graphics system in sync with focus changes.
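a minimal numeric sketch (the lens power is an assumed value, not from the source) of the compensating-lens arrangement described above: element 166 sets the perceived focal distance for display light exiting the waveguide, while element 167 is driven to the opposite power so that world light is unaffected (the functional equivalent of a zero-magnification telescope).

```python
# Sketch of the variable focus lens pair (166, 167) around the waveguide 164.
# Powers are in diopters; negative power imparts beam divergence, which the
# eye interprets as light from a finite viewing distance.

def perceived_distance_m(lens_power_diopters: float) -> float:
    """Viewing distance implied by the divergence element 166 imparts."""
    if lens_power_diopters == 0.0:
        return float("inf")              # collimated -> optical infinity
    return 1.0 / abs(lens_power_diopters)

def compensating_power(lens_166_power: float) -> float:
    """Power for element 167 that cancels 166 for light from the world 144."""
    return -lens_166_power

d = perceived_distance_m(-2.0)                 # -2 D of divergence -> 0.5 m
net_world = -2.0 + compensating_power(-2.0)    # 0.0 -> world light unchanged
```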
referring back to the projector or other video display unit 160 and the issue of how to feed images into the optical display system, in a "frame sequential" configuration, a stack of sequential two-dimensional images may be fed to the display sequentially to produce three-dimensional perception over time, in a manner similar to a computed tomography system that uses stacked image slices to represent a three-dimensional structure. a series of two-dimensional image slices may be presented to the eye, each at a different focal distance to the eye, and the eye/brain would integrate such a stack into a perception of a coherent three-dimensional volume. depending upon the display type, line-by-line, or even pixel-by-pixel sequencing may be conducted to produce the perception of three-dimensional viewing. for example, with a scanned light display (such as a scanning fiber display or scanning mirror display), the display presents one line or one pixel at a time to the waveguide 164 in a sequential fashion. if the variable focus lens element 166 is able to keep up with the high-frequency of pixel-by-pixel or line-by-line presentation, then each line or pixel may be presented and dynamically focused through the variable focus lens element 166 to be perceived at a different focal distance from the eye 58. pixel-by-pixel focus modulation generally requires an extremely fast / high-frequency variable focus lens element 166. for example, a 1080p resolution display with an overall frame rate of 60 frames per second typically presents around 125 million pixels per second. such a configuration also may be constructed using a solid state switchable lens, such as one using an electro-active material, e.g., lithium niobate or an electro-active polymer. in addition to its compatibility with the system illustrated in fig.
7a , a frame sequential multi-focal display driving approach may be used in conjunction with a number of the display system and optics embodiments described in this disclosure. referring to fig. 7b , an electro-active layer 172 (such as one comprising liquid crystal or lithium niobate) may be surrounded by functional electrodes (170, 174) (which may be made of indium tin oxide) and a waveguide 168 with a conventional transmissive substrate 176. the waveguide may be made from glass or plastic with known total internal reflection characteristics and an index of refraction that matches the on or off state of the electro-active layer 172, in one or more embodiments. the electro-active layer 172 may be controlled such that the paths of entering beams may be dynamically altered to essentially create a time-varying light field. referring to fig. 8a , a stacked waveguide assembly 178 may be utilized to provide three-dimensional perception to the eye/brain by having a plurality of waveguides (182, 184, 186, 188, 190) and a plurality of weak lenses (198, 196, 194, 192) configured together to send image information to the eye with various levels of wavefront curvature for each waveguide level indicative of focal distance to be perceived for that waveguide level. a plurality of displays (200, 202, 204, 206, 208), or in another embodiment a single multiplexed display, may be utilized to inject collimated image information into the waveguides (182, 184, 186, 188, 190), each of which may be configured, as described above, to distribute incoming light substantially equally across the length of each waveguide, for exit down toward the eye. the waveguide 182 nearest the eye is configured to deliver collimated light, as injected into such waveguide 182, to the eye, which may be representative of the optical infinity focal plane. 
another waveguide 184 is configured to send out collimated light which passes through the first weak lens (192; e.g., a weak negative lens) and is delivered to the user's eye 58. the first weak lens 192 may be configured to create a slight convex wavefront curvature so that the eye/brain interprets light coming from the waveguide 184 as coming from a first focal plane closer inward toward the person from optical infinity. similarly, the next waveguide 186 passes its output light through both the first 192 and second 194 lenses before reaching the eye 58. the combined optical power of the first 192 and second 194 lenses may be configured to create another incremental amount of wavefront divergence so that the eye/brain interprets light coming from the waveguide 186 as coming from a second focal plane even closer inward toward the person from optical infinity than was light from the waveguide 184. the other waveguide layers (188, 190) and weak lenses (196, 198) are similarly configured, with the highest waveguide 190 in the stack sending its output through all of the weak lenses between it and the eye for an aggregate focal power representative of the closest focal plane to the person. to compensate for the stack of lenses (198, 196, 194, 192) when viewing/interpreting light coming from the world 144 on the other side of the stacked waveguide assembly 178, a compensating lens layer (180) is disposed at the top of the stack to compensate for the aggregate power of the lens stack (198, 196, 194, 192) below. such a configuration provides as many perceived focal planes as there are available waveguide/lens pairings, again with a relatively large exit pupil configuration as described above. both the reflective aspects of the waveguides and the focusing aspects of the lenses may be static (e.g., not dynamic or electro-active).
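the lens accumulation described above can be sketched numerically. in this hedged illustration (the individual weak lens powers are assumed values, not from the source), light from waveguide k passes through the first k weak negative lenses, and the compensating layer 180 cancels the full stack for world light.

```python
# Sketch of the stacked waveguide assembly 178 of fig. 8a: each successive
# waveguide's output accumulates one more weak (negative) lens power, giving
# one perceived focal plane per waveguide/lens pairing.

def perceived_plane_powers(weak_lens_powers_diopters):
    """Aggregate optical power (diopters) seen by light from each waveguide.

    Index 0 is the waveguide nearest the eye (no lens -> optical infinity);
    index k accumulates the first k weak lens powers.
    """
    planes, total = [0.0], 0.0
    for p in weak_lens_powers_diopters:
        total += p
        planes.append(total)
    return planes

# e.g. four weak lenses of -0.5 diopter each (hypothetical values)
planes = perceived_plane_powers([-0.5, -0.5, -0.5, -0.5])
compensating = -planes[-1]   # +2.0 D layer 180 cancels the stack for world light
# perceived focal distance in meters is 1/|power| (infinity for the 0.0 plane)
```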
in an alternative embodiment they may be dynamic using electro-active features as described above, enabling a small number of waveguides to be multiplexed in a time sequential fashion to produce a larger number of effective focal planes. referring to figs. 8b-8n , various aspects of diffraction configurations for focusing and/or redirecting collimated beams are depicted. other aspects of diffraction systems for such purposes are disclosed in u.s. patent application serial no. 14/331,218 . referring to fig. 8b , it should be appreciated that passing a collimated beam through a linear diffraction pattern 210, such as a bragg grating, will deflect, or "steer", the beam. it should also be appreciated that passing a collimated beam through a radially symmetric diffraction pattern 212, or "fresnel zone plate", will change the focal point of the beam. fig. 8c illustrates the deflection effect of passing a collimated beam through a linear diffraction pattern 210. fig. 8d illustrates the focusing effect of passing a collimated beam through a radially symmetric diffraction pattern 212. referring to figs. 8e and 8f , a combination diffraction pattern that has both linear and radial elements 214 produces both deflection and focusing of a collimated input beam. these deflection and focusing effects can be produced in a reflective as well as transmissive mode. these principles may be applied with waveguide configurations to allow for additional optical system control, as shown in figs. 8g-8n , for example. as shown in figs. 8g-8n , a diffraction pattern 220, or "diffractive optical element" (or "doe") has been embedded within a planar waveguide 216 such that as a collimated beam is totally internally reflected along the planar waveguide 216, it intersects the diffraction pattern 220 at a multiplicity of locations. 
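the two diffraction effects of figs. 8c and 8d can be quantified with standard formulas: a linear pattern deflects a normally incident collimated beam per the grating equation, and a radially symmetric (fresnel zone plate) pattern focuses it with primary focal length f = r1^2 / lambda. the numeric values below are assumptions for illustration, not from the source.

```python
import math

# Hedged numeric illustration of linear vs. radially symmetric diffraction
# patterns (figs. 8c-8d): deflection from the grating equation, focusing
# from the fresnel zone plate relation.

def deflection_angle_deg(wavelength_nm: float, pitch_nm: float, order: int = 1) -> float:
    """First-order deflection of a normally incident collimated beam."""
    return math.degrees(math.asin(order * wavelength_nm / pitch_nm))

def zone_plate_focal_length_mm(wavelength_nm: float, r1_um: float) -> float:
    """Primary focal length f = r1^2 / lambda of a fresnel zone plate."""
    wavelength_mm = wavelength_nm * 1e-6
    r1_mm = r1_um * 1e-3
    return r1_mm ** 2 / wavelength_mm

theta = deflection_angle_deg(532, 1500)     # green light, 1.5 um grating pitch
f = zone_plate_focal_length_mm(532, 100)    # 100 um first-zone radius
```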
preferably, the doe 220 has a relatively low diffraction efficiency so that only a portion of the light of the beam is deflected away toward the eye 58 with each intersection of the doe 220 while the rest continues to move through the planar waveguide 216 via total internal reflection. the light carrying the image information is thus divided into a number of related light beams that exit the waveguide at a multiplicity of locations, and the result is a fairly uniform pattern of exit emission toward the eye 58 for this particular collimated beam bouncing around within the planar waveguide 216, as shown in fig. 8h . the exit beams toward the eye 58 are shown in fig. 8h as substantially parallel, because, in this case, the doe 220 has only a linear diffraction pattern. as shown in the comparison between figs. 8l , 8m , and 8n , changes to this linear diffraction pattern pitch may be utilized to controllably deflect the exiting parallel beams, thereby producing a scanning or tiling functionality. referring to fig. 8i , with changes in the radially symmetric diffraction pattern component of the embedded doe 220, the exit beam pattern is more divergent, which would require the eye to accommodate to a closer distance to bring it into focus on the retina and would be interpreted by the brain as light from a viewing distance closer to the eye than optical infinity. referring to fig. 8j , with the addition of another waveguide 218 into which the beam may be injected (by a projector or display, for example), a doe 221 embedded in this other waveguide 218, such as a linear diffraction pattern, may function to spread the light across the entire larger planar waveguide 216. this may provide the eye 58 with a very large incoming field of incoming light that exits from the larger planar waveguide 216, e.g., a large eye box, in accordance with the particular doe configurations at work.
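a small sketch (the efficiency value is assumed) of why a low diffraction efficiency yields fairly uniform exit emission: at each intersection with the doe, a fraction eta of the remaining light exits toward the eye while the rest continues by total internal reflection, so successive exit beams decay only slowly.

```python
# Per-bounce exit fractions for a low-efficiency DOE embedded in a waveguide
# (fig. 8g-8h): each intersection outcouples eta of whatever light remains.

def exit_fractions(eta: float, bounces: int):
    """Fraction of the injected beam leaving at each DOE intersection."""
    remaining, out = 1.0, []
    for _ in range(bounces):
        out.append(remaining * eta)
        remaining *= 1.0 - eta
    return out

fractions = exit_fractions(0.05, 20)   # 5% efficiency, 20 intersections
# successive exit beams decay slowly (0.0500, 0.0475, 0.0451, ...), so the
# emission pattern across the waveguide stays fairly uniform
```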
the does (220, 221) are depicted bisecting the associated waveguides (216, 218), but this need not be the case. in one or more embodiments, they may be placed closer to, or upon, either side of either of the waveguides (216, 218) to have the same functionality. thus, as shown in fig. 8k , with the injection of a single collimated beam, an entire field of cloned collimated beams may be directed toward the eye 58. in addition, with a combined linear diffraction pattern / radially symmetric diffraction pattern scenario such as that depicted in figs. 8f (214) and 8i (220), a beam distribution waveguide optic with z-axis focusing capability is presented (providing functionality such as exit pupil functional expansion; with a configuration such as that of fig. 8k , the exit pupil can be as large as the optical element itself, which can be a very significant advantage for user comfort and ergonomics), in which both the divergence angle of the cloned beams and the wavefront curvature of each beam represent light coming from a point closer than optical infinity. in one embodiment, one or more does are switchable between "on" states in which they actively diffract, and "off" states in which they do not significantly diffract. for instance, a switchable doe may comprise a layer of polymer dispersed liquid crystal, in which microdroplets comprise a diffraction pattern in a host medium, and the refractive index of the microdroplets can be switched to substantially match the refractive index of the host material (in which case the pattern does not appreciably diffract incident light). or, the microdroplets can be switched to an index that does not match that of the host medium (in which case the pattern actively diffracts incident light). further, with dynamic changes to the diffraction terms, such as the linear diffraction pitch term as in figs. 8l-8n , a beam scanning or tiling functionality may be achieved.
as noted above, it may be desirable to have a relatively low diffraction grating efficiency in each of the does (220, 221) because it facilitates distribution of the light. also, because light coming through the waveguides that is desirably transmitted (for example, light coming from the world 144 toward the eye 58 in an augmented reality configuration) is less affected when the diffraction efficiency of the doe that it crosses 220 is lower, a better view of the real world through such a configuration may be achieved. configurations such as those illustrated in fig. 8k preferably are driven with injection of image information in a time sequential approach, with frame sequential driving being the most straightforward to implement. for example, an image of the sky at optical infinity may be injected at time1, with the diffraction grating retaining collimation of the light. then an image of a closer tree branch may be injected at time2 while a doe controllably imparts a focal change, say one diopter or 1 meter away, to provide the eye/brain with the perception that the branch light information is coming from the closer focal range. this kind of paradigm may be repeated in rapid time sequential fashion such that the eye/brain perceives the input to be all part of the same image. while this is simply a two focal plane example, it should be appreciated that preferably the system will be configured to have more focal planes to provide a smoother transition between objects and their focal distances. this kind of configuration generally assumes that the doe is switched at a relatively low speed (e.g., in sync with the frame-rate of the display that is injecting the images - in the range of tens to hundreds of cycles/second).
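the frame sequential paradigm above can be sketched as a simple scheduling loop. this is a structural illustration only (the image labels, data types, and driver structure are assumptions, not an actual driver api): each injected image is paired with a doe focus state and the stack is cycled in sync with the display frame rate.

```python
from typing import NamedTuple

# Minimal frame-sequential driving sketch for the fig. 8k paradigm: the sky
# is injected collimated (0 D -> optical infinity), then the tree branch with
# a 1 D focal change (~1 m), repeating rapidly so the eye/brain fuses them.

class FocalFrame(NamedTuple):
    image: str             # placeholder for injected image data
    focus_diopters: float  # DOE-imparted focal change (0.0 = collimated)

def drive_sequence(frames, cycles: int):
    """Yield (time_slot, frame) pairs, repeating the stack each cycle."""
    slot = 0
    for _ in range(cycles):
        for frame in frames:
            yield slot, frame
            slot += 1

stack = [FocalFrame("sky", 0.0), FocalFrame("tree branch", 1.0)]
schedule = list(drive_sequence(stack, cycles=2))
```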
the opposite extreme may be a configuration wherein doe elements can shift focus at tens to hundreds of mhz or greater, which facilitates switching of the focus state of the doe elements on a pixel-by-pixel basis as the pixels are scanned into the eye 58 using a scanned light display type of approach. this is desirable because it means that the overall display frame-rate can be kept quite low; just high enough to make sure that "flicker" is not a problem (in the range of about 60-120 frames/sec). in between these ranges, if the does can be switched at khz rates, then on a line-by-line basis the focus on each scan line may be adjusted, which may afford the user a visible benefit in terms of temporal artifacts during an eye motion relative to the display, for example. for instance, the different focal planes in a scene may, in this manner, be interleaved, to minimize visible artifacts in response to a head motion (as is discussed in greater detail later in this disclosure). a line-by-line focus modulator may be operatively coupled to a line scan display, such as a grating light valve display, in which a linear array of pixels is swept to form an image; and may be operatively coupled to scanned light displays, such as fiber-scanned displays and mirror-scanned light displays. a stacked configuration, similar to those of fig. 8a , may use dynamic does (rather than the static waveguides and lenses of the embodiment of fig. 8a ) to provide multi-planar focusing simultaneously. for example, with three simultaneous focal planes, a primary focus plane (based upon measured eye accommodation, for example) could be presented to the user, and a + margin and - margin (e.g., one focal plane closer, one farther out) could be utilized to provide a large focal range in which the user can accommodate before the planes need be updated.
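the "primary plane plus margins" scheme above can be sketched as a small selection routine. this is a hypothetical illustration (the available plane powers and the function name are assumptions): the plane nearest the measured accommodation becomes primary, with one plane closer and one farther held ready for a fast switchover.

```python
# Pick (minus_margin, primary, plus_margin) focal planes around a measured
# accommodation value, all in diopters (0.0 = optical infinity).

def pick_planes(accommodation_diopters: float, available_planes):
    """Return the margin/primary/margin triplet from a sorted plane list."""
    planes = sorted(available_planes)
    primary = min(planes, key=lambda p: abs(p - accommodation_diopters))
    i = planes.index(primary)
    minus = planes[max(i - 1, 0)]           # one plane farther out
    plus = planes[min(i + 1, len(planes) - 1)]  # one plane closer
    return minus, primary, plus

# available focal planes in diopters (assumed): infinity, 2 m, 1 m, 0.5 m
triplet = pick_planes(1.1, [0.0, 0.5, 1.0, 2.0])   # eye accommodated near 1 m
```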
this increased focal range can provide a temporal advantage if the user switches to a closer or farther focus (e.g., as determined by accommodation measurement). then the new plane of focus may be made to be the middle depth of focus, with the + and - margins again ready for a fast switchover to either one while the system catches up. referring to fig. 8o , a stack 222 of planar waveguides (244, 246, 248, 250, 252) is shown, each having a reflector (254, 256, 258, 260, 262) at the end and being configured such that collimated image information injected in one end by a display (224, 226, 228, 230, 232) bounces by total internal reflection down to the reflector, at which point some or all of the light is reflected out toward an eye or other target. each of the reflectors may have slightly different angles so that they all reflect exiting light toward a common destination such as a pupil. such a configuration is somewhat similar to that of fig. 5b , with the exception that each different angled reflector in the embodiment of fig. 8o has its own waveguide for less interference when projected light is travelling to the targeted reflector. lenses (234, 236, 238, 240, 242) may be interposed between the displays and waveguides for beam steering and/or focusing. fig. 8p illustrates a geometrically staggered version wherein reflectors (276, 278, 280, 282, 284) are positioned at staggered lengths in the waveguides (266, 268, 270, 272, 274) such that exiting beams may be relatively easily aligned with objects such as an anatomical pupil. since a distance between the stack (264) and the eye is known (such as 28mm between the cornea of the eye and an eyeglasses lens, a typical comfortable geometry), the geometries of the reflectors (276, 278, 280, 282, 284) and waveguides (266, 268, 270, 272, 274) may be set up to fill the eye pupil (typically about 8mm across or less) with exiting light. 
by directing light to an eye box larger than the diameter of the eye pupil, the viewer is free to make any number of eye movements while retaining the ability to see the displayed imagery. referring back to the discussion related to figs. 5a and 5b about field of view expansion and reflector size, an expanded field of view is presented by the configuration of fig. 8p as well, and it does not involve the complexity of the switchable reflective elements of the embodiment of fig. 5b . fig. 8q illustrates a version 286 wherein many reflectors 298 form, in the aggregate, a relatively continuous curved reflection surface, or discrete flat facets that are oriented to align with an overall curve. the curve could be a parabolic or elliptical curve and is shown cutting across a plurality of waveguides (288, 290, 292, 294, 296) to minimize any crosstalk issues, although it also could be utilized with a monolithic waveguide configuration. in one implementation, a high-frame-rate and lower persistence display may be combined with a lower-frame-rate and higher persistence display and a variable focus element to comprise a relatively high-frequency frame sequential volumetric display. in one embodiment, the high-frame-rate display has a lower bit depth and the lower-frame-rate display has a higher bit depth, and the two are combined to comprise an effective high-frame-rate and high bit depth display that is well suited to presenting image slices in a frame sequential fashion. with such an approach, a three-dimensional volume that is desirably represented is functionally divided into a series of two-dimensional slices. each of those two-dimensional slices is projected to the eye frame sequentially, and in sync with this presentation, the focus of a variable focus element is changed.
in one embodiment, to provide enough frame rate to support such a configuration, two display elements may be integrated: a full-color, high-resolution liquid crystal display ("lcd"; a backlighted ferroelectric panel display also may be utilized in another embodiment; in a further embodiment a scanning fiber display may be utilized) operating at 60 frames per second, and aspects of a higher-frequency dlp system. instead of illuminating the back of the lcd panel in a conventional manner (e.g., with a full size fluorescent lamp or led array), the conventional lighting configuration may be removed to accommodate the dlp projector to project a mask pattern on the back of the lcd. in one embodiment, the mask pattern may be binary (e.g., the dlp is either illuminated or not-illuminated). in another embodiment described below, the dlp may be utilized to project a grayscale mask image. it should be appreciated that dlp projection systems can be operated at very high frame rates. in one embodiment, for 6 depth planes at 60 frames per second, a dlp projection system can be operated against the back of the lcd display at 360 frames/second. then the dlp projector may be utilized to selectively illuminate portions of the lcd panel in sync with a high-frequency variable focus element (such as a deformable membrane mirror) that is disposed between the viewing side of the lcd panel and the eye of the user, the variable focus element (vfe) configured to vary the global display focus on a frame by frame basis at 360 frames/second. in one embodiment, the vfe is positioned to be optically conjugate to the exit pupil, in order to allow adjustments of focus without simultaneously affecting image magnification or "zoom." in another embodiment, the vfe is not conjugate to the exit pupil, such that image magnification changes accompany focus adjustments.
in such embodiments, software may be used to compensate for optical magnification changes and any distortions by pre-scaling or warping the images to be presented. operationally, it is useful to consider an example in which a three-dimensional scene is to be presented to a user wherein the sky in the background is to be at a viewing distance of optical infinity, and a branch coupled to a tree extends from a tree trunk so that the tip of the branch is closer to the user than is the proximal portion of the branch that joins the tree trunk. the tree may be at a location closer than optical infinity, and the branch may be even closer as compared to the tree trunk. in one embodiment, for a given global frame, the system may be configured to present on an lcd a full-color, all in-focus image of the tree branch in front of the sky. then at subframe1, within the global frame, the dlp projector in a binary masking configuration (e.g., illumination or absence of illumination) may be used to only illuminate the portion of the lcd that represents the cloudy sky while functionally black-masking (e.g., failing to illuminate) the portion of the lcd that represents the tree branch and other elements that are not to be perceived at the same focal distance as the sky, and the vfe (such as a deformable membrane mirror) may be utilized to position the focal plane at optical infinity such that the eye sees a sub-image at subframe1 as being clouds that are infinitely far away. then at subframe2, the vfe may be switched to focus on a point about 1 meter away from the user's eyes (e.g., 1 meter for the branch location). the pattern of illumination from the dlp can be switched so that the system only illuminates the portion of the lcd that represents the tree branch while functionally black-masking (e.g., failing to illuminate) the portion of the lcd that represents the sky and other elements that are not to be perceived at the same focal distance as the tree branch.
thus, the eye gets a quick flash of cloud at optical infinity followed by a quick flash of tree at 1 meter, and the sequence is integrated by the eye/brain to form a three-dimensional perception. the branch may be positioned diagonally relative to the viewer, such that it extends through a range of viewing distances, e.g., it may join with the trunk at around 2 meters viewing distance while the tips of the branch are at the closer position of 1 meter. in this case, the display system can divide the 3-d volume of the tree branch into multiple slices, rather than a single slice at 1 meter. for instance, one focus slice may be used to represent the sky (using the dlp to mask all areas of the tree during presentation of this slice), while the tree branch is divided across 5 focus slices (using the dlp to mask the sky and all portions of the tree except one, for each part of the tree branch to be presented). preferably, the depth slices are positioned having a spacing equal to or smaller than the depth of focus of the eye, such that the viewer will be unlikely to notice the transition between slices, and instead perceive a smooth and continuous flow of the branch through the focus range. in another embodiment, rather than utilizing the dlp in a binary (illumination or darkfield only) mode, it may be utilized to project a grayscale (for example, 256 shades of grayscale) mask onto the back of the lcd panel to enhance three-dimensional perception. the grayscale shades may be utilized to impart to the eye/brain a perception that something resides in between adjacent depth or focal planes. referring back to the above scenario, if the leading edge of the branch closest to the user is to be projected on focalplane1, then at subframe1, that portion on the lcd may be lit up with full intensity white from the dlp system with the vfe at focalplane1. then at subframe2, when the vfe at focalplane2 is right behind the part that was lit up, there will be no illumination. 
these are similar steps to the binary dlp masking configuration above. however, if there is a portion of the branch that is to be perceived at a position between focalplane1 and focalplane2, e.g., halfway, grayscale masking may be utilized. the dlp can project an illumination mask to that portion during both subframe1 and subframe2, but at half-illumination (such as at level 128 out of 256 grayscale) for each subframe. this provides the perception of a blending of depth of focus layers, with the perceived focal distance being proportional to the illuminance ratio between subframe1 and subframe2. for instance, for a portion of the tree branch that should lie 3/4ths of the way between focalplane1 and focalplane2, an about 25% intensity grayscale mask can be used to illuminate that portion of the lcd at subframe1 and an about 75% grayscale mask can be used to illuminate the same portion of the lcd at subframe2. in one embodiment, the bit depths of both the low-frame-rate display and the high-frame-rate display can be combined for image modulation, to create a high dynamic range display. the high dynamic range driving may be conducted in tandem with the focus plane addressing function described above, to comprise a high dynamic range multi-focal 3-d display. in another more efficient embodiment, only a certain portion of the display (e.g., lcd) output may be mask-illuminated by the projector (e.g., dlp, dmd, etc.) and may be variably focused en route to the user's eye. for example, the middle portion of the display may be mask illuminated, with the periphery of the display providing uniform accommodation cues to the user (e.g., the periphery could be uniformly illuminated by the dlp dmd, while a central portion is actively masked and variably focused en route to the eye). in the above described embodiment, a refresh rate of about 360 hz allows for 6 depth planes at about 60 frames/second each.
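the grayscale blending rule described above reduces to a simple linear weighting: a feature lying a fraction f of the way from focalplane1 toward focalplane2 is lit at (1 - f) intensity in subframe1 and f in subframe2. a sketch (8-bit grayscale levels assumed, function names hypothetical):

```python
# Grayscale mask levels for blending a feature between two adjacent focal
# planes; the perceived focal distance is proportional to the illuminance
# ratio between the two subframes.

def blend_mask_levels(fraction_toward_plane2: float, levels: int = 256):
    """Return (subframe1_level, subframe2_level) as 8-bit grayscale values."""
    f = min(max(fraction_toward_plane2, 0.0), 1.0)
    return round((1.0 - f) * (levels - 1)), round(f * (levels - 1))

half = blend_mask_levels(0.5)        # halfway: ~level 128 in both subframes
three_q = blend_mask_levels(0.75)    # 3/4 of the way: ~25% / ~75% intensity
```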
in another embodiment, even higher refresh rates may be achieved by increasing the operating frequency of the dlp. a standard dlp configuration uses a mems device and an array of micro-mirrors that toggle between a mode of reflecting light toward the display or user and a mode of reflecting light away from the display or user, such as into a light trap - thus dlps are inherently binary. dlps typically create grayscale images using a pulse width modulation schema wherein the mirror is left in the "on" state for a variable amount of time for a variable duty cycle in order to create a brighter pixel, or pixel of interim brightness. thus, to create grayscale images at moderate frame rate, dlps run at a much higher binary rate. in the above described configurations, such a setup works well for creating grayscale masking. however, if the dlp drive scheme is adapted such that it is flashing subimages in a binary pattern, then the frame rate may be increased significantly - to thousands of frames per second - which allows for hundreds to thousands of depth planes being refreshed at 60 frames/second, which may be utilized to obviate the between-depth-plane grayscale interpolating described above. a typical pulse width modulation scheme for a texas instruments dlp system has an 8-bit command signal (first bit is the first long pulse of the mirror; second bit is a pulse that is half as long as the first; third bit is half as long again; and so on) - such that the configuration can create 2^8 = 256 different illumination levels. in one embodiment, the backlighting from the dlp may have its intensity varied in sync with the different pulses of the dmd to equalize the brightness of the subimages that are created. this may be a practical approach by which to use existing dmd drive electronics to produce significantly higher frame rates.
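the arithmetic behind the two driving modes above can be made explicit (the binary rate below is an assumed example, not from the source): 8-bit pwm yields 2^8 illumination levels, whereas flashing pure binary subimages converts the mirror's binary rate directly into depth planes per volume.

```python
# PWM bit depth vs. binary-subimage depth-plane count for a DLP/DMD.

BITS = 8
levels = 2 ** BITS                       # 256 illumination levels via PWM

def binary_depth_planes(binary_rate_hz: float, volume_rate_hz: float) -> int:
    """Depth planes refreshable per volume when each subimage is binary."""
    return int(binary_rate_hz // volume_rate_hz)

# e.g. an assumed 24 kHz binary mirror rate with 60 volumes/second
planes = binary_depth_planes(24_000, 60)
```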
in another embodiment, direct control changes to the dmd drive electronics and software may be utilized to have the mirrors always have an equal on-time, instead of the conventional variable on-time configuration, which would facilitate higher frame rates. in another embodiment, the dmd drive electronics may be configured to present low bit depth images at a frame rate above that of high bit depth images but lower than the binary frame rate, enabling some grayscale blending between focus planes while moderately increasing the number of focus planes. in another embodiment, when limited to a finite number of depth planes, such as 6 in the example above, it may be desirable to functionally move these 6 depth planes around to be maximally useful in the scene that is being presented to the user. for example, if a user is standing in a room and a virtual monster is to be placed into his augmented reality view, the virtual monster being about 2 feet deep in the z axis straight away from the user's eyes, it may be more useful to cluster all 6 depth planes around the center of the monster's current location (and dynamically move them with him as he moves relative to the user). this may provide more rich accommodation cues to the user, with all six depth planes in the direct region of the monster (for example, 3 in front of the center of the monster, 3 in back of the center of the monster). such allocation of depth planes is content dependent. for example, if in the scene above the same monster is to be presented in the same room, but a virtual window frame element, with a virtual view to optical infinity out of the virtual window frame, is also to be presented to the user, it will be useful to spend at least one depth plane on optical infinity, one on the depth of the wall that is to house the virtual window frame, and then perhaps the remaining four depth planes on the monster in the room.
if the content causes the virtual window to disappear, then the two depth planes may be dynamically reallocated to the region around the monster. thus, content-based dynamic allocation of focal plane resources may provide the richest experience to the user given computing and presentation resources. in another embodiment, phase delays in a multicore fiber or an array of single-core fibers may be utilized to create variable focus light wavefronts. referring to fig. 9a , a multicore fiber (300) may comprise the aggregation of multiple individual fibers (302). fig. 9b shows a close-up view of a multicore assembly, which emits light from each core in the form of a spherical wavefront (304). if the cores are transmitting coherent light, e.g., from a shared laser light source, these small spherical wavefronts ultimately constructively and destructively interfere with each other, and if they are emitted from the multicore fiber in phase, they will develop an approximately planar wavefront (306) in the aggregate, as shown. however, if phase delays are induced between the cores (using a conventional phase modulator such as one using lithium niobate, for example, to slow the path of some cores relative to others), then a curved or spherical wavefront may be created in the aggregate, to represent at the eyes/brain an object coming from a point closer than optical infinity. this may be another approach that may be used to present multiple focal planes without the use of a vfe, as was the case in the previous embodiments discussed above. in other words, such a phased multicore configuration, or phased array, may be utilized to create multiple optical focus levels from a light source. in another embodiment related to the use of optical fibers, a known fourier transform aspect of multi-mode optical fiber or light guiding rods or pipes may be utilized for control of the wavefronts that are output from such fibers.
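the phased-array idea above can be sketched in the paraxial approximation (all numeric values assumed): to approximate a spherical wavefront from a focal distance d instead of a plane, each core at radius r from the fiber axis is given a quadratic phase delay phi = (2*pi/lambda) * r^2 / (2*d).

```python
import math

# Quadratic phase profile across a multicore fiber that, in the aggregate,
# emulates a curved wavefront representing a point closer than infinity.

def core_phase_delays(core_radii_um, wavelength_um: float, focal_distance_mm: float):
    """Phase delay (radians) per core: phi = (2*pi/lambda) * r^2 / (2*d)."""
    d_um = focal_distance_mm * 1e3
    k = 2.0 * math.pi / wavelength_um
    return [k * (r ** 2) / (2.0 * d_um) for r in core_radii_um]

# cores at 0, 10, 20 um from the fiber axis; 1 um light; 1 m focal distance
phis = core_phase_delays([0.0, 10.0, 20.0], 1.0, 1000.0)
# the on-axis core needs zero delay; outer cores need quadratically more
```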
optical fibers typically are available in two categories: single mode and multi-mode. a multi-mode optical fiber typically has larger core diameters and allows light to propagate along multiple angular paths, rather than just the one of single mode optical fiber. it is known that if an image is injected into one end of a multi-mode fiber, angular differences that are encoded into that image will be retained to some degree as it propagates through the multi-mode fiber. in some configurations the output from the fiber will be significantly similar to a fourier transform of the image that was input into the fiber. thus in one embodiment, the inverse fourier transform of a wavefront (such as a diverging spherical wavefront to represent a focal plane nearer to the user than optical infinity) may be input such that, after passing through the fiber that optically imparts a fourier transform, the output is the desired shaped, or focused, wavefront. such an output end may be scanned about to be used as a scanned fiber display, or may be used as a light source for a scanning mirror to form an image, for instance. thus such a configuration may be utilized as yet another focus modulation subsystem. other kinds of light patterns and wavefronts may be injected into a multi-mode fiber, such that on the output end, a certain spatial pattern is emitted. this may be utilized to provide an equivalent of a wavelet pattern (in optics, an optical system may be analyzed in terms of the zernike coefficients; images may be similarly characterized and decomposed into smaller principal components, or a weighted combination of comparatively simpler image components). thus if light is scanned into the eye using the principal components on the input side, a higher resolution image may be recovered at the output end of the multi-mode fiber.
in another embodiment, the fourier transform of a hologram may be injected into the input end of a multi-mode fiber to output a wavefront that may be used for three-dimensional focus modulation and/or resolution enhancement. certain single fiber core, multi-core fibers, or concentric core + cladding configurations also may be utilized in the aforementioned inverse fourier transform configurations. in another embodiment, rather than physically manipulating the wavefronts approaching the eye of the user at a high frame rate without regard to the user's particular state of accommodation or eye gaze, a system may be configured to monitor the user's accommodation and rather than presenting a set of multiple different light wavefronts, present a single wavefront at a time that corresponds to the accommodation state of the eye. accommodation may be measured directly (such as by infrared autorefractor or eccentric photorefraction) or indirectly (such as by measuring the convergence level of the two eyes of the user; as described above, vergence and accommodation are strongly linked neurologically, so an estimate of accommodation can be made based upon vergence geometry). thus with a determined accommodation of, say, 1 meter from the user, then the wavefront presentations at the eye may be configured for a 1 meter focal distance using any of the above variable focus configurations. if an accommodation change to focus at 2 meters is detected, the wavefront presentation at the eye may be reconfigured for a 2 meter focal distance, and so on. thus in one embodiment that incorporates accommodation tracking, a vfe may be placed in the optical path between an outputting combiner (e.g., a waveguide or beamsplitter) and the eye of the user, such that the focus may be changed along with (e.g., preferably at the same rate as) accommodation changes of the eye. 
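the indirect (vergence-based) accommodation estimate mentioned above reduces to triangulating the fixation distance from the two eyes' lines of sight. a minimal sketch, assuming a typical interpupillary distance; the numbers are illustrative, not from the embodiments:

```python
import math

def accommodation_from_vergence(vergence_angle_rad, ipd_m=0.063):
    """estimate the accommodation demand, in diopters, from the total
    vergence angle between the two eyes' gaze directions, using simple
    triangulation with the interpupillary distance (assumed 63 mm)."""
    if vergence_angle_rad <= 0:
        return 0.0  # parallel gaze: fixation at optical infinity, 0 d
    distance_m = (ipd_m / 2) / math.tan(vergence_angle_rad / 2)
    return 1.0 / distance_m  # diopters = 1 / meters

# the vergence angle produced by fixating a point 1 m straight ahead
one_meter_angle = 2 * math.atan((0.063 / 2) / 1.0)
demand = accommodation_from_vergence(one_meter_angle)
```

with the estimate in hand, the wavefront presentation may be reconfigured to the matching focal distance, e.g. 1 meter for a 1.0 diopter demand, as described above.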
software effects may be utilized to produce variable amounts of blur (e.g., gaussian) to objects which should not be in focus to simulate the dioptric blur expected at the retina as if an object were at that viewing distance. this enhances the three-dimensional perception by the eyes/brain. a simple embodiment is a single plane whose focus level is slaved to the viewer's accommodation level. however, the performance demands on the accommodation tracking system can be relaxed if even a low number of multiple planes is used. referring to fig. 10 , in another embodiment, a stack 328 of about 3 waveguides (318, 320, 322) may be utilized to create three focal planes of wavefronts simultaneously. in one embodiment, the weak lenses (324, 326) may have static focal distances, and a variable focal lens 316 may be slaved to the accommodation tracking of the eyes such that one of the three waveguides (say the middle waveguide 320) outputs what is deemed to be the in-focus wavefront, while the other two waveguides (322, 318) output a + margin wavefront and a - margin wavefront (e.g., a little farther than detected focal distance, a little closer than detected focal distance). this may improve three-dimensional perception and also provide enough difference for the brain/eye accommodation control system to sense some blur as negative feedback, which, in turn, enhances the perception of reality, and allows a range of accommodation before a physical adjustment of the focus levels, if necessary. a variable focus compensating lens 314 is also shown to ensure that light coming in from the real world 144 in an augmented reality configuration is not refocused or magnified by the assembly of the stack 328 and output lens 316. the variable focus in the lenses (316, 314) may be achieved, as discussed above, with refractive, diffractive, or reflective techniques.
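the amount of software blur that simulates dioptric defocus may be estimated from the geometric blur circle: its angular size is roughly the pupil diameter times the defocus in diopters. a sketch under that first-order assumption; the pupil diameter, display resolution, and the mapping to a gaussian sigma are assumed values:

```python
def blur_circle_rad(object_diopters, focus_diopters, pupil_diameter_m=0.004):
    """angular diameter (radians) of the geometric blur circle for an
    object at object_diopters when the eye is accommodated to
    focus_diopters. a first-order estimate for sizing a gaussian blur."""
    defocus = abs(object_diopters - focus_diopters)  # diopters
    return pupil_diameter_m * defocus

def blur_sigma_pixels(object_diopters, focus_diopters,
                      pixels_per_radian=3000.0):
    """convert the blur circle to an (assumed) gaussian sigma in display
    pixels; pixels_per_radian is a hypothetical display resolution."""
    return blur_circle_rad(object_diopters, focus_diopters) \
        * pixels_per_radian / 2

# object at 2 m (0.5 d) viewed while the eye is accommodated to 1 m (1.0 d)
sigma = blur_sigma_pixels(0.5, 1.0)
```

slaving `focus_diopters` to the accommodation tracker and blurring each object by its own `sigma` reproduces the single-plane embodiment described above.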
in another embodiment, each of the waveguides in a stack may contain their own capability for changing focus (such as by having an included electronically switchable doe) such that the vfe need not be centralized as in the stack 328 of the configuration of fig. 10 . in another embodiment, vfes may be interleaved between the waveguides of a stack (e.g., rather than fixed focus weak lenses as in the embodiment of fig. 10 ) to obviate the need for a combination of fixed focus weak lenses plus whole-stack-refocusing variable focus element. such stacking configurations may be used in accommodation tracked variations as described herein, and also in a frame-sequential multi-focal display approach. in a configuration wherein light enters the pupil with a small exit pupil, such as 1/2 mm diameter or less, one has the equivalent of a pinhole lens configuration wherein the beam is always interpreted as in-focus by the eyes/brain-e.g., a scanned light display using a 0.5 mm diameter beam to scan images to the eye. such a configuration is known as a maxwellian view configuration, and in one embodiment, accommodation tracking input may be utilized to induce blur using software to image information that is to be perceived as at a focal plane behind or in front of the focal plane determined from the accommodation tracking. in other words, if one starts with a display presenting a maxwellian view, then everything theoretically can be in focus. in order to provide a rich and natural three-dimensional perception, simulated dioptric blur may be induced with software, and may be slaved to the accommodation tracking status. in one embodiment a scanning fiber display is well suited to such configuration because it may be configured to only output small-diameter beams in a maxwellian form. 
in another embodiment, an array of small exit pupils may be created to increase the functional eye box of the system (and also to reduce the impact of a light-blocking particle which may reside in the vitreous or cornea of the eye), such as by one or more scanning fiber displays. or, this may be achieved through a doe configuration such as that described in reference to fig. 8k , with a pitch in the array of presented exit pupils that ensures that only one will hit the anatomical pupil of the user at any given time (for example, if the average anatomical pupil diameter is 4mm, one configuration may comprise 1/2 mm exit pupils spaced approximately 4mm apart). such exit pupils may also be switchable in response to eye position, such that the eye always receives one, and only one, active small exit pupil at a time, allowing a denser array of exit pupils. such a user will have a large depth of focus, to which software-based blur techniques may be added to enhance perceived depth. as discussed above, an object at optical infinity creates a substantially planar wavefront. an object closer, such as 1m away from the eye, creates a curved wavefront (with about 1m convex radius of curvature). it should be appreciated that the eye's optical system is required to possess sufficient optical power to bend the incoming rays of light such that the light rays are focused on the retina (convex wavefront gets turned into concave, and then down to a focal point on the retina). these are basic functions of the eye. in many of the embodiments described above, light directed to the eye has been treated as being part of one continuous wavefront, some subset of which would hit the pupil of the particular eye.
in another approach, light directed to the eye may be effectively discretized or broken down into a plurality of beamlets or individual rays, each of which has a diameter less than about 0.5mm and a unique propagation pathway as part of a greater aggregated wavefront that may be functionally created with an aggregation of the beamlets or rays. for example, a curved wavefront may be approximated by aggregating a plurality of discrete neighboring collimated beams, each of which is approaching the eye from an appropriate angle to represent a point of origin. the point of origin may match the center of the radius of curvature of the desired aggregate wavefront. when the beamlets have a diameter of about 0.5mm or less, this configuration is akin to a pinhole lens configuration. in other words, each individual beamlet is always in relative focus on the retina, independent of the accommodation state of the eye; however, the trajectory of each beamlet will be affected by the accommodation state. for instance, if the beamlets approach the eye in parallel, representing a discretized collimated aggregate wavefront, then an eye that is correctly accommodated to infinity will deflect the beamlets to converge upon the same shared spot on the retina, and will appear in focus. if the eye accommodates to, say, 1 m, the beams will be converged to a spot in front of the retina, cross paths, and fall on multiple neighboring or partially overlapping spots on the retina, appearing blurred. if the beamlets approach the eye in a diverging configuration, with a shared point of origin 1 meter from the viewer, then an accommodation of 1 m will steer the beams to a single spot on the retina, and will appear in focus. if the viewer accommodates to infinity, the beamlets will converge to a spot behind the retina, and produce multiple neighboring or partially overlapping spots on the retina, producing a blurred image.
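the beamlet trajectories described above may be checked with a paraxial reduced-eye model: the retinal spot offset of each beamlet scales with its pupil entry height and with the mismatch between the beamlets' point of origin and the eye's accommodation. this is a rough sketch; the 60 diopter eye power and the pupil heights are assumed values:

```python
def retinal_spot_offsets(pupil_heights_m, object_diopters, focus_diopters,
                         eye_power_diopters=60.0):
    """paraxial model of the discretized-wavefront behavior: each beamlet
    enters the pupil at a given height, aimed as if emitted by a point
    source at object_diopters. with the eye accommodated to
    focus_diopters, the beamlets land at these relative retinal offsets;
    all offsets equal means the aggregate pixel is perceived in focus."""
    scale = (focus_diopters - object_diopters) \
        / (focus_diopters + eye_power_diopters)
    return [h * scale for h in pupil_heights_m]

# three parallel beamlets (origin at infinity, 0 d) viewed by an eye
# accommodated to infinity: all land on the same retinal point (in focus)
in_focus = retinal_spot_offsets([-0.001, 0.0, 0.001], 0.0, 0.0)
# same beamlets with the eye accommodated to 1 m: spots spread out (blur)
blurred = retinal_spot_offsets([-0.001, 0.0, 0.001], 0.0, 1.0)
```

the spread of `blurred` is the multiple neighboring or partially overlapping retinal spots described above.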
stated more generally, the accommodation of the eye determines the degree of overlap of the spots on the retina, and a given pixel is "in focus" when all of the spots are directed to the same spot on the retina and "defocused" when the spots are offset from one another. this notion that all of the 0.5mm diameter or less beamlets are always in focus, and that the beamlets may be aggregated to be perceived by the eyes/brain as coherent wavefronts, may be utilized in producing configurations for comfortable three-dimensional virtual or augmented reality perception. in other words, a set of multiple narrow beams may be used to emulate a larger diameter variable focus beam. if the beamlet diameters are kept to a maximum of about 0.5mm, then a relatively static focus level may be maintained. to produce the perception of out-of-focus when desired, the beamlet angular trajectories may be selected to create an effect much like a larger out-of-focus beam (such a defocussing treatment may not be the same as a gaussian blur treatment as for the larger beam, but will create a multimodal point spread function that may be interpreted in a similar fashion to a gaussian blur). 
in a preferred embodiment, the beamlets are not mechanically deflected to form this aggregate focus effect, but rather the eye receives a superset of many beamlets that includes both a multiplicity of incident angles and a multiplicity of locations at which the beamlets intersect the pupil; to represent a given pixel from a particular viewing distance, a subset of beamlets from the superset that comprise the appropriate angles of incidence and points of intersection with the pupil (as if they were being emitted from the same shared point of origin in space) are turned on with matching color and intensity, to represent that aggregate wavefront, while beamlets in the superset that are inconsistent with the shared point of origin are not turned on with that color and intensity (but some of them may be turned on with some other color and intensity level to represent, e.g., a different pixel). referring to fig. 11a , each of a multiplicity of incoming beamlets (332) is passing through a small exit pupil (330) relative to the eye 58 in a discretized wavefront display configuration. referring to fig. 11b , a subset (334) of the group of beamlets (332) may be driven with matching color and intensity levels to be perceived as though they are part of the same larger-sized ray (the bolded subgroup (334) may be deemed an "aggregated beam"). in this case, the subset of beamlets are parallel to one another, representing a collimated aggregate beam from optical infinity (such as light coming from a distant mountain). the eye is accommodated to infinity, so the subset of beamlets are deflected by the eye's cornea and lens to all fall substantially upon the same location of the retina and are perceived to comprise a single in focus pixel. fig. 11c shows another subset of beamlets representing an aggregated collimated beam (336) coming in from the right side of the field of view of the user's eye 58 if the eye 58 is viewed in a coronal-style planar view from above. 
again, the eye is shown accommodated to infinity, so the beamlets fall on the same spot of the retina, and the pixel is perceived to be in focus. if, in contrast, a different subset of beamlets were chosen that were reaching the eye as a diverging fan of rays, those beamlets would not fall on the same location of the retina (and be perceived as in focus) until the eye were to shift accommodation to a near point that matches the geometrical point of origin of that fan of rays. with regards to patterns of points of intersection of beamlets with the anatomical pupil of the eye (e.g., the pattern of exit pupils), the points of intersection may be organized in configurations such as a cross-sectionally efficient hex-lattice (for example, as shown in fig. 12a ) or a square lattice or other two-dimensional array. further, a three-dimensional array of exit pupils could be created, as well as time-varying arrays of exit pupils. discretized aggregate wavefronts may be created using several configurations, such as an array of microdisplays or microprojectors placed optically conjugate with the exit pupil of viewing optics, microdisplay or microprojector arrays coupled to a direct field of view substrate (such as an eyeglasses lens) such that they project light to the eye directly, without additional intermediate viewing optics, successive spatial light modulation array techniques, or waveguide techniques such as those described in relation to fig. 8k . referring to fig. 12a , in one embodiment, a lightfield may be created by bundling a group of small projectors or display units (such as scanned fiber displays). fig. 12a depicts a hexagonal lattice projection bundle 338 which may, for example, create a 7mm-diameter hex array with each fiber display outputting a sub-image (340). 
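one way to generate the hex-lattice pattern of exit-pupil intersection points mentioned above is sketched below; the pitch and ring count are assumed values chosen to echo the 0.5 mm pupils at roughly 4 mm spacing discussed earlier:

```python
def hex_lattice(pitch_m, rings):
    """hexagonal lattice of exit-pupil centers as (x, y) in meters, built
    from axial coordinates: a center point surrounded by the given number
    of hexagonal rings, each point pitch_m from its six neighbors."""
    pts = []
    for q in range(-rings, rings + 1):
        for r in range(-rings, rings + 1):
            if abs(q + r) <= rings:
                x = pitch_m * (q + r / 2)
                y = pitch_m * r * (3 ** 0.5) / 2
                pts.append((x, y))
    return pts

# exit pupils pitched ~4 mm so that roughly one falls inside a 4 mm
# anatomical pupil centered on the lattice at any time
grid = hex_lattice(pitch_m=0.004, rings=2)
inside = [(x, y) for (x, y) in grid if (x * x + y * y) ** 0.5 < 0.002]
```

a square lattice or a time-varying array, as also noted above, would simply substitute a different point generator.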
if such an array has an optical system, such as a lens, placed in front of it such that the array is placed optically conjugate with the eye's entrance pupil, this will create an image of the array at the eye's pupil, as shown in fig. 12b , which essentially provides the same optical arrangement as the embodiment of fig. 11a . each of the small exit pupils of the configuration is created by a dedicated small display in the bundle 338, such as a scanning fiber display. optically, it's as though the entire hex array 338 is positioned right into the anatomical pupil 45. such embodiments may be used for driving different subimages to different small exit pupils within the larger anatomical entrance pupil 45 of the eye, comprising a superset of beamlets with a multiplicity of incident angles and points of intersection with the eye pupil. each of the separate projectors or displays may be driven with a slightly different image, such that subimages may be created that pull out different sets of rays to be driven at different light intensities and colors. in one variation, a strict image conjugate may be created, as in the embodiment of fig. 12b , wherein there is direct 1-to-1 mapping of the array 338 with the pupil 45. in another variation, the spacing may be changed between displays in the array and the optical system (lens (342), in fig. 12b ) such that instead of receiving a conjugate mapping of the array to the eye pupil, the eye pupil may be catching the rays from the array at some other distance. with such a configuration, one would still get an angular diversity of beams through which one could create a discretized aggregate wavefront representation, but the mathematics regarding how to drive which ray and at which power and intensity may become more complex (although, on the other hand, such a configuration may be considered simpler from a viewing optics perspective). the mathematics involved with light field image capture may be leveraged for these calculations. 
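the difference between the strict image-conjugate spacing and the altered spacing described above may be illustrated with a thin-lens paraxial ray trace; the focal length and element positions are assumed values, and the helper name is hypothetical:

```python
def trace_to_pupil(x_display_m, angle_rad, lens_focal_m,
                   d_display_lens_m, d_lens_pupil_m):
    """thin-lens paraxial ray trace from a display element in the array
    to the pupil plane: returns the (height, angle) at which that ray
    crosses the pupil."""
    x = x_display_m + d_display_lens_m * angle_rad  # propagate to lens
    a = angle_rad - x / lens_focal_m                # thin-lens refraction
    x = x + d_lens_pupil_m * a                      # propagate to pupil
    return x, a

f = 0.02  # assumed 20 mm focal length
# image-conjugate (2f-2f) spacing: every ray leaving a given display
# element lands at the same pupil position: a 1-to-1 array-to-pupil map
x1, _ = trace_to_pupil(0.001, 0.00, f, 2 * f, 2 * f)
x2, _ = trace_to_pupil(0.001, 0.01, f, 2 * f, 2 * f)
# nonconjugate spacing: pupil position now depends on ray angle as well
x3, _ = trace_to_pupil(0.001, 0.01, f, 2 * f, f)
```

the angle dependence in the nonconjugate case is what makes the drive mathematics more complex, as noted above, while still delivering angular diversity at the pupil.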
referring to fig. 13a , another lightfield creating embodiment is depicted wherein an array of microdisplays or microprojectors 346 may be coupled to a frame (344), such as an eyeglasses frame. this configuration may be positioned in front of the eye 58. the depicted configuration is a nonconjugate arrangement wherein there are no large-scale optical elements interposed between the displays (for example, scanning fiber displays) of the array 346 and the eye 58. one can imagine a pair of glasses, and coupled to those glasses are a plurality of displays, such as scanning fiber engines, positioned orthogonal to the eyeglasses surface, and all angled inward so they are pointing at the pupil of the user. each display may be configured to create a set of rays representing different elements of the beamlet superset. with such a configuration, at the anatomical pupil 45 the user may receive a similar result as received in the embodiments discussed in reference to fig. 11a , in which every point at the user's pupil is receiving rays with a multiplicity of angles of incidence and points of intersection that are being contributed from the different displays. fig. 13b illustrates a nonconjugate configuration similar to that of fig. 13a , with the exception that the embodiment of fig. 13b features a reflecting surface (348) to facilitate moving the display array 346 away from the eye's 58 field of view, while also allowing views of the real world 144 through the reflective surface (348). thus another configuration for creating the angular diversity necessary for a discretized aggregate wavefront display is presented. to optimize such a configuration, the sizes of the displays may be minimized. scanning fiber displays which may be utilized as displays may have baseline diameters in the range of 1 mm, but reduction in enclosure and projection lens hardware may decrease the diameters of such displays to about 0.5 mm or less, which is less disturbing for a user.
another downsizing geometric refinement may be achieved by directly coupling a collimating lens (which may, for example, comprise a gradient refractive index, or "grin", lens, a conventional curved lens, or a diffractive lens) to the tip of the scanning fiber itself in a case of a fiber scanning display array. for example, referring to fig. 13d , a grin lens (354) is shown fused to the end of a single mode optical fiber. an actuator 350, such as a piezoelectric actuator, may be coupled to the fiber 352 and may be used to scan the fiber tip. in another embodiment the end of the fiber may be shaped into a hemispherical shape using a curved polishing treatment of an optical fiber to create a lensing effect. in another embodiment a standard refractive lens may be coupled to the end of each optical fiber using an adhesive. in another embodiment a lens may be built from a dab of transmissive polymeric material or glass, such as epoxy. in another embodiment the end of an optical fiber may be melted to create a curved surface for a lensing effect. fig. 13c-2 shows an embodiment wherein display configurations (e.g., scanning fiber displays with grin lenses, shown in close-up view of fig. 13c-1 ) such as that shown in fig. 13d may be coupled together through a single transparent substrate 356 preferably having a refractive index that closely matches the cladding of the optical fibers 352 such that the fibers themselves are not substantially visible for viewing of the outside world across the depicted assembly. it should be appreciated that if the index matching of the cladding is done precisely, then the larger cladding/housing becomes transparent and only the small cores, which preferably are about 3 microns in diameter, will be obstructing the view. 
in one embodiment the matrix 358 of displays may all be angled inward so they are directed toward the anatomic pupil of the user (in another embodiment, they may stay parallel to each other, but such a configuration is less efficient). referring to fig. 13e , another embodiment is depicted wherein rather than using circular fibers to move cyclically, a thin series of planar waveguides (358) are configured to be cantilevered relative to a larger substrate structure 356. in one variation, the substrate 356 may be moved to produce cyclic motion (e.g., at the resonant frequency of the cantilevered members 358) of the planar waveguides relative to the substrate structure. in another variation, the cantilevered waveguide portions 358 may be actuated with piezoelectric or other actuators relative to the substrate. image illumination information may be injected, for example, from the right side (360) of the substrate structure to be coupled into the cantilevered waveguide portions (358). in one embodiment the substrate 356 may comprise a waveguide configured (such as with an integrated doe configuration as described above) to totally internally reflect incoming light 360 along its length and then redirect it to the cantilevered waveguide portions 358. as a person gazes toward the cantilevered waveguide portions (358) and through to the real world 144 behind, the planar waveguides are configured to minimize any dispersion and/or focus changes with their planar shape factors. in the context of discretized aggregate wavefront displays, there may be value in having some angular diversity created for every point in the exit pupil of the eye. in other words, it is desirable to have multiple incoming beams to represent each pixel in a displayed image. referring to figs. 13f-1 and 13f-2 , one approach to gain further angular and spatial diversity is to use a multicore fiber and place a lens at the exit point, such as a grin lens. 
this may cause exit beams to be deflected through a single nodal point 366. this nodal point 366 may then be scanned back and forth in a scanned fiber type of arrangement (such as by a piezoelectric actuator 368). if a retinal conjugate is placed at the plane defined at the end of the grin lens, a display may be created that is functionally equivalent to the general case discretized aggregate wavefront configuration described above. referring to fig. 13g , a similar effect may be achieved not by using a lens, but by scanning the face of a multicore system at the correct conjugate of an optical system 372 in order to create a higher angular and spatial diversity of beams. in other words, rather than having a plurality of separately scanned fiber displays (as shown in the bundled example of fig. 12a described above), some of this requisite angular and spatial diversity may be created through the use of multiple cores to create a plane which may be relayed by a waveguide. referring to fig. 13h , a multicore fiber 362 may be scanned (such as by a piezoelectric actuator 368) to create a set of beamlets with a multiplicity of angles of incidence and points of intersection which may be relayed to the eye 58 by a waveguide 370. thus in one embodiment a collimated lightfield image may be injected into a waveguide, and without any additional refocusing elements, that lightfield display may be translated directly to the human eye. figs. 13i-13l depict certain commercially available multicore fiber 362 configurations (from vendors such as mitsubishi cable industries, ltd. of japan), including one variation 363 with a rectangular cross section, as well as variations with flat exit faces 372 and angled exit faces 374. referring to fig. 13m , some additional angular diversity may be created by having a waveguide 376 fed with a linear array of displays 378, such as scanning fiber displays. referring to figs. 
14a-14f , another group of configurations for creating a fixed viewpoint lightfield display is described. referring back to fig. 11a , if a two-dimensional plane was created that was intersecting all of the small beams coming in from the left, each beamlet would have a certain point of intersection with that plane. if another plane was created at a different distance to the left, then all of the beamlets would intersect that plane at a different location. referring back to fig. 14a , if various positions on each of two or more planes are allowed to selectively transmit or block the light radiation directed through it, such a multi-planar configuration may be utilized to selectively create a lightfield by independently modulating individual beamlets. the basic embodiment of fig. 14a shows two spatial light modulators, such as liquid crystal display panels (380, 382). in other embodiments, the spatial light modulators may be mems shutter displays or dlp dmd arrays. the spatial light modulators may be independently controlled to block or transmit different rays on a high-resolution basis. for example, referring to fig. 14a , if the second panel 382 blocks or attenuates transmission of rays at point "a" 384, all of the depicted rays will be blocked. however, if only the first panel 380 blocks or attenuates transmission of rays at point "b" 386, then only the lower incoming ray 388 will be blocked/attenuated, while the rest will be transmitted toward the pupil 45. each of the controllable panels or planes may be deemed a "spatial light modulator" or "slm". the intensity of each transmitted beam passed through a series of slms will be a function of the combination of the transparency of the various pixels in the various slm arrays. thus without any sort of lens elements, a set of beamlets with a multiplicity of angles and points of intersection (or a "lightfield") may be created using a plurality of stacked slms.
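the intensity rule stated above, that a transmitted beam's intensity is a function of the combined transparencies along its path through the stack, can be written directly as a product; the values below are illustrative:

```python
def beam_transmission(panel_transparencies):
    """intensity fraction of a single beamlet after passing through a
    stack of slm panels, where panel_transparencies[i] is the
    transparency (0..1) of the pixel that the beamlet's path crosses in
    panel i. the transmitted intensity is the product across the stack."""
    t = 1.0
    for transparency in panel_transparencies:
        t *= transparency
    return t

# a ray blocked at point "a" in the second panel: fully attenuated
blocked = beam_transmission([1.0, 0.0])
# a ray passing open pixels in both panels: fully transmitted
passed = beam_transmission([1.0, 1.0])
# partially attenuating pixels multiply
partial = beam_transmission([0.5, 0.5])
```

adding more panels to the list models the additional slms, beyond two, that give finer control over which beams are selectively attenuated.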
additional numbers of slms beyond two provide more opportunities to control which beams are selectively attenuated. as noted briefly above, in addition to using stacked liquid crystal displays as slms, planes of dmd devices from dlp systems may be stacked to function as slms. in one or more embodiments, they may be preferred over liquid crystal systems as slms due to their ability to more efficiently pass light (e.g., with a mirror element in a first state, reflectivity to the next element on the way to the eye may be quite efficient; with a mirror element in a second state, the mirror angle may be moved by an angle such as 12 degrees to direct the light away from the path to the eye). referring to fig. 14b , in one dmd embodiment, two dmds (390, 392) may be utilized in series with a pair of lenses (394, 396) in a periscope type of configuration to maintain a high amount of transmission of light from the real world 144 to the eye 58 of the user. the embodiment of fig. 14c provides six different dmd (402, 404, 406, 408, 410, 412) plane opportunities to intercede from an slm functionality as beams are routed to the eye 58, along with two lenses (398, 400) for beam control. fig. 14d illustrates a more complicated periscope type arrangement with up to four dmds (422, 424, 426, 428) for slm functionality and four lenses (414, 420, 416, 418). this configuration is designed to ensure that the image does not flip upside down as it travels through to the eye 58. fig. 14e illustrates an embodiment in which light may be reflected between two different dmd devices (430, 432) without any intervening lenses (the lenses in the above designs are useful in such configurations for incorporating image information from the real world), in a hall-of-mirrors type of arrangement wherein the display may be viewed through the "hall of mirrors" and operates in a mode substantially similar to that illustrated in fig. 14a . fig.
14f illustrates an embodiment wherein the non-display portions of two facing dmd chips (434, 436) may be covered with a reflective layer to propagate light to and from active display regions (438, 440) of the dmd chips. in other embodiments, in place of dmds for slm functionality, arrays of sliding mems shutters (such as those available from vendors such as pixtronics, a division of qualcomm, inc.) may be utilized to either pass or block light. in another embodiment, arrays of small louvers that move out of place to present light-transmitting apertures may similarly be aggregated for slm functionality. a lightfield of many small beamlets (say, less than about 0.5mm in diameter) may be injected into and propagated through a waveguide or other optical system. for example, a conventional "birdbath" type of optical system may be suitable for transferring the light of a lightfield input, or a freeform optics design, as described below, or any number of waveguide configurations. figs. 15a-15c illustrate the use of a wedge type waveguide 442 along with a plurality of light sources as another configuration useful in creating a lightfield. referring to fig. 15a , light may be injected into the wedge-shaped waveguide 442 from two different locations/displays (444, 446), and will emerge according to the total internal reflection properties of the wedge-shaped waveguide at different angles 448 based upon the points of injection into the waveguide. referring to fig. 15b , if a linear array 450 of displays (such as scanning fiber displays) is created, projecting into the end of the waveguide as shown, then a large angular diversity of beams 452 will be exiting the waveguide in one dimension, as shown in fig. 15c . indeed, if yet another linear array of displays injecting into the end of the waveguide is added but at a slightly different angle, then an angular diversity of beams may be created that exits similarly to the fanned out exit pattern shown in fig.
15c , but at an orthogonal axis. together, these beams may be utilized to create a two-dimensional fan of rays exiting each location of the waveguide. thus another configuration is presented for creating angular diversity to form a lightfield display using one or more scanning fiber display arrays (or alternatively using other displays which will meet the space requirements, such as miniaturized dlp projection configurations). alternatively, as an input to the wedge-shaped waveguides shown herein, a stack of slm devices may be utilized. in this embodiment, rather than the direct view of slm output as described above, the lightfield output from the slm configuration may be used as an input to a configuration such as that shown in fig. 15c . it should be appreciated that while a conventional waveguide is best suited to relay beams of collimated light successfully, with a lightfield of small-diameter collimated beams, conventional waveguide technology may be utilized to further manipulate the output of such a lightfield system as injected into the side of a waveguide, such as a wedge-shaped waveguide, due to the beam size / collimation. in another related embodiment, rather than projecting with multiple separate displays, a multicore fiber may be used to generate a lightfield and inject it into the waveguide. further, a time-varying lightfield may be utilized as an input, such that rather than creating a static distribution of beamlets coming out of a lightfield, dynamic elements that are methodically changing the path of the set of beams may also be introduced. this may be accomplished by using components such as waveguides with embedded does (e.g., such as those described above in reference to figs. 8b-8n , or liquid crystal layers, as described in reference to fig. 7b ), in which two optical paths are created.
one path is a smaller total internal reflection path wherein a liquid crystal layer is placed in a first voltage state to have a refractive index mismatch with the other substrate material that causes total internal reflection down just the other substrate material's waveguide. another path is a larger total internal reflection optical path wherein the liquid crystal layer is placed in a second voltage state to have a matching refractive index with the other substrate material, such that the light totally internally reflects through the composite waveguide which includes both the liquid crystal portion and the other substrate portion. similarly a wedge-shaped waveguide may be configured to have a bi-modal total internal reflection paradigm. for example, in one variation, wedge-shaped elements may be configured such that when a liquid crystal portion is activated, not only is the spacing changed, but also the angle at which the beams are reflected. one embodiment of a scanning light display may be characterized simply as a scanning fiber display with a lens at the end of the scanned fiber. many lens varieties are suitable, such as a grin lens, which may be used to collimate the light or to focus the light down to a spot smaller than the fiber's mode field diameter providing the advantage of producing a numerical aperture (or "na") increase and circumventing the optical invariant, which is correlated inversely with spot size. smaller spot size generally facilitates a higher resolution opportunity from a display perspective, which generally is preferred. in one embodiment, a grin lens may be long enough relative to the fiber that it may comprise the vibrating element (e.g., rather than the usual distal fiber tip vibration with a scanned fiber display). in another embodiment, a diffractive lens may be utilized at the exit end of a scanning fiber display (e.g., patterned onto the fiber). 
in another embodiment, a curved mirror may be positioned on the end of the fiber that operates in a reflecting configuration. essentially any of the configurations known to collimate and focus a beam may be used at the end of a scanning fiber to produce a suitable scanned light display. two significant utilities to having a lens coupled to or comprising the end of a scanned fiber (e.g., as compared to configurations wherein an uncoupled lens may be utilized to direct light after it exits a fiber) are (a) the light exiting may be collimated to obviate the need to use other external optics to do so, and (b) the na, or the angle of the cone at which light sprays out the end of the single-mode fiber core, may be increased, thereby decreasing the associated spot size for the fiber and increasing the available resolution for the display. as described above, a lens such as a grin lens may be fused to or otherwise coupled to the end of an optical fiber or formed from a portion of the end of the fiber using techniques such as polishing. in one embodiment, a typical optical fiber with an na of about 0.13 or 0.14 may have a spot size (also known as the "mode field diameter" for the optical fiber given the numerical aperture (na)) of about 3 microns. this provides for relatively high resolution display possibilities given the industry standard display resolution paradigms (for example, a typical microdisplay technology such as lcd or organic light emitting diode, or "oled" has a spot size of about 5 microns). thus the aforementioned scanning light display may have 3/5 of the smallest pixel pitch available with a conventional display. further, using a lens at the end of the fiber, the aforementioned configuration may produce a spot size in the range of 1-2 microns. 
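the numbers quoted above can be sanity-checked with the standard gaussian-beam relation between numerical aperture and mode field diameter, mfd ≈ 2λ/(π·na). the sketch below is illustrative only: the 0.6 micron wavelength is an assumed mid-visible value, and the lensed na of 0.3 is an assumption standing in for the na increase described above.

```python
import math

def gaussian_mode_field_diameter_um(wavelength_um, na):
    """Approximate single-mode spot size via the Gaussian-beam relation
    MFD ~ 2 * wavelength / (pi * NA)."""
    return 2.0 * wavelength_um / (math.pi * na)

# NA ~ 0.13 at an assumed 0.6 um mid-visible wavelength gives roughly 3 um,
# matching the mode field diameter quoted for a typical single-mode fiber.
mfd = gaussian_mode_field_diameter_um(0.6, 0.13)

# With end lensing raising the effective NA (assumed here to ~0.3),
# the same relation lands in the 1-2 um spot range noted above.
lensed = gaussian_mode_field_diameter_um(0.6, 0.3)
print(round(mfd, 2), round(lensed, 2))
```

versus the ~5 micron pixel of a typical microdisplay, the ~3 micron figure reproduces the 3/5 pitch comparison made above.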
in another embodiment, rather than using a scanned cylindrical fiber, a cantilevered portion of a waveguide (such as a waveguide created using microfabrication processes such as masking and etching, rather than drawn microfiber techniques) may be placed into scanning oscillatory motion, and may be fitted with lensing at the exit ends. in another embodiment, an increased numerical aperture for a fiber to be scanned may be created using a diffuser (e.g., one configured to scatter light and create a larger na) covering the exit end of the fiber. in one variation, the diffuser may be created by etching the end of the fiber to create small bits of terrain that scatter light. in another variation, a bead or sandblasting technique, or direct sanding/scuffing technique may be utilized to create scattering terrain. in yet another variation, an engineered diffuser, similar to a diffractive element, may be created to maintain a clean spot size with desirable na. referring to fig. 16a , an array of optical fibers 454 is shown coupled in to a coupler 456 configured to hold them in parallel together so that their ends may be ground and polished to have an output edge at a critical angle (458; 42 degrees for most glass, for example) to the longitudinal axes of the input fibers, such that the light exiting the angled faces will exit as though it had been passing through a prism, and will bend and become nearly parallel to the surfaces of the polished faces. the beams exiting the fibers 460 in the bundle will become superimposed, but will be out of phase longitudinally due to the different path lengths (referring to fig. 16b , for example, the difference in path lengths from angled exit face to focusing lens for the different cores is visible). what was an x axis type of separation in the bundle before exit from the angled faces, will become a z axis separation. this fact is helpful in creating a multifocal light source from such a configuration. 
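the ~42 degree figure cited above for the polished output edge 458 matches the total internal reflection critical angle from snell's law for common glass against air; a minimal sketch, assuming a refractive index of about 1.5:

```python
import math

def critical_angle_deg(n_inside, n_outside=1.0):
    """Critical angle for total internal reflection (Snell's law):
    theta_c = asin(n_outside / n_inside)."""
    return math.degrees(math.asin(n_outside / n_inside))

# Common optical glass (n ~ 1.5 assumed) against air:
theta_c = critical_angle_deg(1.5)
print(round(theta_c, 1))  # ~41.8 degrees, i.e. the ~42 degrees cited for most glass
```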
in another embodiment, rather than using a bundled/coupled plurality of single mode fibers, a multicore fiber, such as those available from mitsubishi cable industries, ltd. of japan, may be angle polished. in one embodiment, if a 45 degree angle is polished into a fiber and then covered with a reflective element, such as a mirror coating, the exiting light may be reflected from the polished surface and emerge from the side of the fiber (in one embodiment at a location wherein a flat-polished exit window has been created in the side of the fiber) such that as the fiber is scanned, it is functionally scanned in an equivalent of an x-z scan rather than an x-y scan, with the distance changing during the course of the scan. such a configuration may be beneficially utilized to change the focus of the display as well. multicore fibers may be configured to play a role in display resolution enhancement (e.g., higher resolution). for example, in one embodiment, if separate pixel data is sent down a tight bundle of 19 cores in a multicore fiber, and that cluster is scanned around in a sparse spiral pattern with the pitch of the spiral being approximately equal to the diameter of the multicore, then sweeping around will effectively create a display resolution that is approximately 19x the resolution of a single core fiber being similarly scanned around. indeed, it may be more practical to arrange the fibers more sparsely positioned relative to each other, as in the configuration of fig. 16c , which has 7 clusters 464 of 3 fibers each housed within a conduit 462. it should be appreciated that seven clusters are used for illustrative purposes because it is an efficient tiling/hex pattern, and other patterns or numbers may be utilized (e.g., a cluster of 19); the configuration is scalable up or down. with a sparse configuration as shown in fig.
16c , scanning of the multicore scans each of the cores through its own local region, as opposed to a configuration wherein the cores are all packed tightly together and scanned. if the cores are overly proximate to each other and the na of the core is not large enough, the cores may overlap with scanning: the very closely packed cores may cause blurring with each other, thereby not creating as discriminable a spot for display. thus, for resolution increases, it is preferable to have sparse tiling rather than highly dense tiling, although both approaches may be utilized. the notion that densely packed scanned cores can create blurring at the display may be utilized as an advantage in one embodiment wherein a plurality (say a triad of cores to carry red, green, and blue light) of cores are intentionally packed together densely such that each triad forms a triad of overlapped spots featuring red, green, and blue light. with such a configuration, one is able to have an rgb display without having to combine red, green, and blue into a single-mode core, which is an advantage, because conventional mechanisms for combining a plurality (such as three) wavelengths of light into a single core are subject to significant losses in optical energy. referring to fig. 16c , in one embodiment each tight cluster of 3 fiber cores contains one core that relays red light, one core that relays green light, and one core that relays blue light, with the 3 fiber cores close enough together that their positional differences are not resolvable by the subsequent relay optics, forming an effectively superimposed rgb pixel; thus, the sparse tiling of 7 clusters produces resolution enhancement while the tight packing of 3 cores within the clusters facilitates seamless color blending without the need to utilize lossy rgb fiber combiners (e.g., those using wavelength division multiplexing or evanescent coupling techniques). referring to fig.
16d , in another more simple variation, one may have just one cluster 464 housed in a conduit 468 for, say, red/green/blue (and in another embodiment, another core may be added for infrared for uses such as eye tracking). in another embodiment, additional cores may be placed in the tight cluster to carry additional wavelengths of light to comprise a multi-primary display for increased color gamut. referring to fig. 16e , in another embodiment, a sparse array of single cores 470 within a conduit 466 may be utilized (e.g., in one variation with red, green, and blue combined down each of them). such a configuration is workable, albeit somewhat less efficient for resolution increase, and is not optimum for red/green/blue combining. multicore fibers also may be utilized for creating lightfield displays. indeed, rather than keeping the cores separated enough from each other such that the cores do not scan on each other's local area at the display panel, as described above in the context of creating a scanning light display, with a lightfield display, it may be desirable to scan around a densely packed plurality of fibers. this is because each of the beams produced represents a specific part of the lightfield. the light exiting from the bundled fiber tips can be relatively narrow if the fibers have a small na. lightfield configurations may take advantage of this and utilize an arrangement in which a plurality of slightly different beams are being received from the array at the anatomic pupil. thus there are optical configurations with scanning a multicore that are functionally equivalent to an array of single scanning fiber modules, and thus a lightfield may be created by scanning a multicore rather than scanning a group of single mode fibers. in one embodiment, a multi-core phased array approach may be used to create a large exit pupil variable wavefront configuration to facilitate three-dimensional perception.
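the sparse tiling of fig. 16c described above (7 hex-arranged clusters of 3 rgb cores each) can be sketched geometrically; the cluster pitch and intra-cluster spacing below are purely illustrative assumptions, not dimensions from the text:

```python
import math

def hex_cluster_centers(pitch):
    """One center plus six hexagonal neighbors: the 7-cluster tiling of fig. 16c."""
    centers = [(0.0, 0.0)]
    for k in range(6):
        ang = math.radians(60 * k)
        centers.append((pitch * math.cos(ang), pitch * math.sin(ang)))
    return centers

def rgb_core_positions(center, core_spacing):
    """Three tightly packed cores (red, green, blue) around a cluster center."""
    cx, cy = center
    return [(color, cx + core_spacing * math.cos(math.radians(120 * k)),
                    cy + core_spacing * math.sin(math.radians(120 * k)))
            for k, color in enumerate("rgb")]

# Illustrative dimensions in microns: clusters 50 um apart, cores 4 um off-center,
# so each rgb triad overlaps into one spot while clusters scan separate regions.
clusters = hex_cluster_centers(50.0)
cores = [c for center in clusters for c in rgb_core_positions(center, 4.0)]
print(len(clusters), len(cores))  # 7 clusters, 21 cores in total
```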
a single laser configuration with phase modulators is described above. in a multicore embodiment, phase delays may be induced into different channels of a multicore fiber, such that a single laser's light is injected into all of the cores of the multicore configuration so that there is mutual coherence. in one embodiment, a multi-core fiber may be combined with a lens, such as a grin lens. such a lens may be, for example, a refractive lens, diffractive lens, or a polished edge functioning as a lens. the lens may be a single optical surface, or may comprise multiple optical surfaces stacked up. indeed, in addition to having a single lens that extends across the diameter of the multicore, a smaller lenslet array may be desirable at the exit point of light from the cores of the multicore, for example. fig. 16f shows an embodiment wherein a multicore fiber 470 is emitting multiple beams into a lens 472, such as a grin lens. the lens collects the beams down to a focal point 474 in space in front of the lens. in many conventional configurations, the beams exiting the multicore fiber may be diverging. the grin or other lens is configured to direct them down to a single point and collimate them, such that the collimated result may be scanned around for a lightfield display, for instance. referring to fig. 16g , smaller lenses 478 may be placed in front of each of the cores of a multicore 476 configuration, and these lenses may be utilized to collimate the rays. in addition, a shared lens 480 may be configured to focus the collimated beams down to a diffraction limited spot 482 that is aligned for all of the three spots. by combining three collimated, narrow beams with narrow na together as shown, one effectively combines all three into a much larger angle of emission which translates to a smaller spot size in, for example, a head mounted optical display system. referring to fig.
16h , one embodiment features a multicore fiber 476 with a lenslet 478 array feeding the light to a small prism array 484 that deflects the beams generated by the individual cores to a common point. alternatively one may have the small lenslet array shifted relative to the cores such that the light is being deflected and focused down to a single point. such a configuration may be utilized to increase the na. referring to fig. 16i , a two-step configuration is shown with a small lenslet 478 array capturing light from the multicore fiber 476, followed sequentially by a shared lens 486 to focus the beams to a single point 488. such a configuration may be utilized to increase the numerical aperture. as discussed above, a larger na corresponds to a smaller pixel size and higher possible display resolution. referring to fig. 16j , a beveled fiber array which may be held together with a coupler 456, such as those described above, may be scanned with a reflecting device 494 such as a dmd module of a dlp system. with multiple single fibers 454 coupled into the array, or a multicore instead, the superimposed light can be directed through one or more focusing lenses (490, 492) to create a multifocal beam. with the superimposing and angulation of the array, the different sources are at different distances from the focusing lens, which creates different focus levels in the beams as they emerge from the lens 492 and are directed toward the retina 54 of the eye 58 of the user. for example, the farthest optical route/beam may be set up to be a collimated beam representative of optical infinity focal positions. closer routes/beams may be associated with diverging spherical wavefronts of closer focal locations. 
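the statement above that the farthest optical route yields a collimated, optical-infinity beam while closer routes yield diverging wavefronts for nearer focal planes follows from the thin-lens vergence relation v_out = 1/f − 1/u; a minimal sketch, with an assumed 50mm focal length and illustrative stagger distances:

```python
def output_vergence_diopters(source_distance_m, focal_length_m):
    """Thin-lens vergence leaving the focusing lens for a point source at
    source_distance_m: V = 1/f - 1/u. Zero means collimated (optical infinity);
    negative means a diverging wavefront, i.e. a closer perceived focal plane."""
    return 1.0 / focal_length_m - 1.0 / source_distance_m

f = 0.050  # assumed 50 mm focusing lens
# The beveled array staggers the sources: farthest exactly at f, nearer ones short of f.
vergences = [output_vergence_diopters(u, f) for u in (0.050, 0.049, 0.048)]
print([round(v, 2) for v in vergences])
```

a vergence of about −0.4 diopters corresponds to a perceived focal distance of roughly 2.5 m, and about −0.8 diopters to roughly 1.2 m, so small path-length staggers map to usefully separated focal planes.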
the multifocal beam may be passed into a scanning mirror which may be configured to create a raster scan (or, for example, a lissajous curve scan pattern or a spiral scan pattern) of the multifocal beam which may be passed through a series of focusing lenses and then to the cornea and crystalline lens of the eye. the various beams emerging from the lenses create different pixels or voxels of varying focal distances that are superimposed. in one embodiment, one may write different data to each of the light modulation channels at the front end, thereby creating an image that is projected to the eye with one or more focus elements. by changing the focal distance of the crystalline lens (e.g., by accommodating), different incoming pixels may be brought into and out of focus, as shown in figs. 16k and 16l wherein the crystalline lens is in different z axis positions. in another embodiment, the fiber array may be actuated/moved around by a piezoelectric actuator. in another embodiment, a relatively thin ribbon array may be resonated in cantilevered form along the axis perpendicular to the arrangement of the array fibers (e.g., in the thin direction of the ribbon) when a piezoelectric actuator is activated. in one variation, a separate piezoelectric actuator may be utilized to create a vibratory scan in the orthogonal long axis. in another embodiment, a single mirror axis scan may be employed for a slow scan along the long axis while the fiber ribbon is vibrated resonantly. referring to fig. 16m , an array 496 of scanning fiber displays 498 may be beneficially bundled/tiled for an effective resolution increase. it is anticipated that with such a configuration, each scanning fiber of the bundle is configured to write to a different portion of the image plane 500, as shown, for example, in fig. 16n . referring now to fig. 16n , each portion of the image plane is addressed by the emissions from at least one bundle.
in other embodiments, optical configurations may be utilized that allow for slight magnification of the beams as the beams exit the optical fiber such that there is some overlap in the hexagonal, or other lattice pattern, that hits the display plane. this may allow for a better fill factor while maintaining an adequately small spot size and a subtle magnification in the image plane. rather than utilizing individual lenses at the end of each scanned fiber enclosure housing, in one embodiment a monolithic lenslet array may be utilized, so that the lenses may be arranged as closely packed as possible. this allows for even smaller spot sizes in the image plane because one may use a lower amount of magnification in the optical system. thus, arrays of fiber scan displays may be used to increase the resolution of the display, or in other words, they may be used to increase the field of view of the display, because each engine is being used to scan a different portion of the field of view. for a lightfield configuration, the emissions may be more desirably overlapped at the image plane. in one embodiment, a lightfield display may be created using a plurality of small diameter fibers scanned around in space. for example, instead of all of the fibers addressing a different part of an image plane as described above, the configuration may allow for more overlapping (e.g., more fibers angled inward, etc.). or, in another embodiment, the focal power of the lenses may be changed such that the small spot sizes are not conjugate with a tiled image plane configuration. such a configuration may be used to create a lightfield display to scan a plurality of smaller diameter rays around that become intercepted in the same physical space. referring back to fig.
12b , it was discussed that one way of creating a lightfield display involves making the output of the elements on the left collimated with narrow beams, and then making the projecting array conjugate with the eye pupil on the right. referring to fig. 16o , with a common substrate block 502, a single actuator may be utilized to actuate a plurality of fibers 506 in unison together, which is similar to the configuration discussed above in reference to figs. 13-c-1 and 13-c-2 . it may be practically difficult to have all of the fibers retain the same resonant frequency, vibrate in a desirable phase relationship to each other, or have the same dimensions of cantilevering from the substrate block. to address this challenge, the tips of the fibers may be mechanically coupled with a lattice or sheet 504, such as a graphene sheet that is very thin, rigid, and light in weight. with such a coupling, the entire array may vibrate similarly and have the same phase relationship. in another embodiment a matrix of carbon nanotubes may be utilized to couple the fibers, or a piece of very thin planar glass (such as the kind used in creating liquid crystal display panels) may be coupled to the fiber ends. further, a laser or other precision cutting device may be utilized to cut all associated fibers to the same cantilevered length. referring to fig. 17 , in one embodiment it may be desirable to have a contact lens directly interfaced with the cornea, and configured to facilitate the eye focusing on a display that is quite close (such as the typical distance between a cornea and an eyeglasses lens). rather than placing an optical lens as a contact lens, in one variation the lens may comprise a selective filter. fig. 17 depicts a plot 508 of a "notch filter", which, due to its design, blocks only certain wavelength bands, such as 450nm (peak blue), 530nm (green), and 650nm (red), and generally passes or transmits other wavelengths.
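the notch filter behavior plotted in fig. 17 can be modeled as an idealized transmission function; the band centers follow the text, while the 20nm stop-band widths are assumed:

```python
def notch_transmission(wavelength_nm, stop_bands=((450, 20), (530, 20), (650, 20))):
    """Idealized notch filter: block narrow bands centered on the display
    primaries (450 nm blue, 530 nm green, 650 nm red per the text; the
    20 nm widths are assumed) and pass everything else."""
    for center, width in stop_bands:
        if abs(wavelength_nm - center) <= width / 2:
            return 0.0  # display primary: blocked by the contact lens filter
    return 1.0          # broadband real-world light: passes unimpeded

print(notch_transmission(530), notch_transmission(580))  # 0.0 1.0
```

the green primary is blocked while ambient light between the primaries passes, which is exactly the split between display light and real-world light that the pinhole configuration relies on.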
in one embodiment several layers of dielectric coatings may be aggregated to provide the notch filtering functionality. such a filtering configuration may be coupled with a scanning fiber display that is producing a very narrow band illumination for red, green, and blue, and the contact lens with the notch filtering will block out all of the light coming from the display (such as a minidisplay, such as an oled display, mounted in a position normally occupied by an eyeglasses lens) except for the transmissive wavelengths. a narrow pinhole may be created in the middle of the contact lens filtering layers/film such that the small aperture (e.g., less than about 1.5mm diameter) does allow passage of the otherwise blocked wavelengths. thus a pinhole lens configuration is created that functions in a pinhole manner for red, green, and blue only to intake images from the mini-display, while light from the real world, which generally is broadband illumination, will pass through the contact lens relatively unimpeded. thus a large depth of focus virtual display configuration may be assembled and operated. in another embodiment, a collimated image exiting from a waveguide would be visible at the retina because of the pinhole large-depth-of-focus configuration. it may be useful to create a display that can vary its depth of focus over time. for example, in one embodiment, a display may be configured to have different display modes that may be selected (preferably rapidly toggling between the two at the command of the operator) by an operator, such as a first mode combining a very large depth of focus with a small exit pupil diameter (e.g., so that everything is in focus all of the time), and a second mode featuring a larger exit pupil and a more narrow depth of focus. in operation, if a user is to play a three-dimensional video game with objects to be perceived at many depths of field, the operator may select the first mode. 
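the trade between the two modes described above (large depth of focus with a small exit pupil versus narrow depth of focus with a large one) follows from the geometric defocus relation: the blur-disc angle is approximately the aperture diameter times the defocus in diopters, so tolerable defocus scales inversely with exit pupil size. a minimal sketch with assumed numbers:

```python
def depth_of_focus_diopters(pupil_diameter_m, tolerable_blur_rad):
    """Geometric estimate: a defocus of D diopters seen through an aperture of
    diameter a blurs by ~ a * D radians, so tolerable defocus ~ blur / a."""
    return tolerable_blur_rad / pupil_diameter_m

blur_limit = 3e-4  # ~1 arcminute of acceptable blur (assumed)
dof_small = depth_of_focus_diopters(0.0005, blur_limit)  # 0.5 mm exit pupil (assumed)
dof_large = depth_of_focus_diopters(0.0040, blur_limit)  # 4 mm exit pupil (assumed)
print(round(dof_small, 2), round(dof_large, 3))
```

with these assumed numbers the small exit pupil tolerates roughly eight times more defocus, which is why the first mode can keep everything in focus all of the time.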
alternatively, if a user is to type in a long essay (e.g., for a relatively long period of time) using a two-dimensional word processing display configuration, it may be more desirable to switch to the second mode to have the convenience of a larger exit pupil, and a sharper image. in another embodiment, it may be desirable to have a multi-depth of focus display configuration wherein some subimages are presented with a large depth of focus while other subimages are presented with small depth of focus. for example, one configuration may have red wavelength and blue wavelength channels presented with a very small exit pupil so that they are always in focus. then, a green channel only may be presented with a large exit pupil configuration with multiple depth planes (e.g., because the human accommodation system tends to preferentially target green wavelengths for optimizing focus level). thus, in order to reduce costs associated with including too many elements to represent with full depth planes in red, green, and blue, the green wavelength may be prioritized and represented with various different wavefront levels. red and blue may be relegated to being represented with a more maxwellian approach (and, as described above in reference to maxwellian displays, software may be utilized to induce gaussian levels of blur). such a display would simultaneously present multiple depths of focus. as described above, there are portions of the retina which have a higher density of light sensors. the fovea portion, for example, generally is populated with approximately 120 cones per visual degree. display systems have been created in the past that use eye or gaze tracking as an input to save computation resources, creating truly high resolution rendering only where the person is gazing at the time while presenting lower resolution rendering to the rest of the retina.
the locations of the high versus low resolution portions may be dynamically slaved to the tracked gaze location in such a configuration, which may be termed a "foveated display". an improvement on such configurations may comprise a scanning fiber display with pattern spacing that may be dynamically slaved to tracked eye gaze. for example, with a typical scanning fiber display operating in a spiral pattern, as shown in fig. 18 (the leftmost portion 510 of the image in fig. 18 illustrates a spiral motion pattern of a scanned multicore fiber 514; the rightmost portion 512 of the image in fig. 18 illustrates a spiral motion pattern of a scanned single fiber 516 for comparison), a constant pattern pitch provides for a uniform display resolution. in a foveated display configuration, a non-uniform scanning pitch may be utilized, with smaller/tighter pitch (and therefore higher resolution) dynamically slaved to the detected gaze location. for example, if the user's gaze is detected as moving toward the edge of the display screen, the spirals may be clustered more densely in such location, which would create a toroid-type scanning pattern for the high-resolution portions, with the rest of the display being in a lower-resolution mode. in a configuration wherein gaps may be created in the portions of the display in a lower-resolution mode, blur could be intentionally and dynamically created to smooth out the transitions between scans, as well as between transitions from high-resolution to lower-resolution scan pitch. the term lightfield may be used to describe a volumetric 3-d representation of light traveling from an object to a viewer's eye. however, an optical see-through display can only reflect light to the eye, not the absence of light, and ambient light from the real world will add to any light representing a virtual object.
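a non-uniform spiral pitch slaved to gaze, as described above, can be sketched as follows; the pitch values, gaze radius, and gaussian falloff are illustrative assumptions, not parameters from the text:

```python
import math

def foveated_spiral(n_turns, samples_per_turn, gaze_radius,
                    tight_pitch, coarse_pitch, falloff=0.2):
    """Spiral scan whose radial pitch tightens near the tracked gaze radius:
    the pitch blends from coarse_pitch far from the gaze toward tight_pitch
    near it, clustering turns densely in the high-resolution region."""
    points, r = [], 0.0
    for i in range(n_turns * samples_per_turn):
        theta = 2.0 * math.pi * i / samples_per_turn
        w = math.exp(-((r - gaze_radius) / falloff) ** 2)  # ~1 near gaze, ~0 away
        pitch = coarse_pitch + (tight_pitch - coarse_pitch) * w
        r += pitch / samples_per_turn  # advance radius by the local pitch per sample
        points.append((r * math.cos(theta), r * math.sin(theta)))
    return points

pts = foveated_spiral(20, 64, gaze_radius=0.5, tight_pitch=0.01, coarse_pitch=0.05)
print(len(pts))
```

near the gaze radius the turns bunch together (higher resolution); elsewhere the coarse pitch produces the lower-resolution background scan, approximating the toroid-type pattern described above.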
that is, if a virtual object presented to the eye contains a black or very dark portion, the ambient light from the real world may pass through that dark portion and obscure that it was intended to be dark. it is nonetheless desirable to be able to present a dark virtual object over a bright real background, and for that dark virtual object to appear to occupy a volume at a desired viewing distance; e.g., it is useful to create a "darkfield" representation of that dark virtual object, in which the absence of light is perceived to be located at a particular point in space. with regard to occlusion elements and the presentation of information to the eye of the user so that he or she can perceive darkfield aspects of virtual objects, even in well lighted actual environments, certain aspects of the aforementioned spatial light modulator, or "slm", configurations are pertinent. as described above, with a light-sensing system such as the eye, one approach for selective perception of dark field is to selectively attenuate light from such portions of the display. in other words, darkfield cannot be specifically projected - it's the lack of illumination that may be perceived as darkfield. the following discussion will present various configurations for selective attenuation of illumination. referring back to the discussion of slm configurations, one approach to selectively attenuate for a darkfield perception is to block all of the light coming from one angle, while allowing light from other angles to be transmitted. this may be accomplished with a plurality of slm planes comprising elements such as liquid crystal (which may not be the most optimal due to its relatively low transparency when in the transmitting state), dmd elements of dlp systems (which have relative high transmission/reflection ratios when in such mode), and mems arrays or shutters that are configured to controllably shutter or pass light radiation, as described above. 
with regard to suitable liquid crystal display ("lcd") configurations, a cholesteric lcd array may be utilized for a controlled occlusion/blocking array. as opposed to the conventional lcd paradigm wherein a polarization state is changed as a function of voltage, with a cholesteric lcd configuration, a pigment is being bound to the liquid crystal molecule, and then the molecule is physically tilted in response to an applied voltage. such a configuration may be designed to achieve greater transparency when in a transmissive mode than conventional lcd, and a stack of polarizing films may not be needed. in another embodiment, a plurality of layers of controllably interrupted patterns may be utilized to controllably block selected presentation of light using moiré effects. for example, in one configuration, two arrays of attenuation patterns, each of which may comprise, for example, fine-pitched sine waves printed or painted upon a transparent planar material such as a glass substrate, may be presented to the eye of a user at a distance close enough that when the viewer looks through either of the patterns alone, the view is essentially transparent, but if the viewer looks through both patterns lined up in sequence, the viewer will see a spatial beat frequency moiré attenuation pattern, even when the two attenuation patterns are placed in sequence relatively close to the eye of the user. the beat frequency is dependent upon the pitch of the patterns on the two attenuation planes, so in one embodiment, an attenuation pattern for selectively blocking certain light transmission for darkfield perception may be created using two sequential patterns, each of which otherwise would be transparent to the user, but which together in series create a spatial beat frequency moiré attenuation pattern selected to attenuate in accordance with the darkfield perception desired in the ar system. 
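the dependence of the beat frequency on the two pattern pitches noted above can be written down directly: overlaid sinusoidal patterns of pitches p1 and p2 beat at the difference of their spatial frequencies, giving a beat period of p1·p2/|p1 − p2|. a minimal sketch, with assumed pitch values:

```python
def moire_beat_period(p1, p2):
    """Beat period of two overlaid sinusoidal attenuation patterns with
    pitches p1 and p2: 1 / |1/p1 - 1/p2| = p1 * p2 / |p1 - p2|."""
    if p1 == p2:
        return float("inf")  # identical pitches: no beat pattern
    return (p1 * p2) / abs(p1 - p2)

# Two fine pitches (0.10 mm and 0.11 mm, assumed) that are individually
# near-invisible combine into a coarse, easily visible ~1.1 mm beat.
beat = moire_beat_period(0.10, 0.11)
print(round(beat, 3))
```

this is why each attenuation plane looks essentially transparent on its own while the pair in series produces the spatial beat frequency moiré attenuation pattern described above.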
in another embodiment a controlled occlusion paradigm for darkfield effect may be created using a multi-view display style occluder. for example, one configuration may comprise one pin-holed layer that fully occludes with the exception of small apertures or pinholes, along with a selective attenuation layer in series, which may comprise an lcd, dlp system, or other selective attenuation layer configuration, such as those described above. in one scenario, with the pinhole array placed at a typical eyeglasses lens distance from the cornea (about 30mm), and with a selective attenuation panel located opposite the pinhole array from the eye, a perception of a sharp mechanical edge out in space may be created. in essence, if the configuration will allow certain angles of light to pass, and others to be blocked or occluded, then a perception of a very sharp pattern, such as a sharp edge projection, may be created. in another related embodiment, the pinhole array layer may be replaced with a second dynamic attenuation layer to provide a somewhat similar configuration, but with more controls than the static pinhole array layer (the static pinhole layer could be simulated, but need not be). in another related embodiment, the pinholes may be replaced with cylindrical lenses. the same pattern of occlusion as in the pinhole array layer configuration may be achieved, but with cylindrical lenses, the array is not restricted to the very tiny pinhole geometries. to prevent the eye from being presented with distortions due to the lenses when viewing through to the real world, a second lens array may be added on the side of the aperture or lens array opposite of the side nearest the eye to compensate and provide the view-through illumination with basically a zero power telescope configuration. in another embodiment, rather than physically blocking light for occlusion and creation of darkfield perception, the light may be bent or redirected.
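the angle-selective blocking of the pinhole-plus-attenuator configuration above reduces to simple geometry: darkening an attenuator pixel offset laterally from a pinhole occludes rays arriving at angle atan(offset/separation). the 5mm pinhole-to-panel separation below is an assumption (the text only gives the ~30mm pinhole-to-cornea distance):

```python
import math

def occluded_angle_deg(attenuator_offset_mm, layer_separation_mm):
    """Ray angle (from the pinhole axis) blocked by darkening an attenuator
    pixel offset laterally from the pinhole, with the attenuation panel
    layer_separation_mm behind the pinhole layer."""
    return math.degrees(math.atan2(attenuator_offset_mm, layer_separation_mm))

# With an assumed 5 mm pinhole-to-panel separation, each mm of lateral
# pixel offset selects a distinct incoming ray angle to occlude.
angles = [occluded_angle_deg(x, 5.0) for x in (0.0, 1.0, 2.0)]
print([round(a, 1) for a in angles])
```

because each attenuator pixel maps to one ray direction through the pinhole, switching pixels on and off carves sharp angular boundaries, which is the sharp mechanical edge perception described above.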
or, a polarization of the light may be changed if a liquid crystal layer is utilized. for example, in one variation, each liquid crystal layer may act as a polarization rotator such that if a patterned polarizing material is incorporated on one face of a panel, then the polarization of individual rays coming from the real world may be selectively manipulated so they catch a portion of the patterned polarizer. there are polarizers known in the art that have checkerboard patterns wherein half of the "checker boxes" have vertical polarization and the other half have horizontal polarization. in addition, if a material such as liquid crystal is used in which polarization may be selectively manipulated, light may be selectively attenuated in this manner. as described above, selective reflectors may provide greater transmission efficiency than an lcd. in one embodiment, if a lens system is placed such that light coming in from the real world is focused on an image plane, and if a dmd (e.g., dlp technology) is placed at that image plane to reflect light when in an "on" state towards another set of lenses that pass the light to the eye, and those lenses also have the dmd at their focal length, then an attenuation pattern that is in focus for the eye may be created. in other words, dmds may be used in a selective reflector plane in a zero magnification telescope configuration, such as is shown in fig. 19a , to controllably occlude and facilitate creating darkfield perception. as shown in fig. 19a , a lens (518) is taking light from the real world 144 and focusing it down to an image plane 520. if a dmd (or other spatial attenuation device) 522 is placed at the focal length of the lens (e.g., at the image plane 520), the lens 518 utilizes the light coming from optical infinity and focuses it onto the image plane 520. then the spatial attenuator 522 may be utilized to selectively block out content that is to be attenuated. fig.
19a shows the attenuator dmds in the transmissive mode wherein they pass the beams shown crossing the device. the image is then placed at the focal length of the second lens 524. preferably the two lenses (518, 524) have the same focal power such that the light from the real world 144 is not magnified. such a configuration may be used to present unmagnified views of the world while also allowing selective blocking/attenuation of certain pixels. in another embodiment, as shown in figs. 19b and 19c , additional dmds may be added such that light reflects from each of four dmds (526, 528, 530, 532) before passing to the eye. fig. 19b shows an embodiment with two lenses preferably with the same focal power (focal length "f") placed at a 2f relationship from one another (the focal length of the first being conjugate to the focal length of the second) to have the zero-power telescope effect; fig. 19c shows an embodiment without lenses. the angles of orientation of the four reflective panels (526, 528, 530, 532) in the depicted embodiments of figs. 19b and 19c are shown to be around 45 degrees for simple illustration purposes, but a specific relative orientation may be required (for example, a typical dmd reflects at about a 12 degree angle) in one or more embodiments. in another embodiment, the panels may also be ferroelectric, or may be any other kind of reflective or selective attenuator panel or array. in one embodiment similar to those depicted in figs. 19b and 19c , one of the four reflector arrays may be a simple mirror, such that the other three are selective attenuators, thus still providing three independent planes to controllably occlude portions of the incoming illumination in furtherance of darkfield perception. by having multiple dynamic reflective attenuators in series, masks at different optical distances relative to the real world may be created. alternatively, referring back to fig.
19c , one may create a configuration wherein one or more dmds are placed in a reflective periscope configuration without any lenses. such a configuration may be driven with lightfield algorithms to selectively attenuate certain rays while others are passed. in another embodiment, a dmd or similar matrix of controllably movable devices may be created upon a transparent substrate as opposed to a generally opaque substrate, for use in a transmissive configuration such as virtual reality. in another embodiment, two lcd panels may be utilized as lightfield occluders. in one variation, the two lcd panels may be considered attenuators due to their attenuating capability as described above. alternatively, they may be considered polarization rotators with a shared polarizer stack. suitable lcds may comprise components such as blue phase liquid crystal, cholesteric liquid crystal, ferroelectric liquid crystal, and/or twisted nematic liquid crystal. one embodiment may comprise an array of directionally-selective occlusion elements, such as a mems device featuring a set of louvers that can change rotation such that the majority of light that is coming from a particular angle is passed, but in a manner such that a broad face is presented to light that is coming from a different angle. this is somewhat similar to the manner in which plantation shutters may be utilized with a typical human scale window. the mems/louvers configuration may be placed upon an optically transparent substrate, with the louvers substantially opaque. ideally such a configuration would comprise a louver pitch fine enough to selectively occlude light on a pixel-by-pixel basis. in another embodiment, two or more layers or stacks of louvers may be combined to provide further controls. in another embodiment, rather than selectively blocking light, the louvers may be polarizers configured to change the polarization state of light on a controllably variable basis.
as described above, another embodiment for selective occlusion may comprise an array of sliding panels in a mems device such that the sliding panels may be controllably opened (e.g., by sliding in a planar fashion from a first position to a second position; or by rotating from a first orientation to a second orientation; or, for example, combined rotational reorientation and displacement) to transmit light through a small frame or aperture, and controllably closed to occlude the frame or aperture and prevent transmission. the array may be configured to open or occlude the various frames or apertures such that rays that are to be attenuated are maximally attenuated, and rays that are to be transmitted are only minimally attenuated. in an embodiment in which a fixed number of sliding panels can either occupy a first position occluding a first aperture and opening a second aperture, or a second position occluding the second aperture and opening the first aperture, there may always be the same amount of light transmitted overall (because 50% of the apertures are occluded, and the other 50% are open, with such a configuration), but the local position changes of the shutters or doors may create targeted moiré or other effects for darkfield perception with the dynamic positioning of the various sliding panels. in one embodiment, the sliding panels may comprise sliding polarizers. if the sliding panels are placed in a stacked configuration with other polarizing elements, the panels may be either static or dynamic, and may be utilized to selectively attenuate. referring to fig. 19d , another configuration providing an opportunity for selective reflection, such as via a dmd style reflector array (534), is shown, such that a stacked set of two waveguides (536, 538) along with a pair of focus elements (540, 542) and a reflector (534; such as a dmd) may be used to capture a portion of incoming light with an entrance reflector (544).
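the constant-overall-transmission property of the sliding-panel occluder described above can be checked with a toy model (an illustrative assumption, not the patent's implementation): each panel covers exactly one of its two apertures, so global transmission stays at 50% while the local open/closed layout changes.

```python
# toy model (names are assumptions): each mems panel occludes exactly
# one of its two apertures, so per panel one aperture is open and one is
# closed. global transmission therefore stays at 50% regardless of the
# pattern, while the local layout changes for targeted moire effects.

def aperture_states(panel_positions):
    """panel_positions: list of 0/1; 0 -> first aperture occluded,
    1 -> second aperture occluded. returns a flat list of aperture
    open-flags, two apertures per panel."""
    states = []
    for p in panel_positions:
        states.extend([p == 1, p == 0])  # exactly one open per panel
    return states

open_a = aperture_states([0, 0, 1, 1])
open_b = aperture_states([1, 0, 1, 0])
# different local patterns, identical overall transmission:
assert sum(open_a) == sum(open_b) == 4
```

the darkfield effect in this scheme therefore comes entirely from where the open apertures sit, not from how many of them are open.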
the reflected light may be totally internally reflected down the length of the first waveguide (536), into a focusing element (540) to bring the light into focus on a reflector (534) such as a dmd array. the dmd may selectively attenuate and reflect a portion of the light back through a focusing lens (542; the lens configured to facilitate injection of the light back into the second waveguide) and into the second waveguide (538) for total internal reflection down to an exit reflector (546) configured to exit the light out of the waveguide and toward the eye 58. such a configuration may have a relatively thin form factor, and may be designed to allow light from the real world 144 to be selectively attenuated. as waveguides work most cleanly with collimated light, such a configuration may be well suited for virtual reality configurations wherein focal lengths are in the range of optical infinity. for closer focal lengths, a lightfield display may be used as a layer on top of the silhouette created by the aforementioned selective attenuation / darkfield configuration to provide other cues to the eye of the user that light is coming from another focal distance. in another embodiment, an occlusion mask may be out of focus, even undesirably so. in yet another embodiment, a lightfield on top of the masking layer may be used such that the user does not detect that the darkfield may be at a wrong focal distance. referring to fig. 19e , an embodiment is shown featuring two waveguides (552, 554) each having two angled reflectors (558, 544 and 556, 546) for illustrative purposes shown at approximately 45 degrees. it should be appreciated that in actual configurations, the angle may differ depending upon the reflective surface, reflective/refractive properties of the waveguides, etc.
the angled reflectors direct a portion of light incoming from the real world down each side of a first waveguide (or down two separate waveguides if the top layer is not monolithic) such that it hits a reflector (548, 550) at each end, such as a dmd which may be used for selective attenuation. the reflected light may be injected back into the second waveguide (or into two separate waveguides if the bottom layer is not monolithic) and back toward two angled reflectors (again, they need not be at 45 degrees as shown) for exit out toward the eye 58. focusing lenses may also be placed between the reflectors at each end and the waveguides. in another embodiment the reflectors (548, 550) at each end may comprise standard mirrors (such as aluminized mirrors). further, the reflectors may be wavelength selective reflectors, such as dichroic mirrors or film interference filters. further, the reflectors may be diffractive elements configured to reflect incoming light. fig. 19f illustrates a configuration in which four reflective surfaces in a pyramid type configuration are utilized to direct light through two waveguides (560, 562), in which incoming light from the real world may be divided up and reflected to four different axes. the pyramid-shaped reflector (564) may have more than four facets, and may be resident within the substrate prism, as with the reflectors of the configuration of fig. 19e . the configuration of fig. 19f is an extension of that of fig. 19e . referring to fig. 19g , a single waveguide (566) may be utilized to capture light from the world 144 with one or more reflective surfaces (574, 576, 578, 580, 582), relay it 570 to a selective attenuator (568; such as a dmd array), and recouple it back into the same waveguide such that it propagates 572 and encounters one or more other reflective surfaces (584, 586, 588, 590, 592) that cause it to at least partially exit (594) the waveguide on a path toward the eye 58 of the user.
preferably the waveguide comprises selective reflectors such that one group (574, 576, 578, 580, 582) may be switched on to capture incoming light and direct it down to the selective attenuator, while another separate group (584, 586, 588, 590, 592) may be switched on to exit light returning from the selective attenuator out toward the eye 58. for simplicity the selective attenuator is shown oriented substantially perpendicularly to the waveguide; in other embodiments, various optics components, such as refractive or reflective optics, may be utilized to place the selective attenuator at a different and more compact orientation relative to the waveguide. referring to fig. 19h , a variation on the configuration described in reference to fig. 19d is illustrated. this configuration is somewhat analogous to that discussed above in reference to fig. 5b , wherein a switchable array of reflectors may be embedded within each of a pair of waveguides (602, 604). referring to fig. 19h , a controller may be configured to turn the reflectors (598, 600) on and off in sequence, such that multiple reflectors are operated on a frame sequential basis. then the dmd or other selective attenuator (594) may also be sequentially driven in sync with the different mirrors being turned on and off. referring to fig. 19i , a pair of wedge-shaped waveguides similar to those described above (for example, in reference to figs. 15a-15c ) are shown in side or sectional view to illustrate that the two long surfaces of each wedge-shaped waveguide (610, 612) are not co-planar. a "turning film" (606, 608; such as that available from 3m corporation under the trade name, "traf", which in essence comprises a microprism array), may be utilized on one or more surfaces of the wedge-shaped waveguides to either turn incoming rays at an angle such that the rays will be captured by total internal reflection, or to redirect outgoing rays exiting the waveguide toward an eye or other target.
incoming rays are directed down the first wedge and toward the selective attenuator 614, such as a dmd, an lcd (such as a ferroelectric lcd), or an lcd stack acting as a mask. after the selective attenuator (614), reflected light is coupled back into the second wedge-shaped waveguide which then relays the light by total internal reflection along the wedge. the properties of the wedge-shaped waveguide are intentionally such that each bounce of light causes an angle change. the point at which the angle has changed enough to be the critical angle to escape total internal reflection becomes the exit point from the wedge-shaped waveguide. typically the exit will be at an oblique angle. therefore, another layer of turning film may be used to "turn" the exiting light toward a targeted object such as the eye 58. referring to fig. 19j , several arcuate lenslet arrays (616, 620, 622) are positioned relative to an eye and configured such that a spatial attenuator array 618 is positioned at a focal/image plane such that it may be in focus with the eye 58. the first 616 and second 620 arrays are configured such that in the aggregate, light passing from the real world to the eye is essentially passed through a zero power telescope. the embodiment of fig. 19j shows a third array 622 of lenslets which may be utilized for improved optical compensation, but the general case does not require such a third layer. as discussed above, utilizing telescopic lenses that possess the diameter of the viewing optic may create an undesirably large form factor (somewhat akin to having a bunch of small sets of binoculars in front of the eyes). one way to optimize the overall geometry is to reduce the diameter of the lenses by splitting them out into smaller lenslets, as shown in fig. 19j (e.g., an array of lenses rather than one single large lens).
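the zero-power telescope relay underlying the configurations of figs. 19a and 19j reduces to a simple relation; a minimal sketch (focal lengths are illustrative): an afocal pair of thin lenses separated by the sum of their focal lengths has angular magnification m = -f2/f1, so matched focal lengths pass the world through at unit magnification while the shared image plane hosts the spatial attenuator.

```python
# minimal sketch (focal lengths illustrative): an afocal pair of thin
# lenses separated by f1 + f2 has angular magnification m = -f2 / f1.

def afocal_magnification(f1_mm, f2_mm):
    return -f2_mm / f1_mm

# matched focal lengths give |m| = 1: the real world reaches the eye
# unmagnified, while the spatial attenuator at the shared image plane
# blocks selected pixels in focus.
print(afocal_magnification(25.0, 25.0))  # -1.0
```

splitting one large lens pair into a lenslet array, as in fig. 19j , keeps this per-channel relation while shrinking the overall form factor.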
the lenslet arrays (616, 620, 622) are shown wrapped radially or arcuately around the eye 58 to ensure that beams incoming to the pupil are aligned through the appropriate lenslets (else the system may suffer from optical problems such as dispersion, aliasing, and/or lack of focus). thus all of the lenslets are oriented "toed in" and pointed at the pupil of the eye 58, and the system facilitates avoidance of scenarios wherein rays are propagated through unintended sets of lenses en route to the pupil. referring to figs. 19k-19n , various software approaches may be utilized to assist in the presentation of darkfield in a virtual or augmented reality display scenario. referring to fig. 19k , a typical challenging scenario for augmented reality is depicted 632, with a textured carpet 624 and non-uniform background architectural features 626, both of which are lightly-colored. the black box 628 depicted indicates the region of the display in which one or more augmented reality features are to be presented to the user for three-dimensional perception, and in the black box a robot creature 630 is being presented that may, for example, be part of an augmented reality game in which the user is engaged. in the depicted example, the robot character 630 is darkly-colored, which makes for a challenging presentation in three-dimensional perception, particularly with the background selected for this example scenario. as discussed briefly above, one of the main challenges for presenting a darkfield augmented reality object is that the system generally cannot add or paint in "darkness"; generally the display is configured to add light. thus, referring to fig.
19l , without any specialized software treatments to enhance darkfield perception, presentation of the robot character in the augmented reality view results in a scene wherein portions of the robot character that are to be essentially flat black in presentation are not visible, and portions of the robot character that are to have some lighting (such as the lightly-pigmented cover of the shoulder gun of the robot character) are only barely visible (634). these portions may appear almost like a light grayscale disruption to an otherwise normal background image. referring to fig. 19m , using a software-based global attenuation treatment (akin to digitally putting on a pair of sunglasses) provides enhanced visibility to the robot character because the brightness of the nearly black robot character is effectively increased relative to the rest of the space, which now appears more dark 640. also shown in fig. 19m is a digitally-added light halo 636 which may be added to enhance and distinguish the now-more-visible robot character shapes 638 from the background. with the halo treatment, even the portions of the robot character that are to be presented as flat black become visible with the contrast to the white halo, or "aura", presented around the robot character. preferably the halo may be presented to the user with a perceived focal distance that is behind the focal distance of the robot character in three-dimensional space. in a configuration wherein single panel occlusion techniques such as those described above are being utilized to present darkfield, the light halo may be presented with an intensity gradient to match the dark halo that may accompany the occlusion, minimizing the visibility of either darkfield effect. further, the halo may be presented with blurring of the background behind the presented halo illumination for further distinguishing effect.
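the global attenuation plus halo treatment can be sketched per pixel (the function name, parameter names, and values are assumptions; a display can only add light, so relative darkness is achieved by dimming everything else):

```python
# hypothetical per-pixel sketch of the software darkfield treatment:
# globally attenuate the view ("digital sunglasses"), then add a halo
# whose intensity falls off with distance from the dark character's
# silhouette. all parameter values are illustrative assumptions.

def treated_pixel(background, halo_dist_px, attenuation=0.4,
                  halo_peak=0.8, halo_falloff=0.1):
    """background luminance in [0, 1]; halo_dist_px = distance in
    pixels from the character silhouette."""
    darkened = background * attenuation                        # global dimming
    halo = halo_peak * max(0.0, 1.0 - halo_falloff * halo_dist_px)
    return min(1.0, darkened + halo)                           # can only add light

# far from the character the scene is merely dimmed; at the silhouette
# the additive halo dominates, so even flat-black content reads against it:
print(round(treated_pixel(0.9, 50.0), 2))  # 0.36 (dimmed background)
print(round(treated_pixel(0.9, 0.0), 2))   # 1.0 (clipped bright halo)
```

the falloff parameter plays the role of the intensity gradient mentioned above: it can be tuned so the light halo tapers to match any dark halo produced by a single-panel occluder.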
a more subtle aura or halo effect may be created by matching, at least in part, the color and/or brightness of a relatively light-colored background. referring to fig. 19n , some or all of the black intonations of the robot character may be changed to dark, cool blue colors to provide a further distinguishing effect relative to the background, and relatively good visualization of the robot 642. wedge-shaped waveguides have been described above, such as in reference to figs. 15a-15d and fig. 19i . a key aspect of wedge-shaped waveguides is that every time a ray bounces off of one of the non-coplanar surfaces, a change in the angle is created, which ultimately results in the ray exiting total internal reflection when its approach angle to one of the surfaces is greater than the critical angle. turning films may be used to redirect exiting light so that exiting beams leave with a trajectory that is more or less perpendicular to the exit surface, depending upon the geometric and ergonomic issues at play. with a series or array of displays injecting image information into a wedge-shaped waveguide, as shown in fig. 15c , for example, the wedge-shaped waveguide may be configured to create a fine-pitched array of angle-biased rays emerging from the wedge. somewhat similarly, it has been discussed above that a lightfield display, or a variable wavefront creating waveguide, both may produce a multiplicity of beamlets or beams to represent a single pixel in space such that wherever the eye is positioned, the eye is hit by a plurality of different beamlets or beams that are unique to that particular eye position in front of the display panel.
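the bounce-by-bounce angle change in a wedge-shaped waveguide can be sketched as follows (a simplified ray model with assumed parameters: each reflection between the two non-coplanar faces shifts the incidence angle, here measured from the surface normal, by twice the wedge angle, and the ray escapes once it no longer satisfies total internal reflection):

```python
import math

# simplified ray model (all parameters assumed): each bounce between the
# two non-coplanar faces of a wedge-shaped waveguide reduces the
# incidence angle (measured from the surface normal) by twice the wedge
# angle; the ray exits where it stops satisfying total internal reflection.

def bounces_until_escape(entry_angle_deg, wedge_angle_deg,
                         n_core=1.5, n_clad=1.0):
    critical_deg = math.degrees(math.asin(n_clad / n_core))  # ~41.8 for glass/air
    angle, bounces = entry_angle_deg, 0
    while angle > critical_deg:
        bounces += 1
        angle -= 2.0 * wedge_angle_deg
    return bounces

# entering at 55 degrees in an n = 1.5 wedge with a 1-degree wedge angle,
# the ray survives 7 bounces before reaching its exit point:
print(bounces_until_escape(55.0, 1.0))  # 7
```

because the bounce count, and hence the exit position along the wedge, depends on the entry angle, an array of displays injecting at slightly different angles produces the fine-pitched array of angle-biased exit rays described above.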
as was further discussed above in the context of lightfield displays, a plurality of viewing zones may be created within a given pupil, and each may be used for a different focal distance, with the aggregate producing a perception similar to that of a variable wavefront creating waveguide, or similar to the actual optical physics of reality if the objects viewed were real. thus a wedge-shaped waveguide with multiple displays may be utilized to generate a lightfield. in an embodiment similar to that of fig. 15c with a linear array of displays injecting image information, a fan of exiting rays is created for each pixel. this concept may be extended in an embodiment wherein multiple linear arrays are stacked to all inject image information into the wedge-shaped waveguide (in one variation, one array may inject at one angle relative to the wedge-shaped waveguide face, while the second array may inject at a second angle relative to the wedge-shaped waveguide face), in which case exit beams fan out at two different axes from the wedge. thus such a configuration may be utilized to produce pluralities of beams spraying out at a plurality of different angles, and each beam may be driven separately due to the fact that under such a configuration, each beam is driven using a separate display. in another embodiment, one or more arrays or displays may be configured to inject image information into the wedge-shaped waveguide through sides or faces other than that shown in fig. 15c , such as by using a diffractive optic to bend injected image information into a total internal reflection configuration relative to the wedge-shaped waveguide. various reflectors or reflecting surfaces may also be utilized in concert with such a wedge-shaped waveguide embodiment to out-couple and manage light from the wedge-shaped waveguide.
in one embodiment, an entrance aperture to a wedge-shaped waveguide, or injection of image information through a face different from that shown in fig. 15c , may be utilized to facilitate staggering (geometric and/or temporal) of different displays and arrays such that a z-axis delta may also be developed as a means for injecting three-dimensional information into the wedge-shaped waveguide. for a greater-than-three-dimensions array configuration, various displays may be configured to enter a wedge-shaped waveguide at multiple edges in multiple stacks with staggering to get higher dimensional configurations. referring to fig. 20a , a configuration similar to that depicted in fig. 8h is shown wherein a waveguide 646 has a diffractive optical element (648; or "doe", as noted above) sandwiched in the middle (alternatively, as described above, the diffractive optical element may reside on the front or back face of the depicted waveguide). a ray may enter the waveguide 646 from the projector or display 644. once in the waveguide 646, each time the ray intersects the doe 648, part of the ray is exited out of the waveguide 646. as described above, the doe may be designed such that the exit illuminance across the length of the waveguide 646 is somewhat uniform. for example, the first such doe intersection may be configured to exit about 10% of the light. then, the second doe intersection may be configured to exit about 10% of the remaining light such that 81% is passed on, and so on. in another embodiment, a doe may be designed to comprise a variable diffraction efficiency, such as a diffraction efficiency that increases along the direction of propagation, to map out a more uniform exit illuminance across the length of the waveguide.
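the uniform-exit-illuminance design above can be made concrete; in this sketch (illustrative, not the patent's exact design), exiting an equal absolute fraction of the injected power at each of n intersections requires the diffraction efficiency at the k-th intersection to be 1/(n - k + 1), growing along the guide, while the constant-efficiency case reproduces the 10%/81% numbers in the text.

```python
# illustrative sketch, not the patent's exact design: for uniform exit
# illuminance from n doe intersections, each should exit an equal
# absolute fraction of the injected power, which requires the diffraction
# efficiency at the k-th intersection (1-indexed) to be 1 / (n - k + 1).

def uniform_exit_efficiencies(n):
    return [1.0 / (n - k + 1) for k in range(1, n + 1)]

def exit_powers(efficiencies, injected=1.0):
    remaining, out = injected, []
    for eff in efficiencies:
        out.append(remaining * eff)
        remaining -= remaining * eff
    return out

# five intersections, each exiting 20% of the injected power:
print([round(p, 3) for p in exit_powers(uniform_exit_efficiencies(5))])

# the constant-efficiency case from the text: two 10% intersections exit
# 10% then 9% of the injected light, passing 81% onward.
print([round(p, 3) for p in exit_powers([0.1, 0.1])])
```

the comparison makes the tradeoff visible: a constant low efficiency is friendlier to see-through transparency but tapers in brightness, while a graded efficiency equalizes the exit pupil illuminance.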
to further distribute remaining light that reaches an end (and in one embodiment to allow for selection of a relatively low diffraction efficiency doe, which would be favorable from a view-to-the-world transparency perspective), a reflective element (650) at one or both ends may be included. further, referring to the embodiment of fig. 20b , additional distribution and preservation may be achieved by including an elongate reflector 652 across the length of the waveguide as shown (comprising, for example, a thin film dichroic coating that is wavelength-selective); preferably such a reflector would block light that is accidentally reflected upward (back toward the real world 144) and would otherwise exit without being utilized by the viewer. in some embodiments, such an elongate reflector may contribute to a "ghosting" effect perception by the user. in one embodiment, this ghosting effect may be eliminated by having a dual-waveguide (646, 654) circulating reflection configuration, such as that shown in fig. 20c , which is designed to keep the light moving around until it has been exited toward the eye 58 in a preferably substantially equally distributed manner across the length of the waveguide assembly. referring to fig. 20c , light may be injected with a projector or display 644, and as it travels across the doe 656 of the first waveguide 654, it ejects a preferably substantially uniform pattern of light out toward the eye 58. light that remains in the first waveguide is reflected by a first reflector assembly 660 into the second waveguide 646. in one embodiment, the second waveguide 646 may be configured to not have a doe, such that it merely transports or recycles the remaining light back to the first waveguide, using the second reflector assembly. in another embodiment (as shown in fig.
20c ) the second waveguide 646 may also have a doe 648 configured to uniformly eject fractions of travelling light to provide a second plane of focus for three-dimensional perception. unlike the configurations of figs. 20a and 20b , the configuration of fig. 20c is designed for light to travel the waveguide in one direction, which avoids the aforementioned ghosting problem that is related to passing light backwards through a waveguide with a doe. referring to fig. 20d , rather than including a mirror or box style reflector assembly 660 at the ends of a waveguide for recycling the light, an array of smaller retro-reflectors 662, or a retro-reflective material, may be utilized. referring to fig. 20e , an embodiment is shown that utilizes some of the light recycling configurations of the embodiment of fig. 20c to "snake" the light down through a waveguide 646 having a sandwiched doe 648 after it has been injected with a display or projector 644 such that it crosses the waveguide 646 multiple times back and forth before reaching the bottom, at which point it may be recycled back up to the top level for further recycling. such a configuration not only recycles the light and facilitates use of relatively low diffraction efficiency doe elements for exiting light toward the eye 58, but also distributes the light, to provide for a large exit pupil configuration akin to that described in reference to fig. 8k . referring to fig. 20f , an illustrative configuration similar to that of fig. 5a is shown, with incoming light injected along a conventional prism or beamsplitter substrate 104 to a reflector 102 without total internal reflection (e.g., without the prism being considered a waveguide) because the input projection 106, scanning or otherwise, is kept within the bounds of the prism. this means that the geometry of such a prism becomes a significant constraint. in another embodiment, a waveguide may be utilized in place of the simple prism of fig.
20f , which facilitates the use of total internal reflection to provide more geometric flexibility. other configurations described above are configured to benefit from the inclusion of waveguides for similar manipulations of light. for example, referring back to fig. 7a , the general concept illustrated therein is that a collimated image injected into a waveguide may be refocused before transfer out toward an eye, in a configuration also designed to facilitate viewing light from the real world. in place of the refractive lens shown in fig. 7a , a diffractive optical element may be used as a variable focus element. referring back to fig. 7b , another waveguide configuration is illustrated in the context of having multiple layers stacked upon each other with controllable access toggling between a smaller path (total internal reflection through a waveguide) and a larger path (total internal reflection through a hybrid waveguide comprising the original waveguide and a liquid crystal isolated region with the liquid crystal switched to a mode wherein the refractive indices are substantially matched between the main waveguide and the auxiliary waveguide). this allows the controller to be able to tune which path is being taken on a frame-by-frame basis. high-speed switching electro-active materials, such as lithium niobate, facilitate path changes with such a configuration at large rates (e.g., on the order of ghz), which allows one to change the path of light on a pixel-by-pixel basis. referring back to fig. 8a , a stack of waveguides paired with weak lenses is illustrated to demonstrate a multifocal configuration wherein the lens and waveguide elements may be static. each pair of waveguide and lens may be functionally replaced with a waveguide having an embedded doe element (which may be static, in a closer analogy to the configuration of fig. 8a , or dynamic), such as that described in reference to fig. 8i . referring to fig.
20g , if a transparent prism or block 104 (e.g., not a waveguide) is utilized to hold a mirror or reflector 102 in a periscope type of configuration to receive light from other components, such as a lens 662 and projector or display 644, the field of view is limited by the size of that reflector 102. it should be appreciated that the bigger the reflector, the wider the field of view. thus to accommodate a larger field of view with such a configuration, a thicker substrate may be needed to hold a larger reflector. otherwise, the functionality of an aggregated plurality of reflectors may be utilized to increase the functional field of view, as described in figs. 8o , 8p , and 8q . referring to fig. 20h , a stack 664 of planar waveguides 666, each fed with a display or projector (644; or in another embodiment a multiplexing of a single display) and having an exit reflector 668, may be utilized to aggregate toward the function of a larger single reflector. the exit reflectors may be at the same angle in some cases, or not the same angle in other cases, depending upon the positioning of the eye 58 relative to the assembly. fig. 20i illustrates a related configuration, in which the reflectors (680, 682, 684, 686, 688) in each of the planar waveguides (670, 672, 674, 676, 678) have been offset from each other. each waveguide receives light from a projector or display 644 which may be sent through a lens 690 to ultimately transmit exiting light to the pupil 45 of the eye 58 by virtue of the reflectors (680, 682, 684, 686, 688) in each of the planar waveguides (670, 672, 674, 676, 678). if one can create the total range of all of the angles that would be expected to be seen in the scene (e.g., preferably without blind spots in the key field of view), then a useful field of view has been achieved. as described above, the eye 58 functions based at least in part on the angle at which light rays enter the eye. this may be advantageously simulated.
the rays need not pass through the exact same point in space at the pupil - rather the light rays just need to get through the pupil and be sensed by the retina. fig. 20k illustrates a variation 692 wherein the shaded portion of the optical assembly may be utilized as a compensating lens to functionally pass light from the real world 144 through the assembly as though it has been passed through a zero power telescope. referring to fig. 20j , each of the aforementioned rays may also be a relatively wide beam that is being reflected through the pertinent waveguide (670, 672) by total internal reflection. the reflector (680, 682) facet size will determine the width of the exiting beam. referring to fig. 20l , a further discretization of the reflector is shown, wherein a plurality of small straight angular reflectors may form a roughly parabolic reflecting surface 694 in the aggregate through a waveguide or stack thereof 696. light coming in from the displays (644; or a single muxed display, for example), such as through a lens 690, is all directed toward the same shared focal point at the pupil 45 of the eye 58. referring back to fig. 13m , a linear array of displays 378 injects light into a shared waveguide 376. in another embodiment a single display may be multiplexed to a series of entry lenses to provide similar functionality as the embodiment of fig. 13m , with the entry lenses creating parallel paths of rays running through the waveguide. in a conventional waveguide approach wherein total internal reflection is relied upon for light propagation, the field of view is restricted because there is only a certain angular range of rays propagating through the waveguide (others may escape out).
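the restricted angular range mentioned above follows from the total internal reflection condition; a minimal sketch (indices assumed): only rays whose internal incidence angle, measured from the face normal, lies between the critical angle asin(n_clad / n_core) and grazing incidence remain guided.

```python
import math

# minimal sketch (indices assumed): only rays whose internal incidence
# angle, measured from the face normal, lies between the critical angle
# and grazing incidence are held by total internal reflection; rays
# below the critical angle escape, which bounds the usable field of view.

def tir_angular_range_deg(n_core, n_clad=1.0):
    """returns (critical_angle, grazing_limit) in degrees."""
    critical = math.degrees(math.asin(n_clad / n_core))
    return critical, 90.0

crit, grazing = tir_angular_range_deg(1.5)
print(round(crit, 1))  # ~41.8 for glass in air: a band of roughly 48 degrees
```

a higher-index core widens this band, which is one reason wavelength- or angle-selective end coatings, as described next, are attractive for functionally extending the propagation range.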
in one embodiment, if a red/green/blue (or "rgb") laserline reflector is placed at one or both ends of the planar surfaces, akin to a thin film interference filter that is highly reflective for only certain wavelengths and poorly reflective for other wavelengths, then one can functionally increase the range of angles of light propagation. windows (without the coating) may be provided for allowing light to exit in predetermined locations. further, the coating may be selected to have a directional selectivity (somewhat like reflective elements that are only highly reflective for certain angles of incidence). such a coating may be most relevant for the larger planes/sides of a waveguide. referring back to fig. 13e , a variation on a scanning fiber display was discussed, which may be deemed a scanning thin waveguide configuration, such that a plurality of very thin planar waveguides 358 may be oscillated or vibrated such that if a variety of injected beams is coming through with total internal reflection, the configuration functionally would provide a linear array of beams escaping out of the edges of the vibrating elements 358. the depicted configuration has approximately five externally-projecting planar waveguide portions 358 in a host medium or substrate 356 that is transparent, but which preferably has a different refractive index so that the light will stay in total internal reflection within each of the substrate-bound smaller waveguides that ultimately feed (in the depicted embodiment there is a 90 degree turn in each path at which point a planar, curved, or other reflector may be utilized to transmit the light outward) the externally-projecting planar waveguide portions 358. the externally-projecting planar waveguide portions 358 may be vibrated individually, or as a group along with oscillatory motion of the substrate 356.
such scanning motion may provide horizontal scanning, and for vertical scanning, the input 360 aspect of the assembly (e.g., such as one or more scanning fiber displays scanning in the vertical axis) may be utilized. thus a variation of the scanning fiber display is presented. referring back to fig. 13h , a waveguide 370 may be utilized to create a lightfield. with waveguides working best with collimated beams that may be associated with optical infinity from a perception perspective, all beams staying in focus may cause perception discomfort (e.g., the eye will not make a discernible difference in dioptric blur as a function of accommodation; in other words, the narrow diameter, such as 0.5mm or less, collimated beamlets may open loop the eye's accommodation/vergence system, causing discomfort). in one embodiment, a single beam may be fed in with a number of cone beamlets coming out, but if the introduction vector of the entering beam is changed (e.g., laterally shift the beam injection location for the projector/display relative to the waveguide), one may control where the beam exits from the waveguide as it is directed toward the eye. thus one may use a waveguide to create a lightfield by creating a bunch of narrow diameter collimated beams, and such a configuration is not reliant upon a true variation in a light wavefront to be associated with the desired perception at the eye. if a set of angularly and laterally diverse beamlets is injected into a waveguide (for example, by using a multicore fiber and driving each core separately; another configuration may utilize a plurality of fiber scanners coming from different angles; another configuration may utilize a high-resolution panel display with a lenslet array on top of it), a number of exiting beamlets can be created at different exit angles and exit locations. since the waveguide may scramble the lightfield, the decoding is preferably predetermined. referring to figs. 
20m and 20n , a waveguide 646 assembly 696 is shown that comprises stacked waveguide components in the vertical or horizontal axis. rather than having one monolithic planar waveguide, the waveguide assembly 696 stacks a plurality of smaller waveguides 646 immediately adjacent each other such that light introduced into one waveguide, in addition to propagating down (e.g., propagating along a z axis with total internal reflection in +x,-x) such waveguide by total internal reflection, also totally internally reflects in the perpendicular axis (+y, -y) as well, such that it does not overflow into other areas. in other words, if total internal reflection is from left to right and back during z axis propagation, the configuration will be set up to totally internally reflect any light that hits the top or bottom sides as well. each layer may be driven separately without interference from other layers. each waveguide may have a doe 648 embedded and configured to eject out light with a predetermined distribution along the length of the waveguide, as described above, with a predetermined focal length configuration (shown in fig. 20m as ranging from 0.5 meters to optical infinity). in another variation, a very dense stack of waveguides with embedded does may be produced such that it spans the size of the anatomical pupil of the eye (e.g., such that multiple layers 698 of the composite waveguide may be required to cross the exit pupil, as illustrated in fig. 20n ). with such a configuration, one may feed a collimated image for one waveguide, with the portion located the next millimeter down producing a diverging wavefront that represents an object coming from a focal distance of, say, 15 meters away, and so on. the concept here is that an exit pupil is coming from a number of different waveguides as a result of the does and total internal reflection through the waveguides and across the does.
thus rather than creating one uniform exit pupil, such a configuration creates a plurality of stripes that, in the aggregate, facilitate the perception of different focal depths with the eye/brain. such a concept may be extended to configurations comprising a waveguide with a switchable/controllable embedded doe (e.g. that is switchable to different focal distances), such as those described in relation to figs. 8b-8n , which allows more efficient light trapping in the axis across each waveguide. multiple displays may be coupled into each of the layers, and each waveguide with doe would emit rays along its own length. in another embodiment, rather than relying on total internal reflection, a laserline reflector may be used to increase angular range. in between layers of the composite waveguide, a completely reflective metallized coating may be utilized, such as aluminum, to ensure total reflection, or alternatively dichroic style or narrow band reflectors may be utilized. referring to fig. 20o , the whole composite waveguide assembly 696 may be curved concavely toward the eye 58 such that each of the individual waveguides is directed toward the pupil. in other words, the configuration may be designed to more efficiently direct the light toward the location where the pupil is likely to be present. such a configuration also may be utilized to increase the field of view. as was discussed above in relation to figs. 8l , 8m , and 8n , a changeable diffraction configuration allows for scanning in one axis, somewhat akin to a scanning light display. fig. 21a illustrates a waveguide 698 having an embedded (e.g., sandwiched within) doe 700 with a linear grating term that may be changed to alter the exit angle of exiting light 702 from the waveguide, as shown. a high-frequency switching doe material such as lithium niobate may be utilized.
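the focal-depth stack described above (shown in fig. 20m as spanning 0.5 meters to optical infinity) is conveniently expressed in diopters, where optical infinity corresponds to zero power. a minimal sketch with a hypothetical set of planes:

```python
import math

def to_diopters(distance_m):
    """optical power of a virtual image plane; optical infinity -> 0 d."""
    return 0.0 if math.isinf(distance_m) else 1.0 / distance_m

# hypothetical focal-plane stack spanning 0.5 m to optical infinity
planes_m = [0.5, 1.0, 2.0, 4.0, float("inf")]
print([to_diopters(d) for d in planes_m])  # [2.0, 1.0, 0.5, 0.25, 0.0]
```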
in one embodiment, such a scanning configuration may be used as the sole mechanism for scanning a beam in one axis; in another embodiment, the scanning configuration may be combined with other scanning axes, and may be used to create a larger field of view. for example, if a normal field of view is 40 degrees, and by changing the linear diffraction pitch one can steer over another 40 degrees, the effective usable field of view for the system is 80 degrees. referring to fig. 21b , in a conventional configuration, a waveguide (708) may be placed perpendicular to a panel display 704, such as an lcd or oled panel, such that beams may be injected from the waveguide 708, through a lens 706, and into the panel 704 in a scanning configuration to provide a viewable display for television or other purposes. thus the waveguide may be utilized in such configuration as a scanning image source, in contrast to the configurations described in reference to fig. 21a , wherein a single beam of light may be manipulated by a scanning fiber or other element to sweep through different angular locations, and in addition, another direction may be scanned using the high-frequency diffractive optical element. in another embodiment, a uniaxial scanning fiber display (say scanning the fast line scan, as the scanning fiber is relatively high frequency) may be used to inject the fast line scan into the waveguide, and then the relatively slow doe switching (e.g., in the range of 100 hz) may be used to scan lines in the other axis to form an image. in another embodiment, a doe with a grating of fixed pitch may be combined with an adjacent layer of electro-active material having a dynamic refractive index (such as liquid crystal), such that light may be redirected into the grating at different angles. this is an application of the basic multipath configuration described above in reference to fig. 
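the field-of-view arithmetic above, together with the first-order grating relation that ties exit angle to pitch, can be sketched as follows (the wavelength and the two switchable pitches are hypothetical):

```python
import math

def first_order_angle_deg(wavelength_nm, pitch_nm):
    """grating equation at normal incidence: sin(theta) = wavelength / pitch."""
    return math.degrees(math.asin(wavelength_nm / pitch_nm))

lam = 532.0  # green, nm (hypothetical)
coarse = first_order_angle_deg(lam, 1600.0)  # coarse-pitch state
fine = first_order_angle_deg(lam, 800.0)     # fine-pitch state steers further
print(round(coarse, 1), round(fine, 1), round(fine - coarse, 1))

# per the text, the steering range adds to the native field of view:
print(40.0 + 40.0)  # 80.0
```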
7b , in which an electro-active layer comprising an electro-active material such as liquid crystal or lithium niobate may change its refractive index such that it changes the angle at which a ray emerges from the waveguide. a linear diffraction grating may be added to the configuration of fig. 7b (in one embodiment, sandwiched within the glass or other material comprising the larger lower waveguide) such that the diffraction grating may remain at a fixed pitch, but such that the light is biased before it hits the grating. fig. 21c shows another embodiment featuring two wedge-like waveguide elements (710, 712), wherein one or more of them may be electro-active so that the related refractive index may be changed. the elements may be configured such that when the wedges have matching refractive indices, the light totally internally reflects through the pair (which in the aggregate performs akin to a planar waveguide with both wedges matching) while the wedge interfaces have no effect. if one of the refractive indices is changed to create a mismatch, a beam deflection at the wedge interface 714 is caused, and total internal reflection is caused from that surface back into the associated wedge. then, a controllable doe 716 with a linear grating may be coupled along one of the long edges of the wedge to allow light to exit out and reach the eye at a desirable exit angle. 
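the wedge-pair switching of fig. 21c follows directly from snell's law: matched refractive indices make the interface optically invisible, while an electro-actively lowered index can push a ray past the critical angle into total internal reflection at the wedge interface 714. a minimal sketch with hypothetical index values:

```python
import math

def critical_angle_deg(n_inside, n_outside):
    return math.degrees(math.asin(n_outside / n_inside))

def refracted_angle_deg(n1, n2, incidence_deg):
    """snell's law at the wedge interface; None means total internal reflection."""
    s = n1 * math.sin(math.radians(incidence_deg)) / n2
    return None if s > 1.0 else math.degrees(math.asin(s))

# matched indices: the interface is optically invisible, the ray continues
print(refracted_angle_deg(1.52, 1.52, 60.0))
# lowered index in one wedge: the same 60-degree ray now exceeds
# the critical angle and is reflected back into its wedge
print(round(critical_angle_deg(1.52, 1.30), 1))
print(refracted_angle_deg(1.52, 1.30, 60.0))  # None -> TIR at the interface
```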
in another embodiment, a doe, such as a bragg grating, may be configured to change pitch versus time, such as by a mechanical stretching of the grating (for example, if the grating resides on or comprises an elastic material), a moiré beat pattern between two gratings on two different planes (the gratings may be the same or different pitches), z-axis motion (e.g., closer to the eye, or farther away from the eye) of the grating, which functionally is similar in effect to stretching of the grating, or electro-active gratings that may be switched on or off, such as one created using a polymer dispersed liquid crystal approach wherein liquid crystal droplets may be controllably activated to change the refractive index to become an active grating. this is in contrast to turning the voltage off and allowing a switch back to a refractive index that matches that of the host medium. in another embodiment, a time-varying grating may be utilized for field of view expansion by creating a tiled display configuration. further, a time-varying grating may be utilized to address chromatic aberration (failure to focus all colors/wavelengths at the same focal point). one property of diffraction gratings is that they will deflect a beam as a function of its angle of incidence and wavelength (e.g., a doe will deflect different wavelengths by different angles: somewhat akin to the manner in which a simple prism will divide out a beam into its wavelength components). one may use time-varying grating control to compensate for chromatic aberration in addition to field of view expansion. thus, for example, in a waveguide with embedded doe type of configuration as described above, the doe may be configured to drive the red wavelength to a slightly different place than the green and blue to address unwanted chromatic aberration. the doe may be time-varied by having a stack of elements that switch on and off (e.g. to get red, green, and blue to be diffracted outbound similarly).
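the wavelength dependence noted above, and its compensation by a time-varying pitch, can be sketched with the first-order grating equation (the r/g/b wavelengths, fixed pitch, and target angle are all hypothetical):

```python
import math

def first_order_angle_deg(lam_nm, pitch_nm):
    """grating equation at normal incidence: sin(theta) = lambda / pitch."""
    return math.degrees(math.asin(lam_nm / pitch_nm))

def pitch_for_angle_nm(lam_nm, target_deg):
    """pitch that sends a given wavelength to a chosen exit angle."""
    return lam_nm / math.sin(math.radians(target_deg))

# one fixed pitch fans r/g/b out to different angles (chromatic aberration)
for lam in (635.0, 532.0, 450.0):  # hypothetical r/g/b wavelengths, nm
    print(lam, round(first_order_angle_deg(lam, 1500.0), 2))

# time-sequential per-color pitches land all three on the same 20-degree exit
for lam in (635.0, 532.0, 450.0):
    print(lam, round(pitch_for_angle_nm(lam, 20.0), 1))
```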
in another embodiment, a time-varying grating may be utilized for exit pupil expansion. for example, referring to fig. 21d , it is possible that a waveguide 718 with embedded doe 720 may be positioned relative to a target pupil such that none of the beams exiting in a baseline mode actually enter the target pupil 45 - such that the pertinent pixel would be missed by the user. a time-varying configuration may be utilized to fill in the gaps in the outbound exit pattern by shifting the exit pattern laterally (shown in dashed/dotted lines) to effectively scan each of the 5 exiting beams to better ensure that one of them hits the pupil of the eye. in other words, the functional exit pupil of the display system is expanded. in another embodiment, a time-varying grating may be utilized with a waveguide for one, two, or three axis light scanning. in a manner akin to that described in reference to fig. 21a , one may use a term in a grating that is scanning a beam in the vertical axis, as well as a grating that is scanning in the horizontal axis. further, if radial elements of a grating are incorporated, as is discussed above in relation to figs. 8b-8n , one may have scanning of the beam in the z axis (e.g., toward/away from the eye), all of which may be time-sequential scanning. notwithstanding the discussions herein regarding specialized treatments and uses of does generally in connection with waveguides, many of these uses of doe are usable whether or not the doe is embedded in a waveguide. for example, the output of a waveguide may be separately manipulated using a doe. or, a beam may be manipulated by a doe before it is injected into a waveguide. further, one or more does, such as a time-varying doe, may be utilized as an input for freeform optics configurations, as discussed below. as discussed above in reference to figs. 
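the exit pupil expansion of fig. 21d can be sketched as a one-dimensional coverage test: a baseline beam pattern that misses a hypothetical pupil position, and the same pattern after a time-varying lateral shift fills the gap (all positions and sizes below are hypothetical):

```python
def hits_pupil(beam_centers_mm, pupil_center_mm, pupil_radius_mm):
    """True if any exiting beam lands within the pupil aperture."""
    return any(abs(b - pupil_center_mm) <= pupil_radius_mm
               for b in beam_centers_mm)

beams = [-10.0, -5.0, 0.0, 5.0, 10.0]  # baseline 5-beam exit pattern, mm
pupil, radius = 2.6, 2.0               # hypothetical pupil position/size

print(hits_pupil(beams, pupil, radius))    # False: the pixel is missed
shifted = [b + 2.5 for b in beams]         # time-varying lateral shift
print(hits_pupil(shifted, pupil, radius))  # True: the gap is filled
```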
8b-8n , an element of a doe may have a circularly-symmetric term, which may be summed with a linear term to create a controlled exit pattern (e.g., as described above, the same doe that outcouples light may also focus it). in another embodiment, the circular term of the doe diffraction grating may be varied such that the focus of the beams representing those pertinent pixels is modulated. in addition, one configuration may have a second/separate circular doe, obviating the need to have a linear term in the doe. referring to fig. 21e , one may have a waveguide 722 outputting collimated light with no doe element embedded, and a second waveguide that has a circularly-symmetric doe that can be switched between multiple configurations - in one embodiment by having a stack 724 of such doe elements ( fig. 21f shows another configuration wherein a functional stack 728 of doe elements may comprise a stack of polymer dispersed liquid crystal elements 726, as described above, wherein without a voltage applied, a host medium refraction index matches that of the dispersed molecules of liquid crystal; in another embodiment, molecules of lithium niobate may be dispersed for faster response times; with voltage applied, such as through transparent indium tin oxide layers on either side of the host medium, the dispersed molecules change index of refraction and functionally form a diffraction pattern within the host medium) that can be switched on/off. in another embodiment, a circular doe may be layered in front of a waveguide for focus modulation. referring to fig. 21g , the waveguide 722 is outputting collimated light, which will be perceived as associated with a focal depth of optical infinity unless otherwise modified. the collimated light from the waveguide may be input into a diffractive optical element 730 which may be used for dynamic focus modulation (e.g., one may switch on and off different circular doe patterns to impart various different focuses to the exiting light).
in a related embodiment, a static doe may be used to focus collimated light exiting from a waveguide to a single depth of focus that may be useful for a particular user application. in another embodiment, multiple stacked circular does may be used for additive power and many focus levels - from a relatively small number of switchable doe layers. in other words, three different doe layers may be switched on in various combinations relative to each other; the optical powers of the does that are switched on may be added. in one embodiment wherein a range of up to 4 diopters is desired, for example, a first doe may be configured to provide half of the total diopter range desired (in this example, 2 diopters of change in focus); a second doe may be configured to induce a 1 diopter change in focus; then a third doe may be configured to induce a 1/2 diopter change in focus. these three does may be mixed and matched to provide 1/2, 1, 1.5, 2, 2.5, 3, and 3.5 diopters of change in focus. thus a very large number of does would not be required to get a relatively broad range of control. in one embodiment, a matrix of switchable doe elements may be utilized for scanning, field of view expansion, and/or exit pupil expansion. generally in the above discussions of does, it has been assumed that a typical doe is either all on or all off. in one variation, a doe 732 may be subdivided into a plurality of functional subsections (such as the one labeled as element 734 in fig. 21h ), each of which preferably is uniquely controllable to be on or off (for example, referring to fig. 21h , each subsection may be operated by its own set of indium tin oxide, or other control lead material, voltage application leads 736 back to a central controller). given this level of control over a doe paradigm, additional configurations are facilitated. referring to fig. 21i , a waveguide 738 with embedded doe 740 is viewed from the top down, with the user's eye positioned in front of the waveguide.
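the mix-and-match arithmetic above can be verified by enumerating the on/off combinations of the three doe layers:

```python
from itertools import combinations

doe_powers = [2.0, 1.0, 0.5]  # diopters, one value per switchable layer

# every on/off subset of layers yields the sum of the powers switched on
levels = sorted({round(sum(combo, 0.0), 2)
                 for n in range(len(doe_powers) + 1)
                 for combo in combinations(doe_powers, n)})
print(levels)  # [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5]
```

the eight levels match the 1/2-diopter steps named in the text, from three layers only.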
a given pixel may be represented as a beam coming into the waveguide and totally internally reflecting along until it may be exited by a diffraction pattern to come out of the waveguide as a set of beams. depending upon the diffraction configuration, the beams may come out parallel/collimated (as shown in fig. 21i for convenience), or in a diverging fan configuration if representing a focal distance closer than optical infinity. the depicted set of parallel exiting beams may represent, for example, the farthest left pixel of what the user is seeing in the real world as viewed through the waveguide, and light off to the rightmost extreme will be a different group of parallel exiting beams. indeed, with modular control of the doe subsections as described above, one may spend more computing resource or time creating and manipulating the small subset of beams that is likely to be actively addressing the user's pupil (e.g., because the other beams never reach the user's eye and are effectively wasted). thus, referring to fig. 21j , a waveguide 738 configuration is shown wherein only the two subsections (740, 742) of the doe 744 that are deemed likely to address the user's pupil 45 are activated. preferably one subsection may be configured to direct light in one direction simultaneously as another subsection is directing light in a different direction. fig. 21k shows an orthogonal view of two independently controlled subsections (734, 746) of a doe 732. referring to the top view of fig. 21l , such independent control may be used for scanning or focusing light. in the configuration depicted in fig. 21k , an assembly 748 of three independently controlled doe/waveguide subsections (750, 752, 754) may be used to scan, increase the field of view, and/or increase the exit pupil region. such functionality may arise from a single waveguide with such independently controllable doe subsections, or a vertical stack of these for additional complexity.
in one embodiment, if a circular doe may be controllably stretched radially-symmetrically, the diffraction pitch may be modulated, and the doe may be utilized as a tunable lens with an analog type of control. in another embodiment, a single axis of stretch (for example, to adjust an angle of a linear doe term) may be utilized for doe control. further, in another embodiment a membrane, akin to a drum head, may be vibrated, with oscillatory motion in the z-axis (e.g., toward/away from the eye) providing z-axis control and focus change over time. referring to fig. 21m , a stack of several does 756 is shown receiving collimated light from a waveguide 722 and refocusing it based upon the additive powers of the activated does. linear and/or radial terms of does may be modulated over time, such as on a frame sequential basis, to produce a variety of treatments (such as tiled display configurations or expanded field of view) for the light coming from the waveguide and exiting, preferably toward the user's eye. in configurations wherein the doe or does are embedded within the waveguide, a low diffraction efficiency is desired to maximize transparency for light passed from the real world. in configurations wherein the doe or does are not embedded, a high diffraction efficiency may be desired, as described above. in one embodiment, both linear and radial doe terms may be combined outside of the waveguide, in which case high diffraction efficiency would be desired. referring to fig. 21n , a segmented or parabolic reflector, such as those discussed above in fig. 8q , is shown. rather than executing a segmented reflector by combining a plurality of smaller reflectors, in one embodiment the same functionality may result from a single waveguide with a doe having different phase profiles for each section of it, such that it is controllable by subsection. 
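the stretch-to-focus relationship for a radially stretched circular doe can be sketched with a simple diffractive-lens (zone plate) model: with zone radii r_n = sqrt(n*lambda*f), a radial stretch by a factor s rescales every radius by s and therefore rescales the focal length by s squared. this zone-plate model is an illustrative assumption, not taken from the disclosure:

```python
import math

def zone_radius_um(n, lam_um, f_um):
    """n-th zone radius of a diffractive (fresnel zone plate) lens:
    r_n = sqrt(n * lambda * f)."""
    return math.sqrt(n * lam_um * f_um)

def focal_after_stretch_um(f_um, stretch):
    """radially stretching every zone by s rescales r_n by s, so the
    focal length (proportional to r_n**2) scales by s**2."""
    return f_um * stretch ** 2

lam, f0 = 0.532, 25_000.0  # green light, 25 mm baseline focal length (hypothetical)
print(round(zone_radius_um(1, lam, f0), 1))       # first zone radius, micrometers
print(round(focal_after_stretch_um(f0, 1.10), 2))  # 10% stretch -> 30250.0 um
```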
in other words, while the entire segmented reflector functionality may be turned on or off together, generally the doe may be configured to direct light toward the same region in space (e.g., the pupil of the user). referring to figs. 22a -22z, optical configurations known as "freeform optics" may be utilized to address certain of the aforementioned challenges. the term "freeform" generally is used in reference to arbitrarily curved surfaces that may be utilized in situations wherein a spherical, parabolic, or cylindrical lens does not meet a design complexity such as a geometric constraint. for example, referring to fig. 22a , one of the common challenges with display 762 configurations when a user is looking through a mirror (and also sometimes a lens 760) is that the field of view is limited by the area subtended by the final lens 760 of the system. referring to fig. 22b , in more simple terms, if one has a display 762, which may include some lens elements, there is a straightforward geometric relationship such that the field of view cannot be larger than the angle subtended by the display (762). referring to fig. 22c , this challenge is exacerbated if the light from the real world is also to be passed through the optical system, because in such case, there often is a reflector 764 that leads to a lens 760. by interposing a reflector, the overall path length to get to the lens from the eye is increased, which tightens the angle and reduces the field of view. given this, if the field of view is to be increased, the size of the lens may also be increased. however, this may mean pushing a physical lens toward the forehead of the user from an ergonomic perspective. further, the reflector may not catch all of the light from the larger lens. thus, there is a practical limitation imposed by human head geometry, and it generally is a challenge to get more than a 40-degree field of view using conventional see-through displays and lenses.
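the geometric relationship above can be sketched with hypothetical dimensions, showing how the extra folded path length introduced by a reflector tightens the subtended angle toward the ~40-degree figure:

```python
import math

def fov_deg(aperture_mm, path_mm):
    """full angle subtended by a lens aperture at a given optical path length
    from the eye: 2 * atan((aperture/2) / path)."""
    return 2.0 * math.degrees(math.atan((aperture_mm / 2.0) / path_mm))

# hypothetical numbers: a 40 mm lens viewed directly at 25 mm eye relief,
# versus the same lens behind a fold mirror adding 30 mm of path
print(round(fov_deg(40.0, 25.0), 1))  # 77.3 degrees
print(round(fov_deg(40.0, 55.0), 1))  # 40.0 degrees -- the ~40-degree limit
```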
with freeform lenses, rather than having a standard planar reflector as described above, one has a combined reflector and lens with power (e.g., a curved reflector 766), which means that the curved lens geometry determines the field of view. referring to fig. 22d , without the circuitous path length of a conventional paradigm as described above in reference to fig. 22c , it is possible for a freeform arrangement to realize a significantly larger field of view for a given set of optical requirements. referring to fig. 22e , a typical freeform optic has three active surfaces. in one typical freeform optic 770 configuration, light may be directed toward the freeform optic from an image plane, such as a flat panel display 768, into the first active surface 772. this first active surface 772 may be a primarily transmissive freeform surface that refracts transmitted light and imparts a focal change (such as an added astigmatism, because the final bounce from the third surface may add a matching/opposite astigmatism and these are desirably canceled). the incoming light may be directed from the first surface to a second surface (774), wherein it may strike with an angle shallow enough to cause the light to be reflected under total internal reflection toward the third surface 776. the third surface may comprise a half-silvered, arbitrarily-curved surface configured to bounce the light out through the second surface toward the eye, as shown in fig. 22e . thus in the depicted typical freeform configuration, the light enters through the first surface, bounces from the second surface, bounces from the third surface, and is directed out of the second surface.
due to the optimization of the second surface to have the requisite reflective properties on the first pass, as well as refractive properties on the second pass as the light is exited toward the eye, a variety of curved surfaces with higher-order shapes than a simple sphere or parabola are formed into the freeform optic. referring to fig. 22f , a compensating lens 780 may be added to the freeform optic 770 such that the total thickness of the optic assembly is substantially uniform in thickness, and preferably without magnification, to light incoming from the real world 144 in an augmented reality configuration. referring to fig. 22g , a freeform optic 770 may be combined with a waveguide 778 configured to facilitate total internal reflection of captured light within certain constraints. for example, as shown in fig. 22g , light may be directed into the freeform/waveguide assembly from an image plane, such as a flat panel display, and totally internally reflected within the waveguide until it hits the curved freeform surface and escapes toward the eye of the user. thus the light bounces several times in total internal reflection until it approaches the freeform wedge portion. one of the main objectives with such an assembly is to lengthen the optic assembly while retaining as uniform a thickness as possible (to facilitate transport by total internal reflection, and also viewing of the world through the assembly without further compensation) for a larger field of view. fig. 22h depicts a configuration similar to that of fig. 22g , with the exception that the configuration of fig. 22h also features a compensating lens portion to further extend the thickness uniformity and assist with viewing the world through the assembly without further compensation. referring to fig. 
22i , in another embodiment, a freeform optic 782 is shown with a small flat surface, or fourth face 784, at the lower left corner that is configured to facilitate injection of image information at a different location than is typically used with freeform optics. the input device 786 may comprise, for example, a scanning fiber display, which may be designed to have a very small output geometry. the fourth face may comprise various geometries itself and have its own refractive power, such as by use of planar or freeform surface geometries. referring to fig. 22j , in practice, such a configuration may also feature a reflective coating 788 along the first surface such that it directs light back to the second surface, which then bounces the light to the third surface, which directs the light out across the second surface and to the eye 58. the addition of the fourth small surface for injection of the image information facilitates a more compact configuration. in an embodiment wherein a classical freeform input configuration and a scanning fiber display 790 are utilized, some lenses (792, 794) may be required in order to appropriately form an image plane 796 using the output from the scanning fiber display. these hardware components may add extra bulk that may not be desired. referring to fig. 22k , an embodiment is shown wherein light from a scanning fiber display 790 is passed through an input optics assembly (792, 794) to an image plane 796, and then directed across the first surface of the freeform optic 770 to a total internal reflection bounce off of the second surface, then another total internal reflection bounce from the third surface results in the light exiting across the second surface and being directed toward the eye 58.
an all-total-internal-reflection freeform waveguide may be created such that there are no reflective coatings (e.g., such that total-internal-reflection is being relied upon for propagation of light until a critical angle of incidence with a surface is met, at which point the light exits in a manner akin to the wedge-shaped optics described above). in other words, rather than having two planar surfaces, one may have a surface comprising one or more sub-surfaces from a set of conical curves, such as parabolas, spheres, or ellipses. such a configuration maintains angles that are shallow enough for total internal reflection within the optic. this approach may be considered to be a hybrid between a conventional freeform optic and a wedge-shaped waveguide. one motivation to have such a configuration is to avoid the use of reflective coatings, which may help produce reflection, but also are known to prevent transmission of a relatively large portion (such as 50%) of the light transmitting through from the real world 144. further, such coatings also may block an equivalent amount of the light coming into the freeform optic from the input device. thus there are reasons to develop designs that do not have reflective coatings. as described above, one of the surfaces of a conventional freeform optic may comprise a half-silvered reflective surface. generally such a reflective surface will be of "neutral density", meaning that it will generally reflect all wavelengths similarly. in another embodiment, such as one wherein a scanning fiber display is utilized as an input, the conventional reflector paradigm may be replaced with a narrow band reflector that is wavelength sensitive, such as a thin film laserline reflector. thus in one embodiment, a configuration may reflect particular red/green/blue wavelength ranges and remain passive to other wavelengths.
this generally will increase transparency of the optic and therefore be preferred for augmented reality configurations wherein transmission of image information from the real world 144 across the optic also is valued. referring to fig. 22l , an embodiment is depicted wherein multiple freeform optics (770) may be stacked in the z axis (e.g., along an axis substantially aligned with the optical axis of the eye). in one variation, each of the three depicted freeform optics may have a wavelength-selective coating (for example, one highly selective for blue, the next for green, the next for red) so that images may be injected into each to have blue reflected from one surface, green from another, and red from a third surface. such a configuration may be utilized, for example, to address chromatic aberration issues, to create a lightfield, and/or to increase the functional exit pupil size. referring to fig. 22m , an embodiment is shown wherein a single freeform optic 798 has multiple reflective surfaces (800, 802, 804), each of which may be wavelength or polarization selective so that their reflective properties may be individually controlled. referring to fig. 22n , in one embodiment, multiple microdisplays, such as scanning light displays, 786 may be injected into a single freeform optic to tile images (thereby providing an increased field of view), increase the functional pupil size, or address challenges such as chromatic aberration (e.g., by reflecting one wavelength per display). each of the depicted displays would inject light that would take a different path through the freeform optic due to the different positioning of the displays relative to the freeform optic, thereby providing a larger functional exit pupil output. in one embodiment, a packet or bundle of scanning fiber displays may be utilized as an input to overcome one of the challenges in operatively coupling a scanning fiber display to a freeform optic. 
one such challenge with a scanning fiber display configuration is that the output of an individual fiber is emitted with a certain numerical aperture, or "na". the na corresponds to the projection angle of light from the fiber; ultimately this angle determines the diameter of the beam that passes through various optics, and ultimately determines the functional exit pupil size. thus, in order to maximize exit pupil size with a freeform optic configuration, one may either increase the na of the fiber using optimized refractive relationships, such as between core and cladding, or one may place a lens (e.g., a refractive lens, such as a gradient refractive index lens, or "grin" lens) at the end of the fiber or build one into the end of the fiber as described above. another approach may be to create an array of fibers that is feeding into the freeform optic, in which case all of the nas in the bundle remain small, thereby producing an array of small exit pupils that in the aggregate forms the functional equivalent of a large exit pupil. alternatively, in another embodiment a more sparse array (e.g., not bundled tightly as a packet) of scanning fiber displays or other displays may be utilized to functionally increase the field of view of the virtual image through the freeform optic. referring to fig. 22o , in another embodiment, a plurality of displays 786 may be injected through the top of a freeform optic 770, as well as another plurality 786 through the lower corner. the display arrays may be two or three dimensional arrays. referring to fig. 22p , in another related embodiment, image information also may be injected in from the side 806 of the freeform optic 770 as well. 
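the na relationship described above can be sketched numerically: the half-angle of the emitted cone follows from na = n·sin(θ), so the beam diameter grows with propagation distance from the fiber tip. the function names, the example na of 0.11, and the geometric (non-diffractive) treatment below are illustrative assumptions, not values given in the source.

```python
import math

def divergence_half_angle(na, n_medium=1.0):
    """Half-angle (radians) of the light cone emitted by a fiber
    with numerical aperture `na` into a medium of index `n_medium`."""
    return math.asin(na / n_medium)

def beam_diameter(na, distance, core_diameter=0.0, n_medium=1.0):
    """Approximate geometric beam diameter after propagating `distance`
    from the fiber tip (same length units as `core_diameter`)."""
    theta = divergence_half_angle(na, n_medium)
    return core_diameter + 2.0 * distance * math.tan(theta)

# A hypothetical single-mode fiber NA of ~0.11 in air:
theta = divergence_half_angle(0.11)     # ~0.11 rad (~6.3 degrees)
d = beam_diameter(0.11, distance=10.0)  # ~2.2 mm diameter 10 mm from the tip
```

this illustrates why a small-na fiber yields a small beam, and hence a small exit pupil, unless an array of such fibers is aggregated as the text describes.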
in an embodiment wherein a plurality of smaller exit pupils is to be aggregated into a functionally larger exit pupil, one may elect to have each of the scanning fibers monochromatic, such that within a given bundle or plurality of projectors or displays, one may have a subgroup of solely red fibers, a subgroup of solely blue fibers, and a subgroup of solely green fibers. such a configuration facilitates more efficient coupling of light into the optical fibers. for instance, this approach would not necessitate superimposing red, green, and blue into the same band. referring to figs. 22q-22v , various freeform optic tiling configurations are depicted. referring to fig. 22q , an embodiment is depicted wherein two freeform optics are tiled side-by-side and a microdisplay, such as a scanning light display, 786 on each side is configured to inject image information from each side, such that one freeform optic wedge represents each half of the field of view. referring to fig. 22r , a compensator lens 808 may be included to facilitate views of the real world through the optics assembly. fig. 22s illustrates a configuration wherein freeform optics wedges are tiled side by side to increase the functional field of view while keeping the thickness of such optical assembly relatively uniform. referring to fig. 22t , a star-shaped assembly comprises a plurality of freeform optics wedges (also shown with a plurality of displays for inputting image information) in a configuration that may provide a larger field of view expansion while also maintaining a relatively thin overall optics assembly thickness. with a tiled freeform optics assembly, the optics elements may be aggregated to produce a larger field of view. the tiling configurations described above have addressed this notion. for example, in a configuration wherein two freeform waveguides are aimed at the eye such as that depicted in fig. 22r , there are several ways to increase the field of view. 
one option is to "toe in" the freeform waveguides such that their outputs share, or are superimposed in, the space of the pupil. for example, the user may see the left half of the visual field through the left freeform waveguide, and the right half of the visual field through the right freeform waveguide. with such a configuration, the field of view has been increased with the tiled freeform waveguides, but the exit pupil has not grown in size. alternatively, the freeform waveguides may be oriented such that they do not toe in as much, such that exit pupils that are side-by-side at the eye's anatomical pupil are created. in one example, the anatomical pupil may be 8mm wide, and each of the side-by-side exit pupils may be 8mm, such that the functional exit pupil is expanded by about two times. thus such a configuration provides an enlarged exit pupil. however, if the eye is moved around in the "eyebox" defined by that exit pupil, that eye may lose parts of the visual field (e.g., lose either a portion of the left or right incoming light because of the side-by-side nature of such configuration). in one embodiment using such an approach for tiling freeform optics, especially in the z-axis relative to the eye of the user, red wavelengths may be driven through one freeform optic, green through another, and blue through another, such that red/green/blue chromatic aberration may be addressed. multiple stacked freeform optical elements may be provided in such a configuration, each of which is configured to address a particular wavelength. referring to fig. 22u , two oppositely-oriented freeform optics are shown stacked in the z-axis (e.g., they are upside down relative to each other). with such a configuration, a compensating lens may not be required to facilitate accurate views of the world through the assembly. in other words, rather than having a compensating lens such as in the embodiment of fig. 22f or fig. 
22r , an additional freeform optic may be utilized, which may further assist in routing light to the eye. fig. 22v shows another similar configuration wherein the assembly of two freeform optical elements is presented as a vertical stack. to ensure that one surface is not interfering with another surface in the freeform optics, one may use wavelength or polarization selective reflector surfaces. for example, referring to fig. 22v , red, green, and blue wavelengths in the form of 650nm, 530nm, and 450nm may be injected, as well as red, green, and blue wavelengths in the form of 620nm, 550nm, and 470nm. different selective reflectors may be utilized in each of the freeform optics such that they do not interfere with each other. in a configuration wherein polarization filtering is used for a similar purpose, the reflection/transmission selectivity for light that is polarized in a particular axis may be varied (e.g., the images may be pre-polarized before they are sent to each freeform waveguide, to work with reflector selectivity). referring to figs. 22w and 22x , configurations are illustrated wherein a plurality of freeform waveguides may be utilized together in series. referring to fig. 22w , light may enter from the real world and be directed sequentially through a first freeform optic 770, through an optional lens 812 which may be configured to relay light to a reflector 810 such as a dmd from a dlp system, which may be configured to reflect the light that has been filtered on a pixel by pixel basis (e.g., an occlusion mask may be utilized to block out certain elements of the real world, such as for darkfield perception, as described above; suitable spatial light modulators may be used which comprise dmds, lcds, ferroelectric lcoss, mems shutter arrays, and the like, as described above) to another freeform optic 770 that is relaying light to the eye 28 of the user. 
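the wavelength-selective reflector behavior described above, with the two interleaved band sets (650/530/450 nm and 620/550/470 nm) so that stacked freeform optics do not interfere with each other, can be sketched as a simple predicate. the 10 nm bandwidth and the function names are hypothetical, for illustration only.

```python
def make_laserline_reflector(center_wavelengths_nm, bandwidth_nm=10.0):
    """Return a predicate that reflects only wavelengths within
    +/- bandwidth/2 of one of the given laser lines; all other
    wavelengths are transmitted through the optic."""
    def reflects(wavelength_nm):
        return any(abs(wavelength_nm - c) <= bandwidth_nm / 2.0
                   for c in center_wavelengths_nm)
    return reflects

# Two stacked freeform optics with interleaved RGB band sets:
optic_a = make_laserline_reflector([650, 530, 450])
optic_b = make_laserline_reflector([620, 550, 470])

optic_a(650)  # True: 650 nm is reflected by optic A
optic_b(650)  # False: 650 nm passes through optic B without interference
```

because each optic's reflection bands avoid the other's, image light intended for one freeform element is passive with respect to the other, as the text describes.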
such a configuration may be more compact than one using conventional lenses for spatial light modulation. referring to fig. 22x , in a scenario in which it is very important to keep overall thickness minimized, a configuration may be utilized that has one surface that is highly-reflective such that the highly-reflective surface may bounce light straight into another compactly positioned freeform optic. in one embodiment a selective attenuator 814 may be interposed between the two freeform optical elements 770. referring to fig. 22y , an embodiment is depicted wherein a freeform optic 770 may comprise one aspect of a contact lens system. a miniaturized freeform optic is shown engaged against the cornea of a user's eye 58 with a miniaturized compensator lens portion 780, akin to that described in reference to fig. 22f . signals may be injected into the miniaturized freeform assembly using a tethered scanning fiber display which may, for example, be coupled between the freeform optic and a tear duct area of the user, or between the freeform optic and another head-mounted display configuration.
interaction between one or more users and the ar system
user system interaction with the cloud
having described various optical embodiments above, the following discussion will focus on interactions between one or more ar systems, and between the ar system and the physical world. as illustrated in figs. 23 and 24 , the light field generation subsystem (e.g. 2300 and 2302 respectively) is preferably operable to produce a light field. for example, an optical apparatus 2360 or subsystem may generate or project light to simulate a four dimensional (4d) light field that would be produced by light reflecting from a real three-dimensional object or scene. 
for instance, an optical apparatus such as a wave guide reflector array projector (wrap) apparatus 2310 or multiple depth plane three dimensional (3d) display system may generate or project multiple virtual depth planes at respective radial focal distances to simulate a 4d light field. the optical apparatus 2360 in the form of a wrap apparatus 2310 or multiple depth plane 3d display system may, for instance, project images into each eye of a user, either directly or indirectly. when the number and radial placement of the virtual depth planes are comparable to the depth resolution of the human vision system as a function of radial distance, a discrete set of projected depth planes mimics the psycho-physical effect that is produced by a real, continuous, three dimensional object or scene. in one or more embodiments, the system 2300 may comprise a frame 2370 that may be customized for each ar user. additional components of the system 2300 may include electronics 2330 (as will be discussed in further detail below) to connect various electrical and electronic subparts of the ar system to each other. the system 2300 may further comprise a microdisplay 2320 that projects light associated with one or more virtual images into the waveguide prism 2310. as shown in fig. 23 , the light produced from the microdisplay 2320 travels within the waveguide 2310, and some of the light reaches the user's eyes 2390. in one or more embodiments, the system 2300 may further comprise one or more compensation lenses 2380 to alter the light associated with the virtual images. fig. 24 illustrates the same components as fig. 23 , but illustrates how light from the microdisplays 2320 travels through the waveguides 2310 to reach the user's eyes 2390. it should be appreciated that the optical apparatus 2360 may include a number of linear waveguides, each with a respective series of deconstructed curved spherical reflectors or mirrors embedded, located or formed within each of the linear wave guides. 
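one common way to picture the radial placement of virtual depth planes described above is to space them uniformly in diopters (inverse meters), which roughly tracks the eye's depth resolution as a function of radial distance. the uniform-diopter assumption, the 3-diopter range, and the function name below are illustrative choices, not values specified in the source.

```python
def depth_plane_distances(num_planes, max_diopters=3.0):
    """Radial focal distances (meters) of virtual depth planes spaced
    uniformly in diopters, from the farthest plane to the nearest.
    Uniform dioptric spacing is an assumption for illustration."""
    spacing = max_diopters / num_planes
    return [1.0 / (spacing * (i + 1)) for i in range(num_planes)]

depth_plane_distances(6)  # [2.0, 1.0, 0.667, 0.5, 0.4, 0.333] meters
```

note how the planes crowd together near the viewer, mirroring the fact that the human vision system resolves depth more finely at close range.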
the series of deconstructed curved spherical reflectors or mirrors are designed to refocus infinity-focused light at specific radial distances. a convex spherical mirror can be used to produce an output spherical wave to represent a virtual point source which appears to be located at a defined distance behind the convex spherical mirror. by concatenating, in a linear or rectangular wave guide, a series of micro-reflectors whose shapes (e.g., radii of curvature about two axes) and orientations are coordinated, it is possible to project a 3d image that corresponds to a spherical wave front produced by a virtual point source at particular x, y, z coordinates. each of the 2d wave guides or layers provides an independent optical path relative to the other wave guides, and shapes the wave front and focuses incoming light to project a virtual depth plane that corresponds to a respective radial distance. with a sufficient number of 2d wave guides, a user viewing the projected virtual depth planes experiences a 3d effect. such a device is described in u.s. patent application serial no. 13/915,530 filed on june 11, 2013 . other embodiments may comprise other combinations of optical systems, and it should be appreciated that the embodiment(s) described in relation to figs. 23 and 24 are for illustrative purposes only. the audio subsystem of the ar system may take a variety of forms. for instance, the audio subsystem may take the form of a simple two-speaker, 2-channel stereo system, or a more complex multiple speaker system (5.1, 7.1, 12.1 channels). in some implementations, the audio subsystem may be operable to produce a three-dimensional sound field. the ar system may include one or more distinct components. for example, the ar system may include a head worn or mounted component, such as the one shown in the illustrated embodiment of figs. 23 and 24 . the head worn or mounted component typically includes the visual system (e.g., such as the ones shown in figs. 23 and 24 ). 
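the convex-mirror relationship described earlier in this passage (a convex spherical mirror turning infinity-focused light into a spherical wave that appears to come from a virtual point source behind the mirror) follows the standard mirror equation 1/do + 1/di = 2/R. the sketch below uses the convention that R is negative for a convex mirror and a negative di means a virtual image behind the mirror surface; it is a textbook-optics illustration, not code from the source.

```python
def virtual_image_distance(radius_of_curvature_mm, object_distance_mm=float('inf')):
    """Mirror equation 1/do + 1/di = 2/R (convex mirror: R < 0).
    Returns di; a negative value indicates a virtual image located
    |di| behind the mirror surface."""
    f = radius_of_curvature_mm / 2.0  # focal length f = R/2
    if object_distance_mm == float('inf'):
        return f  # collimated ("infinity-focused") input focuses at the focal point
    return 1.0 / (1.0 / f - 1.0 / object_distance_mm)

# Collimated light on a convex mirror with R = -20 mm:
di = virtual_image_distance(-20.0)  # -10.0 -> virtual source 10 mm behind the mirror
```

varying the radius of curvature of each micro-reflector thus places the apparent point source at a different depth, which is how the concatenated reflectors project distinct virtual depth planes.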
the head worn component may also include audio transducers (e.g., speakers, microphones). the audio transducers may be integrated with the visual components, for example with each audio transducer supported from a common frame with the visual components. alternatively, the audio transducers may be distinct from the frame that carries the visual components. for example, the audio transducers may be part of a belt pack, such as the ones shown in fig. 4d . as illustrated in figs. 23 and 24 , the ar system may include a distinct computation component (e.g., the processing sub-system), separate from the head worn component (e.g., the optical sub-system as shown in figs. 23 and 24 ). the processing sub-system or computation component may, for example, take the form of the belt pack, which can be conveniently coupled to a belt or belt line of pants during use. alternatively, the computation component may, for example, take the form of a personal digital assistant or smartphone type device. the computation component may include one or more processors, for example, one or more microcontrollers, microprocessors, graphical processing units, digital signal processors, application specific integrated circuits (asics), programmable gate arrays, programmable logic circuits, or other circuits either embodying logic or capable of executing logic embodied in instructions encoded in software or firmware. the computation component may include one or more nontransitory computer or processor-readable media, for example volatile and/or nonvolatile memory, for instance read only memory (rom), random access memory (ram), static ram, dynamic ram, flash memory, eeprom, etc. as discussed above, the computation component may be communicatively coupled to the head worn component. for example, the computation component may be communicatively tethered to the head worn component via one or more wires or optical fibers via a cable with appropriate connectors. 
the computation component and the head worn component may communicate according to any of a variety of tethered protocols, for example usb ® , usb2 ® , usb3 ® , ethernet ® , thunderbolt ® , lightning ® protocols. alternatively or additionally, the computation component may be wirelessly communicatively coupled to the head worn component. for example, the computation component and the head worn component may each include a transmitter, receiver or transceiver (collectively radio) and associated antenna to establish wireless communications therebetween. the radio and antenna(s) may take a variety of forms. for example, the radio may be capable of short range communications, and may employ a communications protocol such as bluetooth ® , wi-fi ® , or some ieee 802.11 compliant protocol (e.g., ieee 802.11n, ieee 802.11a/c). as illustrated in figs. 23 and 24 , the body or head worn components may include electronics and microdisplays, operable to deliver augmented reality content to the user, for example augmented reality visual and/or audio content. the electronics (e.g., part of 2320 in figs. 23 and 24 ) may include various circuits including electrical or electronic components. the various circuits are communicatively coupled to a number of transducers that deliver augmented reality content and/or sense, measure or collect information about the ambient physical environment and/or about a user. fig. 25 shows an example architecture 2500 for the electronics for an augmented reality device, according to one illustrated embodiment. the ar device may include one or more printed circuit board components, for instance left (2502) and right (2504) printed circuit board assemblies (pcba). as illustrated, the left pcba 2502 includes most of the active electronics, while the right pcba 2504 principally supports the display or projector elements. 
the right pcba 2504 may include a number of projector driver structures which provide image information and control signals to image generation components. for example, the right pcba 2504 may carry a first or left projector driver structure 2506 and a second or right projector driver structure 2508. the first or left projector driver structure 2506 joins a first or left projector fiber 2510 and a set of signal lines (e.g., piezo driver wires). the second or right projector driver structure 2508 joins a second or right projector fiber 2512 and a set of signal lines (e.g., piezo driver wires). the first or left projector driver structure 2506 is communicatively coupled to a first or left image projector, while the second or right projector drive structure 2508 is communicatively coupled to the second or right image projector. in operation, the image projectors render virtual content to the left and right eyes (e.g., retina) of the user via respective optical components, for instance waveguides and/or compensation lenses (e.g., as shown in figs. 23 and 24 ). the image projectors may, for example, include left and right projector assemblies. the projector assemblies may use a variety of different image forming or production technologies, for example, fiber scan projectors, liquid crystal displays (lcd), lcos displays, digital light processing (dlp) displays. where a fiber scan projector is employed, images may be delivered along an optical fiber, to be projected therefrom via a tip of the optical fiber. the tip may be oriented to feed into the waveguide ( figs. 23 and 24 ). the tip of the optical fiber, which may be supported to flex or oscillate, may project images. a number of piezoelectric actuators may control an oscillation (e.g., frequency, amplitude) of the tip. the projector driver structures provide images to the respective optical fibers and control signals to control the piezoelectric actuators, to project images to the user's eyes. 
continuing with the right pcba 2504, a button board connector 2514 may provide communicative and physical coupling to a button board 2516 which carries various user accessible buttons, keys, switches or other input devices. the right pcba 2504 may include a right earphone or speaker connector 2518, to communicatively couple audio signals to a right earphone 2520 or speaker of the head worn component. the right pcba 2504 may also include a right microphone connector 2522 to communicatively couple audio signals from a microphone of the head worn component. the right pcba 2504 may further include a right occlusion driver connector 2524 to communicatively couple occlusion information to a right occlusion display 2526 of the head worn component. the right pcba 2504 may also include a board-to-board connector to provide communications with the left pcba 2502 via a board-to-board connector 2534 thereof. the right pcba 2504 may be communicatively coupled to one or more right outward facing or world view cameras 2528 which are body or head worn, and optionally a right camera visual indicator (e.g., led) which illuminates to indicate to others when images are being captured. the right pcba 2504 may be communicatively coupled to one or more right eye cameras 2532, carried by the head worn component, positioned and oriented to capture images of the right eye to allow tracking, detection, or monitoring of orientation and/or movement of the right eye. the right pcba 2504 may optionally be communicatively coupled to one or more right eye illuminating sources 2530 (e.g., leds), which, as explained herein, illuminate the right eye with a pattern (e.g., temporal, spatial) of illumination to facilitate tracking, detection or monitoring of orientation and/or movement of the right eye. 
the left pcba 2502 may include a control subsystem, which may include one or more controllers (e.g., microcontroller, microprocessor, digital signal processor, graphical processing unit, central processing unit, application specific integrated circuit (asic), field programmable gate array (fpga) 2540, and/or programmable logic unit (plu)). the control subsystem may include one or more non-transitory computer- or processor readable media that store executable logic or instructions and/or data or information. the non-transitory computer- or processor readable media may take a variety of forms, for example volatile and nonvolatile forms, for instance read only memory (rom), random access memory (ram, dram, sd-ram), flash memory, etc. the non-transitory computer or processor readable media may be formed as one or more registers, for example of a microprocessor, fpga or asic. the left pcba 2502 may include a left earphone or speaker connector 2536, to communicatively couple audio signals to a left earphone or speaker 2538 of the head worn component. the left pcba 2502 may include an audio signal amplifier (e.g., stereo amplifier) 2542, which is communicatively coupled to drive the earphones or speakers. the left pcba 2502 may also include a left microphone connector 2544 to communicatively couple audio signals from a microphone of the head worn component. the left pcba 2502 may further include a left occlusion driver connector 2546 to communicatively couple occlusion information to a left occlusion display 2548 of the head worn component. the left pcba 2502 may also include one or more sensors or transducers which detect, measure, capture or otherwise sense information about an ambient environment and/or about the user. for example, an acceleration transducer 2550 (e.g., three axis accelerometer) may detect acceleration in three axes, thereby detecting movement. a gyroscopic sensor 2552 may detect orientation and/or magnetic or compass heading. 
other sensors or transducers may be similarly employed. the left pcba 2502 may be communicatively coupled to one or more left outward facing or world view cameras 2554 which are body or head worn, and optionally a left camera visual indicator (e.g., led) 2556 which illuminates to indicate to others when images are being captured. the left pcba may be communicatively coupled to one or more left eye cameras 2558, carried by the head worn component, positioned and oriented to capture images of the left eye to allow tracking, detection, or monitoring of orientation and/or movement of the left eye. the left pcba 2502 may optionally be communicatively coupled to one or more left eye illuminating sources (e.g., leds) 2556, which, as explained herein, illuminate the left eye with a pattern (e.g., temporal, spatial) of illumination to facilitate tracking, detection or monitoring of orientation and/or movement of the left eye. the pcbas 2502 and 2504 are communicatively coupled with the distinct computation component (e.g., belt pack) via one or more ports, connectors and/or paths. for example, the left pcba 2502 may include one or more communications ports or connectors to provide communications (e.g., bi-directional communications) with the belt pack. the one or more communications ports or connectors may also provide power from the belt pack to the left pcba 2502. the left pcba 2502 may include power conditioning circuitry 2580 (e.g., dc/dc power converter, input filter), electrically coupled to the communications port or connector and operable to condition power (e.g., step up voltage, step down voltage, smooth current, reduce transients). the communications port or connector may, for example, take the form of a data and power connector or transceiver 2582 (e.g., thunderbolt ® port, usb ® port). the right pcba 2504 may include a port or connector to receive power from the belt pack. 
the image generation elements may receive power from a portable power source (e.g., chemical battery cells, primary or secondary battery cells, ultra-capacitor cells, fuel cells), which may, for example, be located in the belt pack. as illustrated, the left pcba 2502 includes most of the active electronics, while the right pcba 2504 principally supports the display or projectors, and the associated piezo drive signals. electrical and/or fiber optic connections are employed across a front, rear or top of the body or head worn component of the ar system. both pcbas 2502 and 2504 are communicatively (e.g., electrically, optically) coupled to the belt pack. the left pcba 2502 includes the power subsystem and a high speed communications subsystem. the right pcba 2504 handles the fiber display piezo drive signals. in the illustrated embodiment, only the right pcba 2504 needs to be optically connected to the belt pack. in other embodiments, both the right pcba and the left pcba may be connected to the belt pack. while illustrated as employing two pcbas 2502 and 2504, the electronics of the body or head worn component may employ other architectures. for example, some implementations may use a fewer or greater number of pcbas. also for example, various components or subsystems may be arranged differently than illustrated in fig. 25 . for example, in some alternative embodiments some of the components illustrated in fig. 25 as residing on one pcba may be located on the other pcba, without loss of generality. as illustrated in figs. 4a-4d , each user may use his/her respective ar system (generally referred to as individual ar systems in the discussion below). in some implementations, the individual ar systems may communicate with one another. for example, two or more proximately located ar systems may communicate with one another. as described further herein, communications may occur after performance of a handshaking protocol, in one or more embodiments. 
the ar systems may communicate wirelessly via one or more radios. as discussed above, such radios may be capable of short range direct communications, or may be capable of longer range direct communications (e.g., without a repeater, extender, etc.). additionally or alternatively, indirect longer range communications may be achieved via one or more intermediary devices (e.g., wireless access points, repeaters, extenders). the head worn component of the ar system may have one or more "outward" facing cameras. in one or more embodiments, the head worn component may have one or more "inward" facing cameras. as used herein, "outward facing" means that the camera captures images of the ambient environment rather than the user who is wearing the head worn component. notably, the "outward" facing camera could have a field of view that encompasses areas to the front, the left, the right or even behind the user. this contrasts with an inward facing camera which captures images of the individual who is wearing the head worn component, for instance a camera that faces the user's face to capture facial expressions or eye movements of the user. in many implementations, the personal (or individual) ar system(s) worn by the user(s) may include one or more sensors, transducers, or other components. the sensors, transducers, or other components may be categorized into two general categories, (i) those that detect aspects of the user who wears the sensor(s) (e.g., denominated herein as inward facing sensors), and (ii) those that detect conditions in the ambient environment in which the user is located (e.g., denominated herein as outward facing sensors). these sensors may take a large variety of forms. for example, the sensor(s) may include one or more image sensors, for instance digital still or moving image cameras. also for example, the sensor(s) may include one or more audio sensors or microphones. 
other sensors may detect position, movement, temperature, heart rate, perspiration, etc. as noted above, in one or more embodiments, sensors may be inward facing. for example, image sensors worn by a user may be positioned and/or oriented to detect eye movements of the user, facial expressions of the user, or limb movements (arms, legs, hands) of the user. for example, audio sensors or microphones worn by a user may be positioned and/or oriented to detect utterances made by the user. such audio sensors or microphones may be directional and may be located proximate a mouth of the user during use. as noted above, sensors may be outward facing. for example, image sensors worn by a user may be positioned and/or oriented to visually detect the ambient environment in which the user is located and/or objects with which the user is interacting. in one or more embodiments, image-based sensors may refer to cameras (e.g., field-of-view cameras, ir cameras, eye tracking cameras, etc.). also for example, audio sensors or microphones worn by a user may be positioned and/or oriented to detect sounds in the ambient environment, whether from natural sources like other people, or generated from inanimate objects such as audio speakers. the outward facing sensors may detect other characteristics of the ambient environment. for example, outward facing sensors may include a temperature sensor or thermocouple that detects a temperature in the ambient environment. outward facing sensors may detect humidity, air quality, and/or air flow in the ambient environment. outward facing sensors may include light detectors (e.g., photodiodes) to detect an ambient light condition in the ambient environment. in one or more embodiments, light probes may also be used as part of the individual ar systems. outward facing sensors may include one or more sensors that detect a presence and/or absence of an object, including other people, in the ambient environment and/or movement in the ambient environment. 
physical space/room based sensor system
as illustrated in the system architecture 2600 of fig. 26 , in some implementations the ar system may include physical space or room based sensor systems. as illustrated in fig. 26 , the ar system 2602 not only draws from users' individual ar systems (e.g., head-mounted augmented reality display system, etc.) as shown in figs. 23 and 24 , but also may use room-based sensor systems 2604 to collect information about rooms and physical spaces. the space or room based sensor systems 2604 detect and/or collect information from a physical environment, for example a space such as a room (e.g., an office, living room, media room, kitchen or other physical space). the space or room based sensor system(s) 2604 typically includes one or more image sensors 2606, for instance one or more cameras (e.g., digital still cameras, digital moving image or video cameras). the image sensor(s) may be used in addition to image sensors which form part of the personal ar system(s) worn by the user(s), in one or more embodiments. the space or room based sensor systems may also include one or more audio sensors or transducers 2608, for example omni-directional or directional microphones. the audio sensors or transducers may detect sound from animate objects (e.g., one or more users or other people in the ambient environment). the audio sensors or transducers may detect sound from inanimate objects, for example footsteps, televisions, stereo systems, radios, or other appliances. the space or room based sensor systems 2604 may also include other environmental sensors 2610, for example temperature 2612, humidity 2614, air quality 2616, air flow or velocity, ambient light sensing, presence/absence, movement, etc., in the ambient environment. all these inputs feed back to the ar system 2602, as shown in fig. 26 . it should be appreciated that only some of the room-based sensors are shown in fig. 
26, and some embodiments may comprise fewer or more sensor sub-systems, and the embodiment of fig. 26 should not be seen as limiting. the space or room based sensor system(s) 2604 may detect and/or collect information with respect to a space or room based coordinate system. for example, visual or optical information and/or audio information may be referenced with respect to a location or source of such information within a reference frame that is different from a reference frame of the user. for example, the location of the source of such information may be identified within a reference frame of the space or room based sensor system or component thereof. the reference frame of the space or room based sensor system or component may be relatively fixed, and may be identical to a reference frame of the physical space itself. alternatively, one or more transformations (e.g., translation and/or rotation matrices) may mathematically relate the reference frame of the space or room based sensor system or component with the reference frame of the physical space. fig. 27 illustrates a communications architecture which employs one or more hub, central, or distributed server computer systems and one or more individual ar systems communicatively coupled by one or more wired or wireless networks, according to one illustrated embodiment. in one or more embodiments, a cloud server may refer to a server that is accessed by the one or more individual ar systems through a network (e.g., wired network, wireless network, bluetooth, cellular network, etc.). in the illustrated embodiment, the individual ar systems communicate with the cloud servers or server computer systems 2780 through a network 2704. in one or more embodiments, a cloud server may refer to a hosted server or processing system that is hosted at a different location, and is accessed by multiple users on demand through the internet or some type of network.
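the translation-and-rotation relation between a room based sensor's reference frame and the reference frame of the physical space itself, described above, can be sketched in a simplified two-dimensional form. this is an illustrative sketch only; the function name and the 2d reduction are assumptions, not part of the described system:

```python
import math

def room_to_world(point_xy, theta, translation):
    """transform a 2d point from a room-based sensor's reference frame into
    the physical space's reference frame via a rotation (theta, radians)
    and a translation. the 2d reduction and names are illustrative."""
    x, y = point_xy
    c, s = math.cos(theta), math.sin(theta)
    tx, ty = translation
    # p_world = R(theta) @ p_room + t
    return (c * x - s * y + tx, s * x + c * y + ty)

# a sound source detected 1 m ahead of a room sensor that is rotated
# 90 degrees and mounted 2 m from the room origin:
print(room_to_world((1.0, 0.0), math.pi / 2, (2.0, 0.0)))
```

the same composition of a rotation matrix and a translation vector extends directly to three dimensions.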
in one or more embodiments, a cloud server may be a set of multiple connected servers that comprise a cloud. the server computer systems 2780 may, for example, be clustered. for instance, clusters of server computer systems may be located at various geographically dispersed locations. such clustering may facilitate communications, shorten transit paths, and/or provide for redundancy. specific instances of personal ar systems 2708 may be communicatively coupled to the server computer system(s) 2780 through a cloud network 2704. the server computer system(s) 2780 may maintain information about a specific user's own physical and/or virtual worlds. the server computer system(s) 2780 may allow a given user to share information about the specific user's own physical and/or virtual worlds with other users. additionally or alternatively, the server computer system(s) 2780 may allow other users to share information about their own physical and/or virtual worlds with the given or specific user. as described herein, server computer system(s) 2780 may allow mapping and/or characterizations of large portions of the physical worlds. information may be collected via the personal ar system of one or more users. the models of the physical world may be developed over time, and by collection via a large number of users. this may allow a given user to enter a new portion or location of the physical world, yet benefit from information collected by others who either were previously or are currently in the particular location. models of virtual worlds may be created over time by a respective user. the individual ar system(s) 2708 may be communicatively coupled to the server computer system(s). for example, the personal ar system(s) 2708 may be wirelessly communicatively coupled to the server computer system(s) 2780 via one or more radios. the radios may take the form of short range radios, as discussed above, or relatively long range radios, for example cellular chip sets and antennas.
the individual ar system(s) 2708 will typically be communicatively coupled to the server computer system(s) 2780 indirectly, via some intermediary communications network or component. for instance, the individual ar system(s) 2708 will typically be communicatively coupled to the server computer system(s) 2780 via one or more telecommunications provider systems, for example one or more cellular communications provider networks. in many implementations, the ar system may include additional components. in one or more embodiments, the ar devices may, for example, include one or more haptic devices or components. the haptic device(s) or component(s) may be operable to provide a tactile sensation to a user. for example, the haptic device(s) or component(s) may provide a tactile sensation of pressure and/or texture when touching virtual content (e.g., virtual objects, virtual tools, other virtual constructs). the tactile sensation may replicate a feel of a physical object which a virtual object represents, or may replicate a feel of an imagined object or character (e.g., a dragon) which the virtual content represents. in some implementations, haptic devices or components may be worn by the user. an example of a haptic device in the form of a user wearable glove (e.g., fig. 34a) is described herein. in some implementations, haptic devices or components may be held by the user. other examples of haptic devices in the form of various haptic totems are described further below. the ar system may additionally or alternatively employ other types of haptic devices or user input components. the ar system may, for example, include one or more physical objects which are manipulable by the user to allow input or interaction with the ar system. these physical objects are referred to herein as totems, and will be described in further detail below.
some totems may take the form of inanimate objects, for example a piece of metal or plastic, a wall, a surface of a table. alternatively, some totems may take the form of animate objects, for example a hand of the user. as described herein, the totems may not actually have any physical input structures (e.g., keys, triggers, joystick, trackball, rocker switch). instead, the totem may simply provide a physical surface, and the ar system may render a user interface so as to appear to a user to be on one or more surfaces of the totem. for example, and as discussed in more detail further herein, the ar system may render an image of a computer keyboard and trackpad to appear to reside on one or more surfaces of a totem. for instance, the ar system may render a virtual computer keyboard and virtual trackpad to appear on a surface of a thin rectangular plate of aluminum which serves as a totem. the rectangular plate does not itself have any physical keys or trackpad or sensors. however, the ar system may detect user manipulation or interaction or touches with the rectangular plate as selections or inputs made via the virtual keyboard and/or virtual trackpad. many of these components are described in detail further below. passable world model the passable world model allows a user to effectively pass over a piece of the user's world (e.g., ambient surroundings, interactions, etc.) to another user. each user's respective individual ar system captures information as the user passes through or inhabits an environment, which the ar system processes to produce a passable world model. the individual ar system may communicate or pass the passable world model to a common or shared collection of data at the cloud. the individual ar system may communicate or pass the passable world model to other users of the ar system, either directly or via the cloud.
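the totem interaction described above (mapping detected touches on a featureless surface to virtual keyboard selections) can be sketched as a simple grid lookup. the plate dimensions, grid layout, and function name below are illustrative assumptions, not the system's actual implementation:

```python
def key_at(touch_x, touch_y, plate_w=30.0, plate_h=12.0, rows=4, cols=10):
    """map a detected touch on a featureless totem surface (e.g., a thin
    aluminum plate, dimensions in cm) to the cell of a virtual keyboard
    rendered on it. grid layout and dimensions are illustrative."""
    if not (0 <= touch_x < plate_w and 0 <= touch_y < plate_h):
        return None  # touch landed outside the rendered keyboard area
    col = int(touch_x // (plate_w / cols))
    row = int(touch_y // (plate_h / rows))
    return (row, col)

print(key_at(4.0, 7.0))   # a touch 4 cm across, 7 cm down -> (2, 1)
```

the physical plate itself reports nothing; the selection arises purely from the ar system's tracking of the touch position relative to the rendered interface.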
the passable world model provides the ability to efficiently communicate or pass information that essentially encompasses at least a field of view of a user. of course, it should be appreciated that other inputs (e.g., sensory inputs, image inputs, eye-tracking inputs, etc.) may additionally be transmitted to augment the passable world model at the cloud. fig. 28 illustrates the components of a passable world model 2800 according to one illustrated embodiment. as a user 2801 walks through an environment, the user's individual ar system 2810 captures information (e.g., images, location information, position and orientation information, etc.) and saves the information through pose-tagged images. in the illustrated embodiment, an image may be taken of the object 2820 (which resembles a table) and map points 2804 may be collected based on the captured image. this forms the core of the passable world model, as shown by multiple keyframes (e.g., cameras) 2802 that have captured information about the environment. as shown in fig. 28 , there may be multiple keyframes 2802 that capture information about a space at any given point in time. for example, a keyframe may be another user's ar system capturing information from a particular point of view. another keyframe may be a room-based camera/sensor system that is capturing images and points 2804 through a stationary point of view. by triangulating images and points from multiple points of view, the position and orientation of real objects in a 3d space may be determined. in one or more embodiments, the passable world model 2808 is a combination of raster imagery, point and descriptor clouds, and polygonal/geometric definitions (referred to herein as parametric geometry). all this information is uploaded to and retrieved from the cloud, a section of which corresponds to a particular space that the user may have walked into. as shown in fig.
28 , the passable world model also contains many object recognizers 2812 that work on the cloud or on the user's individual system 2810 to recognize objects in the environment based on points and pose-tagged images captured through the various keyframes of multiple users. essentially, by continually capturing information about the physical world through multiple keyframes 2802, the passable world is always growing, and may be consulted (continuously or as needed) in order to determine how to render virtual content in relation to existing physical objects of the real world. by collecting information from the user's environment, a piece of the passable world 2806 is constructed/augmented, and may be "passed" along to one or more ar users simultaneously or in the future. asynchronous communication is established between the user's respective individual ar system and the cloud based computers (e.g., server computers). in other words, the user's individual ar system is constantly updating information about the user's surroundings to the cloud, and also receiving information from the cloud about the passable world. thus, rather than each ar user having to capture images and recognize objects based on the captured images, having an asynchronous system allows the system to be more efficient. information that already exists about that part of the world is automatically communicated to the individual ar system while new information is updated to the cloud. it should be appreciated that the passable world model lives both in the cloud (or other form of networked computing or peer-to-peer system) and on the user's individual ar system. in one or more embodiments, the ar system may employ different levels of resolutions for the local components (e.g., a computational component such as the belt pack) and remote components (e.g., cloud based computers 2780).
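the multi-keyframe triangulation mentioned above (determining the position of real objects from points observed across multiple points of view) can be sketched in two dimensions. a minimal sketch only: the bearing-angle representation, the 2d reduction, and the function name are assumptions, not the system's actual implementation:

```python
import math

def triangulate(cam1, bearing1, cam2, bearing2):
    """estimate a 2d map point from two keyframe observations: each
    keyframe contributes a camera position and a bearing angle (radians)
    toward the same feature. intersects the two observation rays."""
    d1 = (math.cos(bearing1), math.sin(bearing1))
    d2 = (math.cos(bearing2), math.sin(bearing2))
    # solve cam1 + t1*d1 = cam2 + t2*d2 for t1 (2x2 linear system)
    denom = d1[0] * (-d2[1]) - (-d2[0]) * d1[1]
    rx, ry = cam2[0] - cam1[0], cam2[1] - cam1[1]
    t1 = (rx * (-d2[1]) - (-d2[0]) * ry) / denom
    return (cam1[0] + t1 * d1[0], cam1[1] + t1 * d1[1])

# two keyframes at (0,0) and (4,0) both observe a table corner at (2,2):
print(triangulate((0.0, 0.0), math.atan2(2, 2), (4.0, 0.0), math.atan2(2, -2)))
```

with more than two keyframes, the real system would additionally have to reconcile noisy, slightly inconsistent rays (e.g., by a least-squares estimate), which this sketch omits.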
this is because the remote components (e.g., resources that reside on the cloud servers) are typically more computationally powerful than local components. the cloud based computers may pick data collected by the many different individual ar systems, and/or one or more space or room based sensor systems, and utilize this information to add on to the passable world model. the cloud based computers may aggregate only the best (e.g., most useful) information into a persistent world model. in other words, redundant information and/or less-than-optimal quality information may be disposed of in a timely manner so as not to degrade the quality and/or performance of the system. fig. 29 illustrates an example method 2900 of interacting with the passable world model. at 2902, the user's individual ar system may detect a location and orientation of the user within the world. in one or more embodiments, the location may be derived by a topological map of the system, as will be described in further detail below. in other embodiments, the location may be derived by gps or any other localization tool. it should be appreciated that the passable world may be constantly accessed by the individual ar system. in another embodiment (not shown), the user may request access to another user's space, prompting the system to access that section of the passable world, and associated parametric information corresponding to the other user. thus, there may be many triggers for the passable world. at the simplest level, however, it should be appreciated that the passable world is constantly being updated and accessed by multiple user systems, thereby constantly adding and receiving information from the cloud. following the above example, based on the known location of the user, at 2904, the system may draw a radius denoting a physical area around the user that communicates both the position and intended direction of the user.
next, at 2906, the system may retrieve a piece of the passable world based on the anticipated position of the user. in one or more embodiments, the piece of the passable world may contain information from the geometric map of the space acquired through previous keyframes and captured images and data stored in the cloud. at 2908, the ar system uploads information from the user's environment into the passable world model. at 2910, based on the uploaded information, the ar system renders the passable world associated with the position of the user to the user's individual ar system. this information allows virtual content to meaningfully interact with the user's real surroundings in a coherent manner. for example, a virtual "monster" may be rendered to be originating from a particular building of the real world. or, in another example, a user may leave a virtual object in relation to physical coordinates of the real world such that a friend (also wearing the ar system) finds the virtual object in the same physical coordinates. in order to allow such capabilities (and many more), it is important for the ar system to constantly access the passable world to retrieve and upload information. it should be appreciated that the passable world contains persistent digital representations of real spaces that are crucially utilized in rendering virtual and/or digital content in relation to real coordinates of a physical space. it should be appreciated that the ar system may maintain coordinates of the real world and/or virtual world. in some embodiments, a third party may maintain the map (e.g., coordinates) of the real world, and the ar system may consult the map to determine one or more parameters in order to render virtual content in relation to real objects of the world. it should be appreciated that the passable world model does not itself render content that is displayed to the user.
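steps 2902 through 2906 of method 2900 above (locate the user, draw a radius encoding position and intended direction, retrieve the matching piece of the passable world) can be sketched as a simple spatial query. the section representation, the "anticipated position" heuristic, and all names below are illustrative assumptions:

```python
import math

def retrieve_piece(user_pos, heading, radius, world_sections):
    """return the ids of passable world sections within a radius of the
    user's anticipated position. world_sections maps a section id to its
    (x, y) anchor; heading is in radians. all names are illustrative."""
    # anticipate a position slightly ahead of the user along the heading
    ax = user_pos[0] + math.cos(heading) * radius * 0.5
    ay = user_pos[1] + math.sin(heading) * radius * 0.5
    return {sid for sid, (x, y) in world_sections.items()
            if math.hypot(x - ax, y - ay) <= radius}

sections = {"kitchen": (1.0, 0.0), "office": (9.0, 0.0), "garage": (30.0, 30.0)}
print(sorted(retrieve_piece((0.0, 0.0), 0.0, 10.0, sections)))
```

in the described system the returned piece would carry geometry, imagery, and parametric information for rendering, not just an id.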
rather, it is a high level concept of dynamically retrieving and updating a persistent digital representation of the real world in the cloud. in one or more embodiments, the derived geometric information is loaded onto a game engine, which then renders content associated with the passable world. thus, regardless of whether the user is in a particular space or not, that particular space has a digital representation in the cloud that can be accessed by any user. this piece of the passable world may contain information about the physical geometry of the space and imagery of the space, information about various avatars that are occupying the space, information about virtual objects and other miscellaneous information. as described in detail further herein, one or more object recognizers may examine or "crawl" the passable world models, tagging points that belong to parametric geometry. parametric geometry, points and descriptors may be packaged into passable world models, to allow low latency passing or communicating of information corresponding to a portion of a physical world or environment. in one or more embodiments, the ar system can implement a two tier structure, in which the passable world model allows fast pose processing in a first tier, but then inside that framework is a second tier (e.g., fast features). in one or more embodiments, the second tier structure can increase resolution by performing a frame-to-frame based three-dimensional (3d) feature mapping. fig. 30 illustrates an example method 3000 of recognizing objects through object recognizers. at 3002, when a user walks into a room, the user's individual ar system captures information (e.g., images, sensor information, pose tagged images, etc.) about the user's surroundings from multiple points of view. at 3004, a set of 3d points may be extracted from the one or more captured images.
for example, by the time the user walks into a section of a room, the user's individual ar system has already captured numerous keyframes and pose tagged images about the surroundings (similar to the embodiment shown in fig. 28 ). it should be appreciated that in one or more embodiments, each keyframe may include information about the depth and color of the objects in the surroundings. in one or more embodiments, the object recognizers (either locally or in the cloud) may use image segmentation techniques to find one or more objects. it should be appreciated that different objects may be recognized by their own object recognizers that have been written by developers and programmed to recognize that particular object. for illustrative purposes, the following example will assume that the object recognizer recognizes doors. the object recognizer may be an autonomous and/or atomic software object or "robot" that utilizes the pose tagged images of the space, including keyframes and 2d and 3d feature points taken from multiple keyframes, and uses this information, and the geometry of the space, to recognize one or more objects (e.g., the door). it should be appreciated that multiple object recognizers may run simultaneously on a set of data, and multiple object recognizers may run independent of each other. it should be appreciated that the object recognizer takes 2d images of the object (2d color information, etc.), 3d images (depth information) and also takes 3d sparse points to recognize the object in a geometric coordinate frame of the world. next, at 3006, the object recognizer(s) may correlate the 2d segmented image features with the sparse 3d points to derive object structures and one or more properties about the object using 2d/3d data fusion. for example, the object recognizer may identify specific geometry of the door with respect to the keyframes. next, at 3008, the object recognizer parameterizes the geometry of the object.
for example, the object recognizer may attach semantic information to the geometric primitive (e.g., the door has a hinge, the door can rotate 90 degrees, etc.) of the object. or, the object recognizer may reduce the size of the door, to match the rest of the objects in the surroundings, etc. at 3010, the ar system may synchronize the parametric geometry of the objects to the cloud. next, at 3012, the object recognizer may re-insert the geometric and parametric information into the passable world model. for example, the object recognizer may dynamically estimate the angle of the door, and insert it into the world. thus, it can be appreciated that using the object recognizer allows the system to save computational power because, rather than constantly requiring real-time capture of information about the angle of the door or movement of the door, the object recognizer uses the stored parametric information to estimate the movement or angle of the door. this allows the system to function independently based on computational capabilities of the individual ar system without necessarily relying on information in the cloud servers. it should be appreciated that this information may be updated to the cloud, and transmitted to other ar systems such that virtual content may be appropriately displayed in relation to the recognized door. as briefly discussed above, object recognizers are atomic autonomous software and/or hardware modules which ingest sparse points (e.g., not necessarily a dense point cloud), pose-tagged images, and geometry, and produce parametric geometry that has semantics attached. the semantics may take the form of taxonomical descriptors, for example "wall," "chair," "aeron ® chair," and properties or characteristics associated with the taxonomical descriptor. for example, a taxonomical descriptor such as a table may have associated descriptions such as "has a flat horizontal surface which can support other objects."
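the door example above (steps 3008 and 3012: parameterize the geometry, attach semantics such as the hinge and the 90-degree swing, and estimate the current angle from the stored parametric model) can be sketched as follows. the flat dictionary representation and all names are illustrative assumptions:

```python
import math

def parameterize_door(hinge, free_edge):
    """derive a parametric, semantically tagged representation of a
    recognized door from its hinge position and the position of its free
    edge (2d, illustrative). later frames can estimate the door's state
    from these parameters instead of re-capturing dense data."""
    dx, dy = free_edge[0] - hinge[0], free_edge[1] - hinge[1]
    return {
        "type": "door",
        "hinge": hinge,                  # semantic: the door has a hinge
        "rotates_about": "hinge",
        "max_swing_deg": 90.0,           # semantic: the door can rotate 90 degrees
        "angle_deg": math.degrees(math.atan2(dy, dx)),  # current estimated angle
    }

door = parameterize_door(hinge=(0.0, 0.0), free_edge=(0.0, 0.9))
print(door["angle_deg"])   # 90.0
```

once such a record is synchronized to the cloud, other ar systems can render virtual content relative to the recognized door without re-running recognition themselves.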
given an ontology, an object recognizer turns images, points, and optionally other geometry, into geometry that has meaning (e.g., semantics). since the individual ar systems are intended to operate in the real world environment, the points represent sparse, statistically relevant, natural features. natural features are those that are inherent to the object (e.g., edges, holes), in contrast to artificial features added (e.g., printed, inscribed or labeled) to objects for the purpose of machine-vision recognition. the points do not necessarily need to be visible to humans. it should be appreciated that the points are not limited to point features, and may include, e.g., line features and high dimensional features. in one or more embodiments, object recognizers may be categorized into two types, type 1 - basic objects (e.g., walls, cups, chairs) and type 2 - detailed objects (e.g., aeron ® chair, my wall, etc.). in some implementations, the type 1 recognizers run across the entire cloud, whereas the type 2 recognizers run against previously found type 1 data (e.g., search all chairs for aeron ® chairs). in one or more embodiments, the object recognizers may use inherent properties of an object to facilitate object identification. or, in other embodiments, the object recognizers may use ontological relationships between objects in order to facilitate implementation. for example, an object recognizer may use the fact that a window may be "in" a wall to facilitate recognition of instances of windows. in one or more embodiments, object recognizers may be bundled, partnered or logically associated with one or more applications. for example, a "cup finder" object recognizer may be associated with one, two or more applications in which identifying a presence of a cup in a physical space would be useful. for example, a coffee company may create its own "cup finder" application that allows for the recognition of cups provided by the coffee company.
this may allow delivery of virtual content/advertisements, etc. related to the coffee company, and may directly and/or indirectly encourage participation or interest in the coffee company. applications can be logically connected to or associated with defined recognizable visual data or models. for example, in response to a detection of any aeron ® chairs in an image, the ar system calls or executes an application from the herman miller company, the manufacturer and/or seller of aeron ® chairs. similarly, in response to detection of a starbucks ® sign or logo in an image, the ar system calls or executes a starbucks ® application. in yet another example, the ar system may employ an instance of a generic wall finder object recognizer. the generic wall finder object recognizer identifies instances of walls in image information, without regard to specifics about a wall. thus, the generic wall finder object recognizer may identify vertically oriented surfaces that constitute walls in the image data. the ar system may also employ an instance of a specific wall finder object recognizer, which is separate and distinct from the generic wall finder. the specific wall finder object recognizer identifies vertically oriented surfaces that constitute walls in the image data and which have one or more specific characteristics beyond those of a generic wall. for example, a given specific wall may have one or more windows in defined positions, one or more doors in defined positions, may have a defined paint color, may have artwork hung from the wall, etc., which visually distinguishes the specific wall from other walls. such features allow the specific wall finder object recognizer to identify particular walls. for example, one instance of a specific wall finder object recognizer may identify a wall of a user's office. other instances of specific wall finder object recognizers may identify respective walls of a user's living room or bedroom.
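the generic/specific wall finder pairing above, with the specific recognizer running nested against the generic recognizer's results, can be sketched as two filters. the surface representation (normal vectors and a feature dictionary standing in for real image data) is an illustrative assumption:

```python
def generic_wall_finder(surfaces):
    """type of generic recognizer: identifies vertically oriented surfaces
    as walls, without regard to specifics about any wall. a near-horizontal
    normal vector stands in for the image evidence; illustrative only."""
    return [s for s in surfaces if abs(s["normal"][2]) < 0.1]

def specific_wall_finder(walls, signature):
    """runs nested against the generic finder's output, matching one or
    more distinguishing characteristics (e.g., a window in a defined
    position) to identify a particular wall, such as a user's office wall."""
    return [w for w in walls if signature.items() <= w.get("features", {}).items()]

surfaces = [
    {"id": "floor", "normal": (0, 0, 1)},
    {"id": "office_wall", "normal": (1, 0, 0), "features": {"window_at": (2, 1)}},
    {"id": "plain_wall", "normal": (0, 1, 0), "features": {}},
]
walls = generic_wall_finder(surfaces)
print([w["id"] for w in specific_wall_finder(walls, {"window_at": (2, 1)})])
```

the same nesting pattern extends to the type 1 / type 2 split described earlier, e.g., a detailed aeron ® chair recognizer filtering chairs previously found by a basic chair recognizer.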
a specific object recognizer may stand independently from a generic object recognizer. for example, a specific wall finder object recognizer may run completely independently from a generic wall finder object recognizer, not employing any information produced by the generic wall finder object recognizer. alternatively, a specific (e.g., more refined) object recognizer may be run nested against objects previously found by a more generic object recognizer. for example, a generic and/or a specific door finder object recognizer may run against a wall found by a generic and/or specific wall finder object recognizer, since a door must be in a wall. likewise, a generic and/or a specific window finder object recognizer may run against a wall found by a generic and/or specific wall finder object recognizer, since a window must be "in" a wall. in one or more embodiments, an object recognizer may not only identify the existence or presence of an object, but may also identify other characteristics associated with the object. for example, a generic or specific door finder object recognizer may identify a type of door, whether the door is hinged or sliding, where the hinge or slide is located, whether the door is currently in an open or a closed position, and/or whether the door is transparent or opaque, etc. as noted above, each object recognizer is atomic; that is, the object recognizer is autonomous, asynchronous, and essentially a black box software object. this allows object recognizers to be community-built. developers may be incentivized to build object recognizers. for example, an online marketplace or collection point for object recognizers may be established. object recognizer developers may be allowed to post object recognizers for linking or associating with applications developed by other object recognizer or application developers. various other incentives may be similarly provided.
also for example, an incentive may be provided to an object recognizer developer or author based on the number of times an object recognizer is logically associated with an application and/or based on the total number of distributions of an application to which the object recognizer is logically associated. as a further example, an incentive may be provided to an object recognizer developer or author based on the number of times an object recognizer is used by applications that are logically associated with the object recognizer. the incentives may be monetary incentives, in one or more embodiments. in other embodiments, the incentive may comprise providing access to services or media behind a pay-wall, and/or providing credits for acquiring services, media, or goods. it would, for example, be possible to instantiate any number of distinct generic and/or specific object recognizers. some embodiments may require a very large number of generic and specific object recognizers. these generic and/or specific object recognizers can all be run against the same data. as noted above, some object recognizers can be nested such that they are essentially layered on top of each other. in one or more embodiments, a control program may control the selection, use or operation of the various object recognizers, for example arbitrating the use or operation thereof. some object recognizers may be placed in different regions, to ensure that the object recognizers do not overlap each other. as discussed above, the object recognizers may run locally at the individual ar system's belt pack, or may be run on one or more cloud servers. ring buffer of object recognizers fig. 31 shows a ring buffer 3100 of object recognizers, according to one illustrated embodiment. the ar system may organize the object recognizers in a ring topology, for example to achieve low disk-read utilization. the various object recognizers may sit on or along the ring, all running in parallel.
passable world model data (e.g., walls, ceiling, floor) may be run through the ring, in one or more embodiments. as the data rolls by, each object recognizer collects the data relevant to the object which the object recognizer recognizes. some object recognizers may need to collect large amounts of data, while others may only need to collect small amounts of data. the respective object recognizers collect whatever data they require, and return results in the same manner described above. in the illustrated embodiment, the passable world data 3116 runs through the ring. starting clockwise, a generic wall object recognizer 3102 may first be run on the passable world data 3116. the generic wall object recognizer 3102 may recognize an instance of a wall 3118. next, a specific wall object recognizer 3104 may run on the passable world data 3116. similarly, a table object recognizer 3106 and a generic chair object recognizer 3108 may be run on the passable world data 3116. specific object recognizers may also be run on the data, such as the specific aeron ® object recognizer 3110 that successfully recognizes an instance of the aeron chair 3120. in one or more embodiments, bigger, or more generic, object recognizers may go through the data first, and smaller, finer-detail recognizers may run through the data after the bigger ones are done. going through the ring, a cup object recognizer 3112 and a fork object recognizer 3114 may be run on the passable world data 3116. avatars in the passable world as an extension of the passable world model, not only are objects recognized, but other users/people of the real world may also be recognized and rendered as virtual objects. for example, as discussed above, a friend of a first user may be rendered as an avatar at the ar system of the first user.
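the ring topology described above (passable world data rolling past each recognizer in order, with each recognizer collecting only the data relevant to its object) can be sketched sequentially. in the real system the recognizers run in parallel; the sequential loop and the predicate-based recognizer representation below are simplifying assumptions:

```python
def run_ring(recognizers, passable_world_data):
    """stream passable world data past an ordered ring of recognizers.
    each recognizer is a (name, predicate) pair that collects the items
    relevant to the object it recognizes; illustrative sketch only."""
    results = {}
    for item in passable_world_data:          # data rolls by on the ring
        for name, predicate in recognizers:   # each recognizer sits on the ring
            if predicate(item):
                results.setdefault(name, []).append(item)
    return results

ring = [
    ("generic_wall", lambda d: d["kind"] == "wall"),
    ("generic_chair", lambda d: d["kind"] == "chair"),
    ("aeron", lambda d: d["kind"] == "chair" and d.get("model") == "aeron"),
]
data = [{"kind": "wall"}, {"kind": "chair", "model": "aeron"}, {"kind": "cup"}]
print(sorted(run_ring(ring, data)))   # ['aeron', 'generic_chair', 'generic_wall']
```

ordering the ring from generic to specific mirrors the described behavior of bigger recognizers going through the data before the finer-detail ones.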
in some implementations, in order to render an avatar that properly mimics the user, the user may train the ar system, for example by moving through a desired or prescribed set of movements. in response, the ar system may generate an avatar sequence in which an avatar replicates the movements, for example, by animating the avatar. thus, the ar system captures or receives images of a user, and generates animations of an avatar based on movements of the user in the captured images. the user may be instrumented, for example, by wearing one or more sensors. in one or more embodiments, the ar system knows the pose of the user's head, eyes, and/or hands based on data captured by various sensors of his/her individual ar system. in one or more embodiments, the ar system may allow the user to "set up" an avatar and "train" the avatar based on predetermined movements and/or patterns. the user can, for example, simply act out some motions for training purposes. in one or more embodiments, the ar system may perform a reverse kinematics analysis of the rest of the user's body, and may create an animation based on the reverse kinematics analysis. in one or more embodiments, the passable world may also contain information about various avatars inhabiting a space. it should be appreciated that every user may be rendered as an avatar in one embodiment. or, a user operating an individual ar system from a remote location can create an avatar and digitally occupy a particular space as well. in either case, since the passable world is not a static data structure, but rather constantly receives information, avatar rendering and remote presence of users into a space may be based on the user's interaction with the user's individual ar system. thus, rather than constantly updating an avatar's movement based on keyframes, as captured by cameras, avatars may be rendered based on a user's interaction with his/her individual augmented reality device.
advantageously, this reduces the need for individual ar systems to retrieve data from the cloud, and instead allows the system to perform a large number of computation tasks involved in avatar animation on the individual ar system itself. more particularly, the user's individual ar system contains information about the user's head pose and orientation in a space, information about hand movement etc. of the user, information about the user's eyes and eye gaze, and information about any totems that are being used by the user. thus, the user's individual ar system already holds a lot of information about the user's interaction within a particular space that is transmitted to the passable world model. this information may then be reliably used to create avatars for the user and help the avatar communicate with other avatars or users of that space. it should be appreciated that in one or more embodiments, third party cameras may not be needed to animate the avatar. rather, the avatar may be animated based on the user's individual ar system, and then transmitted to the cloud to be viewed/interacted with by other users of the ar system. in one or more embodiments, the ar system captures a set of data pertaining to the user through the sensors of the ar system. for example, accelerometers, gyroscopes, depth sensors, ir sensors, image-based cameras, etc. may determine a movement of the user relative to the head mounted system. this movement may be computed through the processor and translated through one or more algorithms to produce a similar movement in a chosen avatar. the avatar may be selected by the user, in one or more embodiments. or, in other embodiments, the avatar may simply be selected by another user who is viewing the avatar. or, the avatar may simply be a virtual, real-time, dynamic image of the user itself.
based on the captured set of data pertaining to the user (e.g., movement, emotions, direction of movement, speed of movement, physical attributes, movement of body parts relative to the head, etc.) a pose of the sensors (e.g., sensors of the individual ar system) relative to the user may be determined. the pose (e.g., position and orientation) allows the system to determine a point of view from which the movement/set of data was captured such that it can be translated/transformed accurately. based on this information, the ar system may determine a set of parameters related to the user's movement (e.g., through vectors) and animate a desired avatar with the calculated movement. any similar method may be used to animate an avatar to mimic the movement of the user. it should be appreciated that the movement of the user and the movement of the avatar (e.g., in the virtual image being displayed at another user's individual ar device) are coordinated such that the movement is captured and transferred to the avatar in as little time as possible. ideally, the time lag between the captured movement of the user and the animation of the avatar should be minimal. for example, if the user is not currently at a conference room, but wants to insert an avatar into that space to participate in a meeting at the conference room, the ar system takes information about the user's interaction with his/her own system and uses those inputs to render the avatar into the conference room through the passable world model. the avatar may be rendered such that the avatar takes the form of the user's own image such that it looks like the user himself/herself is participating in the conference. or, based on the user's preference, the avatar may be any image chosen by the user. for example, the user may render himself/herself as a bird that flies around the space of the conference room.
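The step of transforming a captured movement through the sensor pose before applying it to the avatar can be sketched as follows. This is a minimal 2-D illustration under assumed names (`sensor_to_world`, `animate_avatar`), not the system's actual animation pipeline, which would work with full 3-D poses and skeletal rigs.

```python
import math

# A movement vector captured in the sensor's frame is rotated into the world
# frame using the sensor's pose (here reduced to a yaw angle), then applied
# to the avatar's position.

def sensor_to_world(vec, sensor_yaw_rad):
    """Rotate a 2-D movement vector from the sensor frame into the world frame."""
    c, s = math.cos(sensor_yaw_rad), math.sin(sensor_yaw_rad)
    x, y = vec
    return (c * x - s * y, s * x + c * y)

def animate_avatar(avatar_pos, captured_move, sensor_yaw_rad):
    """Apply the world-frame movement to the avatar."""
    dx, dy = sensor_to_world(captured_move, sensor_yaw_rad)
    return (avatar_pos[0] + dx, avatar_pos[1] + dy)

# User steps one unit "forward" as seen by a sensor rotated 90 degrees:
# in the world frame, the avatar moves along +y instead of +x.
new_pos = animate_avatar((0.0, 0.0), (1.0, 0.0), math.pi / 2)
```

This is why the pose determination matters: without knowing the sensor's orientation, the same captured vector would move the avatar in the wrong world direction.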
at the same time, information about the conference room (e.g., key frames, points, pose-tagged images, avatar information of people in the conference room, recognized objects, etc.) may be rendered as virtual content to the user who is not currently in the conference room. in the physical space, the system may have captured keyframes that are geometrically registered and may then derive points from the captured keyframes. as mentioned before, based on these points, the system may calculate pose and may run object recognizers, and may reinsert parametric geometry into the keyframes, such that the points of the keyframes also have semantic information attached to them. thus, with all this geometric and semantic information, the conference room may now be shared with other users. for example, the conference room scene may be rendered on the user's table. thus, even if there is no camera at the conference room, the passable world model, using information collected through prior key frames etc., is able to transmit information about the conference room to other users and recreate the geometry of the room for other users in other spaces.

topological map

an integral part of the passable world model is the creation of maps of very minute areas of the real world. for example, in order to render virtual content in relation to physical objects, very detailed localization is required. such localization may not be achieved simply through gps or traditional location detection techniques. for example, the ar system may not only require coordinates of a physical location that a user is in, but may, for example, need to know exactly what room of a building the user is located in. based on this information, the ar system may retrieve data (e.g., specific geometries of real objects in the room, map points for the room, geometric information of the room, etc.) for that room to appropriately display virtual content in relation to the real objects of the identified room.
at the same time, however, this precise, granular localization may be done in a cost-effective manner such that not too many resources are consumed unnecessarily. to this end, the ar system may use topological maps for localization purposes instead of gps or retrieving detailed geometric maps created from extracted points and pose tagged images (e.g., the geometric points may be too specific, and hence more costly). in one or more embodiments, the topological map is a simplified representation of physical spaces in the real world that is easily accessible from the cloud and only presents a fingerprint of a space, and the relationship between various spaces. further details about the topological map will be provided further below. in one or more embodiments, the ar system may layer topological maps on the passable world model, for example to localize nodes. the topological map can layer various types of information on the passable world model, for instance: point cloud, images, objects in space, global positioning system (gps) data, wi-fi data, histograms (e.g., color histograms of a room), received signal strength (rss) data, etc. this allows various layers of information (e.g., a more detailed layer of information to interact with a more high-level layer) to be placed in context with each other, such that it can be easily retrieved. this information may be thought of as fingerprint data; in other words, it is designed to be specific enough to be unique to a location (e.g., a particular room). as discussed above, in order to create a complete virtual world that can be reliably passed between various users, the ar system captures different types of information about the user's surroundings (e.g., map points, features, pose tagged images, objects in a scene, etc.). this information is processed and stored in the cloud such that it can be retrieved as needed.
as mentioned previously, the passable world model is a combination of raster imagery, point and descriptor clouds, and polygonal/geometric definitions (referred to herein as parametric geometry). thus, it should be appreciated that the sheer amount of information captured through the users' individual ar systems allows for high quality and accuracy in creating the virtual world. in other words, since the various ar systems (e.g., user-specific head-mounted systems, room-based sensor systems, etc.) are constantly capturing data corresponding to the immediate environment of the respective ar system, very detailed and accurate information about the real world at any point in time may be known with a high degree of certainty. although this amount of information is highly useful for a host of ar applications, for localization purposes, sorting through that much information to find the piece of passable world most relevant to the user is highly inefficient and costs precious bandwidth. to this end, the ar system creates a topological map that essentially provides less granular information about a particular scene or a particular place. in one or more embodiments, the topological map may be derived through global positioning system (gps) data, wi-fi data, histograms (e.g., color histograms of a room), received signal strength (rss) data, etc. for example, the topological map may be created by histograms (e.g., a color histogram) of various rooms/areas/spaces, and be reduced to a node on the topological map. for example, when a user walks into a room or space, the ar system may take a single image (or other information) and construct a color histogram of the image. it should be appreciated that on some level, the histogram of a particular space will be mostly constant over time (e.g., the color of the walls, the color of objects of the room, etc.). in other words, each room or space has a distinct signature that is different from any other room or place.
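The color-histogram fingerprint described above can be sketched as follows. The bin count, the L1 similarity metric, and the pixel format are assumptions for illustration; the actual system may use any histogram formulation.

```python
# Illustrative sketch: reduce an image to a coarse, normalized color histogram
# that acts as a room "fingerprint", then compare fingerprints by distance.

def color_histogram(pixels, bins=4):
    """pixels: list of (r, g, b) tuples with 0-255 channels -> flat histogram."""
    hist = [0] * (bins ** 3)
    step = 256 // bins
    for r, g, b in pixels:
        idx = (r // step) * bins * bins + (g // step) * bins + (b // step)
        hist[idx] += 1
    total = len(pixels) or 1
    return [h / total for h in hist]

def histogram_distance(h1, h2):
    """L1 distance; a small distance suggests the same room."""
    return sum(abs(a - b) for a, b in zip(h1, h2))

# A mostly-red room photographed twice (slightly different lighting),
# and a mostly-green room:
room_a = color_histogram([(200, 30, 30)] * 8 + [(20, 20, 200)] * 2)
room_a_again = color_histogram([(205, 25, 35)] * 8 + [(25, 15, 190)] * 2)
room_b = color_histogram([(30, 200, 30)] * 10)
```

Because the coarse bins absorb small lighting changes, the two captures of the same room stay much closer to each other than to any other room's fingerprint, which is what makes the histogram usable as a signature.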
this unique histogram may be compared to other histograms of other spaces/areas and identified. now that the ar system knows what room the user is in, the remaining granular information may be easily accessed and downloaded. thus, although the histogram will not contain particular information about all the features and points that have been captured by various cameras (keyframes), the system may immediately detect, based on the histogram, where the user is, and then retrieve all the more particular geometric information associated with that particular room or place. in other words, rather than sorting through the vast amount of geometric and parametric information that encompasses the passable world model, the topological map allows for a quick and efficient way to localize the ar user. based on the localization, the ar system retrieves the keyframes and points that are most relevant to the identified location. for example, after the system has determined that the user is in a conference room of a building, the system may then retrieve all the keyframes and points associated with the conference room rather than searching through all the geometric information stored in the cloud. referring now to fig. 32, an example embodiment of a topological map 3200 is presented. as discussed above, the topological map 3200 may be a collection of nodes 3202 and connections 3204 between the nodes 3202 (e.g., represented by connecting lines). each node 3202 represents a particular location (e.g., the conference room of an office building) having a distinct signature or fingerprint (e.g., gps information, color histogram or other histogram, wi-fi data, rss data, etc.) and the lines may represent the connectivity between them. it should be appreciated that the connectivity may not have anything to do with geographical connectivity, but rather may simply be a shared device or a shared user. for example, a first user may have walked from a first node to a second node.
this relationship may be represented through a connection between the nodes. as the number of ar users increases, the nodes and connections between the nodes will also proportionally increase, providing more precise information about various locations. once the ar system has identified a node of the topological map, the system may then retrieve a set of geometric information pertaining to the node to determine how/where to display virtual content in relation to the real objects of that space. thus, layering the topological map on the geometric map is especially helpful for localization and efficiently retrieving only relevant information from the cloud. in one or more embodiments, the ar system can represent two images captured by respective cameras of a part of the same scene in a graph theoretic context as first and second pose tagged images. it should be appreciated that the cameras in this context may refer to a single camera taking images of the scene from different positions, or to two different cameras. there is some strength of connection between the pose tagged images, which could, for example, be the points that are in the fields of view of both cameras. in one or more embodiments, the cloud based computer may construct such a graph (e.g., a topological representation of a geometric world similar to that of fig. 32). the total number of nodes and edges in the graph is much smaller than the total number of points in the images. at a higher level of abstraction, other information monitored by the ar system can be hashed together. for example, the cloud based computer(s) may hash together one or more of global positioning system (gps) location information, wi-fi location information (e.g., signal strengths), color histograms of a physical space, and/or information about physical objects around a user. the more points of data there are, the more likely that the computer will statistically have a unique identifier for that space.
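The idea of hashing several coarse signals together into a statistically unique space identifier can be sketched as follows. The signal names, bucketing, and use of SHA-256 are all illustrative assumptions, not the system's actual encoding.

```python
import hashlib

# Sketch: combine a GPS cell, the visible Wi-Fi SSIDs, and a coarse histogram
# bucket into one stable fingerprint string, then hash it. The same signals
# always hash to the same identifier; different signals almost never collide.

def space_fingerprint(gps_cell, wifi_ssids, histogram_bucket):
    # Sort the SSIDs so the fingerprint is independent of scan order.
    payload = "|".join([gps_cell, ",".join(sorted(wifi_ssids)), histogram_bucket])
    return hashlib.sha256(payload.encode()).hexdigest()[:16]

office = space_fingerprint("37.4,-122.1", {"corp-net", "guest"}, "warm-red")
# The same signals always produce the same identifier...
office_again = space_fingerprint("37.4,-122.1", {"guest", "corp-net"}, "warm-red")
# ...while a different room in the same GPS cell produces a different one.
lobby = space_fingerprint("37.4,-122.1", {"guest"}, "cool-blue")
```

The more signals that go into the payload, the less likely two distinct spaces are to share an identifier, which is the "statistically unique" property the text relies on.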
in this case, space is a statistically defined concept. as an example, an office may be a space that is represented as, for example, a large number of points and two dozen pose tagged images. the same space may be represented topologically as a graph having only a certain number of nodes (e.g., 5, 25, 100, 1000, etc.), which can be easily hashed against. graph theory allows representation of connectedness, for example as an algorithmically computed shortest path between two spaces. thus, the system abstracts away from the specific geometry by turning the geometry into pose tagged images having implicit topology. the system takes the abstraction a level higher by adding other pieces of information, for example color histogram profiles, and the wi-fi signal strengths. this makes it easier for the system to identify an actual real world location of a user without having to understand or process all of the geometry associated with the location. fig. 33 illustrates an example method 3300 of constructing a topological map. first, at 3302, the user's individual ar system may capture an image from a first point of view of a particular location (e.g., the user walks into a room of a building, and an image is captured from that point of view). at 3304, a color histogram may be generated based on the captured image. as discussed before, the system may use any other type of identifying information (e.g., wi-fi data, rss information, gps data, number of windows, etc.), but the color histogram is used in this example for illustrative purposes. next, at 3306, the system runs a search to identify the location of the user by comparing the color histogram to a database of color histograms stored in the cloud. at 3310, a decision is made to determine whether the color histogram matches an existing color histogram stored in the cloud. if the color histogram does not match any color histogram of the database of color histograms, it may then be stored as a new node in the topological map 3314.
if the color histogram matches an existing color histogram of the database, the location is identified 3312, and the appropriate geometric information is provided to the individual ar system. continuing with the same example, the user may walk into another room or another location, where the user's individual ar system takes another picture and generates another color histogram of the other location. if the color histogram is the same as the previous color histogram or any other color histogram, the ar system identifies the location of the user. if the color histogram is not the same as a stored histogram, another node is created on the topological map. additionally, since the first node and second node were taken by the same user (or same camera/same individual user system), the two nodes are connected in the topological map. in one or more embodiments, the ar system may employ mesh networking localization. the individual ar system has a native knowledge of position. this allows explicit construction of topological maps, with connections weighted by distance, as discussed above. this permits the use of optimal mesh network algorithms by the ar system. thus, the ar system can optimize mobile communications routing based on its known absolute pose. the ar system can use ultra wide bandwidth (uwb) communications infrastructure for both communications and localization, in addition to the machine vision. in addition to aiding in localization, the topological map may also be used to improve/fix errors and/or missing information in geometric maps. in one or more embodiments, topological maps may be used to find loop-closure stresses in geometric maps or geometric configurations of a particular place.
as discussed above, for any given location or space, images taken by one or more ar systems (multiple field of view images captured by one user's individual ar system or multiple users' ar systems) give rise to a large number of map points of the particular space. for example, a single room may correspond to thousands of map points captured through multiple points of views of various cameras (or one camera moving to various positions). the ar system utilizes map points to recognize objects (through object recognizers) as discussed above, and to add on to the passable world model in order to store a more comprehensive picture of the geometry of various objects of the real world. in one or more embodiments, map points derived from various key frames may be used to triangulate the pose and orientation of the camera that captured the images. in other words, the collected map points may be used to estimate the pose (e.g., position and orientation) of the keyframe (e.g., camera) capturing the image. it should be appreciated, however, that given the large number of map points and keyframes, there are bound to be some errors (e.g., stresses) in this calculation of keyframe position based on the map points. to account for these stresses, the ar system may perform a bundle adjust. a bundle adjust allows for the refinement, or optimization, of the map points and keyframes to minimize the stresses in the geometric map. for example, as illustrated in fig. 34, an example geometric map is presented. as shown in fig. 34, the geometric map may be a collection of keyframes 3402 that are all connected to each other. the keyframes 3402 may represent a point of view from which various map points are derived for the geometric map. in the illustrated embodiment, each node of the geometric map represents a keyframe (e.g., camera), and the various keyframes are connected to each other through connecting lines 3404.
in the illustrated embodiment, the strength of the connection between the different keyframes is represented by the thickness of the connecting lines 3404. for example, as shown in fig. 34, the connecting line between node 3402a and node 3402b is depicted as thicker than the connecting line between node 3402a and node 3402f. the connecting line between node 3402a and node 3402d is also depicted as thicker than the connecting line between node 3402b and node 3402d. in one or more embodiments, the thickness of the connecting lines represents the number of features or map points shared between them. for example, if a first keyframe and a second keyframe are close together, they may share a large number of map points (e.g., node 3402a and node 3402b), and may thus be represented with a thicker connecting line. of course, it should be appreciated that other ways of representing geometric maps may be similarly used. for example, the strength of the line may be based on a geographical proximity between the keyframes, in another embodiment. thus, as shown in fig. 34, each geometric map represents a large number of keyframes 3402 and their connection to each other. now, assuming that a stress is identified in a particular point of the geometric map, a bundle adjust may be performed to alleviate the stress by pushing the stress out radially from the identified point of stress 3406. the stress is pushed out radially in waves 3408 (e.g., n=1, n=2, etc.) propagating from the point of stress, as will be described in further detail below. the following description illustrates an example method of performing a wave propagation bundle adjust. it should be appreciated that all the examples below refer solely to wave propagation bundle adjusts, and other types of bundle adjusts may be similarly used in other embodiments. first, a particular point of stress is identified. in the illustrated embodiment of fig.
34, consider the center (node 3402a) to be the identified point of stress. for example, the system may determine that the stress at a particular point of the geometric map is especially high (e.g., residual errors, etc.). the stress may be identified based on one of two reasons. first, a maximum residual error may be defined for the geometric map. if a residual error at a particular point is greater than the predefined maximum residual error, a bundle adjust may be initiated. second, a bundle adjust may be initiated in the case of loop closure stresses, as will be described further below (when a topological map indicates mis-alignments of map points). when a stress is identified, the ar system distributes the error evenly, starting with the point of stress and propagating it radially through a network of nodes that surround the particular point of stress. for example, in the illustrated embodiment, the bundle adjust may distribute the error to n=1 (one degree of separation from the identified point of stress, node 3402a) around the identified point of stress. in the illustrated embodiment, nodes 3402b-3402g are all part of the n=1 wave around the point of stress, node 3402a. in some cases, this may be sufficient. in other embodiments, the ar system may propagate the stress even further, and push out the stress to n=2 (two degrees of separation from the identified point of stress, node 3402a), or n=3 (three degrees of separation from the identified point of stress, node 3402a) such that the stress is radially pushed out further and further until the stress is distributed evenly. thus, performing the bundle adjust is an important way of reducing stress in the geometric maps. ideally, the stress is pushed out to n=2 or n=3 for better results. in one or more embodiments, the waves may be propagated in even smaller increments.
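The wave-distribution step above can be sketched as a breadth-first search over the keyframe graph: collect all nodes within n degrees of separation of the stress point, then spread the error evenly over that wave. This is a toy model under assumed names; a real bundle adjust jointly optimizes poses and map points rather than averaging a scalar error.

```python
from collections import deque

def nodes_within(graph, start, max_degree):
    """BFS: all nodes within max_degree hops of start (including start)."""
    seen = {start: 0}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if seen[node] == max_degree:
            continue  # do not expand past the wave front
        for nbr in graph[node]:
            if nbr not in seen:
                seen[nbr] = seen[node] + 1
                queue.append(nbr)
    return set(seen)

def propagate_stress(graph, errors, stress_node, degree):
    """Distribute the stress node's error evenly over the n-degree wave."""
    wave = nodes_within(graph, stress_node, degree)
    share = errors[stress_node] / len(wave)
    return {n: (share if n in wave else errors[n]) for n in errors}

# Star graph like fig. 34: center node 'a' connected to b..g.
graph = {"a": ["b", "c", "d", "e", "f", "g"],
         "b": ["a"], "c": ["a"], "d": ["a"],
         "e": ["a"], "f": ["a"], "g": ["a"]}
errors = {n: 0.0 for n in graph}
errors["a"] = 7.0  # identified point of stress
adjusted = propagate_stress(graph, errors, "a", degree=1)
```

After the n=1 wave, the stress of 7.0 at the center is spread as 1.0 over each of the seven nodes; the total error is conserved, it is just no longer concentrated at one point.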
for example, after the wave has been pushed out to n=2 around the point of stress, a bundle adjust can be performed in the area between n=3 and n=2, and propagated radially. by controlling the wave increments, this iterative wave propagating bundle adjust process can be run on massive data to reduce stresses on the system. in an optional embodiment, because each wave is unique, the nodes that have been touched by the wave (e.g., bundle adjusted) may be colored so that the wave does not re-propagate on an adjusted section of the geometric map. in another embodiment, nodes may be colored so that simultaneous waves may propagate/originate from different points in the geometric map. as mentioned previously, layering the topological map on the geometric map of keyframes and map points may be especially crucial in finding loop-closure stresses. a loop-closure stress refers to discrepancies between map points captured at different times that should be aligned but are mis-aligned. for example, if a user walks around the block and returns to the same place, map points derived from the position of the first keyframe and the map points derived from the position of the last keyframe as extrapolated from the collected map points should ideally be identical. however, given stresses inherent in the calculation of pose (position of keyframes) based on the different map points, there are often errors and the system does not recognize that the user has come back to the same position because estimated key points from the first key frame are not geometrically aligned with map points derived from the last keyframe. this may be an example of a loop-closure stress. to this end, the topological map may be used to find the loop-closure stresses in a geometric map. 
referring back to the previous example, using the topological map along with the geometric map allows the ar system to recognize the loop-closure stresses in the geometric map because the topological map may indicate that the user has come back to the starting point (based on the color histogram, for example). for example, referring to the layered map 3500 of fig. 35, the nodes of the topological map (e.g., 3504a and 3504b) are layered on top of the nodes of the geometric map (e.g., 3502a-3502f). as shown in fig. 35, the topological map, when placed on top of the geometric map, may suggest that keyframe b (node 3502g) is the same as keyframe a (node 3502a). based on this, a loop closure stress may be detected: the system detects that keyframes a and b should be closer together, in the same node, and may then perform a bundle adjust. thus, having identified the loop-closure stress, the ar system may then perform a bundle adjust on the identified point of stress, using a bundle adjust technique, such as the one discussed above. it should be appreciated that performing the bundle adjust based on the layering of the topological map and the geometric map ensures that the system only retrieves the keyframes on which the bundle adjust needs to be performed instead of retrieving all the keyframes in the system. for example, if the ar system identifies, based on the topological map, that there is a loop-closure stress, the system may simply retrieve the keyframes associated with that particular node or nodes of the topological map, and perform the bundle adjust on only those keyframes rather than all the keyframes of the geometric map. again, this allows the system to be efficient and not retrieve unnecessary information that might unnecessarily tax the system. referring now to fig. 36, an example method 3600 for correcting loop-closure stresses based on the topological map is described.
at 3602, the system may identify a loop closure stress based on a topological map that is layered on top of a geometric map. once the loop closure stress has been identified, at 3604, the system may retrieve the set of key frames associated with the node of the topological map at which the loop closure stress has occurred. after having retrieved the key frames of that node of the topological map, the system may, at 3606, initiate a bundle-adjust on that point in the geometric map. at 3608, the stress is propagated away from the identified point of stress and is radially distributed in waves, to n=1 (and then n=2, n=3, etc.) similar to the technique shown in fig. 34. in mapping out the virtual world, it is important to know all the features and points in the real world to accurately portray virtual objects in relation to the real world. to this end, as discussed above, map points captured from various head-worn ar systems are constantly added to the passable world model through new pictures that convey information about various points and features of the real world. based on the points and features, as discussed above, one can also extrapolate the pose and position of the keyframe (e.g., camera, etc.). while this allows the ar system to collect a set of features (2d points) and map points (3d points), it may also be important to find new features and map points to render a more accurate version of the passable world. one way of finding new map points and/or features may be to compare features of one image against another. each feature may have a label or feature descriptor attached to it (e.g., color, identifier, etc.). comparing the labels of features in one picture to those in another picture may be one way of uniquely identifying natural features in the environment. for example, if there are two keyframes, each of which captures about 500 features, comparing the features of one keyframe with the other may help determine new map points.
however, while this might be a feasible solution when there are just two keyframes, it becomes a very large search problem that takes up a lot of processing power when there are multiple keyframes, each of which captures millions of points. in other words, if there are m keyframes, each having n unmatched features, searching for new features involves an operation of order mn² (o(mn²)). unfortunately, this is a very large search operation. one approach to find new points that avoids such a large search operation is to render rather than search. in other words, assuming the positions of the m keyframes are known and each of them has n points, the ar system may project lines (or cones) from the n features to the m keyframes to triangulate a 3d position of the various 2d points. referring now to fig. 37, in this particular example, there are 6 keyframes 3702, and lines or rays are rendered (using a graphics card) from the 6 keyframes to the points 3704 derived from the respective keyframe. in one or more embodiments, new 3d map points may be determined based on the intersection of the rendered lines. in other words, when two rendered lines intersect, the pixel value at that particular point in a 3d space may be 2 instead of 1 or 0. thus, the more lines that intersect at a particular point, the higher the likelihood is that there is a map point corresponding to a particular feature in the 3d space. in one or more embodiments, this intersection approach, as shown in fig. 37, may be used to find new map points in a 3d space. it should be appreciated that for optimization purposes, rather than rendering lines from the keyframes, triangular cones may instead be rendered from the keyframe for more accurate results. the triangular cone is projected such that a rendered line to the nth feature (e.g., 3704) represents a bisector of the triangular cone, and the sides of the cone are projected on either side of the nth feature.
in one or more embodiments, the half angles to the two side edges may be defined by the camera's pixel pitch, which runs through the lens mapping function on either side of the nth feature. the interior of the cone may be shaded such that the bisector is the brightest and the edges on either side of the nth feature may be set to 0. the camera buffer may be a summing buffer, such that bright spots may represent candidate locations of new features, taking into account both camera resolution and lens calibration. in other words, projecting cones, rather than lines, may help compensate for the fact that certain keyframes are farther away than others that may have captured the features at a closer distance. in this approach, a triangular cone rendered from a keyframe that is farther away will be larger (and have a larger radius) than one that is rendered from a keyframe that is closer. a summing buffer may be applied in order to determine the 3d map points (e.g., the brightest spots of the map may represent new map points). essentially, the ar system may project rays or cones from a number n of unmatched features in a number m of prior key frames into a texture of the m+1 keyframe, encoding the keyframe identifier and feature identifier. the ar system may build another texture from the features in the current keyframe, and mask the first texture with the second. all of the colors are a candidate pairing to search for constraints. this approach advantageously turns the o(mn²) search for constraints into an o(mn) render, followed by a small o((<m)n(<<n)) search. in another approach, new map points may be determined by selecting a virtual keyframe from which to view the existing n features. in other words, the ar system may select a virtual key frame from which to view the map points.
for instance, the ar system may use the above keyframe projection, but pick a new "keyframe" based on a pca (principal component analysis) of the normals of the m keyframes from which {m,n} labels are sought (e.g., the pca-derived keyframe will give the optimal view from which to derive the labels). performing a pca on the existing m keyframes provides a new keyframe that is most orthogonal to the existing m keyframes. thus, positioning a virtual keyframe in the most orthogonal direction may provide the best viewpoint from which to find new map points in the 3d space. performing another pca provides the next most orthogonal direction, and performing yet another pca provides yet another orthogonal direction. thus, it can be appreciated that performing 3 pcas may provide the x, y and z coordinate axes in the 3d space from which to construct map points based on the existing m keyframes having the n features. fig. 38 describes an example method 3800 for determining map points from m known keyframes. first, at 3802, the ar system retrieves m keyframes associated with a particular space. as discussed above, m keyframes refers to known keyframes that have captured the particular space. next, at 3804, a pca of the normals of the keyframes is performed to find the most orthogonal direction of the m keyframes. it should be appreciated that the pca may produce three principal components, each of which is orthogonal to the m keyframes. next, at 3806, the ar system selects the principal component that is smallest in the 3d space, and is also the most orthogonal to the views of all the m keyframes. at 3808, after having identified the principal component that is orthogonal to the keyframes, a virtual keyframe may be placed along the axis of the selected principal component. in one or more embodiments, the virtual keyframe may be placed far enough away that its field of view includes all the m keyframes.
next, at 3810, the ar system may render a feature buffer, such that rays (or cones) are rendered from each of the m keyframes to the nth feature. the feature buffer may be a summing buffer, such that the bright spots (pixel coordinates at which n lines have intersected) represent candidate locations of n features. it should be appreciated that the same process described above may be repeated with all three pca axes, such that map points are found on the x, y and z axes. next, at 3812, the system may store all the bright spots in the image as virtual "features". next, at 3814, a second "label" buffer may be created at the virtual keyframe to stack the lines (or cones) and to save their {m, n} labels. next, at 3816, a "mask radius" may be drawn around each bright spot in the feature buffer. it should be appreciated that the mask radius represents the angular pixel error of the virtual camera. the ar system may fill the resulting circles around each bright spot, and mask the label buffer with the resulting binary image. in an optional embodiment, the circles may be filled by applying a gradient filter such that the centers of the circles are bright, but the brightness fades to zero at the periphery of each circle. in the now-masked label buffer, the principal rays may be collected using the {m, n}-tuple label of each triangle. it should be appreciated that if cones/triangles are used instead of rays, the ar system may only collect triangles where both sides of the triangle are captured inside the circle. thus, the mask radius essentially acts as a filter that eliminates poorly conditioned rays or rays that have a large divergence (e.g., a ray that is at the edge of a field of view (fov) or a ray that emanates from far away). for optimization purposes, the label buffer may be rendered with the same shading as used previously in generating the cones/triangles.
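the summing feature buffer at step 3810 can be sketched in miniature as follows. in the real system this is a gpu render with additive blending; here, two rays are rasterized into a small 2d grid and each ray adds 1 per cell it crosses, so the crossing cell sums to 2 and becomes the candidate feature location. the grid size, step size, and ray coordinates are all illustrative assumptions.

```python
# hypothetical sketch of the summing feature buffer: rays from several
# keyframes are rasterized into a 2d buffer; cells where rays overlap sum
# to higher values, marking candidate map-point locations.

W = H = 9
buf = [[0] * W for _ in range(H)]

def splat_ray(x0, y0, dx, dy, steps=20):
    seen = set()
    for i in range(steps):
        x, y = int(x0 + i * dx * 0.5), int(y0 + i * dy * 0.5)
        if 0 <= x < W and 0 <= y < H and (x, y) not in seen:
            seen.add((x, y))
            buf[y][x] += 1      # summing buffer: each ray adds 1 per cell

splat_ray(0, 4, 1, 0)           # ray from one keyframe, travelling right
splat_ray(4, 0, 0, 1)           # ray from another keyframe, travelling up

# the brightest cell is the candidate feature location
peak = max((v, (x, y)) for y, row in enumerate(buf) for x, v in enumerate(row))
```

the mask radius of step 3816 would then be a small disc drawn around `peak[1]` before the label buffer is read back.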
in another optional optimization embodiment, the triangle density may be scaled from one to zero instead of checking the extents (sides) of the triangles. thus, rays that are very divergent will effectively raise the noise floor inside a masked region. running a local threshold-detect inside the mask will trivially pull out the centroid from only those rays that are fully inside the mask. at 3818, the collection of masked/optimized rays m may be fed to a bundle adjuster to estimate and/or correct the location of the newly-determined map points. it should be appreciated that this system is functionally limited by the size of the render buffers that are employed. for example, if the keyframes are widely separated, the resulting rays/cones will have a lower resolution. in an alternate embodiment, rather than using pca analysis to find the orthogonal direction, the virtual keyframe may be placed at the location of one of the m keyframes. this may be a simpler and more effective solution because the m keyframes may have already captured the space at the best resolution of the camera. if pcas are used to find the orthogonal directions at which to place the virtual keyframes, the process above is repeated by placing the virtual camera along each pca axis and finding map points on each of the axes. in yet another example method of finding new map points, the ar system may hypothesize new map points. the ar system may retrieve the first three principal components from a pca analysis on the m keyframes. next, a virtual keyframe may be placed along each principal component. next, a feature buffer may be rendered exactly as discussed above at each of the three virtual keyframes. since the principal components are by definition orthogonal to each other, rays drawn outwards from each camera may hit each other at a point in 3d space. it should be appreciated that there may be multiple intersections of rays in some instances. thus, there may now be n features in each virtual keyframe.
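the pca step that picks the virtual-keyframe axis can be sketched as follows. the direction most orthogonal to the m keyframe view normals is the smallest principal component of the normals' scatter matrix; here a tiny power iteration stands in for a library pca routine, and the smallest component is recovered as the cross product of the two dominant ones. the six normals are made-up examples lying roughly in the x-y plane, so the answer should be close to the z axis.

```python
# hypothetical sketch: find the axis most orthogonal to the m keyframe
# normals via the smallest principal component of their scatter matrix.

normals = [(1.0, 0.1, 0.02), (0.8, -0.3, 0.01), (-0.9, 0.2, -0.03),
           (0.2, 0.7, 0.05), (-0.1, -0.6, 0.0), (0.5, 0.4, -0.02)]

# 3x3 scatter matrix of the normals
S = [[sum(n[i] * n[j] for n in normals) for j in range(3)] for i in range(3)]

def power_iter(M, iters=500):
    v = [1.0, 0.7, 0.3]                       # generic start vector
    for _ in range(iters):
        w = [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    lam = sum(v[i] * sum(M[i][j] * v[j] for j in range(3)) for i in range(3))
    return lam, v

lam1, v1 = power_iter(S)                      # first principal component
S2 = [[S[i][j] - lam1 * v1[i] * v1[j] for j in range(3)] for i in range(3)]
lam2, v2 = power_iter(S2)                     # second principal component
# smallest principal component = cross product of the two dominant ones
axis = (v1[1] * v2[2] - v1[2] * v2[1],
        v1[2] * v2[0] - v1[0] * v2[2],
        v1[0] * v2[1] - v1[1] * v2[0])
```

the virtual keyframe of step 3808 would be placed along `axis`, far enough out that its field of view covers all m keyframes.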
next, a geometric algorithm may be used to find the points of intersection between the different rays. this geometric algorithm may be a constant time algorithm because there may be n³ rays. masking and optimization may be performed in the same manner described above to find the map points in 3d space. in one or more embodiments, the ar system may stitch separate small world model segments into larger coherent segments. this may occur on two levels: small models and large models. small models correspond to a local user level (e.g., on the computational component, for instance the belt pack). large models, on the other hand, correspond to a large scale or system-wide level (e.g., cloud system) for "entire world" modeling. this can be implemented as part of the passable world model concept. for example, the individual ar system worn by a first user captures information about a first office, while the individual ar system worn by a second user captures information about a second office that is different from the first office. the captured information may be passed to cloud-based computers, which eventually build a comprehensive, consistent representation of real spaces sampled or collected by various users walking around with individual ar devices. the cloud-based computers build the passable world model incrementally, via use over time. it is anticipated that different geographic locations will build up, mostly centered on population centers, but eventually filling in more rural areas. the cloud-based computers may, for example, perform a hash on gps, wi-fi, room color histograms, and caches of all the natural features in a room, and places with pictures, and generate a topological graph that is the topology of the connectedness of things, as described above. the cloud-based computers may use topology to identify where to stitch the regions together.
alternatively, the cloud-based computers could use a hash of features (e.g., the topological map), for example identifying a geometric configuration in one place that matches a geometric configuration in another place. in one or more embodiments, the ar system may simultaneously or concurrently employ separate occlusion, depth, and color display or rendering. for example, the individual ar system may have a color rendering module (e.g., lcd, dlp, lcos, fiber scanner projector, etc.) that gives spatial color, and a spatial backlight which can selectively illuminate parts of the color mechanism. in one or more embodiments, the individual ar system may employ a time sequential approach. for example, the individual ar system may produce or load one color image, then step through different regions of the image and selectively illuminate the regions. in conjunction with selective illumination, the individual ar system can operate a variable focal element that changes the actual perceived depth of the light. the variable focal element may shape the wave front, for example, synchronously with the backlight. the individual ar system may render color, for instance at 60 frames per second. for every one of those frames, the individual ar system can have six frames that are rendered during that period of time, each selectively illuminating one portion of the background. the individual ar system renders all the light in the background within that 60th of a second. this approach advantageously allows rendering of various pieces of an image at different depths. most often, a person's head faces forward. the ar system may infer hip orientation using a low pass filter that identifies a direction in which a user's head is pointing and/or by detecting motion relative to the real world or ambient environment. in one or more embodiments, the ar system may additionally or alternatively employ knowledge of an orientation of the hands.
there is a statistical correlation between these body parts and the hip location and/or hip orientation. thus, the ar system can infer a hip coordinate frame without using instrumentation to detect hip orientation. in one or more embodiments, the ar system can use the hip coordinate frame as a virtual coordinate frame to which virtual content is rendered. this may constitute the most general class. the ar system may render virtual objects around the hip coordinate frame like a home screen (e.g., a social networking screen rendered on one part of the user's view, a video screen rendered on another part of the user's view, etc.). in a world-centric coordinate frame, virtual content (e.g., virtual objects, virtual tools, and other virtual constructs, for instance applications, features, characters, text and other symbols) is fixed with respect to objects of the real world, rather than being fixed to a coordinate frame oriented around the user. in some implementations, the ar system blends multiple levels of depth data into a single color frame, for example exploiting the timing characteristics of the lcd display. for example, the ar system may pack six depth layers of data into one single red/green/blue (rgb) frame. depth in color space may be achieved by, for example, manipulating depth frames by encoding a z-buffer in color space. the ar system may encode depth planes as layer-masks in individual color channels. in one or more embodiments, this may be implemented using standard graphics cards to create a custom shader that renders a single frame that has an rgb frame and the z distance. thus, the encoded z-buffer may be used to generate volumetric information and determine the depth of the image. a hardware component may be used to interpret the frame buffer and the encoded z-buffer. this means that the hardware and software portions are completely abstracted, and that there is minimal coupling between the software and hardware portions.
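one way the six-layer packing above could work is sketched below. the text does not specify the shader's bit layout, so the scheme here (two depth-plane masks per color channel, as low-order bits) is purely an assumption for illustration; the helper names are hypothetical.

```python
# hypothetical sketch of packing six depth-plane layer masks into one rgb
# pixel: each color channel carries two planes as bit masks. the actual
# shader encoding is not specified in the text; this layout is an assumption.

def pack_pixel(plane_masks):
    """plane_masks: six booleans, one per depth plane, for a single pixel."""
    assert len(plane_masks) == 6
    r = (plane_masks[0] << 0) | (plane_masks[1] << 1)
    g = (plane_masks[2] << 0) | (plane_masks[3] << 1)
    b = (plane_masks[4] << 0) | (plane_masks[5] << 1)
    return r, g, b

def unpack_pixel(rgb):
    """recover the six per-plane masks from a packed rgb pixel."""
    r, g, b = rgb
    return [bool(r & 1), bool(r & 2), bool(g & 1), bool(g & 2),
            bool(b & 1), bool(b & 2)]
```

the hardware component mentioned above would play the role of `unpack_pixel`, reading the encoded frame buffer and routing each mask to its depth plane.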
the ar system may render virtual content locked to various reference frames, as discussed above. for example, where the ar system includes a head worn component, a view locked (head-mounted, or hmd) reference frame may be useful. that is, the reference frame stays locked to a reference frame of the head, turning and/or tilting with movement of the head. a body locked reference frame is locked to a reference frame of the body, essentially moving around (e.g., translating, rotating) with the movement of the user's body. a world locked reference frame is fixed to a reference frame of the environment and remains stationary within the environment. for example, a world locked reference frame may be fixed to a room, wall or table. in some implementations, the ar system may render virtual content with portions locked to respective ones of two or more reference frames. for example, the ar system may render virtual content using two or more nested reference frames. for instance, the ar system may employ a spherical paradigm. as an example, an inner-most sphere extending to a first radial distance may be locked to a head or view reference frame. radially outward of the inner-most sphere, an intermediate sphere (e.g., slightly less than arm's length) may be locked to a body reference frame. radially outward of the intermediate sphere, an outer or outer-most sphere (e.g., full arm extension) may be locked to a world reference frame. as previously noted, the ar system may statistically or otherwise infer the actual pose of a body or portion thereof (e.g., hips, hands). for instance, the ar system may select or use the user's hips as a coordinate frame. the ar system statistically infers where the hips are (e.g., position, orientation) and treats that pose as a persistent coordinate frame.
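the nested-sphere paradigm above amounts to choosing a reference frame by radial distance from the user. a minimal sketch, with illustrative radii (the text only says "slightly less than arm's length" and "full arm extension", so the numbers here are assumptions):

```python
# hypothetical sketch of the nested-sphere paradigm: content is bound to a
# head, body, or world reference frame depending on its radial distance
# from the user. the radii are illustrative assumptions, in meters.

HEAD_RADIUS = 0.35   # inner-most sphere
BODY_RADIUS = 0.65   # intermediate sphere, slightly less than arm's length

def reference_frame_for(distance_m):
    if distance_m <= HEAD_RADIUS:
        return "head"       # view-locked: turns/tilts with the head
    if distance_m <= BODY_RADIUS:
        return "body"       # body-locked: translates/rotates with the body
    return "world"          # world-locked: fixed in the environment
```

content rendered against the "body" frame here could equally use the inferred hip coordinate frame discussed above.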
as a user moves their head (e.g., rotate, tilt), the ar system renders virtual content (e.g., virtual objects, virtual tools, and other virtual constructs, for instance applications, features, characters, text, digits and other symbols) which is locked to the pose of the user's hips. this can advantageously dramatically increase the virtual field of view. if the user moves their head to look around, the user can see virtual content that is tied around the user's body. that is, the ar system can use a body centered coordinate frame for rendering, e.g., render virtual content with respect to the hip coordinate frame, and the virtual content stays locked in the user's field of view no matter how the user's head moves.

predictive head model

in one or more embodiments, the ar system may use information from one or more of an actual feature tracker, gyros, accelerometers, a compass and other sensors to predict head movement direction, speed and/or acceleration. it takes the rendering engine a certain amount of time to render a frame of virtual content. the ar system may use various structures or components for rendering frames of virtual content. for example, the ar system may employ a fiber scan projector. alternatively, the ar system may employ a low persistence display. the ar system may cause flashing of the frame, for example via a backlight. the ar system could use an lcd and, for instance, quickly flash the lcd with a very bright backlight, to realize an extremely low persistence display that does not scan through the rasterization. in other words, the ar system gets the pixels in line, and then flashes the lcd with a very bright light for a very short duration. in some implementations, the ar system may render frames to the world coordinate system, allowing the frame scanning projector (fsp) to scan in the world coordinates and sample the frames. further details on predictive head modeling are disclosed in u.s. patent app. serial no.
14/212,961, entitled "display systems and method," filed on march 14, 2014 under attorney docket no. 20006.00. ambient light is sometimes a problem for ar systems because it may affect the quality of projection of virtual content to the user. typically, ar systems have little or no control over the entry of ambient light. thus there is typically little or no control over how the ambient environment appears where an ar system is used in a real world environment. for instance, ambient light conditions over an entire scene may be overly bright or overly dim. also for instance, light intensity may vary greatly throughout a scene. further, there is little or no control over the physical objects that appear in a scene, some of which may be sources of light (e.g., luminaires, windows) or sources of reflection. this can make rendered virtual content (e.g., virtual objects, virtual tools, and other virtual constructs, for instance applications, features, characters, text and other symbols) difficult to perceive for the ar user. in one or more embodiments, the ar system may automatically identify relatively dark and/or relatively bright area(s) in an ambient environment. based on the identified dark and/or bright areas, the ar system may render virtual content (e.g., virtual text, digits or other symbols) at relatively dark places in the ar user's field of vision in order to address occlusion issues. in this way, the ar system renders virtual content in a manner such that it is best visible to the ar user in view of the ambient environment. in one or more embodiments, the ar system may additionally or alternatively optimize rendered virtual content based at least in part on one or more characteristics of the particular ambient environment. the ar system may render virtual content to accommodate aspects of the ambient environment, in some embodiments.
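the background-dependent adaptation above can be sketched as a simple luminance test: sample the background behind the content, and pick a dark style (with a halo, since black is hard to render additively) on bright backgrounds or a light style on dark ones. the luma weights are the standard rec. 601 coefficients; the threshold and the style dictionary are assumptions for illustration.

```python
# hypothetical sketch: choose light or dark rendering for virtual text
# based on the sampled background luminance. threshold is an assumption.

def luma(rgb):
    # rec. 601 luma weights for an 8-bit rgb background sample
    r, g, b = rgb
    return 0.299 * r + 0.587 * g + 0.114 * b

def text_style_for(background_rgb):
    if luma(background_rgb) > 127:
        # bright wall: dark text, with a color halo so the wall shines through
        return {"color": "dark", "halo": True}
    # dark wall: relatively light text, no halo needed
    return {"color": "light", "halo": False}
```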
for instance, if a wall is relatively light, the ar system may render text that will appear superimposed on the door as dark text. or, in another instance, virtual content may be dynamically altered (e.g., darkened, lightened, etc.) based on the detected light of the ambient environment. typically, it may be difficult for the ar system to render black. however, the ar system may be able to render white or other colors. if a scene includes a white physical wall, then the ar system will render text, digits, and/or other symbols that can be seen against the white background. for example, the ar system may render a color halo about the text, digits or other symbols, allowing the white wall to shine through. if a scene includes a black or dark colored wall, the ar system may render the text, digits, or other symbols in a relatively light color. thus, the ar system adjusts the visual properties of what is being rendered based on characteristics of the ambient background.

image based lighting solutions

in order to create convincing realism in the virtual content (e.g., virtual objects, virtual tools, and other virtual constructs, for instance applications, features, characters, text, digits and other symbols) in augmented reality, it is advantageous to emulate the lighting incident to the environment in which the content is superimposed. the classic lambertian lighting model does not illuminate an object in the way that people are used to seeing in the real, natural world. the lighting in a real world environment is a complex system that is constantly and continuously changing throughout the space, rich with both dramatic contrasts and subtle nuances of intensity and color. the eye is used to seeing this in the real world. the lambertian lighting model does not capture these nuances, and the human visual perception system notices the missing lighting effects, thereby destroying the illusion of realism.
in one or more embodiments, a technique called image based lighting (ibl) may be effective in creating realism in computer graphics (cg). ibl does not attempt to compute a complex lighting system the way the radiosity solution does, but rather captures real world lighting photographically with light probes. a technique termed the "silver sphere light probe" technique is effective in capturing the complex colors reflected toward the viewer; however, 360 degree cameras are able to capture higher fidelity data of the entire environment, creating much more convincing light maps. in one or more embodiments, ibl techniques may be used to render virtual content that appears indistinguishable from real objects. modeling packages such as maya® utilize libraries of ibl light maps, from which the user can choose to illuminate a particular virtual scene. the user chooses a light map from the library that seems consistent with the content of the scene. thus, it is possible to create realism from ibl without the light map being identical to the environment in which the light map is used, if the light map is simply similar to the environment. this suggests that it is the subtle nuances in the lighting that the human visual perception system expects to see on the object. if those nuances are inconsistent with the environment, they may interfere with creating an illusion of reality. one solution to employ ibl in an ar system is to supply a vast library of sample light maps created by photography, covering many different environments to encompass a wide variety of potential situations. each of the light maps may be associated with various light parameters specific to the identified situation. the light maps could be stored in the cloud and referenced as needed to illuminate various items or instances of virtual content. in such an implementation, it would be advantageous to automate the selection of the light map for a particular real world environment.
the user's individual ar system is already equipped with one or more cameras (e.g., outward facing cameras), and photographically samples the environment in which the user is located. the ar system may use the captured image data as map selection criteria. samples from the cameras can be used to heuristically search a library of light maps and find the closest approximation light map. the ar system may use a variety of parameters, for example frequency data, color palette, dynamic range, etc. the ar system may compare the parameters of the captured visual data against the library light maps and find the light map with the least error. referring now to fig. 39, an example method 3900 of selecting an appropriate light map is provided. at 3902, the user's individual ar system captures an image of the ambient surroundings through the user's cameras. next, the system selects at least one parameter of the captured image data to compare against the library of light maps. for example, the system may compare a color palette of the captured image against the library of light maps. at 3904, the system compares the parameter of the captured image against the parameters of the light maps, determines a closest approximation of the parameter (3906), and selects a light map having the closest approximation (3908). the system selects the closest approximation, and renders the virtual object based on the selected light map, at 3910. alternatively, or additionally, a selection technique utilizing artificial neural networks may be used. the ar system may use a neural network trained on the set or library of light maps. the neural network uses the selection criteria data as input, and produces a light map selection as output. after the neural network is trained on the library, the ar system presents the real world data from the user's camera to the neural network, and the neural network selects the light map with the least error from the library, either instantly or in real-time.
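the heuristic branch of method 3900 can be sketched as a least-error lookup: each library light map is summarized by a parameter vector, and the map whose parameters are closest to those of the captured image is chosen. the library entries, parameter names, and error weighting below are made-up examples, not the system's actual schema.

```python
# hypothetical sketch of the least-error light-map selection of method 3900:
# library entries are parameter summaries; the entry with minimum error
# vs. the captured image's parameters wins. all values are illustrative.

library = {
    "office": {"palette": (120, 120, 128), "dynamic_range": 0.4},
    "sunset": {"palette": (200, 120, 60),  "dynamic_range": 0.7},
    "night":  {"palette": (20, 24, 40),    "dynamic_range": 0.9},
}

def error(params, entry):
    # squared error over the mean color palette, plus a (scaled) dynamic
    # range term so both criteria contribute to the comparison
    e = sum((a - b) ** 2 for a, b in zip(params["palette"], entry["palette"]))
    e += (100 * (params["dynamic_range"] - entry["dynamic_range"])) ** 2
    return e

def select_light_map(captured_params):
    return min(library, key=lambda name: error(captured_params, library[name]))
```

the neural-network alternative described above would replace `select_light_map` with a trained classifier over the same criteria data.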
this approach may also allow for modification of a light map. regardless of whether the selection is done heuristically or with a neural network, the selected light map will have error compared to the input samples in the criteria data. if the selected light map is, for example, close in frequency data and dynamic range, but the color palette contains excessive error, the ar system may modify the color palette to better align with the color palette of the real world sampled data, and may construct a modified light map from the new constituent data. the ar system may also combine data from multiple light maps that were identified as near solutions to produce a newly constructed light map. in one or more embodiments, the ar system can then store the newly constructed map as a new entry in the library for future selection. if neural network selection is used, this would require re-training the neural network in the cloud on the augmented set or library. however, the re-training may be brief because the new additions may only require minor adjustments to one or more network weights utilized by the neural network. fig. 40 illustrates an example method 4000 for creating a light map. first, at 4002, the user's individual ar system captures an image of the ambient surroundings through the user's cameras. next, the system selects at least one parameter of the captured image data to compare against the library of light maps. for example, the system may compare a color palette of the captured image against the library of light maps. next, at 4004, the system compares the parameter of the captured image against the parameters of the light maps, determines one or more closest approximations of the parameters (4006), and selects light maps corresponding to the closest approximations. for example, the light map may be selected based on a light intensity detected from the captured image.
or, the system may compare a brightness, gradient of brightness, or pattern of brightness in the image, and use that information to select the closest approximation. at 4008, the system constructs a new light map by combining parameters of the selected light maps. next, at 4010, the new light map is added to the library of light maps. another approach to supplying appropriate light maps for ibl applications is to use the user's ar device (e.g., head worn component) itself as a light probe to create the ibl light map from scratch. as previously noted, the device is equipped with one or more cameras. the camera(s) can be arranged and/or oriented to capture images of the entire 360 degree environment, which can be used to create a usable light map in situ. either with 360 degree cameras or with an array of narrow angle cameras stitched together, the ar system may be used as a light probe, operating in real time to capture a light map of the actual environment, not just an approximation of the environment. although the captured light map is centric to the user's position, it may be sufficient to create a "convincing enough" object light map. in such a situation, the error is inversely proportional to the level of scrutiny it is subjected to. that is, a far-away object will exhibit a high amount of error using a user-centric light map, but the user's visual perception system will be in a poor position to detect that error due to the distance from the eye being relatively large. conversely, the closer the user is to the object, the keener the user's visual perception system is to detect error, but at the same time, the more accurate the light map will be, as the user's head approaches the position of the object. while this may be sufficient in many situations, a technique to address that error is discussed below.
in one or more embodiments, the ar system (e.g., cloud based computers, individual computational components) may apply transformations to the user-centric light maps that project the user-centric light map as a suitable object-centric light map, reducing or eliminating the error of the translational offset. as schematically illustrated in fig. 41, one technique models the user-centric light map as a classic sphere 4124 centered on the user 4120, of an appropriate radius, perhaps similar to the size of the room. another sphere 4126 is modeled around the object 4122 to be lit, of a radius that fits inside the user-centric sphere 4124. the data from the user-centric sphere 4124 is then projected onto the object-centric sphere 4126 from the point of view of the object 4122, creating a new light map. ray casting will work for this projection. alternatively, a numerical method may be employed. this transformation warps the user-centric light map to be more accurate from the point of view of the object. color intensities are then modified to adjust for distance attenuation according to the offset position of the object. let att(x) be a light attenuation function, where x is the distance from the light to the viewer. the intensity of a given texel of the user-centric light map is expressed as i_m = i_s * att(d), where i_m is the intensity in the map and i_s is the intensity at the light's source. thus i_s = i_m / att(d), and the new intensity in the object-centric transformation is i_m' = i_s * att(d') = i_m * att(d') / att(d). it should be appreciated that the sky sphere method of transformation may work well for situations where the sources of light captured are significantly far from the user and object positions. more specifically, if the sources of light are at least as far away as the sphere boundary (which was modeled to represent the sources of light), the technique will likely work. however, as light data sources encroach upon the inner sphere space, the error may quickly grow.
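the texel intensity re-targeting above can be written in a few lines. the text leaves att(x) abstract; an inverse-square falloff is assumed here purely for illustration, and the function names are hypothetical.

```python
# hypothetical sketch of the per-texel intensity adjustment: a texel recorded
# at distance d from the user is re-expressed for the object at distance
# d_obj, using an assumed inverse-square attenuation att(x) = 1 / x**2.

def att(x):
    return 1.0 / (x * x)

def retarget_intensity(i_map, d_user, d_obj):
    # source intensity: i_s = i_map / att(d_user)
    # new map intensity: i_map' = i_s * att(d_obj)
    return i_map * att(d_obj) / att(d_user)
```

for example, a light recorded at 2 m and re-targeted to 4 m keeps a quarter of its map intensity under this attenuation model.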
the worst case scenario is when light data is sourced directly between the user and the object. this would result in the light data mapping to the rear of the object, rather than the front where it is needed. if the light camera system on the user's device is equipped with stereoscopic or depth sensing utility, the ar system can store a depth value associated with each texel of the light map. the only area where this depth data is particularly useful is the data that resides between the user and the object. thus, a stereoscopic camera system may suffice so long as it captures depth in the user's field of view, which is the area in question. the areas of the light map residing behind the user, or for that matter behind the object, are less dependent on depth data because those areas project similarly to both user and object alike. simply attenuating the values for different distances may be sufficient for those areas of the light map. once depth data is captured for the area of the map where it is needed (e.g., in front of the user), the ar system can compute the exact euclidean coordinates of the source of that light data on a texel by texel basis. as schematically illustrated in fig. 42, an object-centric light map may be constructed by projecting those coordinates onto the object sphere, and attenuating the intensities accordingly. as shown in fig. 42, the user is located at the center of the user semi-sphere 4228, and an object sphere 4226 is modeled around the object 4222, similar to that of fig. 41. once the depth data is captured for the area of the map, the ar system computes the exact coordinates of the source of the light data for each space point 4230 based on the depth data. although there is no guarantee that the color data projecting toward the object is the same as the color projecting toward the user from these inner space points, the color data will likely be close enough for the general case.
the above discussion focused on constructing an object-centric light map based on user-centric data from one sampled user position. however, in many or most cases, the user will be navigating throughout an environment, enabling the collection of many samples of the light environment from many different perspectives. furthermore, having multiple users in the environment increases the sample sets that can be collected interactively in real time. as the user or users traverse the physical space, the ar system captures new light maps at smart intervals and key positions. these light maps may be stored in the cloud as a grid. as new virtual content enters a scene, the ar system accesses the stored grid and finds a corresponding light map that represents a position closest to the location of the virtual content. the ar system computes the transformation of the light map from the grid position to the virtual object's own position. fig. 43 describes an example method 4300 for using a transformed light map in order to project virtual content. at 4302, the user's individual ar system estimates a location and position of a user relative to the world. next, at 4304, the ar system accesses a grid of light maps stored in the cloud, and selects a light map in the grid that is closest to the location and position of the user (4306). at 4308, the ar system computes a transformation of the light map from the grid position to the virtual object's position such that the lighting of the virtual object matches the lighting of the ambient surroundings. in one or more embodiments, case based reasoning is employed in that a solution of the 'nearest case' is adopted, modified, and employed. the transformed case may be stored back in the grid as a meta-case to be used for that location until better sampled data becomes available to replace the meta-case data.
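the grid lookup in method 4300 can be sketched as a nearest-neighbor query over stored capture positions. the grid contents and coordinates below are made-up examples; a real deployment would query the cloud store rather than a local dictionary.

```python
# hypothetical sketch of the nearest-grid lookup of method 4300: stored
# light maps are keyed by their capture position, and the entry closest
# to the virtual content's location is selected before the light map is
# transformed from that grid position to the content's own position.

grid = {
    (0.0, 0.0, 0.0): "lightmap_a",
    (3.0, 0.0, 0.0): "lightmap_b",
    (0.0, 4.0, 0.0): "lightmap_c",
}

def nearest_light_map(content_pos):
    def dist2(p):
        return sum((a - b) ** 2 for a, b in zip(p, content_pos))
    pos = min(grid, key=dist2)
    return grid[pos], pos   # map plus the grid position to transform from
```

in case-based-reasoning terms, the returned entry is the 'nearest case'; the transformed result could be stored back into `grid` as a meta-case for that location.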
as the grid becomes populated with more and more cases, the opportunity will become available to upgrade the light maps for the existing virtual content to more appropriate cases. this way, the interactivity of the users allows the ar system to learn the lighting of the environment, and iteratively converge the virtual content to a realistic solution. the stored grid may remain in the cloud for future use in the same environment. certainly, drastic changes to the environment may challenge the effectiveness of the grid, and the grid may need to be rebuilt from scratch. however, certain types of changes can still utilize previously collected data. for instance, global changes, such as dimming the lights, can still use the collected data, with a scaling down of the luminance across the dataset while keeping the higher frequency data. a number of techniques are discussed below to apply effective image-based lighting to virtual content in the ar system. in one or more embodiments, the ar system learns the lighting of a physical environment through interaction of the users and their device cameras. the data may be stored in the cloud and continuously improved with further interaction. the objects select light maps using case-based reasoning techniques, apply transformations to adjust the light maps, and discreetly update the light maps at opportune times or conditions, converging toward a realistic solution. through interaction and sampling, the ar system improves its understanding of the light environment of a physical space. in one or more embodiments, the ar system will update the light maps being used in rendering of various virtual content to more realistic light maps based on the acquired knowledge of the light environment. a potential problem may occur if, for example, a user witnesses an update (e.g., a change in the rendering of virtual content).
for example, if the user sees changes occurring on the surface of a virtual object, the surface will appear to animate, destroying the desired illusion of realism. to solve this potential problem, the ar system executes updates discreetly, during special circumstances that minimize the risk of the user noticing an update or change to a piece of or instance of virtual content. for example, consider an initial application when a virtual object enters a scene. an update or change may be performed as a virtual object leaves the field of view of the user, even briefly or just far into the periphery of the user's field of view. this minimizes the likelihood that the user will perceive the update or change of the virtual object. the ar system may also update partial maps, corresponding to back-facing parts of the virtual object, which the user cannot see. if the user walks around the virtual object, the user will discover an increased realism on the far side without ever seeing the update or change. the ar system may update or change the fore-side of the virtual object, which is now out of the user's field of view, while the user is viewing the rear or far side of the virtual object. the ar system may perform updates or changes on various selected portions (e.g., top, bottom, left, right, front, rear) of the map of the virtual object while those portions are not in the field of view of the user. in one or more embodiments, the ar system may wait to perform updates or changes until an occurrence of one or more conditions that typically may lead a user to expect a change on the surface/lights of the virtual object. for example, the ar system may perform a change or update when a shadow passes over the virtual object. since the positions of both virtual and real objects are known, standard shadowing techniques can be applied. the shadow would obscure the update or change from the viewer.
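A minimal gate for the "update only while out of view" behavior might look like the following. The function names, the dictionary keys, the 110° field-of-view width, and the unit-length `user_forward` vector are all assumptions for illustration.

```python
import math

def in_field_of_view(user_pos, user_forward, object_pos, fov_degrees=110.0):
    """Return True if the object falls inside the user's view cone.
    `user_forward` is assumed to be a unit vector."""
    to_obj = tuple(o - u for o, u in zip(object_pos, user_pos))
    norm = math.sqrt(sum(c * c for c in to_obj))
    if norm == 0:
        return True  # object coincides with the user; treat as visible
    cos_angle = sum(f * c for f, c in zip(user_forward, to_obj)) / norm
    return cos_angle >= math.cos(math.radians(fov_degrees / 2))

def maybe_update_light_map(obj, user_pos, user_forward):
    """Apply a pending light-map update only while the virtual object is
    outside the user's field of view, so the swap is not witnessed."""
    if obj.get("pending_map") and not in_field_of_view(user_pos, user_forward, obj["pos"]):
        obj["light_map"] = obj.pop("pending_map")
        return True
    return False
```

The same gate could be applied per selected portion (front, rear, etc.) of the object's map rather than to the whole object.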
also for example, the ar system may update or change the map of the virtual object in response to light in the environment dimming, to reduce the perception of the update or change by the user. in yet another example, the ar system may update or change a map of a virtual object in response to occurrence of an event that is known to draw, or has a high probability of drawing, the attention of a user. for instance, in response to a virtual monster crashing down through a ceiling, like in a video game, the ar system may update or change the map for other virtual objects, since it is highly likely that the user is focusing on the virtual monster and not the other virtual objects.
avatars
the ar system may render virtual representations of users or other entities, referred to as avatars, as described in some detail above. the ar system may render an avatar of a user in the user's own virtual spaces, and/or in the virtual spaces of other users. in some implementations, the ar system may allow an avatar to operate a virtual machine, for example a virtual robot, to operate in an environment. for example, the ar system may render an avatar to appear to "jump" into a robot, to allow the avatar to physically change an environment, and then allow the avatar to jump back out of the robot. this approach allows time multiplexing of a physical asset. for instance, the ar system may render an avatar of a first user to appear in a virtual space of a second user in which there is a virtual robot. the "visiting" avatar of the first user enters into the body of the robot in the second user's virtual space. the first user can manipulate the second user's virtual environment via the virtual robot. if another avatar was previously residing in the robot, that other avatar is removed to allow the avatar of the first user to enter or inhabit the robot. the other avatar originally inhabiting the robot and being removed from the robot may become a remote avatar, visiting some other virtual space.
the avatar originally inhabiting the robot may reenter the robot once the avatar of the first user is done using the robot. the ar system may render an avatar presence in a virtual space with no instrumentation, and allow virtual interaction. the passable world model allows a first user to pass a second user a copy of the first user's section of the world (e.g., a level that runs locally). if the second user's individual ar system is performing local rendering, all the first user's individual ar system needs to send is the skeletal animation. it should be appreciated that the ar system may allow for a continuity or spectrum of avatar rendering. at its simplest, the ar system can drive inferential avatar rendering in a manner similar to driving a character in multi-player online games. the resulting avatar may be rendered with the appearance of a game character (e.g., animation), walking around in a virtual world. in that implementation, the only data coming from the user associated with the avatar is velocity and direction of travel, and possibly simple movements, for instance hand motions, etc. next in complexity, an avatar may resemble a physical appearance of the associated user, and may include updating of the avatar based on information collected from the associated user in real time. for example, an image of a first user's face may have been captured or pre-scanned for use in generating the avatar. the avatar may have a face that appears either as a realistic representation (e.g., photographic) or as a recognizable representation (e.g., drawn, cartoonish or caricature). the body of the avatar may, for example, be drawn, cartoonish or caricature, and may even be out of proportion with the head of the avatar. the ar system may employ information collected from the first user to animate the avatar in real time.
for example, a head worn component of the individual ar system may include one or more inward facing cameras and/or microphones or other sensors (e.g., temperature, perspiration, heart rate, blood pressure, breathing rate) to collect real-time information or data from the first user. the information may include images and sound, including vocals with the inflections, etc. voice may be passed through to appear to be emanating from the avatar. in some implementations in which the avatar has a realistic face, the facial images may also be passed through. where the avatar does not have a realistic face, the ar system may discern facial expressions from the images and/or inflections in voice from the sound. the ar system may update facial expressions of the avatar based on the discerned facial expressions and/or inflections in voice. for example, the ar system may determine an emotional state (e.g., happy, sad, angry, content, frustrated, satisfied) of the first user based on the facial expressions and/or inflections. the ar system may select a facial expression to render on the avatar based on the determined emotional state of the first user. for example, the ar system may select from a number of animation or graphical representations of emotion. thus, the ar system may employ real-time texture mapping to render the emotional state of a user on an avatar that represents the user. next in complexity, the ar system may collect information about portions of a user's body in addition to, or other than, the user's face or voice. for example, the ar system may collect information representative of movement of one or more limbs of the user and/or of the user's entire body. the ar system may collect such information via user-worn sensors (e.g., accelerometers, gyros) and/or via a room sensor system which monitors at least a portion of a physical space in which the user is located.
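The mapping from discerned emotional state to a rendered expression could be as simple as the following sketch. The feature names, thresholds, and animation labels are hypothetical; the patent does not specify a classifier.

```python
def classify_emotion(smile_score, brow_furrow, voice_pitch_var):
    """Toy classifier mapping facial/vocal features (each scaled 0..1,
    an assumption) to one of the emotional states named in the text."""
    if smile_score > 0.6:
        return "happy"
    if brow_furrow > 0.6 and voice_pitch_var > 0.5:
        return "angry"
    if brow_furrow > 0.6:
        return "frustrated"
    if voice_pitch_var < 0.2 and smile_score < 0.3:
        return "sad"
    return "content"

# hypothetical library of animation/graphical representations of emotion
EXPRESSION_LIBRARY = {
    "happy": "smile_anim", "angry": "scowl_anim", "frustrated": "frown_anim",
    "sad": "droop_anim", "content": "neutral_anim",
}

def expression_for(smile, furrow, pitch_var):
    """Select the expression to render on the avatar for the current state."""
    return EXPRESSION_LIBRARY[classify_emotion(smile, furrow, pitch_var)]
```

In a real system the classifier would be learned rather than thresholded, but the select-from-a-library step would look much the same.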
the ar system uses the collected information to render the entire body of the avatar in a way that reflects the actual movement of the user which the avatar represents. the ar system may perform functions such as real-time texture mapping, applying images (e.g., video) to the avatar. in an even more complex implementation, the ar system may include one or more light field cameras which capture a light field of the user in physical space. the second user may view a live real three-dimensional image of the first user with sound, which is more realistic than the previously described implementations. in a most complex implementation, the ar system may include one or more light field cameras which capture a light field of the user in physical space. the ar system may code the captured light field into a model, and send the model to an individual ar system of a second user for rendering into the second user's virtual space. as discussed above, an ar system may use head, hand, environment pose, voice inflection, and/or eye gaze to animate or modify a user's virtual self or avatar in a space. the ar system may infer a location of a user's avatar simply based on a position of the user's head and/or hands with respect to the environment. the ar system may statistically process voice inflection (e.g., not the content of utterances), and animate or modify an emotional expression of the corresponding avatar to reflect an emotion of the respective user which the avatar represents. for example, if a user has selected an avatar that resembles a pumpkin, in response to detecting patterns in the user's voice that indicate anger, the ar system may render teeth in a mouth cutout of the pumpkin avatar. as a further example, a user may have an avatar that resembles a particular character.
in response to detection of vocal inflections that indicate inquisitiveness, the ar system may render an avatar that resembles the particular character, for instance with its mouth moving and eyes looking around in the same manner as the user's mouth and eyes, etc. a rendering of a user's respective virtual space or environment is asynchronous. an exchange of a relatively small amount of information allows a first user to experience being in another user's space, or experience having another user in the first user's space. if the first user has a copy of the second user's space, the first user can appear in the second user's space, with control over their own viewpoint of the second user's space, as well as control over their own interactions within the second user's space. animating an avatar using a subset of information, without instrumentation, provides for scalability. the ar system can provide for autonomous navigation of virtual objects through an environment. where the virtual objects constitute avatars, various emotional states of the avatar may be taken into account in autonomously navigating through a space the avatar is inhabiting. as illustrated in fig. 44, the ar system may include a collection or library of autonomous navigation definitions or objects 4400a-4400d (collectively 4400), which sense and are responsive in predefined ways to certain defined conditions which may occur or be sensed in the virtual space or environment. the autonomous navigation definitions or objects are each associated with a condition or stimulus which may occur or be sensed in a virtual space or environment. an autonomous navigation definition or object 4400a may be responsive to, for example, a presence of structure (e.g., a wall). an autonomous navigation definition or object 4400b may be responsive to, for example, light or a source of light (e.g., luminaire, window).
an autonomous navigation definition or object 4400c may be responsive to, for example, sound or a source of sound (e.g., bell, siren, whistle, voice). an autonomous navigation definition or object 4400d may be responsive to, for example, food or water or a source of food or water. other autonomous navigation definitions or objects (not shown in fig. 44) may be responsive to other conditions or stimuli, for instance a source of fear (e.g., monster, weapon, fire, cliff), source of food, source of water, treasure, money, gems, precious metals, etc. the autonomous navigation definitions or objects 4400 are each associated with a defined response. autonomous navigation definitions or objects respond, for example, by causing or tending to cause movement. for example, some autonomous navigation definitions or objects 4400 cause or tend to cause movement away from a source of a condition or stimulus. also for example, some autonomous navigation objects 4400 cause or tend to cause movement toward a source of a condition or stimulus. at least some of the autonomous navigation definitions or objects 4400 have one or more adjustable parameters. the adjustable parameters do not change the fundamental conditions or stimulus to which the autonomous navigation definitions or objects 4400 react, but may set a sensitivity level and/or level or strength of response to the conditions or stimuli. the ar system may provide one or more user interface tools for adjusting properties. for example, a user interface tool (e.g., slider bar icons, knob icons) may allow for scaling the properties, inverting the properties (e.g., move towards, move away), etc. the adjustable parameters may, for example, set a level of sensitivity of the autonomous navigation definition or object 4400 to the conditions or stimulus to which the autonomous navigation definition or object is responsive.
for example, a sensitivity parameter may be set to a low level, at which the autonomous navigation definition or object 4400 is not very responsive to an occurrence of a condition or presence of a stimulus, for instance not responding until a source of a condition or stimulus is very close. also for example, a sensitivity parameter may be set to a high level, at which the autonomous navigation definition or object 4400 is very responsive to an occurrence of a condition or presence of a stimulus, for instance responding even when a source of a condition or stimulus is not very close. levels in between the low and high levels may also be employed. in some implementations, the level of sensitivity may be considered as a range of sensitivity. such may set an outer boundary at which the autonomous navigation definition or object 4400 is sensitive, or may set a gradient in sensitivity, which may be linear, exponential, or even a step function with one or more distinct steps in sensitivity. the adjustable parameters may, for example, set a level of response of the autonomous navigation definition or object 4400 to the conditions or stimulus to which the autonomous navigation definition or object 4400 is responsive. for example, a parameter may adjust a strength at which the autonomous navigation definition or object 4400 responds to an occurrence of a condition or stimulus. for instance, a parameter may set a strength of a tendency or likelihood to move. for example, a tendency parameter may be set to a low level, at which the autonomous navigation definition or object 4400 is not very responsive to an occurrence of a condition or presence of a stimulus. also for example, the tendency parameter may be set to a high level, at which the autonomous navigation definition or object 4400 is very responsive to an occurrence of a condition or presence of a stimulus, and will strongly cause movement either toward or away from the source of a condition or stimulus.
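One way to model a navigation object with an adjustable sensitivity range and response strength is sketched below, assuming a linear sensitivity gradient (the text also allows exponential or step gradients). The class shape and parameter names are illustrative only.

```python
class NavigationObject:
    """Sketch of an autonomous navigation definition or object: it reacts
    to one stimulus type with an adjustable sensitivity range (the outer
    boundary at which it responds) and an adjustable response strength.
    `direction` is +1 to move toward the source, -1 to move away."""
    def __init__(self, stimulus, sensitivity_range, strength, direction=-1):
        self.stimulus = stimulus
        self.sensitivity_range = sensitivity_range
        self.strength = strength
        self.direction = direction

    def response(self, distance_to_source):
        if distance_to_source > self.sensitivity_range:
            return 0.0  # source is outside the sensitivity range
        # linear gradient: the closer the source, the stronger the response
        falloff = 1.0 - distance_to_source / self.sensitivity_range
        return self.direction * self.strength * falloff
```

Raising `sensitivity_range` makes the object respond even when the source is far away; raising `strength` makes the resulting tendency to move stronger.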
a speed parameter may set a speed at which the autonomous navigation definition or object 4400 moves in response to detection of the condition or stimulus. the speed may be a fixed speed or a variable speed which changes with time (e.g., slowing down 5 seconds after the response starts) or distance (e.g., slowing down after moving a fixed distance). a direction parameter may set a direction of movement (e.g., toward, away). while autonomous navigation definitions or objects 4400 may be responsive to conditions and stimuli in a two-dimensional area, in some implementations the autonomous navigation definitions or objects 4400 are responsive to conditions and stimuli in a three-dimensional volume. some autonomous navigation definitions or objects 4400 may be isotropic, that is, detecting and responding to conditions occurring in all directions relative to the autonomous navigation object 4400. some autonomous navigation definitions or objects 4400 may be anisotropic, that is, detecting and responding to conditions occurring in only limited directions relative to the autonomous navigation definition or object. isotropic or anisotropic operation may be an adjustable parameter for some autonomous navigation definitions or objects 4400. the autonomous navigation definitions or objects 4400 may be predefined, and selectable by a user or others. in some implementations, a user may define new autonomous navigation definitions or objects 4400, and optionally incorporate the new autonomous navigation definitions or objects into a collection or library for reuse by the user or for use by others. as illustrated in fig. 45, one or more autonomous navigation definitions or objects 4400a, 4400c are logically associable to a virtual object 4500, for example to an avatar. when logically associated with a virtual object 4500, the autonomous navigation definitions or objects 4400a, 4400c may be plotted as a body centered coordinate frame about the virtual object 4500.
that is, the center of the autonomous navigation definition or object 4400a, 4400c is the center of the body of the virtual object 4500 itself. the autonomous navigation definitions or objects 4400 may be scaled, for example with a logarithmic function or some other function that for instance scales infinity to 1 and proximity to 0. the autonomous navigation definitions or objects 4400 are each independent from one another. any number of autonomous navigation definitions or objects 4400 can be associated or applied to a virtual object 4500. for example, thousands of autonomous navigation definitions or objects 4400 may be applied to a single virtual object 4500. fig. 46 shows a set or "stack" 4600 of autonomous navigation definitions or objects 4400 which are logically associated with a given virtual object 4500, and which can be arranged as rings about the virtual object 4500, for example as illustrated in fig. 45. once a set or stack 4600 of autonomous navigation objects 4400a-4400d has been defined and composited, as indicated by summing line 4602 ( fig. 46 ), values of the autonomous navigation definitions or objects 4400 are normalized to be between zero and one. as noted, some properties of at least some of the autonomous navigation objects 4400 may be adjustable. those properties may include a level of sensitivity as well as a strength of response. while the types (e.g., condition or stimulus) of autonomous navigation definitions or objects 4400 available may be fixed, a user can composite 4602 the autonomous navigation definitions or objects 4400 to provide a composite or combined output 4604 ( fig. 46 ). the composite mechanism may, for example, look for a lowest value, in one or more embodiments. in other cases, the trigger may be a high value, depending on the application.
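A sketch of one possible compositing mechanism follows, in which the structure-responsive object acts as a binary pass/do-not-pass filter and the remaining normalized values act as scaling factors (one of the arrangements the text describes). The function shape is an assumption, not the patent's actual mechanism.

```python
def composite_stack(structure_blocked, responses):
    """Composite a stack of autonomous-navigation-object values into one
    output drive. `structure_blocked` is the binary collision filter;
    `responses` are the other objects' values, already normalized to
    the zero-to-one range described in the text."""
    if structure_blocked:
        return 0.0  # the filter does not pass: movement is blocked
    drive = 1.0
    for value in responses:
        drive *= value  # remaining objects act as scaling factors
    return drive
```

A lowest-value (or, depending on the application, highest-value) trigger could replace the multiplicative scaling with `min(responses)` or `max(responses)` without changing the overall structure.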
the composite mechanism could, for example, treat the autonomous navigation definition or object 4400a that is responsive to a presence of a structure (e.g., sonar or collision detection) as a filter (e.g., binary outcome, pass/do not pass, on/off), and treat all of the other autonomous navigation definitions or objects 4400b-4400d as scaling factors. for example, the composite 4604 of one or more autonomous navigation definitions or objects 4400 may perform a peak detection on a value or shape (e.g., what is the maximal distance away from center), and provide an indication of a direction and magnitude of velocity (indicated by vector 4602) that the virtual object 4500 should travel in response to the detected condition(s) or stimuli. the strength of response or action of an autonomous navigation definition or object may be represented as a potential field. for example, a potential field may define a tendency to attract or repel an avatar. for instance, the ar system may establish a convention in which a positive potential field attracts an avatar, while a negative potential repels an avatar. alternatively, the convention may be that a positive potential field repels an avatar, while a negative potential attracts an avatar. as a further alternative, one type of potential field may be available under an established convention, which either repels or alternatively attracts the avatar. further, the ar system may employ a convention where a potential field may be assigned a magnitude or gradient, the magnitude or gradient corresponding to a strength of attraction or repulsion. the gradient may be a linear or nonlinear function, and may even include singularities. the potential field may be established coincidentally with the virtual object or avatar. the potential field may tend to cause an avatar to avoid a source of the condition or stimulus (e.g., sound, light), for example to steer around the source of the condition or stimulus. as illustrated in fig.
45 , in one example there may be a first virtual object 4500 which is moving in a virtual space or environment 4502. the virtual space or environment 4502 may include a wall 4504, which may be either a virtual or a physical object. the virtual space or environment 4502 may include a source 4506 of a sound 4508. in one or more embodiments, the ar system may use artificial intelligence to steer the first virtual object 4500 toward a target, for example the source 4506 of the sound 4508 in the virtual space or environment 4502 which includes the wall 4504, while avoiding collisions with the wall 4504. for instance, an autonomous navigation object 4400a that is responsive to a presence of structures may be logically associated with the virtual object 4500. also for instance, an autonomous navigation object 4400c that is responsive to sound 4508 may be logically associated with the virtual object 4500. the autonomous navigation objects 4400a, 4400c may be defined to constitute one or more rings located about a body of the virtual object 4500. for example, the autonomous navigation object 4400 may have a property that defines allowable movement. for example, the autonomous navigation object 4400a may, in the presence of structure, limit movement that would result in a collision with the structure. for instance, in the presence of a flat wall 4504, the autonomous navigation object 4400a may limit the first virtual object 4500 to movement in a lateral direction (e.g., cannot move into the wall), while allowing the first virtual object 4500 to move in any other directions without limitation. also for example, the autonomous navigation object 4400c may, in the presence of sound 4508, cause the associated first virtual object 4500 to move generally towards a source 4506 of the sound 4508. the above example may be modified with the addition of a source of light to the virtual space or environment 4502. an autonomous navigation definition or object 4400b ( fig. 
44 ) that is responsive to light may be associated with the first virtual object 4500. detection of light by the light responsive autonomous navigation definition or object 4400b may cause the first virtual object 4500 to tend to move toward the source of light, or conversely tend to move away from the source of light. in this case, the first virtual object 4500 will be responsive to the composite of three conditions: structure, sound, and light. as described above, a set of autonomous navigation definitions or objects may be represented arranged as rings about a virtual object (e.g., avatar) and composited together. these can be represented as a state in a state machine, and provide the virtual object with which the autonomous navigation definitions or objects are associated with travel or movement information (e.g., direction, orientation, speed, and/or distance of travel or movement). this provides a time-based method of instructing a virtual object on where to travel, completely behaviorally. in some implementations, an artificial intelligence algorithm may be applied to tune a state to perfection, based just on empirical input data. the ar system may provide for persistent emotion vectors (pevs) to define state transitions. pevs are capable of representing various emotions, and may have particular values at a particular state in time. in one or more embodiments, pevs may be globally used. a transition from state to state may be controlled by a set or stack up of the pevs. notably, the state machine may not need to be a complete state machine, but rather may cover only a portion of all possible states. a user may set up the states for the particular state transitions that the user is interested in. as illustrated in fig. 47a, a set 4700a of autonomous navigation definitions or objects 4400a-4400d associated with a given virtual object (e.g., an avatar) 4702a are composited to sum to a single ring 4704a.
the set 4700a may be assigned or logically associated with one or more emotional states, for example anger 4706a, sad 4706b, happy, frightened, satisfied, hungry, tired, cold, hot, pleased, disappointed, etc. (collectively 4706, only two emotional states called out in fig. 47a ). the ar system provides for user configurable summing blocks 4708a, 4708b (only two shown, collectively 4708), into which the autonomous navigation definitions or objects 4400a-4400b feed. the summing blocks 4708 drive respective emotion vectors. a user may configure the summing blocks 4708 to cause particular actions to occur. these are inherently time-based, and may apply global weightings based on a current state of a virtual object 4702a, such as an avatar. as illustrated in fig. 47b, a user or some other party may, for example, establish a frightened or flee emotion vector. for example, a frightened or flee autonomous navigation definition or object 4400n may be logically associated with a virtual object (e.g., avatar) 4702b. the frightened or flee autonomous navigation definition or object 4400n may be the only autonomous navigation definition or object 4400 in a set 4700n, and may composite 4704n to an identity function via summing block 4708n. a frightened or flee emotion vector tends to cause the virtual object (e.g., avatar) 4702b to flee when presented with some defined condition or stimulus, such as fright 4706n. the frightened or flee emotion vector may typically have a relatively short time constant and very low threshold. the state transition to a flee state is controlled by the state of the global emotion vectors. consequently, the state transitions to a flee state when the frightened or flee emotion vector goes low, either alone or in combination with other emotion vectors. the ar system may employ feedback, for instance using a correlation or a statistical mechanism.
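The PEV-driven, partial state machine described above can be sketched as follows: each PEV is a named global scalar, and a transition rule fires when its predicate over the current PEV values is satisfied. Only the transitions the user cares about need rules. The class shape, rule format, and threshold values are illustrative assumptions.

```python
class EmotionStateMachine:
    """Partial state machine whose transitions are controlled by the
    global values of persistent emotion vectors (PEVs). It need not cover
    all possible states; users add only the transitions they care about."""
    def __init__(self, initial_state):
        self.state = initial_state
        self.pevs = {}   # global PEV values, e.g. {"frightened": 0.9}
        self.rules = []  # (from_state, predicate over pevs, to_state)

    def add_rule(self, from_state, predicate, to_state):
        self.rules.append((from_state, predicate, to_state))

    def set_pev(self, name, value):
        """Update one PEV and fire the first matching transition, if any."""
        self.pevs[name] = value
        for from_state, predicate, to_state in self.rules:
            if self.state == from_state and predicate(self.pevs):
                self.state = to_state
                break
```

With a rule that fires when the frightened PEV goes low, the machine transitions to a flee state exactly as the text describes.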
for example, a correlation threshold graph 4800 may be defined for any particular autonomous navigation definition or object, as illustrated in fig. 48. the correlation threshold graph 4800 may, for example, have time plotted along a horizontal axis 4800a and a scale (e.g., zero to one) plotted along a vertical axis 4800b. to control a relation of an autonomous navigation definition or object on the vertical axis, a user can specify a threshold in time t0 and a threshold sensed condition or stimulus level ct. a function fn defines the respective response once the threshold has been met. thus, the ar system allows two or more autonomous navigation definitions or objects 4400 to be summed together. the ar system may also allow a user to adjust a trigger threshold. for example, in response to a particular combination of autonomous navigation definitions or objects 4400 exceeding a certain time threshold, the value(s) of those autonomous navigation definitions or objects 4400 may be applied to a ramping mechanism to a particular emotion vector. the approach described herein provides a very complex artificial intelligence (ai) property by performing deterministic acts with completely deterministic, globally visible mechanisms for transitioning from one state to another. these actions are implicitly mappable to a behavior that a user cares about. constant insight through monitoring of these global values of an overall state of the system is required, which allows the insertion of other states or changes to the current state. as a further example, an autonomous navigation definition or object may be responsive to a distance to a neighbor. the autonomous navigation definition or object may define a gradient around a neighbor, for example with a steep gradient on a front portion and a shallow gradient on a back portion. this creates an automatic behavior for the associated virtual object.
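The correlation-threshold mechanism (threshold time t0, threshold level ct, response function fn) can be sketched like this; the linear ramp is only an example shape for fn.

```python
def threshold_response(level, t_elapsed, c_t, t0, fn):
    """Correlation-threshold sketch: the object contributes to its emotion
    vector only once the sensed level meets the threshold c_t AND the
    elapsed time meets the time threshold t0; fn then shapes the response
    from the time elapsed past the threshold."""
    if level < c_t or t_elapsed < t0:
        return 0.0
    return fn(level, t_elapsed - t0)

# example fn: a linear ramp in time past the threshold, capped at 1.0
ramp = lambda level, dt: min(1.0, 0.5 * dt)
```

Summing several such thresholded responses, and feeding the sum into a particular emotion vector's ramping mechanism, matches the combination-of-objects behavior described above.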
for example, as the virtual object moves, it may for instance tend to move toward the shallow gradient rather than the steep gradient, if defined as such. alternatively, the virtual object may, for instance, tend to move toward the steep gradient rather than the shallow gradient, if defined as such. the gradients may be defined to cause the virtual object to tend to move around behind the neighbor. this might, for example, be used in a gaming environment where the neighbor is an enemy and the autonomous navigation object functions as an enemy sensor. this may even take into account the direction that the enemy is facing. for example, the value may be high if the avatar is in front. as the avatar moves, it senses a smaller gradient which attracts the avatar to come up behind enemy (e.g., flanking run behind and punch behavior). thus, the autonomous navigation definitions or objects 4400 are configured to sense states in the artificial environment, e.g., presence of water, presence of food, slope of ground, proximity of enemy, light, sound, texture. the autonomous navigation definitions or objects 4400 and pevs allow users to compose definitions that cause virtual objects to tend toward a behavior the user desires. this may allow users to incrementally and atomically or modularly specify an infinite level of complexity by adding states, optimizing an individual state, and defining transitions to new states. in one or more embodiments, the ar system may associate a navigation object with a virtual object. the navigation object may be responsive to one or more predetermined conditions (e.g., a movement, a command, a structure, an emotion, a distance, etc.). based on the change in the navigation object, at least one parameter of the virtual object may be changed as well. for example, the virtual object may move faster, or move toward another object, or exhibit a facial expression, etc. 
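The anisotropic "enemy sensor" gradient (steep in front of the neighbor, shallow behind) can be sketched as follows. The cosine-based frontness measure and the 0.2/0.8 steep/shallow mix are illustrative assumptions.

```python
import math

def enemy_gradient(avatar_pos, enemy_pos, enemy_facing):
    """Anisotropic gradient around a neighbor (enemy): high in front of
    the enemy (steep), low behind it (shallow). An avatar that moves
    toward smaller values is thereby drawn around behind the enemy,
    producing the flanking behavior described in the text.
    `enemy_facing` is assumed to be a unit vector in 2D."""
    dx = avatar_pos[0] - enemy_pos[0]
    dy = avatar_pos[1] - enemy_pos[1]
    dist = math.hypot(dx, dy)
    if dist == 0:
        return 1.0
    # cosine of angle between enemy facing and direction to the avatar:
    # +1 directly in front of the enemy, -1 directly behind it
    cos_a = (enemy_facing[0] * dx + enemy_facing[1] * dy) / dist
    frontness = (cos_a + 1.0) / 2.0           # 1 in front, 0 behind
    return (0.2 + 0.8 * frontness) / dist     # hypothetical steep/shallow mix
```

An avatar descending this gradient senses a smaller value behind the enemy and tends to come up behind it, taking the enemy's facing direction into account.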
processing

the ar system may, in at least some implementations, advantageously perform optical flow analysis in hardware by finding features via an image processing unit (ipu), then finding the features frame-by-frame with a general purpose set theoretic processor (gpstp). these components allow the ar system to perform some of the complex computations described throughout this application. further details on these components will be provided below, but it should be appreciated that any other similar processing components may be similarly used, or used additionally. a gpstp is a search engine that efficiently finds defined objects. gpstps perform a set theoretic search. by way of explanation, a venn diagram search of the combinatorics can be performed in order n, rather than factorial order. the gpstp efficiently performs comparisons using set theory to find defined objects. for example, a gpstp is an efficient structure to find a person who meets very specific criteria, as illustrated by the following example criteria: a male who had a 1987 cadillac, purchased a starbucks® coffee on july 31st, climbed mount everest in 1983, and has a blue shirt. an ipu is a piece of imaging processing hardware that can take an image in pixels and convert it into features. a feature may be thought of as a pixel coordinate with meta information. in executing optical flow algorithms and imaging, the ar system identifies an object in a frame and then determines where that object appears in at least one subsequent frame. the ipu efficiently generates features, and reduces the data from pixels to a set of features. for example, the ipu may take a frame with a million pixels and produce a much smaller set of features (e.g., 200 features). this set of features may be provided to the gpstp for processing. the gpstp may store the features to be found. as discussed above, a feature is a 2d point in an image with associated meta information or data.
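the set-theoretic search example above can be sketched in software as a plain intersection of candidate sets, one set per criterion. the index data below is invented for illustration; the point is that the cost grows with the number of criteria (order n) rather than with permutations of them.

```python
# Hypothetical index: each criterion maps to the set of candidate IDs
# satisfying it. A conjunctive query is then a set intersection,
# mirroring the Venn-diagram search described in the text.
index = {
    "owned_1987_cadillac": {1, 4, 7},
    "bought_starbucks_jul31": {2, 4, 7, 9},
    "climbed_everest_1983": {4, 7},
    "blue_shirt": {3, 4},
}

def set_theoretic_search(criteria, index):
    """Intersect the candidate sets for each named criterion."""
    sets = [index[c] for c in criteria]
    return set.intersection(*sets) if sets else set()
```

querying all four criteria at once narrows the candidates to the single person who meets every one.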
features can have names or labels. the gpstp holds the n-1 features that were found in the most recent ring. if a match is found, the correspondence may be saved in 2d. this requires only a small amount of computing for a general purpose processor to calculate a bundle adjust to figure out what the relative absolute pose was from the last frame to the current frame. it provides a hardware closed loop that is very fast and very efficient. in a mobile computation scenario, the two pieces of hardware (ipu and gpstp) may efficiently perform what would normally require a large amount of conventional imaging processing. in some implementations, the ar system may employ a meta process that provides timing and quality targets for every atomic module in the localization, pose, and mapping processes. by providing each atomic module a timing and quality target, those modules can internally or autonomously self-regulate their algorithms toward optimality. this advantageously avoids the need for hard real-time operation. the meta-controller may then pull in statistics from the atomic modules, statistically identifying the class of place in which the system is operating. overall system tuning configurations for various places (e.g., planes, roads, hospitals, living rooms, etc.) may be saved. the ar system may employ a tracking module. any piece of computer processing can take different amounts of time. if every module is atomic and can receive and use timing and quality data, the modules can determine or at least estimate how long they take to run a process. each module may have some metric on the quality of the respective process. the modules may take the determined or estimated timing of various modules into account, automatically implementing tradeoffs where possible. for example, a module may determine that taking more time to achieve higher quality is advisable. the meta-controller could seed a quality time target to every module in a very modular system.
this may allow each module to self-tune itself to hit timing targets. this allows operation of a very complicated processing system that needs to run in real time, without a schedule. it forms a feedback loop. this approach avoids the need for a hard real-time operating system. the meta-controller sends the time target messages to the modules. for example, if a user is playing a game, the meta-controller may decide to tell the modules to use low quality localization targets because the meta-controller would like to free up computing power for some other task (e.g., for character animation). the meta-controller may be statistically defined and can provide targets that balance in different configurations. this approach may also save on system tuning. for example, a global set of modifiable algorithmic parameters may allow for tuning. for instance, operations may be tuned based on location (e.g., on a plane, driving a car, in a hospital, in a living room). the approach allows for bundling of all these parameters. for example, feature tracking can have low quality targets, so it only requires a relatively short time, and the remainder of the time budget can be used for other processing. classical "features from accelerated segment test" (fast) feature extractors (as discussed in some detail above) may be configured into a massively parallel byte-matching general purpose set theoretic processor (gpstp). as noted above, the gpstp is a processor that does comparisons only. the resulting feature extractor has outputs and capabilities similar to fast, but is implemented completely through brute-force search and comparison rather than mathematics. the feature extractor would be located near the camera, to immediately process frames into feature data (x, y, z, basic descriptor information), in one or more embodiments. massively parallel comparisons would be performed on serially streamed data via the gpstps.
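the brute-force, comparison-only idea can be reduced to a toy sketch: serialize the pixels and compare every small window against an enumerated table of byte patterns, with no arithmetic beyond equality tests. the patterns and window size below are invented for illustration and are far simpler than real fast-style features.

```python
# Hypothetical enumerated feature patterns: each 3-pixel byte window
# that matches a table entry is reported as a feature. Real hardware
# would stream the serialized image through massively parallel
# comparators; this sketch just scans sequentially.
FEATURE_PATTERNS = {
    (0, 255, 0): "vertical_edge",
    (255, 0, 255): "dark_line",
}

def brute_force_features(row):
    """Scan a serialized pixel row; return (index, label) for every
    position whose 3-pixel window matches an enumerated pattern."""
    hits = []
    for i in range(len(row) - 2):
        window = tuple(row[i:i + 3])
        if window in FEATURE_PATTERNS:
            hits.append((i, FEATURE_PATTERNS[window]))
    return hits
```

because each pixel is only 8 bits, the full table of possible windows is finite and can be enumerated ahead of time, which is what makes the comparison-only approach feasible.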
the approach would essentially make an image sequential and have the gpstp find every type of fast feature possible. the types of features can be enumerated, and the gpstp finds the features because there is only a limited size, for example 8 bits per pixel. the gpstp rolls through and finds every combination via a brute-force search. any image can be serialized, and any feature of interest may be transformed. a transform may be performed on the image beforehand, which makes the bit patterns invariant to rotation or scaling, etc. the gpstp takes some group of pixels and applies one or more convolution operations. thus, by utilizing the various ar systems and the various software and optics techniques outlined above, the system is able to create virtual reality and/or augmented reality experiences for the user. fig. 49 illustrates another system architecture of an example ar system. as shown in fig. 49 , the ar system 4900 comprises a plurality of input channels from which the ar system 4900 receives input. the input may be sensory input 4906, visual input 4902 or stationary input 4904. other types of input may also be similarly received (e.g., gesture information, auditory information, etc.). it should be appreciated that the embodiment of fig. 49 is simplified for illustrative purposes only, and other types of input may be received and fed into the ar system 4900. on a basic level, the ar system 4900 may receive input (e.g., visual input 4902 from the user's wearable system, input from room cameras, sensory input in the form of various sensors in the system, gestures, totems, eye tracking, etc.) from one or more ar systems. the ar systems may constitute one or more user wearable systems, and/or stationary room systems (room cameras, etc.). the wearable ar systems not only provide images from the cameras, they may also be equipped with various sensors (e.g., accelerometers, temperature sensors, movement sensors, depth sensors, gps, etc.)
to determine the location, and various other attributes of the environment of the user. of course, this information may further be supplemented with information from stationary cameras discussed previously. these cameras, along with the wearable ar systems, may provide images and/or various cues from a different point of view. it should be appreciated that image data may be reduced to a set of points, as explained above. as discussed above, the received data may be a set of raster imagery and point information that is stored in a map database 4910. as discussed above, the map database 4910 collects information about the real world that may be advantageously used to project virtual objects in relation to known locations of one or more real objects. as discussed above, the topological map, the geometric map, etc. may be constructed based on information stored in the map database 4910. in one or more embodiments, the ar system 4900 also comprises object recognizers 4908 (object recognizers are explained in depth above). as discussed at length above, object recognizers 4908 "crawl" through the data (e.g., the collection of points) stored in one or more databases (e.g., the map database 4910) of the ar system 4900 and recognize (and tag) one or more objects. the mapping database may comprise various points collected over time and their corresponding objects. based on this information, the object recognizers may recognize objects and supplement this with semantic information (as explained above). for example, if the object recognizer recognizes a set of points to be a door, the system may attach some semantic information (e.g., the door has a hinge and has a 90 degree movement about the hinge). over time the map database grows as the system (which may reside locally or may be accessible through a wireless network) accumulates more data from the world. once the objects are recognized, the information may be transmitted to one or more user wearable systems 4920.
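the recognize-then-attach-semantics step above (the door/hinge example) can be sketched as a small lookup: a recognizer classifies a cluster of points, and the system supplements the label with semantic information. both the semantics table and the classifier hook below are hypothetical; real recognizers would operate on 3-d point clouds and pose-tagged imagery.

```python
# Hypothetical semantic information attached after recognition,
# modeled on the door example in the text (hinge, 90-degree swing).
SEMANTICS = {
    "door": {"has_hinge": True, "rotation_deg": 90},
    "wall": {"planar": True},
}

def recognize(point_cluster, classify):
    """classify: a function mapping a cluster of points to a label.
    Returns the label plus the attached semantic information."""
    label = classify(point_cluster)
    return {"label": label, "semantics": SEMANTICS.get(label, {})}
```

the enriched record, rather than the raw points, is what would be transmitted on to the wearable systems.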
for example, the ar system 4900 may transmit data pertaining to a scene in a first location (e.g., san francisco) to one or more users having wearable systems in new york city. utilizing the data in the map database 4910 (e.g., data received from multiple cameras and other inputs, the object recognizers and other software components that map the points collected through the various images, recognize objects, etc.), the scene may be accurately "passed over" to a user in a different part of the world. as discussed above, the ar system 4900 may also utilize a topological map for localization purposes. more particularly, the following discussion will go in depth about various elements of the overall system that allow the interaction between one or more users of the ar system. fig. 50 is an example process flow diagram 5000 that illustrates how a virtual scene is displayed to a user in relation to one or more real objects. for example, the user may be in new york city, but may desire to view a scene that is presently going on in san francisco. or, the user may desire to take a "virtual" walk with a friend who resides in san francisco. to do this, the ar system 4900 may essentially "pass over" the world corresponding to the san francisco user to the wearable ar system of the new york user. for example, the wearable ar system may create, at the wearable ar system of the new york user, a virtual set of surroundings that mimics the real world surroundings of the san francisco user. similarly, on the flip side, the wearable ar system of the san francisco user may create a virtual avatar (or a virtual look-alike) of the new york user that mimics the actions of the new york user. thus, both users visualize one or more virtual elements that are being "passed over" from the other user's world and onto the user's individual ar system. first, in 5002, the ar system may receive input (e.g., visual input, sensory input, auditory input, knowledge bases, etc.)
from one or more users of a particular environment. as described previously, this may be achieved through various input devices, and knowledge already stored in the map database. the user's cameras, sensors, gps system, eye tracking, etc., convey information to the system (step 5002). it should be appreciated that such information may be collected from a plurality of users to comprehensively populate the map database with real-time and up-to-date information. in one or more embodiments, the ar system 4900 may determine a set of sparse points based on the set of received data (5004). as discussed above, the sparse points may be used in determining the pose of the keyframes that took a particular image. this may be crucial in understanding the orientation and position of various objects in the user's surroundings. the object recognizers may crawl through these collected points and recognize one or more objects using the map database 4910 (5006). in one or more embodiments, the one or more objects may have been recognized previously and stored in the map database. in other embodiments, if the information is new, object recognizers may run on the new data, and the data may be transmitted to one or more wearable ar systems (5008). based on the recognized real objects and/or other information conveyed to the ar system, the desired virtual scene may be accordingly displayed to the user of the wearable ar system (5010). for example, the desired virtual scene (e.g., the walk with the user in san francisco) may be displayed accordingly (e.g., comprising a set of real objects at the appropriate orientation, position, etc.) in relation to the various objects and other surroundings of the user in new york. it should be appreciated that the above flow chart represents the system at a very basic level. fig. 51 below represents a more detailed system architecture. referring to fig. 51 , various elements are depicted for one embodiment of a suitable vision system. as shown in fig.
51 , the ar system 5100 comprises a map 5106 that receives information from at least a pose module 5108 and a depth map or fusion module 5104. as will be described in detail further below, the pose module 5108 receives information from a plurality of wearable ar systems. specifically, data received from the systems' cameras 5120 and data received from sensors such as imus 5122 may be utilized to determine the pose at which various images were captured. this information allows the system to place one or more map points derived from the images at the appropriate position and orientation in the map 5106. this pose information is transmitted to the map 5106, which uses this information to store map points based on the position and orientation of the cameras with respect to the captured map points. as shown in fig. 51 , the map 5106 also interacts with the depth map module 5104. the depth map module 5104 receives information from a stereo process 5110, as will be described in further detail below. the stereo process 5110 constructs a depth map 5126 utilizing data received from stereo cameras 5116 on the plurality of wearable ar systems and ir cameras (or ir active projectors 5118). the stereo process 5110 may also receive inputs based on hand gestures 5112. it should be appreciated that the hand gestures and/or totem gestures may be determined based at least in part on data received from eye cameras 5114 that track the user's hand gestures. as shown in fig. 51 , data from the stereo process 5110 and data from the pose process 5108 are used at the depth map fusion module 5104. in other words, the fusion process 5104 determines a depth of objects while also utilizing pose information from the pose process 5108. this information is then transmitted to and stored at the map 5106. as shown in fig. 51 , data from the map 5106 is transmitted as needed to provide an ar experience to a plurality of users of the wearable ar system.
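the data flow just described (camera and imu data yielding pose; stereo depth fused with pose; the result stored in the map) can be sketched in a deliberately simplified form. every structure and the blending weight below are illustrative assumptions; a real pose process would solve a full 6-dof estimation problem rather than average two estimates.

```python
# Hypothetical sketch of the fig. 51 flow: blend camera- and
# imu-derived pose estimates, then store depth samples in the map
# tagged with the capturing pose.
def pose_from_sensors(camera_estimate, imu_estimate, imu_weight=0.3):
    """Blend a camera-derived and an imu-derived pose estimate
    (here just a weighted average of coordinate tuples)."""
    return tuple((1 - imu_weight) * c + imu_weight * i
                 for c, i in zip(camera_estimate, imu_estimate))

def fuse_to_map(pose, depths, the_map):
    """Fusion step: store each depth sample in the map, tagged with
    the pose at which it was captured."""
    for point_id, depth in depths.items():
        the_map[point_id] = {"depth": depth, "pose": pose}
    return the_map
```

storing the pose alongside each point is what later lets the map serve position-and-orientation-aware data back to other users.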
one or more users may interact with the ar system through gesture tracking 5128, eye tracking 5130, totem tracking 5132 or through a gaming console 5134. the map 5106 is a database containing map data for the world. in one embodiment, the map 5106 may partly reside on user-wearable components, and/or may partly reside at cloud storage locations accessible by wired or wireless network. the map 5106 is a significant and growing component which will become larger and larger as more and more users are on the system. in one or more embodiments, the map 5106 may comprise a set of raster imagery, point + descriptors clouds and/or polygonal/geometric definitions corresponding to one or more objects of the real world. the map 5106 is constantly updated with information received from multiple augmented reality devices, and becomes more and more accurate over time. it should be appreciated that the system may further include a processor/controller that performs a set of actions pertaining to the various components described with respect to fig. 51 . also, the processor/controller may determine through the various components (e.g., fusion process, pose process, stereo, etc.) a set of output parameters that can be used to project a set of images to the user through a suitable vision system. for example, the output parameter may pertain to a determined pose that varies one or more aspects of a projected image. or, the output parameter may pertain to a detected user input that may cause modification of one or more aspects of a projected image. other such output parameters of various parts of the system architecture will be described in further detail below. in one or more embodiments, the map 5106 may comprise a passable world model. the passable world model allows a user to effectively "pass" over a piece of the user's world (i.e., ambient surroundings, interactions, etc.) to another user. 
each user's respective individual ar system (e.g., individual augmented reality devices) captures information as the user passes through or inhabits an environment, which the ar system (or virtual reality world system in some embodiments) processes to produce a passable world model. the individual ar system may communicate or pass the passable world model to a common or shared collection of data, referred to as the cloud. the individual ar system may communicate or pass the passable world model to other users, either directly or via the cloud. the passable world model provides the ability to efficiently communicate or pass information that essentially encompasses at least a field of view of a user. for example, as a user walks through an environment, the user's individual ar system captures information (e.g., images) and saves the information as posed tagged images, which form the core of the passable world model. the passable world model is a combination of raster imagery, point + descriptors clouds, and/or polygonal/geometric definitions (referred to herein as parametric geometry). some or all of this information is uploaded to and retrieved from the cloud, a section of which corresponds to this particular space that the user has walked into. asynchronous communication is established between the user's respective individual ar system and the cloud based computers (e.g., server computers). in other words, the user's individual ar system is constantly updating information about the user's surroundings to the cloud, and also receiving information from the cloud about the passable world. thus, rather than each user having to capture images, recognize objects of the images, etc., having an asynchronous system allows the system to be more efficient. information that already exists about that part of the world is automatically communicated to the individual ar system while new information is updated to the cloud.
it should be appreciated that the passable world model lives both on the cloud or other form of networking computing or peer to peer system, and also may live on the user's individual system. a pose process 5108 may run on the wearable computing architecture and utilize data from the map 5106 to determine the position and orientation of the wearable computing hardware or user. pose data may be computed from data collected on the fly as the user is experiencing the system and operating in the world. the data may comprise images, data from sensors (such as inertial measurement, or "imu" devices, which generally comprise accelerometer and gyro components), and surface information pertinent to objects in the real or virtual environment. it should be appreciated that for any given space, images taken by the user's individual ar system (multiple field of view images captured by one user's individual ar system or multiple users' ar systems) give rise to a large number of map points of the particular space. for example, a single room may have a thousand map points captured through multiple points of view of various cameras (or one camera moving to various positions). thus, if a camera (or cameras) associated with the users' individual ar systems captures multiple images, a large number of points are collected and transmitted to the cloud. these points not only help the system recognize objects and create a more complete virtual world that may be retrieved as part of the passable world model, they also allow refinement of the calculation of the position of the camera based on the position of the points. in other words, the collected points may be used to estimate the pose (e.g., position and orientation) of the keyframe (e.g., camera) capturing the image. a set of "sparse point representations" may be the output of a simultaneous localization and mapping (or "slam", or "v-slam") process 5124. this refers to a configuration wherein the input is images/visual only.
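the idea that collected points, once tagged with pose, can be used to recover a camera's pose can be shown in a deliberately tiny, translation-only 2-d form. everything here is an illustrative assumption; real slam solves for full position and orientation from many points, not a single shared point.

```python
# Hypothetical sketch: world coordinates of a point are the camera
# origin plus the observed offset, so the point can be "tagged" with
# the capturing pose; a second camera seeing the same tagged point
# can then recover its own origin by comparison.
def tag_points(cam_origin, observations):
    """observations: (dx, dy) offsets as seen from the camera.
    Returns world points tagged with the capturing pose."""
    ox, oy = cam_origin
    return [{"world": (ox + dx, oy + dy), "pose": cam_origin}
            for dx, dy in observations]

def localize_second_camera(tagged_point, observed_offset):
    """Recover a second camera's origin from one shared tagged point."""
    wx, wy = tagged_point["world"]
    dx, dy = observed_offset
    return (wx - dx, wy - dy)
```

this is the core of orienting and localizing a second camera against tagged images from a first camera, reduced to arithmetic on one point.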
the system not only determines where in the world the various components are, but also what the world comprises. pose 5108 is a building block that achieves many goals, including populating the map 5106 and using the data from the map 5106. in one embodiment, sparse point positions are not completely adequate on their own, and further information may be needed to produce a multifocal virtual or augmented reality experience 5102 as described above. dense representations (generally referred to as depth map information) may be utilized to fill this gap at least in part. such information may be computed from a process referred to as "stereo." in the stereo process 5110, depth information is determined using a technique such as triangulation or time-of-flight sensing. further details on dense and sparse representations of data are provided further below. in one or more embodiments, 3-d points may be captured from the environment, and the pose (i.e., vector and/or origin position information relative to the world) of the cameras that capture those images or points may be determined, such that these points or images may be "tagged", or associated, with this pose information. then points captured by a second camera may be utilized to determine the pose of the second camera. in other words, one can orient and/or localize a second camera based upon comparisons with tagged images from a first camera. this knowledge may be utilized to extract textures, make maps, and create a virtual copy of the real world (because then there are two cameras around that are registered). thus, at the base level, in one embodiment, a wearable ar system can be utilized to capture both 3-d points and the 2-d images that produced the points, and these points and images may be sent out to a cloud storage and processing resource (i.e., the mapping database).
they may also be cached locally with embedded pose information (i.e., cache the tagged images) such that the cloud may have access to (i.e., in available cache) tagged 2-d images (i.e., tagged with a 3-d pose), along with 3-d points. the cloud system may save some points as fiducials for pose only, to reduce overall pose tracking calculation. generally it may be desirable to have some outline features to be able to track major items in a user's environment, such as walls, a table, etc., as the user moves around the room, and the user may want to be able to "share" the world and have some other user walk into that room and also see those points. such useful and key points may be termed "fiducials" because they are fairly useful as anchoring points - they are related to features that may be recognized with machine vision, and that can be extracted from the world consistently and repeatedly on different pieces of user hardware. thus, these fiducials preferably may be saved to the cloud for further use. in one embodiment it is preferable to have a relatively even distribution of fiducials throughout the pertinent world, because they are the kinds of items that cameras can easily use to recognize a location. in one embodiment, the pertinent cloud computing configuration may groom the database of 3-d points and any associated metadata periodically to use the best data from various users for both fiducial refinement and world creation. in other words, the system may get the best dataset by using inputs from various users looking and functioning within the pertinent world. in one embodiment, the database is intrinsically fractal - as users move closer to objects, the cloud passes higher resolution information to such users. as a user maps an object more closely, that data is sent to the cloud, and the cloud can add new 3-d points and image-based texture maps to the database if the new maps are superior to what was stored previously in the database.
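the "intrinsically fractal" behavior above can be sketched with two small helpers: a level of detail that grows as distance shrinks, and a replace-only-if-superior rule for incoming maps. the distance cutoffs and the quality metric are invented for illustration.

```python
# Hypothetical level-of-detail schedule: closer users receive
# higher-resolution map data from the cloud.
def level_of_detail(distance_m, max_level=4):
    for level, cutoff in enumerate((1, 4, 16, 64), start=1):
        if distance_m <= cutoff:
            return max_level - level + 1
    return 1

def maybe_replace(stored_quality, new_quality):
    """The cloud adds a new map only if it is superior to what was
    stored previously in the database."""
    return new_quality > stored_quality
```

grooming the database then amounts to repeatedly applying the replace-if-superior rule to contributions from many users.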
it should be appreciated that the database may be accessed by multiple users simultaneously. in one or more embodiments, the system may recognize objects based on the collected information. for example, it may be important to understand an object's depth in order to recognize and understand such an object. recognizer software objects ("recognizers") may be deployed on cloud or local resources to specifically assist with recognition of various objects on either or both platforms as a user is navigating data in a world. for example, if a system has data for a world model comprising 3-d point clouds and pose-tagged images, and there is a desk with a bunch of points on it as well as an image of the desk, there may not be a determination that what is being observed is, indeed, a desk as humans would know it. in other words, some 3-d points in space and an image from someplace off in space that shows most of the desk may not be enough to instantly recognize that a desk is being observed. to assist with this identification, a specific object recognizer may be created to enter the raw 3-d point cloud, segment out a set of points, and, for example, extract the plane of the top surface of the desk. similarly, a recognizer may be created to segment out a wall from 3-d points, so that a user could change wallpaper or remove part of the wall in virtual or augmented reality and have a portal to another room that is not actually there in the real world. such recognizers operate within the data of a world model and may be thought of as software "robots" that crawl a world model and imbue that world model with semantic information, or an ontology about what is believed to exist amongst the points in space. such recognizers or software robots may be programmed such that their entire existence is about going around the pertinent world of data and finding things that they believe are walls, chairs, or other items.
they may tag a set of points with the functional equivalent of, "this set of points belongs to a wall", and may comprise a combination of point-based algorithm and pose-tagged image analysis for mutually informing the system regarding what is in the points. object recognizers may be created for many purposes of varied utility, depending upon the perspective. for example, in one embodiment, a purveyor of coffee such as starbucks may invest in creating an accurate recognizer of starbucks coffee cups within pertinent worlds of data. such a recognizer may be configured to crawl worlds of data large and small searching for starbucks coffee cups, so they may be segmented out and identified to a user when operating in the pertinent nearby space (i.e., perhaps to offer the user a coffee in the starbucks outlet right around the corner when the user looks at his starbucks cup for a certain period of time). with the cup segmented out, it may be recognized quickly when the user moves it on his desk. such recognizers may be configured to run or operate not only on cloud computing resources and data, but also on local resources and data, or both cloud and local, depending upon computational resources available. in one embodiment, there is a global copy of the world model on the cloud with millions of users contributing to that global model, but for smaller worlds or sub-worlds like an office of a particular individual in a particular town, most of the global world will not care what that office looks like, so the system may groom data and move to local cache information that is believed to be most locally pertinent to a given user. 
in one embodiment, when a user walks up to a desk, related information (such as the segmentation of a particular cup on his table) may reside only upon his local computing resources and not on the cloud, because objects that are identified as ones that move often, such as cups on tables, need not burden the cloud model and the transmission burden between the cloud and local resources. thus the cloud computing resource may segment 3-d points and images, factoring permanent (e.g., generally not moving) objects from movable ones. this may affect where the associated data is to remain and where it is to be processed, remove processing burden from the wearable/local system for certain data that is pertinent to more permanent objects, allow one-time processing of a location which then may be shared with limitless other users, allow multiple sources of data to simultaneously build a database of fixed and movable objects in a particular physical location, and segment objects from the background to create object-specific fiducials and texture maps. the system may share basic elements (walls, windows, desk geometry, etc.) with any user who walks into the room in virtual or augmented reality, and in one embodiment that person's system will take images from his particular perspective and upload those to the cloud. then the cloud becomes populated with old and new sets of data and can run optimization routines and establish fiducials that exist on individual objects. image information and active patterns (such as infrared patterns created using active projectors, as shown in fig. 51 ) are used as an input to the stereo process 5110. a significant amount of depth map information may be fused together, and some of this may be summarized with surface representation. for example, mathematically definable surfaces are efficient (i.e., relative to a large point cloud) and digestible inputs to things like game engines.
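the permanent-versus-movable factoring described above can be sketched as a partition over repeated observations: objects whose observed position barely varies stay in the shared cloud model, while frequently moving objects (like the cup on the table) are kept local. the positional-spread criterion and tolerance are invented for illustration.

```python
# Hypothetical factoring rule: an object's positions over time are
# compared; a small spread marks it permanent (cloud model), a large
# spread marks it movable (local cache only).
def partition_objects(observations, move_tolerance=0.1):
    """observations: {name: [scalar positions over time]}.
    Returns (permanent, movable) name lists."""
    permanent, movable = [], []
    for name, positions in observations.items():
        spread = max(positions) - min(positions)
        (permanent if spread <= move_tolerance else movable).append(name)
    return permanent, movable
```

the permanent list is what could be processed once and shared with limitless other users, while the movable list stays off the cloud.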
the above techniques represent some embodiments of the depth mapping process 5104, but it should be appreciated that other such techniques may be used for depth mapping and fusion. the output of the stereo process (depth map) may be combined in the fusion process 5104. pose 5108 may be an input to this fusion process 5104 as well, and the output of fusion 5104 becomes an input to populating the map process 5106, as shown in the embodiment of fig. 51 . sub-surfaces may connect with each other, such as in topographical mapping, to form larger surfaces, and the map 5106 may become a large hybrid of points and surfaces. to resolve various aspects in the augmented reality process 5102, various inputs may be utilized. for example, in the depicted embodiment, various game parameters 5134 may be inputs to determine that the user or operator of the system is playing a monster battling game with one or more monsters at various locations, monsters dying or running away under various conditions (such as if the user shoots the monster), walls or other objects at various locations, and the like. the map 5106 may include information regarding where such objects are relative to each other, to be another valuable input to the ar experience 5102. the input from the map 5106 to the ar process 5102 may be called the "world map". pose relative to the world becomes an input and may play a key role in almost any interactive system. controls or inputs from the user are another important input. in order to move around or play a game, for example, the user may need to instruct the system regarding what the user wishes to do. beyond just moving oneself in space, there are various forms of user controls that may be utilized. in one embodiment, data 5112 may pertain to a totem or object (e.g., a gun) that is held by the user and tracked by the system.
the system preferably will know that the user is holding the item and understand what kind of interaction the user is having with the item (i.e., if the totem or object is a gun, the system may understand location and orientation, as well as whether the user is clicking a trigger or other sensed button or element which may be equipped with a sensor, such as an imu, which may assist in determining what is going on, even when such activity is not within the field of view of any of the cameras). data 5112 pertaining to hand gesture tracking or recognition may also provide valuable input information. the system may track and interpret hand gestures for button presses, for gesturing left or right, stop, etc. for example, in one configuration, the user may wish to flip through emails or a calendar in a non-gaming environment, or "fist bump" with another person or player. the system may leverage a minimum amount of hand gestures, which may or may not be dynamic. for example, the gestures may be simple static gestures (e.g., open hand for stop, thumbs up for ok, thumbs down for not ok, a hand flip right or left or up/down for directional commands, etc.). one embodiment may start with a fairly limited vocabulary for gesture tracking and interpretation, and eventually become more nuanced and complex. eye tracking 5114 is another important input (i.e., tracking where the user is looking to control the display technology to render at a specific depth or range). in one embodiment, vergence of the eyes may be determined using triangulation, and then using a vergence/accommodation model developed for that particular person, accommodation may be determined.
with regard to the camera systems, some embodiments correspond to three pairs of cameras: a relatively wide field of view ("fov") or "passive slam" pair of cameras 5120 arranged to the sides of the user's face, and a different pair of cameras oriented in front of the user to handle the stereo process 5110 and also to capture hand gestures and totem/object tracking in front of the user's face. a pair of eye cameras 5114 may be oriented into the eyes of the user to triangulate eye vectors and/or other information. as noted above, the system may also comprise one or more textured light projectors (such as infrared, or "ir", projectors 5118) to inject texture into a scene, as will be described in further detail below. calibration of all of these devices (for example, the various cameras, imus and other sensors, etc.) is important in coordinating the system and components thereof. the system may also utilize wireless triangulation technologies (such as mobile wireless network triangulation and/or global positioning satellite technology, both of which become more relevant as the system is utilized outdoors). other devices or inputs, such as a pedometer worn by a user or a wheel encoder associated with the location and/or orientation of the user, may need to be calibrated to become valuable to the system. the display system may also be considered to be an input element from a calibration perspective. in other words, the various elements of the system preferably are related to each other, and are calibrated intrinsically as well (i.e., how the elements map the real world matrix into measurements; going from real world measurements to matrix may be termed "intrinsics"). for a camera module, the standard intrinsic parameters may include the focal length in pixels, the principal point (intersection of the optical axis with the sensor), and distortion parameters (particularly geometry).
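the intrinsic parameters named above (focal length in pixels and principal point; distortion is omitted here for brevity) can be sketched as a simple pinhole projection and its inverse, which is what lets the system express every pixel in terms of a ray direction in space; the numeric values below are hypothetical.

```python
def project(point_3d, fx, fy, cx, cy):
    """pinhole projection with intrinsic parameters: focal lengths in
    pixels (fx, fy) and principal point (cx, cy); distortion omitted."""
    x, y, z = point_3d
    u = fx * x / z + cx
    v = fy * y / z + cy
    return (u, v)

def back_project(u, v, depth, fx, fy, cx, cy):
    """inverse mapping: a pixel plus a depth back to a point in space,
    i.e. recovering the ray direction for that pixel."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

# example intrinsics (hypothetical values, not a calibrated camera)
fx = fy = 500.0
cx, cy = 320.0, 240.0
uv = project((0.2, -0.1, 2.0), fx, fy, cx, cy)   # → (370.0, 215.0)
p = back_project(*uv, 2.0, fx, fy, cx, cy)       # recovers (0.2, -0.1, 2.0)
```

in-factory and in-situ calibration, as described in the text, amounts to estimating fx, fy, cx, cy (plus distortion terms) accurately enough that this round trip holds for the real sensor.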
one may also consider photogrammetric parameters, if normalization of measurements or radiance in space is of interest. with an imu module 5122 that combines gyro and accelerometer devices, scaling factors may be important calibration inputs. camera-to-camera calibration may also be crucial and may be performed by having the three sets of cameras (e.g., eye cameras, stereo cameras, and wide field of view cameras, etc.) rigidly coupled to each other. in one embodiment, the display may have two eye sub-displays, which may be calibrated at least partially in-factory, and partially in-situ due to anatomic variations of the user (location of the eyes relative to the skull, location of the eyes relative to each other, etc.). thus in one embodiment, a process is conducted at runtime to calibrate the display system for the particular user. generally all of the calibration will produce parameters or configurations which may be used as inputs to the other functional blocks, as described above. for example, the calibration may produce inputs that relate to where the cameras are relative to a helmet or other head-worn module; the global reference of the helmet; the intrinsic parameters of the cameras, etc. such that the system can adjust the images in real-time in order to determine a location of every pixel in an image in terms of ray direction in space. the same is also true for the stereo cameras 5116. in one or more embodiments, a disparity map of the stereo cameras may be mapped into a depth map, and into an actual cloud of points in 3-d. thus, calibration is fundamental in this case as well. all of the cameras preferably will be known relative to a single reference frame. this is a fundamental notion in the context of calibration. similar to the above, the same is also true with the imu(s) 5122. generally, the three axes of rotation may be determined relative to the ar system in order to facilitate at least some characterization/transformation related thereto. 
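the disparity-to-depth mapping mentioned above follows the standard stereo relation depth = focal length x baseline / disparity for a calibrated, rectified pair; a minimal sketch with toy values, turning a tiny disparity map into a cloud of 3-d points, might look like:

```python
def disparity_to_depth(disparity_px, focal_px, baseline_m):
    """standard rectified-stereo relation: depth = f * B / d."""
    return focal_px * baseline_m / disparity_px

def disparity_map_to_points(disp, focal_px, baseline_m, cx, cy):
    """turn a small disparity map into a cloud of 3-d points."""
    points = []
    for v, row in enumerate(disp):
        for u, d in enumerate(row):
            if d <= 0:          # no stereo match at this pixel
                continue
            z = disparity_to_depth(d, focal_px, baseline_m)
            points.append(((u - cx) * z / focal_px,
                           (v - cy) * z / focal_px,
                           z))
    return points

# 2x2 toy disparity map; parameters are hypothetical
disp = [[0, 25.0], [50.0, 0]]
pts = disparity_map_to_points(disp, focal_px=500.0, baseline_m=0.1,
                              cx=1.0, cy=1.0)
# two valid pixels → two 3-d points with z = 500*0.1/25 = 2.0 and 1.0
```

this is why the text stresses that all of the cameras preferably are known relative to a single reference frame: the focal length, baseline, and principal point all enter the computed 3-d coordinates directly.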
other calibration techniques will be discussed further below.

dense/sparse mapping tracking

as previously noted, there are many ways that one can obtain map points for a given location: some approaches may generate a large number of (dense) points or lower-resolution depth points, and other approaches may generate a much smaller number of (sparse) points. however, conventional vision technologies are premised upon the map data all being of one density of points. this presents a problem when there is a need to have a single map that has varying density of points, from varying levels of sparse to completely dense sets of data. for example, when in an indoor setting within a given space, there is often the need to store a very dense map of the points within the room, e.g., because the higher level and volume of detail for the points in the room may be important to fulfill the requirements of many gaming or business applications. on the other hand, in a long hallway or in an outdoor setting, there is far less need to store a dense amount of data, and hence it may be far more efficient to represent outdoor spaces using a sparser set of points. with the wearable ar system, the system architecture is capable of accounting for the fact that the user may move from a setting corresponding to a dense mapping (e.g., indoors) to a location corresponding to a more sparse mapping (e.g., outdoors), and vice versa. the general idea is that regardless of the nature of the identified point, certain information is obtained for that point, and these points are stored together into a common map, as described in detail previously. a normalization process is performed to make sure the stored information for the points is sufficient to allow the system to perform the desired functionality for the wearable device.
this common map therefore permits integration of the different types and/or densities of data, and allows movement of the wearable device with seamless access and use of the map data. referring ahead to fig. 114 , a flowchart 11400 of one possible approach to populate the map with both sparse map data and dense map data is illustrated. the path on the left portion addresses sparse points and the path on the right portion addresses dense points. at 11401a, the process identifies sparse feature points, which may pertain to any distinctive/repeatable textures visible to the machine. examples of such distinctive points include corners, circles, triangles, text, etc. identification of these distinctive features allows one to identify properties for that point, and also to localize the identified point. various types of information are obtained for the point, including the coordinates of the point as well as other information pertaining to the characteristics of the texture of the region surrounding or adjacent to the point. similarly, at 11401b, identification is made of a large number of points within a space. for example, a depth camera may be used to capture a set of 3d points within space that identifies the (x,y,z) coordinates of each point. some depth cameras may also capture the rgb values along with the d (depth) value for the points. this provides a set of world coordinates for the captured points. the problem at this point is that there are two sets of potentially incompatible points, where one set is sparse (resulting from 11401a) and the other set is dense (resulting from 11401b). normalization on the captured data can be performed to address this potential problem. normalization is performed to address any aspect of the data that may be needed to facilitate the vision functionality needed for the wearable device. for example, at 11403a, scale normalization can be performed to normalize the density of the sparse data.
here, a point is identified, and offsets from that point are also identified to determine differences from the identified point to the offsets, where this process is performed to check and determine the appropriate scaling that should be associated with the point. similarly, at 11403b, the dense data may also be normalized as appropriate to properly scale the identified dense points. other types of normalization may also be performed as known to one skilled in the art, e.g., coordinate normalization to a common origin point. a machine learning framework can be used to implement the normalization process, so that the learned normalization from a local set of points is used to normalize a second point, and so on until all necessary points have been normalized. the normalized point data for both the sparse and dense points are then represented in an appropriate data format. at 11405a, a descriptor is generated and populated for each sparse point. similarly, at 11405b, descriptors are generated and populated for the dense points. the descriptors (e.g., using the a-kaze, orb, or latch descriptor algorithms) characterize each of the points, whether corresponding to sparse or dense data. for example, the descriptor may include information about the scale, orientation, patch data, and/or texture of the point. thereafter, at 11407, the descriptors are stored into a common map database (as described above) to unify the data, including both the sparse and dense data. during operation of the wearable device, the data that is needed is used by the system. for example, when the user is in a space corresponding to dense data, a large number of points are likely available to perform any necessary functionality using that data. on the other hand, when the user has moved to a location corresponding to sparse data, there may be a limited number of points that are used to perform the necessary functionality. the user may be in an outdoor space where only four points are identified.
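a minimal sketch of the descriptor records stored into the common map database at 11407 might look as follows; the field names and the CommonMap class are illustrative assumptions, not the patent's actual schema, but they show how sparse and dense points can coexist in one store and still be filtered by provenance.

```python
from dataclasses import dataclass

@dataclass
class PointDescriptor:
    """one normalized map point; the same record serves sparse and
    dense data so both can live in the common map."""
    xyz: tuple            # normalized world coordinates
    scale: float          # scale recovered during normalization
    orientation: float    # dominant orientation, radians
    patch: bytes          # texture patch / binary descriptor payload
    density: str          # "sparse" or "dense" provenance tag

class CommonMap:
    """unified store: queries need not care which pipeline made a point."""
    def __init__(self):
        self.points = []

    def add(self, desc):
        self.points.append(desc)

    def by_density(self, density):
        return [p for p in self.points if p.density == density]

world_map = CommonMap()
world_map.add(PointDescriptor((0, 0, 2), 1.0, 0.0, b"\x01", "sparse"))
world_map.add(PointDescriptor((1, 0, 2), 0.5, 0.0, b"\x02", "dense"))
# both kinds coexist; a dense-data algorithm can still filter:
dense_only = world_map.by_density("dense")
```

in practice the patch field would hold a binary descriptor such as an orb or latch bit string, so that points can be matched by hamming distance regardless of which path produced them.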
the four points may be used, for example, for object identification and orientation of that object. the points may also be used to determine the pose of the user. for example, assume the user has moved into a room that has already been mapped. the user's device will identify points in the room (e.g., using a mono or stereo camera(s) on the wearable device). an attempt is made to check for the same points/patterns that were previously mapped; by identifying known points, the user's location can be identified, as well as the user's orientation. given four or more identified points in a 3d model of the room, this allows one to determine the pose of the user. if there is a dense mapping, then algorithms appropriate for dense data can be used to make the determination. if the space corresponds to a sparse mapping, then algorithms appropriate for sparse data can be used to make the determination.

projected texture sources

in some locations, there may be a scarcity of feature points from which to obtain texture data for that space. for example, certain rooms may have wide swaths of blank walls for which there are no distinct feature points to identify to obtain the mapping data. some embodiments provide a framework for actively generating a distinctive texture for each point, even in the absence of natural feature points or naturally occurring texture. fig. 115 illustrates an example approach. one or more fiber-based projectors 11501 are employed to project light that is visible to one or more cameras, such as camera 1 (11502) and/or camera 2 (11503). in one embodiment, the fiber-based projector comprises a scanned fiber display scanner that projects a narrow beam of light back and forth at selected angles. the light may be projected through a lens or other optical element, which may be utilized to collect the angularly-scanned light and convert it to one or more bundles of rays.
the projection data 11507 to be projected by the fiber-based projector may comprise any suitable type of light. in some embodiments, the projection data 11507 comprises structured light 11504 having a series of dynamic known patterns, where successive light patterns are projected to identify individual pixels that can be individually addressed and textured. the projection data may also comprise patterned light 11505 having a known pattern of points to be identified and textured. in yet another embodiment, the projection data comprises textured light 11506, which does not necessarily need to comprise a known or recognizable pattern, but does include sufficient texture to distinctly identify points within the light data. in operation, the one or more camera(s) are placed having a recognizable offset from the projector. the points are identified from the captured images from the one or more cameras, and triangulation is performed to determine the requisite location and depth information for each point. with the textured light approach, the textured light permits one to identify points even if there is already some texturing on the projected surface. this is implemented, for example, by having multiple cameras identify the same point from the projection (either from the textured light or from a real-world object), and then triangulating the correct location and depth information for that identified point through a texture extraction module 11508. this may be advantageous over the structured light and patterned light approaches because the texture pattern does not have to be known. rather, the texture pattern is simply triangulated from two or more cameras. this is more robust to ambient light conditions. further, two or more projectors do not interfere with each other because the texture is used directly for triangulation, and not identification. using the fiber-based projector for this functionality provides numerous advantages.
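the two-camera triangulation described above can be sketched with the classic midpoint method: given each camera's centre and the ray toward the same identified point, take the midpoint of the segment of closest approach between the two rays. this is a generic illustration under simplified assumptions, not the texture extraction module 11508 itself.

```python
def triangulate_midpoint(c1, d1, c2, d2):
    """midpoint of closest approach between two viewing rays.

    c1, c2: camera centres; d1, d2: ray directions toward the same
    identified point (need not be unit length)."""
    def dot(a, b): return sum(x * y for x, y in zip(a, b))
    def sub(a, b): return tuple(x - y for x, y in zip(a, b))
    def add(a, b): return tuple(x + y for x, y in zip(a, b))
    def scale(a, s): return tuple(x * s for x in a)

    w = sub(c1, c2)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w), dot(d2, w)
    denom = a * c - b * b          # zero only for parallel rays
    s = (b * e - c * d) / denom    # parameter along ray 1
    t = (a * e - b * d) / denom    # parameter along ray 2
    p1 = add(c1, scale(d1, s))     # closest point on ray 1
    p2 = add(c2, scale(d2, t))     # closest point on ray 2
    return scale(add(p1, p2), 0.5)

# cameras 1 m apart, both seeing the same point at (0.5, 0, 1)
p = triangulate_midpoint((0.0, 0.0, 0.0), (0.5, 0.0, 1.0),
                         (1.0, 0.0, 0.0), (-0.5, 0.0, 1.0))
# → approximately (0.5, 0.0, 1.0)
```

because the recovered location depends only on the two rays, this works whether the observed point comes from a known structured pattern or from unrecognized textured light, which is what makes the textured light approach robust.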
one advantage is that the fiber-based approach can be used to draw light data exactly where it is desired for texturing purposes. this allows the system to place a visible point exactly where it needs to be projected and/or seen by the camera(s). in effect, this permits a perfectly controllable trigger for a trigger-able texture source for generating the texture data. this allows the system to very quickly and easily project light and then find the desired point to be textured, and to then triangulate its position and depth. another advantage provided by this approach is that some fiber-based projectors are also capable of capturing images. therefore, in this approach, the cameras can be integrated into the projector apparatus, providing savings in terms of cost, device real estate, and power utilization. for example, when two fiber projectors/cameras are used, this allows a first projector/camera to precisely project light data which is captured by the second projector/camera. next, the reverse occurs, where the second projector/camera precisely projects the light data to be captured by the first projector/camera. triangulation can then be performed for the captured data to generate texture information for the point. as previously discussed, an ar system user may use a wearable structure having a display system positioned in front of the eyes of the user. the display is operatively coupled, such as by a wired lead or wireless connectivity, to a local processing and data module which may be mounted in a variety of configurations. 
the local processing and data module may comprise a power-efficient processor or controller, as well as digital memory, such as flash memory, both of which may be utilized to assist in the processing, caching, and storage of data a) captured from sensors which may be operatively coupled to the frame, such as image capture devices (such as cameras), microphones, inertial measurement units, accelerometers, compasses, gps units, radio devices, and/or gyros; and/or b) acquired and/or processed using a remote processing module and/or remote data repository, possibly for passage to the display after such processing or retrieval. the local processing and data module may be operatively coupled, such as via wired or wireless communication links, to the remote processing module and remote data repository such that these remote modules are operatively coupled to each other and available as resources to the local processing and data module. in some cloud-based embodiments, the remote processing module may comprise one or more relatively powerful processors or controllers for analyzing and/or processing data and/or image information. fig. 116 depicts an example architecture that can be used in certain cloud-based computing embodiments. the cloud-based server(s) 11612 can be implemented as one or more remote data repositories embodied as a relatively large-scale digital data storage facility, which may be available through the internet or other networking configuration in a cloud resource configuration. various types of content may be stored in the cloud-based repository. for example, data collected on the fly as the user is experiencing the system and operating in the world may be stored in the cloud-based repository. the data may comprise images, data from sensors (such as inertial measurement, or imu, devices, which generally comprise accelerometer and gyro components), and surface information pertinent to objects in the real or virtual environment.
the system may generate various types of data and metadata from the collected sensor data. for example, geometry mapping data 11606 and semantic mapping data 11608 can be generated and stored within the cloud-based repository. map data may be cloud-based, which may be a database containing map data for the world. in one embodiment, this data is entirely stored in the cloud. in another embodiment, this map data partly resides on user-wearable components, and may partly reside at cloud storage locations accessible by wired or wireless network. the cloud server(s) 11612 may further store personal information of users and/or policies of the enterprise in another database 11610. cloud-based processing may be performed to process and/or analyze the data. for example, the semantic map 11608 comprises information that provides semantic content usable by the system, e.g., for objects and locations in the world being tracked by the map. one or more remote servers can be used to perform the processing 11602 (e.g., machine learning processing) to analyze sensor data and to identify/generate the relevant semantic map data. as another example, a pose process may be run to determine position and orientation of the wearable computing hardware or user. this pose processing can also be performed on a remote server. in one embodiment, the system processing is partially performed on cloud-based servers and partially performed on processors in the wearable computing architecture. in an alternate embodiment, the entirety of the processing is performed on the remote servers. any suitable partitioning of the workload between the wearable device and the remote server (e.g., cloud-based server) may be implemented, with consideration of the specific work that is required, the relative available resources between the wearable and the server, and the network bandwidth availability/requirements.
cloud-based facilities may also be used to perform quality assurance processing and error corrections 11604 for the stored data. such tasks may include, for example, error correction, labelling tasks, clean-up activities, and generation of training data. automation can be used at the remote server to perform these activities. alternatively, remote "people resources" can also be employed, similar to the mechanical turk program provided by certain computing providers.

personal data

personal data can also be configurably stored at various locations within the overall architecture. in some embodiments, as the user utilizes the wearable device, historical data about the user is being acquired and maintained, e.g., to reflect location, activity, and copies of sensor data for that user over a period of time. the personal data may be locally stored at the wearable device itself, but given the large volume of data likely to be generated during normal usage, a cloud-based repository may be the best location to store that historical data. one or more privacy policies may control access to that data, especially in a cloud-based setting for storage of the personal data. the privacy policies are configurable by the user to set the conditions under which the user's personal data can be accessed by third parties. the user may permit access under specific circumstances, e.g., for users that seek to allow a third party to provide services to the user based on the personal data. for example, a marketer may seek to determine the location of that user in order to provide coupons for business in the general vicinity of that user. the user may use a privacy policy to allow his location data to be shared with third parties, because the user feels it is of benefit to receive the marketing information/coupon from the third party marketer.
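a configurable privacy policy of the kind described could be sketched as a per-category rule table plus an access check; the schema and field names below are purely hypothetical, chosen only to illustrate the marketer/coupon scenario from the text.

```python
# hypothetical policy schema: per-category rules the user configures
privacy_policy = {
    "location":       {"allow_third_party": True,  "purposes": {"marketing"}},
    "sensor_history": {"allow_third_party": False, "purposes": set()},
}

def may_access(policy, category, requester_is_third_party, purpose):
    """gate a data request against the user's configured policy."""
    rule = policy.get(category)
    if rule is None:
        return False                      # default deny for unknown data
    if requester_is_third_party and not rule["allow_third_party"]:
        return False
    return purpose in rule["purposes"]

# the marketer's coupon scenario: location sharing allowed for marketing,
# but raw sensor history stays private
ok = may_access(privacy_policy, "location", True, "marketing")            # True
denied = may_access(privacy_policy, "sensor_history", True, "marketing")  # False
```

a cloud-side enforcement point would evaluate a check like this before releasing any personal data from the repository to a third party.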
on the other hand, the user may seek the highest level of privacy, corresponding to configurations that do not allow any access by third parties to any of the personal data. any suitable privacy policy configuration may be used.

interacting with the ar system

the following embodiments illustrate various approaches in which one or more ar systems interact with the real environment and/or with other ar users. in one example embodiment, the ar system may include an "augmented" mode, in which an interface of the ar device may be substantially transparent, thereby allowing the user to view the local, physical environment. fig. 52 illustrates an example embodiment of objects viewed by a user when the ar system is operating in an augmented mode. as shown in fig. 52 , the ar system presents a physical object 5202 and a virtual object 5204. in the embodiment illustrated in fig. 52 , the physical object 5202 is a real, physical object existing in the local environment of the user, whereas the virtual object 5204 is a virtual object created by the ar system. in some embodiments, the virtual object 5204 may be displayed at a fixed position or location within the physical environment (e.g., a virtual monkey standing next to a particular street sign located in the physical environment), or may be displayed to the user as an object located at a position relative to the user (e.g., a virtual clock or thermometer visible in the upper, left corner of the display). in some embodiments, virtual objects may be made to be cued off of, or triggered by, an object physically present within or outside a user's field of view. virtual object 5204 is cued off of, or triggered by, the physical object 5202. for example, the physical object 5202 may actually be a stool, and the virtual object 5204 may be displayed to the user (and, in some embodiments, to other users interfacing with the ar system) as a virtual animal standing on the stool.
in such an embodiment, the ar system (e.g., using software and/or firmware stored, for example, in the processor to recognize various features and/or shape patterns) may identify the physical object 5202 as a stool. these recognized shape patterns such as, for example, the stool top, may be used to trigger the placement of the virtual object 5204. other examples include walls, tables, furniture, cars, buildings, people, floors, plants, animals, or any object which can be seen or can be used to trigger an augmented reality experience in some relationship to the object or objects. in some embodiments, the particular virtual object 5204 that is triggered may be selected by the user or automatically selected by other components of the head-mounted ar system. additionally, in embodiments in which the virtual object 5204 is automatically triggered, the particular virtual object 5204 may be selected based upon the particular physical object 5202 (or feature thereof) off which the virtual object 5204 is cued or triggered. for example, if the physical object is identified as a diving board extending over a pool, the triggered virtual object may be a creature wearing a snorkel, bathing suit, floatation device, or other related items. in another example embodiment, the ar system may include a "virtual" mode, in which the ar system provides a virtual reality interface. in the virtual mode, the physical environment is omitted from the display, and virtual object data is presented on the display 303. the omission of the physical environment may be accomplished by physically blocking the visual display (e.g., via a cover) or through a feature of the ar system in which the display transitions to an opaque setting. in the virtual mode, live and/or stored visual and audio sensory data may be presented to the user through the interface of the ar system, and the user experiences and interacts with a digital world (digital objects, other users, etc.)
through the virtual mode of the interface. thus, the interface provided to the user in the virtual mode is comprised of virtual object data comprising a virtual, digital world. fig. 53 illustrates an example embodiment of a user interface when operating in a virtual mode. as shown in fig. 53 , the user interface presents a virtual world 5300 comprised of digital objects 5310, wherein the digital objects 5310 may include atmosphere, weather, terrain, buildings, and people. although it is not illustrated in fig. 53 , digital objects may also include, for example, plants, vehicles, animals, creatures, machines, artificial intelligence, location information, and any other object or information defining the virtual world 5300. in another example embodiment, the ar system may include a "blended" mode, wherein various features of the ar system (as well as features of the virtual and augmented modes) may be combined to create one or more custom interface modes. in one example custom interface mode, the physical environment is omitted, and virtual object data is presented in a manner similar to the virtual mode. however, in this example custom interface mode, virtual objects may be fully virtual (e.g., they do not exist in the local, physical environment) or the objects may be real, local, physical objects rendered as virtual objects in the interface in place of the physical objects. thus, in this particular custom mode (referred to herein as a blended virtual interface mode), live and/or stored visual and audio sensory data may be presented to the user through the interface of the ar system, and the user experiences and interacts with a digital world comprising fully virtual objects and rendered physical objects. fig. 54 illustrates an example embodiment of a user interface operating in accordance with the blended virtual interface mode. as shown in fig.
54 , the user interface presents a virtual world 5400 comprised of fully virtual objects 5410, and rendered physical objects 5420 (renderings of objects otherwise physically present in the scene). in accordance with the example illustrated in fig. 54 , the rendered physical objects 5420 include a building 5420a, the ground 5420b, and a platform 5420c. these physical objects are shown with a bolded outline 5430 to indicate to the user that the objects are rendered. additionally, the fully virtual objects 5410 include an additional user 5410a, clouds 5410b, the sun 5410c, and flames 5410d on top of the platform 5420c. it should be appreciated that fully virtual objects 5410 may include, for example, atmosphere, weather, terrain, buildings, people, plants, vehicles, animals, creatures, machines, artificial intelligence, location information, and any other object or information defining the virtual world 5400, and not rendered from objects existing in the local, physical environment. conversely, the rendered physical objects 5420 are real, local, physical objects rendered as virtual objects. the bolded outline 5430 represents one example for indicating rendered physical objects to a user. as such, the rendered physical objects may be indicated as such using methods other than those disclosed herein. thus, as the user interfaces with the ar system in the blended virtual interface mode, various physical objects may be displayed to the user as rendered physical objects. this may be especially useful for allowing the user to interface with the ar system, while still being able to safely navigate the local, physical environment. in some embodiments, the user may be able to selectively remove or add the rendered physical objects.
in another example custom interface mode, the interface may be substantially transparent, thereby allowing the user to view the local, physical environment, while various local, physical objects are displayed to the user as rendered physical objects. this example custom interface mode is similar to the augmented mode, except that one or more of the virtual objects may be rendered physical objects as discussed above with respect to the previous example. the foregoing example custom interface modes represent a few example embodiments of various custom interface modes capable of being provided by the blended mode of the ar system. accordingly, various other custom interface modes may be created from the various combination of features and functionality provided by the components of the ar system and the various modes discussed above without departing from the scope of the present disclosure. the embodiments discussed herein merely describe a few examples for providing an interface operating in an off, augmented, virtual, or blended mode, and are not intended to limit the scope or content of the respective interface modes or the functionality of the components of the ar system. for example, in some embodiments, the virtual objects may include data displayed to the user (time, temperature, elevation, etc.), objects created and/or selected by the system, objects created and/or selected by a user, or even objects representing other users interfacing the system. additionally, the virtual objects may include an extension of physical objects (e.g., a virtual sculpture growing from a physical platform) and may be visually connected to, or disconnected from, a physical object. the virtual objects may also be dynamic and change with time, change in accordance with various relationships (e.g., location, distance, etc.) 
between the user or other users, physical objects, and other virtual objects, and/or change in accordance with other variables specified in the software and/or firmware of the ar system, gateway component, or servers. for example, in certain embodiments, a virtual object may respond to a user device or component thereof (e.g., a virtual ball moves when a haptic device is placed next to it), physical or verbal user interaction (e.g., a virtual creature runs away when the user approaches it, speaks when the user speaks to it, or dodges a chair thrown at it), other virtual objects (e.g., a first virtual creature reacts when it sees a second virtual creature), physical variables such as location, distance, temperature, time, etc., or other physical objects in the user's environment (e.g., a virtual creature shown standing in a physical street becomes flattened when a physical car passes). the various modes discussed herein may be applied to user devices other than the ar system. for example, an augmented reality interface may be provided via a mobile phone or tablet device. in such an embodiment, the phone or tablet may use a camera to capture the physical environment around the user, and virtual objects may be overlaid on the phone/tablet display screen. additionally, the virtual mode may be provided by displaying the digital world on the display screen of the phone/tablet. accordingly, these modes may be blended to create various custom interface modes as described above using the components of the phone/tablet discussed herein, as well as other components connected to, or used in combination with, the user device. for example, the blended virtual interface mode may be provided by a computer monitor, television screen, or other device lacking a camera operating in combination with a motion or image capture system.
in this example embodiment, the virtual world may be viewed from the monitor/screen and the object detection and rendering may be performed by the motion or image capture system. fig. 55 illustrates an example embodiment of the present disclosure, wherein two users located in different geographical locations each interact with the other user and a common virtual world through their respective user devices. in this embodiment, the two users 5501 and 5502 are throwing a virtual ball 5503 (a type of virtual object) back and forth, wherein each user is capable of observing the impact of the other user on the virtual world (e.g., each user observes the virtual ball changing directions, being caught by the other user, etc.). since the movement and location of the virtual objects (e.g., the virtual ball 5503) are tracked by the servers in the computing network associated with the ar system, the system may, in some embodiments, communicate the exact location and timing of the arrival of the ball 5503 with respect to each user to each of the users 5501 and 5502. for example, if the first user 5501 is located in london, the user 5501 may throw the ball 5503 to the second user 5502 located in los angeles at a velocity calculated by the ar system. accordingly, the ar system may communicate to the second user 5502 (e.g., via email, text message, instant message, etc.) the exact time and location of the ball's arrival. as such, the second user 5502 may use the ar device to see the ball 5503 arrive at the specified time and location. one or more users may also use geo-location mapping software (or similar) to track one or more virtual objects as they travel virtually across the globe. an example of this may be a user wearing a 3d head-mounted display looking up in the sky and seeing a virtual plane flying overhead, superimposed on the real world. 
the virtual plane may be flown by the user, by intelligent software agents (software running on the user device or gateway), other users who may be local and/or remote, and/or any of these combinations. as previously discussed, the user device may include a haptic interface device, wherein the haptic interface device provides a feedback (e.g., resistance, vibration, lights, sound, etc.) to the user when the haptic device is determined by the ar system to be located at a physical, spatial location relative to a virtual object. for example, the embodiment described above with respect to fig. 55 may be expanded to include the use of a haptic device 5602, as shown in fig. 56 . in this example embodiment, the haptic device 5602 may be displayed in the virtual world as a baseball bat. when the ball 5503 arrives, the user 5502 may swing the haptic device 5602 at the virtual ball 5503. if the ar system determines that the virtual bat provided by the haptic device 5602 made "contact" with the ball 5503, then the haptic device 5602 may vibrate or provide other feedback to the user 5502, and the virtual ball 5503 may ricochet off the virtual bat in a direction calculated by the ar system in accordance with the detected speed, direction, and timing of the ball-to-bat contact. the disclosed ar system may, in some embodiments, facilitate mixed mode interfacing, wherein multiple users may interface a common virtual world (and virtual objects contained therein) using different interface modes (e.g., augmented, virtual, blended, etc.). for example, a first user interfacing a particular virtual world in a virtual interface mode may interact with a second user interfacing the same virtual world in an augmented reality mode. fig. 
57a illustrates an example wherein a first user 5701 (interfacing a digital world of the ar system in a blended virtual interface mode) and first object 5702 appear as virtual objects to a second user 5722 interfacing the same digital world of the ar system in a full virtual reality mode. as described above, when interfacing the digital world via the blended virtual interface mode, local, physical objects (e.g., first user 5701 and first object 5702) may be scanned and rendered as virtual objects in the virtual world. the first user 5701 may be scanned, for example, by a motion capture system or similar device, and be rendered in the virtual world as a first rendered physical object 5731. similarly, the first object 5702 may be scanned, for example, by the environment-sensing system 5706 of the ar system, and rendered in the virtual world as a second rendered physical object 5732. the first user 5701 and first object 5702 are shown in a first portion 5710 of fig. 57a as physical objects in the physical world. in a second portion 5720 of fig. 57a , the first user 5701 and first object 5702 are shown as they appear to the second user 5722 interfacing the same virtual world of the ar system in a full virtual reality mode: as the first rendered physical object 5731 and second rendered physical object 5732. fig. 57b illustrates another example embodiment of mixed mode interfacing, in which the first user 5701 is interfacing the digital world in a blended virtual interface mode, as discussed above, and the second user 5722 is interfacing the same digital world (and the second user's physical, local environment 5725) in an augmented reality mode. in the embodiment in fig. 57b , the first user 5701 and first object 5702 are located at a first physical location 5715, and the second user 5722 is located at a different, second physical location 5725 separated by some distance from the first location 5715. 
in this embodiment, the virtual objects 5731 and 5732 may be transposed in real-time (or near real-time) to a location within the virtual world corresponding to the second location 5725. thus, the second user 5722 may observe and interact, in the second user's physical, local environment 5725, with the rendered physical objects 5731 and 5732 representing the first user 5701 and first object 5702, respectively. fig. 58 illustrates an example illustration of a user's view when interfacing the ar system in an augmented reality mode. as shown in fig. 58 , the user sees the local, physical environment (e.g., a city having multiple buildings) as well as a virtual character 5810 (e.g., virtual object). the position of the virtual character 5810 may be triggered by a 2d visual target (for example, a billboard, postcard or magazine) and/or one or more 3d reference frames such as buildings, cars, people, animals, airplanes, portions of a building, and/or any 3d physical object, virtual object, and/or combinations thereof. in the example illustrated in fig. 58 , the known position of the buildings in the city may provide the registration fiducials and/or information and key features for rendering the virtual character 5810. additionally, the user's geospatial location (e.g., provided by gps, attitude/position sensors, etc.) or mobile location relative to the buildings, may comprise data used by the computing network of the ar system to trigger the transmission of data used to display the virtual character(s) 5810. in some embodiments, the data used to display the virtual character 5810 may comprise the rendered character 5810 and/or instructions for rendering the virtual character 5810 or portions thereof. 
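as a hedged sketch of how a user's geospatial location might gate the transmission of data used to display virtual characters such as 5810, the following compares the user's gps fix against a registry of trigger locations; the function names, the trigger radius, and the registry layout are illustrative assumptions, not the system's actual interface.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two GPS fixes."""
    r = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def characters_to_display(user_fix, registered_triggers, radius_m=100.0):
    """Return ids of virtual characters whose registered trigger location
    lies within radius_m of the user's geospatial fix (a hypothetical
    stand-in for the trigger logic described in the text)."""
    lat, lon = user_fix
    return [cid for cid, (tlat, tlon) in registered_triggers.items()
            if haversine_m(lat, lon, tlat, tlon) <= radius_m]
```

a nearby registered character would then be selected for display while distant ones are not, e.g. `characters_to_display((40.0001, -74.0), {"character_5810": (40.0, -74.0)})` includes `"character_5810"`.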
in some embodiments, if the geospatial location of the user is unavailable or unknown, the ar system may still display the virtual object 5810 using an estimation algorithm that estimates where particular virtual objects and/or physical objects may be located, using the user's last known position as a function of time and/or other parameters. this may also be used to determine the position of any virtual objects in case the ar system's sensors become occluded and/or experience other malfunctions. in some embodiments, virtual characters or virtual objects may comprise a virtual statue, wherein the rendering of the virtual statue is triggered by a physical object. for example, referring now to fig. 59 , a virtual statue 5910 may be triggered by a real, physical platform 5920. the triggering of the statue 5910 may be in response to a visual object or feature (e.g., fiducials, design features, geometry, patterns, physical location, altitude, etc.) detected by the user device or other components of the ar system. when the user views the platform 5920 without the user device, the user sees the platform 5920 with no statue 5910. however, when the user views the platform 5920 through the wearable ar device, the user sees the statue 5910 on the platform 5920 as shown in fig. 59 . the statue 5910 is a virtual object and, therefore, may be stationary, animated, change over time or with respect to the user's viewing position, or even change depending upon which particular user is viewing the statue 5910. for example, if the user is a small child, the statue may be a dog. if the viewer is an adult male, the statue may be a large robot as shown in fig. 59 . these are examples of user dependent and/or state dependent experiences. this will help one or more users to perceive one or more virtual objects alone and/or in combination with physical objects and experience customized and personalized versions of the virtual objects. 
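the fallback estimation described above, which uses the last known position as a function of time, could be as simple as constant-velocity dead reckoning; this sketch (with an assumed walking-speed clamp so a stale fix does not run away, and all names hypothetical) illustrates the idea rather than the system's actual estimator.

```python
import numpy as np

def estimate_position(last_pos, last_vel, t_since_fix, max_speed=2.0):
    """Estimate a user's (or object's) current position from the last known
    fix by constant-velocity dead reckoning. Speed is clamped to max_speed
    (m/s, an assumed plausible walking pace) to bound the extrapolation."""
    v = np.asarray(last_vel, dtype=float)
    speed = np.linalg.norm(v)
    if speed > max_speed:
        v = v * (max_speed / speed)
    return np.asarray(last_pos, dtype=float) + v * t_since_fix
```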
the statue 5910 (or portions thereof) may be rendered by various components of the system including, for example, software/firmware installed on the user device. using data that indicates the location and attitude of the user device, in combination with the registration features of the virtual object (e.g., statue 5910), the virtual object (e.g., statue 5910) is able to form a relationship with the physical object (e.g., platform 5920). for example, the relationship between one or more virtual objects and one or more physical objects may be a function of distance, positioning, time, geo-location, proximity to one or more other virtual objects, and/or any other functional relationship that includes virtual and/or physical data of any kind. in some embodiments, image recognition software in the user device may further enhance the virtual object-to-physical object relationship. the interactive interface provided by the disclosed system and method may be implemented to facilitate various activities such as, for example, interacting with one or more virtual environments and objects, interacting with other users, as well as experiencing various forms of media content, including advertisements, music concerts, and movies. accordingly, the disclosed system facilitates user interaction such that the user not only views or listens to the media content, but rather, actively participates in and experiences the media content. in some embodiments, the user participation may include altering existing content or creating new content to be rendered in one or more virtual worlds. in some embodiments, the media content, and/or users creating the content, may be themed around a mythopoeia of one or more virtual worlds. in one example, musicians (or other users) may create musical content to be rendered to users interacting with a particular virtual world. the musical content may include, for example, various singles, eps, albums, videos, short films, and concert performances. 
in one example, a large number of users may interface the ar system to simultaneously experience a virtual concert performed by the musicians. in some embodiments, the media produced may contain a unique identifier code associated with a particular entity (e.g., a band, artist, user, etc.). the code may be in the form of a set of alphanumeric characters, upc codes, qr codes, 2d image triggers, 3d physical object feature triggers, or other digital mark, as well as a sound, image, and/or both. in some embodiments, the code may also be embedded with digital media which may be interfaced using the ar system. a user may obtain the code (e.g., via payment of a fee) and redeem the code to access the media content produced by the entity associated with the identifier code. the media content may be added or removed from the user's interface. in one embodiment, to avoid the computation and bandwidth limitations of passing real-time or near real-time video data from one computing system to another with low latency, such as from a cloud computing system to a local processor coupled to a user, parametric information regarding various shapes and geometries may be transferred and utilized to define surfaces, while textures may be transferred and added to these surfaces to bring about static or dynamic detail, such as bitmap-based video detail of a person's face mapped upon a parametrically reproduced face geometry. as another example, if a system recognizes a person's face, and recognizes that the person's avatar is located in an augmented world, the system may pass the pertinent world information and the person's avatar information in one relatively large setup transfer, after which remaining transfers to a local computing system for local rendering may be limited to parameter and texture updates. this may include motion parameters of the person's skeletal structure and moving bitmaps of the person's face. 
these may require less bandwidth relative to the initial setup transfer or passing of real-time video. cloud-based and local computing assets thus may be used in an integrated fashion, with the cloud handling computation that does not require relatively low latency, and the local processing assets handling tasks wherein low latency is at a premium. in such a case, the form of data transferred to the local systems preferably is passed at relatively low bandwidth due to the form or amount of such data (e.g., parametric info, textures, etc. rather than real-time video of surroundings). referring ahead to fig. 63 , a schematic illustrates coordination between cloud computing assets 6346 and local processing assets (6308, 6320). in one embodiment, the cloud 6346 assets are operatively coupled, such as via wired or wireless networking (wireless being preferred for mobility, wired being preferred for certain high-bandwidth or high-data-volume transfers that may be desired), directly to (6340, 6342) one or both of the local computing assets (6320, 6308), such as processor and memory configurations which may be housed in a structure to be coupled to a user's head or belt 6308. these computing assets local to the user may be operatively coupled to each other as well, via wired and/or wireless connectivity configurations 6344. in one embodiment, to maintain a low-inertia and small-size head mounted subsystem 6320, primary transfer between the user and the cloud 6346 may be via the link between the belt-based subsystem 6308 and the cloud, with the head mounted subsystem 6320 primarily data-tethered to the belt-based subsystem 6308 using wireless connectivity, such as ultra-wideband ("uwb") connectivity, as is currently employed, for example, in personal computing peripheral connectivity applications. 
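to make the bandwidth argument concrete, this back-of-the-envelope sketch compares raw video against a parametric stream (skeletal motion parameters plus a periodically refreshed face texture patch); every number here, such as the joint count, bytes per parameter, patch size, and rates, is an illustrative assumption rather than a figure from the disclosure.

```python
def video_bandwidth_bps(width, height, fps, bytes_per_pixel=3):
    """Raw (uncompressed) video rate in bytes per second."""
    return width * height * bytes_per_pixel * fps

def parametric_bandwidth_bps(n_joints, fps, face_patch=(128, 128), patch_fps=15,
                             bytes_per_param=4, params_per_joint=7):
    """Rough per-second cost of streaming skeletal motion parameters
    (assumed here as position + quaternion per joint) plus a periodically
    refreshed face texture patch, instead of full video frames."""
    skeleton = n_joints * params_per_joint * bytes_per_param * fps
    face = face_patch[0] * face_patch[1] * 3 * patch_fps
    return skeleton + face
```

with these assumed numbers, 640x480 video at 30 fps costs roughly 27.6 mb/s while the parametric stream is well under 1 mb/s, illustrating why the setup transfer plus parameter/texture updates is the preferred form.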
as discussed at some length above, with efficient local and remote processing coordination, and an appropriate display device for a user, aspects of one world pertinent to a user's current actual or virtual location may be transferred or "passed" to the user and updated in an efficient fashion. indeed, in one embodiment, with one person utilizing a virtual reality system ("vrs") in an augmented reality mode and another person utilizing a vrs in a completely virtual mode to explore the same world local to the first person, the two users may experience one another in that world in various fashions. for example, referring to fig. 60 , a scenario similar to that described in reference to fig. 59 is depicted, with the addition of a visualization of an avatar 6002 of a second user who is flying through the depicted augmented reality world from a completely virtual reality scenario. in other words, the scene depicted in fig. 60 may be experienced and displayed in augmented reality for the first person - with two augmented reality elements (the statue 6010 and the flying bumble bee avatar 6002 of the second person) displayed in addition to actual physical elements around the local world in the scene, such as the ground, the buildings in the background, and the statue platform 6020. dynamic updating may be utilized to allow the first person to visualize progress of the second person's avatar 6002 as the avatar 6002 flies through the world local to the first person. again, with a configuration as described above, in which there is one world model that can reside on cloud computing resources and be distributed from there, such world can be "passable" to one or more users in a relatively low bandwidth form. this may be preferable to passing real-time video data. the augmented experience of the person standing near the statue (e.g., as shown in fig. 
60 ) may be informed by the cloud-based world model, a subset of which may be passed down to them and their local display device to complete the view. a person sitting at a remote ar device, which may be as simple as a personal computer sitting on a desk, can efficiently download that same section of information from the cloud and have it rendered on their display. indeed, one person actually present in the park near the statue may take a remotely-located friend for a walk in that park, with the friend joining through virtual and augmented reality. the system will need to know where the street is, where the trees are, where the statue is, etc. using this information and data from the cloud, the joining friend can download aspects of the scenario from the cloud, and then start walking along as an augmented reality local relative to the person who is actually in the park. referring to fig. 61 , a time and/or other contingency parameter based embodiment is depicted, wherein a person engaged with a virtual and/or augmented reality interface is utilizing the ar system (6104) and enters a coffee establishment to order a cup of coffee (6106). the vrs may utilize sensing and data gathering capabilities, locally and/or remotely, to provide display enhancements in augmented and/or virtual reality for the person, such as highlighted locations of doors in the coffee establishment or bubble windows of the pertinent coffee menu (6108). when the user receives the cup of coffee that he has ordered, or upon detection by the system of some other pertinent parameter, the system may display (6110) one or more time-based augmented or virtual reality images, video, and/or sound in the local environment with the display device, such as a madagascar jungle scene from the walls and ceilings, with or without jungle sounds and other effects, either static or dynamic. 
such presentation to the user may be discontinued based upon a timing parameter (e.g., 5 minutes after the full coffee cup has been recognized and handed to the user; 10 minutes after the system has recognized the user walking through the front door of the establishment, etc.) or other parameter, such as a recognition by the system that the user has finished the coffee by noting the upside down orientation of the coffee cup as the user ingests the last sip of coffee from the cup - or recognition by the system that the user has left the front door of the establishment (6112). referring to fig. 62 , one embodiment of a suitable user display device 6214 is shown, comprising a display lens 6282 which may be mounted to a user's head or eyes by a housing or frame 6284. the display lens 6282 may comprise one or more transparent mirrors positioned by the housing 6284 in front of the user's eyes 6220 to deliver projected light 6238 into the eyes 6220 and facilitate beam shaping, while also allowing for transmission of at least some light from the local environment in an augmented reality configuration. in a virtual reality configuration, it may be desirable for the display system 6214 to be capable of blocking substantially all light from the local environment, such as by a darkened visor, blocking curtain, all black lcd panel mode or the like. in the depicted embodiment, two wide-field-of-view machine vision cameras 6216 are coupled to the housing 6284 to image the environment around the user. in one embodiment these cameras 6216 are dual-capture visible light / infrared light cameras. the depicted embodiment also comprises a pair of scanned-laser shaped-wavefront (e.g., for depth) light projector modules with display mirrors and optics to project light 6238 into the eyes 6220 as shown. 
the depicted embodiment also comprises two miniature infrared cameras 6224 paired with infrared light sources 6226 (e.g., light emitting diodes "led"s), which track the eyes 6220 of the user to support rendering and user input. the system 6214 further features a sensor assembly 6239, which may comprise x, y, and z axis accelerometer capability as well as a magnetic compass and x, y, and z axis gyro capability, preferably providing data at a relatively high frequency, such as 200 hz. the depicted system 6214 also comprises a head pose processor 6236 such as an asic (application specific integrated circuit), fpga (field programmable gate array), and/or arm processor (advanced reduced-instruction-set machine), which may calculate real or near-real time user head pose from wide field of view image information output from the capture devices 6216. also shown is another processor 6232 to execute digital and/or analog processing to derive pose from the gyro, compass, and/or accelerometer data from the sensor assembly 6239. the depicted embodiment also features a gps 6237 (e.g., global positioning satellite) subsystem to assist with pose and positioning. finally, the depicted embodiment comprises a rendering engine 6234 which may feature hardware running a software program to provide rendering information local to the user to facilitate operation of the scanners and imaging into the eyes of the user, for the user's view of the world. the rendering engine 6234 is operatively coupled (6281, 6270, 6276, 6278, 6280) (e.g., via wired or wireless connectivity) to the sensor pose processor 6232, the image pose processor 6236, the eye tracking cameras 6224, and the projecting subsystem 6218 such that light of rendered augmented and/or virtual reality objects is projected using a scanned laser arrangement 6218 in a manner similar to a retinal scanning display. other embodiments may utilize other optical arrangements similar to the various optical embodiments discussed above. 
the wavefront of the projected light beam 6238 may be bent or focused to coincide with a desired focal distance of the augmented and/or virtual reality object. the mini infrared cameras 6224 may be utilized to track the eyes to support rendering and user input (e.g., where the user is looking, depth of focus, etc.). as discussed below, eye vergence may be utilized to estimate depth of focus. the gps 6237, gyros, compass, and accelerometers 6239 may be utilized to provide coarse and/or fast pose estimates. the camera 6216 images and pose information, in conjunction with data from an associated cloud computing resource, may be utilized to map the local world and share user views with a virtual or augmented reality community. while much of the hardware in the display system 6214 featured in fig. 62 is depicted directly coupled to the housing 6284 which is adjacent the display 6282 and eyes 6220 of the user, the hardware components depicted may be mounted to or housed within other components, such as a belt-mounted component. in one embodiment, all of the components of the system 6214 featured in fig. 62 are directly coupled to the display housing 6284 except for the image pose processor 6236, sensor pose processor 6232, and rendering engine 6234. it should be appreciated that communication between the image pose processor 6236, sensor pose processor 6232 and the rendering engine 6234 may be through wireless communication, such as ultra wideband, or wired communication. the depicted housing 6284 is of a shape that naturally fits the user and is able to be head-mounted on the user's head. the housing 6284 may also feature speakers, such as those which may be inserted into the ears of a user and utilized to provide sound to the user which may be pertinent to an augmented or virtual reality experience such as the jungle sounds referred to in reference to fig. 61 , and microphones, which may be utilized to capture sounds local to the user. 
in one or more embodiments, the mini-cameras 6224 may be utilized to measure where the centers of a user's eyes 6220 are geometrically verged to, which, in general, coincides with a position of focus, or "depth of focus", of the eyes 6220. as discussed above, a 3-dimensional surface of all points that the eyes verge to is called the "horopter". the focal distance may take on a finite number of depths, or may be infinitely varying. light projected from the vergence distance appears to be focused to the subject eye 6220, while light in front of or behind the vergence distance is blurred. further, it has been discovered that spatially coherent light with a beam diameter of less than about 0.7 millimeters is correctly resolved by the human eye regardless of where the eye focuses. given this understanding, to create an illusion of proper focal depth, the eye vergence may be tracked with the mini cameras 6224, and the rendering engine 6234 and projection subsystem 6218 may be utilized to render all objects on or close to the horopter in focus, and all other objects at varying degrees of defocus (e.g., using intentionally-created blurring). preferably the system 6214 renders to the user at a frame rate of about 60 frames per second or greater. as described above, preferably the mini cameras 6224 may be utilized for eye tracking, and software may pick up not only vergence geometry but also focus location cues to serve as user inputs. preferably such a system has brightness and contrast suitable for day or night use. in one embodiment such a system preferably has latency of less than about 20 milliseconds for visual object alignment, less than about 0.1 degree of angular alignment, and about 1 arc minute of resolution, which is approximately the limit of the human eye. 
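the vergence-to-depth geometry described above can be sketched as follows: with the eyes separated by the interpupillary distance, the two gaze rays cross at a depth set by the vergence angle, and a simple dioptric measure can grade how much defocus blur to render for objects off the horopter. the function names and the blur model are illustrative assumptions, not the system's actual rendering pipeline.

```python
import math

def vergence_depth_m(ipd_m, vergence_angle_rad):
    """Estimate the depth of focus from the vergence angle between the two
    eyes' gaze rays, for a given interpupillary distance. For parallel gaze
    (angle ~ 0) the depth is effectively infinite."""
    if vergence_angle_rad <= 0:
        return float("inf")
    return (ipd_m / 2.0) / math.tan(vergence_angle_rad / 2.0)

def defocus_blur(object_depth_m, focus_depth_m, gain=1.0):
    """Simple dioptric blur measure (an assumed model): objects on the
    horopter (at the focus depth) render sharp; blur grows with the
    difference in diopters from the focus depth."""
    return gain * abs(1.0 / object_depth_m - 1.0 / focus_depth_m)
```

with a 64 mm interpupillary distance, eyes verged on a point one meter away subtend an angle of about 2*atan(0.032/1.0) radians, and the estimator recovers the 1 m focus depth; objects nearer or farther receive proportionally more rendered blur.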
the display system 6214 may be integrated with a localization system, which may involve the gps element, optical tracking, compass, accelerometer, and/or other data sources, to assist with position and pose determination. it should be appreciated that localization information may be utilized to facilitate accurate rendering in the user's view of the pertinent world (e.g., such information would help the glasses know where they are with respect to the real world). other suitable display devices may include but are not limited to desktop and mobile computers, smartphones, smartphones which may be enhanced additionally with software and hardware features to facilitate or simulate 3-d perspective viewing (for example, in one embodiment a frame may be removably coupled to a smartphone, the frame featuring a 200 hz gyro and accelerometer sensor subset, two small machine vision cameras with wide field of view lenses, and an arm processor to simulate some of the functionality of the configuration featured in fig. 
14 ), tablet computers, tablet computers which may be enhanced as described above for smartphones, tablet computers enhanced with additional processing and sensing hardware, head-mounted systems that use smartphones and/or tablets to display augmented and virtual viewpoints (visual accommodation via magnifying optics, mirrors, contact lenses, or light structuring elements), non-see-through displays of light emitting elements (lcds, oleds, vertical-cavity-surface-emitting lasers, steered laser beams, etc.), see-through displays that simultaneously allow humans to see the natural world and artificially generated images (for example, light-guide optical elements, transparent and polarized oleds shining into close-focus contact lenses, steered laser beams, etc.), contact lenses with light-emitting elements (they may be combined with specialized complementary eyeglasses components), implantable devices with light-emitting elements, and implantable devices that stimulate the optical receptors of the human brain. with a system such as that depicted in fig. 63 , 3-d points may be captured from the environment, and the pose (e.g., vector and/or origin position information relative to the world) of the cameras that capture those images or points may be determined, such that these points or images may be "tagged", or associated, with this pose information. then points captured by a second camera (e.g., another ar system) may be utilized to determine the pose of the second camera. in other words, one can orient and/or localize a second camera based upon comparisons with tagged images from a first camera. this knowledge may be utilized to extract textures, make maps, and create a virtual copy of the real world (because then there are two cameras around that are registered). 
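one classical way to localize a second camera against 3-d points that were tagged with pose by a first camera is the direct linear transform (dlt): given 2-d observations of known world points, a 3x4 projection matrix for the second camera can be recovered up to scale. the sketch below is the textbook technique, offered as an illustration of the registration idea rather than the system's actual method.

```python
import numpy as np

def estimate_projection_dlt(world_pts, image_pts):
    """Direct Linear Transform: recover a 3x4 camera projection matrix (up
    to scale) from >= 6 correspondences between pose-tagged 3-d world points
    and their 2-d observations in a second camera. Each correspondence
    contributes two linear constraints; the solution is the null vector of
    the stacked constraint matrix."""
    A = []
    for (X, Y, Z), (u, v) in zip(world_pts, image_pts):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=float))
    return vt[-1].reshape(3, 4)

def project(P, pt3):
    """Project a 3-d point with a 3x4 projection matrix."""
    x = P @ np.append(pt3, 1.0)
    return x[:2] / x[2]
```

in practice the tagged points would come from the first camera's map, and the second camera's observations of those same points would feed the estimator; the recovered matrix then registers the two cameras in one world frame.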
thus, at the base level, in one embodiment the ar system can capture both 3-d points and the 2-d images that produced the points, and these points and images may be sent out to a cloud storage and processing resource. they may also be cached locally with embedded pose information (e.g., cache the tagged images), such that the cloud may be able to access (e.g., in available cache) tagged 2-d images (e.g., tagged with a 3-d pose), along with 3-d points. if a user is observing something dynamic, the ar system of the user may also send additional information up to the cloud pertinent to the motion (for example, if looking at another person's face, the user can take a texture map of the face and push the texture map up at an optimized frequency even though the surrounding world is otherwise basically static). the cloud system may save some points as fiducials for pose only, to reduce overall pose tracking calculation. generally it may be desirable to use some outline features in order to track major items in a user's environment, such as walls, a table, etc., as the user moves around the room. the user may desire to "share" the world and have some other user walk into that room and also see those points. such useful and key points may be termed "fiducials" because they are fairly useful as anchoring points. they are related to features that may be recognized with machine vision, and that can be extracted from the world consistently and repeatedly on different pieces of user hardware. thus these fiducials preferably may be saved to the cloud for further use. in one embodiment it is preferable to have a relatively even distribution of fiducials throughout the pertinent world, because they are the kinds of items that cameras can easily use to recognize a location. 
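a relatively even spatial distribution of fiducials can be approximated with a simple grid filter: keep only the best-scoring candidate per spatial cell. this is a hypothetical sketch; the cell size, the scoring, and the names are assumptions, not details of the disclosed system.

```python
import numpy as np

def evenly_distributed_fiducials(candidate_pts, scores, cell_size=0.5):
    """Keep at most one candidate fiducial per spatial cell, choosing the
    highest-scoring candidate in each cell, so that the retained fiducials
    are spread relatively evenly through the mapped space."""
    best = {}
    for pt, score in zip(candidate_pts, scores):
        cell = tuple(np.floor(np.asarray(pt) / cell_size).astype(int))
        if cell not in best or score > best[cell][1]:
            best[cell] = (pt, score)
    return [pt for pt, _ in best.values()]
```

a dense cluster of candidates thus collapses to its single strongest point, while isolated candidates elsewhere in the world survive, which is the kind of even coverage the text says cameras can most easily use to recognize a location.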
in one embodiment, the pertinent cloud computing configuration may periodically groom the database of 3-d points and any associated metadata to use the best data from various users for both fiducial refinement and world creation. in other words, the system may get the best dataset by using inputs from various users looking and functioning within the pertinent world. in one embodiment the database is intrinsically fractal - as users move closer to objects, the cloud passes higher resolution information to such users. as a user maps an object more closely, that data is sent to the cloud, and the cloud can add new 3-d points and image-based texture maps to the database if the new points are better than the previously stored points. it should be appreciated that this process may run for multiple users simultaneously. as described above, an ar or vr experience may rely, in large part, on recognizing certain types of objects. for example, it may be important to understand that a particular object has a given depth in order to recognize and understand such object. as described at some length above, recognizer software objects ("recognizers") may be deployed on cloud or local resources to specifically assist with recognition of various objects on either or both platforms as a user is navigating data in a world. for example, if a system has data for a world model comprising 3-d point clouds and pose-tagged images, and there is a desk with a bunch of points on it as well as an image of the desk, the geometry of the desk may be taught to the system in order for the system to recognize it. in other words, some 3-d points in space and an image showing most of the desk may not be enough to instantly recognize that a desk is being observed. to assist with this identification, a specific object recognizer may be created that runs on the raw 3-d point cloud, segments out a set of points, and, for example, extracts the plane of the top surface of the desk. 
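the desk-top extraction just described is commonly done with a ransac-style plane fit over the raw point cloud; the sketch below shows that generic technique (the parameter values and names are assumptions), not the recognizer's actual implementation.

```python
import numpy as np

def ransac_plane(points, n_iters=200, inlier_tol=0.01, rng=None):
    """Segment the dominant plane (e.g., a desk's top surface) from a raw
    3-d point cloud with RANSAC: repeatedly fit a plane to 3 random points
    and keep the plane explaining the most inliers. Returns the unit
    normal n, offset d (with n . p = d on the plane), and inlier indices."""
    rng = np.random.default_rng(rng)
    pts = np.asarray(points, dtype=float)
    best_inliers = np.array([], dtype=int)
    best_plane = None
    for _ in range(n_iters):
        sample = pts[rng.choice(len(pts), 3, replace=False)]
        n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(n)
        if norm < 1e-12:          # degenerate (collinear) sample
            continue
        n = n / norm
        d = n @ sample[0]
        inliers = np.flatnonzero(np.abs(pts @ n - d) < inlier_tol)
        if len(inliers) > len(best_inliers):
            best_inliers, best_plane = inliers, (n, d)
    return best_plane[0], best_plane[1], best_inliers
```

the segmented inlier set is then the candidate "desk top", and the residual points above it are the candidate objects on the desk.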
similarly, a recognizer may be created to segment out a wall from 3-d points, such that a user may simply change a "virtual" wallpaper or remove a part of the wall in virtual or augmented reality and/or have a portal to another virtual room that is not part of the real world. such recognizers operate within the data of a world model and may be thought of as software "robots" that crawl a world model and imbue that world model with semantic information, or an ontology about what is believed to exist amongst the points in space. such recognizers or software robots may be configured such that their entire existence is about going around the pertinent world of data and finding things that they believe are walls, or chairs, or other items. they may be configured to tag a set of points with the functional equivalent of, "this set of points belongs to a wall", and may comprise a combination of point-based algorithms and pose-tagged image analysis for mutually informing the system regarding what is in the points. object recognizers may be created for many purposes of varied utility, depending upon the perspective. for example, in one embodiment, a purveyor of coffee such as starbucks ® may invest in creating an accurate recognizer of starbucks coffee cups within pertinent worlds of data. such a recognizer may crawl worlds of data large and small searching for starbucks coffee cups, so they may be segmented out and identified to a user when operating in the pertinent nearby space (e.g., perhaps to offer the user a coffee in the starbucks outlet right around the corner when the user looks at his starbucks cup for a certain period of time). with the cup segmented out, it may be recognized quickly when the user moves it on his desk. such recognizers may run or operate not only on cloud computing resources and data, but also on local resources and data, or both cloud and local, depending upon computational resources available.
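The desk-top and wall recognizers described above can be illustrated with a toy segmentation. The sketch below votes on quantized point heights to pull a dominant horizontal plane out of a raw point cloud; a production recognizer would use robust plane fitting (e.g., RANSAC), and every name, bin size, and tolerance here is an assumption:

```python
from collections import Counter

# Toy stand-in for a plane-extracting object recognizer: quantize point
# heights, let each point vote for its height bin, and segment out the
# points near the winning height. Purely illustrative.
def extract_top_plane(points, bin_size=0.05):
    votes = Counter(round(z / bin_size) for (_, _, z) in points)
    top_bin, _ = votes.most_common(1)[0]
    height = top_bin * bin_size
    plane = [p for p in points if abs(p[2] - height) <= bin_size / 2]
    return height, plane

# a 10x10 grid of points on a desk top at z = 0.75 m, plus some clutter
cloud = [(x * 0.1, y * 0.1, 0.75) for x in range(10) for y in range(10)]
cloud += [(0.5, 0.5, z * 0.1) for z in range(8)]   # e.g., a desk leg
height, desk_top = extract_top_plane(cloud)
```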
in one embodiment, there is a global copy of the world model on the cloud with millions of users contributing to that global model. however, for smaller worlds (e.g., an office of a particular individual in a particular town), local information will not be relevant to most users of the world. thus, the system may groom data and move information that is believed to be most locally pertinent to a given user into the local cache. in one embodiment, for example, when a user walks up to a desk, related information (such as the segmentation of a particular cup on his table) may reside only upon his local computing resources and not on the cloud, because objects that are identified as ones that move often, such as cups on tables, need not burden the cloud model and the transmission burden between the cloud and local resources. thus the cloud computing resource may segment 3-d points and images, factoring permanent (e.g., generally not moving) objects from movable ones, and this may affect where the associated data is to remain and where it is to be processed, removing processing burden from the wearable/local system for certain data that is pertinent to more permanent objects. this also allows one-time processing of a location which then may be shared with limitless other users, allows multiple sources of data to simultaneously build a database of fixed and movable objects in a particular physical location, and allows objects to be segmented from the background to create object-specific fiducials and texture maps. in one embodiment, the system may query a user for input about the identity of certain objects (for example, the system may present the user with a question such as, "is that a starbucks coffee cup?"), such that the user may train the system and allow the system to associate semantic information with objects in the real world. an ontology reference may provide guidance regarding objects segmented from the world (e.g., what the objects do, how the objects behave, etc.).
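As a hedged illustration of the permanent-versus-movable factoring above, the following sketch routes objects flagged as frequently moving into a local cache, while static geometry is promoted to the shared cloud model; the `movable` flag and all names are hypothetical:

```python
# Illustrative routing of segmented objects: movable items (a cup on a
# table) stay local; static geometry (a wall) joins the shared cloud
# model visible to other users. The flag and structures are assumptions.
def route_object(obj, local_cache, cloud_model):
    if obj.get("movable", False):
        local_cache.append(obj)     # e.g., a cup - no need to burden the cloud
    else:
        cloud_model.append(obj)     # e.g., a wall - processed once, shared

local_cache, cloud_model = [], []
for obj in [{"name": "wall", "movable": False},
            {"name": "cup", "movable": True}]:
    route_object(obj, local_cache, cloud_model)
```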
in one embodiment the system may feature a virtual or actual keypad, such as a wirelessly connected keypad, connectivity to a keypad of a smartphone, or the like, to facilitate certain user input to the system. the system may share basic elements (walls, windows, desk geometry, etc.) with any user who walks into the room in virtual or augmented reality, and in one embodiment that person's system may take images from his particular perspective and upload those to the cloud. then the cloud becomes populated with old and new sets of data and can run optimization routines and establish fiducials that exist on individual objects. it should be appreciated that gps and other localization information may be utilized as inputs to such processing. further, other computing systems and data, such as one's online calendar or facebook ® account information, may be utilized as inputs (for example, in one embodiment, a cloud and/or local system may analyze the content of a user's calendar for airline tickets, dates, and destinations, such that over time, information may be moved from the cloud to the user's local systems to be ready for the user's arrival time in a given destination). in one embodiment, cloud resources may pass digital models of real and virtual worlds between users, as described above in reference to "passable worlds", with the models being rendered by the individual users based upon parameters and textures. this reduces bandwidth relative to the passage of real-time video, allows rendering of virtual viewpoints of a scene, and allows millions or more users to participate in one virtual gathering without sending each of them data that they need to see (such as video), because the user's views are rendered by their local computing resources. 
the ar system may register the user location and field of view (together known as the "pose") through one or more of the following: real-time metric computer vision using the cameras, simultaneous localization and mapping techniques, maps, and data from sensors such as gyros, accelerometers, compass, barometer, gps, radio signal strength triangulation, signal time of flight analysis, lidar ranging, radar ranging, odometry, and sonar ranging. the ar system may simultaneously map and orient. for example, in unknown environments, the ar system may collect information about the environment, ascertaining fiducial points suitable for user pose calculations, other points for world modeling, and images for providing texture maps of the world. fiducial points may be used to optically calculate pose. as the world is mapped with greater detail, more objects may be segmented out and given their own texture maps, but the world still preferably is representable at low spatial resolution in simple polygons with low resolution texture maps. other sensors, such as those discussed above, may be utilized to support this modeling effort. the world may be intrinsically fractal in that moving or otherwise seeking a better view (through viewpoints, "supervision" modes, zooming, etc.) requests high-resolution information from the cloud resources. moving closer to objects captures higher resolution data, and this may be sent to the cloud, which may calculate and/or insert the new data at interstitial sites in the world model. referring to fig. 64 , a wearable system may capture image information and extract fiducials and recognized points 6452. the wearable local system may calculate pose using one of the pose calculation techniques mentioned below. the cloud 6454 may use images and fiducials to segment 3-d objects from the more static 3-d background. images may provide texture maps for objects and the world (textures may be real-time videos).
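The fractal, distance-driven resolution behavior described above can be caricatured as a level-of-detail request: the client picks a resolution tier from its distance to the region of interest and asks the cloud for that tier. The tiers and thresholds below are invented for illustration:

```python
# Hypothetical level-of-detail selection, assuming the cloud serves the
# world model at a few fixed resolutions. Thresholds are made up.
def resolution_for_distance(distance_m: float) -> str:
    if distance_m < 2.0:
        return "high"      # dense points and full texture maps
    if distance_m < 20.0:
        return "medium"
    return "low"           # simple polygons, low-resolution textures
```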
the cloud resources may store and make available static fiducials and textures for world registration. the cloud resources may groom the point cloud for optimal point density for registration. the cloud resources 6460 may store and make available object fiducials and textures for object registration and manipulation. the cloud may groom point clouds for optimal density for registration. the cloud resource 6462 may use all valid points and textures to generate fractal solid models of objects. the cloud may groom point cloud information for optimal fiducial density. the cloud resource 6464 may query users for training on identity of segmented objects and the world. as described above, an ontology database may use the answers to imbue objects and the world with actionable properties. the following specific modes of registration and mapping feature the terms "o-pose", which represents pose determined from the optical or camera system; "s-pose", which represents pose determined from the sensors (e.g., such as a combination of gps, gyro, compass, accelerometer, etc. data, as discussed above); and an ar server (which represents the cloud computing and data management resource). the "orient" mode makes a basic map of a new environment, the purpose of which is to establish the user's pose if the new environment is not mapped, or if the user is not connected to the ar servers. in the orient mode, the wearable system extracts points from an image, tracks the points from frame to frame, and triangulates fiducials using the s-pose (since there are no fiducials extracted from images). the wearable system may also filter out bad fiducials based on persistence. it should be appreciated that the orient mode is the most basic mode of registration and mapping and will always work even for a low-precision pose.
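The persistence-based filtering of bad fiducials just mentioned can be sketched as a simple hit-count over a sliding window of recent frames: a candidate fiducial is kept only if it has been re-observed often enough. The window length, threshold, and all names below are assumptions for illustration:

```python
from collections import defaultdict, deque

# Toy persistence filter: track, per fiducial id, whether it appeared in
# each of the last `window` frames; keep ids with at least `min_hits`.
class PersistenceFilter:
    def __init__(self, window=5, min_hits=3):
        self.min_hits = min_hits
        self.history = defaultdict(lambda: deque(maxlen=window))

    def observe(self, frame_ids):
        # mark a hit or miss for every fiducial already being tracked
        for fid in list(self.history):
            self.history[fid].append(fid in frame_ids)
        # begin tracking newly seen fiducials
        for fid in frame_ids:
            if fid not in self.history:
                self.history[fid].append(True)

    def persistent(self):
        return {fid for fid, hits in self.history.items()
                if sum(hits) >= self.min_hits}

pf = PersistenceFilter(window=5, min_hits=3)
for frame in [{"a", "b"}, {"a"}, {"a", "c"}, {"a", "b"}]:
    pf.observe(frame)
```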
however, after the ar system has been used in relative motion for at least a little time, a minimum fiducial set will have been established such that the wearable system is set for using the o-pose to recognize objects and to map the environment. as soon as the o-pose is reliable (with the minimum fiducial set), the wearable system may exit the orient mode. the "map and o-pose" mode may be used to map an environment. the purpose of the map and o-pose mode is to establish high-precision poses, to map the environment, and to provide the map and images to the ar servers. in this mode, the o-pose is calculated from mature world fiducials downloaded from the ar server and/or determined locally. it should be appreciated, however, that the s-pose may be used as a check of the calculated o-pose, and may also be used to speed up computation of the o-pose. similar to above, the wearable system extracts points from images, tracks the points from frame to frame, triangulates fiducials using the o-pose, and filters out bad fiducials based on persistence. the remaining fiducials and pose-tagged images are then provided to the ar server cloud. it should be appreciated that these functions (extraction of points, filtering out bad fiducials, and providing the fiducials and pose-tagged images) need not be performed in real-time and may be performed at a later time to preserve bandwidth. the o-pose is used to determine the user's pose (user location and field of view). the purpose of the o-pose is to establish a high-precision pose in an already mapped environment using minimum processing power. calculating the o-pose involves several steps. to estimate a pose at n, the wearable system may use historical data gathered from s-poses and o-poses (n-1, n-2, n-3, etc.). the pose at n is then used to project fiducials into the image captured at n to create an image mask from the projection.
the wearable system extracts points from the masked regions and calculates the o-pose from the extracted points and mature world fiducials. it should be appreciated that processing burden is greatly reduced by only searching/extracting points from the masked subsets of a particular image. going one step further, the calculated o-pose at n, and the s-pose at n, may be used to estimate a pose at n+1. the pose-tagged images and/or video may be transmitted to the ar server cloud. the "super-res" mode may be used to create super-resolution imagery and fiducials. composite pose-tagged images may be used to create super-resolution images, which may in turn be used to enhance fiducial position estimation. it should be appreciated that the system may iterate o-pose estimates from super-resolution fiducials and imagery. the above steps may be performed in real-time on the wearable device or may be transmitted to the ar server cloud and performed at a later time. in one embodiment, the ar system may have certain base functionality, as well as functionality facilitated by "apps" or applications that may be distributed through the ar system to provide certain specialized functionalities. for example, the following apps may be installed to the subject ar system to provide specialized functionality. in one embodiment, if the display device tracks 2-d points through successive frames, then fits a vector-valued function to the time evolution of those points, it is possible to sample the vector-valued function at any point in time (e.g., between frames) or at some point in the near future (by projecting the vector-valued function forward in time). this allows creation of high-resolution post-processing, and prediction of future pose before the next image is actually captured (e.g., doubling the registration speed is possible without doubling the camera frame rate). for body-centric rendering (as opposed to head-fixed or world-fixed renderings) an accurate view of the body is desired.
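The 2-d track fitting described above (fit a vector-valued function to a point's time evolution, then sample it between frames or project it slightly into the future) can be sketched with a per-coordinate linear least-squares fit; a real system might use a higher-order fit, and every name and number here is illustrative:

```python
# Fit a line to one coordinate's samples over time (ordinary least
# squares) and return it as a callable that can be sampled at any t,
# including between frames or past the last frame.
def fit_linear(times, values):
    n = len(times)
    mt, mv = sum(times) / n, sum(values) / n
    slope = (sum((t - mt) * (v - mv) for t, v in zip(times, values))
             / sum((t - mt) ** 2 for t in times))
    return lambda t: mv + slope * (t - mt)

times = [0.0, 1.0, 2.0, 3.0]                  # frame timestamps
xs, ys = [0.0, 1.0, 2.0, 3.0], [0.0, 2.0, 4.0, 6.0]
fx, fy = fit_linear(times, xs), fit_linear(times, ys)
midpoint = (fx(1.5), fy(1.5))                 # sample between frames
predicted = (fx(4.0), fy(4.0))                # predict before the next capture
```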
rather than measuring the body, in one embodiment it is possible to derive its location through the average position of a user's head. if the user's face points forward most of the time, a multi-day average of head position will reveal that direction. in conjunction with the gravity vector, this provides a reasonably stable coordinate frame for body-fixed rendering. using current measures of head position with respect to this long-duration coordinate frame allows consistent rendering of objects on/around a user's body - with no extra instrumentation. for implementation of this embodiment, single-register averages of the head direction-vector may be started, and a running sum of data divided by delta-t will give the current average head position. keeping five or so registers, started on day n-5, day n-4, day n-3, day n-2, and day n-1, allows use of rolling averages of only the past "n" days. in one embodiment, a scene may be scaled down and presented to a user in a smaller-than-actual space. for example, in a situation wherein there is a scene that may be rendered in a huge space (e.g., such as a soccer stadium), there may be no equivalent huge space present, or such a large space may be inconvenient to a user. in one embodiment the system may reduce the scale of the scene, so that the user may watch it in miniature. for example, one could have a bird's-eye-view video game, or a world championship soccer game, play out in an unscaled field - or scaled down and presented on a living room floor. the system may simply shift the rendering perspective, scale, and associated accommodation distance. the system may also draw a user's attention to specific items within a presented scene by manipulating the focus of virtual or augmented reality objects, by highlighting them, changing the contrast, brightness, scale, etc. preferably the system may accomplish the following modes.
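Before turning to those modes, the multi-day register averaging of head direction described above can be sketched as follows; the register structure, sample format, and day count are assumptions for illustration:

```python
from collections import deque

# Toy per-day registers for head-direction samples: each day accumulates
# a running vector sum and count, and the body-forward estimate is the
# average over the last few days' registers.
class HeadDirectionAverager:
    def __init__(self, days=5):
        self.registers = deque(maxlen=days)   # oldest day drops off

    def start_new_day(self):
        self.registers.append({"sum": [0.0, 0.0, 0.0], "count": 0})

    def add_sample(self, direction):
        reg = self.registers[-1]
        reg["sum"] = [a + b for a, b in zip(reg["sum"], direction)]
        reg["count"] += 1

    def body_forward(self):
        total, count = [0.0, 0.0, 0.0], 0
        for reg in self.registers:
            total = [a + b for a, b in zip(total, reg["sum"])]
            count += reg["count"]
        return [t / count for t in total]

avg = HeadDirectionAverager(days=5)
for day_samples in [[(1.0, 0.0, 0.0), (1.0, 0.0, 0.0)], [(0.8, 0.2, 0.0)]]:
    avg.start_new_day()
    for s in day_samples:
        avg.add_sample(s)
forward = avg.body_forward()
```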
in open-space-rendering mode, the system may grab key points from a structured environment, and fill in the space between with renderings. this mode may be used to create potential venues, like stages, output space, large indoor spaces, etc. in object-wrapping mode, the system may recognize a 3d object in the real world, and then augment it. "recognition" in this context may mean identifying the 3d object with high enough precision to anchor imagery to the 3d object. it should be appreciated that recognition, in this context, may either mean classifying the type of an object (e.g., a face of a person), and/or classifying a particular instance of an object (e.g., joe, a person). with these principles in mind, the recognizer software can be used to recognize various things, like walls, ceilings, floors, faces, roads, the sky, skyscrapers, ranch houses, tables, chairs, cars, road signs, billboards, doors, windows, bookshelves, etc. some recognizer software programs may be type i, and have generic functionality (e.g., "put my video on that wall", "that is a dog", etc.), while other recognizer software programs may be type ii, and have specific functionality ("my tv is on _my_ living room wall 3.2 feet from the ceiling", "that is fido", etc.). in body-centric rendering, any rendered virtual objects are fixed to the user's body. for example, some objects may float around the user's body (e.g., a user's belt). accomplishing this requires knowing the position of the body, and not just the head. however, the position of the body may be estimated by the position of the head. for example, heads usually point forward parallel to the ground. also, the position of the body may become more accurate with time by using data acquired by a long-term average of the user's head positions. type ii recognized objects may be linked to an online database of various 3d models.
when starting the recognition process, it is ideal to start with objects that have commonly available 3d models, like cars or public utilities. the system may also be used for virtual presence, e.g., enabling a user to paint a remote person's avatar into a particular open space. this may be considered a subset of "open space rendering," discussed above. the user may create a rough geometry of a local environment and iteratively send both geometry and texture maps to others. however, the user may need to grant permission for others to enter their environment. subtle voice cues, hand tracking, and head motion may be sent to the remote avatar. based on the above information, the avatar may be animated. it should be appreciated that creating virtual presence in this manner minimizes bandwidth and may be used sparingly. the system may also be configured for making an object "a portal" to another room. in other words, instead of showing an avatar in a local room, a recognized object (e.g., a wall) may be used as a portal to another user's environment. thus, multiple users may be sitting in their own rooms, looking "through" walls into the environments of other users. the system may also be configured for creating a dense digital model of an area when a group of cameras (people) view a scene from different perspectives. this model may be render-able from any vantage point as long as the area is viewed through at least one camera. for example, a wedding scene may be rendered through the vantage points of multiple users. it should be appreciated that recognizers may differentiate and map stationary objects differently from moving objects (e.g., walls have stable texture maps, while people have higher-frequency moving texture maps). with a rich digital model updated in real time, scenes may be rendered from any perspective. going back to the wedding example, an attendee in the back may fly in the air to the front row for a better view.
or an off-site attendee can find a "seat" either with an avatar, or invisible, if permitted by an organizer. attendees can show moving avatars, or may have the avatars hidden from view. it should be appreciated that this aspect likely requires extremely high bandwidth. high-frequency data may be streamed through the crowd on a high-speed local wireless connection, while low-frequency data may come from the ar server in the cloud. in the above example, because all attendees of the wedding may have high-precision position information, establishing an optimal routing path for local networking becomes trivial. for communication to the system, or between users, simple silent messaging is often desirable. for example, a finger chording keyboard may be used. in an optional embodiment, tactile glove solutions may offer enhanced performance. to give a full virtual reality experience to users, the vision system is darkened and the user is shown a view that is not overlaid with the real world. even in this mode, a registration system may still be necessary to track a user's head position. there may be several modes that may be used to experience full virtual reality. for example, in the "couch" mode, the users may be able to fly. in the "walking" mode, objects of the real world may be re-rendered as virtual objects so that the user does not collide with the real world. as a general rule, rendering body parts may be important for the user's suspension of disbelief in navigating through the virtual world. in one or more embodiments, this may require having a method for tracking and rendering body parts in the user's field of view. for example, an opaque visor may be a form of virtual reality with many image-enhancement possibilities. in another example, a wide field of vision may give the user a rear view. in yet another example, the system may include various forms of "super vision," like telescope vision, see-through vision, infrared vision, god's vision, etc.
in one embodiment a system for virtual and/or augmented user experience is created such that remote avatars associated with users may be animated based at least in part upon data on a wearable device, with input from sources such as voice inflection analysis and facial recognition analysis, as conducted by pertinent software modules. for example, referring back to fig. 60 , the bee avatar 6002 may be animated to have a friendly smile based upon facial recognition of a smile upon the user's face, or based upon a friendly tone of voice or speaking, as determined by software that analyzes voice inputs to microphones which may capture voice samples locally from the user. further, the avatar character may be animated in a manner in which the avatar is likely to express a certain emotion. for example, in an embodiment wherein the avatar is a dog, a happy smile or tone detected by a system local to the human user may be expressed in the avatar as a wagging tail of the dog avatar. referring to figs. 65-70 , various aspects of complex gaming embodiments are illustrated in the context of a spy type game which may be thematically oriented with some of the spy themes presented in relation to the character promoted under "secret agent 007". referring to fig. 65 , an illustration of a family 6584 is depicted, with one member of the family 6585 piloting a character in the game by operating an input device 6588, such as a gaming joystick or controller, which is operatively coupled to a gaming computer or console 6586, such as those based upon personal computers or dedicated gaming systems. the gaming console 6586 is operatively coupled to a display 6590 that shows a user interface view 6592 to the pilot/operator 6585 and others who may be nearby. fig. 66 illustrates one example of such a user interface view 6592, in which the subject game is being conducted on or near a bridge within the city of london, england.
the user interface view 6592 for this particular player 6585 is purely virtual reality (e.g., none of the elements of the displayed user interface are actually present with the player 6585); they are virtual elements displayed using the monitor or display (element 6590 in fig. 65 ). referring again to fig. 66 , the depicted virtual reality view 6592 features a view of the city of london featuring a bridge 6602 and various buildings 6698 and other architectural features, with a depiction of the gaming character (6618 - also referred to as "agent 009" in this illustrative example) operated by the subject player 6585 from a perspective view as shown in the user interface view 6592 of fig. 66 . also displayed to the player 6585 are a communications display 6696, a compass indicator 6694, a character status indicator 6614, a news tool user interface 6604, a social networking tool user interface 6632, and a messaging user interface 6612. further shown is a representation of another character in the game (6622 - also referred to as "agent 006" in this illustrative example). as shown in the user interface view 6592, the system may present information deemed relevant to the scene presented, such as a message through the messaging interface 6612 that agent 006 is approaching, along with visually-presented highlighting around the agent 006 character. the operator 6585 may change the perspective of the view he or she is utilizing at any time. for example, rather than the helicopter-like perspective view shown in fig. 66 , the player may decide to select a view from the perspective of the eyes of such character, or one of many other possible views which may be calculated and presented. referring to fig.
67 , another illustrative view 6744 shows an actual human player operating as character "agent 006" 6740 wearing a head mounted ar display system 6700 and associated local processing system 6708 while he participates in the same game that is being played by the operator at home in her living room (player 6585 in fig. 65 , for example), and while he actually walks through the real city of london for his blended or augmented reality experience. in the depicted embodiment, while the player 6740 walks along the bridge wearing his augmented reality head mounted display 6700, his local processing system 6708 is feeding his display with various virtual reality elements as depicted, which are overlaid upon his view of actual reality (e.g., such as the actual skyline and structures of london 6738). the human may be carrying one or more actual documents 6842 in his hands, which, in one embodiment, were previously electronically communicated to him for printout and use in the gaming scenario. fig. 68 shows an illustration of the view 6846 from the player's 6740 eye perspective, looking out over his actual documents 6742 to see the actual london skyline 6738, while also being presented with a variety of virtual elements for an augmented reality view through his head mounted display. the virtual elements may include, for example, a communications display 6826, a news display 6828, one or more electronic communications or social networking tool displays 6832, one or more player status indicators 6834, a messaging interface 6836, a compass orientation indicator 6824, and one or more displays of content 6848, such as textual, audio, or video content. this may be retrieved and presented in accordance with other displayed or captured information, such as the text or photographs featured in the actual documents 6842 carried by the player 6840. 
nearby, another character "agent 009", who only exists in virtual reality, is presented into the augmented reality view 6846 of the player 6840 operating as character "agent 006", and may be labeled as such in the user interface for easy identification, as shown in fig. 68 . referring to fig. 69 , a player's eye view 6952 is presented of another player 6950 who also happens to be actually present in london 6938, walking across the same bridge toward the "agent 006" player 6940, but without a head-worn ar system. this player 6950 may be carrying a mobile communication device 6954 such as a tablet or smartphone, which in this embodiment, may be wirelessly connected with the larger system and utilized as a "window" into the augmented reality world of the subject game, configured to present in the limited user interface 6956 of the device augmented reality information regarding one or two other nearby players (e.g., actual or virtual), along with other augmented reality display information 6962 such as warnings or character information. as shown in fig. 69 , a virtual representation of the agent 006 player 6958 and that of agent 009 6960 are shown on the user interface 6956. referring to fig. 70 , a "bird's eye" or manned or unmanned aerial vehicle (or "uav") view 7064 is presented. in one embodiment, the view 7064 may be based upon a virtual uav operated by another player, or one of the aforementioned players. the depicted view 7064 may be presented in full virtual mode to a player, for example, who may be sitting on a couch at home with a large computer display 6590 or a head mounted ar system. alternatively, such a view may be presented as an augmented reality view to a player who happens to be in an airplane or other flying vehicle (e.g., "augmented" or blended because, to a person in such a position, at least portions of the view would be actual reality).
the illustrated view 7064 contains an interface area for an information dashboard 7070 featuring pertinent information, such as information regarding an identified counterparty spotted in the view. the depicted view 7064 also features virtual highlighting information such as sites of interest 7068, locations and/or statuses of other players or characters 7066, and/or other information presentations 7067. referring to fig. 71 , for illustrative purposes, another augmented reality scenario is presented with a view 7172 featuring certain actual reality elements, such as: the architecture of the room 7174, a coffee table 7180, a dj table 7178, and five actual people (7176, 7188, 7182, 7184, 7186), each of whom is wearing a head mounted ar system so that they may experience respective augmented reality views of the world (e.g., a virtual reality cartoon character 7198, a virtual reality spanish dancer character 7196, a cartoon character 7194, and a globe-rabbit-eared head covering 7192 for one of the actual people 7188). without the augmented reality interface hardware, the room would look to the five actual people like a room with furniture and a dj table. with the ar system, however, the system is configured such that the engaged players or participants may experience other users currently in the room in the form of the cartoon character 7198, the spanish dancer character 7196, the cartoon character 7194, or as a user wearing normal clothing but with his/her head visualized with the globe-rabbit-eared head covering 7192. the system may also be configured to show certain virtual features associated with the actual dj table 7178, such as virtual music documentation pages 7190 which may be only visible to the dj 7176, or dj table lighting features which may be visible to anyone around using their augmented reality interface hardware. referring to figs.
72a and 72b , an adaptation of a mobile communications device such as a tablet computer or smartphone may be utilized to experience augmented reality as a modified "window" into the augmented reality world of the subject game or experience being created using the subject system. referring to fig. 72a , a typical smartphone or tablet computing system mobile device 7254 features a relatively simple visual user interface 7256 and typically has one or more cameras. referring to fig. 72b , the mobile computing device has been removably and operatively coupled into an enhancement console 7218 to increase the augmented reality participation capabilities of the mobile computing device. for example, the depicted embodiment features two player-oriented cameras 7202 which may be utilized for eye tracking; four speakers 7200 which may be utilized for simple high-quality audio and/or directional sound shaping; two forward-oriented cameras 7204 for machine vision, registration, and/or localization; an added battery or power supply capability 7212; one or more input interfaces (7214, 7216) which may be positioned for easy utilization by a player grasping the coupled system; a haptic feedback device 7222 to provide feedback to the user who is grasping the coupled system (in one embodiment, the haptic feedback device may provide two axes of feedback, in + or - directions for each axis, to provide directional feedback; such configuration may be utilized, for example, to assist the operator in keeping the system aimed at a particular target of interest, etc.); one or more gps or localizing sensors 7206; and/or one or more accelerometers, inertial measurement units (imu), and/or gyros (7208). referring to fig. 73 , in one embodiment, a system such as that depicted in fig. 72b may be utilized to coarse-localize a participant in the x and y (akin to latitude and longitude earth coordinates) cartesian directions using a gps sensor and/or wireless triangulation (7332).
coarse orientation may be achieved using a compass and/or wireless orientation techniques (7334). with coarse localization and orientation determined, the distributed system may load (e.g., via wireless communication) local feature mapping information to the local device. such information may comprise, for example, geometric information, such as skyline geometry, architectural geometry, waterway/planar element geometry, landscape geometry, and the like (7336). the local and distributed systems may utilize the combination of coarse localization, coarse orientation, and local feature map information to determine fine localization and orientation characteristics (such as x, y, and z {akin to altitude} coordinates and 3-d orientation) (7338), which may be utilized to cause the distributed system to load fine pitch local feature mapping information to the local system to enhance the user experience and operation. movements to different orientations and locations may be tracked utilizing coarse localization and orientation tools as well as locally deployed devices such as inertial measurement units, gyroscopes, and accelerometers, which may be coupled to mobile computing systems such as tablets or mobile phones carried by the participant (7342). actual objects, such as the dj table 7178 featured in fig. 71 , may be extended with virtual reality surfaces, shapes, and/or functionality. for example, in one embodiment, a real button on such a device may open a virtual panel which interacts with the actual device and/or other devices, people, or objects. a room such as the party room 7174 depicted in fig. 71 may be extrapolated to be any room or space. the system may have anywhere from some known data (such as existing two or three dimensional data regarding the room or other associated structures or things) to nearly zero data, and machine vision configurations utilizing cameras such as those mounted upon the controller console of fig.
72b can be utilized to capture additional data; further, the system may be created such that groups of people may crowd-source usable two or three dimensional map information. in a configuration wherein existing map information is available, such as three-dimensional map data of the city of london, a user wearing a head mounted ar system may be roughly located using gps, compass, and/or other means (such as additional fixed tracking cameras, devices coupled to other players, etc.). fine registration may then be accomplished using the user's sensors, with the known geometry of the physical location serving as fiducials for such registration. for example, for a london-specific building viewed at distance x, when the system has located the user within y feet from gps information, and has direction c from the compass and map m, the system may be configured to implement registration algorithms (somewhat akin to techniques utilized in robotic or computer-assisted surgery) to "lock in" the three-dimensional location of the user within some error e. fixed cameras may also be utilized along with head mounted or sensory ware systems. for example, in a party room such as that depicted in fig. 71 , fixed cameras mounted to certain aspects of the room 7174 may be configured to provide live, ongoing views of the room and moving people, giving remote participants a "live" digital remote presence view of the whole room, such that their social interactions with both virtual and physical people in the room are much richer. in such an embodiment, a few rooms may be mapped to each other: the physical room and virtual room geometries may be mapped to each other; additional extensions or visuals may be created which map the virtual room equally to, smaller than, or larger than the physical room, with objects moving about through both the physical and virtual "meta" rooms, and then visually customized, or "skinned", versions of the room may be made available to each user or participant.
for example, while the users may be in the exact same physical or virtual room, the system may allow for custom views by users. for example, one user can be at the party, but have the environment mapped with a "death star" motif or skin, while another user may have the room skinned as it is shown in fig. 71 with the party environment.

display

in one or more embodiments, a predictor/corrector mechanism can be applied to smooth out and/or predictively correct for delays and/or timing inconsistencies in the display process. to illustrate, consider that there are numerous stages in the process to display an image in the eyepiece of a wearable device. for example, assume that the wearable device corresponds to at least the following processing stages:

sensor -> compute -> application -> display processing

the sensor stage pertains to the measurements taken from one or more sensors that are used to create or display data through the wearable device. such sensors may include, for example, cameras, imus, etc. the issue is that some of the sensors may have measurement rates that are significantly different from one another, where some are considered relatively "fast" and others relatively "slow". camera sensors may operate relatively slowly, e.g., in the range from 30-60 measurements/second. in contrast, imus may operate relatively fast, e.g., in the range from 500-2000 measurements/second. these different measurement rates may introduce delays and inconsistencies when attempting to use the measurement data to generate display information. in addition, timing delays may be introduced during some of the above-identified processing stages. for example, a timing delay may be introduced in the compute stage, during which the sensor data is received and the computations upon that sensor data are run. for example, the actions to normalize, compute, adjust, and/or scale the sensor data will likely create a delay δt_compute during this processing stage.
similarly, the application stage is also likely to introduce a certain amount of delay. the application stage is the stage at which a particular application is executing to operate upon the input data for the functionality desired by the user. for example, if the user is playing a game, then the game application is running in the application stage. the required processing by the application will introduce a delay δt_application during this processing stage. the display processing stage is also likely to introduce its own delay δt_display into the process. this delay is introduced, for example, to perform the processing needed to render the pixels to be displayed in the wearable eyepieces. as is evident, many types of delays are introduced during the various stages of the processing. a predictive filter may be used to account for and/or correct the effects of these delays and/or inconsistencies on the displayed image. this is accomplished by predictively determining the effects of these issues (e.g., by accounting for the sensor clocks and δt_compute, δt_application, and δt_display). the prediction filter also takes into account the relative speed of the sensor measurements at the sensor stage. one possible approach that can be taken to make this prediction is to utilize a kalman predictor in the display processing stage. based at least in part on this prediction, compensatory changes can be made to the display data to account for and/or correct negative effects of the delays and/or measurement speed. as an illustrative example, consider when a certain set of visual data needs to be displayed in the wearable device. however, the user is also in motion at that particular point in time, and the delays discussed above may cause a noticeable lag in the rendered pixels to the user for that scene.
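as a rough illustration of such compensation, the sketch below uses a constant-velocity extrapolation (a simplification of a full kalman predictor) to estimate the pose at display time given the total pipeline delay, and then shifts the rendered pixels accordingly. the one-dimensional pose, function names, and units are illustrative assumptions, not the actual implementation.

```python
# minimal sketch: constant-velocity pose prediction to compensate the total
# pipeline delay (delta-t for compute + application + display). a production
# system would use a full kalman predictor over imu-rate measurements.

def predict_pose(samples, total_delay):
    """samples: list of (timestamp, pose) from a fast sensor (e.g., an imu).
    returns the extrapolated pose at (last timestamp + total_delay)."""
    (t0, a0), (t1, a1) = samples[-2], samples[-1]
    velocity = (a1 - a0) / (t1 - t0)      # estimated rate of pose change
    return a1 + velocity * total_delay    # linear extrapolation to display time

def shift_pixels(pixel_row, pixels_per_unit, pose_now, pose_predicted):
    """shift a row of rendered pixels to account for predicted motion,
    padding vacated pixels with zeros."""
    shift = int(round((pose_predicted - pose_now) * pixels_per_unit))
    if shift == 0:
        return pixel_row
    if shift > 0:
        return pixel_row[shift:] + [0] * shift
    return [0] * (-shift) + pixel_row[:shift]
```

for example, with pose samples (0.0 s, 0.0) and (0.01 s, 0.1) and a total delay of 0.02 s, the predicted pose is approximately 0.3, and the rendered row is shifted by the corresponding number of pixels.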
in this situation, the present embodiment uses the predictive filter to identify the existence and effect of the delay, to analyze the movement of the user to determine "where he is going", and to then perform a "shift" of the displayed data to account for the processing delays. the filter can also be used to "smooth" the visual artifacts and negative effects from the sensor measurements, e.g., using a kalman smoother.

ui system

the following discussion will focus on various types of user interface components that may be used to communicate with the ar system. the ar system may use one or more of a large variety of user interface (ui) components. the user interface components may include components that perform: eye tracking, hand tracking, totem tracking, natural feature pose determination, head pose determination, as well as predictive head pose determination. the user interface system may employ an asynchronous world model. the user interface components may employ view-centered (e.g., head-centered) rendering, body-centered rendering, and/or world-centered rendering, as discussed herein. further, the user interface components may employ various types of environmental data, for example gps location data, wi-fi signal strength data, cellphone differential signal strength, known features, image histogram profiles, hashes of room features, etc., proximity to walls/ceilings/floors/3d-blobs/etc., location in the world (e.g., home, office, car, street), approximate social data (e.g., "friends"), and/or voice recognition. as described above, an asynchronous world model refers to building a local copy in the individual ar system(s) and synchronizing any changes against the cloud. for example, if a chair is moved in a space, a chair object recognizer may recognize that the chair has moved. however, there may be a delay in getting that information to the cloud, and then getting it downloaded to the local system such that a remote presence avatar may sit in the chair.
it should be appreciated that environmental data can contribute to how the user interface can be used. since the ar system is situationally aware, it implicitly has a semantic understanding of where the user or physical objects are located. for example, gps location data, wi-fi signal strength or network identity, differential signal strength, known features, histogram profiles, etc., can be used to make statistical inferences for a topological map. the concept of the user interface in the augmented reality implementation can be extended. for example, if a user is close to a wall and knocks on the wall, the knocking can be interpreted by the user interface as a user experience (ux) interaction modality. as another example, if a user selects a particular wi-fi signal on a device, the selection could be interpreted by the user interface as an interaction modality. the world around the user becomes part of the user interface (ui) for the user.

user inputs

referring ahead to fig. 100 , the user interface may be responsive to one or more of a variety of inputs. the user interface of the ar system may, for example, be responsive to hand inputs 10002, for instance: gestures, touch, multi-touch, and/or multiple hand input. the user interface of the ar system may, for example, be responsive to eye inputs 10004, for instance: eye vector and/or eye condition (e.g., open/close). the user interface of the ar system may, for example, be responsive to totem inputs 10006. totems may take any of a large variety of forms, for example a belt pack. totem input may be static, for example tracking a closed book/tablet, etc. totem input may be dynamic, for example dynamically changing like flipping pages in a book, etc. totem input may be related to communications with the totem, for instance a ray gun totem. totem input may be related to intrinsic communications, for instance communications via usb, data-communications, etc.
totem input may be generated via an analog joystick, click wheel, etc. the user interface of the ar system may, for example, be responsive to head pose, for instance head position and/or orientation. the user interface of the ar system may, for example, be responsive to voice, for instance spoken commands and parameters. the user interface of the ar system may, for example, be responsive to environmental sounds. the ar system may, for instance, include one or more ambient microphones to pick up sounds, for example chest taps, etc. the user interface of the ar system may, for example, be responsive to environmental situations. for instance, the user interface may be responsive to movement occurring against or proximate a wall, or a movement above a defined threshold (e.g., movement at a relatively high speed). it may be useful to have a consistent user interface metaphor to suggest to developers and build into the ar system's operating system (os), and which may allow for reskinning for various applications and/or games. one approach may employ user actuatable lever or button icons, although that approach lacks tactile feedback. levers may have a respective fulcrum point, although such an approach may be difficult for users. another approach is based on a "force field" metaphor that intentionally keeps things away (e.g., sparks on boundaries, etc.). in one or more embodiments, a virtual image may be presented to the user in the form of a virtual user interface. the virtual user interface may be a floating virtual screen, as shown in fig. 100 . since the system knows the location (e.g., the depth, distance, perceived position, etc.) of the virtual user interface, the system may easily calculate the coordinates of the virtual interface, allow the user to interact with the virtual screen, and receive inputs from the virtual user interface based on the coordinates at which the interaction happens and the known coordinates of the user's hands, eyes, etc.
in other words, the system maps coordinates of various "keys" or features of the virtual user interface, and also maps coordinates of (or knows a location of) the user's hands, eyes (or any other type of input), and correlates them to receive user input. for example, if a virtual user interface is presented to the user in a head-centric reference frame, the system always knows a distance/location of various "keys" or features of the virtual user interface in relation to a world-centric reference frame. the system then performs some mathematical translations/transforms to find a relationship between both reference frames. next, the user may "select" a button of the user interface by squeezing the virtual icon. since the system knows the location of the touch (e.g., based on haptic sensors, image-based sensors, depth sensors, etc.), the system determines what button was selected based on the location of the hand squeeze and the known location of the button on the user interface. thus, constantly knowing the location of virtual objects in relation to real objects, and in relation to various reference frames (e.g., world-centric, head-centric, hand-centric, hip-centric, etc.), allows the system to understand various user inputs. based on the input, the system may use a mapping table to correlate the input to a particular action or command, and execute the action. in other words, the user's interaction with the virtual user interface is always being tracked (e.g., eye interaction, gesture interaction, hand interaction, head interaction, etc.). these interactions (or characteristics of these interactions), including, but not limited to, location of the interaction, force of interaction, direction of the interaction, frequency of interaction, number of interactions, nature of interactions, etc., are used to allow the user to provide user input to the user interface in response to the displayed virtual user interface.
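the coordinate bookkeeping described above can be sketched as follows: ui "keys" are stored in a head-centric frame, the tracked hand arrives in world coordinates, and a rigid transform relates the two frames before a simple hit test. this is a hypothetical 2-d simplification (full poses would be 4x4 matrices), and the function and button names are illustrative.

```python
import math

# sketch: transform a world-frame hand position into the head-centric frame
# of a virtual ui, then test it against known button bounds in that frame.

def world_to_head(point, head_pos, head_yaw):
    """express a world-frame 2-d point in the head-centric frame."""
    dx, dy = point[0] - head_pos[0], point[1] - head_pos[1]
    c, s = math.cos(-head_yaw), math.sin(-head_yaw)
    return (c * dx - s * dy, s * dx + c * dy)

def hit_test(hand_world, head_pos, head_yaw, buttons):
    """buttons: {name: (xmin, ymin, xmax, ymax)} in the head frame.
    returns the name of the pressed button, or None."""
    hx, hy = world_to_head(hand_world, head_pos, head_yaw)
    for name, (x0, y0, x1, y1) in buttons.items():
        if x0 <= hx <= x1 and y0 <= hy <= y1:
            return name
    return None
```

a lookup in a mapping table from the returned button name to an action or command would then complete the input path described above.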
eye tracking

in one or more embodiments, the ar system can track eye pose (e.g., orientation, direction) and/or eye movement of one or more users in a physical space or environment (e.g., a physical room). the ar system may employ information (e.g., captured images or image data) collected by one or more sensors or transducers (e.g., cameras) positioned and oriented to detect pose and/or movement of a user's eyes. for example, head worn components of individual ar systems may include one or more inward facing cameras and/or light sources to track a user's eyes. as noted above, the ar system can track eye pose (e.g., orientation, direction) and eye movement of a user, and construct a "heat map". a heat map may be a map of the world that tracks and records the time, frequency and number of eye pose instances directed at one or more virtual or real objects. for example, a heat map may provide information regarding which virtual and/or real objects produced the greatest number/time/frequency of eye gazes or stares. this may further allow the system to understand a user's interest in a particular virtual or real object. advantageously, in one or more embodiments, the heat map may be used for advertising or marketing purposes, for example to determine the effectiveness of an advertising campaign. the ar system may generate or determine a heat map representing the areas in the space to which the user(s) are paying attention. in one or more embodiments, the ar system can render virtual content (e.g., virtual objects, virtual tools, and other virtual constructs, for instance applications, features, characters, text, digits, and other symbols), for example, with position and/or optical characteristics (e.g., color, luminosity, brightness) optimized based on eye tracking and/or the heat map.

gaze tracking

it should be appreciated that the concepts outlined with respect to gaze tracking may be applied to any of the user scenarios and embodiments described further below.
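a minimal sketch of the heat map accumulation described above, assuming gaze samples have already been resolved to object identifiers (the sample format and field names are illustrative assumptions):

```python
from collections import defaultdict

# sketch: for each object a gaze sample lands on, accumulate total dwell
# time and the number of distinct gaze instances (a new instance begins
# whenever the gaze arrives at an object it was not on in the prior frame).

def build_heat_map(gaze_samples, sample_dt):
    """gaze_samples: iterable of object ids (or None), one per tracking frame.
    sample_dt: seconds per frame.
    returns {object_id: {"time": seconds, "count": gaze instances}}."""
    heat = defaultdict(lambda: {"time": 0.0, "count": 0})
    previous = None
    for target in gaze_samples:
        if target is not None:
            heat[target]["time"] += sample_dt
            if target != previous:      # a new gaze instance begins
                heat[target]["count"] += 1
        previous = target
    return dict(heat)
```

the resulting per-object totals are what would feed rankings such as "which object produced the greatest number/time/frequency of gazes".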
in one or more embodiments, the various user interfaces described below may also be activated/originated based on a detected gaze. the principles described herein may be applied to any other part of the disclosure, and should not be read as limiting. the ar system may track eye gaze in some embodiments. there are three main components to gaze tracking: an eye tracking module (pupil detection and center of cornea detection), a head tracking module, and a correlation module that correlates the eye tracking module with the head tracking module. the correlation module correlates the information between the world coordinates (e.g., position of objects in the real world) and the eye coordinates (e.g., movement of the eye in relation to the eye tracking cameras, etc.). the eye tracking module is configured to determine the center of the cornea and the center of the pupil. referring ahead to fig. 117 , a schematic of the eye 11702 is illustrated. as shown in fig. 117 , a line 11704 is shown to pass through the center of the cornea, the center of the pupil and the center of the eyeball. this line 11704 may be referred to as the optical axis. fig. 117 also shows another gaze line 11706 that passes through the cornea. this line may be referred to as the visual axis. as shown in fig. 117 , the visual axis is a tilted line in relation to the optical axis. it should be appreciated that the area of the fovea 11708 through which the visual axis 11706 crosses is considered to be a very dense area of photoreceptors, and is therefore crucial for the eye to view the outside world. the visual axis 11706 is typically at a 1-5° deviation (not necessarily vertical deviation) from the optical axis. in conventional gaze tracking technologies, one of the main assumptions is that the head is not moving. this makes it easier to determine the visual axis in relation to the optical axis for gaze tracking purposes.
however, in the context of the ar system, it is anticipated that the user will be constantly moving his/her head; therefore, conventional gaze tracking mechanisms may not be feasible. to this end, the ar system is configured to normalize the position of the cornea in relation to the system. it should be appreciated that the position of the cornea is very important in gaze tracking because both the optical axis and the visual axis pass through the cornea, as shown in fig. 117 above. referring now to fig. 118 , the ar system comprises a world camera system (e.g., cameras placed on the user's head to capture a set of surroundings; the cameras move with the movement of the user's head) 11804 that is attached to the wearable ar system 11806. also, as shown in fig. 118 , the ar system 11806 may further comprise one or more eye tracking cameras 11808 that track movements of the eye 11802. since both cameras (e.g., the eye tracking cameras 11808 and the world cameras 11804) are moving, the system may account for both head movement and eye movement. both the head movement (e.g., calculated based on the fov cameras 11804) and the eye movement (e.g., calculated based on the eye tracking cameras 11808) may be tracked in order to normalize the position of the cornea. it should be appreciated that the eye tracking cameras 11808 measure the distance from the cameras to the center of the cornea. thus, to compensate for any changes in how the wearable ar system 11806 moves with respect to the eye, the distance to the center of the cornea is normalized. for example, with eye glass movement, there may be a slight rotation and/or translation of the cameras away from the cornea. however, the system compensates for this movement by normalizing the distance to the center of the cornea.
it should be appreciated that since both the eye tracking cameras and the head cameras (world cameras) are coupled to a common rigid body (e.g., the frame of the ar system), any normalization or correction of the eye tracking cameras needs to also be similarly performed on the world cameras. for example, the same rotation and translation vector may be similarly applied to the world camera system. thus, this step identifies the relationship between the eye tracking and head tracking systems (e.g., a rotational vector, a translational vector, etc.). once the rotation and/or translation vectors have been identified, a calibration step is performed at various depths away from the user. for example, there may be known points that are at a fixed distance away from the user. the world cameras 11804 may measure the distance between a point that is fixed in space and the user. as discussed above, a position of the center of the cornea is also known based on calculations associated with the eye tracking cameras 11808. additionally, as discussed above, the relationship between the eye tracking cameras 11808 and the world cameras is also known (e.g., any translational or rotational vectors). thus, it can be appreciated that once the position of the target (e.g., fixed known points in space) and the position of the cornea have been identified, the gaze line (from the cornea to the target) may be easily identified. this information may be used in mapping and/or rendering in order to accurately portray virtual objects in space in relation to one or more real objects of the physical world. more particularly, to determine the relationship between the world cameras 11804 and the eye tracking cameras 11808, at least two fixed images may be presented both to the eye camera and the world camera, and the difference in the images may be used to calibrate both cameras.
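the shared-rigid-frame point above can be illustrated with a small sketch: the same corrective rotation r and translation t estimated for the eye tracking camera is applied to the world camera, since both sit on one rigid frame. the matrix layout, tuples, and function names here are illustrative assumptions.

```python
# sketch: apply one rigid correction (3x3 row-major rotation r plus
# translation t) identically to both rigidly coupled camera origins.

def apply_rigid(r, t, p):
    """return r @ p + t for a single 3-d point p."""
    return tuple(sum(r[i][j] * p[j] for j in range(3)) + t[i] for i in range(3))

def correct_rig(r, t, eye_cam_origin, world_cam_origin):
    """apply the same correction to both camera origins on the rigid frame."""
    return apply_rigid(r, t, eye_cam_origin), apply_rigid(r, t, world_cam_origin)
```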
for instance, if the center of the cornea is known in relation to the eye tracking system 11808, the center of the cornea may be determined in relation to the world coordinate system 11804 by utilizing the known relationship between the eye cameras and the world cameras. in one or more embodiments, during a calibration process (e.g., during a set-up process when the user first receives the ar device, etc.), a first fixed image is captured by the eye camera and then by the world camera. for illustrative purposes, the first image capture performed by the eye camera may be considered "e", and the first image capture performed by the world camera may be considered "w". then, a second fixed image is captured by the eye camera and then captured by the world camera. the second fixed image may be at a slightly different position than the first fixed image. the second image capture of the eye camera may be referred to as e' and the second image capture of the world camera may be referred to as w'. since z = w·x·e and z = w'·x·e', x can be calculated using the above two equations. thus, this information may be used to map points reliably, naturally calibrating the position of the cameras in relation to the world. by establishing this mapping information, the gaze line 11706 may be easily determined, which may, in turn, be used to strategically provide virtual content to the user.

gaze tracking hardware

referring now to fig. 119 , to detect the center of the cornea using the eye tracking module, the ar system utilizes either one camera with two glints (e.g., from led lights) or two cameras with one glint each. in the illustrated embodiment, only one glint 11902 is shown in relation to the eye 11802 and the eye tracking camera 11808. it should be appreciated that the surface of the cornea is very reflective and thus, if there is a camera that tracks the eye (e.g., the eye tracking cameras), there may be a glint that is formed on the image plane of the camera.
since the 3d position of the led light 11902 is known, and the line from the image plane of the camera to the glint 11910 is known, a 3d plane comprising the glint and the image plane is created. the center of the cornea is located on this created 3d plane 11904 (which is represented as a line in fig. 119 ). similarly, if another glint (from another led light) is used, the two 3d planes intersect each other such that the other 3d plane also contains the center of the cornea. thus, it can be appreciated that the intersection of both 3d planes produces a line which holds the center of the cornea. now the exact point of the cornea center within that line may be determined. it should be appreciated that there is a unique position on that line (from the glint to the projector) that satisfies the law of reflection. as is well known in physics, the law of reflection states that when a ray of light reflects off a surface, the angle of incidence is equal to the angle of reflection. this law may be used to find the center of the cornea. referring to fig. 120 , the distance from the center of the cornea to the original point (e.g., the glint 11910) may now be determined (r', not shown). similarly, the same analysis may be performed on the other line 12004 (from the other glint 12002 to the other projector) to find r'' (the distance from the intersection line to the other line) (not shown). the center of the cornea may be estimated based on the values of r' and r'' that are closest to each other. it should be appreciated that the above example embodiment describes two planes, but the position of the cornea may be found more easily if more planes are used. this may be achieved by using a plurality of led lights (e.g., more glints). it is important that the eye tracking system produce at least two glints on the eye. to increase accuracy, more glints may be produced on the eye.
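the two-plane construction described above can be sketched with plain vector algebra: each camera/glint/led triple spans a plane containing the cornea center, and the cross product of the two plane normals gives the direction of the intersection line on which the cornea center lies. the positions used are illustrative, and the final reflection-law search along the line is omitted.

```python
# sketch of the two-glint geometry: intersecting the two led/glint planes
# yields the line that contains the center of the cornea.

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def plane_normal(camera, glint, led):
    """normal of the plane spanned by (glint - camera) and (led - camera)."""
    return cross(sub(glint, camera), sub(led, camera))

def cornea_line_direction(camera, glint1, led1, glint2, led2):
    """direction of the intersection line of the two glint planes, which
    contains the center of the cornea."""
    n1 = plane_normal(camera, glint1, led1)
    n2 = plane_normal(camera, glint2, led2)
    return cross(n1, n2)
```

with more leds (more glints), additional planes can be intersected the same way to over-determine the line and improve accuracy, as the text notes.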
however, with additional glints produced on the surface of the eye, it becomes difficult to determine which glint was produced by which led. to this end, to understand the correspondences between the glints and the leds, rather than simultaneously reflecting the glints on each frame, one led may be turned on for one frame, and the other may be turned on after the first one has been turned off. this approach may make the ar system more reliable. similarly, it is difficult to determine the exact center of the pupil because of discrepancies caused by refraction. to detect the center of the pupil, an image of the eye may be captured. one may move around the center of the image in a "starburst" pattern radially outward from a central point in order to find the pupil. once that is found, the same process may be performed starting from points within the pupil to find the edges of the pupil. this information may be used to infer the pupil center. it should be appreciated that if this process is repeated several times, some centers may be outliers. however, these outliers may be filtered out. even with this approach, however, the center of the pupil may still not be in the correct position because of the refraction principles discussed above. referring now to fig. 121 , calibration may be performed to determine the deviation between the visual axis and the optical axis. when calibrating the system, the real center of the pupil may not matter, but for mapping in the world (consider, for example, the world to be in 2d), it is important to determine the distance between the world and the eye. given the pupil center and the image plane, it is important to find a mapping to the correlated coordinates in the 2d world, as shown in fig. 121 . to this end, one can use parabola mapping to find the corresponding coordinates in the image plane. a sample equation like the following may be used, as shown in 12100 of fig.
121 , equations similar to the above may be used to determine (xs, ys) from the determined (xe, ye). here, the total number of parameters is twelve (consistent with, e.g., a second-order mapping of the form xs = c1 + c2·xe + c3·ye + c4·xe·ye + c5·xe² + c6·ye², and similarly for ys, with six coefficients per coordinate). each point provides two equations; therefore at least six points (e.g., a1-a6) may be needed to solve for these parameters. now that the center of the cornea is known, and the position of a target point is known, a line may be drawn from the center of the cornea to the target point. the world camera 11804 has a fixed plane that takes the image, which may take the image at a fixed point in space. then another target point is displayed to the person, and the intersection plane that is virtually attached to the world camera is determined. the mapping techniques described above may be used to determine the corresponding point within that intersection plane, as described in detail above. knowing the center of the cornea, the mapping techniques described above can identify the points on the image plane virtually attached to the world cameras. given that all these points are now known, a gaze line may be built from the center of the cornea to the point on the image plane. it should be appreciated that the gaze line is built for each eye separately. referring now to fig. 122 , an example method 12200 of determining the gaze line is illustrated. first, at 12202, a center of the cornea may be determined (e.g., through the led triangulation approach described above, etc.). then, at 12204, a relationship between the eye cameras and world cameras may be determined. at 12206, a target position may be determined. finally, at 12208, mapping techniques may be utilized to build a gaze line based on all the determined information.

pseudo-random pattern

in one or more embodiments, the ar system may employ pseudo-random noise in tracking eye pose or eye movement.
for example, the head worn component of an individual ar system may include one or more light sources (e.g., leds) positioned and oriented to illuminate a user's eyes when the head worn component is worn by the user. the camera(s) detect light from the light sources which is returned from the eye(s). for example, the ar system may use purkinje images, e.g., reflections of objects from the structure of the eye. the ar system may vary a parameter of the light emitted by the light source to impose a recognizable pattern on the emitted, and hence detected, light which is reflected from the eye. for example, the ar system may pseudo-randomly vary an operating parameter of the light source to pseudo-randomly vary a parameter of the emitted light. for instance, the ar system may vary a length of emission (on/off) of the light source(s). this facilitates automated discrimination of the system's emitted and reflected light from light emitted by ambient light sources. as illustrated in fig. 101 and fig. 102 , in one implementation, light sources (e.g., leds) 10102 are positioned on a frame on one side (e.g., top) of the eye and sensors (e.g., photodiodes) are positioned on the bottom part of the frame. the eye may be seen as a reflector. notably, only one eye needs to be instrumented and tracked, since pairs of eyes tend to move in tandem. the light sources 10102 (e.g., leds) are normally turned on and off one at a time (e.g., time sliced) to produce a patterned code (e.g., amplitude variation or modulation). the ar system performs autocorrelation of signals produced by the sensor(s) (e.g., photodiode(s)) to determine a time of flight signal. in one or more embodiments, the ar system employs a known geometry of the light sources (e.g., leds), the sensor(s) (e.g., photodiodes), and the distance to the eye. the sum of vectors with the known geometry of the eye allows for eye tracking.
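a minimal sketch of the correlation step, assuming the led is keyed with a known pseudo-random on/off code and the photodiode signal is searched for the lag at which that code best matches (sequence contents and lengths are illustrative):

```python
# sketch: correlate the photodiode signal against the known transmitted
# on/off code; the best-matching lag locates the reflected pattern and
# separates it from uncorrelated ambient light.

def correlate_at(code, signal, lag):
    """dot product of the code with the signal shifted by `lag` samples."""
    return sum(c * signal[lag + i] for i, c in enumerate(code))

def detect_lag(code, signal):
    """return the lag with the highest correlation score."""
    lags = range(len(signal) - len(code) + 1)
    return max(lags, key=lambda lag: correlate_at(code, signal, lag))
```

a longer pseudo-random code lowers the correlation sidelobes at wrong lags, which is what makes the reflected pattern stand out against ambient light.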
when estimating the position of the eye, since the eye has a sclera and an eyeball, the geometry can be represented as two circles layered on top of each other. using this system 10100, the eye pointing vector can be determined or calculated with no cameras. also the eye center of rotation may be estimated since the cross section of the eye is circular and the sclera swings through a particular angle. this actually results in a vector distance because of autocorrelation of the received signal against known transmitted signal, not just ray traces. the output may be seen as a purkinje image 10200, as shown in fig. 102 , which may in turn be used to track movement of the eyes. in some implementations, the light sources may emit light in the infrared (ir) range of the electromagnetic spectrum, and the photosensors may be selectively responsive to electromagnetic energy in the ir range. in one or more embodiments, light rays are emitted toward the user's eyes as shown in the illustrated embodiment. the ar system is configured to detect one or more characteristics associated with an interaction of the light with the user's eyes (e.g., purkinje image, an extent of backscattered light detected by the photodiodes, a direction of the backscattered light, etc.). this may be captured by the photodiodes, as shown in the illustrated embodiments. one or more parameters of the interaction may be measured at the photodiodes. these parameters may in turn be used to extrapolate characteristics of eye movements or eye pose. hand tracking in one or more embodiments, the ar system may perform hand tracking via one or more user input detection devices and/or techniques. for example, the ar system may employ one or more image sensors (e.g., cameras) that are head worn and which face forward from the user's body reference frame. additionally, or alternatively, the ar system may use one or more sensors (e.g., cameras) which are not head worn or not worn on any portion of the user's body. 
for instance, the ar system may use one or more sensors (e.g., cameras, inertial sensors, gyros, accelerometers, temperature sensors or thermocouples, perspiration sensors) mounted in the physical environment (e.g., the room-based sensor systems discussed above). as another example, the ar system may rely on stereo-pairs of cameras or photo sensors. alternatively, the ar system may include one or more sources of structured light to illuminate the hands. the structured light may, or may not, be visible to the user. for example, the light sources may selectively emit in the infrared or near-infrared range of the electromagnetic spectrum. as yet a further example, the ar system may perform hand tracking via an instrumented glove, for instance similar to the haptic glove discussed herein. the ar system may optically track the haptic glove. additionally or alternatively, the ar system may use telemetry from one or more glove sensors, for example one or more internal sensors or accelerometers (e.g., mems accelerometers) located in the glove. finger gestures in some implementations, finger gestures may be used as input for the ar system. finger gestures can take a variety of forms and may, for example, be based on inter-finger interaction, pointing, tapping, rubbing, etc. other gestures may, for example, include 2d or 3d representations of characters (e.g., letters, digits, punctuation). to enter such a gesture, a user may simply swipe finger(s) in a predefined character pattern. in one implementation of a user interface, the ar system may render three circles, each circle with specifically chosen characters (e.g., letters, digits, punctuation) arranged circumferentially around the periphery. the user can swipe through the circles and letters to designate a character selection or input. in another implementation, the ar system renders a keyboard (e.g., qwerty keyboard) low in the user's field of view, proximate a position of the user's dominant hand in a bent-arm position. 
the user can then perform a swipe-like motion through desired keys, and then indicate that the swipe gesture selection is complete by performing another gesture (e.g., thumb-to-ring finger gesture) or other proprioceptive interaction. other gestures may include thumb/wheel selection type gestures, which may, for example, be used with a "popup" circular radial menu which may be rendered in a field of view of a user, according to one illustrated embodiment. referring now to fig. 103 , some additional gestures 10320 are also illustrated. it should be appreciated that the finger gestures shown in fig. 103 are for example purposes only, and other gestures may be similarly used. in the top row left-most position, a pointed index finger may indicate a command to focus, for example to focus on a particular portion of a scene or virtual content at which the index finger is pointed. for example, gesture 10322 shows a gesture for a "focus" command consisting of a pointed index finger. the ar system may recognize the gesture (e.g., through the captured image/video of the finger, through sensors if a haptic glove is used, etc.) and perform the desired action. in the top row middle position, a first pinch gesture with the tip of the index finger touching a tip of the thumb to form a closed circle may indicate a grab and/or copy command. as shown in fig. 103 , the user may press the index and thumb fingers together to "pinch" or grab one part of the user interface to another (e.g., gesture 10324). for example, the user may use this gesture to copy or move an icon (e.g., an application) from one part of the virtual user interface to another. in the top row right-most position, a second pinch gesture with the tip of the ring finger touching a tip of the thumb to form a closed circle may indicate a select command. similarly, a "select" gesture may comprise pressing of the user's thumb with the ring finger, in one or more embodiments, as shown by gesture 10326 in fig. 103 . 
for example, the user may use this gesture to select a particular document, or perform some type of ar command. in the bottom row left-most position, a third pinch gesture with the tip of the pinkie finger touching a tip of the thumb to form a closed circle may indicate a back and/or cancel command. gesture 10330 shows an example "back/cancel" gesture that involves pressing together of the pinky finger and the thumb. in the bottom row middle position, a gesture in which the ring and middle fingers are curled with the tip of the ring finger touching a tip of the thumb may indicate a click and/or menu command. gesture 10332 (e.g., pressing together of the thumb with the middle finger and the ring finger) may be used for a "right click" command or to signify to the system to go back to the "main menu." in one or more embodiments, the user may simply hit a "home space" button on the ar system visor to go back to a home page (e.g., 10334). in the bottom row right-most position, touching the tip of the index finger to a location on the head worn component or frame may indicate a return to home command. this may cause the ar system to return to a home or default configuration, for example displaying a home or default menu. as shown in fig. 103 , the ar system recognizes various commands, and in response to these commands, performs certain functions that are mapped to the commands. the mapping of gestures to commands may be universally defined, across many users, facilitating development of various applications which employ at least some commonality in user interfaces. alternatively or additionally, users or developers may define a mapping between at least some of the gestures and corresponding commands to be executed by the ar system in response to detection of the commands. totems the ar system may detect or capture a user's interaction via tracking (e.g., visual tracking) of a totem. 
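the gesture-to-command mapping described above, with a universal default layered under optional user- or developer-defined overrides, can be modeled as a simple lookup. the gesture and command names below are illustrative assumptions chosen to match fig. 103, not identifiers from this description.

```python
# universal default mapping, shared across users (mirroring fig. 103)
DEFAULT_GESTURE_COMMANDS = {
    "point_index": "focus",
    "pinch_index_thumb": "grab_copy",
    "pinch_ring_thumb": "select",
    "pinch_pinky_thumb": "back_cancel",
    "pinch_middle_ring_thumb": "right_click_menu",
    "touch_frame": "home",
}

def resolve_command(gesture, user_overrides=None):
    """User- or developer-defined overrides take precedence over the defaults."""
    if user_overrides and gesture in user_overrides:
        return user_overrides[gesture]
    return DEFAULT_GESTURE_COMMANDS.get(gesture)
```

keeping the defaults universal preserves commonality across applications, while the override table gives individual users or developers the remapping the text allows for.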
the totem is a predefined physical object that is recognized by the system, and may be used to communicate with the ar system. any suitable existing physical structure can be used as a totem. for example, in gaming applications, a game object (e.g., tennis racket, gun controller, etc.) can be recognized as a totem. one or more feature points can be recognized on the physical structure, providing a context to identify the physical structure as a totem. visual tracking can be performed of the totem, employing one or more cameras to detect a position, orientation, and/or movement (e.g., position, direction, distance, speed, acceleration) of the totem with respect to some reference frame (e.g., reference frame of a piece of media, the real world, physical room, user's body, user's head). actively marked totems comprise some sort of active lighting or other form of visual identification. examples of such active marking include (a) flashing lights (e.g., leds); (b) lighted pattern groups; (c) reflective markers highlighted by lighting; (d) fiber-based lighting; (e) static light patterns; and/or (f) dynamic light patterns. light patterns can be used to uniquely identify specific totems among multiple totems. passively marked totems comprise non-active lighting or identification means. examples of such passively marked totems include textured patterns and reflective markers. the totem can also incorporate one or more cameras/sensors, so that no external equipment is needed to track the totem. instead, the totem will track itself and will provide its own location, orientation, and/or identification to other devices. the on-board cameras are used to visually check for feature points, to perform visual tracking to detect a position, orientation, and/or movement (e.g., position, direction, distance, speed, acceleration) of the totem itself and with respect to a reference frame. 
in addition, sensors mounted on the totem (such as a gps sensor or accelerometers) can be used to detect the position and location of the totem. a totem controller object is a device that can be mounted to any physical structure, and which incorporates functionality to facilitate tracking/identification of the totem. this allows any physical structure to become a totem merely by placing or affixing the totem controller object to that physical structure. the totem controller object may be a powered object that includes a battery to power electronics on the object. the totem controller object may include communications, e.g., wireless communications infrastructure such as an antenna and wireless networking modem, to exchange messages with other devices. the totem controller object may also include any active marking (such as leds or fiber-based lighting), passive marking (such as reflectors or patterns), or cameras/sensors (such as cameras, gps locator, or accelerometers). totems may be used in order to provide a virtual user interface, in one or more embodiments. the ar system may, for example, render a virtual user interface to appear on the totem. the totem may take a large variety of forms. for example, the totem may be an inanimate object. for instance, the totem may take the form of a piece or sheet of metal (e.g., aluminum). a processor component of an individual ar system, for instance a belt pack, may serve as a totem. the ar system may, for example, replicate a user interface of an actual physical device (e.g., keyboard and/or trackpad of a computer, a mobile phone) on a "dumb" totem. as an example, the ar system may render the user interface of a particular operation system of a phone onto a surface of an aluminum sheet. the ar system may detect interaction with the rendered virtual user interface, for instance via a front facing camera, and implement functions based on the detected interactions. 
for example, the ar system may implement one or more virtual actions, for instance render an updated display of the operating system of the phone, render video, render display of a webpage. additionally or alternatively, the ar system may implement one or more actual or non-virtual actions, for instance send email, send text, and/or place a phone call. this may allow a user to select a desired user interface to interact with from a set of actual physical devices, for example various models of smartphones and/or tablets, or other smartphones, tablets, or even other types of appliances which have user interfaces such as televisions, dvd/blu-ray players, thermostats, etc. thus a totem may be any object on which virtual content can be rendered, including for example a body part (e.g., hand) to which virtual content can be locked in a user experience (ux) context. in some implementations, the ar system can render virtual content so as to appear to be coming out from behind a totem, for instance appearing to emerge from behind a user's hand, and slowly wrapping at least partially around the user's hand. the ar system detects user interaction with the virtual content, for instance user finger manipulation with the virtual content which is wrapped partially around the user's hand. alternatively, the ar system may render virtual content so as to appear to emerge from a palm of the user's hand, and the system may detect a user's fingertip interaction and/or manipulation of that virtual content. thus, the virtual content may be locked to a reference frame of a user's hand. the ar system may be responsive to various user interactions or gestures, including looking at some item of virtual content, moving hands, touching hands to themselves or to the environment, other gestures, opening and/or closing eyes, etc. 
as described herein, the ar system may employ body-centered rendering, user-centered rendering, hand-centered rendering, hip-centered rendering, world-centered rendering, proprioceptive tactile interactions, pointing, eye vectors, totems, object recognizers, body sensor rendering, head pose detection, voice input, environment or ambient sound input, and environment situation input to interact with the user of the ar system. fig. 104 shows a totem according to one illustrated embodiment, which may be used as part of a virtual keyboard 10422 implementation. the totem may have a generally rectangular profile and a soft durometer surface. the soft surface provides some tactile perception to a user as the user interacts with the totem via touch. as described above, the ar system may render the virtual keyboard image in a user's field of view, such that the virtual keys, switches or other user input components appear to reside on the surface of the totem. the ar system may, for example, render a 4d light field which is projected directly to a user's retina. the 4d light field allows the user to visually perceive the virtual keyboard with what appears to be real depth. the ar system may also detect or capture the user's interaction with the surface of the totem. for example, the ar system may employ one or more front facing cameras to detect a position and/or movement of a user's fingers. in particular, the ar system may identify from the captured images any interactions of the user's fingers with various portions of the surface of the totem. the ar system maps the locations of those interactions to the positions of virtual keys, and hence to various inputs (e.g., characters, numbers, punctuation, controls, functions). in response to the inputs, the ar system may cause the inputs to be provided to a computer or some other device. 
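the step of mapping a detected fingertip location on the totem surface to a virtual key can be sketched as a grid lookup in normalized surface coordinates. the three-row qwerty layout and the function name `key_at` are illustrative assumptions; the actual rendered layout could differ.

```python
# illustrative virtual-keyboard hit test: map a fingertip position detected
# on the totem surface (normalized 0..1 coordinates) to a rendered key
KEY_ROWS = ["qwertyuiop", "asdfghjkl", "zxcvbnm"]

def key_at(x, y):
    """Return the virtual key under (x, y), or None if off the key grid."""
    if not (0.0 <= x < 1.0 and 0.0 <= y < 1.0):
        return None
    row = KEY_ROWS[int(y * len(KEY_ROWS))]
    return row[int(x * len(row))]
```

since the keyboard is purely rendered, remapping the layout is just a change to this table, with no physical keys or switches involved.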
additionally or alternatively, the ar system may render the virtual user interface differently in response to selected user interactions. for instance, some user interactions may correspond to selection of a particular submenu, application or function. the ar system may respond to such selection by rendering a new set of virtual interface elements, based at least in part on the selection. for instance, the ar system may render a submenu or a menu or other virtual interface element associated with the selected application or functions. thus, rendering by the ar system may be context sensitive. fig. 105a shows a top surface of a totem according to one illustrated embodiment, which may be used as part of a virtual mouse implementation 10502. the top surface of the totem may have a generally ovoid profile, with a hard surface portion, and one or more soft surface portions to replicate keys of a physical mouse. the soft surface portions do not actually need to implement switches, and the totem may have no physical keys, physical switches or physical electronics. the soft surface portion(s) provides some tactile perception to a user as the user interacts with the totem via touch. the ar system may render the virtual mouse image 10502 in a user's field of view, such that the virtual input structures (e.g., keys, buttons, scroll wheels, joystick, thumbstick, etc.) appear to reside on the top surface of the totem. as discussed above, the ar system may, for example, render a 4d light field which is projected directly to a user's retina to provide the visual perception of the virtual mouse with what appears to be real depth. the ar system may also detect or capture movement of the totem by the user, as well as user interaction with the surface of the totem. for example, the ar system may employ one or more front-facing cameras to detect a position and/or movement of the mouse and/or interaction of a user's fingers with the virtual input structures (e.g., keys). 
the ar system maps the position and/or movement of the mouse. the ar system maps user interactions with the positions of virtual input structures (e.g., keys), and hence with various inputs (e.g., controls, functions). in response to the position, movements and/or virtual input structure activations, the ar system may cause corresponding inputs to be provided to a computer or some other device. additionally or alternatively, the ar system may render the virtual user interface differently in response to select user interactions. for instance, some user interactions may correspond to selection of a particular submenu, application or function. the ar system may respond to such selection by rendering a new set of virtual interface elements, based at least in part on the selection. for instance, the ar system may render a submenu or a menu or other virtual interface element associated with the selected application or functions, as discussed above. fig. 105b shows a bottom surface 10504 of the totem of fig. 105a , according to one illustrated embodiment, which may be used as part of a virtual trackpad implementation. the bottom surface of the totem may be flat with a generally oval or circular profile. the bottom surface may be a hard surface. the totem may have no physical input structures (e.g., keys, buttons, scroll wheels), no physical switches and no physical electronics. the ar system may optionally render a virtual trackpad image in a user's field of view, such that the virtual demarcations appear to reside on the bottom surface of the totem. the ar system detects or captures a user's interaction with the bottom surface of the totem. for example, the ar system may employ one or more front-facing cameras to detect a position and/or movement of a user's fingers on the bottom surface of the totem. 
for instance, the ar system may detect one or more static positions of one or more fingers, or a change in position of one or more fingers (e.g., swiping gesture with one or more fingers, pinching gesture using two or more fingers). the ar system may also employ the front-facing camera(s) to detect interactions (e.g., tap, double tap, short tap, long tap) of a user's fingers with the bottom surface of the totem. the ar system maps the position and/or movement (e.g., distance, direction, speed, acceleration) of the user's fingers along the bottom surface of the totem. the ar system maps user interactions (e.g., number of interactions, types of interactions, duration of interactions) with the bottom surface of the totem, and hence with various inputs (e.g., controls, functions). in response to the position, movements and/or interactions, the ar system may cause corresponding inputs to be provided to a computer or some other device. fig. 105c shows a top surface of a totem 10506 according to another illustrated embodiment, which may be used as part of a virtual mouse implementation. the totem of fig. 105c is similar in many respects to that of the totem of fig. 105a . hence, similar or even identical structures are identified with the same reference numbers. the top surface of the totem of fig. 105c includes one or more indents or depressions at one or more respective locations on the top surface where the ar system will render keys or cause other structures (e.g., scroll wheel) to appear. fig. 106a shows an orb totem 10602 with a flower petal-shaped (e.g., lotus flower) virtual user interface 10604 according to another illustrated embodiment. the totem 10602 may have a spherical shape with either a hard outer surface or a soft outer surface. the outer surface of the totem 10602 may have texture to facilitate a sure grip by the user. the totem 10602 may have no physical keys, physical switches or physical electronics. 
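the interaction types listed above (tap, double tap, long tap, swipe) can be distinguished from two measurements per touch: contact duration and fingertip displacement. the thresholds below are illustrative assumptions in normalized surface units and seconds, not values from this description.

```python
SWIPE_MIN_DIST = 0.1      # assumed displacement threshold (normalized units)
LONG_TAP_MIN_S = 0.6      # assumed duration threshold for a long tap
DOUBLE_TAP_GAP_S = 0.4    # assumed max gap between two taps of a double tap

def classify_touch(duration_s, start, end):
    """Classify one touch from its duration and start/end fingertip positions."""
    dx, dy = end[0] - start[0], end[1] - start[1]
    if (dx * dx + dy * dy) ** 0.5 >= SWIPE_MIN_DIST:
        return "swipe"
    return "tap" if duration_s < LONG_TAP_MIN_S else "long_tap"

def is_double_tap(t_prev_tap, t_now):
    """Two taps close enough in time count as a double tap."""
    return (t_now - t_prev_tap) <= DOUBLE_TAP_GAP_S
```

double taps require touch history, so they are detected across consecutive classified taps rather than from a single contact.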
the ar system may render the flower petal-shaped virtual user interface image 10604 in a user's field of view, so as to appear to be emanating from the totem 10602. each of the petals of the virtual user interface 10604 may correspond to a function, category of functions, and/or category of content or media types, tools and/or applications. the ar system may optionally render one or more demarcations on the outer surface of the totem. alternatively or additionally, the totem 10602 may optionally bear one or more physical demarcations (e.g., printed, inscribed) on the outer surface. the demarcation(s) may assist the user in visually orienting the totem 10602 with the flower petal-shaped virtual user interface 10604. in one or more embodiments, the ar system detects or captures a user's interaction with the totem 10602. for example, the ar system may employ one or more front facing cameras to detect a position, orientation, and/or movement (e.g., rotational direction, magnitude of rotation, angular speed, angular acceleration) of the totem with respect to some reference frame (e.g., reference frame of the flower petal-shaped virtual user interface, real world, physical room, user's body, user's head). for instance, the ar system may detect one or more static orientations or a change in orientation of the totem 10602 or a demarcation on the totem 10602. the ar system may also employ the front facing camera(s) to detect interactions (e.g., tap, double tap, short tap, long tap, fingertip grip, enveloping grasp, etc.) of a user's fingers with outer surface of the totem. the ar system maps the orientation and/or change in orientation (e.g., distance, direction, speed, acceleration) of the totem to user selections or inputs. the ar system optionally maps user interactions (e.g., number of interactions, types of interactions, duration of interactions) with the outer surface of the totem 10602, and hence with various inputs (e.g., controls, functions). 
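the mapping from the totem's tracked rotation (e.g., of a demarcation) to a selection on the flower petal-shaped interface can be sketched as quantizing an angle into petal sectors. the petal count of eight and the function name are illustrative assumptions.

```python
def petal_for_angle(angle_deg, n_petals=8):
    """Map the totem's tracked rotation angle to the petal it points at.

    petal 0 is centered at 0 degrees; petals are spaced evenly around the
    flower petal-shaped user interface (n_petals is an assumed count).
    """
    width = 360.0 / n_petals
    return int(((angle_deg % 360.0) + width / 2.0) // width) % n_petals
```

centering petal 0 on zero degrees (rather than starting its edge there) keeps small tracking jitter around a demarcation's rest orientation within one petal.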
in response to the orientations, changes in position (e.g., movements) and/or interactions, the ar system may cause corresponding inputs to be provided to a computer or some other device. additionally or alternatively, and as discussed above, the ar system may render the virtual user interface 10604 differently in response to various user interactions. for instance, some user interactions may correspond to selection of a particular submenu, application or function. the ar system may respond to such selection by rendering a new set of virtual interface elements, based at least in part on the selection. for instance, the ar system may render a submenu or a menu or other virtual interface element associated with the selected application or functions. referring now to fig. 106b , the totem 10606 is disc shaped. similar to the user interface 10604 of fig. 106a , a flower-petal shaped virtual user interface 10604 is rendered when the totem 10606 is selected, in some embodiments. the totem of fig. 106b is disc-shaped, having a top surface and bottom surface which may be flat or domed, as illustrated in fig. 106b . that is, a radius of curvature may be infinite or much larger than a radius of curvature of a peripheral edge of the totem. the ar system renders the flower petal-shaped virtual user interface 10604 image in a user's field of view, so as to appear to be emanating from the totem 10606. as noted above, each of the petals may correspond to a function, category of functions, and/or category of content or media types, tools and/or applications. fig. 106b represents a number of examples, including a search function, settings functions, collection of favorites, profiles, collection of games, collection of tools and/or applications, social media or application category, media or content category or collection (e.g., entertainment, electronic magazines, electronic books, other publications, movies, television programs, etc.). fig. 
106c shows an orb totem 10608 in a first configuration 10610 and a second configuration 10612, according to another illustrated embodiment. in particular, the totem 10608 has a number of arms or elements which are selectively moveable or positionable with respect to each other. for example, a first arm or pair of arms may be rotated with respect to a second arm or pair of arms. the first arm or pair of arms may be rotated from a first configuration 10610 to a second configuration 10612. where the arms are generally arcuate, as illustrated, in the first configuration 10610 the arms form an orb or generally spherical structure. in the second configuration 10612, the second arm or pair of arms aligns with the first arm or pair of arms to form a partial tube with a c-shaped profile, as shown in the illustrated embodiment. the arms may have an inner diameter size large enough to receive a wrist or other limb of a user, in one or more embodiments. the inner diameter may be sized small enough to prevent the totem 10608 from sliding off the limb during use. for example, the inner diameter may be sized to comfortably receive a wrist of a user, while not sliding past a hand of the user. this allows the totem 10608 to take the form of a bracelet, for example when not in use, for convenient carrying. a user may then form an orb shape for use, in a fashion similar to the orb totems described above. the totem may have no physical keys, physical switches or physical electronics. notably, the virtual user interface (such as virtual user interface 10604 shown in figs. 106a and 106b ) is omitted from fig. 106c . the ar system may render a virtual user interface in any of a large variety of forms, for example the flower petal-shaped virtual user interface 10604 previously illustrated and discussed. fig. 107a shows a handheld controller shaped totem 10702, according to another illustrated embodiment. 
the totem 10702 has a gripping section sized to fit comfortably in a user's hand. the totem 10702 may include a number of user input elements, for example a key or button and a scroll wheel. the user input elements may be physical elements, although not connected to any sensor or switches in the totem 10702, which itself may have no physical switches or physical electronics. alternatively, the user input elements may be virtual elements rendered by the ar system. it should be appreciated that the totem 10702 may have depressions, cavities, protrusions, textures or other structures to tactilely replicate a feel of the user input element. the ar system detects or captures a user's interaction with the user input elements of the totem 10702. for example, the ar system may employ one or more front-facing cameras to detect a position and/or movement of a user's fingers with respect to the user input elements of the totem 10702. for instance, the ar system may detect one or more static positions of one or more fingers, or a change in position of one or more fingers (e.g., swiping or rocking gesture with one or more fingers, rotating or scrolling gesture, or both). the ar system may also employ the front facing camera(s) to detect interactions (e.g., tap, double tap, short tap, long tap) of a user's fingers with the user input elements of the totem 10702. the ar system maps the position and/or movement (e.g., distance, direction, speed, acceleration) of the user's fingers with the user input elements of the totem 10702. the ar system maps user interactions (e.g., number of interactions, types of interactions, duration of interactions) of the user's fingers with the user input elements of the totem 10702, and hence with various inputs (e.g., controls, functions). in response to the position, movements and/or interactions, the ar system may cause corresponding inputs to be provided to a computer or some other device. fig. 
107b shows a block shaped totem 10704, according to another illustrated embodiment. the totem 10704 may have the shape of a cube with six faces, or some other three-dimensional geometric structure. the totem 10704 may have a hard outer surface or a soft outer surface. the outer surface of the totem 10704 may have texture to facilitate a sure grip by the user. the totem 10704 may have no physical keys, physical switches or physical electronics. the ar system may render a virtual user interface image in a user's field of view, so as to appear to be on the face(s) of the outer surface of the totem 10704, in one or more embodiments. each of the faces, and corresponding user input, may correspond to a function, category of functions, and/or category of content or media types, tools and/or applications. the ar system detects or captures a user's interaction with the totem 10704. for example, the ar system may employ one or more front-facing cameras to detect a position, orientation, and/or movement (e.g., rotational direction, magnitude of rotation, angular speed, angular acceleration) of the totem 10704 with respect to some reference frame (e.g., reference frame of the real world, physical room, user's body, user's head, etc.). for instance, the ar system may detect one or more static orientations or a change in orientation of the totem 10704. the ar system may also employ the front-facing camera(s) to detect interactions (e.g., tap, double tap, short tap, long tap, fingertip grip, enveloping grasp, etc.) of a user's fingers with outer surface of the totem 10704. the ar system maps the orientation and/or change in orientation (e.g., distance, direction, speed, acceleration) of the totem 10704 to user selections or inputs. the ar system optionally maps user interactions (e.g., number of interactions, types of interactions, duration of interactions) with the outer surface of the totem 10704, and hence with various inputs (e.g., controls, functions). 
in response to the orientations, changes in position (e.g., movements) and/or interactions, the ar system may cause corresponding inputs to be provided to a computer or some other device, and may change one or more aspects of the rendering of the virtual user interface. for example, as a user rotates the totem 10704, different faces may come into the user's field of view, while other faces rotate out of the user's field of view. the ar system may respond by rendering virtual interface elements to appear on the now visible faces, which were previously hidden from the view of the user. likewise, the ar system may respond by stopping the rendering of virtual interface elements which would otherwise appear on the faces now hidden from the view of the user. additionally or alternatively, the ar system may render the virtual user interface differently in response to selected user interactions. for instance, some user interactions may correspond to selection of a particular submenu, application or function. the ar system may respond to such selection by rendering a new set of virtual interface elements, based at least in part on the selection. for instance, the ar system may render a submenu or a menu or other virtual interface element associated with the selected application or functions. fig. 107c shows a handheld controller shaped totem 10706, according to another illustrated embodiment. the totem 10706 has a gripping section sized to fit comfortably in a user's hand, for example a cylindrically tubular portion. the totem 10706 may include a number of user input elements, for example a number of pressure sensitive switches and a joystick or thumbstick. 
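the face-visibility behavior for the cube totem can be sketched by rotating each face's outward normal into the viewer's frame and keeping the faces whose normals point back toward the camera. the frame conventions and the name `visible_faces` are illustrative assumptions.

```python
import numpy as np

# outward unit normals of the six cube faces in the totem's body frame
FACE_NORMALS = {
    "+x": np.array([1.0, 0.0, 0.0]), "-x": np.array([-1.0, 0.0, 0.0]),
    "+y": np.array([0.0, 1.0, 0.0]), "-y": np.array([0.0, -1.0, 0.0]),
    "+z": np.array([0.0, 0.0, 1.0]), "-z": np.array([0.0, 0.0, -1.0]),
}

def visible_faces(rotation, view_dir=np.array([0.0, 0.0, -1.0])):
    """Faces whose rotated normal points back toward the viewer.

    `rotation` is a 3x3 matrix from the totem's body frame to the viewer's
    frame; `view_dir` is the direction the camera looks along.
    """
    return sorted(name for name, n in FACE_NORMALS.items()
                  if (rotation @ n) @ view_dir < 0.0)
```

as the tracked rotation updates, faces leaving this set stop receiving virtual interface elements and faces entering it start receiving them, matching the behavior described for the rotating totem.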
the user input elements may be physical elements, although not connected to any sensor or switches in the totem 10706, which itself may have no physical switches or physical electronics. alternatively, the user input elements may be virtual elements rendered by the ar system. where the user input elements are virtual elements, the totem 10706 may have depressions, cavities, protrusions, textures or other structures to tactilely replicate the feel of the user input element. the ar system detects or captures a user's interaction with the user input elements of the totem 10706. for example, the ar system may employ one or more front-facing cameras to detect a position and/or movement of a user's fingers with respect to the user input elements of the totem 10706. for instance, the ar system may detect one or more static positions of one or more fingers, or a change in position of one or more fingers (e.g., swiping or rocking gesture with one or more fingers, rotating or scrolling gesture, or both). the ar system may also employ the front-facing camera(s) to detect interactions (e.g., tap, double tap, short tap, long tap) of a user's fingers with the user input elements of the totem 10706. as discussed above, the ar system maps the position and/or movement (e.g., distance, direction, speed, acceleration) of the user's fingers with the user input elements of the totem 10706. the ar system maps user interactions (e.g., number of interactions, types of interactions, duration of interactions) of the user's fingers with the user input elements of the totem 10706, and hence with various inputs (e.g., controls, functions). in response to the position, movements and/or interactions, the ar system may cause corresponding inputs to be provided to a computer or some other device. fig. 107d shows another handheld controller shaped totem 10708, according to another illustrated embodiment. the totem 10708 has a gripping section sized to comfortably fit in a user's hand.
the totem 10708 may include a number of user input elements, for example a key or button and a joystick or thumbstick. the user input elements may be physical elements, although not connected to any sensor or switches in the totem 10708, which itself may have no physical switches or physical electronics. alternatively, the user input elements may be virtual elements rendered by the ar system. in one or more embodiments, the totem 10708 may have depressions, cavities, protrusions, textures or other structures to tactilely replicate the feel of the user input element. the ar system detects or captures a user's interaction with the user input elements of the totem 10708. for example, the ar system may employ one or more front-facing cameras to detect a position and/or movement of a user's fingers with respect to the user input elements of the totem 10708. for instance, the ar system may detect one or more static positions of one or more fingers, or a change in position of one or more fingers (e.g., swiping or rocking gesture with one or more fingers, rotating or scrolling gesture, or both). similar to the above, the ar system may also employ the front-facing camera(s) to detect interactions (e.g., tap, double tap, short tap, long tap) of a user's fingers with the user input elements of the totem. the ar system maps the position and/or movement (e.g., distance, direction, speed, acceleration) of the user's fingers with the user input elements of the totem 10708. the ar system maps user interactions (e.g., number of interactions, types of interactions, duration of interactions) of the user's fingers with the user input elements of the totem 10708, and hence with various inputs (e.g., controls, functions). in response to the position, movements and/or interactions, the ar system may cause corresponding inputs to be provided to a computer or some other device. fig. 108a shows a ring totem 10802, according to one illustrated embodiment.
in particular, the ring totem 10802 has a tubular portion and an interaction portion physically coupled to the tubular portion. the tubular and interaction portions may be integral, and may be formed as or from a single unitary structure. the tubular portion has an inner diameter sized large enough to receive a finger of a user. the inner diameter may be sized small enough to prevent the totem 10802 from sliding off the finger during normal use. this allows the ring totem 10802 to be comfortably worn even when not in active use, ensuring availability when needed. the ring totem 10802 may have no physical keys, physical switches or physical electronics. notably, the virtual user interface (e.g., 10604 shown in figs. 106a and 106b ) is omitted. the ar system may render a virtual user interface in any of a large variety of forms. for example, the ar system may render a virtual user interface in the user's field of view so as to appear as if the virtual user interface element(s) reside on the interaction surface. alternatively, the ar system may render a virtual user interface as the flower petal-shaped virtual user interface 10604 previously illustrated and discussed, emanating from the interaction surface. similar to the above, the ar system detects or captures a user's interaction with the totem 10802. for example, the ar system may employ one or more front-facing cameras to detect a position, orientation, and/or movement (e.g., position, direction, distance, speed, acceleration) of the user's finger(s) with respect to the interaction surface in some reference frame (e.g., reference frame of the interaction surface, real world, physical room, user's body, user's head). for instance, the ar system may detect one or more locations of touches or a change in position of a finger on the interaction surface.
again, as discussed above, the ar system may also employ the front-facing camera(s) to detect interactions (e.g., tap, double tap, short tap, long tap, fingertip grip, enveloping grasp) of a user's fingers with the interaction surface of the totem 10802. the ar system maps the position, orientation, and/or movement of the finger with respect to the interaction surface to a set of user selections or inputs. the ar system optionally maps other user interactions (e.g., number of interactions, types of interactions, duration of interactions) with the interaction surface of the totem 10802, and hence with various inputs (e.g., controls, functions). in response to the position, orientation, movement, and/or other interactions, the ar system may cause corresponding inputs to be provided to a computer or some other device. additionally or alternatively, as discussed above, the ar system may render the virtual user interface differently in response to select user interactions. for instance, some user interactions may correspond to selection of a particular submenu, application or function. the ar system may respond to such selection by rendering a new set of virtual interface elements, based at least in part on the selection. for instance, the ar system may render a submenu or a menu or other virtual interface element associated with the selected application or functions. fig. 108b shows a bracelet totem 10804, according to one illustrated embodiment. in particular, the bracelet totem 10804 has a tubular portion and a touch surface physically coupled to the tubular portion. the tubular portion and touch surface may be integral, and may be formed as or from a single unitary structure. the tubular portion has an inner diameter sized large enough to receive a wrist or other limb of a user. the inner diameter may be sized small enough to prevent the totem 10804 from sliding off the limb during use.
for example, the inner diameter may be sized to comfortably receive a wrist of a user, while not sliding past a hand of the user. this allows the bracelet totem 10804 to be worn whether in active use or not, ensuring availability when desired. the bracelet totem 10804 may have no physical keys, physical switches or physical electronics. the ar system may render a virtual user interface in any of a large variety of forms. for example, the ar system may render a virtual user interface in the user's field of view so as to appear as if the virtual user interface element(s) reside on the touch surface. alternatively, the ar system may render a virtual user interface similar to the flower petal-shaped virtual user interface 10604 previously illustrated and discussed, emanating from the touch surface. the ar system detects or captures a user's interaction with the totem 10804. for example, the ar system may employ one or more front-facing cameras to detect a position, orientation, and/or movement (e.g., position, direction, distance, speed, acceleration) of the user's finger(s) with respect to the touch surface of the totem in some reference frame (e.g., reference frame of the touch surface, real world, physical room, user's body, user's head). for instance, the ar system may detect one or more locations of touches or a change in position of a finger on the touch surface. as discussed above, the ar system may also employ the front-facing camera(s) to detect interactions (e.g., tap, double tap, short tap, long tap, fingertip grip, enveloping grasp) of a user's fingers with the touch surface of the totem 10804. the ar system maps the position, orientation, and/or movement of the finger with respect to the touch surface to a set of user selections or inputs.
the ar system optionally maps other user interactions (e.g., number of interactions, types of interactions, duration of interactions) with the touch surface of the totem 10804, and hence with various inputs (e.g., controls, functions). in response to the position, orientation, movement, and/or other interactions, the ar system may cause corresponding inputs to be provided to a computer or some other device. additionally or alternatively, as discussed above, the ar system may render the virtual user interface differently in response to select user interactions. for instance, some user interactions may correspond to selection of a particular submenu, application or function. the ar system may respond to such selection by rendering a new set of virtual interface elements, based at least in part on the selection. for instance, the ar system may render a submenu or a menu or other virtual interface element associated with the selected application or functions. fig. 108c shows a ring totem 10806, according to another illustrated embodiment. in particular, the ring totem 10806 has a tubular portion and an interaction portion physically rotatably coupled to the tubular portion to rotate with respect thereto. the tubular portion has an inner diameter sized large enough to receive a finger of a user therethrough. the inner diameter may be sized small enough to prevent the totem from sliding off the finger during normal use. this allows the ring totem to be comfortably worn even when not in active use, ensuring availability when needed. the interaction portion may itself be a closed tubular member, having a respective inner diameter received about an outer diameter of the tubular portion. for example, the interaction portion may be journaled or slidably mounted to the tubular portion. the interaction portion is accessible from an exterior surface of the ring totem.
the interaction portion may, for example, be rotatable in a first rotational direction about a longitudinal axis of the tubular portion. the interaction portion may additionally be rotatable in a second rotational direction, opposite the first rotational direction, about the longitudinal axis of the tubular portion. the ring totem 10806 may have no physical switches or physical electronics. the ar system may render a virtual user interface in any of a large variety of forms. for example, the ar system may render a virtual user interface in the user's field of view so as to appear as if the virtual user interface element(s) reside on the interaction portion. alternatively, the ar system may render a virtual user interface similar to the flower petal-shaped virtual user interface previously illustrated and discussed, emanating from the interaction portion. similar to the above, the ar system detects or captures a user's interaction with the totem. for example, the ar system may employ one or more front-facing cameras to detect a position, orientation, and/or movement (e.g., position, direction, distance, speed, acceleration) of the interaction portion with respect to the tubular portion (e.g., finger receiving portion) in some reference frame (e.g., reference frame of the tubular portion, real world, physical room, user's body, user's head). for instance, the ar system may detect one or more locations or orientations or changes in position or orientation of the interaction portion with respect to the tubular portion. the ar system may also employ the front-facing camera(s) to detect interactions (e.g., tap, double tap, short tap, long tap, fingertip grip, enveloping grasp) of a user's fingers with the interaction portion of the totem. the ar system maps the position, orientation, and/or movement of the interaction portion with respect to the tubular portion to a set of user selections or inputs.
the ar system optionally maps other user interactions (e.g., number of interactions, types of interactions, duration of interactions) with the interaction portion of the totem, and hence with various inputs (e.g., controls, functions). in response to the position, orientation, movement, and/or other interactions, the ar system may cause corresponding inputs to be provided to a computer or some other device. additionally or alternatively, as discussed above, the ar system may render the virtual user interface differently in response to select user interactions. for instance, some user interactions may correspond to selection of a particular submenu, application or function. the ar system may respond to such selection by rendering a new set of virtual interface elements, based at least in part on the selection. fig. 109a shows a glove-shaped haptic totem 10902, according to one illustrated embodiment. in particular, the glove-shaped haptic totem 10902 is shaped like a glove or partial glove, having an opening for receiving a wrist and one or more tubular glove fingers (three shown) sized to receive a user's fingers. the glove-shaped haptic totem 10902 may be made of one or more of a variety of materials. the materials may be elastomeric or may otherwise conform to the shape or contours of a user's hand, providing a snug but comfortable fit. the ar system may render a virtual user interface in any of a large variety of forms. for example, the ar system may render a virtual user interface in the user's field of view so as to appear as if the virtual user interface element(s) are interactable via the glove-shaped haptic totem 10902. for example, the ar system may render a virtual user interface as one of the previously illustrated and/or described totems or virtual user interfaces. similar to the above, the ar system detects or captures a user's interaction via visual tracking of the user's hand and fingers on which the glove-shaped haptic totem 10902 is worn.
for example, the ar system may employ one or more front-facing cameras to detect a position, orientation, and/or movement (e.g., position, direction, distance, speed, acceleration) of the user's hand and/or finger(s) with respect to some reference frame (e.g., reference frame of the touch surface, real world, physical room, user's body, user's head). similar to the above embodiments, for instance, the ar system may detect one or more locations of touches or a change in position of a hand and/or fingers. the ar system may also employ the front-facing camera(s) to detect interactions (e.g., tap, double tap, short tap, long tap, fingertip grip, enveloping grasp) of a user's hands and/or fingers. notably, the ar system may track the glove-shaped haptic totem 10902 instead of the user's hands and fingers. the ar system maps the position, orientation, and/or movement of the hand and/or fingers to a set of user selections or inputs. the ar system optionally maps other user interactions (e.g., number of interactions, types of interactions, duration of interactions), and hence with various inputs (e.g., controls, functions). in response to the position, orientation, movement, and/or other interactions, the ar system may cause corresponding inputs to be provided to a computer or some other device. additionally or alternatively, as discussed above, the ar system may render the virtual user interface differently in response to select user interactions. for instance, some user interactions may correspond to selection of a particular submenu, application or function. the ar system may respond to such selection by rendering a new set of virtual interface elements, based at least in part on the selection. for instance, the ar system may render a submenu or a menu or other virtual interface element associated with the selected application or functions.
the glove-shaped haptic totem 10902 includes a plurality of actuators, which are responsive to signals to provide haptic sensations such as pressure and texture. the actuators may take any of a large variety of forms, for example piezoelectric elements, and/or micro electrical mechanical structures (mems). the ar system provides haptic feedback to the user via the glove-shaped haptic totem 10902. in particular, the ar system provides signals to the glove-shaped haptic totem 10902 to replicate a sensory sensation of interacting with a physical object which a virtual object may represent. such may include providing a sense of pressure and/or texture associated with a physical object. thus, the ar system may cause a user to feel a presence of a virtual object, for example including various structural features of the physical object such as edges, corners, roundness, etc. the ar system may also cause a user to feel textures such as smooth, rough, dimpled, etc. fig. 109b shows a stylus or brush shaped totem 10904, according to one illustrated embodiment. the stylus or brush shaped totem 10904 includes an elongated handle, similar to that of any number of conventional styluses or brushes. in contrast to a conventional stylus or brush, the stylus or brush 10904 has a virtual tip or bristles. in particular, the ar system may render a desired style of virtual tip or virtual bristle to appear at an end of the physical stylus or brush 10904. the tip or bristle may take any conventional style including narrow or wide points, flat bristle brushes, tapered, slanted or cut bristle brushes, natural fiber bristle brushes (e.g., horse hair), artificial fiber bristle brushes, etc. this advantageously allows the virtual tip or bristles to be replaceable. similar to the above, the ar system detects or captures a user's interaction via visual tracking of the user's hand and/or fingers on the stylus or brush 10904 and/or via visual tracking of the end of the stylus or brush 10904.
for example, the ar system may employ one or more front-facing cameras to detect a position, orientation, and/or movement (e.g., position, direction, distance, speed, acceleration) of the user's hand and/or finger(s) and/or end of the stylus or brush with respect to some reference frame (e.g., reference frame of a piece of media, the real world, physical room, user's body, user's head). for instance, the ar system may detect one or more locations of touches or a change in position of a hand and/or fingers. also for instance, the ar system may detect one or more locations of the end of the stylus or brush and/or an orientation of the end of the stylus or brush 10904 with respect to, for example, a piece of media or totem representing a piece of media. the ar system may additionally or alternatively detect one or more changes in location of the end of the stylus or brush 10904 and/or changes in orientation of the end of the stylus or brush 10904 with respect to, for example, the piece of media or totem representing the piece of media. as discussed above, the ar system may also employ the front-facing camera(s) to detect interactions (e.g., tap, double tap, short tap, long tap, fingertip grip, enveloping grasp) of a user's hands and/or fingers or of the stylus or brush 10904. the ar system maps the position, orientation, and/or movement of the hand and/or fingers and/or end of the stylus or brush 10904 to a set of user selections or inputs. the ar system optionally maps other user interactions (e.g., number of interactions, types of interactions, duration of interactions), and hence with various inputs (e.g., controls, functions). in response to the position, orientation, movement, and/or other interactions, the ar system may cause corresponding inputs to be provided to a computer or some other device.
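tracking the tip of the stylus or brush with respect to the reference frame of a piece of media amounts to a coordinate transform: a world-frame tip position is re-expressed in media-frame coordinates so tip motion can be interpreted as strokes on that media. the sketch below is a deliberately simplified 2d version of that idea; the pose representation (a planar origin plus rotation angle) is an illustrative assumption.

```python
# hypothetical sketch of expressing a tracked brush-tip position in the
# reference frame of a piece of media. a full system would use 3d poses;
# this 2d version only illustrates the frame change.
import math

def world_to_media(tip_world, media_origin, media_angle):
    """convert a world-frame tip position into media-frame coordinates.

    media_origin: world position of the media's corner.
    media_angle: rotation of the media in the world plane, radians.
    """
    dx = tip_world[0] - media_origin[0]
    dy = tip_world[1] - media_origin[1]
    # rotate by the inverse of the media's orientation
    c, s = math.cos(-media_angle), math.sin(-media_angle)
    return (dx * c - dy * s, dx * s + dy * c)

# media sheet at (2, 1), rotated 90 degrees; a tip at (2, 2) in the world
# lands one unit along the media's own x axis
print(world_to_media((2.0, 2.0), (2.0, 1.0), math.pi / 2))
```

successive media-frame tip positions could then be accumulated into a stroke, regardless of how the media or the user's head moves in the world.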
additionally or alternatively, the ar system may render a virtual image of markings made by the user using the stylus or brush 10904, taking into account the visual effects that would be achieved by the selected tip or bristles. the stylus or brush 10904 may have one or more haptic elements (e.g., piezoelectric elements, mems elements), which the ar system controls to provide a sensation (e.g., smooth, rough, low friction, high friction) that replicates a feel of a selected point or bristles, as the selected point or bristles pass over media. the sensation may also reflect or replicate how the end or bristles would interact with different types of physical aspects of the media, which may be selected by the user. thus, paper and canvas may produce two different types of haptic responses. fig. 109c shows a pen shaped totem 10906, according to one illustrated embodiment. the pen shaped totem 10906 includes an elongated shaft, similar to that of any number of conventional pens, pencils, styluses or brushes. the pen shaped totem 10906 has a user-actuatable joystick or thumbstick located at one end of the shaft. the joystick or thumbstick is movable with respect to the elongated shaft in response to user actuation. the joystick or thumbstick may, for example, be pivotally movable in four directions (e.g., forward, back, left, right). alternatively, the joystick or thumbstick may, for example, be movable in all directions, or may be pivotally movable in any angular direction in a circle, for example to navigate. notably, the joystick or thumbstick is not coupled to any switch or electronics. instead of coupling the joystick or thumbstick to a switch or electronics, the ar system detects or captures a position, orientation, or movement of the joystick or thumbstick.
for example, the ar system may employ one or more front-facing cameras to detect a position, orientation, and/or movement (e.g., position, direction, distance, speed, acceleration) of the joystick or thumbstick with respect to a given reference frame (e.g., reference frame of the elongated shaft, etc.). additionally, as discussed above, the ar system may employ one or more front-facing cameras to detect a position, orientation, and/or movement (e.g., position, direction, distance, speed, acceleration) of the user's hand and/or finger(s) and/or end of the pen shaped totem 10906 with respect to some reference frame (e.g., reference frame of the elongated shaft, of a piece of media, the real world, physical room, user's body, user's head). for instance, the ar system may detect one or more locations of touches or a change in position of a hand and/or fingers. also for instance, the ar system may detect one or more locations of the end of the pen shaped totem 10906 and/or an orientation of the end of the pen shaped totem 10906 with respect to, for example, a piece of media or totem representing a piece of media. the ar system may additionally or alternatively detect one or more changes in location of the end of the pen shaped totem 10906 and/or changes in orientation of the end of the pen shaped totem 10906 with respect to, for example, the piece of media or totem representing the piece of media. similar to the above, the ar system may also employ the front-facing camera(s) to detect interactions (e.g., tap, double tap, short tap, long tap, fingertip grip, enveloping grasp, etc.) of a user's hands and/or fingers with the joystick or thumbstick or the elongated shaft of the pen shaped totem 10906. the ar system maps the position, orientation, and/or movement of the hand and/or fingers and/or end of the joystick or thumbstick to a set of user selections or inputs.
the ar system optionally maps other user interactions (e.g., number of interactions, types of interactions, duration of interactions), and hence with various inputs (e.g., controls, functions). in response to the position, orientation, movement, and/or other interactions, the ar system may cause corresponding inputs to be provided to a computer or some other device. additionally or alternatively, as discussed above, the ar system may render a virtual image of markings made by the user using the pen shaped totem 10906, taking into account the visual effects that would be achieved by the selected tip or bristles. the pen shaped totem 10906 may have one or more haptic elements (e.g., piezoelectric elements, mems elements), which the ar system controls to provide a sensation (e.g., smooth, rough, low friction, high friction) that replicates a feel of passing over media. fig. 110a shows a charm chain totem 11002, according to one illustrated embodiment. the charm chain totem 11002 includes a chain and a number of charms. the chain may include a plurality of interconnected links which provides flexibility to the chain. the chain may also include a closure or clasp which allows opposite ends of the chain to be securely coupled together. the chain and/or clasp may take a large variety of forms, for example single strand, multi-strand, links or braided. the chain and/or clasp may be formed of any variety of metals, or other non-metallic materials. a length of the chain should accommodate a portion of a user's limb when the two ends are clasped together. the length of the chain should also be sized to ensure that the chain is retained, even loosely, on the portion of the limb when the two ends are clasped together. the chain may be worn as a bracelet on a wrist of an arm or on an ankle of a leg. the chain may be worn as a necklace about a neck. the charms may take any of a large variety of forms.
the charms may have a variety of shapes, although will typically take the form of plates or discs. while illustrated with generally rectangular profiles, the charms may have any variety of profiles, and different charms on a single chain may have respective profiles which differ from one another. the charms may be formed of any of a large variety of metals, or non-metallic materials. each charm may bear an indicia which is logically associable in at least one computer- or processor-readable non-transitory storage medium with a function, category of functions, category of content or media types, and/or tools or applications which is accessible via the ar system. fig. 110b shows a keychain totem 11004, according to one illustrated embodiment. the keychain totem 11004 includes a chain and a number of keys. the chain may include a plurality of interconnected links which provides flexibility to the chain. the chain may also include a closure or clasp which allows opposite ends of the chain to be securely coupled together. the chain and/or clasp may take a large variety of forms, for example single strand, multi-strand, links or braided. the chain and/or clasp may be formed of any variety of metals, or other non-metallic materials. the keys may take any of a large variety of forms. the keys may have a variety of shapes, although will typically take the form of conventional keys, either with or without ridges and valleys (e.g., teeth). in some implementations, the keys may open corresponding mechanical locks, while in other implementations the keys only function as totems and do not open mechanical locks. the keys may have any variety of profiles, and different keys on a single chain may have respective profiles which differ from one another. the keys may be formed of any of a large variety of metals, or non-metallic materials. various keys may be of different colors from one another.
each key may bear an indicia, which is logically associable in at least one computer- or processor-readable non-transitory storage medium with a function, category of functions, category of content or media types, and/or tools or applications which is accessible via the ar system. as discussed above, the ar system detects or captures a user's interaction with the keys. for example, the ar system may employ one or more front-facing cameras to detect touching or manipulation of the keys by the user's fingers or hands. for instance, the ar system may detect a selection of a particular key by the user touching the respective key with a finger or grasping the respective key with two or more fingers. further, the ar system may detect a position, orientation, and/or movement (e.g., rotational direction, magnitude of rotation, angular speed, angular acceleration) of a key with respect to some reference frame (e.g., reference frame of the portion of the body, real world, physical room, user's body, user's head). the ar system may also employ the front-facing camera(s) to detect other interactions (e.g., tap, double tap, short tap, long tap, fingertip grip, enveloping grasp, etc.) of a user's fingers with a key. as discussed above, the ar system maps selection of the key to user selections or inputs, for instance selection of a social media application. the ar system optionally maps other user interactions (e.g., number of interactions, types of interactions, duration of interactions) with the key, and hence with various inputs (e.g., controls, functions) with the corresponding application. in response to the touching, manipulation or other interactions with the keys, the ar system may cause corresponding applications to be activated and/or provide corresponding inputs to the applications. additionally or alternatively, similar to the above embodiments, the ar system may render the virtual user interface differently in response to select user interactions.
for instance, some user interactions may correspond to selection of a particular submenu, application or function. the ar system may respond to such selection by rendering a set of virtual interface elements, based at least in part on the selection. for instance, the ar system may render a submenu or a menu or other virtual interface element associated with the selected application or functions. referring now to fig. 111 , an example method 11100 of using totems is described. at 11102, a user's interaction with a totem is detected and/or captured. for example, the interaction may be captured based on inputs from the haptic glove, or through the front-facing cameras (e.g., world cameras, fov cameras, etc.). at 11104, the ar system may detect a position, orientation and/or movement of the totem with respect to a given reference frame. the reference frame may be a predetermined reference frame that allows the ar system to calculate one or more characteristics of the totem's movement, in order to understand a user command. at 11106, the user's interaction (e.g., position/orientation/movement against the reference frame) is compared against a map stored in the system. in one or more embodiments, the map may be a 1:1 map that correlates certain movements/positions or orientations with a particular user input. other mapping tables and/or techniques may be similarly used in other embodiments. at 11108, the ar system may determine the user input based on the mapping. in one or more embodiments, the ar system may identify an object as a totem. the object may be a real object or a virtual object. typically, the totem may be a pre-designated object, for example, a set of keys, or a virtual set of keys, that may be displayed as a totem. in one or more embodiments, the user may have selected a totem. or, if the totem is a real object, the system may have captured one or more images and/or other data about the totem, to recognize it in the future.
further, the ar system may request the user to "set up" the totem such that the system understands commands that are made in relation to the totem. for example, a center part of the totem may be pressed to indicate a particular command. in one or more embodiments, this may require the system to be pre-programmed to understand that command. in one or more embodiments, a reference frame of the totem may be correlated against a reference frame of the world to understand certain commands. for example, the system may recognize the user's hand movement (in one embodiment) in relation to the totem. in one or more embodiments, the ar system tracks an interaction of the user with the totem (e.g., hand movements, totem movements, eye movements, etc.). when an interaction matches a predetermined interaction (e.g., a pattern of movements, a speed of movement, a direction of movement, a force of touch, a proximity to another object, etc.), the system may determine a user input, and understand a command, in response to the determined user input. it should be appreciated that the concepts outlined here may be applied to various aspects of the ar system. for example, recognizing totems, recognizing patterns of movement in relation to totems and retrieving commands associated with the recognized totem gesture may be used in almost all the various embodiments and user scenarios discussed below. these same concepts help the system recognize the totem gesture and perform a command (e.g., open an application, display a user interface, purchase an item, switch applications, etc.). thus, the principles outlined here pertaining to recognizing totems and totem commands, and retrieving the command associated with the totem may be used in almost all the embodiments described below. it should be appreciated that these concepts will not be repeated during the discussion of specific embodiments for the purposes of brevity. 
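The mapping step of method 11100 — correlating a detected totem interaction against a stored map to arrive at a user input — can be sketched as follows. This is an illustrative sketch only; the interaction names, the map contents, and the magnitude threshold are assumptions, not taken from the specification.

```python
# Hypothetical sketch of the totem-mapping step (11106/11108): a detected
# interaction (gesture pattern plus a magnitude, e.g. rotation angle) is
# looked up in a stored 1:1 map correlating interactions with user inputs.
from dataclasses import dataclass

@dataclass(frozen=True)
class TotemInteraction:
    gesture: str        # e.g. "press_center", "rotate_cw" (assumed names)
    magnitude: float    # e.g. rotation angle in degrees

# Stored map correlating interactions with user inputs (all entries assumed).
INTERACTION_MAP = {
    "press_center": "select",
    "rotate_cw": "volume_up",
    "rotate_ccw": "volume_down",
    "double_tap": "open_social_media",
}

def determine_user_input(interaction, min_magnitude=5.0):
    """Return the mapped user input, or None when the interaction is too
    slight or unknown (the system then keeps watching for interactions)."""
    if interaction.magnitude < min_magnitude:
        return None
    return INTERACTION_MAP.get(interaction.gesture)

# Example: a clockwise rotation of 30 degrees maps to a volume-up command.
command = determine_user_input(TotemInteraction("rotate_cw", 30.0))
```

Under this sketch, an unrecognized gesture (or one below the threshold) simply yields no input, matching the behavior of falling back to further detection.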
light wavefront + sound wavefront in one or more embodiments, the ar system may produce a sound wavefront that is the analog of the light wavefront, producing a realistic sound field. in some implementations, the ar system may adjust microphone gain in the sound range dynamically to mix real physical players with virtual players in the virtual space. in other words, the ar system produces a realistic sound wavefront such that an emanating sound from a particular object (e.g., a virtual object, etc.) matches the light field. for example, if the virtual object is depicted such that it appears from far away, the sound emanating from the object should not be constant, but rather mimic the sound that would come from the object if it were approaching from far away. since the light field of the ar system produces a realistic visual experience of the virtual object, the sound wavefront of the ar system is also modified to realistically depict sound. for example, if the virtual object is approaching from behind, the sound coming from the virtual object will be different than if it were simply approaching from the front side. or, if the virtual object is approaching from the right side, the sound may be modified such that the user instinctively turns to the right to look at the virtual object. thus, it can be appreciated that modifying the sound wavefront to realistically depict sounds may improve the overall experience of the ar system. the sound wavefront may also depend on the user's physical location. for example, natural sounds are perceived differently if the user is in a cathedral (e.g., there may be an echo, etc.), as compared to when the user is in an open space. the ar system may capture local and ambient sound (e.g., game-engine driven) reproduction. referring now to fig. 113 , a block diagram showing various components of the sound design system is provided. as shown in fig. 113 , head pose information 11318 may be used to determine object and listener pose 11320.
this information, once determined, may be fed into a spatial and proximity sound render module 11302. the object and listener pose 11320 may be fed into sound data module 11322, which may comprise various sound data files which may be stored in a database, in one or more embodiments. the sound data module 11322 may interact with a sound design tool 11324 (e.g., fmod studio, etc.) to provide sound design filters etc. to manipulate the sound data files. the sound and metadata 11322 may be fed into an equalization module 11314, which may also be fed with channel-based content 11316. the equalized sound may also be fed into the spatial and proximity render module 11302. in one or more embodiments, a 3d head model transfer function 11310 and a dynamically created space model (e.g., space transfer function) are also inputted to the spatial and proximity sound render module 11302. in one or more embodiments, the spatial and proximity sound render module 11302 may also receive inputs about sounds from canned spaces 11312. the transfer functions may manipulate the sound data by applying transforms based on the user's head pose and the virtual object information received from the head pose 11318 and object and listener pose 11320 modules, respectively. in one or more embodiments, the spatial and proximity sound render module 11302 interacts with the binaural virtualizer 11304, and the sound is finally outputted to the user's headphones 11306. in one or more embodiments, the ar system may determine a head pose of a user to determine how to manipulate an audio object. the audio object may be tied to a virtual object (e.g., the audio appears to come from the virtual object, or may be located at a different place, but is associated with the virtual object). the audio object may be associated with the virtual object based on perceived location, such that the audio object (sound data) emanates from a perceived location of the virtual object.
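One way such pose-dependent audio placement can be realized is sketched below: the object's direction relative to the listener's head orientation yields a left/right pan, and its distance yields an attenuation gain. This is a minimal illustration under assumed names and a simple inverse-distance model, not the specification's render pipeline (which uses head-model and space transfer functions).

```python
# Minimal sketch: derive (gain, pan) for a mono audio object from the
# listener's position, head yaw, and the object's perceived location.
# Positions are 2D (x, z) on the horizontal plane; all names are assumed.
import math

def spatialize(listener_pos, head_yaw_rad, object_pos):
    """pan ranges -1.0 (full left) .. +1.0 (full right);
    gain falls off with distance (simple inverse-distance rolloff)."""
    dx = object_pos[0] - listener_pos[0]
    dz = object_pos[1] - listener_pos[1]
    distance = math.hypot(dx, dz)
    # Angle of the source relative to where the head is pointing.
    azimuth = math.atan2(dx, dz) - head_yaw_rad
    gain = 1.0 / max(distance, 1.0)
    pan = math.sin(azimuth)   # source to the right of the head -> positive pan
    return gain, pan

# A source 4 m directly to the listener's right pans hard right, attenuated.
gain, pan = spatialize((0.0, 0.0), 0.0, (4.0, 0.0))
```

If the user turns the head toward the object, `head_yaw_rad` changes and the pan recenters, which is the dynamic alteration described above.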
the ar system knows the perceived location of the virtual object (e.g., the map, the passable world model, etc.), so the ar system may place the audio object at the same location. based on the perceived location and/or determined location of the audio object in relation to the user's head pose, the sound data may go through a sound design algorithm to be dynamically altered such that the sound appears to be coming from a place of origin of the virtual object, in one or more embodiments. in one or more embodiments, the ar system may intentionally use various visual and/or audio triggers to initiate user head-motion. the ar system may select a trigger (e.g., virtual visual cue or virtual sound cue) and render the virtual visual image or sound cue to appear to emanate from the user's periphery (e.g., displaced from the front or the direction that the user is facing). for example, if rendering a light field into an eye, non-image forming optics on the side or periphery may render visual cues or triggers to appear in the user's peripheral vision and cause a user to turn the user's head in a desired direction. additionally or alternatively, the ar system may render a spatialized sound field, with wave front synthesis on sounds, with an audio or aural cue or trigger that appears out of the field of view of the user, again causing the user to turn in a desired direction. coordinate frames as discussed in detail in various embodiments above, and referring to fig. 133 , it should be appreciated that virtual content may be tied to one or more coordinate systems, such that the virtual content remains stationary or moves with respect to that coordinate system. for example, as shown in 13302, the virtual content may be room-centric. in other words, the virtual content is tethered to one or more coordinates of the real world such that the virtual content stays at a constant location within a space, while the user may move around or move away from it.
in another embodiment, as shown in 13304, the virtual content may be body-centric. thus, the virtual content may be moved with respect to a central axis of the user. for example, if the user moves, the virtual content moves based on the user's movement. in yet another embodiment, as shown in 13306, the virtual content may be head-centric. in other words, the virtual content is tied to a coordinate system centered around the user's head. the virtual content may move as the user moves the user's head around. this may be the case with a variety of user interfaces. the virtual content may move when the user turns his/her head, thereby providing a user interface that is always within the view of the user. in yet another embodiment, as shown in 13308, the virtual content may be populated based on a hand-centric reference point such that the virtual content moves based on the user's hand movements (e.g., the gauntlet user experience described below). referring now to fig. 134 , and as illustrated through the various embodiments described above, there may be many ways of interacting with the virtual content presented to the user. some examples are shown in fig. 134 , including intangible interactions such as gestures (e.g., hand, head, body, totem, etc.) 13402, voice interactions 13404, eye vectors 13406 and biofeedback 13408. as described in detail previously, gesture feedback 13402 may allow the user to interact with the ar system through movements of the user's hands, fingers or arms in general. voice user input 13404 may allow the user to simply "talk" to the ar system, and speak voice commands as needed. eye user input 13406 may involve the use of the eye tracking system, such that the user may simply move the user's eyes to effect changes in the user interface. for example, the user input may be eye blinks or eye movement, which may correspond to predefined actions.
for example, the user may blink three times consecutively while his/her focus is on a virtual icon. this may be a predefined selection command recognized by the system. in response, the system may simply select the virtual icon (e.g., open an application, etc.). thus, the user may communicate with the ar system with minimal effort. biofeedback 13408 may also be used to interact with the ar system. for example, the ar system may monitor the user's heart rate, and respond accordingly. for example, consider that the user is participating in an exercise challenge. in response to the user's elevated heart rate, the ar system may display virtual content to the user (e.g., prompting the user to slow down, drink water, etc.). in one or more embodiments, the interaction with the ar system may be tangible. for example, a known volume 13410 may be defined which is predefined to be a particular command. for example, the user may simply draw a shape in the air, which the ar system understands as a particular command. the interaction may be through a glove 13412 (e.g., haptic glove, etc.). thus, the glove 13412 may pick up gestures, physical touch, etc., which may, in turn, be used for one or more commands. similarly, a recognized ring 13414 may be used to provide input to the ar system. in yet another embodiment, a malleable surface 13416 may be used to provide input to the system. for example, a malleable object 13416 may be used as a totem, but rather than just interacting in relation to a fixed-size object, the input may be to stretch the malleable object 13416 into different shapes and sizes, each of which may be predefined as a particular command. or, in other embodiments, a simple controller device 13418 (e.g., keyboard, mouse, console, etc.) may be used to interact with the system. in other embodiments, physical properties of objects 13420 may be used to interact with the system.
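The coordinate-frame behaviors of fig. 133 (room-, body-, head-, and hand-centric content) can be sketched as a single placement rule: room-centric content keeps its world coordinates, while other content is re-anchored to the user each frame. The function and anchor names below are assumptions, and the sketch uses 2D positions for brevity.

```python
# Illustrative sketch (assumed names, 2D) of fig. 133's coordinate frames:
# room-centric content is tethered to world coordinates; body-/head-centric
# content is offset from a user anchor that moves with the user.
def world_position(content_offset, frame, user_body_pos=(0.0, 0.0),
                   user_head_pos=(0.0, 0.0)):
    """content_offset is where the content sits relative to its frame's origin."""
    if frame == "room":
        # Tethered to the real world: the offset already is the world position.
        return content_offset
    anchors = {"body": user_body_pos, "head": user_head_pos}
    ax, ay = anchors[frame]
    return (ax + content_offset[0], ay + content_offset[1])

# Room-centric content stays put as the user walks; body-centric content follows.
room = world_position((5.0, 2.0), "room", user_body_pos=(3.0, 0.0))
body = world_position((5.0, 2.0), "body", user_body_pos=(3.0, 0.0))
```

A hand-centric frame (13308) would follow the same pattern with the hand's tracked position as the anchor.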
gestures the ar system is configured to detect and be responsive to one or more finger/hand gestures. these gestures can take a variety of forms and may, for example, be based on inter-finger interaction, pointing, tapping, rubbing, etc. other gestures may, for example, include 2d or 3d representations of characters (e.g., letters, digits, punctuation). to enter such a character, a user swipes their finger in the defined character pattern. other gestures may include thumb/wheel selection type gestures, which may, for example, be used with a "popup" circular radial menu which may be rendered in a field of view of a user, according to one illustrated embodiment. it should be appreciated that the concepts outlined here may be applied to various aspects of the ar system. for example, recognizing gestures and retrieving commands associated with the recognized gesture may be used in almost all the various embodiments and user scenarios discussed below. for example, gestures may be used in the various user interface embodiments discussed below. these same concepts help the system recognize the gesture and perform a command (e.g., open an application, display a user interface, purchase an item, switch applications, etc.). thus, the principles outlined here pertaining to recognizing gestures, and retrieving the command associated with the gesture, may be used in almost all the embodiments described below. it should be appreciated that these concepts will not be repeated during the discussion of specific embodiments for the purposes of brevity. embodiments of the ar system can therefore recognize various commands using gestures, and in response perform certain functions mapped to the commands. the mapping of gestures to commands may be universally defined, across many users, facilitating development of various applications which employ at least some commonality in user interface.
alternatively or additionally, users or developers may define a mapping between at least some of the gestures and corresponding commands to be executed by the ar system in response to detection of the corresponding gestures. for example, a pointed index finger may indicate a command to focus, for example to focus on a particular portion of a scene or virtual content at which the index finger is pointed. a pinch gesture can be made with the tip of the index finger touching a tip of the thumb to form a closed circle, e.g., to indicate a grab and/or copy command. another example pinch gesture can be made with the tip of the ring finger touching a tip of the thumb to form a closed circle, e.g., to indicate a select command. yet another example pinch gesture can be made with the tip of the pinkie finger touching a tip of the thumb to form a closed circle, e.g., to indicate a back and/or cancel command. a gesture in which the ring and middle fingers are curled with the tip of the ring finger touching a tip of the thumb may indicate, for example, a click and/or menu command. touching the tip of the index finger to a location on the head worn component or frame may indicate a return to home command. embodiments of the invention provide an advanced system and method for performing gesture tracking and identification. in one embodiment, a rejection cascade approach is performed, where multiple stages of gesture analysis are performed upon image data to identify gestures. referring ahead to fig. 135a , incoming images 13542 (e.g., rgb images at a depth d) are processed using a series of permissive analysis nodes. each analysis node 13544 (e.g., 13544a, 13544b, etc.) performs a distinct step of determining whether the image is identifiable as a gesture. each stage in this process performs a targeted computation so that the sequence of different determinations in its totality can be used to efficiently perform the gesture processing.
this means, for example, that the amount of processing power at each stage of the process, along with the sequence/order of the nodes, can be used to optimize the ability to remove non-gestures while doing so with minimal computational expenses. according to the invention, a computationally less-expensive algorithm is applied at an earlier stage to remove large numbers of "easier" candidates, thereby leaving smaller numbers of "harder" data to be analyzed at a later stage using a more computationally expensive algorithm. the general approach to perform this type of processing in one embodiment is shown in the flowchart 13501 of fig. 135b . the first step 13502 is to generate candidates for the gesture processing. these include, for example, images captured from sensor measurements of the wearable device, e.g., from camera(s) mounted on the wearable device. next, at 13504, analysis is performed on the candidates to generate analysis data. for example, one type of analysis may be to check on whether the contour of the shapes (e.g., fingers) in the image is sharp enough. at 13506, sorting is then performed on the analyzed candidates. finally, at 13508, any candidate that corresponds to a scoring/analysis value that is lower than a minimum threshold is removed from consideration. fig. 135c depicts a more detailed approach for gesture analysis according to one embodiment of the invention. the first action is to perform depth segmentation 13520 upon the input data. for example, typically the camera providing the data inputs (e.g., the camera producing rgb + depth data) will be mounted on the user's head, where the user's world camera (e.g., front-facing camera, fov camera, etc.) will cover the range in which the human could reasonably perform gestures. as shown in fig. 135d , a line search 13560 is performed through the data (e.g., from the bottom of the field of view). if there are identifiable depth points along that line, then a potential gesture has been identified. 
if not, then further processing need not be done. in some embodiments, this type of depth point line processing can be quite sparse, perhaps with 50 points acquired relatively quickly. of course, different kinds of line series can be employed, e.g., in addition to or instead of flat lines across the bottom, smaller diagonal lines are employed in the area where there might be a hand/arm. any suitable depth sampling pattern may be employed, preferably selecting ones that are most effective at detecting gestures. in some embodiments, a confidence-enhanced depth map is obtained, where detected potentially valid gesture depth points are used to flood fill outward from that point to segment out a potential hand or arm, which is then further filtered to check whether the identified object is really a hand or an arm. another confidence enhancement can be performed, for example, by getting a clear depth map of the hand and then checking the amount of light reflected off the hand back to the sensor in the images, where a greater amount of light corresponds to a higher confidence level. from the depth data, one can cascade to perform immediate/fast processing 13530, e.g., where the image data is amenable to very fast recognition of a gesture. this works best for very simple gestures and/or hand/finger positions. in many cases, deeper processing has to be performed to augment the depth map 13522. for example, one type of depth augmentation is to perform depth transforms upon the data. one type of augmentation is to check for geodesic distances from specified point sets, such as boundaries, centroids, etc. for example, from a surface location, a determination is made of the distance to various points on the map. this attempts to find, for example, the farthest point to the tip of the fingers (by finding the end of the fingers). the point sets may be from the boundaries (e.g., outline of hand) or centroid (e.g., statistical central mass location).
surface normalization may also be calculated. in addition, curvatures may also be estimated, which identifies how fast a contour turns (e.g., by performing a filtering process that goes over the points and removes concave points from fingers). in some embodiments, orientation normalization may be performed on the data. to illustrate, consider that a given image of the hand may be captured with the hand in different positions. however, the analysis may expect the image data of the hand to be in a canonical position. in this situation, as shown in 13570 of fig. 135e , the mapped data may be re-oriented to change to a normalized/canonical hand position. one advantageous approach in some embodiments is to perform background subtraction on the data. in many cases, a known background exists in a scene, e.g., the pattern of a background wall. in this situation, the map of the object to be analyzed can be enhanced by removing the background image data. an example of this process 13580 is shown in fig. 135f , where the left portion of fig. 135f shows an image of a hand over some background image data. the right-hand portion of fig. 135f shows the results of removing the background from the image, leaving the augmented hand data with increased clarity and focus. depth comparisons may also be performed upon points in the image to identify the specific points that pertain to the hand (as opposed to the background non-hand data). for example, as shown in 13590 of fig. 135g , it can be seen that a first point a is located at a first depth and a second point b is located at a significantly different second depth. in this situation, the difference in the depths of these two points makes it very evident that the two points likely belong to different objects. therefore, if one knows that the depth of the hand is at the same depth value as point a, then one can conclude that point a is part of the hand.
on the other hand, since the depth value for point b is not the same as the depth of the hand, one can readily conclude that point b is not part of the hand. at this point a series of analysis stages is performed upon the depth map. any number of analysis stages can be applied to the data. the present embodiment shows three stages (e.g., 13524, 13526 and 13528, etc.), but one of ordinary skill in the art would readily understand that any other number of stages (either smaller or larger) may be used as appropriate for the application to which the invention is applied. in the current embodiment, stage 1 analysis 13524 is performed using a classifier mechanism upon the data. for example, a deep neural net or classification/decision forest can be used to apply a series of yes/no decisions in the analysis to identify the different parts of the hand for the different points in the mapping. this identifies, for example, whether a particular point belongs to the palm portion, back of hand, non-thumb finger, thumb, fingertip, and/or finger joint. any suitable classifier can be used for this analysis stage. for example, a deep learning module or a neural network mechanism can be used instead of or in addition to the classification forest. in addition, a regression forest (e.g., using a hough transformation, etc.) can be used in addition to the classification forest. the next stage of analysis (stage 2) 13526 can be used to further analyze the mapping data. for example, analysis can be performed to identify joint locations, in particular, or to perform skeletonization on the data. fig. 135h provides an illustration 13595 of skeletonization, where an original map of the hand data is used to identify the locations of bones/joints within the hand, resulting in a type of "stick" figure model of the hand/hand skeleton. 
this type of model provides, with clarity, a very distinct view of the location of the fingers and the specific orientation and/or configuration of the hand components. labelling may also be applied at this stage to the different parts of the hand. at this point, it is possible that the data is now directly consumable by a downstream application 13534 without requiring any further analysis. this may occur, for example, if the downstream application itself includes logic to perform additional analysis/computations upon the model data. in addition, the system can also optionally cascade to perform immediate/fast processing 13532, e.g., where the data is amenable to very fast recognition of a gesture, such as the (1) fist gesture; (2) open palm gesture; (3) finger gun gesture; (4) pinch; etc. for example, as shown in 13598 of fig. 135i , various points on the hand mapping (e.g., a point on the extended thumb and a point on the extended first finger) can be used to immediately identify a pointing gesture. the outputs will then proceed to a world engine 13536, e.g., to take action upon a recognized gesture. in addition, deeper processing can be performed in the stage 3 analysis. this may involve, for example, using a deep neural network or a decision forest/tree to classify the gesture. this additional processing can be used to identify the gesture, determine a hand pose, identify context dependencies, and/or any other information as needed. prior/control information can be applied in any of the described steps to optimize processing. this permits some biasing for the analysis actions taken in that stage of processing. for example, for game processing, previous actions taken in the game can be used to bias the analysis based upon earlier hand positions/poses. in addition, a confusion matrix can be used to more accurately perform the analysis.
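The rejection-cascade structure of figs. 135a-135b can be sketched as a chain of increasingly expensive stages, where cheap permissive checks discard "easy" non-gesture candidates so only the "hard" survivors reach the costly classifier. The stage functions and candidate fields below are hypothetical stand-ins for the real depth-search, contour, and classifier stages.

```python
# Hedged sketch of a rejection cascade: cheap filters first, expensive
# classification last. Candidate dictionaries and thresholds are assumed.
def has_depth_points(candidate):          # cheap: sparse line search (fig. 135d)
    return bool(candidate.get("depth_points"))

def sharp_contour(candidate):             # cheap: contour-sharpness check
    return candidate.get("contour_sharpness", 0.0) >= 0.5

def classify_gesture(candidate):          # expensive stage (e.g. decision forest)
    return candidate.get("label")         # stand-in for a real classifier

def rejection_cascade(candidates):
    recognized = []
    for c in candidates:
        if not has_depth_points(c):       # early stages remove most candidates
            continue
        if not sharp_contour(c):
            continue
        label = classify_gesture(c)       # only survivors pay this cost
        if label is not None:
            recognized.append(label)
    return recognized

frames = [
    {"depth_points": [], "contour_sharpness": 0.9, "label": "fist"},
    {"depth_points": [1, 2], "contour_sharpness": 0.2, "label": "pinch"},
    {"depth_points": [1, 2], "contour_sharpness": 0.8, "label": "point"},
]
result = rejection_cascade(frames)
```

The ordering matters: placing the cheapest, most discriminative stage first minimizes total computational expense, which is the optimization the passage describes.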
using the principles of gesture recognition discussed above, the ar system may use visual input gathered from the user's fov cameras and recognize various gestures that may be associated with a predetermined command or action. referring now to flowchart 13521 of fig. 135j , in step 13503, the ar system may detect a gesture as discussed in detail above. as described above, the movement of the fingers or a movement of the totem may be compared to a mapping database to detect a predetermined command, in step 13505. in step 13507, a determination is made whether the ar system recognizes the command based on the mapping step 13505. if the command is detected, the ar system determines the desired action and/or desired virtual content based on the gesture, in step 13507. if the gesture or movement of the totem does not correspond to any known command, the ar system simply goes back to step 13503 to detect other gestures or movements. in step 13509, the ar system determines the type of action necessary in order to satisfy the command. for example, the user may want to activate an application, turn a page, generate a user interface, connect to a friend located at another physical location, etc. based on the desired action/virtual content, the ar system determines whether to retrieve information from the cloud servers, or whether the action can be performed using local resources on the user device, in step 13511. for example, if the user simply wants to turn a page of a virtual book, the relevant data may already have been downloaded or may reside entirely on the local device, in which case, the ar system simply retrieves data associated with the next page and displays the next page to the user.
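The decision flow just described — map the gesture to a command, fall back to detection when unknown, then choose local versus cloud resources — can be sketched as below. The gesture names, command names, and the local-data set are assumptions for illustration, not entries from the specification.

```python
# Illustrative sketch of the fig. 135j flow: gesture -> command lookup,
# then a local-vs-cloud routing decision. All table contents are assumed.
GESTURE_COMMANDS = {
    "swipe_left": "turn_page",
    "pinch": "select",
    "wave": "connect_to_friend",
}

# Commands whose data already resides on the local device (no cloud round-trip).
LOCAL_DATA = {"turn_page", "select"}

def handle_gesture(gesture):
    """Return (source, command): source is "local", "cloud", or
    "keep_detecting" when the gesture matches no known command."""
    command = GESTURE_COMMANDS.get(gesture)
    if command is None:
        return ("keep_detecting", None)   # back to detection (step 13503)
    source = "local" if command in LOCAL_DATA else "cloud"
    return (source, command)

# Turning a page uses local data; rendering a remote friend needs the cloud.
page = handle_gesture("swipe_left")
friend = handle_gesture("wave")
```

Keeping the routing decision in one place mirrors the document's point that many actions can be satisfied without unnecessarily accessing the cloud or the passable world model.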
similarly, if the user wishes to create a user interface such that the user can draw a picture in the middle of space, the ar system may simply generate a virtual drawing surface in the desired location without requiring data from the cloud. data associated with many applications and capabilities may be stored on the local device such that the user device does not need to unnecessarily connect to the cloud or access the passable world model. thus, if the desired action can be performed locally, local data may be used to display virtual content corresponding to the detected gesture (step 13513). alternatively, in step 13515, if the system needs to retrieve data from the cloud or the passable world model, the system may send a request to the cloud network, retrieve the appropriate data and send it back to the local device such that the action may be taken or the virtual content may be appropriately displayed to the user. for example, if the user wants to connect to a friend at another physical location, the ar system may need to access the passable world model to retrieve the necessary data associated with the physical form of the friend in order to render it accordingly at the local user device. thus, based on the user's interaction with the ar system, the ar system may create many types of user interfaces as desired by the user. the following represent some example embodiments of user interfaces that may be created in a similar fashion to the example process described above. it should be appreciated that the above process is simplified for illustrative purposes, and other embodiments may include additional steps based on the desired user interface. the following discussion details a set of additional applications of the ar system. ui hardware the ar system may employ pseudo-haptic gloves that provide sensations of pressures and/or vibrations that are tied to the physical object. the tactile effect may, for example, be akin to running a hand through a bubble.
if a vibration is introduced onto a finger, a user will interpret that vibration as a texture. the pseudo-haptic glove may provide tactile sensations that replicate the feel of hard physical objects, soft physical objects, and physical objects that are fuzzy. the pseudo-haptic glove selectively produces the sensation of both pressure and vibration. for example, if there is a massless object (e.g., bubble) floating in space, the user may be able to feel the tactile sensation of touching the massless object. the user can change the tactile sensation of touching the virtual object, for example a texture-oriented sensation rather than a firmness-oriented sensation. for example, if a user passes a hand through a bubble, the user may feel some tactile sensation although the user will not feel the sensation of grabbing a physical object. a similar approach of providing tactile sensations may be implemented in other wearable portions or components of the ar system. the glove and/or other components may use a variety of different actuators, for example piezoelectric actuators. thus, a user may feel able to touch massless virtual objects directly. for instance, if a virtual object is located at a table, a consistent ux element corresponding to the haptic glove may provide the user with a proprioceptive tactile interaction. for example, the user may grab or grasp a particular handle close to a door. using a handle as a coordinate frame for a virtual object may be very intuitive for the user. this allows a user to pick up physical things and actually feel the physical sensation through a tactile proxy hand. head worn components of individual ar systems may also include sensors to detect when earphones or ear buds are positioned proximate, on or in the ears of a user. the ar system may use any of a large variety of sensors, for example capacitive sensors, pressure sensors, electrical resistance sensors, etc.
in response to detection of the earphones or ear buds being in place, the ar system may route sound via the earphones or ear buds. in response to a failure to detect the earphones or ear buds being in place, the ar system may route sound through conventional stand-alone speakers. additionally, the ar system may employ a composite camera. the composite camera may comprise a plurality of chip-level cameras mounted on or carried by a flexible substrate, for instance a flexible printed circuit board substrate. the flexible substrate may be modified and/or re-configured with a potting compound, to essentially form a single wide angle lens. for example, small cameras may be built with a layer approach, using wafer level technology. for instance, a plurality of video graphics array (vga) pads may be formed on a flexible substrate for communicatively coupling these cameras. the flexible substrate with cameras may be stretched over an anvil, and fixed for instance via an adhesive. this provides an inexpensive set of vga cameras that have an optically wide field of view of approximately 60 or 70 degrees. advantageously, a flat process may be employed, and the flexible substrate may be stretched over an anvil. the resultant structure provides the equivalent of a wide field of view camera from a pixel count image quality perspective, but with overlapping or non-overlapping fields of view. a plurality of two- or three-element wafer-level cameras can replace a specific wide field of view lens that has five or six elements, while still achieving the same field of view as the wide field of view camera. user interfaces as will be described in various embodiments below, the ar system may create many types of user interfaces. in some of the embodiments described below, the ar system creates a user interface based on a location of the user, and what type of reference frame the user interface may operate in. for example, some user interfaces (e.g., figs.
85a-85c below) are body-centric user interfaces, in which case, the ar system may determine a location of the user's center (e.g., hip, waist, etc.), and project a virtual interface based on that reference frame. other user interfaces are created based on a head-centric reference frame, a hand-centric reference frame, etc. further, the ar system may utilize the principles of gesture tracking and/or totem tracking discussed above to also create and/or interact with some user interfaces. although each of the user interfaces described below has some differences, they principally function using some common principles. in order to display a user interface of the user's choosing, the ar system must determine a location of the user in the world (e.g., the world coordinate frame). for example, the user's location may be determined through any of the localization techniques discussed above (e.g., gps, bluetooth, topological map, map points related to the user's ar system, etc.). once the user's location in the world coordinate frame has been determined, a relationship between the user's hands/fingers, etc. and the user's ar system may be determined. for example, if the user has selected a predefined ring-based user interface (e.g., figs. 85a-85c , etc.), a relationship between the user's ar system and the body-centric reference frame of the virtual user interface may be determined. for example, the body-centric user interfaces of figs. 85a-85c may be determined based on the coordinates of the user's hip. a position of the user's hip may be determined based on data collected by the ar system. in other words, the various sensors of the ar system (e.g., cameras, sensors, etc.) may help determine the coordinates (e.g., in the world coordinate system) of the user's hip. this determined location may be set as the origin coordinates (0,0,0) of the user interface.
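as an illustrative sketch (not part of the original disclosure; all function names and numeric values here are assumptions), the body-centric placement just described may be expressed as follows: the determined hip location becomes the origin of the user interface's own frame, and each ui element's predefined offset is translated into world coordinates:

```python
# hypothetical sketch: anchor a body-centric ui at the user's hip.
# the hip position (world frame) is treated as the ui origin (0, 0, 0),
# and each ui element's predefined offset is translated into world space.

def place_body_centric_ui(hip_world, element_offsets):
    """hip_world: (x, y, z) of the user's hip in the world coordinate frame.
    element_offsets: ui element positions relative to the ui origin."""
    return [tuple(h + d for h, d in zip(hip_world, off))
            for off in element_offsets]

# a simple two-element ui, echoing the two-pixel example in the text:
hip = (2.0, 1.0, 5.0)                          # assumed hip world coordinates
offsets = [(0.5, 0.2, 0.0), (-0.5, 0.2, 0.0)]  # assumed element offsets
print(place_body_centric_ui(hip, offsets))
# → [(2.5, 1.2, 5.0), (1.5, 1.2, 5.0)]
```

because the offsets are defined relative to the hip, re-running the same translation whenever the hip moves keeps the ui attached to the user's body, as in the ring interface of figs. 85a-85c .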
having determined the origin coordinates, the virtual user interface may be rendered based on the determined location of the user's hip, such that as the user moves, the virtual user interface moves along with the user's body (e.g., the ring user interface of figs. 85a-85c remains around the user's body). in one or more embodiments, the various pre-configured user interfaces may be stored in a user interface database such that an appropriate user interface is retrieved from the database. the stored user interface program may comprise a set of characteristics and/or parameters about the user interface, including coordinates at which various parts of the virtual user interface must be displayed in relation to the origin coordinates. for example, in a very simple user interface having only 2 pixels, the coordinates of the pixels to be displayed in relation to the origin hip-coordinates may be defined. when a particular user interface is selected, the user interface data may be retrieved from the database, and various translation vectors may be applied to the pixel coordinates in order to determine the world coordinates. in other words, each of the stored user interface programs may be predefined in relation to a particular reference frame, and this information may be used to determine the location at which to render the particular user interface. it should be appreciated that a majority of the user interfaces described below work based on this basic principle. although the above example illustrated the concept using only 2 pixels, it should be appreciated that the appropriate coordinates for all pixels of the virtual user interface may be similarly defined such that the relevant translations and/or rotations may be applied. in another example, say the user interface must be displayed at a location of a user's gestures.
as shown in many embodiments below, several user interfaces may simply be created "on the fly," such that the user interface originates at a particular point in space defined by the user. similar localization concepts to those above may be used in this case as well. for example, a user may place his/her arm out in space and make a particular gesture with his/her fingers, indicating to the ar system that a user interface should be populated at that location. in this case, similar to the above, a location of the ar system in the world is known (e.g., gps, bluetooth, topological map, etc.). the various sensors and/or cameras of the ar system may determine a location of the user's gesture in relation to the ar system (e.g., after having recognized the gesture to mean the command to generate a user interface). as discussed above, once the location of the gesture in relation to the ar system cameras or sensors has been determined, several triangulation techniques may be used (e.g., translation vectors, etc.) to determine the world coordinates of that location. once the world coordinates of the location have been determined, a desired user interface may be generated such that it originates at that particular location. another theme in some of the user interfaces described below is that reference frames for some virtual content may be modified such that virtual content that is currently tied to a first reference frame is tied to another reference frame. as will be clear in some embodiments described below, a user may open an application through a hand-centric user interface. the application may open up a profile page of a friend that the user may desire to store for easy viewing in the future.
in one or more embodiments, the user may take the virtual object or virtual box corresponding to the profile page (which is currently being displayed in relation to a hand-centric reference frame), and modify it such that it is no longer tied to the hand-centric reference frame, but is rather tied to a world-centric reference frame. for example, the ar system may recognize a gesture of the user (e.g., a throwing gesture, a gesture that takes the application and places it far away from the first reference frame, etc.) indicating to the system that the ar user desires to modify a reference frame of a particular virtual object. once the gesture has been recognized, the ar system may determine the world coordinates of the virtual content (e.g., based on the location of the virtual content in relation to the known location of the ar system in the world), and modify one or more parameters (e.g., the origin coordinates field, etc.) of the virtual content, such that it is no longer tied to the hand-centric reference frame, but rather is tied to the world-coordinate reference frame. in yet another embodiment, the ar system must recognize that a particular virtual icon is selected, and move the virtual icon such that it appears to be moving with the user's hand (e.g., as if the user is holding a particular virtual application, etc.). to this end, the ar system may first recognize a gesture (e.g., a grasping motion with the user's fingers, etc.), and then determine the coordinates of the user's fingers/hand. similarly, the world coordinates of the virtual icon are also known, as discussed above (e.g., through a known location of the virtual content in relation to a particular reference frame, and a known relationship between the reference frame and the world-centric reference frame). since both coordinates are known, the virtual content may be moved to mirror the movement of the user's fingers.
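a minimal sketch of this reference-frame change, with invented class and variable names (the disclosure specifies no data model): content tied to a hand-centric frame is pinned at its current world position and re-tied to the world frame, so it no longer follows the hand:

```python
# hypothetical sketch: re-parent virtual content from a hand-centric
# reference frame to the world frame, pinning it at its current position.

def add3(a, b):
    return tuple(x + y for x, y in zip(a, b))

class VirtualContent:
    def __init__(self, frame, offset):
        self.frame = frame      # "hand" or "world" (assumed frame names)
        self.offset = offset    # position relative to that frame's origin

    def world_position(self, frame_origins):
        return add3(frame_origins[self.frame], self.offset)

    def reparent_to_world(self, frame_origins):
        """pin the content at its current world position."""
        self.offset = self.world_position(frame_origins)
        self.frame = "world"

origins = {"hand": (1.0, 1.5, 0.5), "world": (0.0, 0.0, 0.0)}
profile_page = VirtualContent("hand", (0.0, 0.2, 0.0))
profile_page.reparent_to_world(origins)   # e.g., on a throwing gesture

# the content now stays put even when the hand moves:
origins["hand"] = (3.0, 0.0, 0.0)
print(profile_page.world_position(origins))   # → (1.0, 1.7, 0.5)
```

the inverse operation (grabbing) would set the frame back to "hand" and store the offset relative to the current hand position, making the icon appear to move with the user's fingers.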
as will be described in various embodiments below, any space around the user may be converted into a user interface such that the user can interact with the system. thus, the ar system does not require a physical user interface such as a mouse/keyboard, etc. (although totems may be used as reference points, as described above), but rather a virtual user interface may be created anywhere and in any form to help the user interact with the ar system. in one embodiment, there may be predetermined models or templates of various virtual user interfaces. as discussed above, during set-up the user may designate a preferred type (or types) of virtual ui (e.g., body-centric ui, head-centric ui, hand-centric ui, etc.). alternatively or additionally, various applications may be associated with their own types of virtual ui. alternatively or additionally, the user may customize the ui to create one that he/she may be most comfortable with. for example, the user may simply "draw" a virtual ui in space using a motion of his hands, and various applications or functionalities may automatically populate the drawn virtual ui. referring ahead to fig. 140 , an example flowchart of displaying a user interface is illustrated. in step 14002, the ar system may identify a particular ui. the type of ui may be predetermined by the user. the system may identify that the ui needs to be populated based at least in part on user input (e.g., gesture, visual data, audio data, sensory data, direct command, etc.). in step 14004, the ar system may generate data for the virtual ui. for example, data associated with the confines, general structure, shape of the ui, etc. may be generated. in addition, the ar system may determine map coordinates of the user's physical location so that the ar system can display the ui in relation to the user's physical location.
for example, if the ui is body-centric, the ar system may determine the coordinates of the user's physical stance such that a ring ui can be displayed around the user. or, if the ui is hand-centric, the map coordinates of the user's hands may need to be determined. it should be appreciated that these map points may be derived through data received through the fov cameras, sensory input, or any other type of collected data. in step 14006, the ar system may send the data to the user device from the cloud. in other embodiments, the data may be sent from a local database to the display components. in step 14008, the ui is displayed to the user based on the sent data. once the virtual ui has been created, the ar system may simply wait for a command from the user to generate more virtual content on the virtual ui in step 14010. for example, the ui may be a body-centric ring around the user's body. the ar system may then wait for the command, and if it is recognized (step 14012), virtual content associated with the command may be displayed to the user. referring now to fig. 141 , a more specific flowchart 14100 describing the display of user interfaces will be described. at 14102, the ar system may receive input pertaining to a desired virtual ui. for example, the ar system may detect this through a detected gesture, voice command, etc. at 14104, the ar system may identify the ui from a library of uis based on the user input, and retrieve the necessary data in order to display the ui. at 14106, the ar system may determine a coordinate frame or reference frame system that is associated with the identified ui. for example, as discussed above, some uis may be head-centric, others may be hand-centric, body-centric, etc. at 14108, once the coordinate frame type has been determined, the ar system determines the location at which the virtual user interface must be displayed with respect to a location of the user.
for example, if the identified ui is a body-centric ui, the ar system may determine a location (e.g., map points, localization techniques, etc.) of a center axis/point of the user's body (e.g., the user's location within the world coordinate frame). once this point/axis is located, it may be set as the origin of the coordinate frame (e.g., (0,0,0), in an x, y, z coordinate frame) (14110). in other words, the location at which the virtual ui is to be displayed will be determined with reference to the determined coordinate frame (e.g., center of the user's body). once the center of the user's body has been determined, a calculation may be made to determine the location at which the virtual ui must be populated (14112). at 14114, the desired ui may be populated at the determined map points. in other embodiments described above, a customized virtual user interface may simply be created on the fly based on a location of the user's fingers. for example, as described above, the user may simply "draw" a virtual boundary, and a user interface may be populated within that virtual boundary. referring now to fig. 142 , an example flowchart 14200 is illustrated. in step 14202, the ar system detects a movement of the user's fingers or hands. this movement may be a predetermined gesture signifying that the user wishes to create a user interface (the ar system may compare the gesture to a map of predetermined gestures, for example). based on this detection, the ar system may recognize the gesture as a valid gesture in step 14204. in step 14206, the ar system may retrieve, through the cloud server, a location associated with the user's position of fingers/hands within the world coordinate frame in order to display the virtual ui at the right location, and in real-time with the movement of the user's fingers or hands. in step 14208, the ar system creates a ui that mirrors the user's gestures.
this may be performed by identifying a location associated with the user's fingers and displaying the user interface at that location. in step 14210, the ui may be displayed in real-time at the right position using the determined location. the ar system may then detect another movement of the fingers or another predetermined gesture indicating to the system that the creation of the user interface is done (step 14212). for example, the user may stop making the motion of his fingers, signifying to the ar system to stop "drawing" the ui. in step 14214, the ar system displays the ui at the location in the boundary drawn by the user's fingers' movement. thus, a custom user interface may be created. using the principles of gesture tracking, ui creation, etc., a few example user applications will now be described. the applications described below may have hardware and/or software components that may be separately installed onto the system, in some embodiments. in other embodiments, the system may be used in various industries, etc. and may be modified to achieve some of the embodiments below. although the particular embodiments described below often use gestures to communicate with the ar system, it should be appreciated that any other user input discussed above may be similarly used. for example, in addition to gestures, user interfaces and/or other virtual content (e.g., applications, pages, web sites, etc.) may be rendered in response to voice commands, direct inputs, totems, gaze tracking input, eye tracking input or any other type of user input discussed in detail above. the following section provides various embodiments of user interfaces that may be displayed through the ar system to allow interaction with the user. referring now to fig. 85a, fig. 85a shows a user interacting via gestures with a user interface construct 8500 rendered by an ar system (not shown in figs. 85a-85c ), according to one illustrated embodiment. in particular, fig.
85a shows a scenario 8500 of a user interacting with a generally annular layout or configuration virtual user interface 8512 having various user selectable virtual icons. the generally annular layout or configuration is substantially similar to that illustrated in fig. 79e . the user selectable virtual icons may represent applications (e.g., social media application, web browser, email, etc.), functions, menus, virtual rooms or virtual spaces, etc. the user may, for example, perform a swipe gesture. the ar system detects the swipe gesture, and interprets the swipe gesture as an instruction to render the generally annular layout or configuration user interface. the ar system then renders the generally annular layout or configuration virtual user interface 8512 into the user's field of view so as to appear to at least partially surround the user, spaced from the user at a distance that is within arm's reach of the user, as shown in the illustrated embodiment. as described above, the user interface coordinates may be tied to the determined location of the user's center such that it is tied to the user's body. fig. 85b shows another scenario 8502 of the user interacting via gestures with a user interface virtual construct 8512 rendered by an ar system (not shown in fig. 85b ), according to another illustrated embodiment. the generally annular layout or configuration virtual user interface 8512 may present the various user selectable virtual icons in a scrollable form. the user may gesture, for example with a sweeping motion of a hand, to cause scrolling through the various user selectable virtual icons. for instance, the user may make a sweeping motion to the user's left or to the user's right, in order to cause scrolling in the left (e.g., counterclockwise) or right (e.g., clockwise) directions, respectively. the user may, for example, perform a point or touch gesture, proximally identifying one of the user selectable virtual icons.
the ar system detects the point or touch gesture, and interprets the point or touch gesture as an instruction to open or execute a corresponding application, function, menu or virtual room or virtual space. the ar system then renders appropriate virtual content based on the user selection. fig. 85c shows yet another scenario 8504 of the user interacting via gestures with a user interface virtual construct 8512 rendered by an ar system (not shown in fig. 85c ), according to yet another illustrated embodiment. fig. 85c shows the user interacting with the generally annular layout or configuration virtual user interface 8512 of various user selectable virtual icons of figs. 85a and 85b . in particular, the user selects one of the user selectable virtual icons. in response, the ar system opens or executes a corresponding application, function, menu or virtual room or virtual space. for example, the ar system may render a virtual user interface for a corresponding application 8514 as illustrated in fig. 85c . alternatively, the ar system may render a corresponding virtual room or virtual space based on the user selection. referring now to fig. 86a, fig. 86a shows a scenario 8602 of a user interacting via gestures with a user interface virtual construct 8612 rendered by an ar system (not shown in fig. 86a ), according to one illustrated embodiment. in particular, fig. 86a shows a user performing a gesture to create a new virtual work portal or construct hovering in space in a physical environment, or hanging or glued to a physical surface such as a wall of a physical environment. the user may, for example, perform a two arm gesture, for instance dragging outward from a center point to locations that represent the upper left and lower right corners of the virtual work portal or construct, as shown in fig. 86a .
the virtual work portal or construct 8612 may, for example, be represented as a rectangle, the user gesture establishing not only the position, but also the dimensions of the virtual work portal or construct. the virtual work portal or construct 8612 may provide access to other virtual content, for example to applications, functions, menus, tools, games, and virtual rooms or virtual spaces. the user may employ various other gestures for navigating once the virtual work portal or construct has been created or opened. fig. 86b shows another scenario 8604 of the user interacting via gestures with a user interface virtual construct 8614 rendered by an ar system (not shown in fig. 86b ), according to one illustrated embodiment. in particular, fig. 86b shows a user performing a gesture to create a new virtual work portal or construct on a physical surface 8614 of a physical object that serves as a totem. the user may, for example, perform a two finger gesture, for instance an expanding pinch gesture, dragging outward from a center point to locations where an upper left and a lower right corner of the virtual work portal or construct should be located. the virtual work portal or construct may, for example, be represented as a rectangle, the user gesture establishing not only the position, but also the dimensions of the virtual work portal or construct. fig. 86c shows another scenario 8606 of the user interacting via gestures with a user interface virtual construct 8616 rendered by an ar system (not shown in fig. 86c ), according to one illustrated embodiment. in particular, fig. 86c shows a user performing a gesture to create a new virtual work portal or construct 8616 on a physical surface such as a top surface of a physical table or desk. the user may, for example, perform a two arm gesture, for instance dragging outward from a center point to locations where an upper left and a lower right corner of the virtual work portal or construct should be located. 
the virtual work portal or construct may, for example, be represented as a rectangle, the user gesture establishing not only the position, but also the dimensions of the virtual work portal or construct. as illustrated in fig. 86c , specific applications, functions, tools, menus, models, or virtual rooms or virtual spaces can be assigned or associated to specific physical objects or surfaces. thus, in response to a gesture performed on or proximate a defined physical structure or physical surface, the ar system automatically opens respective applications 8618 (or, e.g., functions, tools, menus, models, or virtual rooms or virtual spaces) associated with the physical structure or physical surface, eliminating the need to navigate the user interface. as previously noted, a virtual work portal or construct may provide access to other virtual content, for example to applications, functions, menus, tools, games, three-dimensional models, and virtual rooms or virtual spaces. the user may employ various other gestures for navigating once the virtual work portal or construct has been created or opened. figs. 87a-87c show scenarios 8702, 8704 and 8706 respectively of a user interacting via gestures with various user interface virtual constructs rendered by the ar system (not shown in figs. 87a-87c ), according to one illustrated embodiment. the user interface may employ either or both of at least two distinct types of user interactions, denominated as direct input or proxy input. direct input corresponds to conventional drag and drop type user interactions, in which the user selects an iconification of an instance of virtual content, for example with a pointing device (e.g., mouse, trackball, finger) and drags the selected icon to a target (e.g., a folder, or another iconification of, for instance, an application).
proxy input corresponds to a user selecting an iconification of an instance of virtual content by looking or focusing on the specific iconification with the user's eyes, then executing some other action(s) (e.g., a gesture), for example via a totem. a further distinct type of user input is denominated as a throwing input. throwing input corresponds to a user making a first gesture (e.g., grasping or pinching) to select an iconification of an instance of virtual content, followed by a second gesture (e.g., arm sweep or throwing motion towards a target) to indicate a command to move the virtual content at least generally in a direction indicated by the second gesture. the throwing input will typically include a third gesture (e.g., release) to indicate a target (e.g., folder). the third gesture may be performed when the user's hand is aligned with the target or at least proximate to the target. the third gesture may be performed when the user's hand is moving in the general direction of the target but may not yet be aligned or proximate with the target, assuming that there is no other virtual content proximate the target which would render the intended target ambiguous to the ar system. thus, the ar system detects and responds to gestures (e.g., throwing gestures, pointing gestures) which allow freeform location-specification denoting which virtual content should be rendered or moved. for example, where a user desires a virtual display, monitor or screen, the user may specify a location in the physical environment in the user's field of view in which to cause the virtual display, monitor or screen to appear. this contrasts with gesture input to a physical device, where the gesture may cause the physical device to operate (e.g., on/off, change channel or source of media content), but does not change a location of the physical device.
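the target resolution for a throwing input (proximity and direction of motion, with detection of ambiguous intent) might be sketched as follows; the dot-product scoring and the ambiguity threshold are invented for illustration and are not taken from the disclosure:

```python
# hypothetical sketch: resolve the target of a "throwing" input.
# given the hand position at release and the sweep direction, pick the
# candidate target whose direction best aligns with the sweep, unless a
# second candidate is nearly as well aligned (ambiguous intent).

import math

def _direction(frm, to):
    d = tuple(t - f for f, t in zip(frm, to))
    n = math.sqrt(sum(x * x for x in d))
    return tuple(x / n for x in d)

def resolve_throw_target(hand, sweep_dir, targets, ambiguity_margin=0.2):
    """return the target name the sweep points at, or None if ambiguous."""
    scores = []
    for name, pos in targets.items():
        to_target = _direction(hand, pos)
        # cosine of the angle between the sweep and the target direction
        alignment = sum(a * b for a, b in zip(sweep_dir, to_target))
        scores.append((alignment, name))
    scores.sort(reverse=True)
    best = scores[0]
    runner_up = scores[1] if len(scores) > 1 else (None, None)
    if runner_up[0] is not None and best[0] - runner_up[0] < ambiguity_margin:
        return None  # two targets nearly in line with the sweep
    return best[1]

# assumed scene: a wall straight ahead and a desk off to the side
targets = {"wall": (0.0, 0.0, -3.0), "desk": (2.0, -1.0, 0.0)}
print(resolve_throw_target((0.0, 0.0, 0.0), (0.0, 0.0, -1.0), targets))
# prints "wall"
```

returning None for the ambiguous case matches the text's caveat that the release may precede alignment only when no other candidate near the sweep line would make the intended target ambiguous.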
additionally, where a user desires to logically associate a first instance of virtual content (e.g., icon representing a file) with a second instance (e.g., icon representing a storage folder or application), the gesture defines a destination for the first instance of virtual content. in particular, fig. 87a shows the user performing a first gesture to select virtual content. the user may, for example, perform a pinch gesture, pinching and appearing to hold the virtual work portal or construct 8712 between a thumb and index finger. in response to the ar system detecting a selection (e.g., grasping, pinching or holding) of a virtual work portal or construct, the ar system may re-render the virtual work portal or construct with visual emphasis, for example as shown in fig. 87a . the visual emphasis cues the user as to which piece of virtual content the ar system has detected as being selected, allowing the user to correct the selection if necessary. other types of visual cues or emphasis may be employed, for example highlighting, marqueeing, flashing, color changes, etc. in particular, fig. 87b shows the user performing a second gesture to move the virtual work portal or construct to a physical object 8714, for example a surface of a wall, on which the user wishes to map the virtual work portal or construct. the user may, for example, perform a sweeping type gesture while maintaining the pinch gesture. in some implementations, the ar system may determine which physical object the user intends, for example based on either proximity and/or a direction of motion. for instance, where a user makes a sweeping motion toward a single physical object, the user may perform the release gesture with the user's hand short of the actual location of the physical object.
since there are no other physical objects proximate to or in line with the sweeping gesture when the release gesture is performed, the ar system can unambiguously determine the identity of the physical object that the user intended. this may, in some ways, be thought of as analogous to a throwing motion. in response to the ar system detecting an apparent target physical object, the ar system may render a visual cue positioned in the user's field of view so as to appear co-extensive with or at least proximate the detected intended target. for example, the ar system may render a border that encompasses the detected intended target as shown in fig. 87b . the ar system may also continue rendering the virtual work portal or construct with visual emphasis, for example, as shown in fig. 87b . the visual emphasis cues the user as to which physical object or surface the ar system has detected as being selected, allowing the user to correct the selection if necessary. other types of visual cues or emphasis may be employed, for example highlighting, marqueeing, flashing, color changes, etc. in particular, fig. 87c shows the user performing a third gesture to indicate a command to map the virtual work portal or construct to the identified physical object, for example a surface of a wall, to cause the ar system to map the virtual work portal or construct to the physical object. the user may, for example, perform a release gesture, releasing the pinch to simulate releasing the virtual work portal or construct 8716. figs. 88a-88c show a number of user interface virtual constructs (8802, 8804 and 8806 respectively) rendered by an ar system (not shown in figs. 88a-88c ) in which a user's hand serves as a totem, according to one illustrated embodiment. as illustrated in fig.
88a , in response to detecting a first defined gesture (e.g., user opening or displaying open palm of hand, user holding up hand), the ar system renders a primary navigation menu in a field of view of the user so as to appear to be on or attached to a portion of the user's hand. for instance, a high level navigation menu item, icon or field may be rendered to appear on each finger other than the thumb. the thumb may be left free to serve as a pointer, which allows the user to select a desired one of the high level navigation menu items or icons via a second defined gesture, for example by touching the thumb to the corresponding fingertip. the menu items, icons or fields 8812 may, for example, represent user selectable virtual content, for instance applications, functions, menus, tools, models, games, and virtual rooms or virtual spaces. as illustrated in fig. 88b , in response to detecting a defined gesture (e.g., user spreads fingers apart), the ar system expands the menus, rendering a lower level navigation menu 8814 in a field of view of the user so as to appear to be on or attached to a portion of the user's hand. for instance, a number of lower level navigation menu items or icons 8814 may be rendered to appear on each of the fingers other than the thumb. again, for example, the thumb may be left free to serve as a pointer, which allows the user to select a desired one of the lower level navigation menu items or icons by touching the thumb to a corresponding portion of the corresponding finger. as illustrated in fig. 88c , in response to detecting another defined gesture 8816 (e.g., user making a circling motion in the palm of the hand with a finger from the other hand), the ar system scrolls through the menu, rendering fields of the navigation menu in a field of view of the user so as to appear to be on or attached to a portion of the user's hand. for instance, a number of fields may appear to scroll successively from one finger to the next.
new fields may scroll into the field of view, entering from one direction (e.g., from proximate the thumb) and other fields may scroll from the field of view, exiting from the other direction (e.g., proximate the pinkie finger). the direction of scrolling may correspond to a rotational direction of the finger in the palm. for example, the fields may scroll in one direction in response to a clockwise rotation gesture and scroll in a second, opposite direction, in response to a counterclockwise rotation gesture.

other ui embodiments

as described above, users may communicate with the ar system user interface through a series of gestures, totems, ui hardware, and other unique modes of interacting with the system. the following embodiments represent a few examples of the ui experience. it should be appreciated that the following list is not exhaustive and other embodiments of interacting with the system may be similarly used. the following methods of interacting with the system may be used with or without a totem. the following embodiments represent different ways by which a user may turn the system on, start or end a desired application, browse the web, create an avatar, share content with peers, etc. it should be appreciated that the following series of example embodiments are not exhaustive, but simply represent example user interfaces/user experiences through which users may interact with the ar system.

avatar

as discussed above, the user interface may be responsive to a variety of inputs. the user interface of the ar system may, for example, be responsive to hand inputs, for instance: gestures, touch, multi-touch, and/or multiple hand input. the user interface of the ar system may, for example, be responsive to eye inputs, for instance: eye vector, eye condition (e.g., open/close), etc. referring ahead to fig.
123a , in response to the one or more user inputs described above (e.g., a cupped palm with a pointed finger gesture, as shown in the illustrated embodiment, etc.), the system may generate an avatar that may lead the user through a variety of options. in one or more embodiments, the avatar may be a representation of the user. in essence, the user may act as a "puppet master," and the avatar of the ar system may present a set of icons, any of which may be selected by the user. as shown in scene 12302, the user, through a pre-determined gesture (e.g., a hand pulling gesture, a finger gesture, etc.) that is recognized by the ar system, may "pull" out the avatar from a desired location. as shown in scene 12304, the avatar has been populated. the avatar may be pre-selected by the user, in some embodiments, or, in other embodiments, the system may present the user with different avatars each time. the gesture that will generate the perception of the avatar may also be predetermined. in other embodiments, different hand gestures may be associated with different avatars. for example, the hand pulling gesture may generate the avatar shown in fig. 123a , but a finger crossing gesture may generate a mermaid avatar, for example (not shown). in other embodiments, different applications may have their own unique avatar. for example, if the user wishes to open a social media application, the social media application may be associated with its own particular avatar, which may be used to interact with the application. there may be many ways of detecting the hand gesture that generates/creates/populates the avatar. the gestures may be detected or recognized by the world cameras, sensors, hand gesture haptics, or any other input devices discussed above. a few example approaches have been discussed above. referring now to fig. 123b , once the avatar has been populated, additional options may be rendered adjacent to the avatar to help the user choose one or more options.
as shown in fig. 123b, the avatar may be a dynamic avatar that moves and plays along with the user as the user selects an option. as shown in the example embodiment, the avatar in fig. 123b may hold up various options (scene 12306) that the user may select through another hand gesture. as shown in scene 12308, the user may select a particular application from the presented icons (e.g., phone, games, contacts, etc.) that are rendered adjacent to the avatar. the user may, for example, select the "games" icon as shown in scene 12308. once the icon has been selected, the avatar may open up the game (using the avatar hand gesture, as shown in 12308). the game may then be rendered in 3d to the user. in one embodiment, the avatar may disappear after the user has selected the game, or in other embodiments, the avatar may remain, and the user may be free to choose other options/icons for other functionality as well. referring now to fig. 123c, the user may select another option through the avatar. in the example embodiment, the user may select a "friend" (scene 12310) that the user may want to communicate with. the friend may then be rendered as an avatar, as shown in scene 12312. in one or more embodiments, the avatar may simply represent another avatar of the system, or a character in a game. or, the other avatar may be an avatar of another user, and the two users may be able to interact with each other through their avatars. for example, the first user may want to share a file with another user. this action may be animated in a playful manner by populating both the systems through avatars. as shown in fig. 123c, having generated the other avatar, the avatars may interact and pass on virtual objects to each other, as shown in scene 12312. for example, the first avatar may pass a virtual object related to the virtual game to the other avatar. fig. 123d shows detailed input controls 12314 that may be used to interact with the avatar. as shown in fig.
123d, various gestures may be used for user input behaviors. as shown in fig. 123d, some types of actions may be based on a location of virtual content, while others may be agnostic to virtual content. extrusion in another embodiment, the ui may follow an extrusion theme. for example, as shown in fig. 124a, the user may make a triangle gesture 12402 (e.g., index fingers together, in the illustrated embodiment) to open up the user interface. in response to the triangle gesture, the ar system may extrude a set of floating virtual icons 12404, as shown in fig. 124b. in one or more embodiments, the virtual icons may be floating blocks, or may simply be the logo associated with a particular application or functionality. in the embodiment shown in fig. 124b, in response to the gesture, a mail application, a music application, a phone application, etc. have been populated. in one or more embodiments, extrusion may refer to populating virtual objects (in this case, icons, selectable objects, etc.) on a fixed cross-sectional profile. the cross-sectional profile may be rotated or turned, and the various blocks may be rearranged, etc. as shown in fig. 124b, the blocks may be opened up horizontally, and then rearranged based on the preferences of the user. if the user selects a particular icon, more icons that are subsets of the selected icon may be rendered beneath the selected icon, as shown in fig. 124c. as described previously, the blocks may be rotated around the cross-sectional plane to open up more options of a particular icon, as shown in fig. 124d. for example, if the user wishes to open up a particular application, and chooses to select a friend's profile within that application, the user may extrude the icons for various profiles as shown in the cross-sectional views of figs. 124e and 124f. as shown in fig.
124g, the user may then select a particular icon with a holding gesture of the hand such that the virtual icon is "pulled" from the cross-sectional plane and is nested in the user's hand. as shown in fig. 124g, the user may manipulate the selected virtual icon with the user's hands (12406). essentially, the virtual icon or block comes out of the cross-sectional plane, and the user may grasp the icon or block in his hands. for example, the user may want to view a particular friend's profile in more detail. as shown in fig. 124h, the user may, with a particular hand gesture (e.g., a closing and opening gesture, as shown in fig. 124h), open up the profile page 12408 as if simply opening up a crumpled piece of paper (figs. 124i and 124j). once the user is done looking through the friend's profile page 12410, the user may similarly crumple the virtual page back as shown in fig. 124k, and return it to the series of blocks that the user had previously extruded (fig. 124l). fig. 124m shows detailed input controls 12620 that may be used to interact with the user interface. as shown in fig. 124m, various gestures may be used for user input behaviors. as shown in fig. 124m, some types of actions may be based on a location of virtual content, while others may be agnostic to virtual content. gauntlet in yet another approach, the ui may follow a gauntlet theme, where the user's hand (in this case) or any other body part may be used as an axis of rotation, and the icons may be rendered as if appearing on the user's arm. as shown in figs. 125a and 125b, the user may, through a predetermined gesture 12502 (e.g., clasping the arm with his other hand, in this example) that is recognized by the system, cause the generation of various icons on the user's arm. as shown in fig. 125c, the system may automatically generate icons 12504 based on the user's dragging gesture 12506 across his arm. the dragging gesture 12506 may cause the population of the virtual icons 12504.
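the gauntlet behavior described here, with icons ringed around the arm and a rotate gesture bringing hidden icons into view, can be sketched as a circular list; the class name and icon names are hypothetical:

```python
from collections import deque

# illustrative sketch of a "gauntlet" ring of icons around the arm;
# the length of the arm serves as the axis of rotation.
class GauntletRing:
    def __init__(self, icons, window=2):
        self.icons = deque(icons)
        self.window = window  # how many icons face the user at once

    def rotate(self, steps=1):
        """a two-finger rotate gesture spins the ring around the arm."""
        self.icons.rotate(-steps)

    def visible(self):
        """the subset of icons currently rendered on top of the arm."""
        return list(self.icons)[: self.window]

ring = GauntletRing(["mail", "music", "phone", "contacts"])
print(ring.visible())  # ['mail', 'music']
ring.rotate()          # hidden icons scroll into view
print(ring.visible())  # ['music', 'phone']
```

a negative `steps` value would rotate the ring the opposite way, matching a gesture performed in the reverse direction.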
as was the case in the previous examples, the virtual icons may be applications, friends' profiles, or any other type of functionality that may be further selected by the user. as shown in fig. 125d, once the gestures have been populated, the user may, with another gesture 12508 that is recognized by the system (e.g., two fingers), rotate a set of icons around the arm. this may cause more virtual icons to be populated on the side of the user's arm, as shown in fig. 125e. essentially, the length of the user's arm may be used as an axis about which to rotate the virtual icons around the user's arm. in one example, the user may select a particular icon 12510 (fig. 125f); the system may have some indicator to denote that it has now been selected (e.g., denoted by a different color, etc.). as shown in fig. 125g, the user may drag the selected icon 12510 to his wrist. this action may be recognized by the system, indicating to the user that this application may be opened. here, the user has selected a virtual object icon (e.g., a diamond shaped icon, as shown in fig. 125g). based on the icon selection, the other virtual icons may fade away and a virtual fading pattern may be projected on the user's wrist, as shown in figs. 125h and 125i, respectively. upon dragging the icon to the user's wrist, the user may, in a clasping motion, lift up the icon, such that the diamond icon 12510 is rendered in a larger scale into the room (fig. 125j). thus, the user has opened up a virtual object and has released the virtual object into the physical space he/she is currently occupying. for example, the user may leave the virtual object in a physical space such that another user may find it when entering the same physical space. or, in another example, as shown in figs. 125k and 125l, the user may have selected an icon that represents a contact or a friend.
for example, the user may want to initiate a live conversation with the friend, or may want to engage in an activity with that friend. similar to the above example, the user may drag the icon representing the friend to the wrist, make a clasping motion and "release" the friend, such that a virtual rendering 12514 of the friend may appear in front of the user, as shown in fig. 125l. it should be appreciated that the user may interact with the virtual friend in real-time, which is made possible through the passable world techniques discussed above. fig. 125m shows detailed input controls 12516 that may be used to interact with the user interface. as shown in fig. 125m, various gestures may be used for user input behaviors. as shown in fig. 125m, some types of actions may be based on a location of virtual content, while others may be agnostic to virtual content. grow in another approach, the ui may follow a grow approach, such as a growing tree, for example, such that the icons of the ar system may be "grown" like a tree from the ground or a desk, for example. referring to figs. 126a-126l, the user, through various gestures, may select one or more icons (e.g., an application, a category of applications, etc.), and grow it into a tree to populate other icons that may be part of the selected application. more particularly, referring to fig. 126a, a set of icons denoting various applications or functionalities 12602 may be populated on the user's hand. as shown in fig. 126b, the user may select a particular icon to "grow," and place the virtual icon (e.g., through a clasping motion of the user's fingers) on a flat surface (e.g., desk, etc.). here, for example, the user has selected the social media category. to "grow" the category (e.g., in order to find other applications within the category), as shown in fig. 126c, the user may "plant" the virtual icon (e.g., with a pressing motion) by pressing it into the flat surface.
this gesture may cause a rendering of a virtual tree or plant 12604, as shown in fig. 126d. as shown in fig. 126d, the plant may start small, and grow into a larger tree, such as the one shown in fig. 126e. as shown in figs. 126d and 126e, the plant may comprise various branches, each having icon(s) that are representative of more applications or options within a particular application. here, in the current example, the branches may be various applications within the category of social media (e.g., youtube®, facebook®, etc.). as shown in fig. 126e, the user may select one of the icons on the branches of the plant or tree, and similar to the prior example, pick up the virtual icon through a clasping gesture 12606 and "plant" it again at another location for it to grow. for example, as shown in figs. 126f and 126g, the user has clasped the application, and has then placed it on the flat surface to make the page "grow" from the ground as shown in fig. 126h. the virtual page may then appear as if sprouting from the ground, as shown in fig. 126i. the virtual page grows to become a virtual standalone tree structure 12608, and may be viewed by the user in detail, as shown in fig. 126i. once the user is done with the page 12608, the user may close or "cut" the tree to close the application. as shown in figs. 126j-126l, the user, in a cutting motion, may cut through the page or the trunk of the tree to close the application. the closed application may then appear as a branch of the original virtual icon tree, similar to fig. 126e. it should be appreciated that the various gestures are predetermined by the system. the gestures may either be pre-programmed based on the application, or may be customized to suit the preferred gestures of the user. for example, the system may be programmed to recognize the swift hand motion at the trunk of the tree as a "cutting" swipe that indicates to the system that the application should be closed.
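functionally, the grow interface amounts to navigating a tree of applications, where "planting" an icon reveals its branches and a "cutting" swipe collapses it back into a branch; a minimal sketch, with hypothetical class and icon names:

```python
# illustrative sketch of the "grow" interface as a tree of icons.
class IconNode:
    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []  # branches of this icon's tree
        self.open = False

    def plant(self):
        """'planting' the icon grows its tree, revealing its branches."""
        self.open = True
        return [child.name for child in self.children]

    def cut(self):
        """a 'cutting' swipe closes the tree back into a branch."""
        self.open = False

social = IconNode("social media",
                  [IconNode("video app"), IconNode("photo app")])
print(social.plant())  # ['video app', 'photo app']
social.cut()
print(social.open)     # False
```

any branch node could itself be "planted" at another location, which is what makes the navigation recursive.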
the ar system may, for example, render a user interface for a web browser as a page with a tree in the forward direction, and a tail in the backward direction. for instance, the user interface may be rendered with a branching tree coming out of the top of the webpage that shows the links from that webpage. the user interface may further be rendered with the branching tree extending off into a horizon. the ar system may render the user interface with roots of the branching tree graphically tied to the links on the webpage. consequently, rather than having to navigate (e.g., click) through one webpage at a time (e.g., three or four selections), the user may select a leaf node, or any other node, and jump directly to a desired webpage represented by the leaf node. in some implementations, the ar system may provide a scroll tool. the branching tree may dynamically change during scrolling as shown in the above figures. branches and leaf nodes may have a graphical iconification. the icons may, for example, show or represent a screenshot or thumbnail view of a website or webpage that will be navigated to in response to selection of that respective node. the user interface changes browsing from a sequential to a parallel experience. in response to a user selecting a webpage, the ar system renders another branching tree based on the selection. the branching tree may be rendered to visually tail away as it approaches a horizon (e.g., background, foreground, sides). for example, the ar system may render the branching tree to appear paler as the horizons are approached. the ar system may render the tail punctuated with nodes representing the websites or webpages that were used to navigate to a currently selected website or webpage. finger brush in another embodiment, the system may populate virtual icons/applications/functionality, etc. based on a predetermined finger brushing gesture. for example, as shown in fig.
127a, the system may recognize a particular gesture 12702 (e.g., pointing the index finger for a predetermined period of time) of the user's fingers that indicates that the user wants to use the finger or fingers as a "finger brush". as shown in fig. 127b, the user may then "paint" a figure by dragging the finger(s) through space. this may cause the ar system to draw a virtual shape based on the movement of the user's fingers. as shown in fig. 127b, the user is in the process of drawing a rectangle. in one or more embodiments, the virtual icons or applications may be populated within the confines of the shape drawn by the user. as shown in fig. 127c, the various virtual icons 12704 now appear within the drawn shape. now, the user may open up any particular icon and have it populate beside it, as shown in fig. 127d. fig. 127e shows detailed input controls 12706 that may be used to interact with the drawn shape. as shown in fig. 127e, various gestures may be used for user input behaviors. as shown in fig. 127e, some types of actions may be based on a location of virtual content, while others may be agnostic to virtual content. paint bucket referring now to figs. 128a-128p, another embodiment of user interface interaction is illustrated. as shown in fig. 128a, as was the case in the previous example, based on a user gesture 12802 (e.g., open palm, etc.), a set of virtual icons 12804 may be rendered such that they appear to be populated on the user's hand. the user may select a particular icon as shown in fig. 128b, and flick it (fig. 128c) toward a wall, or any other space, in a paint bucket fashion. the flicking motion may translate to virtual drops of paint that may appear to be flung towards the wall, such that the selected icon, or applications within that icon (a category of applications, for example), may then be "painted" onto the wall or any other space. the user may then select a particular virtual icon using a hand or finger gesture.
as shown in figs. 128e and 128f, a particular icon 12808 may be selected. upon recognition of the selection gesture, the ar system may display the application (e.g., a search page, as shown in fig. 128g). the user may then interact with the search page, to navigate to one or more desired websites, as shown in fig. 128h. using a closing-in gesture 12810 (e.g., a clasp of the index finger and the thumb, etc.), the user may store or "keep" a desired application or webpage (e.g., the web page of fig. 128i) based on his/her preferences. referring to figs. 128h and 128i, the user, for example, may be interested in a particular webpage, or a particular portion of the webpage, and may through a gesture (a closing-in motion, for example) store the desired portion. as shown in fig. 128i, based on the closing-in gesture 12810, the desired virtual content simply collapses or morphs into a virtual band 12812. this may be stored on the user's wrist, for example, as shown in fig. 128i. it should be appreciated that in other embodiments, the user may keep or store a desired webpage in other ways. for example, the desired webpage may be stored in a virtual box, or a real box, or be part of a totem. referring to figs. 128j-128l, other webpages/user profiles, or any other desired information, may be similarly stored as other virtual bands around the user's wrist. in the embodiment shown in fig. 128j, various virtual icons may be stored on the user's palm. the user may then select a desired icon, and interact with the icon(s), as shown in figs. 128k and 128l. the various stored items may be denoted by various colors, but other similar distinguishing indicators may be similarly used. referring now to figs. 128n-128p, to open up the stored object (e.g., denoted by the virtual bands 12812 on the user's wrist), the user may simply use another gesture 12814 (e.g., a flinging action/motion of the palm) to fling open the virtual band.
in this example embodiment, the flinging or flicking motion generates another paint bucket illusion, as shown in fig. 128o, such that two different colors (a different color for each of the virtual bands) are flung across a given space, to generate the desired stored webpage, user profile, etc. thus, as shown in fig. 128p, the user may then review the stored application and/or webpage, and interact with the stored content in a desired manner. pivot referring now to figs. 129a-131l, another embodiment of user interface interaction is illustrated. as shown in fig. 129a, the user may, through a recognized hand gesture 12902 (e.g., index and thumb of one hand proximate to index and thumb of the other hand), cause a virtual string 12904 to be rendered to the user. the virtual string, as shown in fig. 129b, may be elongated to any length desired by the user. for example, if the user wishes to view a lot of applications, the string may be pulled out to become a longer virtual string. or, if the string is pulled out only a small amount, fewer applications may be populated. the length of the virtual string 12904 may be populated so as to mimic the motion of the user's hands. as shown in fig. 129c, the various virtual icons 12906 may be populated on the string, similar to a clothesline, and the user may simply, with a hand gesture 12908, move the icons around such that the icons are moved with respect to the user's hand. for example, the user may scroll through the virtual icons by swiping his hand to the right, causing the virtual icons to also move accordingly to the right, as shown in fig. 129c. the user may then select a particular icon through another gesture 12910 (e.g., pointing two fingers at a particular virtual icon), as shown in fig. 129d. referring now to fig. 129e, the "contacts" application may be selected, as denoted by the colored indicator on the virtual icon.
in one or more embodiments, the selection of a particular virtual icon may cause the virtual icon or page to move in the z direction by a hand gesture 12912 that makes the virtual icon come toward the user or go farther away from the user. as shown in figs. 129f-129h, once the contacts application has been opened, the user may browse through the contacts and select a contact to call. as shown in fig. 129g, the user may have selected "matt" from the contacts, and may initiate a call (fig. 129h). as shown in fig. 129i, when the user is talking to the contact, the user may simultaneously be able to open up other applications. for example, the user may, through another hand gesture 12912, open up a particular document, and "send" it to the contact, by physically moving, with another hand gesture 12914, the document over to the contact icon, as shown in figs. 129j-129l. thus, the user can seamlessly send files to other users by simple hand gestures. in the ar system, the user is able to touch and hold documents, webpages, etc. as 3d virtual objects that can be flung into space, moved around, and physically manipulated as if they were real objects. fig. 129m shows detailed input controls 12916 that may be used to interact with the user interface. as shown in fig. 129m, various gestures may be used for user input behaviors. as shown in fig. 129m, some types of actions may be based on a location of virtual content, while others may be agnostic to virtual content. pull strings in another embodiment, the various virtual icons may be rendered as suspended virtual strings 13002. each string may represent a different virtual icon of an application or a category of applications, as shown in figs. 130a-130c. to select a particular virtual icon 13004, the user may tug (e.g., through a tugging gesture 13006) on a virtual string, as shown in figs. 130c and 130d.
the tugging motion 13006 may "pull" the string down such that the user may view the sub-categories or different icons of a particular application. here, as shown in figs. 130d and 130e, the user may have selected a music application, and the various icons 13010 shown in fig. 130e may represent various tracks. the user may then select a particular track, as shown in fig. 130f, to open up the page and view details about the track, or a webpage associated with the track, for example. in the illustrated embodiment, a clasping motion 13012 may be used to select a particular track of interest. the user may further be able to pass on the track or the webpage to other users/friends, simply by pressing the virtual icon associated with the track or music file (e.g., through a pressing gesture 13014) against another icon representative of the user's friends, as shown in fig. 130h. thus, by detecting a pressing motion, the ar system may recognize the input intended by the user and initiate the transfer process of the file to the ar system of the user's friend. fig. 130i shows detailed input controls 13020 that may be used to interact with the user interface. as shown in fig. 130i, various gestures may be used for user input behaviors. as shown in fig. 130i, some types of actions may be based on a location of virtual content, while others may be agnostic to virtual content. spider web in another embodiment, the user interaction with the system may be through virtual "spider webs" created in the physical space around the user. for example, as shown in fig. 131a, the user may make a fist and open it up 13102 such that virtual spider web strings are flung across space (fig. 131b). to select a particular virtual icon/application/category of applications, the user may pull along the spider web string 13104 to pull the virtual icon closer to him/her (figs. 131c-131d). in the illustrated embodiment of fig. 131d, the web page 13106 has been populated for closer view.
referring to fig. 131e, the user may then select, from the webpage 13106, a particular contact 13108, for example, and store the contact on a string of the spider web 13110 (figs. 131e and 131f). similar to the other embodiments above, the user may pass a document 13112 to the selected user 13108, as shown in figs. 131g and 131h, through the virtual string 13110. as shown in fig. 131h, the transfer process is underway, and the file is being transferred to the contact. fig. 131i shows detailed input controls 13120 that may be used to interact with the user interface. as shown in fig. 131i, various gestures may be used for user input behaviors. as shown in fig. 131i, some types of actions may be based on a location of virtual content, while others may be agnostic to virtual content. as shown in the above embodiments, the user interface of the ar system allows the user to interact with the system in innovative and playful ways that enhance the user experience with the ar system. it should be appreciated that other gaming techniques may be similarly used or programmed into the system. referring now to fig. 132, example embodiments demonstrating a relationship between virtual content and one or more physical objects are illustrated. as shown in 13202, a virtual object may be floating. an object may be floating when it has no relationship to other physical surfaces or objects. this appearance may be a room-centric treatment of the content, allowing the user to view the virtual object from all angles. similarly, as shown in 13204, content may be applied to a physical surface like a wall, cup, or a person's arm, as was the case in several embodiments discussed above. the virtual content may take on some of the physical qualities of that surface. for example, if the virtual object is on a piece of real paper, and the real paper is lifted, the virtual object may also be lifted up.
or, in another embodiment, if the paper falls on the ground, the virtual object may also fall, mimicking a gravitational pull. this may also provide the user with a physical sense of touch when interacting with the content. in other embodiments, virtual content may be anchored, as was the case with some embodiments described above. this appearance type combines elements of floating and applied objects. the virtual content may be anchored to a specific surface as shown in 13206, following the behaviors and actions of that surface (e.g., the spider web user interface experience, the pivot user interface experience, etc.). alternatively, as shown in 13208, the virtual content may simply be "assigned" to a physical object such that it is no longer visible. for example, a document (denoted by a virtual document icon) may simply be assigned to a physical object, but the virtual icon may disappear as soon as the transfer process is complete. this may be a way by which the user can quickly navigate through content without necessarily visualizing every step. user scenarios prior to discussing other specific applications and/or user scenarios, an example process of receiving and updating information from the passable world model will be briefly discussed. the passable world model, discussed above, allows multiple users to access the virtual world stored on a cloud server and essentially pass on a piece of the user's world to one or more peers. for example, similar to other examples discussed above, a first user of an ar system in london may wish to partake in a conference with a second user of the ar system currently located in new york.
the passable world model may allow the first user to pass on a piece of the passable world that constitutes the current physical surroundings of the first user to the second user, and similarly pass on a piece of the passable world that constitutes an avatar of the second user such that the second user appears to be in the same room as the first user in london. in other words, the passable world allows the first user to transmit information about the room to the second user, and simultaneously allows the second user to create an avatar to place himself/herself in the physical environment of the first user. thus, both users are continuously updating, transmitting and receiving information from the cloud, giving both users the experience of being in the same room at the same time. referring to figure 143, an example process 14300 of how data is communicated back and forth between two users located at two separate physical locations is disclosed. it should be appreciated that each ar system receiving input (e.g., from sensors, cameras, eye tracking, audio, etc.) may follow a process similar to the one below. for illustrative purposes, the input of the following system may be input from the cameras, but any other input device of the ar system may be similarly used. in step 14302, the ar system may check for input from the cameras. for example, following the above example, the user in london may be in a conference room, and may be drawing some figures on the white board. this may or may not constitute input for the ar system. since the passable world is constantly being updated and built upon data received from multiple users, the virtual world existing on the cloud becomes increasingly precise, such that only new information needs to be updated to the cloud. for example, if the user simply moved around the room, there may already have been enough 3d points, pose data information, etc.
such that the ar device of the user in new york is able to project the conference room in london without actively receiving new data from the user in london. however, if the user in london is adding new information, such as drawing a figure on the board in the conference room, this may constitute input that needs to be transmitted to the passable world model, and passed over to the user in new york. thus, in step 14304, the user device checks to see if the received input is valid input. if the received input is not valid, there is a wait loop in place such that the system simply checks for more input 14302. if the input is valid, the received input is fed to the cloud server in step 14306. for example, only the updates to the board may be sent to the server, rather than sending data associated with all the points collected through the fov camera. on the cloud server, in step 14308, the input is received from the user device, and updated into the passable world model in step 14310. as discussed with respect to the system architectures described above, the passable world model on the cloud server may comprise processing circuitry, multiple databases (including a mapping database 14334 with both geometric and topological maps), object recognizers 14332, and other suitable software components. in step 14310, based on the received input 14308, the passable world model is updated. the updates may then be sent to various user devices that may need the updated information, in step 14312. here, the updated information may be sent to the user in new york such that the user in new york can also view the first user's drawing as the picture is drawn on the board in the conference room in london.
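the loop of figure 143 — validate input, send only new information to the cloud, update the model, and push the update out to other devices — can be sketched as follows; the class names and the dictionary-based model are hypothetical simplifications of the passable world machinery:

```python
# illustrative sketch of the passable world update loop of figure 143.
class PassableWorldServer:
    def __init__(self):
        self.model = {}        # e.g., 3d points, pose data, drawings
        self.subscribers = []  # user devices viewing this piece of the world

    def receive(self, delta):
        self.model.update(delta)         # update the passable world model
        for device in self.subscribers:  # send updates to devices that need them
            device.on_update(delta)

class UserDevice:
    def __init__(self, server):
        self.local_model = {}
        self.server = server
        server.subscribers.append(self)

    def capture(self, raw_input):
        # only valid, new information is fed to the cloud server;
        # information already in the model is not re-sent
        delta = {key: value for key, value in raw_input.items()
                 if self.local_model.get(key) != value}
        if delta:
            self.server.receive(delta)

    def on_update(self, delta):
        # a device may filter irrelevant updates before displaying them
        self.local_model.update(delta)

server = PassableWorldServer()
london = UserDevice(server)
new_york = UserDevice(server)
london.capture({"whiteboard": "figure sketch"})
print(new_york.local_model)  # {'whiteboard': 'figure sketch'}
```

the delta filtering in `capture` mirrors the idea that only the updates to the board are sent to the server, rather than every point collected through the fov camera.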
it should be appreciated that the second user's device may already be projecting a version of the conference room in london, based on existing information in the passable world model, such that the second user in new york perceives being in the conference room in london. in step 14326, the second user device receives the update from the cloud server. in step 14328, the second user device may determine if the update needs to be displayed. for example, certain changes to the passable world may not be relevant to the second user and may not be updated. in step 14330, the updated passable world model is displayed on the second user's hardware device. it should be appreciated that this process of sending and receiving information from the cloud server is performed rapidly such that the second user can see the first user drawing the figure on the board of the conference room almost as soon as the first user performs the action. similarly, input from the second user is also received in steps 14320-14324, and sent to the cloud server and updated to the passable world model. this information may then be sent to the first user's device in steps 14314-14318. for example, assuming the second user's avatar appears to be sitting in the physical space of the conference room in london, any changes to the second user's avatar (which may or may not mirror the second user's actions/appearance) may also be transmitted to the first user, such that the first user is able to interact with the second user. in one example, the second user may create a virtual avatar resembling the user, or the avatar may take the form of a bee that hovers around the conference room in london. in either case, inputs from the second user (for example, the second user may shake his head in response to the drawings of the first user), are also transmitted to the first user such that the first user can gauge the second user's reaction. 
in this case, the received input may be based on facial recognition and changes to the second user's face may be sent to the passable world model, and then passed over to the first user's device such that the change to the avatar being projected in the conference room in london is seen by the first user. similarly, there may be many other types of input that are effectively passed back and forth between multiple users of the ar system. although the particular examples may change, all interactions between a user of the ar system and the passable world are similar to the process described above with reference to figure 143. while the above process flow diagram describes interaction between multiple users accessing and passing a piece of the passable world to each other, figure 144 is an example process flow diagram 14400 illustrating interaction between a single user and the ar system. the user may access and interact with various applications that require data retrieved from the cloud server. in step 14402, the ar system checks for input from the user. for example, the input may be visual, audio, sensory input, etc. indicating that the user requires some type of data. for example, the user may wish to look up information about an advertisement he may have just seen on a virtual television. in step 14404, the system determines if the user input is valid. if the user input is valid, in step 14406, the input is fed into the server. on the server side, when the user input is received in step 14408, appropriate data is retrieved from a knowledge base 14440 in step 14410. as described above, there may be multiple knowledge databases connected to the cloud server from which to retrieve data. in step 14412, the data is retrieved and transmitted to the user device requesting the data. back on the user device, the data is received from the cloud server in step 14414.
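the request/retrieval portion of figure 144 (steps 14402-14414) can be sketched as follows. the knowledge base is modeled as a plain dict, and the names (is_valid, fetch) and example keys are illustrative assumptions rather than the system's actual api.

```python
# Toy knowledge base; the real system may query multiple cloud databases.
KNOWLEDGE_BASE = {"advertisement:shoes": "brand, price, nearest store"}

def is_valid(user_input):
    # step 14404: reject empty or malformed requests
    return isinstance(user_input, str) and bool(user_input.strip())

def fetch(user_input):
    # steps 14406-14412: feed the input to the server, retrieve matching
    # data from the knowledge base, and transmit it back to the device
    if not is_valid(user_input):
        return None
    return KNOWLEDGE_BASE.get(user_input)
```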
in step 14416, the system determines whether the data needs to be displayed in the form of virtual content, and if it does, the data is displayed on the user hardware in step 14418. as discussed briefly above, many user scenarios may involve the ar system identifying real-world activities and automatically performing actions and/or displaying virtual content based on the detected real-world activity. for example, the ar system recognizes the user activity and then creates a user interface that floats around the user's frame of reference, providing useful information/virtual content associated with the activity. similarly, many other uses can be envisioned, some of which will be described in user scenarios below. having described the optics and the various system components of the ar system, some further applications of the ar system will now be discussed. the applications described below may have hardware and/or software components that may be separately installed onto the system, in some embodiments. in other embodiments, the system may be used in various industries and may need to be modified to achieve some of the embodiments below. it should be appreciated that the following embodiments are simplified for illustrative purposes and should not be read as limiting; many more complex embodiments may be envisioned.

privacy

since the ar system may continually capture data from a user's surroundings, there may be concerns of privacy. for example, the user wearing the ar device may walk into a confidential meeting space, or may be exposed to sensitive content (e.g., nudity, sexual content, etc.). thus, it may be advantageous to provide one or more mechanisms to help ensure privacy while using the ar system. in one implementation, one or more components of the ar system may include a visual indicator that indicates when information is being collected by the ar system.
for example, a head worn or mounted component may include one or more visual indicators (e.g., leds) that visually indicate when visual and/or audio information is being collected. for instance, a first led may be illuminated or may emit a first color when visual information is being collected by cameras carried by the head worn component. a second led may be illuminated or may emit a second color when audio information is being collected by microphones or audio transducers carried by the head worn component. additionally or alternatively, the ar system may be responsive to defined gestures from any person in a field of view of a camera or other optical sensor of the ar system. in particular, the ar system may selectively stop capturing images in response to detecting the defined gesture. thus, a person in the field of view of the ar user can selectively cause the ar system to stop capturing images simply by executing a gesture (e.g., hand gesture, arm gesture, facial gesture, etc.). in one or more embodiments, the ar system may be responsive to gestures of the person wearing the ar device. in other embodiments, the ar system may be responsive to gestures of others in a physical space or environment shared with the person wearing the ar system. in yet another embodiment, for privacy purposes, the user may register with an application associated with the ar system. this may allow the user more control over whether he or she is captured/stored in images/videos and renderings of other users of the system. a user registered with the ar system (or an application associated with the ar system) may have more privacy control than one who does not have an account with the system. for example, if a registered user does not wish to be captured by other ar systems of other users, the system may, on recognizing the person, stop capturing images of that particular user, or alternatively, blur out visual images associated with the person.
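the opt-out behavior described above might be sketched as a per-person capture filter: recognized people who registered and opted out are dropped or blurred, while everyone else is captured. the registry, the policy labels, and the string stand-in for image blurring are all illustrative assumptions.

```python
# Hypothetical opt-out registry: registered users choose a policy;
# unregistered people are absent and so get no special treatment.
PRIVACY_REGISTRY = {"alice": "no_capture", "bob": "blur"}

def filter_frame(detected_people, frame_regions):
    """Return the frame regions that may be kept, per each person's policy."""
    kept = {}
    for person, region in zip(detected_people, frame_regions):
        policy = PRIVACY_REGISTRY.get(person)  # None for unregistered people
        if policy == "no_capture":
            continue                   # drop the region entirely
        if policy == "blur":
            region = "<blurred>"       # stand-in for an actual image blur
        kept[person] = region
    return kept
```

note that an unregistered person falls through with no policy and is captured as-is, matching the lesser default privacy control discussed next.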
on the other hand, a person who has not registered with the ar system automatically has less control over privacy than one who has. thus, there may be a higher incentive to register with the ar system (or associated application). in another embodiment, the ar system may automatically implement safety controls based on a detected activity and/or recognized surroundings of the user. because the ar system is constantly aware of the user's surroundings and activities (e.g., through the fov cameras, eye cameras, sensors, etc.), the ar system may automatically go into a suspended mode when it detects particular activities or surroundings. for example, if the ar system determines that the user is about to occupy a particular room in the house (e.g., bathroom, child's room, a pre-designated confidential area, etc.), the ar system may automatically go into a suspended mode, and terminate capture of information, or selectively capture only basic information through the user's ar system. or, if the ar system determines that the user is engaged in a particular activity (e.g., driving, etc.), the ar system may automatically go into the suspended or "off" mode so as not to distract the user with any incoming messages or virtual content. similarly, many other safety and/or privacy controls may be implemented in other applications as well.

specific applications and examples of virtual rooms/spaces and user interfaces

the following section will go through various examples and applications of virtual rooms and/or spaces, utilizing the various embodiments of the ar systems discussed above in real-life practical applications. as previously discussed, an ar system may include one, or typically more, instances of individual ar systems. these individual ar systems typically include at least a head worn or head mounted component, which provides at least a visual augmented reality user experience, and typically an aural augmented reality experience.
as discussed in detail above, the ar systems also typically include a processor component. the processor component may be separate and distinct from the head worn or mounted component, for example a belt pack which is communicatively coupled (e.g., tethered, wireless) to the head worn or mounted component (e.g., figs. 4a-4d ). as also previously discussed, the ar system may optionally include one or more space or room based sensor systems (e.g., fig. 26 ). the space or room based sensor system may include one or more image capturing devices (e.g., cameras). cameras may be located to monitor a space, for instance a room. for example, cameras may be positioned in a number of corners in the room. the cameras may, for example, be very similar or even identical in structure to the forward facing cameras of the head worn or mounted component. thus, these cameras preferably capture 3d information, for instance as a light field. the cameras of the space or room based sensor system are typically fixed in space, in contrast to cameras of the head worn or mounted component. in one or more embodiments, there may be a space or room based sensor system for each of a plurality of spaces or rooms. as also previously discussed, the ar system may employ a plurality of object recognizers, which recognize objects (e.g., taxonomic recognition and/or specific recognition). the ar system can recognize a space based on object recognition of the structure and/or contents of the space. also, as previously discussed, the ar system may employ additional information (e.g., time, geographical coordinates (gps location information), compass direction, wireless networks, etc.) to identify a space. in one or more embodiments, the ar system may populate or render a virtual space (e.g., meta room) in a field of view of a user. for example, the individual ar systems may render or project virtual images to the retina of a user that impose on a user's view of a real world or physical space.
similarly, any other optical approach detailed above may be used. the ar system may be used for a wide variety of everyday applications. the ar system may be used while the user is at work, and may even help enhance the user's work product. also for example, the ar system may be used in training users (e.g., educational training, athletic training, job-related training, etc.). as a further example, the ar system may be used for entertainment (e.g., gaming). as yet a further example, the ar system may be used in assisting with exercise, for instance by providing instruction and/or motivation. for example, the ar system may render something for the user to chase (e.g., a world class runner), or a virtual character chasing the user (e.g., a t-rex). in one or more embodiments, the ar system may comprise additional application-specific components. for example, the ar system may be communicatively coupled to one or more optional sensor(s) (e.g., pedometer, motion sensor(s), heart rate sensor(s), breathing rate sensor(s), perspiration sensor(s), etc.). in one or more embodiments, the ar system may present motivational content as a game (e.g., a secret agent themed game). the ar system may also employ various types of totems (or objects that may be used to provide user input, as will be described in further detail below). in other words, the ar system may be used to provide a wide variety of augmented reality experiences, and may be used to enhance everyday experiences and/or assist in everyday tasks. the following disclosure will go through a series of such applications and/or embodiments. it should be appreciated that the embodiments described below are for illustrative purposes only, and should not be read as limiting.

rooms or virtual spaces

the following discussion addresses the concept of virtual rooms or virtual spaces. this discussion also addresses how a user navigates between virtual rooms or virtual spaces.
in one or more embodiments, a user may access specific tools and/or applications when in a virtual room or virtual space. the ar system provides for dynamic room mapping. for example, the ar system may map virtual spaces to physical locations, physical rooms or other physical spaces. mapping may be performed manually, semi-automatically, or automatically. the ar system provides a process for mapping and modifying a pre-existing room to a physical environment. the ar system provides a process for mapping multiple rooms in a physical space simultaneously. the ar system allows sharing, for example implementing co-located experiences. also for example, the ar system allows sharing specific apps, sharing entire rooms, and/or making items public or private. a number of example scenarios are discussed below. for example, a user may be working in a physical office space, and a message from a co-worker may arrive, prompting a virtual alert to the user. in another example, a user located in his/her living room may select a virtual room or space, or may change his/her environment from a virtual entertainment or media room to a virtual workout room or virtual office space. in another example, a user operating in one virtual room or space may open or otherwise access a specific application associated with a different room or space. for instance, a user may open or access a camera application from an entertainment or media room. as will be evident from the discussion herein, the ar system may implement a large number of other scenarios. a virtual room or virtual space is a convenient grouping or organization of virtual objects, virtual tools, applications, features and other virtual constructs (collectively, virtual content), which are renderable in the field of vision of a user. virtual rooms or virtual spaces may be defined in one or more different ways.
for example, virtual rooms or virtual spaces may be defined by: i) activity, goal or purpose; ii) location (e.g., work, home, etc.); iii) time of day, etc. users may define or create virtual rooms or virtual spaces to support understanding, ease of use, and/or search efficiency. in one or more embodiments, virtual rooms and/or spaces may be custom-defined by the user. in one or more embodiments, the ar system may provide a catalog or library of virtual rooms or virtual spaces that are predefined. for example, virtual rooms or spaces may be pre-populated with virtual content (e.g., virtual objects, virtual tools, and other virtual constructs, for instance applications, features, characters, text, digits, and other symbols) based on a theme. themes may be activity-based, location-based, time-based, intelligence-based, etc. the ar system provides a user interface that allows users to create or modify virtual rooms or virtual spaces, based on a set of preferences set by the user. the user may either design the room from scratch, or may modify or enhance a pre-defined virtual room or space. the virtual room may be modified by adding, removing or rearranging virtual content within the virtual room or space via a user interface of the wearable ar system. fig. 74a shows a user sitting in a physical office space 7402, and using a wearable ar system 7401 to experience a virtual room or virtual space in the form of a virtual office, at a first time, according to one illustrated embodiment. the physical office may include one or more physical objects, for instance walls, floor (not shown), ceiling (not shown), a desk and chair. as illustrated, the ar system renders a virtual room 7402, in which the user may perform occupation-related tasks. hence, the virtual office is populated with various virtual tools or applications useful in performing the user's job.
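as a rough sketch, a virtual room can be modeled as a grouping of virtual content that is pre-populated from a theme catalog and then modified by adding, removing or rearranging items. the theme catalog and the class below are illustrative assumptions, not the ar system's actual data model.

```python
# Hypothetical catalog of predefined, theme-based rooms.
THEME_CATALOG = {
    "office":        ["email", "web_browser", "word_processor"],
    "entertainment": ["virtual_tv", "second_screen", "replay_tablet"],
}

class VirtualRoom:
    """A virtual room as an ordered grouping of virtual content."""

    def __init__(self, theme=None):
        # pre-populate from the catalog, or start from scratch
        self.content = list(THEME_CATALOG.get(theme, []))

    def add(self, item):
        self.content.append(item)

    def remove(self, item):
        self.content.remove(item)

    def rearrange(self, item, position):
        self.content.remove(item)
        self.content.insert(position, item)
```

a room built this way could then be handed to the rendering layer and mapped onto the physical space around the user.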
the virtual tools or applications may for example include various virtual objects or other virtual content, for instance two-dimensional drawings or schematics, two-dimensional images or photographs, and/or a three-dimensional architectural model, as shown in fig. 74a . the virtual tools or applications may include tools such as a ruler, caliper, compass, protractor, templates or stencils, etc. the virtual tools or applications may for example include interfaces for various software applications (e.g., email, a web browser, word processor software, presentation software, spreadsheet software, voicemail software, etc.). as shown in fig. 74a , some virtual objects may be stacked or overlaid with respect to one another. the user may select a desired virtual object with a corresponding gesture. for instance, the user may page through documents or images with a finger flicking gesture to iteratively move through the stack of virtual objects. some of the virtual objects may take the form of menus, selection of which may cause rendering of a submenu. as shown in fig. 74a , the user is shown a set of virtual content that the user may view through the ar device 7401. in the illustrated embodiment, the user may utilize hand gestures to build and/or enhance the virtual architectural model. thus, rather than having to build a model from physical structures, the architectural model may simply be viewed and constructed in 3d, thereby providing a more realistic, and easily modifiable way of visualizing a structure. referring now to fig. 74b , the physical office of fig. 74b is identical to that of fig. 74a , and the virtual office of fig. 74b is similar to the virtual office of fig. 74a . identical or similar elements are identified using the same reference numbers as in fig. 74a . only significant differences are discussed below. as shown in fig. 74b , the ar system may render a virtual alert or notification to the user in the virtual office. 
for example, the ar system may render a visual representation of a virtual alert or notification in the user's field of view. the ar system may additionally or alternatively render an aural representation of a virtual alert or notification. fig. 75 illustrates another example virtual room according to one or more embodiments. as shown in the virtual room 7500 of fig. 75 , the user is wearing a wearable ar system 7501, and is experiencing one or more virtual elements in a physical living room. here, the living room is populated with one or more virtual elements, such as the virtual architectural model, similar to that of figs. 74a and 74b . for example, the user may be at home, but may want to work on the architectural model. therefore, the user may have the ar system render a latest saved version of the architectural model on a physical table of the living room, such that the virtual architectural model sits on top of the table, as shown in fig. 75 . the physical living room may include one or more physical objects, for instance walls, floor, ceiling, a coffee table and sofa. as figs. 74a-b and 75 illustrate, a virtual office may be portable, being renderable in various different physical environments. it thus may be particularly advantageous if the virtual office, in a subsequent use, renders identically to its appearance or layout in the most recent previous use or rendering. thus, in each subsequent use or rendering, the same virtual objects will appear and the various virtual objects may retain their same spatial positions relative to one another as in the most recent previous rendering of the virtual office. in some implementations, this consistency or persistence of appearance or layout from one use to the next subsequent use may be independent of the physical environments in which the virtual space is rendered.
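the layout persistence described above might be sketched by storing object positions relative to an anchor, so the same arrangement re-renders around a new anchor in any physical environment. the 2d anchor/offset representation is an illustrative assumption (a real system would use full 3d poses).

```python
def save_layout(objects):
    """objects: {name: (x, y)} absolute positions at save time.

    Returns positions relative to the first object, used as the anchor.
    """
    names = list(objects)
    anchor = objects[names[0]]
    return {n: (p[0] - anchor[0], p[1] - anchor[1])
            for n, p in objects.items()}

def restore_layout(saved, new_anchor):
    """Re-render the saved layout around an anchor in a new physical room."""
    return {n: (new_anchor[0] + dx, new_anchor[1] + dy)
            for n, (dx, dy) in saved.items()}
```

because only offsets are stored, the objects keep the same spatial positions relative to one another wherever the room is restored.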
thus, moving from a first physical environment (e.g., physical office space) to a second physical environment (e.g., physical living room) will not affect an appearance or layout of the virtual office. fig. 76 shows another scenario 7600 comprising a user using a wearable ar system 7601. in the illustrated embodiment, the user is again in his/her own real living room, but is experiencing a few virtual elements (e.g., virtual tv screen 7604, virtual advertisement for shoes 7608, virtual mini-football game 7610, etc.). as shown in fig. 76 , the virtual objects are placed in relation to the real physical objects of the room (e.g., the desk, the wall, etc.). the physical living room may include one or more physical objects, for instance walls, floor, ceiling, a coffee table and sofa. for simplicity, the physical living room is illustrated as being identical to that of fig. 75 . hence, identical or similar elements are identified using the same reference numbers as in fig. 75 , and their discussion will not be repeated in the interest of brevity. as illustrated, the ar system renders a virtual room or virtual space in the form of a virtual entertainment or media room, in which the user relaxes and/or enjoys entertainment or consumes media (e.g., tv programs, movies, games, music, reading, etc.). hence, the virtual entertainment or media room is populated with various virtual tools or applications. the ar system 7601 may render the virtual entertainment or media room with a virtual television or primary screen 7604. the virtual television or primary screen can be rendered to any desired size. the virtual television or primary screen could even extend beyond the confines of the physical room. the ar system may render the virtual television or primary screen to replicate any known or yet to be invented physical television.
thus, the ar system may render the virtual television or primary screen to replicate a period or classic television from the 1950s, 1960s, or 1970s, or may replicate any current television. for example, the virtual television or primary screen may be rendered with the outward appearance of a specific make and model and year of a physical television. also for example, the virtual television or primary screen may be rendered with the same picture characteristics of a specific make and model and year of a physical television. likewise, the ar system may render sound to have the same aural characteristics as sound from a specific make and model and year of a physical television. the ar system also renders media content to appear as if the media content was being displayed by the virtual television or primary screen. the media content may take any of a large variety of forms, including television programs, movies, video conferences or calls, etc. the ar system may render the virtual entertainment or media room with one or more additional virtual televisions or secondary screens. additional virtual televisions or secondary screens may allow the user to enjoy second screen experiences. for instance, a first secondary screen 7610 may allow the user to monitor a status of a fantasy team or player in a fantasy league (e.g., fantasy football league), including various statistics for players and teams. additionally or alternatively, the secondary screen 7610 may allow the user to monitor other activities, for example activities tangentially related to the media content on the primary screen. for instance, the secondary screen 7610 may display a listing of scores in games from around a conference or league while the user watches one of the games on the primary screen. also for instance, the secondary screen 7610 may display highlights from games from around a conference or league, while the user watches one of the games on the primary screen.
one or more of the secondary screens may be stacked as illustrated in fig. 76 , allowing a user to select a secondary screen to bring to a top, for example via a gesture. for instance, the user may use a gesture to toggle through the stack of secondary screens in order, or may use a gesture to select a particular secondary screen to bring to a foreground relative to the other secondary screens. the ar system may render the virtual entertainment or media room with one or more three-dimensional replay or playback tablets. the three-dimensional replay or playback tablets may replicate, in miniature, a pitch or playing field of a game the user is watching on the primary display, for instance providing a "god's eye view." the 3d replay or playback tablets may, for instance, allow the user to enjoy on-demand playback or replay of media content that appears on the primary screen. this may include user selection of portions of the media content to be played back or replayed. this may include user selection of special effects, for example slow motion replay, stopping or freezing replay, or speeding up or fast motion replay to be faster than actual time. for example, the user may use one or more gestures to add annotations marking a receiver's route during a replay of a play in a football game, or to mark a blocking assignment for a lineman or back. the 3d replay or playback tablet may even allow a user to add a variation (e.g., a different call) that modifies how a previous play being reviewed plays out. for example, the user may specify a variation in a route run by a receiver, or a blocking assignment assigned to a lineman or back. the ar system 7601 may take the fundamental parameters of the actual play, modify one or more parameters, and then execute a game engine on the parameters to play out a previous play executed in an actual physical game but with the user modification(s). for example, the user may track an alternative route for a wide receiver.
the ar system may make no changes to the actions of the players, except the selected wide receiver, the quarterback, and any defensive players who would cover the wide receiver. an entire virtual fantasy play may be played out, which may even produce a different outcome than the actual play. this may occur, for example, during an advertising break or time out during the game. this allows the user to test their abilities as an armchair coach or player. a similar approach could be applied to other sports. for example, the user may make a different play call in a replay of a basketball game, or may call for a different pitch in a replay of a baseball game, to name just a few examples. use of a game engine allows the ar system to introduce an element of statistical chance, but within the confines of what would be expected in real games. the ar system may render additional virtual content, for example 3d virtual advertisements. the subject matter or content of the 3d virtual advertisements 7608 may, for example, be based at least in part on the content of what is being played or watched on the virtual television or primary screen. the ar system may render virtual controls. for example, the ar system may render virtual controls mapped in the user's field of vision so as to appear to be within arm's reach of the user. the ar system allows users to navigate from virtual space to virtual space. for example, a user may navigate between a virtual office space ( figs. 74a and 74b ) and a virtual entertainment or media space ( figs. 75 and 76 ). as discussed herein, the ar system may be responsive to certain user input to allow navigation directly from one virtual space to another virtual space, or to toggle or browse through a set of available virtual spaces. the set of virtual spaces may be specific to a user, specific to an entity to which a user belongs, and/or may be system wide or generic to all users.
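the "what-if" replay described above — copy the recorded parameters of an actual play, modify one or more of them, and let a game engine play the result out with a bounded element of statistical chance — might be sketched as the following toy. the parameter names, the completion-probability formula, and the seeded random generator are all invented for illustration.

```python
import random

def replay(play_params, overrides, seed=None):
    """Play out a recorded play with user modifications applied."""
    params = dict(play_params)
    params.update(overrides)        # e.g. an alternative receiver route
    rng = random.Random(seed)       # reproducible statistical chance
    # toy engine: deeper routes are harder to complete, within real-game bounds
    catch_prob = max(0.1, 0.9 - 0.05 * params["route_depth_yards"] / 5)
    completed = rng.random() < catch_prob
    return {"completed": completed, "params": params}
```

running the same seed reproduces one outcome; varying the seed gives the element of chance while the probability bounds keep outcomes within what would be expected in real games.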
to allow user selection of and/or navigation between virtual rooms or virtual spaces, the ar system may be responsive to one or more of, for instance, gestures, voice commands, eye tracking, and/or selection of physical buttons, keys or switches, for example carried by a head worn component, belt pack or other physical structure of the individual ar system. the user input may be indicative of a direct selection of a virtual space or room, or may cause a rendering of a menu or submenus to allow user selection of a virtual space or room. fig. 77 shows another scenario 7700 in which the user is sitting in a physical living room space similar to the scenario of fig. 76 , and experiencing virtual elements in his living room. in the current embodiment, the user uses hand gestures to go through various virtual user interfaces, as denoted by the user's hand moving from left to right in a swiping motion. as illustrated in fig. 77 , the ar system may render a user interface tool which provides a user with a representation of choices of virtual rooms or virtual spaces, and possibly a position of a currently selected virtual room or virtual space in a set of virtual rooms or virtual spaces available to the user. as illustrated, the representation takes the form of a line of marks or symbols, with each mark representing a respective one of the virtual rooms or virtual spaces available to the user. a currently selected one of the virtual rooms or virtual spaces is visually emphasized, to assist the user in navigating forward or backward through the set. figs. 78a and 78b show similar scenarios 7802 and 7804 respectively. as shown in figs. 78a and 78b , the scene is set in the living room of the user wearing an ar system 7801, having a set of virtual elements (e.g., virtual screen, advertisement, etc.). similar to the embodiment illustrated in fig. 77 , the user uses hand gestures to interact with the ar system. as shown in fig.
78a , the user moves both hands in a recognized gesture to open up additional functions or applications. as shown in fig. 78b , in response to the user's gestures, additional virtual interface elements (or "apps") may be rendered in the user's view. as illustrated in fig. 78a , the user executes a first gesture (illustrated by a double headed arrow) to open an icon based cluster user interface virtual construct ( fig. 78b ). the gesture may include movement of the user's arms and/or hands or other parts of the user's body, for instance head pose or eyes. alternatively, the user may use spoken commands to access the icon based cluster user interface virtual construct ( fig. 78b ). if a more comprehensive menu is desired, the user may use a different gesture. although the above examples use hand gestures for illustrative purposes, any other type of user input may be similarly used (e.g., eye gestures, voice commands, totems, etc.). as illustrated in fig. 78b , the icon based cluster user interface virtual construct 7808 provides a set of small virtual representations of a variety of different virtual rooms or spaces from which a user may select. this virtual user interface 7808 may provide quick access to virtual rooms or virtual spaces via representations of the virtual rooms or virtual spaces. the small virtual representations are themselves essentially non-functional, in that they do not include functional virtual content; their only function is to cause a rendering of a functional representation of the corresponding virtual room or space when selected. the set of small virtual representations may correspond to a set or library of virtual rooms or spaces available to the particular user.
where the set includes a relatively large number of choices, the icon based cluster user interface virtual construct may, for example, allow a user to scroll through the choices. for example, in response to a second gesture, an ar system may re-render the icon based cluster user interface virtual construct with the icons shifted in a first direction (e.g., toward the user's right), with one icon falling out of a field of view (e.g., the right-most icon) and a new icon entering the field of view. the new icon corresponds to a respective virtual room or virtual space that was not displayed, rendered or shown in the temporally most immediately preceding rendering of the icon based cluster user interface virtual construct. a third gesture may, for example, cause the ar system to scroll the icons in the opposite direction (e.g., toward the user's left). in response to a user selection of a virtual room or virtual space, the ar system may render virtual content associated with the virtual room or virtual space to appear in the user's field of view. the virtual content may be mapped or "glued" to the physical space. for example, the ar system may render some or all of the virtual content positioned in the user's field of view to appear as if the respective items or instances of virtual content are on various physical surfaces in the physical space, for instance walls, tables, etc. also for example, the ar system may render some or all of the virtual content positioned in the user's field of view to appear as if the respective items or instances of virtual content are floating in the physical space, for instance within reach of the user. fig.
79a shows a user sitting in a physical living room space 7902, and using an ar system 7901 to experience a virtual room or virtual space in the form of a virtual entertainment or media room (similar to the above embodiments), and the user executing gestures to interact with a user interface virtual construct 7904, according to one illustrated embodiment. as illustrated in fig. 79a , the ar system 7901 may render a functional group or pod user interface virtual construct 7904 , so as to appear in a user's field of view, preferably appearing to reside within reach of the user. the pod user interface virtual construct 7904 includes a plurality of virtual room or virtual space based applications, which conveniently provides access from one virtual room or virtual space to functional tools and applications which are logically associated with another virtual room or virtual space. the pod user interface virtual construct 7904 may form a mini work station for the user. the ar system detects user interactions with the pod user interface virtual construct or the virtual content of the virtual room or space. for example, the ar system may detect swipe gestures, for navigating through context specific rooms. the ar system may render a notification or dialog box 7908, for example, indicating that the user is in a different room. the notification or dialog box 7908 may query the user with respect to what action the user would like the ar system to take (e.g., close existing room and automatically map contents of room, automatically map contents of room to existing room, or cancel). fig. 79b shows a user sitting in a physical living room space, and using an ar system to experience a virtual room or virtual space in the form of a virtual entertainment or media room, the user executing gestures to interact with a user interface virtual construct, according to one illustrated embodiment. similar to fig.
79a , the ar system 7901 may render a functional group or pod user interface virtual construct 7904, so as to appear in a user's field of view, preferably appearing to reside within reach of the user. as illustrated in fig. 79b , the ar system 7901 detects user interactions with the pod user interface virtual construct 7904 or the virtual content of the virtual room or space. for example, the ar system may detect a swipe or pinch gesture, for navigating to and opening context specific virtual rooms or virtual spaces. the ar system may render a visual effect to indicate which of the representations is selected. fig. 79c shows a user sitting in a physical living room space, and using an ar system 7901 to experience a virtual room or virtual space in the form of a virtual entertainment or media room, the user executing gestures to interact with a user interface virtual construct, according to one illustrated embodiment. as illustrated in fig. 79c , the ar system may render a selected application in the field of view of the user, in response to a selection of a representation illustrated in fig. 79b . for example, the user may select a social networking application, a web browsing application, or an electronic mail (email) application from, for example, a virtual work space, while viewing a virtual entertainment or media room or space. fig. 79d shows another scene 7908 in which the user is sitting in a physical living room space, and using an ar system 7901 to experience a virtual room or virtual space in the form of a virtual entertainment or media room, the user executing gestures to interact with a user interface virtual construct, according to one illustrated embodiment. as illustrated in fig. 79d , the user may perform a defined gesture, which serves as a hot key for a commonly used application (e.g., camera application). the ar system detects the user's gesture, interprets the gesture, and opens or executes the corresponding application.
for example, the ar system may render the selected application 7920 or a user interface of the selected application in the field of view of the user, in response to the defined gesture. in particular, the ar system may render a fully functional version of the selected application or application user interface to the retina of the eyes of the user, for example so as to appear within arm's reach of the user. the camera application 7920 may include a user interface that allows the user to cause the ar system to capture images or image data. for example, the camera application 7920 may allow the user to cause outward facing cameras on a body or head worn component of an individual ar system to capture images or image data (e.g., 4d light field) of a scene that is in a field of view of the outward facing camera(s) and/or the user. defined gestures are preferably intuitive. for example, an intuitive two handed pinch type gesture for opening a camera application or camera user interface is illustrated in fig. 79d . the ar system may recognize other types of gestures. the ar system may store a catalog or library of gestures, which maps gestures to respective applications and/or functions. gestures may be defined for all commonly used applications. the catalog or library of gestures may be specific to a particular user. alternatively or additionally, the catalog or library of gestures may be specific to a specific virtual room or virtual space. alternatively, the catalog or library of gestures may be specific to a specific physical room or physical space. alternatively or additionally, the catalog or library of gestures may be generic across a large number of users and/or a number of virtual rooms or virtual spaces. as noted above, gestures are preferably intuitive, particularly in relation to the particular function, application or virtual content to which the respective gesture is logically associated or mapped. additionally, gestures should be ergonomic.
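the catalog of gestures described above amounts to a mapping from recognized gestures to applications or functions, with room-specific and user-specific catalogs taking precedence over a generic one. a minimal sketch under those assumptions; the gesture and application names are hypothetical:

```python
# A sketch of a gesture catalog: a mapping from recognized gestures to
# applications or functions. Resolution prefers a room-specific catalog,
# then a user-specific one, then a generic catalog shared across users.
# All gesture and application names here are illustrative assumptions.

GENERIC_CATALOG = {
    "two_hand_pinch": "camera",       # e.g., the hot key gesture of fig. 79d
    "spread_hands": "expand_menu",
}

def resolve_gesture(gesture, user_catalog=None, room_catalog=None):
    """Return the application/function mapped to a gesture, or None."""
    for catalog in (room_catalog, user_catalog, GENERIC_CATALOG):
        if catalog and gesture in catalog:
            return catalog[gesture]
    return None
```

under this sketch, a virtual room could remap a gesture (e.g., to a room-specific tool) without disturbing the user's generic mappings.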
that is, gestures should be comfortable to perform for users of a wide variety of body sizes and abilities. gestures also preferably involve a fluid motion, for instance an arm sweep. defined gestures are preferably scalable. the set of defined gestures may further include gestures which may be discreetly performed, particularly where discreetness would be desirable or appropriate. on the other hand, some defined gestures should not be discreet, but rather should be demonstrative, for example gestures indicating that a user intends to capture images and/or audio of others present in an environment. gestures should also be culturally acceptable, for example over a large range of cultures. for instance, certain gestures which are considered offensive in one or more cultures should be avoided. a number of proposed gestures are set out in table a, below.

table a
- swipe to the side (slow)
- swipe to the side (fast)
- spread hands apart
- bring hands together
- small wrist movements (as opposed to large arm movements)
- touch body in a specific place (arm, hand, etc.)
- wave
- pull hand back
- push forward
- flip hand over
- close hand
- pinch - thumb to forefinger
- pause (hand, finger, etc.)
- stab (point)

referring now to fig. 79e , another scenario 7910 is illustrated showing a user sitting in a physical living room space, and using an ar system 7901 to experience a virtual room or virtual space in the form of a virtual entertainment or media room, the user executing gestures to interact with a user interface virtual construct, according to one illustrated embodiment. as illustrated in fig. 79e , the ar system 7901 renders a comprehensive virtual dashboard menu user interface, for example rendering images to the retina of the user's eyes.
the virtual dashboard menu user interface may have a generally annular layout or configuration, at least partially surrounding the user, with various user selectable virtual icons spaced to be within arm's reach of the user. the ar system detects the user's gesture or interaction with the user selectable virtual icons of the virtual dashboard menu user interface, interprets the gesture, and opens or executes a corresponding application. for example, the ar system may render the selected application or a user interface of the selected application in the field of view of the user, in response to the defined gesture. for example, the ar system may render a fully functional version of the selected application or application user interface to the retina of the eyes of the user. as illustrated in fig. 79e , the ar system may render media content where the application is a source of media content. the ar system may render the application, application user interface or media content to overlie other virtual content. for example, the ar system may render the application, application user interface or media content to overlay a display of primary content on a virtual primary screen being displayed in the virtual room or space (e.g., virtual entertainment or media room or space). fig. 80a shows yet another scenario 8002 illustrating a user sitting in a physical living room space, and using an ar system 8001 to experience a first virtual décor (e.g., aesthetic skin or aesthetic treatment), the user executing gestures to interact with a user interface virtual construct, according to one illustrated embodiment. the ar system 8001 may allow a user to change or modify (e.g., re-skin) a virtual décor of a physical room or physical space. for example, as illustrated in fig. 80a , a user may utilize a gesture to bring up a first virtual décor, for example a virtual fireplace with a virtual fire and first and second virtual pictures.
the first virtual décor (e.g., first skin) is mapped to the physical structures of the physical room or space (e.g., physical living room). as also illustrated in fig. 80a , the ar system may render a user interface tool which provides a user with a representation of choices of virtual décor, and possibly a position of a currently selected virtual décor in a set of virtual décor available to the user. as illustrated, the representation takes the form of a line of marks or symbols, with each marking representing a respective one of the virtual décor available to the user. a currently selected one of the virtual décor is visually emphasized, to assist the user in navigating forward or backward through the set. the set of virtual décor may be specific to the user, specific to a physical room or physical space, or may be shared by two or more users. fig. 80b shows another scenario 8004 in which the user executes gestures to interact with a user interface virtual construct, according to one illustrated embodiment. as illustrated in fig. 80b , a user may utilize a gesture to bring up a second virtual décor, different from the first virtual décor. the second virtual décor may, for example, replicate a command deck of a spacecraft (e.g., starship) with a view of a planet, technical drawings or illustrations of the spacecraft, and a virtual lighting fixture or luminaire. the gesture to bring up the second virtual décor may be identical to the gesture to bring up the first virtual décor, the user essentially toggling, stepping or scrolling through a set of defined virtual décors for the physical room or physical space (e.g., physical living room). alternatively, each virtual décor may be associated with a respective gesture. fig. 
80c illustrates another scenario 8006 showing the user sitting in a physical living room space, and using an ar system 8001 to experience a third virtual décor (e.g., aesthetic skin or aesthetic treatment), the user executing gestures to interact with a user interface virtual construct, according to one illustrated embodiment. as illustrated in fig. 80c , a user may gesture to bring up a third virtual décor, different from the first and the second virtual décors. the third virtual décor may, for example, replicate a view of a beach scene and a different virtual picture. the gesture to bring up the third virtual décor may be identical to the gesture to bring up the first and the second virtual décors, the user essentially toggling, stepping or scrolling through a set of defined virtual décors for the physical room or physical space (e.g., physical living room). alternatively, each virtual décor may be associated with a respective gesture. fig. 81 shows yet another scenario 8100 in which a user of an ar system 8102 experiences another virtual room space in the form of a virtual entertainment or media room, the user executing gestures to interact with a user interface virtual construct, according to one illustrated embodiment. as illustrated in fig. 81 , the ar system 8101 may render a hierarchical menu user interface virtual construct 8111 including a plurality of virtual tablets or touch pads, so as to appear in a user's field of view, preferably appearing to reside within reach of the user. these allow a user to navigate a primary menu to access user defined virtual rooms or virtual spaces, which are a feature of the primary navigation menu. the various functions or purposes of the virtual rooms or virtual spaces may be represented through icons, as shown in fig. 81 . fig.
82 shows another scenario 8200 in which a user of an ar system 8201 interacts with a virtual room or virtual space in the form of a virtual entertainment or media room, the user executing gestures to interact with a user interface virtual construct to provide input by proxy, according to one illustrated embodiment. as illustrated in fig. 82 , the ar system may render a user interface virtual construct 8211 including a plurality of user selectable virtual elements, so as to appear in a user's field of view. the user manipulates a totem 8213 to interact with the virtual elements of the user interface virtual construct 8211. the user may, for example, point a front of the totem 8213 at a desired element. the user may also interact with the totem 8213, for example by tapping or touching on a surface of the totem, indicating a selection of the element at which the totem is pointing or aligned. the ar system 8201 detects the orientation of the totem and the user interactions with the totem, interpreting these as a selection of the element at which the totem is pointing or aligned. the ar system then executes a corresponding action, for example opening an application, opening a submenu, or rendering a virtual room or virtual space corresponding to the selected element. the totem 8213 may replicate a remote control, for example remote controls commonly associated with televisions and media players. in some implementations, the totem 8213 may be an actual remote control for an electronic device (e.g., television, media player, media streaming box), however the ar system may not actually receive any wireless communications signals from the remote control. the remote control may even not have batteries, yet still function as a totem since the ar system relies on images that capture position, orientation and interactions with the totem (e.g., remote control). figs.
83a and 83b show scenarios 8302 and 8304 illustrating a user sitting in a physical living room space, and using an ar system 8301 to experience a virtual room or virtual space in the form of a virtual entertainment or media room, the user executing gestures to interact with a user interface virtual construct to provide input, according to one illustrated embodiment. as illustrated in fig. 83a , the ar system 8301 may render a user interface virtual construct including an expandable menu icon that is always available. the ar system 8301 may consistently render the expandable menu icon in a given location in the user's field of view, or preferably in a peripheral portion of the user's field of view, for example an upper right corner. alternatively, ar system 8301 may consistently render the expandable menu icon 8311 in a given location in the physical room or physical space. as illustrated in fig. 83b , the user may gesture at or toward the expandable menu icon 8311 to expand the expandable menu construct 8312. in response, the ar system may render the expanded expandable menu construct 8312 to appear in a field of view of the user. the expandable menu construct 8312 may expand to reveal one or more virtual rooms or virtual spaces available to the user. the ar system 8301 may consistently render the expandable menu in a given location in the user's field of view, or preferably in a peripheral portion of the user's field of view, for example an upper right corner. alternatively, the ar system 8301 may consistently render the expandable menu 8311 in a given location in the physical room or physical space. fig. 84a shows another scenario 8402 illustrating a user of an ar system 8401 experiencing a virtual décor, and the user executing pointing gestures to interact with a user interface virtual construct, according to one illustrated embodiment. as illustrated in fig. 84a , the ar system 8401 may render a user interface tool which includes a number of pre-mapped menus. 
for instance, the ar system 8401 may render a number of poster-like virtual images 8412 corresponding to respective pieces of entertainment or media content (e.g., movies, sports events), from which the user can select via one or more pointing gestures. the ar system 8401 may render the poster-like virtual images 8412 to, for example, appear to the user as if hanging or glued to a physical wall of the living room, as shown in fig. 84a . the ar system 8401 detects the user's gestures, for example pointing gestures which may include pointing a hand or arm toward one of the poster-like virtual images. the ar system recognizes the pointing gesture or projection based proxy input, as a user selection intended to trigger delivery of the entertainment or media content which the poster-like virtual image represents. the ar system 8401 may render an image of a cursor, with the cursor appearing to be projected toward a position in which the user gestures, in one or more embodiments. fig. 84b shows another scenario 8404 illustrating a user of the ar system 8401 interacting with the poster virtual images 8412, similar to that of fig. 84a . in the illustrated embodiment, the user interacts with the poster virtual images 8412 through gestures 8416. fig. 84c shows another scenario 8406 showing a user of an ar system 8401 experiencing a selected (e.g., based on gestures 8416 of fig. 84b ) piece of entertainment or media content, the user executing touch gestures to interact with a user interface virtual construct, according to one illustrated embodiment. as illustrated in fig. 84c , in response to a user selection, the ar system 8401 renders a display 8420 of the selected entertainment or media content, and/or associated virtual menus (e.g., high level virtual navigation menu, for instance a navigation menu that allows selection of primary feature, episodes, or extras materials). as illustrated in fig.
84c , the display of the selected entertainment or media content may replace at least a portion of the first virtual décor. as illustrated in fig. 84c , in response to the user selection, the ar system may also render a virtual tablet type user interface tool, which provides a more detailed virtual navigation menu 8422 than the high level virtual navigation menu. the more detailed virtual navigation menu 8422 may include some or all of the menu options of the high level virtual navigation menu, as well as additional options (e.g., retrieve additional content, play interactive game associated with media title or franchise, scene selection, character exploration, actor exploration, commentary). for instance, the ar system may render the detailed virtual navigation menu to, for example, appear to the user as if sitting on a top surface of a table, within arm's reach of the user.

user experience retail examples

figs. 89a-89j illustrate an ar system implemented retail experience, according to one illustrated embodiment. as illustrated, a mother and daughter each wearing respective individual ar systems (8901 and 8903 respectively) receive an augmented reality experience 8902 while shopping in a retail environment, for example a supermarket. as explained herein, the ar system may provide entertainment in addition to facilitating the shopping experience. for example, the ar system may render virtual content, for instance virtual characters which may appear to jump from a box or carton, and/or offer virtual coupons for selected items. the ar system may render games, for example games based on locations throughout the store and/or based on items on a shopping list, list of favorites, or a list of promotional items. the augmented reality environment encourages children to play, while moving through each location at which a parent or accompanying adult needs to pick up an item.
in another embodiment, the ar system may provide information about food choices, and may help users with their health/weight/lifestyle goals. the ar system may render the calorie count of various foods while the user is consuming them, thus educating the user on his/her food choices. if the user is consuming unhealthy food, the ar system may warn the user about the food so that the user is able to make an informed choice. the ar system may subtly render virtual coupons, for example using radio frequency identification (rfid) transponders and communications. the ar system may render visual effects tied or proximately associated with items, for instance causing a glowing effect around a box to indicate that there is metadata associated with the item. the metadata may also include or link to a coupon for a discount or rebate on the item. the ar system may detect user gestures, and may, for example, unlock metadata in response to defined gestures. the ar system may recognize different gestures for different items. for example, as explained herein, a virtual animated creature may be rendered so as to appear to pop out of a box holding a coupon for the potential purchaser or customer. for example, the ar system may render virtual content that makes a user perceive a box opening. the ar system allows advertising creation and/or delivery at the point of customer or consumer decision. the ar system may render virtual content which replicates a celebrity appearance. for example, the ar system may render a virtual appearance of a celebrity chef at a supermarket. the ar system may render virtual content which assists in cross-selling of products. for example, one or more virtual effects may cause a bottle of wine to recommend a cheese that goes well with the wine. the ar system may render visual and/or aural effects which appear to be proximate the cheese, in order to attract a shopper's attention.
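the glow-and-unlock behavior described above can be sketched simply: items detected in the environment may carry metadata (such as a linked coupon), a glow effect marks the items that have it, and an item-specific gesture unlocks it. the data shapes, item names, and gesture names below are all illustrative assumptions, not taken from the patent:

```python
# Hypothetical sketch of item metadata: detected items (e.g., via rfid)
# may carry metadata such as coupons; a glow effect is rendered around
# items that have metadata, and a per-item gesture unlocks it.

ITEM_METADATA = {
    "cereal_box": {
        "unlock_gesture": "flick_of_wrist",
        "content": {"coupon": "10% off", "character": "friendly monster"},
    },
}

def should_glow(item):
    """True if a glow effect should be rendered around this item,
    i.e., the item carries metadata."""
    return item in ITEM_METADATA

def unlock_metadata(item, gesture):
    """Return the item's metadata only if the item-specific unlock
    gesture matches; otherwise None."""
    entry = ITEM_METADATA.get(item)
    if entry and gesture == entry["unlock_gesture"]:
        return entry["content"]
    return None
```

keying the unlock gesture per item is what lets the system "recognize different gestures for different items", as the passage describes.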
the ar system may render one or more virtual effects in the field of view of the user that cause the user to perceive the cheese recommending certain crackers. the ar system may render virtual friends who may provide opinions or comments regarding the various products (e.g., wine, cheese, crackers). the ar system may render virtual effects within the user's field of view which are related to a diet the user is following. for example, the effects may include an image of a skinny version of the user, which is rendered in response to the user looking at a high calorie product. this may include an aural reminder regarding the diet. in particular, fig. 89a illustrates a scenario 8902 in which a mother and daughter enjoy an augmented reality experience at a grocery store. the ar systems (8901 and 8903) may recognize the presence of a shopping cart or a hand on the shopping cart, and may determine a location of the user and/or shopping cart. based on this detected location, in one or more embodiments, the ar system may render a virtual user interface 8932 tethered to the handle of the shopping cart as shown in fig. 89a . in one or more embodiments, the virtual user interface 8932 may be visible to both ar systems 8901 and 8903, or simply to the ar system 8901 of the mother. in the illustrated embodiment, a virtual coupon 8934 is also displayed (e.g., floating virtual content, tethered to a wall, etc.). in one or more embodiments, the grocery store may develop applications such that virtual coupons are strategically displayed to the user at various physical locations of the grocery store, such that they are viewable by users of the ar system. applications may, for example, include a virtual grocery list. the grocery list may be organized by user defined criteria (e.g., dinner recipes).
the virtual grocery list may be generated before the user leaves home, or may be generated at some later time, or even generated on the fly, for example in cooperation with one of the other applications. the applications may, for example, include a virtual coupon book, which includes virtual coupons redeemable for discounts or rebates on various products. the applications may, for example, include a virtual recipe book, which includes various recipes, table of contents, indexes, and ingredient lists. selection of a virtual recipe may cause the ar system to update the grocery list. in some implementations, the ar system may update the grocery list based on a knowledge of the various ingredients the user already has at home, whether in a refrigerator, freezer or cupboard. the ar system may collect this information throughout the day as the user works in the kitchen of their home. the applications may, for example, include a virtual recipe builder. the recipe builder may build recipes around defined ingredients. for example, the user may enter a type of fish (e.g., salmon), and the recipe builder may generate a recipe that uses the ingredient. selection of a virtual recipe generated by the recipe builder may cause the ar system to update the grocery list. in some implementations, the ar system may update the grocery list based on a knowledge of existing ingredients. the applications may, for example, include a virtual calculator, which may maintain a running total of cost of all items in the shopping cart. fig. 89b shows another scenario 8904 in which the mother and the daughter with ar systems (8901 and 8903 respectively) are enjoying an augmented reality experience in the produce section of the grocery store. the mother weighs a physical food item on a scale. a virtual content box 8938 may be displayed next to the scale to provide more information about the product, as shown in fig. 89b .
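two of the behaviors described here reduce to simple logic: selecting a recipe adds its missing ingredients to the virtual grocery list (skipping ingredients the system knows are already at home), and the virtual calculator accumulates a weighed item's cost (price per pound times weight) into the running total. a minimal sketch under assumed data shapes; all item names are hypothetical:

```python
# Sketch of the grocery-list update and running-total behaviors.
# Data shapes (lists of item-name strings, prices as floats) are assumptions.

def update_grocery_list(grocery_list, recipe_ingredients, at_home):
    """Add recipe ingredients that are not already on the list and not
    known (from the system's kitchen observations) to be at home."""
    for item in recipe_ingredients:
        if item not in grocery_list and item not in at_home:
            grocery_list.append(item)
    return grocery_list

def add_weighed_item(running_total, price_per_pound, weight_lb):
    """Total cost of a weighed item (price per pound multiplied by
    weight), accumulated into the running total."""
    return round(running_total + price_per_pound * weight_lb, 2)
```

selecting a salmon recipe, for example, would add only the ingredients not already listed or stocked, matching the recipe-builder behavior described above.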
in one or more embodiments, the ar system automatically determines the total cost of the item (e.g., price per pound multiplied by weight) and enters the amount into the running total cost. in one or more embodiments, the ar system automatically updates the 'smart' virtual grocery list based on location to draw attention to items on the grocery list that are nearby. for example, the ar system may update the rendering of the virtual grocery list to visually emphasize certain items (e.g., focused on fruits and vegetables in the produce section). as shown in fig. 89b , virtual name tags 8936 may appear next to the physical vegetables (e.g., potatoes, corn, etc.), thereby serving as a reminder to the users. further, the ar system may render visual effects in the field of view of the user such that the visual effects appear to be around or proximate nearby physical items that appear on the virtual grocery list. fig. 89c shows another scenario 8906 in which the child selects a virtual icon 8940 to launch a scavenger hunt application. the scavenger hunt application may make the child's shopping experience more engaging and educational. the scavenger hunt application may present a challenge (e.g., locating food items from different countries around the world). points may be added to the child's score as she identifies food items and places them in her virtual shopping cart. fig. 89d shows another scenario 8908 in which the child is gesturing toward a bonus virtual icon 8942, in the form of a friendly monster or an avatar. the ar system may render unexpected or bonus virtual content to the field of view of the child's ar system 8903 to provide a more entertaining and engaging user experience for the child. fig. 89e shows another scenario 8910 in which the mother and daughter are in the cereal aisle of the grocery store.
the mother selects a particular cereal to explore additional information, for example via a virtual presentation of metadata about the cereal, as denoted by the virtual content 8944. the metadata 8944 may, for example, include: dietary restrictions, nutritional information (e.g., health stars), product reviews and/or product comparisons, or customer comments. rendering the metadata virtually allows the metadata to be presented in a way that is easily readable, particularly for adults who may have trouble reading small type or fonts. in the illustrated embodiment, the mother is interacting with the metadata 8944 through a gesture 8946. as also illustrated in fig. 89e , an animated character 8948 may be rendered to present any customers with virtual coupons that may be available for a particular item. the ar system may render coupons for a given product to all passing customers, or only to customers who stop. alternatively or additionally, the ar system may render coupons for a given product to customers who have the given product on their virtual grocery list, or only to those who have a competing product on their virtual grocery list. alternatively or additionally, the ar system may render coupons for a given product based on knowledge of a customer's past or current buying habits and/or contents of the shopping cart. as illustrated in another scenario 8912 of fig. 89f , the ar system may render an animated character 8950 (e.g., friendly monster) in the field of view of at least the child. the ar system may render the animated character so as to appear to be climbing out of a box (e.g., cereal box). the sudden appearance of the animated character may prompt the child to start a game (e.g., monster battle). the child can animate or bring the character to life with a gesture. for example, a flick of the wrist may cause the ar system to render the animated character bursting through the cereal boxes. fig.
89g shows another scenario 8914 illustrating the mother at an end of an aisle, watching a virtual celebrity chef 8952 (e.g., mario batali) performing a live demo via the ar system 8901. the virtual celebrity chef 8952 may demonstrate a simple recipe to customers. all ingredients used in the demonstrated recipe may be available at the grocery store, thereby encouraging users to make the purchase. in some instances, the ar system may present the presentation live. this may permit questions to be asked of the celebrity chef 8952 by customers at various retail locations. in other instances, the ar system may present a previously recorded presentation. in some implementations, the ar system may capture images of the customers, for example via inward facing cameras carried by each customer's individual head worn ar system. the ar system may provide a composited virtual image to the celebrity of a crowd composed of the various customers. this may be viewed by the celebrity chef at an ar system, or device associated with the celebrity chef. fig. 89h illustrates another scenario 8916 in which the mother wearing the ar system 8901 is in a wine section of the grocery store. the mother may search for a specific wine using a virtual user interface 8954 of an application. the application may be a wine specific application, an electronic book, or a more general web browser. in response to selection of a wine, the ar system may render a virtual map 8956 in the field of view of the user, with directions for navigating to the desired wine, denoted by virtual name tags 8958. while the mother is walking through the aisles, the ar system may render data attached to the virtual name tags 8958 which appear to be attached or at least proximate respective bottles of wines. the data may, for example, include recommendations from friends, wines that appear on a customer's personal wine list, and/or recommendations from experts.
the data may additionally or alternatively include food pairings for the particular wine. fig. 89i illustrates scenario 8918 in which the mother and child conclude their shopping experience. the mother and child may, for example, do so by walking onto, across or through a threshold 8960. the threshold 8960 may be implemented in any of a large variety of fashions, for example as a suitably marked mat. the ar system detects passage over or through the threshold 8960, and in response totals up the cost of all the groceries in the shopping cart. the ar system may also provide a notification or reminder to the user, identifying any items on the virtual grocery list which are not in the shopping cart and thus may have been forgotten. the customer may complete the check-out through a virtual display 8962. in one or more embodiments, the transaction may be conducted seamlessly without a credit card or any interaction with a cashier (e.g., money is automatically deducted from the user's bank, etc.). as illustrated in the scenario 8920 of fig. 89j , at the end of the shopping experience, the child receives a summary of her scavenger hunt gaming experience through a virtual score box 8964. the ar system may render the summary as virtual content, at least in the field of view of the child using ar system 8903. fig. 90 shows a scenario 9000 in which a customer employing an ar system 9001 is in a retail environment, for example a bookstore, according to one illustrated embodiment. as shown in fig. 90 , the customer may pick up a book totem 9012. the ar system 9001 detects the opening of the book totem 9012 , and in response renders an immersive virtual bookstore experience in the user's field of view. the virtual bookstore experience may, for example, include reviews of books, suggestions, and author comments, presentations or readings. the ar system may render additional content 9014 , for example virtual coupons. 
the virtual environment combines the convenience of an online bookstore with the experience of a physical environment. figs. 91a-91f illustrate scenarios of using ar systems in health care related applications. in particular, fig. 91a shows a scenario 9102 in which a surgeon and surgical team (each wearing ar systems 9101) are conducting a pre-operative planning session for an upcoming mitral valve replacement procedure. each of the health care providers is wearing a respective individual ar system 9101. as noted above, the ar system renders a visual representation 9114 of the consulting or visiting surgeon. as discussed herein, the visual representation 9114 may take many forms, from a very simple representation (e.g., an avatar) to a very realistic representation (e.g., the surgeon's physical form, as shown in fig. 91a ). the ar system renders a patient's pre-mapped anatomy (e.g., heart) in virtual form 9112 for the team to analyze during the planning. the ar system may render the anatomy using a light field, which allows viewing from any angle or orientation. for example, the surgeon could walk around the heart to see a back side thereof. the ar system may also render patient information. for instance, the ar system may render some patient information 9116 (e.g., identification information) so as to appear on a surface of a physical table. also for instance, the ar system may render other patient information (e.g., medical images, vital signs, charts) so as to appear on a surface of one or more physical walls. as illustrated in fig. 91b , the surgeon is able to reference the pre-mapped 3d anatomy 9112 (e.g., heart) during the procedure. being able to reference the anatomy in real-time may, for example, improve placement accuracy of a valve repair. outward pointing cameras capture image information from the procedure, allowing a medical student to observe virtually via the ar system from her remote classroom. 
the ar system makes a patient's information readily available, for example to confirm the pathology, and/or avoid any critical errors. fig. 91c shows a post-operative meeting or debriefing between the surgeon and patient. during the post-operative meeting, the surgeon is able to describe how the surgery went using a cross section of virtual anatomy 9112 or virtual 3d anatomical model of the patient's actual anatomy. the ar system allows the patient's spouse to join the meeting virtually through a virtual representation 9118 while at work. again, the ar system may render a light field which allows the surgeon, patient and spouse to inspect the virtual 3d anatomical model of the patient's actual anatomy from any desired angle or orientation. fig. 91d shows a scenario 9108 in which the patient is recovering in a hospital room. the ar system 9101 allows the patient to perceive any type of relaxing environment through a virtual setting 9120 selected by the patient, for example a tranquil beach setting. as illustrated in scenario 9110 of fig. 91e, the patient may practice yoga or participate in some other rehabilitation during the hospital stay and/or after discharge. the ar system 9101 allows the patient to perceive friends, virtually rendered, in a virtual yoga class. as illustrated in the scenario 9142 of fig. 91f , the patient may participate in rehabilitation, for example by riding on a stationary bicycle 9152 during the hospital stay and/or after discharge. the ar system (not shown) renders, in the user's field of view, virtual information 9154 about the simulated cycling route (e.g., map, altitude, distance), and the patient's performance statistics (e.g., power, speed, heart rate, ride time). the ar system renders a virtual biking experience, for example including an outdoor scene, replicating a ride course such as a favorite physical route. additionally or alternatively, the ar system renders a virtual avatar 9156 as a motivational tool. 
the virtual avatar may, for example, replicate a previous ride, allowing the patient to compete with their own personal best time. fig. 92 shows a scenario 9200 in which a worker employs an ar system 9201 in a work environment, according to one illustrated embodiment. in particular, fig. 92 shows a landscaping worker operating machinery (e.g., lawn mower). like many repetitive jobs, cutting grass can be tedious. workers may lose interest after some period of time, thereby increasing the probability of an accident. further, it may be difficult to attract qualified workers, or to ensure that workers are performing adequately. the worker wears an individual ar system 9201, which renders virtual content in the user's field of view to enhance job performance. for example, the ar system may render a virtual game 9212, in which the goal is to follow a virtually mapped pattern. points are received for accurately following the pattern and hitting certain score multipliers before they disappear. points may be deducted for straying from the pattern or straying too close to certain physical objects (e.g., trees, sprinkler heads, roadway). while only one example environment is illustrated, this approach can be implemented in a large variety of work situations and environments. for example, a similar approach can be used in warehouses for retrieving items, or in retail environments for stacking shelves, or for sorting items such as mail. this approach may reduce or eliminate the need for training, since a game or pattern may be provided for many particular tasks. figs. 93a-93c show a user of an ar system 9301 in a physical office environment, interacting with a physical orb shaped totem 9312 (e.g., orb totem), according to another illustrated embodiment. as illustrated in fig. 93b , with a twist of her wrist, the user activates the ar system's virtual primary navigation menu, which is rendered in the user's field of vision to appear above the orb totem. 
as best illustrated in fig. 93c , the ar system also renders previously mapped virtual content to appear around the workspace. for example, the ar system may render a virtual user interface associated with a social media account (e.g., twitter ® , facebook ® ), calendar, web browser, electronic mail application. in the illustrated embodiment, the user of the ar system 9301 uses a clockwise (or counterclockwise) motion to "open" the totem 9312. the totem 9312 may be thought of as a virtual user interface that allows the user to interact with the ar system. in the illustrated embodiment, in scene 9320, the user picks up the totem 9312. in scene 9322, the user makes a predetermined gesture or movement in relation to the totem 9312 to display a virtual menu 9316. it should be appreciated that this mapping of the totem and the virtual interface may be pre-mapped such that the ar system recognizes the gesture and/or movement, and displays the user interface appropriately. in scene 9324, one or more virtual items 9318 are also displayed in the user's physical space. for example, the user may have selected one or more items to display through the user interface 9316. the user's physical space is now surrounded by virtual content desired by the user. in one or more embodiments, the virtual items 9318 may float in relation to the user (e.g., body-centric, head-centric, hand-centric, etc.) or be fixed to the physical surroundings (e.g., world-centric). the orb totem 9312 serves as a sort of backpack, allowing the user to take along a set of virtual content desired by the user. fig. 93d shows scene 9326 in which the user is interacting with a second physical totem 9332 rendered by the ar system 9301, according to another illustrated embodiment. the ar system 9301 collects image information, for example via one or more outward facing cameras on the body or head worn component. 
the ar system 9301 may, optionally, collect additional information about the physical space, for example an identity of any available wireless communications networks, gps location information, compass, etc. the ar system processes the collected information in order to determine an identity of the particular physical space in which the user is located. for example, the ar system may employ a variety of object recognizers to recognize various physical objects in the environment (e.g., walls, desk, chair). also for example, the ar system may combine such information with other information (e.g., gps, compass, wireless network related), for instance as a topographical map, in order to ascertain the physical location of the user. for example, the ar system may employ a geometric map to propagate connectivity to a topological map. the topological map may be an index into geometry, for example based on basis vectors (e.g., wi-fi, gps, rss, hash of space objects, hash of features, histogram profiles, optical markers). the ar system may also optionally determine a current time at the physical location (e.g., 9:15 am). based on the determined physical location, and optionally the current time, the ar system renders virtual content to the field of view of the user, generating a view of a virtual office space, populated with virtual objects, people, and/or avatars. the ar system may, for example, render a virtual calendar. the ar system may render the virtual calendar to, for instance, appear to the user as if the virtual calendar were hanging on a physical wall in the user's workspace in the physical office environment. the ar system may, for example, render one or more virtual pieces of work (e.g., virtual charts, virtual diagrams, virtual presentations, virtual documents). the ar system may render the pieces of work to, for instance, appear to the user as if the virtual pieces of work were posted in front of a physical wall in the user's workspace in the physical office environment. 
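The basis-vector indexing into a topological map described above can be sketched as follows. This is a minimal illustration only, not the system's actual implementation: the fingerprint inputs (Wi-Fi network names plus a quantized GPS cell), the hashing scheme, and the class names are all assumptions made for the example.

```python
import hashlib

def location_fingerprint(wifi_ssids, gps_cell):
    """Build a coarse basis-vector fingerprint from observed Wi-Fi
    networks and a quantized GPS cell; sorting makes the fingerprint
    independent of observation order."""
    token = "|".join(sorted(wifi_ssids)) + "|" + gps_cell
    return hashlib.sha256(token.encode()).hexdigest()[:16]

class TopologicalMap:
    """Index from fingerprints into named physical spaces; in a full
    system each node would also point into geometric map data."""
    def __init__(self):
        self._nodes = {}

    def register(self, fingerprint, space_name):
        self._nodes[fingerprint] = space_name

    def locate(self, fingerprint):
        return self._nodes.get(fingerprint, "unknown space")

# usage: the same observations, in any order, resolve to the same node
tmap = TopologicalMap()
fp = location_fingerprint(["office-wifi", "guest-net"], "cell_47_112")
tmap.register(fp, "user workspace")
print(tmap.locate(location_fingerprint(["guest-net", "office-wifi"], "cell_47_112")))
```

The hash-based lookup stands in for the richer signals the text lists (RSS, hashes of space objects and features, histogram profiles, optical markers), any of which could be folded into the fingerprint token.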
the ar system may render a virtual social network (e.g., twitter ® ) user interface. the ar system may, for example, render the virtual social network user interface to, for instance, appear to the user as if it were hanging on a physical wall in the user's workspace in the physical office environment. the ar system may render a virtual electronic mail (e.g., email) user interface. the ar system may, for example, render a plurality of virtual email messages in a set, which can be scrolled through via gestures performed by the user and detected by the ar system. for instance, the ar system may render a set of virtual email messages to be read and a set of virtual email messages which the user has already read. as the user scrolls through the virtual email messages, the ar system re-renders the virtual content such that the read virtual email messages are moved from the unread set to the read set. the user may choose to scroll in either direction, for example via appropriate gestures. on receipt of a new email message, the ar system may render a virtual icon in the field of view of the user, indicative of the arrival of the new email message. the virtual icon may, for example, appear to fly through the air, for instance toward the orb totem. as illustrated in fig. 93d , the user can interact with the second physical totem 9332, to which the ar system may have mapped a virtual key pad. thus, the ar system may render a virtual key pad in the user's field of view, so as to appear as if the virtual key pad were on a surface of the second physical totem 9332. the user interacts with the second physical totem 9332, for example via typing type finger motions and/or tablet type finger motions (e.g., swiping). the ar system captures image information of the user's interactions with the second physical totem. the ar system interprets the user interactions in light of a mapping between locations of interactions and locations of various virtual keys being rendered. 
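The mapping from detected interaction locations to virtual keys can be sketched as a nearest-key lookup. The layout coordinates and characters below are hypothetical stand-ins; a real system would derive the layout from where the virtual key pad was rendered on the totem surface.

```python
# Hypothetical key layout: virtual key center positions (x, y) on the
# totem surface, mapped to characters.
KEY_LAYOUT = {(0, 0): "a", (1, 0): "s", (2, 0): "d", (0, 1): "z"}

def interaction_to_keystroke(touch_x, touch_y):
    """Resolve a detected finger position to the nearest virtual key
    and return its ASCII code, per the mapping described above."""
    nearest = min(KEY_LAYOUT,
                  key=lambda k: (k[0] - touch_x) ** 2 + (k[1] - touch_y) ** 2)
    return ord(KEY_LAYOUT[nearest])

# a touch at (0.9, 0.2) lands nearest the "s" key
print(chr(interaction_to_keystroke(0.9, 0.2)))
```

The same resolution step works whether the surface is a totem or, as in the later keyboard example, a physical keyboard with no electrical connection: only the captured finger position and the known key map are needed.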
the ar system 9301 converts the interactions into key stroke data, which may be represented in any of a large variety of forms (e.g., ascii, extended ascii). this may allow the user to, for example, interact with email messages, social network interfaces, calendars, and/or pieces of work. fig. 93e shows scene 9328 in which the user in a physical office environment is interacting with a physical keyboard, according to another illustrated embodiment. the ar system maps and renders virtual content 9340 in the virtual office space, mapped to seem to the user to appear at various locations in the physical office space. the virtual content 9340 may include various work related applications or application user interfaces. for example, the ar system 9301 may render a 3d program including a 3d architectural model to help the user visualize a structure. in response to receipt of a new message, the ar system may provide a notification to the user. for example, the ar system may render a virtual visual effect of a message 9342 (e.g., email, tweet ® ) flying into the user's field of view, and optionally an aural alert or notification. in some implementations, the ar system assesses a relative importance of the message, for instance rendering the visual and/or audio effect only for significantly important messages. in response to receipt of a new gift (e.g., a virtual gift from a friend), the ar system may provide a notification to the user. for example, the ar system may render a virtual visual effect of a bird 9344 flying into the user's field of view and dropping a virtual package next to the orb totem 9312. the ar system may additionally, or alternatively provide an aural alert or notification. the user may gesture to open the virtual package. in response to the gesture, the ar system renders images of the virtual package opening to reveal that the gift is a game for the user to play. as shown in fig. 
93e , the user may interact with the physical (real) keyboard to interact with the virtual content. the physical keyboard may be an actual keyboard, yet may function as a totem. for example, the ar system may have mapped a set of virtual keys to the physical keyboard. the user interacts with the physical keyboard, for example via typing type finger motions. the ar system captures image information of the user's interactions with the physical keyboard. the ar system interprets the user interactions in light of a mapping between locations of interactions and locations of various physical keys. the ar system converts the interactions into key stroke data, which may be represented in any of a large variety of forms (e.g., ascii, extended ascii). this may allow the user to, for example, interact with email messages, social network interfaces, calendars, and/or pieces of work. notably, there may be no wired or wireless communications from the physical keyboard to any other component. fig. 93f shows scene 9330 of a pair of users (wearing ar devices 9301 and 9303 respectively) in a physical office environment, interacting with a virtual office space and game, according to another illustrated embodiment. as illustrated in fig. 93f , the user of ar system 9303 may have launched a game 9350. the ar system 9303 communicates, either directly or indirectly, with the first ar system 9301, for example via passable world models. the interaction between the two individual ar systems causes the first user's individual ar system to render a scene which includes a virtual monster character peeking over the cubicle wall to challenge the first user to a particular game. this serves as a virtual invitation to join the game. the first user may accept by selecting her own virtual monster, and assigning it to a battleground at the end of the first user's desk. the game may evolve from that point, each user experiencing the same game via rendering to their respective individual ar systems. 
while illustrated with two users, a game may involve a single user, or more than two users. in some implementations, games may include thousands of users. fig. 93g shows scene 9348 of a pair of users in a physical office environment, interacting with a virtual office space and game through their respective ar systems 9301 and 9303. as illustrated in fig. 93g , the first user reassigns a battleground for their player (e.g., monster) from the end of her desk to a floor of the physical office environment. in response, the ar system may re-render the virtual content related to the game so as to appear to each of the users as if the battle is taking place on the floor. the ar system may adapt the game to changes in physical location. for example, the ar system may automatically scale the rendered content based on a size of an area or volume to which the virtual content has been mapped. in the illustrated example, moving her monster from the desk to the ground increases the available space. hence, the ar system may automatically scale the size of the first user's monster up, to fill the available space. fig. 93h shows scene 9346 of a pair of users in a physical office environment, interacting with a virtual office space and game through their respective ar systems 9301 and 9303. as illustrated in fig. 93h , the ar system renders the first user's monster as scaled up from a previous rendering ( fig. 93f ). the second user or co-worker accepts by placing his monster on the new battleground (e.g., the physical floor of the office space). in response, the ar system may re-render the virtual content related to the game so as to appear to each of the users as if the battle is taking place on the floor. the ar system may adapt the game to changes in physical location. for example, the ar system may automatically scale the size of the co-worker's monster up, to fill the available space, and allow the battle to start or continue. figs. 
93i-93k show a user of the ar system 9301 interacting with virtual content of a virtual office space rendered by an ar system, according to another illustrated embodiment. in particular, figs. 93i-93k represent sequential instances of time, during which the user gestures to a scaling tool 9360 to scale the amount of non-work related images that are visible in her environment. in response, the ar system re-renders the virtual room or virtual space, to, for example, reduce a relative size of visual content that is not related to the user's work. alternatively, the user may select certain applications, tools, functions, and/or virtual rooms or virtual spaces to be turned off or moved to a background (e.g., radially spaced outwardly). as shown in fig. 93j , the scaling tool 9360 has been moved to represent a smaller percentage than what was shown in fig. 93i . similarly in fig. 93k , the scaling tool 9360 has been moved to represent an even smaller percentage as compared to figs. 93i and 93j . fig. 93l shows a user of the ar system interacting with virtual content of a virtual office space, according to another illustrated embodiment. the user selects, through a virtual contact list, a number of contacts to invite to a group meeting from her contact application via a virtual contact user interface 9362. the user may invite the attendees by dragging and dropping their names and/or images into a virtual meeting room 9364, which is rendered in the user's field of view by the ar system 9301. the user may interact with the virtual user interface 9362 constructs via various gestures, or alternatively via voice commands. the ar system detects the gestures or voice commands, and generates meeting requests, which are electronically sent to the invitees, in one or more embodiments. fig. 93m shows a number of users in a physical conference room environment, interacting with virtual content rendered by an ar system, according to another illustrated embodiment. 
the meeting may be in response to the group meeting invites sent by a first one of the users ( fig. 93l ). the first user and a second user who is one of the invitees or group meeting participants may be physically present in the physical meeting room. a third user who is another one of the invitees or group meeting participants may be virtually present in the physical meeting room. that is, a virtual representation of the third user is visually and aurally rendered to the first and the second users via their respective individual ar systems. the respective individual ar systems may render the representation of the third user to appear to be seated across a physical table from the first and the second users. the ar system achieves this using the passable world models generated from image information captured by the various individual ar systems, and optionally by any room or space based sensor systems if present. likewise, a virtual representation of the first and second users, along with the conference room, is visually and aurally rendered to the third user via the third user's respective individual ar system. the individual ar systems may render the representations of the first and second user, as well as the conference room, to appear to the third user as if the first and the second users are seated across the physical table from the third user. the ar system achieves this using the passable world models generated from image information captured by the various individual ar systems, and optionally by any room or space based sensor systems if present. the ar system may render virtual content which is shared by two or more of the users attending the meeting. for example, the ar system may render a virtual 3d model (e.g., light field representation of a building). also for example, the ar system may render virtual charts, drawings, documents, images, photographs, presentations, etc., viewable by all of the users, whether physically present or only virtually present. 
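Sharing virtual content among attendees so that each user's view stays consistent can be sketched as a publish/subscribe update, with subscriber callbacks standing in for per-user re-rendering. This is an illustrative sketch, not the passable-world mechanism itself; the class and attendee names are assumptions.

```python
class SharedModel:
    """Minimal sketch of propagating a modification of shared virtual
    content (e.g., a 3d model) to every attendee's renderer."""
    def __init__(self, state):
        self.state = state
        self._subscribers = []

    def subscribe(self, callback):
        self._subscribers.append(callback)

    def modify(self, new_state):
        self.state = new_state
        for render in self._subscribers:  # each attendee re-renders
            render(new_state)

# three attendees subscribe; one modification reaches all of them
views = {}
model = SharedModel("model-v1")
for user in ("first", "second", "third"):
    model.subscribe(lambda s, u=user: views.__setitem__(u, s))
model.modify("model-v2")
print(views)
```

In the scenario above, each callback would instead re-render the model from that attendee's own viewpoint, which is why every user can walk around the same modified 3d model and see it from a different vantage point.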
each of the users may visually perceive the virtual content, from their own perspectives. for example, each of the users may visually perceive the virtual 3d model, from their own perspectives. thus, any one of the users may get up and walk around the virtual 3d model, visually inspecting the 3d model from different vantage points or viewpoints. changes or modifications to the virtual 3d model are viewable by each of the users. for example, if the first user makes a modification to the 3d model, the ar system re-renders the modified virtual 3d model to the first, the second, and the third users. while illustrated with the first and second users in the same physical location and the third user located at a different physical location, other arrangements are possible in one or more embodiments. for example, each person may be in a respective physical location, separate and/or remote from the others. alternatively, all attendees may be present in the same physical space, while gaining advantage of shared virtual content (e.g., virtual 3d model). thus, the specific number of attendees and their respective specific locations are not limiting. in some implementations, other users can be invited to join a group meeting which is already in progress. users can likewise drop out of group meetings when desirable. other users can request to be invited to a group meeting, either before the group meeting starts or while the group meeting is in progress. the ar system may implement such invites in a fashion similar as discussed above for arranging the group meeting. the ar system may implement a handshaking protocol before sharing virtual content between users. the handshaking may include authenticating or authorizing users who wish to participate. in some implementations, the ar system employs peer-to-peer connections between the individual devices sharing points of view, for instance via passable world models. in some implementations, the ar system may provide real-time written translation of speech. 
for example, a first user can elect to receive a real-time written translation of what one or more of the other users say. thus, a first user who speaks english may request that the ar system provide a written translation of the speech of at least one of the second or the third users, who for example speak french. the ar system detects the speakers' speech via one or more microphones, for example microphones which are part of the individual ar system worn by the speaker. the ar system may have a chip or system (or application) that converts voice data to text, and may have a translation system that translates text from one language to another. the ar system performs, or has performed, a machine-translation of the speakers' speech. the ar system renders the translation in written form to the field of view of the first user. the ar system may, for example, render the written translation to appear proximate a visual representation of the speaker. for example, when the speaker is the third user, the ar system renders the written text to appear proximate a virtual representation of the third user in the first user's field of view. when the speaker is the second user, the ar system renders the written text to appear proximate the real image of the second user in the first user's field of view. it should be appreciated that the translation application may be used for travel applications, and may make it easier for people to understand signs/languages/commands encountered in languages other than their native languages. in other implementations, similar to the example above, the ar system may display metadata ("profile information") as virtual content adjacent to the physical body of the person. for example, assume a user walks into a business meeting and is unfamiliar with people at the meeting. the ar system, may, based on a person's facial features (e.g., eye position, face shape, etc.) 
recognize the person, retrieve that person's profile information, or business profile information, and display that information in virtual form right next to the person. thus, the user may be able to have a more productive and constructive meeting, having read up on some prior information about the person. it should be appreciated that persons may opt out of having their information displayed if they choose to, as described in the privacy section above. in the preferred embodiment, the live translation and/or unlocking of metadata may be performed on the user's system (e.g., beltpack, computer). referring now to fig. 94 , an example scene between users wearing respective ar systems 9401 is illustrated. as shown in fig. 94 , the users may be employees of an architectural firm, for example, and may be discussing an upcoming project. advantageously, the ar system 9401 may allow the users to interact with each other, and discuss the project by providing a visual representation of an architectural model 9412 on the physical table. as shown in fig. 94 , the users may be able to build onto the virtual architectural model 9412, or make any edits or modifications to it. as shown in fig. 94 , the users may also interact with a virtual compass that allows the users to better understand aspects of the structure. also, as illustrated in fig. 94 , various virtual content 9414 may be tethered to the physical room that the users are occupying, thereby enabling a productive meeting for the users. for example, the virtual content 9414 may be drawings of other similar architectural plans. or, the virtual content 9414 may be associated with maps of where the structure is to be constructed in the real world, etc. figs. 95a-95e show a user of an ar system 9501 in an outdoor physical environment, interacting with virtual content rendered by an ar system at successive intervals, according to another illustrated embodiment. in particular, fig. 
95a shows a user walking home along a city street, which includes a number of buildings. an establishment (e.g., restaurant, store, building) catches the user's attention. the user turns and gazes at the establishment's sign or logo, as shown in fig. 95a . the ar system 9501 detects the sign or logo appearing in the user's field of view to determine if metadata or other information is available. if metadata or other information is available, the ar system renders a cue to the user indicating that metadata or other information is available. for example, the ar system may cause a visual effect (e.g., highlight, halo, marquee, color) at least proximate the sign or logo. in the illustrated embodiment, a virtual "+" sign 9532 is rendered next to the sign to indicate that metadata is available. as illustrated in fig. 95b , the user may select the virtual icon 9532 to view the metadata or other information associated with the establishment (e.g., restaurant, store, building) with which the sign or logo is associated. for example, the user may gesture, for instance making a pointing gesture towards the sign or logo. as illustrated in fig. 95c , in response to the user selection, the ar system 9501 renders representations of information and/or metadata proximately associated with the establishment (e.g., restaurant, store, building) through a virtual content box 9534. for instance, the ar system 9501 may render a menu, photographs and reviews in another virtual folder 9536 that may be viewed by the user. in fact, the ar system 9501 may render representations of information and/or metadata proximately associated with various different types of physical and/or virtual objects. for example, the ar system may render metadata on or proximate a building, person, vehicle, roadway, piece of equipment, piece of anatomy, etc., which appears in a field of view of a user. 
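The cue-then-fetch flow described above (render a "+" only when metadata exists for a recognized object, then fetch it on selection) can be sketched as a simple keyed store. The store contents and object names are illustrative; a real system would populate this from object recognizers and a shared world model.

```python
# Illustrative metadata store keyed by recognized object identity.
METADATA = {
    "restaurant-sign": {"menu": "prix fixe", "reviews": 4.5},
    "hq-building": {"year_built": 1998, "floors": 12},
}

def metadata_cue(recognized_object):
    """Return whether a '+' cue should be rendered for a recognized
    object, i.e. whether any metadata is available for it."""
    return recognized_object in METADATA

def fetch_metadata(recognized_object):
    """Return the metadata to render once the user selects the cue."""
    return METADATA.get(recognized_object, {})

print(metadata_cue("restaurant-sign"))   # cue rendered
print(metadata_cue("unknown-lamppost"))  # no cue
```

Separating the availability check from the fetch mirrors the two-step interaction in the scenario: the lightweight cue is rendered passively, and the fuller content box is only rendered after an explicit user gesture.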
when the ar system is rendering metadata concerning a physical object, the ar system first captures images of the physical object, and processes the images (e.g., object recognizers) to identify the physical object. the ar system may determine metadata logically associated with the identified physical object. for example, the ar system may search for a name and location, architect, year built, height, photographs, number of floors, points of interest, available amenities, hours of operation of a building. also for example, the ar system may find a menu, reviews by critics, review by friends, photographs, coupons, etc., for a restaurant. also for example, the ar system may find show times, ticket information, reviews by critics, reviews by friends, coupons, etc., for a theater, movie or other production. also for example, the ar system may find a name, occupation, and/or title of a person, relationship to the person, personal details such as spouse's name, children's names, birthday, photographs, favorite foods, or other preferences of the person. the metadata may be defined as logically associated with an object (e.g., inanimate object or person) for an entire universe of users, or may be specific to a single user or a set of users (e.g., co-workers). the ar system may allow a user to choose what metadata or other information to share with other users, and to identify which other users may access the metadata or other information. for example, a user may define a set of metadata or other information related to a physical location (e.g., geographic coordinates, building) or a person. that user may define a set of users (e.g., subset of the universe of users) who are authorized or provided with privileges to access the metadata or other information. the authorization or privileges may be set on various levels, for example read only access, write access, modify access, and/or delete access. 
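The per-user privilege levels described above (read, write, modify, delete) can be sketched as a small access-control structure attached to an object's metadata. The privilege names, class layout, and user names are assumptions for illustration, not the system's actual scheme.

```python
READ, WRITE, MODIFY, DELETE = "read", "write", "modify", "delete"

class ObjectMetadata:
    """Metadata logically associated with an object, with per-user
    privilege grants; the owner holds all privileges by default."""
    def __init__(self, data, owner):
        self.data = data
        self._grants = {owner: {READ, WRITE, MODIFY, DELETE}}

    def grant(self, user, privileges):
        self._grants.setdefault(user, set()).update(privileges)

    def can(self, user, privilege):
        return privilege in self._grants.get(user, set())

# the defining user shares read-only access with one co-worker
meta = ObjectMetadata({"name": "HQ building", "year_built": 1998},
                      owner="alice")
meta.grant("bob", {READ})
print(meta.can("bob", READ), meta.can("bob", MODIFY))
```

A user absent from the grant table gets no cue at all, which matches the later passage: the availability cue itself is only rendered to users holding at least read access.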
when a user is at a location or views an object for which the user has authorization or privilege to at least read or otherwise access information or metadata associated with the location or object, the ar system provides the user a cue indicative of the availability of the metadata or other information. for example, the individual ar system may render a defined visual effect in the user's field of view, so as to appear at least proximate the object or person for which metadata or other information is available. the ar system may, for example, render a line that appears to glow. the ar system renders the metadata or other information in the user's field of view in response to a trigger, for instance a gesture or voice command. fig. 95d shows a user of the ar system 9501 at a bus stop with a shelter and buildings in the background. in the illustrated embodiment, the ar system 9501 may detect a location of the user based on visual information and/or additional information (e.g., gps location information, compass information, wireless network information). for example, object recognizers may identify various physical objects present in the outdoor environment, for example the shelter or buildings. the ar system finds locations with matching physical objects. as previously described, the ar system may employ a topographical map of information (e.g., identity and/or signal strength of available wireless networks, gps location information) in assessing or determining a physical location. the ar system may detect the appearance of the shelter in the view of the user, and detect a pause sufficiently long to determine that the user is gazing at the shelter or at something on the shelter. in response, the ar system may render appropriate or corresponding virtual content. for example, the ar system may render virtual content in the user's field of view such that the virtual content appears to be on or extending from one or more surfaces of the shelter.
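one simple way to use such a topographical map of wireless network identities and signal strengths is fingerprint matching: compare the currently observed scan against stored fingerprints and pick the closest location. this is a hedged sketch only; the source does not specify the matching method, and the network names and signal values below are invented.

```python
import math

# topographical map: known locations, each with a wireless fingerprint
# mapping network id -> observed signal strength in dBm (values invented)
FINGERPRINT_MAP = {
    "bus_stop_5th_ave": {"cafe_wifi": -45.0, "city_free": -60.0},
    "office_lobby": {"corp_net": -40.0, "city_free": -75.0},
}

def locate(observed, fingerprint_map):
    """return the mapped location whose fingerprint is closest to the
    observed scan, using euclidean distance over the union of networks.
    networks missing from a scan are treated as a weak -100 dBm floor."""
    def distance(fp):
        nets = set(fp) | set(observed)
        return math.sqrt(sum(
            (fp.get(n, -100.0) - observed.get(n, -100.0)) ** 2 for n in nets))
    return min(fingerprint_map, key=lambda loc: distance(fingerprint_map[loc]))

scan = {"cafe_wifi": -50.0, "city_free": -58.0}
print(locate(scan, FINGERPRINT_MAP))  # -> bus_stop_5th_ave
```

in practice such a match would be fused with gps and visual object recognition, as the passage describes, rather than used alone.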
alternatively, virtual content may be rendered to appear on other surfaces (e.g., a sidewalk) or even appear to be floating in air. the ar system may recognize that the bus stop is one regularly used by the user. in response, the ar system may render a first set of virtual content 9538 which the user typically uses when waiting for public transit (e.g., bus, train) or other transportation (e.g., taxi, aircraft). for example, the ar system may render a social networking user interface (e.g., twitter®, facebook®, etc.). in another instance, the ar system may render a cue to the user's field of view in response to an incoming message (e.g., tweet®). also for example, the ar system may render reading material (e.g., newspaper, magazine, book), or other media (e.g., news, television programming, movie, video, games). as a further example, the ar system may render information about the transportation (e.g., time until a bus arrives and/or current location of the next bus). in another embodiment, the ar system may recognize the bus stop as one not regularly used by the user. in response, the ar system may additionally or alternatively render a second set of virtual content 9540 which the user would likely want when waiting for public transit (e.g., bus, train) or other transportation (e.g., taxi, aircraft). for example, the ar system may render virtual representations of route maps, schedules, current route information, proximate travel time, and/or alternative travel options. fig. 95e shows a user of the ar system 9501 playing a game at the bus stop. as shown in fig. 95e, the user of the ar system 9501 may be playing a virtual game 9542 while waiting for the bus. in the illustrated embodiment, the ar system renders a game to appear in the user's field of view. in contrast to traditional 2d games, portions of this 3d game realistically appear to be spaced in depth from the user.
for example, a target (e.g., fortress guarded by pigs) may appear to be located in the street, several feet or even meters from the user. the user may use a totem as a launching structure (e.g., sling shot), which may be an inanimate object or may be the user's own hand. thus, the user is entertained while waiting for the bus. figs. 96a-96d show a user of an ar system 9601 in a physical kitchen, interacting with virtual content rendered by the ar system 9601 at successive intervals, according to another illustrated embodiment. the ar system 9601 detects a location of the user, for example based on visual information and/or additional information (e.g., gps location information, compass information, wireless network information). for example, object recognizers may identify various physical objects present in the kitchen environment, for example the walls, ceiling, floor, counters, cabinets, appliances, etc. the ar system finds locations with matching physical objects. as previously described, the ar system may employ a topographical map of information (e.g., identity and/or signal strength of available wireless networks, gps location information) in assessing or determining a physical location. as illustrated in fig. 96a , in response to recognizing that the user is, for example, in the kitchen, the ar system 9601 may render appropriate or corresponding virtual content. for example, the ar system may render virtual content 9632 in the user's field of view so that the virtual content 9632 appears to be on or extending from one or more surfaces (e.g., walls of the kitchen, countertops, backsplash, appliances, etc.). virtual content may even be rendered on an outer surface of a door of a refrigerator or cabinet, providing an indication (e.g., list, images) of the expected current contents of the refrigerator or cabinet based on recently previous captured images of the interior of the refrigerator or cabinets. 
virtual content may even be rendered so as to appear to be within the confines of an enclosed volume such as an interior of a refrigerator or cabinet. the ar system 9601 may render a virtual recipe user interface including categories of types of recipes for the user to choose from, for example via a gesture. the ar system may render a set of food images (e.g., a style wall) in the user's field of view, for instance appearing as if mapped to the wall of the kitchen. the ar system may render various virtual profiles 9634 of the user's friends, for instance appearing to be mapped to a counter top, and alert the user to any food allergies or dietary restrictions or preferences of the friends. fig. 96a also illustrates a totem 9636 that may be used to interact with the ar system, and "carry" a set of virtual content with the user at all times. thus, a side wall of the kitchen may be populated with virtual social media 9638, while counters may be populated with recipes, etc. as illustrated in fig. 96b, the user may use a virtual recipe finder user interface 9640 to search for recipes using various parameters, criteria or filters through a virtual search box 9642. for example, the user may search for gluten-free appetizer recipes. as illustrated in fig. 96c, the user interface of the virtual recipe finder 9640 virtually presents various results 9644 of the search for recipes matching certain criteria (e.g., gluten-free and appetizer). the user interface may have one or more user selectable icons, selection of which allows the user to scroll through the search results. the user may select to scroll in any desired direction in which the search results 9644 are presented. if unsure of what recipe to use, the user may use the virtual interface to contact another user. for example, the user may select her mother to contact, for example by selecting an appropriate or corresponding entry (e.g., name, picture, icon) from a set (e.g., list) of the user's contacts.
the user may make the selection via an appropriate gesture, or alternatively via a voice or spoken command. the ar system detects the gesture or voice or spoken command, and in response attempts to contact the other user (e.g., mother). as illustrated in fig. 96d, the user interface of a social networking application produces a cue indicative of the selected contact responding to the contact attempt. for example, the ar system may render a cue in a field of view of the user, indicative of the contact responding. for instance, the ar system may visually emphasize a corresponding name, picture or icon in the set of contacts. additionally or alternatively, the ar system may produce an aural alert or notification. in response, the user may accept the contact attempt to establish a communications dialog with the contact or other user (e.g., mother). for example, the user may make an appropriate gesture, which the ar system detects, and responds by establishing the communications dialog. for example, the ar system may render a virtual representation 9646 of the other user (e.g., mother) using the ar device 9603 into the field of view of the first user. the representation may take many forms, for example a simple caricature representation or a complex light field which realistically represents the other person in three dimensions. the representation may be rendered to appear as if the other user is standing or sitting across a counter from the first user. likewise, the other user may view a representation of the first user. the two users can interact with one another, and with shared virtual content, as if they were both present in the same physical space. the ar system may advantageously employ passable world models to implement the user experience, as discussed in detail above. figs.
97a-97f show users wearing ar systems 9701 in a living room of their home, interacting with virtual content rendered by an ar system at successive intervals, according to another illustrated embodiment. as illustrated in fig. 97a, in response to recognizing that the user is, for example, in their own living room and/or recognizing various guests, the ar system 9701 may render appropriate or corresponding virtual content. additionally or alternatively, the ar system may respond to a scheduled event, for example a live or a recorded concert for which the user has signed up or purchased a feed of or a ticket to participate. for example, the ar system may render virtual content 9732 in the user's field of view so that the virtual content appears to be on or extending from one or more surfaces (e.g., walls, ceiling, floor, etc.) or elsewhere within the volume of the physical space. if guests are present, individual ar systems worn by the guests may render virtual content in the respective fields of view of the guests. the virtual content 9732 may be rendered to each person's ar system based on that person's current position and/or orientation, to render the virtual content from the perspective of the respective user. also as illustrated in fig. 97a, the user may, for example, use a virtual user interface 9736 to browse one or more music libraries, for example shared music libraries, for instance in preparation for a dinner party the user is hosting. the user may select songs or musical pieces by, for example, dragging and dropping virtual representations 9734 (e.g., icons, titles) of the user's favorite songs and/or artists and/or albums into a personal virtual beats music room, to create a perfect atmosphere to host the user's guests. in some implementations, the user may buy a ticket or right to access music, a concert, performance or other event. the music, concert, performance or other event may be live or may be previously recorded. as illustrated in fig.
97a, the ar system may render the concert, performance or other event as a virtual space, mapped onto a user's physical space. the ar system may employ passable world models to implement this. the ar system may, for example, pass a passable world model of a venue to the individual ar systems worn by the various users. an initial passable world model may include information representing an entire venue, including details. subsequent passable world models may reflect only changes from previous passable world models. audio or sound may be provided in standard two channel stereo, in 5.1 or 7.1 surround sound, or in 3d spatial sound (e.g., via a sound wave phase shifter). audio or sound may be delivered by personal speakers or by shared speakers which provide sound to two or more users simultaneously. personal speakers may take the form of ear buds, on ear head phones or over ear head phones. these may be integrated into the head worn component which provides the virtual images (e.g., 4d light field). shared speakers may take the form of bookshelf speakers, floor standing speakers, monitor speakers, reference speakers or other audio transducers. notably, it will be easier to deliver a realistic sound field using personal speakers, since the ar system does not have to account for different listener positions in such an arrangement. in another embodiment, the ar system may deliver realistic sound/audio based on the digital environment that the user is supposed to be in. for example, the ar system may simulate audio controls such that sounds appear to be originating from a particular source or space. for example, sound emanating from a small enclosed room may be very different than sound emanating from an opera house. as discussed above, the sound wavefront may be successfully used to create the right sound quality to accompany the visuals of the ar system.
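the scheme of an initial full venue model followed by change-only updates can be sketched as delta application. the dict-of-object-states representation below is an assumption for illustration; the source only states that subsequent passable world models reflect only changes from previous ones.

```python
def apply_delta(model, delta):
    """apply a change-only update to a full world model.

    `model` and `delta` map object ids to object state; a value of None
    in the delta removes the object. returns a new, updated model.
    """
    updated = dict(model)
    for obj_id, state in delta.items():
        if state is None:
            updated.pop(obj_id, None)  # object no longer present
        else:
            updated[obj_id] = state    # object added or changed
    return updated

# initial model represents the entire venue; later frames ship only changes
venue = {"stage": {"pos": (0, 0)}, "speaker_l": {"pos": (-5, 0)}}
frame2 = apply_delta(venue, {"performer": {"pos": (0, 1)}})
frame3 = apply_delta(frame2, {"performer": {"pos": (2, 1)}, "speaker_l": None})
assert "speaker_l" not in frame3 and frame3["performer"]["pos"] == (2, 1)
```

shipping only deltas keeps the per-frame payload proportional to what changed, rather than to the size of the venue.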
the ar system can render virtual content to cause the user(s) to perceive a performance as occurring in their own location (e.g., living room). alternatively, the ar system can render virtual content to cause the user(s) to perceive themselves as attending a performance occurring in the venue, for example from any given vantage point, even with the ability to see the crowd around them. the user may, for example, select any desired vantage point in a venue, including front row, on stage or backstage. in some implementations, an artist who is performing live may have a respective individual ar system which allows the artist to perceive an audience which is a composite of the various users attending the performance remotely. images and/or sounds from the various audience members may be captured via the individual ar systems worn by the respective audience members. this may allow for interaction between the performer and the audience, including for example a question and answer session. the use of a 4d light field provides for a more realistic experience than might otherwise be achieved using more conventional approaches. fig. 97b shows a pair of guests having ar systems 9701 in the physical living room. the host user 9720 decides to take a picture of the guests. the host user makes a corresponding gesture (e.g., index finger and thumb at right angles on both hands), held in opposition to form a rectangle or frame. the host user's own individual ar system detects the gesture, interprets the gesture, and in response captures an image, for example via one or more outward facing cameras that form part of the individual ar system worn by the host user. the gesture also serves as an indication to the guests that their picture is being taken, thereby protecting privacy. once the user has taken a picture (e.g., digital photograph), the user may quickly edit the picture (e.g., crop, add caption, add filters), and post the picture to a social network.
all this is performed using gestures via the ar system. in a related embodiment, once the user has taken a picture, a virtual copy of the picture may be pinned into the physical space. for example, the user may pin the virtual picture onto a physical wall in the room, or alternatively, may even pin the virtual picture onto a virtual wall created by the ar system. it should be appreciated that the photographs may be either 2d or even 3d photographs, in some embodiments. thus, the ar system constantly acquires 3d information, which may be retrieved and reused at a later time. for example, text messages or any other items may appear in either 2d or 3d based on the user's preferences. the user may manipulate the virtual content by using gestures, as will be discussed further below, and may bring content toward himself or away simply by using gestures or any other user input. fig. 97c shows the host user and guests in the physical living room enjoying pictures, for example pictures captured during the party. as illustrated, the virtual picture 9722 has been pinned to the living room's physical wall. the ar system 9701 may render the pictures, for example such that each user perceives the pictures to be on a wall. the users can scale the pictures via appropriate gestures. the party wall lets others experience or re-experience the party, and the people attending the party. the party may be captured as a full light field experience of the whole party. this allows going back and reliving the party, not as a video, but as a full point of view experience. in other words, a user would be able to wander around the room, seeing the people walk by the user, and viewing the party after the fact from essentially any vantage point. fig. 97d shows the host user and guests in the physical living room setting up a virtual display, monitor or screen to enjoy media content, for example a movie. as illustrated in fig.
97d , the host user may gesture to create a virtual display 9724, monitor or screen and to otherwise indicate or command the ar system to set up to display media content, for example a movie, television type programming, or video. in particular, the host user uses a two hand gesture 9726 to frame an area, for example facing a wall on which the media content should be rendered to appear. the host user may spread the index finger and thumb at right angles to make an l-shape to outline a desired perimeter of the virtual display 9724, monitor or screen. the host user may adjust the dimensions of the virtual display, monitor or screen 9724 through another gesture. notably, the use of a 4d light field directed to the retina of the users' eyes allows the size of the virtual display, monitor or screen to be virtually unlimited since there is practically no mechanical limit on scaling, the only appreciable limit being the resolution of the human eye. further, it is noted that the individual ar system of the host user (e.g., worn by host user) may coordinate with the individual ar systems of the guest users, such that the guest user can share the experience of the host user. thus, the host user's individual ar system may detect the host user's gesture(s), define the virtual display, monitor or screen, and even identify user-selected media content for presentation. the host user's individual ar system may communicate this information, either directly or indirectly, to the individual ar system of the guest users. this may be accomplished, through the passable world model, in one or more embodiments. fig. 97e shows the host user and guests in the physical living room setting up a virtual display, monitor or screen to enjoy media content, for example a movie. in contrast to fig. 97d , the host user makes another gesture 9728 that draws a diagonal with a pointed index finger, to indicate a position and size of the desired virtual display, monitor or screen. in fig. 
97f, the user may further pick characteristics for the virtual display, monitor or screen 9724. for example, the user may gesture to pick aesthetic characteristics, for example of a border, bezel or frame, through virtual icons 9730. the user may also gesture to pick operational characteristics, for example characteristics related to image reproduction and/or quality. for example, the user may select from a variety of legacy physical monitors or televisions. the ar system can replicate the picture characteristics of legacy monitors or televisions (e.g., a color television from 1967). thus, the host user may select a monitor or television from a list of makes and models and years, to replicate historically accurate devices, with the same physical cabinet look, the same visual or picture characteristics, and even replicated older sound. the user can experience older programs or media content on period realistic monitors or televisions. the user may experience new programs or media content on older monitors or televisions. the ar system may create a virtual display, monitor, or television 9724 that faithfully replicates a top of the line current day television or monitor, or even future televisions or monitors. these types of embodiments essentially obviate any reason to purchase a physical display system (e.g., computer, television, etc.). in fact, multiple users may use multiple televisions, with each television screen displaying different content. the ar system may also render virtual content to match the picture characteristics of movie projectors, whether classic period pieces, or the most up to date digital movie projectors. for example, the ar system may render virtual content to replicate one or more features of a large scale cinematic projector and screen. depending on the speaker configuration that is available, the ar system may even replicate the sound system of a movie theater.
the ar system may render virtual content that replicates sitting in a theater. for example, the ar system may render virtual content that matches or closely resembles the architecture of a theater. thus, a user may select a theater for replication, for example from a list of classic theaters. the ar system may even create an audience that at least partially surrounds the user. the virtual content may, for example, be locked to the body coordinate frame. thus, as the user turns or tilts their head, the user may see virtual representations of different parts (e.g., walls, balcony) of a theater, along with virtual representations of people who appear to be seated around the user. the user may even pick a seating position, or any other vantage point. a website or application store may be set up to allow users to design and share filters or other software which replicates the look and feel of classic televisions, monitors, projectors and screens, as well as various performance venues such as movie theaters, concert halls, etc. thus, a user may select a particular theater, a location in the theater, a particular projector type and/or a sound system type. all these features may simply be rendered on the user's ar system. for example, the user may desire to watch a particular vintage tv show on a vintage television set of the early 1960s. the user may experience watching the episode while sitting in a virtual theater, seeing those sitting around and/or in front of the user. a body-centric field of view may allow the user to see others as the user turns. the ar system can recreate or replicate a theater experience. likewise, a user can select a particular concert venue, and a particular seat or location (e.g., on stage, back stage) in the venue. in one or more embodiments, venues may be shared between users. fig.
97g shows a number of users, each holding a respective physical ray gun totem 9750, interacting with a virtual user interface 9752 rendered by an ar system to customize their weapons, according to one illustrated embodiment. before play, each user may pick one or more virtual customization components for their respective ray gun totem. the user may select customizations via a virtual customization user interface rendered to each user's field of view by their respective individual ar systems. for example, the users may pick custom accessories (e.g., scopes, night vision scopes, laser scopes, fins, lights), for example by gesturing or by voice commands. each user's respective individual ar system may detect the user's gestures or selections. rather than adding on additional physical components, the individual ar systems (e.g., body and/or head worn components) may render virtual content which customizes each ray gun in each user or player's field of view. thus, the various individual ar systems may exchange information, either directly or indirectly, for example by utilizing the passable world model. notably, the physical ray gun totems 9750 may be simple devices which, for example, may not actually be functional. rather, they are simply physical objects that may be given life through virtual content delivered in relation to the physical objects. as with previously described totems, the ar system detects user interaction, for example via image information captured by outward facing cameras of each user's individual augmented reality device (e.g., head worn component). likewise, the ar systems may render blasts or other visual and/or aural effects in the users' fields of vision to replicate shooting of the ray guns. for example, a first individual ar device worn by a first user may detect the first user aiming the first ray gun totem which the first user is carrying, and detect the first user activating a trigger.
in response, the first individual ar device renders a virtual blast effect to the field of view of the first user and/or a suitable sound to the ears of the first user, which appear to originate with the first ray gun totem. the first individual ar device passes a passable world model, either directly or indirectly, to a second and a third individual ar system, worn by the second and the third users, respectively. this causes the second and the third individual ar systems to render a virtual blast visual effect in the fields of view of the second and third users so as to appear to have originated from the first ray gun totem. the second and the third individual ar systems may also render a virtual blast aural or sound effect to the ears of the second and third users so as to appear to have originated from the first ray gun totem. while illustrated with a generally gun shaped totem, this approach may be used with other totems, including inanimate totems and even animate totems. for example, a user could choose to "weaponize" a portion of the body (e.g., a hand). for example, a user may choose to place virtual rockets on their hands and/or to have virtual fireballs emanate from their fingertips. it is of course possible to have the ar systems render many other virtual effects. fig. 97h shows a number of users of ar systems 9701, each holding a respective physical ray gun totem 9750, with virtual customizations, playing a game with virtual content rendered via the ar system, according to one illustrated embodiment. as illustrated in fig. 97h, the users may play a game in which they battle virtual aliens or robots from another world. the individual ar systems render the virtual aliens in the fields of view of the respective users. as noted above, the respective individual ar systems may track the respective user's aiming and firing interactions, and relay the necessary information to the other ones of the individual ar systems.
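the key property described above — every device renders the same blast so that it appears to originate at the same world-space totem position — can be sketched as broadcasting a world-anchored event that each device resolves against its own pose. this is an illustrative sketch; the event fields and class names are assumptions, and the source only says the information is passed directly or indirectly via a passable world model.

```python
from dataclasses import dataclass

@dataclass
class BlastEvent:
    """a shooting event anchored at the totem's world-space position."""
    origin: tuple     # position of the ray gun totem in world coordinates
    direction: tuple  # direction of the blast in world coordinates

class ArDevice:
    def __init__(self, user, position):
        self.user, self.position, self.rendered = user, position, []

    def render(self, event: BlastEvent):
        # each device renders the same world-space event relative to its
        # own position, so the blast appears to originate at the totem
        offset = tuple(o - p for o, p in zip(event.origin, self.position))
        self.rendered.append(("blast", offset))

def broadcast(event, devices):
    """deliver the event to every participating device (a stand-in for
    passing the updated passable world model)."""
    for d in devices:
        d.render(event)

shooter = ArDevice("first", (0, 0, 0))
peer = ArDevice("second", (2, 0, 0))
broadcast(BlastEvent(origin=(1, 0, 0), direction=(0, 0, 1)), [shooter, peer])
assert shooter.rendered[0][1] == (1, 0, 0)   # totem one unit ahead
assert peer.rendered[0][1] == (-1, 0, 0)     # same totem, seen from the other side
```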
the users may cooperate in the game, or may play against each other. the individual ar systems may render a virtual scoreboard in the users' fields of vision. scores or even portions of the game play may be shared via social media networks. figs. 98a-98c show a user in a living room of her home, interacting with virtual content rendered by an ar system at successive intervals, according to another illustrated embodiment. as illustrated in fig. 98a, in response to recognizing that the user is, for example, in her own living room, the ar system may render appropriate or corresponding virtual content. for example, the user may be watching a television program on a virtual television 9814 which her individual ar system 9801 has rendered in her field of vision to appear as if on a physical wall of the living room. the individual ar system 9801 may also render a second virtual screen 9816 with related media content (e.g., voting menu, contestant rankings or standings) to provide the user with a second screen experience. the individual ar system 9801 may further render a third screen (not shown) with additional content, for example social media content, or electronic messages or mail. the user may also, for example, view or shop for artwork. for example, the individual ar system may render an artwork viewing or shopping user interface to a totem 9812. as previously discussed, the totem 9812 may be any physical object (e.g., a sheet of metal or wood). the totem may, for instance, resemble a tablet computing device in terms of area dimensions, although it could have a much smaller thickness since no on-board electronics are required. also as previously discussed, the individual ar system 9801 detects user interactions with the totem, for instance finger gestures, and produces corresponding input. the individual ar system 9801 may further produce a virtual frame 9818 to view artwork as it would appear on a wall of the user's living room.
the user may control the dimensions of the frame using simple gestures, such as those previously described for establishing the dimensions of a virtual display, monitor or screen. the user may also select a frame design, for example from a set of frame images. thus, the user is able to see how various pieces of art fit the décor of the house. the individual ar system 9801 may even render pricing information proximate the selected artwork and frame, as shown in virtual box 9820. as illustrated in fig. 98b, in response to seeing an advertisement 9822 for a vehicle the user likes, the user gestures to perform research on the particular vehicle. in response, the individual ar system 9801 may re-render the second virtual screen with related media content (e.g., vehicle specifications, vehicle reviews from experts, vehicle reviews from friends, recent cost trends, repair trends, recall notices). as also illustrated in fig. 98b, the individual ar system 9801 may, for example, render a high level virtual menu 9824 of the user's virtual spaces in the user's field of view, to appear as if the virtual menu is on a physical wall of the user's living room. the user may interact with the menu using simple gestures to interact with the virtual spaces, which the individual ar system monitors. the virtual menu may be scrollable in response to defined gestures. as also illustrated in fig. 98b, the user may gesture (e.g., a grasping and pulling gesture) to pull a virtual 3d model of the vehicle from the virtual television or virtual monitor. as illustrated in fig. 98c, in response to the user's grasping and pulling gesture (fig. 98b), the ar system may render a virtual three-dimensional model 9840 to the user's field of vision, for example located between the user and the virtual television or virtual monitor.
when using a light field, a user may even be able to walk around the vehicle or rotate the three-dimensional model of the vehicle in order to examine the vehicle from various different viewpoints or perspectives. it may even be possible to render the interior of the vehicle, as if the user were sitting in the vehicle. the ar system may render the vehicle in any user selected color. the ar system may also render dealer information, color choices and other vehicle specifications in another virtual screen 9842, as shown in fig. 98c. virtual enhancements, such as the ability to retrieve a three-dimensional model, may be synchronized with, or triggered by, broadcast content or programming. alternatively, visual enhancements may be based on user selections. the user may save the three-dimensional model 9840 of the vehicle and/or vehicle related research to a vehicle virtual room or virtual space. for example, the user may make a gesture (e.g., a waving or backhanded sweeping motion) toward the appropriate folder of the virtual menu. the ar system 9801 may recognize the gesture, and save the vehicle related information in a data structure associated with the vehicle virtual room or virtual space for later recall. fig. 98d shows a user of the ar system 9801 in a driveway, interacting with virtual content 9850 rendered by the ar system 9801, according to another illustrated embodiment. the user may step out to the driveway, to see how the vehicle would appear parked in front of the user's home. the ar system renders a three-dimensional view of the vehicle 9850 to the user's field of vision to make the vehicle appear to be positioned in the driveway. the ar system may scale the appearance of the virtual vehicle in response to gestures, as shown in fig. 98d. in one or more embodiments, the ar system may use a separate operating system, which may function somewhat similarly to a game engine.
while a traditional game engine may work for some systems, other systems may impose additional requirements making the use of a traditional game engine difficult. in one or more embodiments, the operating system may be split into two distinct modes, and corresponding solutions and/or architectures, to meet the requirements of both modes. like a traditional computer system, the operating system (os) operates in 2 distinct modes: i) modal, and ii) nonmodal. nonmodal mode is similar to a typical computer desktop, with multiple applications running simultaneously so that the user can surf the web, instant message (im), and check email simultaneously. modal mode is similar to a typical videogame in which all the applications shut down (or go into the background), and the game completely takes over the system. many games fit into such a mode, while traditional computing functions will need a nonmodal approach. to achieve this, the os may be split into two components: (a) the subsystem, and (b) the windowing interface. this is similar in some respects to how modern operating systems work. for example, under a particular operating system, the kernel and many applications work together to provide the subsystem, but then other operating systems may provide the user a traditional desktop, icons, and windows. similarly, the os may likewise be split into a subsystem of one type of operating system (e.g., linux kernel for basic operations) and custom applications (e.g., pacer, gyros, gps, passable world modeling, etc.), for another operating system (e.g., windows ® system). the two modes would apply only to the windowing system, as the subsystems would by necessity run continuously. however, the two modes may also introduce additional complexities to the system. while the nonmodal system may offer traditional computing features, it operates in a decidedly nontraditional way.
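the two-mode split described above can be sketched as follows. this is an illustrative sketch only; the class and method names are hypothetical and not taken from any actual ar operating system:

```python
# Hypothetical sketch of the modal/nonmodal split: subsystems run
# continuously, while a windowing layer either hosts many nonmodal apps
# (surfaces, notifiers, full 3d apps) or lets one modal game take over.

class App:
    def __init__(self, name):
        self.name = name
        self.suspended = False

class WindowingLayer:
    def __init__(self):
        self.apps = []          # nonmodal apps running side by side
        self.modal_app = None   # set when a game takes over the system

    def launch_nonmodal(self, app):
        self.apps.append(app)

    def enter_modal(self, game):
        # all nonmodal apps go into the background; the game takes over
        for app in self.apps:
            app.suspended = True
        self.modal_app = game

    def leave_modal(self):
        # the user's surfaces and virtual content are revived
        self.modal_app = None
        for app in self.apps:
            app.suspended = False

    def visible(self):
        if self.modal_app is not None:
            return [self.modal_app.name]
        return [a.name for a in self.apps if not a.suspended]

wl = WindowingLayer()
wl.launch_nonmodal(App("browser surface"))
wl.launch_nonmodal(App("email surface"))
wl.enter_modal(App("ray-gun game"))
```

after `enter_modal`, only the game is visible; `leave_modal` restores the prior surfaces, mirroring the modal takeover described in the text.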
the 3d nature of it, along with a combination of planar surfaces (screens) combined with nonplanar objects (3d objects placed within the user's view), introduces questions about collision, gravity, and depth, many traits shared by modern game engines. for this reason, the "operating system" portion of the system may be custom-designed. the simplest nonmodal application is the "surface": a simple virtual 2d planar surface rendered in the 3d environment and running traditional computing tools (e.g., web browser, etc.). it is anticipated that most users will run the system with several surfaces in both a body-centric orientation (e.g., twitter ® feed to the left, facebook ® on the right) and in a world-centric orientation (e.g., hulu ® stuck on the wall over the fireplace). the next nonmodal application step is "notifiers." these may, for example, be 2d planar surfaces augmented with 3d animation to notify the user of some action. for example, email will probably remain a traditional 2d planar system, but notification of new mail could be done, for instance, via a bird flying by and dropping off a letter on the surface, with an effect similar to a water droplet in a pond as the message is "received." another nonmodal application step relates to full 3d applications. not all applications may fit into this space and initially the offerings will be limited. virtual pets are perfect examples of full 3d, nonmodal applications: a fully 3d rendered and animated "creature" following the user throughout the day. nonmodal applications may also be the foundation of "inherited" applications from an existing platform. it is anticipated that most ar games will be full-modal applications. for example, when a game is launched (e.g., in which users use ray gun totems to battle virtual invaders rendered into their respective fields of vision), a modal application is used.
when launched, all the user's surfaces and virtual content will disappear and the entire field will be replaced with objects and items from the game. upon leaving the game, the user's individual virtual surfaces and virtual content may be revived. modal systems may rely on a game engine. some games may make use of a higher-end game engine, while others may require simpler game engines. each game may select a game engine suited to its design choices and corporate guidance. in one or more embodiments, a virtual collection of various gadgets in a modal system may be utilized. at the start, the user defines a "play area" (maybe a tabletop or floor space) and then begins placing virtual "toys." initially, the virtual toys could be very basic objects (e.g., balls, sticks, blocks) with only fundamental physics principles (e.g., gravity, collision detection). then, the user can progress to more advanced virtual toys, for example purchased in-game via a virtual store or coming as bundled add-ons with other games (e.g., army men). these more advanced virtual toys may bring along their own animations or special attributes. each virtual toy may come with basic animations and behaviors to allow interactions with other objects. using a system of "tags" and "properties," unexpected behaviors could develop during use or play. for example, a user may drop a simple virtual cartoon character on a table. the virtual cartoon character may immediately go into a "patrol mode". shortly afterwards, the virtual cartoon character toy may recognize similarly tagged objects and start to coordinate formations. similarly, other such virtual characters may be brought onto the table using the virtual collection. this approach brings several interesting aspects to the system. there may be few or no rules at all, other than those specifically stipulated by the user. thus, the virtual collection is designed to be a true play zone. in one embodiment, games may be branded to be virtual collection "compatible".
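the "tags" and "properties" idea above can be sketched as follows. the toy names, tags, and the minimal patrol behavior are all hypothetical; the point is only that toys discover similarly tagged objects in the play area and coordinate without any globally scripted rules:

```python
# Illustrative sketch: each virtual toy carries a set of tags; on each
# update it scans the play area for similarly tagged toys and forms a
# squad with them, yielding emergent coordination.

class Toy:
    def __init__(self, name, tags):
        self.name = name
        self.tags = set(tags)
        self.mode = "idle"
        self.squad = []

    def update(self, play_area):
        # dropped on the table, the toy immediately goes into patrol mode
        self.mode = "patrol"
        # recognize similarly tagged objects and coordinate with them
        self.squad = [t for t in play_area
                      if t is not self and self.tags & t.tags]

play_area = [Toy("soldier-1", ["soldier", "green"]),
             Toy("soldier-2", ["soldier", "tan"]),
             Toy("ball", ["sphere"])]
for toy in play_area:
    toy.update(play_area)
```

with this scheme, the two "soldier"-tagged toys find each other, while the ball (no shared tag) coordinates with nothing.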
in addition, elements may be sold (e.g., through micro-transactions) directly to others. this may also be the first step toward introducing the user to merging real and virtual objects into cohesive single experiences. if the physical table could be accurately and dynamically mapped, then any physical object can become a virtual character, in one or more embodiments. the virtual collection game may be used by any user of the system, but they may not buy it simply for the experience. this is because the virtual collection is not a standalone game. people may buy the system to play a set of compatible games (e.g., games with a roughly common ui, table-top interaction paradigm, and an offering of in-game assets in the appropriate format). as illustrated in fig. 99 , a variety of different types of games and game titles are suitable to be made as compatible games through the virtual game collection 9902. for example, any classic board-games 9914 in new "digital" formats may be included. also for example, tower-defense games 9904 (e.g., arranging assets on the table, in an attempt to block oncoming waves of enemies) may be included. as another example, "god" strategy games 9906 may be included. as yet a further example, even popular sports games 9908 (football, soccer, baseball, etc.) may be included. other adventure games 9910 may also be included in the virtual game collection. the class of compatible table top games is strategically important. external developers can make compelling games using an existing game engine, which would most likely need to be modified to accept new inputs (e.g., hand/eye/totem tracking) and then be imported into the ar system.
toy box
the ar system may implement various games that have inter-operable components. the games may, for example, be designed for tabletop use.
each game may essentially be independent from other games, yet a construct allows sharing of elements or assets between games, even though those elements or assets may not be specifically designed into the game into which the element or asset is being shared. thus, a first game may not have an explicit definition of an element or asset that is explicitly defined and used in a second game. yet, when the element or asset from the second game appears unexpectedly in the first game, the first game is able to accommodate the element or asset based on an application of a defined set of rules and one or more characteristics associated with the element. in one or more embodiments, a virtual toy collection interface may be implemented in which the elements or assets of every installed game (that is compatible with the virtual toy collection interface) are available in one integrated location. this interface may be understood by all the games that are compatible with the interface. a first game designer may define a first game with a first set of elements or assets. a second game designer may define a second game with a second set of elements or assets, different from the first set of elements or assets. the second designer may be completely unrelated to the first designer and may have never seen, or even heard of, the first game, and may know nothing of the elements or assets of the first game. however, each game designer may make respective games with elements or assets that understand physics as their baseline interaction. this renders the elements or assets interchangeable between different games. for example, a first game may include a tank character, which is capable of moving, rotating a turret and firing a cannon. a second game may include a dress up doll character (e.g., barbie ® doll), and may have no explicit definition of a tank or properties associated with a tank character. a user may then cause the tank character from the first game to visit the second game.
both games may include fundamental characteristics or properties (e.g., an ontology of game space). if both the first and the second games have a common construct (e.g., understand physics, physics engine), the second game can, at least to some extent, handle the introduction of the character (e.g., tank) from the first game. thus, the character (e.g., tank) from the first game can interact with the character (e.g., barbie ® doll) from the second game. for instance, the character (e.g., tank) from the first game may shoot the character (e.g., barbie ® doll) from the second game, via message passing. the character from the second game (e.g., barbie ® doll) does not know how to receive or does not understand the message (e.g., "you got shot"). however, both games have basic physics in common. thus, while the first character (e.g., tank) cannot shoot the second character (e.g., barbie ® doll), the first character (e.g., tank) can run over the second character (e.g., barbie ® doll). the world is used as the communication mechanism. the ar system may rely on the passable world model for communication. in the above example, the first and second characters do not need a common language, since they have physics in common. it would be conceivable to take a ball from one game, and use a doll from another game as a bat to hit the ball, since the physics of two objects colliding are defined. thus, if the physics are shared, the games or applications do not need a communication protocol between virtual objects belonging to each. again, if a tank runs into a doll, the doll gets run over, even if getting run over by a tank was not explicitly defined in the second game, or for that matter the first game. various levels in the ar system are maps of the real world. the user interface is based primarily on tracking of hands, eyes, and/or totems. tracking a user's hands includes tracking gestures.
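the tank-and-doll exchange above can be sketched as message passing with a physics fallback. the class names, the message string, and the handler scheme are hypothetical; the sketch only shows the principle that a game-specific message which the recipient does not understand degrades gracefully to the shared physics layer:

```python
# Hedged sketch of cross-game interaction: a character first tries an
# application-level message ("you got shot"); if the receiving character's
# game has no handler for it, the interaction falls back to the shared
# physics layer (collision), which every compatible game understands.

class Character:
    def __init__(self, name, handlers=None):
        self.name = name
        self.handlers = handlers or {}   # game-specific message handlers
        self.log = []

    def receive(self, message):
        handler = self.handlers.get(message)
        if handler is None:
            return False                 # message not understood
        self.log.append(handler(self))
        return True

def collide(mover, target):
    # shared physics: defined for all objects, regardless of origin game
    target.log.append(f"{target.name} was run over by {mover.name}")

tank = Character("tank", {"you got shot": lambda c: f"{c.name} was hit"})
doll = Character("doll")                 # no concept of being shot

if not doll.receive("you got shot"):     # the doll cannot parse the message
    collide(tank, doll)                  # so the world itself communicates
```

note that no common protocol exists between the two games; the only shared contract is the `collide` fallback, which stands in for the common physics ontology described in the text.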
tracking totem use includes tracking the pose of the totem, as well as interaction of a user's hands or fingers with the totem. it should be appreciated that the capabilities of an individual ar system may be augmented by communicatively connecting (tethered or wirelessly) the individual ar system to non-portable equipment (e.g., desktop personal computer, ar server, etc.) to improve performance. user worn components may pass through information to the ar device (e.g., desktop personal computer, ar server, etc.), which may provide extra computational power. for example, additional computational power may be desired, for instance for rendering, to run more object recognizers, to cache more cloud data, and/or to render extra shaders.
other applications
in one or more embodiments, the ar system may allow users to interact with digital humans. for example, a user may walk into an abandoned warehouse, but the space may become populated with digital humans such that it resembles a bank. the user may walk up to a teller who may be able to look at the user's eyes and interact with him/her. because the system tracks the user's eyes, the ar system can render the digital human such that the digital human makes eye contact with the user. or, in a related embodiment, eye-tracking technology may be used in other applications as well. for example, if a user walks toward a kiosk, the kiosk may be equipped with eye-trackers that are able to determine what the user's eyes are focusing on. based on this information, a digital human, or video representation of a human at the kiosk (e.g., a video at the kiosk), may be able to look into the user's eyes while interacting with the user. in another embodiment, a performer may be able to create virtual representations of himself or herself such that a digital version of the performer may appear in the user's physical space.
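the eye-contact behavior described above reduces to aiming the digital human's eyes along the vector from its own eye position toward the tracked position of the user's pupil. a minimal sketch, with made-up 3d coordinates (meters) and no claim about how a real renderer consumes the result:

```python
import math

def gaze_direction(eye_pos, pupil_pos):
    """Unit vector from a digital human's eye toward the user's pupil."""
    d = [p - e for e, p in zip(eye_pos, pupil_pos)]
    norm = math.sqrt(sum(c * c for c in d))
    return [c / norm for c in d]

# digital human's eye at head height, user's tracked pupil 2 m in front
direction = gaze_direction((0.0, 1.6, 0.0), (0.0, 1.6, 2.0))
```

a renderer would re-evaluate this vector every frame as the eye tracker updates the pupil position, keeping the digital human's gaze locked to the user.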
for example, a musician may simply be playing music in a green room where the performance is being recorded, and this performance may be broadcast to the living rooms of multiple users. however, the system may only use change data to broadcast what is changing in the performance, rather than having to re-render every aspect of the performer while he is performing. thus, a very accurate rendering of the virtual representation of the performer may be rendered in multiple users' living rooms. in yet another improvement, having the eye-tracking data of the user, the digital human (the virtual representation of the performer in this case) may be rendered such that the digital human is making eye contact with the user. thus, this may improve the user experience by having virtual representations/digital humans interact directly with multiple users. in one or more embodiments, the ar system may be used for educational purposes. for example, a series of educational virtual content may be displayed to a child. the child may physically touch the virtual object, or in another embodiment, the child may simply look at the virtual object for a longer period of time to unlock metadata related to the object. for example, the child may be surrounded by various sea creatures in his/her living room. based on the user input, metadata related to the virtual object may be duly unlocked. this provides an entirely new paradigm in education in that virtually any space may be transformed into an educational space. as illustrated in the shopping experience of figs. 89a-j , even a grocery store may be used as an educational playground. similarly, the ar system may be used in advertising applications as well. for example, the user may see a particular advertisement on tv, or maybe see a pair of shoes he/she may like on a peer. based on the user input (eye gaze, touching, or any other input), the user may be directed to the company's webpage, or to another seller who may be selling the item.
for example, a virtual icon may automatically populate within the field-of-view of the user, providing various purchase-related options to the user. or, in a related embodiment, the item may simply be placed in a "shopping cart" or similar storage bag, such that the user can check out the item later. in related embodiments, a different type of advertising paradigm may be envisioned. for example, a visual impression ("click" and buy-through) model may be utilized for purchases. for example, if a user sees a pair of shoes on a peer, and takes the step of going to the retailer's website, and at least places a similar pair of shoes in the online shopping cart, the advertiser may perhaps pay the peer through a referral program. in other words, the ar system knows, through eye tracking techniques, that the user has seen the peer's pair of shoes, and that the user has become aware of the shoes due to that interaction (e.g., even if the peer and the user do not talk about the shoes). this information may be leveraged advantageously, and the peer may be rewarded by the advertiser or the retailer. or, in another embodiment, a user may sell his impressions, clicks and buy-throughs to advertisers. in other words, advertisers may choose to buy data directly from a set of users. thus, rather than advertisers having to publish ads and subsequently monitor user behavior, individual users may simply sell their behavior data to the advertiser. this empowers users with control to utilize the data based on individual preferences. in yet another embodiment, a revenue share program may be implemented such that advertisers share their revenue with users in exchange for content/data. for example, an advertiser may directly pay the user to collect or receive data collected through the ar systems. in yet another implementation, the ar system may be used for personalized advertising.
thus, rather than seeing images or advertising content being displayed on models or celebrities, advertising content may be personalized such that each person sees an advertisement with his/her own avatar. for example, rather than seeing a billboard advertisement with a celebrity, the advertisement may feature the user himself wearing the product, say shoes. this may also be a way for the consumer to model the product and judge whether the item or product is desirable to them. moreover, the personalized advertisement may be more appealing to users since it is a direct appeal to each user, and the ar system may tap into personality traits of the user to advertise directly to him/her. in another application, the ar system may be implemented as a parental guidance application that may monitor children's usage of the ar system, or generally monitor children's behavior even when the parent is not physically proximate to the child. the ar system may use its mapping capabilities to retrieve images/videos of spaces such that parents can virtually be anywhere at any time with the kids. thus, even if the child is at school, or at a park, the parent may be able to create an avatar of himself/herself to plant themselves into that space and watch over the kids if need be. in another embodiment, the ar system may allow users to leave virtual objects for other users to discover in a real physical space (e.g., fig. 125j ). this may be implemented within a game setting (e.g., scavenger hunt gaming application, etc.) in which users strive to unlock virtual objects at various physical spaces. or, similarly, a user may leave important information in the form of virtual content for a friend who may later be occupying the same physical space. in an optional embodiment, the user may "lock" the virtual content such that it may only be unlocked by a trusted source or friend.
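the "locked" virtual content described above can be sketched as a simple recipient check. the identifiers, location string, and matching scheme are all hypothetical; a real system would presumably tie the check to the recognition mechanisms the text mentions (unique identifiers or appearance):

```python
# Minimal sketch of recipient-locked virtual content: a virtual object
# left at a physical location reveals its payload only when activated
# ("touched") by an intended recipient.

class LockedVirtualObject:
    def __init__(self, location, payload, allowed_user_ids):
        self.location = location
        self._payload = payload
        self._allowed = set(allowed_user_ids)

    def activate(self, user_id):
        # only the intended recipient may unlock the content
        if user_id in self._allowed:
            return self._payload
        return None   # anyone else sees nothing

note = LockedVirtualObject("coffee shop, table 3", "meet at 6pm",
                           allowed_user_ids=["friend-42"])
```

here `note.activate("friend-42")` returns the message, while any other identifier is refused, preserving the privacy property described in the text.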
given that the ar system may "recognize" users based on unique identifiers, or else based on a user's appearance, the ar system may only unlock the virtual content, or metadata related to the virtual content, when "touched" or activated by the intended recipient, to ensure privacy and safety. in another gaming application, one or more users may be able to play their favorite video games in a physical space. thus, rather than playing a video game or mobile game on a screen, the ar system may render the game in 3d and in the physical scale most appropriate to the user and the physical location. for example, the ar system may render virtual bricks and "birds" that may be physically clutched by the user and thrown toward the virtual bricks, to gain points and progress to the next level. these games may be played in any physical environment. for example, new york city may be transformed into a virtual playground with multiple users of the ar system using both physical and virtual objects to interact with each other. thus, the ar system may have many such gaming applications. in yet another application, the ar system may be used for exercising purposes. the ar system may transform exercise into an enjoyable game. for example, the ar system may render virtual dragons that may appear to be chasing a user, to make the user run faster, for example. the user may go on a run in his neighborhood, and the ar system may render virtual content that makes the run more enjoyable. for example, the exercise application may take the form of a scavenger hunt in which the user has to reach a destination within a fixed period of time, forcing the user to run/exercise more efficiently. in another embodiment, the ar system may render a "plant" or any other virtual content whose form, shape or characteristics may change based on the user's behavior. for example, the ar system may render a plant that blooms when the user exhibits "good" behavior and withers away when the user does not.
in a specific example, the plant may bloom when the user is being a good boyfriend, for example (e.g., buys flowers for girlfriend, etc.) and may wither away when the user has failed to call his girlfriend all day. it should be appreciated that in other embodiments, the plant or other object may be a physical object or totem that registers to the ar system's machine vision, such that the physical object is tied to the ar system. thus, many such gaming applications may be used to make the user experience more fun and interactive with the ar system and/or other users of the ar system. in yet another embodiment, the ar system may have applications in the field of health insurance. given the ar system's ability to constantly monitor a user's behavior, companies may be able to gauge a user's health based on his behaviors and accordingly price insurance premiums for the individual. this may serve as an incentive for healthy behavior to drive premiums down, because the company may see that the user is healthy and is low-risk for insurance purposes. on the other hand, the company may assess unhealthy behavior and accordingly price the user's premiums at a higher rate based on this collected data. similarly, the ar system may be used to gauge the productivity of employees at a company. the company may collect data on an employee's work habits and productivity and may be able to accordingly provide incentives or compensation to the employee based on the observed productivity. in another health application, the ar system may be implemented in the healthcare space, and may be used in virtual radiology, for instance. for example, rather than relying simply on 2d images or mri scans, the ar system may instead render a virtual model of a particular organ, enabling the doctor to determine exactly where in the 3d space the tumor or infection is located (e.g., fig. 91a ).
the ar system may use a combination of mri and ct scan images, for example, to create an accurate virtual model of a patient's organ. for example, the system may create a virtual heart based on received data such that the doctor can see where there might be a problem within the 3d space of the heart. it should be appreciated that the ar system may thus have many utilities in the health care and hospital space, and may help doctors (e.g., surgeon, radiologist, etc.) accurately visualize various organs in the body to diagnose or treat their patients accordingly. in a related embodiment, the ar system may help improve healthcare because the doctor may have all of the patient's medical history at his/her disposal. this may include patient behavior (e.g., information not necessarily contained in medical records). thus, in one or more embodiments, the history of patient behavior may be appropriately categorized, and presented to the doctor/medical technician such that the doctor can treat the patient accordingly. for example, if the patient is unconscious, the doctor may (based on the user's privacy controls) be able to search through the record of the user's behavior in the recent past to determine a cause of the ailment and treat the patient accordingly. because the ar system has advanced eye tracking capabilities (e.g., gaze tracking that monitors the pupil, and the cornea), the ar system may detect certain patterns in eye movements (e.g., changes in speed, rapid changes in pupil size, etc.), or in the retina, when the patient is having a seizure. the ar system may then analyze the pattern, and determine if it is a recurring pattern every time a user is having a seizure. for example, all seizure patients may have similar eye patterns or changes in pupil size, or other similar symptoms. or, every patient may have a distinct pattern of eye movements/pupil size changes, etc., when undergoing a seizure.
in either case, equipped with patterns that are unique to seizures or to individual patients that have undergone seizures, the ar system may program the back of a user's retina with light signals or patterns that may treat or prevent seizures. in one or more embodiments, a light therapy program may be periodically administered to the patient, which may act as a distraction or therapy while the user is having a seizure. over time, such a therapy may reduce or stop the occurrences of seizures in the user/patient. for example, a particular light pattern (e.g., frequency, wavelength, color, etc.) may be known to help mitigate or otherwise treat or prevent seizures altogether. it has been observed that seizures may be instigated by certain types of light; therefore light patterns delivered to the back of the retina may have the effect of un-doing the effects of that type of light, in some cases. thus, the ar system may be used to detect seizures, and may also be used to prevent or treat them. in an optional embodiment, based on collected information from the patient's eye movements, the ar system may create a retina map that may be used to program various aspects of the brain through retina photonic wavefronts. there may be other applications of using light signals that are projected into the retina. this light therapy may further be used in psychological applications, subtly controlling brain signals to change the user's thoughts or impulses. in another embodiment, the ar system may detect patterns of a user's behavior and actively improve a user's health. for example, a user of the ar system may suffer from obsessive compulsive disorder (ocd). the ar system may monitor the user's behavior. when the patient is displaying symptoms of ocd (e.g., nervous tics, counting, scratching, etc.), the system may automatically render a virtual image of the user's doctor who may help calm the user down.
in another embodiment, the ar system may automatically display virtual content that has a calming effect on the patient. or, in another embodiment, the ar system may be linked to a drug delivery system that may immediately administer prescribed medication whenever the patient displays a certain kind of behavior. for example, if the user is physically hurting himself during fits of an ocd episode, the ar system that is linked to an intravenous drug delivery system may automatically administer medication that may make the patient drowsy, and therefore prevent the patient from harming himself. in yet another embodiment, the ar system may help refocus a user at work if the user is distracted or seems unable to focus on work. this may help the user be more efficient and productive at work. because the ar system is constantly capturing images and videos, the ar system may detect unproductive behavior (e.g., unrelated internet browsing, low productivity, etc.), and may appropriately render virtual content to help motivate the user. in some embodiments, the ar system may be used to shape a pre-existing generalized model of a human (e.g., man, woman, child, etc.) by morphing a set of control points extracted from a data cloud of another person. thus, the ar system may use a generalized 3d model of a person's body, but sculpt another person's face into the 3d model. a possible advantage of such an approach is that an existing rigged model can have many elements (ligaments, muscle function, detail, etc.) that cannot be captured by a simple scan of a person's face. however, the simple scan may provide enough information about the user's face to make the generalized model resemble a particular person in fine detail. in other words, the ar system can benefit from the highly precise 3d model and supplement it with necessary detail captured from the simple scan to produce an accurate 3d version of the person.
garden overview (plants)
for high-dimensional representation of information, the ar system may map content to familiar natural shapes. nature encodes vast amounts of information in trees, grass, etc. for example, the ar system may represent each person or role in an organization as a virtual "plant" having parameters that can be modified by the respective user, and optionally modified by others. the users may, for example, encode the color, shape, leaves, flowers, etc., of the plant with their respective status. if a user is overworked, the respective plant could appear withered. if a user is unhappy, the leaves of the respective plant could fall off. if the user has a lack of resources, the leaves of the respective plant that represents the user may turn brown, etc. the users may provide their respective plants to a leader (e.g., manager, ceo). the leader can place all the plants in a virtual garden. this provides the leader with a high-bandwidth view of the organization, through the general color or concept of a garden. such a graphical illustration facilitates visual recognition of problems, or the lack thereof, within the organization.
email
in one or more embodiments, the ar system may implement an electronic mail or message interface using a similar natural or plant approach. for example, the ar system may render a tree, where each branch corresponds to or represents a person, entity or logical address. the ar system may represent each message (e.g., email message) as a leaf of the tree, the leaves visually associated with a branch that represents the person, entity or address from which the respective message was either received or sent. the ar system may render relatively old messages as brown and/or dried out, these leaves eventually falling from the tree to the ground. sub-branches or twigs may represent connectivity with other persons, entities or logical addresses, for example those copied or blind copied on a message.
this allows a user to easily prune branches representing annoying people, or place those branches on the back of the tree or otherwise out of direct view. in yet another embodiment, in response to a user selection/manipulation or picking up an object, the ar system may provide an indication of what is semantically known about the object. for example, the ar system may cause the world to glow softly with respect to what is semantically known. for instance, if a user picked up a television, the ar system can render virtual content that shows places where a television could be placed.
"remember this" application
in yet another embodiment, the ar system may allow a user to explicitly designate important objects in an environment (e.g., favorite cup, car keys, smartphone, etc.) for tracking. in particular, the ar system may employ an interactive modeling/analysis stage, and then track the designated object(s) visually and essentially continuously. this allows the ar system to recall a last known position of the designated object(s) upon request (e.g., "where was my phone last seen?") of a user. for example, if the user has designated a cell phone as such an object, a specific cell phone object recognizer may execute to identify a presence of the particular user's cell phone in captured image information. the resulting location information, for each time the cell phone is detected, can be distributed back to a cloud based computer system. when the user has misplaced the cell phone, the user may simply query the ar system to search for the location in which the cell phone was most recently detected.
body worn component picture application
it should be appreciated that the image sensor(s) (e.g., camera(s)) of the body worn (e.g., head worn) component can capture image information in a variety of forms. for example, the camera(s) can capture 2d still images or pictures, 2d moving pictures or video, or a 4d light field (e.g., world model).
the ar system may execute or provide image information to an application, which formats or transforms the image information and forwards or provides the formatted or transformed information as instructed. for example, the application allows for 2d image printing, 2d image sharing, 2d video sharing, 3d video sharing (for instance with others having an ar system), 3d physical printing, etc. for native 2d cameras and 2d videos, if the ar system tracks head pose, it can re-render a virtual traversal of a space based on where a user moves, using the passable world model. for implementations with cameras that capture a 4d light field, an application may allow capture of 2d images or 2d videos from the 4d light field. transforming to 2d images or 2d videos allows sharing or printing using conventional 2d software and printers. the ar system may also share 3d views, for example a 3d view that is locked to a user's head. such embodiments may use techniques similar to rendering in a game engine. in some implementations, the camera may be capable of capturing 3d wide field of view moving images or video. such images or videos, for example, may be presented via an ar system component capable of rendering 3d wide field of view images, or via some other device that can present a wide field of view to a user.

calibration

the following section will go through calibration elements in a global coordinate system in relation to the tracking cameras of the individual ar system. referring to fig. 136, for illustrative purposes it can be assumed that the ar system utilizes a camera system (such as a single camera or camera array, e.g., fov cameras, depth cameras, infrared cameras, etc.) to detect and estimate the three-dimensional structure of the world. as discussed above, this information may, in turn, be used to populate the map (e.g., passable world model) with information about the world that may be advantageously retrieved as needed.
in the ar system, the display system may be generally fixed with regard to the camera physically (e.g., the cameras and the display system may be fixedly coupled or fastened together, such as by virtue of the structures of a head mounted display). any pixel rendered in the virtual display may be characterized by a pixel value (e.g., notation exchangeable with pixel coordinates) and a three-dimensional position. referring to fig. 136, given an arbitrary 3d point p 13602 in the world, the goal may be to compute a pixel u 13604 in the display (e.g., with a resolution of 1280x720), so that the 3d position of the pixel u lies exactly between p and the user's pupil e 13606. in this model, the 3d location of the pupil and the 3d configuration of the virtual display screen 13610 are explicitly modeled (an image floating in the air as perceived by a user, which is created by the display optics). the 3d location of pupil e is parametrized as a 3d point within the camera reference system. the virtual display 13610 is parametrized by 3 external corners (anchor points) a0 13612, a1 13614, and a2 13616 (3x1 vectors). the pixel values of these anchor points, a0, a1, a2, are also known (2x1 vectors). given a pixel location u, the 3d location v of the pixel location u may be computed using the following equation (where the first bracketed factor collects the 3x1 corner offsets, the second the corresponding 2x1 pixel-value offsets, and u - a0 is taken in pixel coordinates):

v = a0 + [a1 - a0, a2 - a0] * [a1 - a0, a2 - a0]^-1 * (u - a0)

let a represent the simplified multiplication matrix applied to [u; 1]. thus, the above equation becomes equivalent to the following equation:

v = a * [u_x, u_y, 1]^t

it should be noted that a is not composed from a0, a1, a2 directly. anchor points can be arbitrarily chosen, but a remains fixed for a specific screen. it should be appreciated that the illustration of a0, a1, a2 in fig. 136 is only used for illustrative purposes, and that a0, a1, a2 may not be computed specifically during the calibration process. rather, it may be sufficient to compute the value of a. a is a 3x3 matrix whose degree of freedom is at most 9: 3 for a0, 3 for a1, 3 for a2.
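as a concrete sketch of this parametrization (all numeric values below are illustrative assumptions, not from the text): with pixel anchors at (0,0), (w,0) and (0,h), the 3x3 matrix a can be built by column-stacking the scaled anchor offsets with a0, so that v = a @ [u_x, u_y, 1] reproduces the anchor interpolation.

```python
import numpy as np

# hypothetical virtual-screen geometry: 3d anchor corners A0, A1, A2 in
# camera coordinates (metres), with pixel anchors (0,0), (W,0), (0,H).
W, H = 1280, 720
A0 = np.array([-0.20, -0.1125, 0.50])      # top-left corner
A1 = A0 + np.array([0.40, 0.0, 0.0])       # top-right corner (width)
A2 = A0 + np.array([0.0, 0.225, 0.0])      # bottom-left corner (height)

# with these pixel anchors the general equation reduces to
# v = A0 + (u_x/W)(A1-A0) + (u_y/H)(A2-A0), i.e. v = A @ [u_x, u_y, 1]:
A = np.column_stack([(A1 - A0) / W, (A2 - A0) / H, A0])

def pixel_to_3d(u, A):
    """3d location (camera frame) of display pixel u = (u_x, u_y)."""
    return A @ np.array([u[0], u[1], 1.0])

print(pixel_to_3d((W / 2, H / 2), A))  # centre of the virtual screen
```

note that a encodes the whole screen geometry in nine numbers, matching the degree-of-freedom count in the text.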
if a1-a0 is assumed to be perpendicular to a2-a0, the degree of freedom (dof) of a is deducted by 1. if the aspect ratio of the virtual screen 13610 is known, the dof of a is again deducted by 1. if the distance between the screen center and the pupil e 13606 is known, the dof is again deducted by 1. if the field of view of the screen is known, the dof is deducted further, leaving at most 5. thus, the only unknowns may be the distance (1), in-plane rotation (2) and view angle (3). it should be appreciated that the goal of calibration is to estimate a and e. in the rendering stage, given an arbitrary 3d location p 13602 (in the camera reference system), the pixel value u which corresponds to the point where the line between p and e intersects the virtual screen may be calculated. since v = a * [u_x, u_y, 1]^t, the constraint that e-v and e-p are aligned is equivalent to the following equation:

p - e = c * (a * [u_x, u_y, 1]^t - e)   (2)

it should be appreciated that c is an unknown multiplier. equation (2) has 3 equations and 3 unknowns (u_x, u_y, c). solving equation (2) yields a simplified closed form solution for u_x and u_y; as discussed above, the calculation of c is omitted here for purposes of simplicity. it should be appreciated that the above solution has no prior assumption on the screen geometry. if those assumptions (e.g., the screen sides of the virtual screen are perpendicular, the screen axis is parallel to the ray of sight, etc.) are accounted for, the above equations may be simplified further. in view of the above considerations, in one embodiment a suitable calibration process may comprise the steps outlined below. it should be appreciated that such a calibration generally requires the user to wear the head mounted ar system, and to provide some responses based upon what the user sees through the ar device while viewing the physical world. the example calibration outlined below envisions an aiming system utilizing a reticle.
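equation (2) can be rearranged into a single 3x3 linear system in (u_x, u_y, t): writing the sight line as e + t(p - e) and intersecting it with the screen plane gives the sketch below (the screen matrix and eye position are illustrative assumptions, reused from the earlier sketch).

```python
import numpy as np

# illustrative screen matrix A (same construction as before) and pupil E.
W, H = 1280, 720
A0 = np.array([-0.20, -0.1125, 0.50])
A1 = A0 + np.array([0.40, 0.0, 0.0])
A2 = A0 + np.array([0.0, 0.225, 0.0])
A = np.column_stack([(A1 - A0) / W, (A2 - A0) / H, A0])
E = np.zeros(3)  # pupil placed at the camera origin for simplicity

def render_pixel(P, E, A):
    """pixel u where the sight line from e to p crosses the virtual screen.

    collinearity e + t(p - e) = A @ [u_x, u_y, 1] is three equations in
    three unknowns (u_x, u_y, t), solved as one linear system.
    """
    M = np.column_stack([A[:, 0], A[:, 1], -(P - E)])
    ux, uy, t = np.linalg.solve(M, E - A[:, 2])
    return np.array([ux, uy])

print(render_pixel(np.array([0.0, 0.0, 2.0]), E, A))  # a point straight ahead
```

this is one way to realize the closed-form solve of equation (2); the multiplier c of the text corresponds to 1/t here.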
of course, other approaches may be similarly used, and the following steps should not be read as limiting. first, a marker may be printed out. in one or more embodiments, aruco markers may be used. aruco is a minimal c++ library for detection of augmented reality markers. the library relies on the use of coded markers. each marker may have a unique code (e.g., a unique black and white pattern). next, the marker may be placed in front of the user such that a missing part of the marker is placed at a corner of the user's field of view. next, a rough location of the user's pupil with regard to the camera is measured (e.g., in centimeters). the location may be measured in the camera coordinate system, with the camera aperture located at 0,0,0 in the 3d coordinate space. the rough location measurement may introduce at most a one centimeter error. next, the user may wear the wearable ar system in a manner such that the marker may be seen both by the user and the camera. a configuration program may be run in order to determine whether the camera detects the marker. if the camera detects the marker, the user will see the color image on the screen. given a reasonable initial calibration value, the user may also see, through a display device of the ar system, a green grid roughly aligned with the marker's chess board pattern. however, even if the user does not see it the first time, the user may be asked to continue. next, either the left eye or the right eye may be calibrated first. when the calibration process starts, the user may move his or her head so that the corner of the marker highlighted in the hmd screen aims at the corresponding physical corner of the marker. the user may make a selection to command the software to move to the next target. the targets may be randomly selected. this process may be repeated n times (e.g., based on a predetermined value). n is recommended to be more than twice the number of dofs of the calibration model.
after n data points are collected, the program may pause during an optimization process, subsequent to which the software may present both eyes with a grid. the eye having undergone the calibration may see the green grid well aligned with the physical board. this result may be auto-saved to a file. the calibration process provides a set of correspondences (x_i, y_i, z_i, u_i, v_i), in which i = 1:n, x,y,z are the 3d points detected by the camera, and u,v is the screen pixel location aligned by the user. there may be a number of constraints, such as the collinearity constraint expressed by the following equation:

(a * [u_i, v_i, 1]^t - e) x ([x_i, y_i, z_i]^t - e) = 0, for i = 1:n

prior knowledge of the screen physical structure may also provide constraints. perpendicular screen side constraints may be represented by the following equation:

a(:,1)^t * a(:,2) = 0

screen to pupil distance (assumed to be d) constraints may be represented by the following equation:

||a * [w/2, h/2, 1]^t - e|| = d

combining the constraints above, e and a may be solved using a quadratic optimization method (e.g., newton's method for optimization, etc.). in other words, referring back to fig. 136, the goal of calibration is to determine a location of the image plane relative to the tracking camera (which may be mounted on the user's head). further, a location of the user's eye may also be accounted for. the eye is located at a particular distance away from the image plane and looks at the physical world through the ar system. in one embodiment the user will receive the virtual aspects of the ar experience from a spatial light modulator (e.g., fiber scanning device, etc.) mounted to the ar system, and this imagery may be presented at a known focal length (the representative image plane for the "virtual screen"; that focal plane can be warped, rotated, etc.). again, the goal of the calibration is to estimate where the image plane is located relative to the camera. in other words, there may or may not be a camera looking at the eye ("eye tracking camera") for gaze, etc.
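once a rough pupil position e has been measured (as in the procedure above), the collinearity constraints become linear in the nine entries of a, so a can be estimated by ordinary least squares before any joint refinement. the sketch below assumes noiseless correspondences; without a distance prior, a is only recovered up to a scaling along the sight lines, which is exactly what the regularizations discussed later pin down, so the check verifies collinearity rather than a itself.

```python
import numpy as np

def skew(v):
    """matrix form of the cross product: skew(v) @ x == np.cross(v, x)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def calibrate_A(points_3d, pixels, E):
    """linear least-squares estimate of the 3x3 screen matrix a.

    each correspondence (p_i, u_i) gives the collinearity constraint
    (a @ [u_x, u_y, 1] - e) x (p_i - e) = 0, which is linear in the nine
    entries of a once the roughly measured pupil position e is held fixed.
    """
    rows, rhs = [], []
    for P, u in zip(points_3d, pixels):
        h = np.array([u[0], u[1], 1.0])
        S = skew(np.asarray(P) - E)
        rows.append(S @ np.kron(np.eye(3), h))  # maps a.ravel() to S @ (a @ h)
        rhs.append(S @ E)
    sol, *_ = np.linalg.lstsq(np.vstack(rows), np.hstack(rhs), rcond=None)
    return sol.reshape(3, 3)
```

each aimed point contributes two independent equations (the cross product has rank 2), so a handful of well-spread points suffices, consistent with the recommendation that n exceed twice the model's dofs.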
while the eye tracking cameras may make calibration more accurate, it should be appreciated that the calibration process may work with or without the eye tracking camera. generally, the tracking cameras and the ar device will be rigidly coupled, so a set of known assumptions may be made about the relationship between the tracking cameras and the ar device. thus one can perform the virtual screen calibration once for the user, but every time a new user wears the ar system, a new calibration may be conducted. the user's eye position may be referred to as e, as shown in fig. 136 (a 3x1 vector; (x,y,z)). the calibration system also takes input from the camera, as described above. coordinate values of various points may be measured by the cameras. based on these values, a coordinate system with respect to the camera may be constructed. for example, assuming there is a point in the real world at x,y,z, that point may be defined relative to 0,0,0 on the camera itself. one goal of doing such a calibration is to measure a point on the virtual screen, so that when the user looks through the ar system, the point on the image plane and the point in real world space are on the same line in space. this allows the system to render virtual content at the appropriate location on the virtual screen/image plane. in other words, if the virtual screen is "a", and a point u (a 2x1 pixel value) is to be rendered on it, a point p0 (x,y,z) in real space may need to be determined. in other words, one needs to determine a function u = fu(p, e, a). for example, a pixel location u needs to be determined given that p is known while e and a are unknown (with reference to fig. 136). the goal is to determine e and a in the above relationship. one can start from a reverse perspective on the problem to solve the relationship. the first step may be to calculate the 3-d coordinate position of the u pixel on the image plane a.
thus a reverse process of rendering is presented: given a 2-d pixel value, how can a 3-d location be calculated (as opposed to rendering, wherein a 3-d location is known and one needs to determine the 2-d pixel)? one may recall that the virtual screen or plane a need not be perpendicular to the user, but rather could be at any orientation relative to the user of the ar system. in one or more embodiments, there may be warping. plane a may be defined by three corners: a0, a1, a2. for example, say that the virtual screen resolution is 800x600 pixels: one can say that a0 is 0,0; a1 is 800,0; a2 is 0,600. these are the pixel coordinate values for the three corner points, whose 3-d locations are a0, a1, and a2. if a0 is subtracted from u, a vector from point a0 to the point u is obtained in pixel coordinates. multiplying this by the inverse of the 2x2 pixel-offset matrix, [a1-a0, a2-a0]^-1, and then by the 3x2 matrix of 3-d offsets, [a1-a0, a2-a0], yields the 3-d coordinate of u with respect to a0. now if this is added to a0, the 3-d coordinates of the u pixel inside of the camera workspace/coordinate system may be obtained. thus, a linear algebra relationship for v (think of "v" as "capital u") may be used. for example, if u is (x,y), this may be simplified as: v = a*[u_x, u_y, 1]. thus everything may be condensed into a 3x3 matrix. thus far, in this configuration the values for a0, a1, or a2 are not known. therefore, one goal of calibration may be to determine the value of matrix a. in other words, if the value of matrix a is known, the exact geometry of the image plane is also known; the geometry of the image plane is encoded by matrix a. as discussed above, the goal of this calibration in this scenario is to render a pixel u such that e, the pixel u, and p0 form a line. as described above, when an ar system is placed on a new user, the ar system may be calibrated.
the calibration system may present a point, and the user may attempt to align that point with a physical aspect of the real world. this may be repeated for a plurality of points (e.g., 20 points), after which the user is calibrated and ready to operate. such a process may be presented to the user as a simple game that takes only a few seconds (e.g., the user fires a laser through eye movement, or hits virtual targets with the eye). in one embodiment, another formula may be used that enforces the three subject points being on the same line. in other words, a point may be presented, and the user may be asked to align that point with a physical object in the real world: p-e (the vector from the eye to p) is equivalent to some constant c multiplied by the vector (v-e). one may recall from the discussion above that u and p are known, so p-e = c*(v-e), and therefore p-e = c*(a*[u_x, u_y, 1] - e). thus for each point that the user playing the calibration game aims at, he/she generates such a constraint, each of which consists of three equations (for x, y, and z). thus, if 20 such points are accumulated, there will be 60 constraints (e.g., 20 x 3). the unknowns are a, which is a 3x3 matrix, and e, which is a 3x1 vector. if there are some assumptions about a (e.g., that the screen is not skewed, the aspect ratio of the screen is known, the actual distance of the virtual plane to the tracking camera, etc.), then there may be some regularization when solving these equations. thus, after accounting for such regularizations, there may be 12 unknowns plus the unknown c's. c is a scalar. if there is no prior knowledge, the number of unknowns is 3 + 9 + n (where n is the number of calibrating points; each point adds at least one additional c), while the number of constraints is n*3. also, one needs an initial rough guess of the position of the virtual plane relative to the tracking camera. so one requires 3 + 9 + n < 3n, i.e., 12 < 2n, or 6 < n. in other words, at least 7 points are needed.
thus a larger number of points may be collected from the user to obtain a least squares solution, or a robust estimator solution.

regularizations

in order to determine a screen-to-eye distance, another equation may be used. the distance between the center of the pupil e and the center of the screen may need to be determined. the center of the screen is simply at the width of the screen w divided by 2 (w/2) and the height of the screen h divided by 2 (h/2). thus, the screen center s in the camera coordinate system may be represented by the following equation:

s = a * [w/2, h/2, 1]^t

then, one may subtract the pupil e and constrain the squared distance to equal some prior value d(s-e) (screen to eye). this may produce an equation as follows:

||s - e||^2 = d(s-e)^2

next, if one knows that the screen is not skewed, then the two sides of the screen are always perpendicular to each other. this means that the transpose of the first column of a multiplied by the second column of a equals 0; this may be called the "perpendicular screen constraint". next, if one knows that the screen is not rotated with respect to the eye (e.g., the screen is always right in front of the user in an upright position), this information may also be used. the vector from e to the center of the screen may be represented as the following equation:

alpha = s - e

this vector alpha represents the offset from the eye to the screen center. one knows that the first column of a is along the width of the screen and the second column of a is along the height of the screen. thus one has:

a(:,1)^t * alpha = 0 and a(:,2)^t * alpha = 0

and thus, in such a configuration, the width is perpendicular to the user's ray of sight, and the height is also perpendicular to the user's ray of sight. therefore, the screen may be perpendicular to the user's ray of sight (either constraint could be used). thus there are four constraints; this reduces the total dof of a down to 5.
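the four priors above can be written as a residual vector to append (with weights) to the least-squares system. everything below follows the parametrization already introduced (a's first column along the width, second along the height), with illustrative numbers.

```python
import numpy as np

def regularization_residuals(A, E, W, H, d_prior):
    """residuals of the screen-geometry priors: all four are zero for an
    unskewed screen facing the eye at the prior distance."""
    center = A @ np.array([W / 2, H / 2, 1.0])   # screen centre s
    alpha = center - E                           # eye-to-centre vector
    return np.array([
        A[:, 0] @ A[:, 1],                       # perpendicular screen sides
        np.linalg.norm(alpha) - d_prior,         # screen-to-eye distance
        A[:, 0] @ alpha,                         # width  perpendicular to gaze
        A[:, 1] @ alpha,                         # height perpendicular to gaze
    ])

# an upright screen 0.5 m straight ahead of the eye satisfies all priors:
W, H = 1280, 720
A0 = np.array([-0.20, -0.1125, 0.50])
A = np.column_stack([np.array([0.40, 0.0, 0.0]) / W,
                     np.array([0.0, 0.225, 0.0]) / H, A0])
print(regularization_residuals(A, np.zeros(3), W, H, 0.5))
```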
thus more regularizations allow a smaller number of calibration data points, and also increase the accuracy significantly. it should be appreciated that once the calibration is done, a relationship between the virtual screen and the eye is known. the unknowns have been separated out with regard to those pertaining to the screen versus those unrelated to the screen. this is useful because user eye configurations can differ. given that the data pertaining to a is known, the only unknown becomes the location of the eye e. in other words, if one conducts the calibration routine having the user aim at 10 points, there will be 10 stacked constraints that can be solved, where the only unknown is e (e.g., a may be eliminated). thus one can use the same solver equation with fewer unknowns, but much higher accuracy, using this technique. if the system has an eye tracking camera (e.g., an image capture device directed toward the eyes of the user), then e may be given as well. in such a case, when the user wears the head-mounted ar device, calibration may not be needed, because a, the geometry of the screen plane, is pre-calibrated (by the factory, by some other users, or by the same user previously). since the eye camera directly measures e, a rendering may be done without any calibration. it is worth noting that if these kinds of constraints are not accurate, there may be a fourth kind of regularization: prior knowledge of the eye location. in other words, it is desirable that the distance of the current eye location from a previous eye location be very small. therefore, in a least squares representation, it may be represented by the following term to be minimized:

||e - e_prior||^2

of course, it should be appreciated that the value of e_prior may be derived through the eye-tracking cameras. referring now to fig. 145, an example method 14500 of performing calibration on ar systems is discussed. at 14502, a virtual image is displayed to a user. the virtual image may be any image.
as discussed above, the virtual image may simply comprise a point at which the user is focused. in other embodiments, the virtual image may be any image, and the user may be directed to focus on a particular pixel (e.g., denoted by a particular color, etc.). at 14504, the ar system determines a location of the virtual image. in one or more embodiments, the location of the virtual image may be known because the system knows the depth at which the virtual image is being displayed to the user. at 14506, the ar system may calculate a location of the user's eye pupil. this may be calculated through the various techniques outlined above. at 14508, the ar system may use the calculated location of the user's eye pupil to determine a location at which a pixel of the virtual image is displayed to the user. user input may also be utilized to determine the location of the pixel. at 14510, the user may be asked to align the pixel point with a known point in space. at 14512, a determination may be made as to whether enough points n have been collected. it should be appreciated that the various pixel points may be strategically located at various points, and in various directions, to obtain accurate calibration values for a number of parts of the display of the ar system. as described above, in some embodiments, the number of points (e.g., 20 pixel points) should be rather high to achieve higher accuracy. if it is determined that more points are needed, the process goes back to 14502 to collect data for other pixel points. if, at 14512, it is determined that enough points have been collected, various values of the pixel and/or display may be adjusted based on the collected data (14514).

transaction-assistance configurations

the subject ar systems are ideally suited for assisting users with various types of transactions, financial and otherwise, because the ar systems are well suited to identify, localize, authenticate, and even determine the gaze of the user.
in one or more embodiments, a user may be identified based on eye-tracking. the subject ar system generally has knowledge pertaining to the user's gaze and point of focus. as discussed above, in various embodiments, the head-mounted ar system features one or more cameras that are oriented to capture image information pertinent to the user's eyes. in one configuration, such as that depicted in fig. 137, each eye of the user may have a camera 13702 focused on the eye, along with 3 or more leds (in one embodiment directly below the eyes, as shown) with known offset distances to the camera, to induce glints upon the surfaces of the eyes, as described in detail above. three leds with known offsets are used because, by triangulation, one can deduce the 3d distance from the camera to each glint point. with at least 3 points and an approximate spherical model of the eye, the curvature of the eye may be deduced. with the 3d offset and known orientation to the eye, one can form an exact (image) or abstract (gradients or other features) template of the iris or retina (and, in other embodiments, of the retina and the pattern of veins in and over the eye). this allows for precise identification of the user. in one or more embodiments, iris identification may be used to identify the user. the pattern of muscle fibers in the iris of an eye forms a stable and unique pattern for each person. this information may be advantageously used as an identification code in many different ways. the goal is to extract a sufficiently rich texture from the eye. since the cameras of the ar system point at the eye from below or from the side, the code need not be rotation invariant. fig. 138 shows an example code 13800 from an iris, just for reference. there may be cameras below and many other leds that provide 3d depth information. this may be used to form a template code, normalized for pupil diameter and its 3d position.
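the curvature estimate from triangulated glint points can be sketched as a least-squares sphere fit. this is a sketch assuming four or more non-coplanar glints; with only the three leds mentioned above, a radius prior would replace one equation.

```python
import numpy as np

def fit_sphere(points):
    """least-squares sphere (center, radius) through 3d glint points.

    |p - c|^2 = r^2 rearranges to the linear form 2 p.c + k = |p|^2
    with k = r^2 - |c|^2, so the fit is a single lstsq solve.
    """
    P = np.asarray(points, dtype=float)
    M = np.hstack([2.0 * P, np.ones((len(P), 1))])
    b = (P ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(M, b, rcond=None)
    center, k = sol[:3], sol[3]
    return center, np.sqrt(k + center @ center)
```

the recovered center and radius give the eye's approximate curvature and 3d position, which the template code can then be normalized against.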
such a code may be captured over time from several different views as the user is registering with the device (e.g., during a set-up time, etc.). as described above, in one embodiment the hmd comprises a diffraction display driven by a laser scanner steered by a steerable fiber optic cable. this cable may also be utilized to look into the eye and view the retina itself, which is also a unique pattern of rods, cones (visual receptors) and blood vessels. these also form a pattern unique to each individual and can therefore be used to uniquely identify each person. referring now to fig. 139, an image of the retina 13900 is illustrated. similar to the above embodiment, the image of the retina may also be converted to a pattern using any number of conventional means. for example, the pattern of dark and light blood vessels may be unique to each user. this may be converted to a "dark-light" code by standard techniques, such as running gradient operators on the image and counting high/low transitions in a standardized grid centered at the center of the retina. since the various ar systems described here are designed to be worn persistently, they may also be utilized to monitor any slow changes in the user's eyes (e.g., such as the development of cataracts, etc.). further, visualization of the iris and retina may also be utilized to alert the user of other health changes, such as congestive heart failure, atherosclerosis, and high cholesterol, signs of which often first appear in the eyes. thus the subject systems may be utilized to identify and assist the user with enhanced accuracy for at least the following reasons. first, the system can determine the curvature/size of the eye, which assists in identifying the user, since eyes are of similar but not exactly the same size between people.
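the gradient-and-transitions scheme described above for the "dark-light" code might look like the toy sketch below. the grid size and threshold are invented for illustration; a production iris/retina code would use far richer filters (e.g., gabor wavelets).

```python
import numpy as np

def dark_light_code(image, grid=8):
    """toy "dark-light" template code from a retina/iris image patch.

    runs a simple horizontal gradient over the image, then counts
    high/low transitions of the thresholded gradient inside each cell
    of a standardized grid, emitting one bit per cell.
    """
    img = np.asarray(image, dtype=float)
    grad = np.diff(img, axis=1) > 0            # sign of horizontal gradient
    h, w = grad.shape
    bits = []
    for i in range(grid):
        for j in range(grid):
            cell = grad[i * h // grid:(i + 1) * h // grid,
                        j * w // grid:(j + 1) * w // grid]
            transitions = np.abs(np.diff(cell.astype(int), axis=1)).sum()
            bits.append(1 if transitions > cell.size // 4 else 0)
    return np.array(bits)
```

a real matcher would compare such codes by hamming distance after normalizing for pupil diameter and 3d position, as the text notes.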
second, the system has knowledge of temporal information; the system can determine the user's normal heart rate, whether the user's eyes are producing a water film, whether the eyes verge and focus together, and whether breathing patterns, blink rates, or blood pulsing status in the vessels are normal. next, the system can also use correlated information; for example, the system can correlate images of the environment with expected eye movement patterns, and can also check that the user is seeing the same expected scene that is supposed to be located at that location (e.g., as derived from gps, wi-fi signals, maps of the environment, etc.). for example, if the user is supposedly at home, the system should be seeing the expected pose-correct scenes inside of the known home. finally, the system can use hyperspectral and/or skin/muscle conductance to also identify the user. all of the above may be advantageously used to develop an extremely secure form of user identification. in other words, the system may be utilized to determine an identity of the user with a relatively high degree of accuracy. since the system can be utilized to know who the user is with unusual certainty and on a persistent basis (the temporal information), it can also be utilized to allow micro-transactions. passwords or sign up codes may be eliminated. the subject system may determine an identity of the user with high certainty. with this information the user may be allowed access to any website after a simple notice (e.g., a floating virtual box) about the terms of that site. in one embodiment the system may create a few standard terms so that the user instantly knows the conditions on that site. if one or more websites do not adhere to a fair set of conditions, then the ar system may not automatically allow access or micro transactions (as will be described below) on that particular website.
on a given website, the ar system may ensure not only that the user has viewed or used some content, but may also determine a length of time for which the content was used (e.g., a quick browse might be free, but there may be a charge for a larger amount of usage). in one or more embodiments, as described above, micro-transactions may be easily performed through such a system. for example, different products or services may be priced at a fraction of a penny (e.g., a news article may cost 1/3 of a cent; a book may be charged at a penny a page; music at 10 cents a listen, etc.). within the current currency paradigm, it is hardly practical to utilize micro-transactions, because it may be more difficult to keep track of such activity amongst users. however, the ar system may easily determine the user activity and track it. in one or more embodiments, the ar system may receive a small percentage of the transaction (e.g., a 1% transaction fee, etc.). in one embodiment, the system may be utilized to create an account, controllable by the user, in which a set of micro transactions is aggregated. this set may be aggregated such that the user pays the website or entity when the amount exceeds a threshold value. or, in another embodiment, the amount may simply be cleared on a routine basis if the threshold value has not been reached. in another embodiment, parents may have similar access to their children's accounts. for example, policies may be set allowing no more than a certain percentage of spending, or creating a limit on spending. various embodiments may be facilitated, as will be described using the following embodiments. goods may be delivered to the user's preferred location even if the user is not physically present, due to the ar telepresence concept. that is, with ar telepresence, the user may be at an office location, but may let the delivery person in to their home, or else appear to the delivery person by avatar telepresence.
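the threshold-settled aggregation account described above can be sketched as follows. the class, parameter names, and threshold are invented for illustration; a real system would persist state and handle currency precision carefully.

```python
class MicroTransactionAccount:
    """sketch of threshold-based micro-transaction aggregation.

    sub-cent charges accumulate per payee and are settled only once
    the aggregate crosses a threshold, so each individual charge stays
    cheap to track and clear.
    """
    def __init__(self, threshold_cents=500):
        self.threshold = threshold_cents
        self.pending = {}          # payee -> accumulated fractional cents
        self.settled = []          # (payee, amount) settlements so far

    def charge(self, payee, cents):
        self.pending[payee] = self.pending.get(payee, 0.0) + cents
        if self.pending[payee] >= self.threshold:
            self.settled.append((payee, self.pending[payee]))
            self.pending[payee] = 0.0
```

the routine-basis clearing mentioned in the text would simply flush `pending` on a timer regardless of the threshold.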
since the system may be utilized to track the eye, it can also allow "one glance" shopping. that is, the user may simply look at an object (say, a robe in a hotel) and create a stipulation such as, "i want that, when my account goes back over $3000." when a user views a particular object of interest, similar products may also be displayed virtually to the user. in one or more embodiments, the ar system may read barcodes. this may also facilitate the user in making the transaction. in one or more embodiments, a used market may be rendered for as many products and product categories as possible. the used items may always be contrasted against the new ones. for many items, since the ar system may be utilized to render a 3d object, the user may simply walk around the 3d object to examine it from all sides. it is envisioned that, over time, most items may correspond to a 3d model which may be updated by a quick scan of the object. indeed, many items, such as cellphones or smartphones, may become virtualized such that the user gets the same functionality without having to purchase or carry the conventional hardware. in one or more embodiments, users of the ar system may manage possessions by always having access to a catalog of objects, each of which can be instantly put on the market at a suggested or user-settable rate. in one or more embodiments, the ar system may have an arrangement with local companies to store goods at a cost to the user, and split the cost with one or more websites. in one or more embodiments, the ar system may provide virtual markets. in other words, the ar system may host market places that may be entirely virtual (via servers) or entirely real. in one or more embodiments, the ar system may develop a unique currency system. the currency system may be indexed to the very reliable identification of each person using the subject technology. in such a case there could be no stealing, because every actor is securely known.
such a currency may grow over time as the number of users increases. that is, every user who joins the system may add to the total money in the system. similarly, every time an item is purchased, the currency may inflate beyond a point such that users do not have an incentive to keep large amounts of money. this encourages free movement of money in the economy. the currency may be modeled to stimulate maximum interaction/maximum economic growth. new money may be distributed in inverse ratio to existing wealth: new users may receive more, and wealthy people may receive less. the reverse may be true if the money supply shrinks past a threshold limit. rather than being subject to human intervention, this currency system may run on an adaptive mathematical model using best known economic practices. that is, during a recession, the inflation factor of the currency may become bigger such that money starts flowing into the system. when there is a boom in the economy, the money supply might even shrink to dampen market swings. in one or more embodiments, the model parameters would be publicly broadcast and the currency would float against other currencies. in one embodiment, retinal-signature-secured data access may be utilized. in such an embodiment, the subject system may allow text, images, and content to be selectively transmittable to, and displayable only on, trusted secure hardware devices, which allow access when the user can be authenticated based on one or more dynamically measured retinal signatures. since the display device projects directly onto the user's retina, only the intended recipient (identified by retinal signature) may be able to view the protected content. further, because the viewing device actively monitors the user's retina, the dynamically-read retinal signature may be recorded as proof that the content was in fact presented to the user's eyes (e.g.
a form of digital receipt, possibly accompanied by a verification action such as executing a requested sequence of eye movements). spoof detection may rule out attempts to use previous recordings of retinal images, static or 2d retinal images, generated images, etc., based on models of expected natural variation. a unique fiducial/watermark may be generated and projected onto the retinas to generate a unique retinal signature for auditing purposes. the breadth of the present invention is not to be limited to the examples provided and/or the subject specification, but rather only by the scope of claim language associated with the claims.
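the inverse-wealth issuance rule described for the currency system above (new money distributed in inverse ratio to existing wealth, so new users receive more and wealthy users receive less) can be sketched as follows; the function name and weighting scheme are illustrative assumptions, not part of the disclosure.

```python
def distribute_new_money(balances, new_supply):
    """Split new_supply across accounts in inverse ratio to existing wealth.

    balances: dict of account -> current balance (assumed > 0).
    Wealthier accounts receive proportionally less, as described above.
    """
    inverse = {acct: 1.0 / bal for acct, bal in balances.items()}
    total = sum(inverse.values())
    return {acct: new_supply * w / total for acct, w in inverse.items()}

balances = {"alice": 100.0, "bob": 400.0}
share = distribute_new_money(balances, 50.0)
# alice's inverse weight (1/100) is 4x bob's (1/400), so alice receives 4/5 of the issue
```

an adaptive model of the kind described would additionally vary `new_supply` (the inflation factor) with economic conditions; that policy layer is omitted here.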
161-271-465-233-895
US
[ "TW", "CN", "US", "WO" ]
H01B5/14,B29C48/07,B29C48/30,H01L23/52,H05K1/03,H05K3/00,H01G4/228,H01G4/008,H01G4/01,H01G4/20,H01G4/30,H01G4/33,H05K1/02
2013-02-25T00:00:00
2013
[ "H01", "B29", "H05" ]
film constructions for interdigitated electrodes with bus bars and methods of making same
an interdigitated electrode film co-extruded with bus bars for thin film electronics or other devices. first electrode layers are located between first and second major surfaces of the film with a first bus bar electrically connecting and integrated with the first electrode layers. second electrode layers are located between the first and second major surfaces with a second bus bar electrically connecting and integrated with the second electrode layers. the first electrode layers are interdigitated with the second electrode layers, and insulating layers electrically isolate the first bus bar and electrode layers from the second bus bar and electrode layers. the electrode films include multilayer films with vertical bus bars and multilane films with horizontal bus bars.
1. a multilayer interdigitated electrode film having a first major surface, a second major surface opposite the first major surface, an in-plane direction extending along the first and second major surfaces, and a z-direction extending between the first and second major surfaces, the film comprising: a first plurality of electrode layers between the first and second major surfaces along the in-plane direction; a first bus bar electrically connecting and integrated with the first plurality of electrode layers along the z-direction; a second plurality of electrode layers between the first and second major surfaces along the in-plane direction; a second bus bar electrically connecting and integrated with the second plurality of electrode layers along the z-direction; and a plurality of insulating layers between the first and second plurality of electrode layers, the plurality of insulating layers electrically isolating the first bus bar and the first plurality of electrode layers from the second bus bar and the second plurality of electrode layers, wherein the first plurality of electrode layers are interdigitated with the second plurality of electrode layers, wherein the first plurality of electrode layers, the first bus bar, the second plurality of electrode layers, the second bus bar, and the plurality of insulating layers all comprise an extrudable material. 2. the film of claim 1 , wherein one of the first plurality of electrode layers comprises a first outermost layer on the first major surface, and one of the second plurality of electrode layers comprises a second outermost layer on the second major surface. 3. the film of claim 1 , wherein one of the plurality of insulating layers comprises a first outermost layer on the first major surface, and another one of the plurality of insulating layers comprises a second outermost layer on the second major surface. 4. 
the film of claim 1 , further comprising a first skin layer on the first major surface and a second skin layer on the second major surface. 5. the film of claim 4 , wherein the first and second skin layers are removable without damaging operation of the film. 6. the film of claim 1 , wherein the first and second plurality of electrode layers comprise a conductive polymer. 7. the film of claim 1 , wherein the plurality of insulating layers comprise a thermoplastic polymer. 8. the film of claim 1 , wherein the first and second plurality of electrode layers comprise a polymer with conductive nanoparticles. 9. the film of claim 1 , wherein the plurality of insulating layers comprise a polymer with high permittivity nanoparticles. 10. the film of claim 1 , further comprising: a third plurality of electrode layers between the first and second major surfaces along the in-plane direction, wherein the second bus bar electrically connects and is integrated with the third plurality of electrode layers along the z-direction on a side of the second bus bar opposite the second plurality of electrode layers; a fourth plurality of electrode layers between the first and second major surfaces along the in-plane direction; a third bus bar electrically connecting and integrated with the fourth plurality of electrode layers along the z-direction; and another plurality of insulating layers between the third and fourth plurality of electrode layers, the another plurality of insulating layers electrically isolating the second bus bar and the third plurality of electrode layers from the third bus bar and the fourth plurality of electrode layers, wherein the third plurality of electrode layers are interdigitated with the fourth plurality of electrode layers. 11. 
a multilane interdigitated electrode film having a first major surface, a second major surface opposite the first major surface, an in-plane direction extending along the first and second major surfaces, and a z-direction extending between the first and second major surfaces, the film comprising: a first plurality of electrode layers between the first and second major surfaces along the z-direction; a first bus bar electrically connecting and integrated with the first plurality of electrode layers along the in-plane direction; a second plurality of electrode layers between the first and second major surfaces along the z-direction; a second bus bar electrically connecting and integrated with the second plurality of electrode layers along the in-plane direction; and a plurality of insulating layers between the first and second plurality of electrode layers, the plurality of insulating layers electrically isolating the first bus bar and the first plurality of electrode layers from the second bus bar and the second plurality of electrode layers, wherein the first plurality of electrode layers are interdigitated with the second plurality of electrode layers, wherein the first plurality of electrode layers, the first bus bar, the second plurality of electrode layers, the second bus bar, and the plurality of insulating layers all comprise an extrudable material. 12. the film of claim 11 , wherein the first and second plurality of electrode layers comprise a conductive polymer. 13. the film of claim 11 , wherein the plurality of insulating layers comprise a thermoplastic polymer. 14. the film of claim 11 , wherein the first and second plurality of electrode layers comprise a polymer with conductive nanoparticles. 15. the film of claim 11 , wherein the plurality of insulating layers comprise a polymer with high permittivity nanoparticles. 16. 
a method of making a multilayer interdigitated electrode film having a first major surface, a second major surface opposite the first major surface, an in-plane direction extending along the first and second major surfaces, and a z-direction extending between the first and second major surfaces, the method comprising: extruding a first plurality of electrode layers along the in-plane direction and a first bus bar electrically connecting and integrated with the first plurality of electrode layers along the z-direction; extruding a second plurality of electrode layers along the in-plane direction and a second bus bar electrically connecting and integrated with the second plurality of electrode layers along the z-direction; and extruding a plurality of insulating layers between the first and second plurality of electrode layers, the plurality of insulating layers electrically isolating the first bus bar and the first plurality of electrode layers from the second bus bar and the second plurality of electrode layers, wherein the first plurality of electrode layers are interdigitated with the second plurality of electrode layers, wherein the extruding steps collectively comprise simultaneously co-extruding the first plurality of electrode layers and the first bus bar with the second plurality of electrode layers and the second bus bar and with the plurality of insulating layers to make the electrode film. 17. the method of claim 16 , wherein the first and second plurality of electrode layers comprise a conductive polymer. 18. the method of claim 16 , wherein the plurality of insulating layers comprise a thermoplastic polymer. 19. the method of claim 16 , wherein the first and second plurality of electrode layers comprise a polymer with conductive nanoparticles. 20. the method of claim 16 , wherein the plurality of insulating layers comprise a polymer with high permittivity nanoparticles. 21. 
a method of making a multilane interdigitated electrode film having a first major surface, a second major surface opposite the first major surface, an in-plane direction extending along the first and second major surfaces, and a z-direction extending between the first and second major surfaces, the method comprising: extruding a first plurality of electrode layers along the z-direction and a first bus bar electrically connecting and integrated with the first plurality of electrode layers along the in-plane direction; extruding a second plurality of electrode layers along the z-direction and a second bus bar electrically connecting and integrated with the second plurality of electrode layers along the in-plane direction; and extruding a plurality of insulating layers between the first and second plurality of electrode layers, the plurality of insulating layers electrically isolating the first bus bar and the first plurality of electrode layers from the second bus bar and the second plurality of electrode layers, wherein the first plurality of electrode layers are interdigitated with the second plurality of electrode layers, wherein the extruding steps collectively comprise simultaneously co-extruding the first plurality of electrode layers and the first bus bar with the second plurality of electrode layers and the second bus bar and with the plurality of insulating layers to make the electrode film. 22. the method of claim 21 , wherein the first and second plurality of electrode layers comprise a conductive polymer. 23. the method of claim 21 , wherein the plurality of insulating layers comprise a thermoplastic polymer. 24. the method of claim 21 , wherein the first and second plurality of electrode layers comprise a polymer with conductive nanoparticles. 25. the method of claim 21 , wherein the plurality of insulating layers comprise a polymer with high permittivity nanoparticles. 26. 
a multilayer interdigitated electrode film having a first major surface, a second major surface opposite the first major surface, an in-plane direction extending along the first and second major surfaces, and a z-direction extending between the first and second major surfaces, the film comprising: a first electrode layer between the first and second major surfaces along the in-plane direction; a first bus bar electrically connecting and integrated with the first electrode layer along the z-direction; a plurality of second electrode layers between the first and second major surfaces along the in-plane direction; a second bus bar electrically connecting and integrated with the plurality of second electrode layers along the z-direction; and a plurality of insulating layers between the first electrode layer and the plurality of second electrode layers, the plurality of insulating layers electrically isolating the first bus bar and the first electrode layer from the second bus bar and the plurality of second electrode layers, wherein the first electrode layer is interdigitated with the plurality of second electrode layers, wherein the first electrode layer, the first bus bar, the second plurality of electrode layers, the second bus bar, and the plurality of insulating layers all comprise an extrudable material.
background many common electronic devices can be fabricated in a continuous manner on a flexible substrate. continuous film-based methods have been demonstrated for complete or partial fabrication of capacitors, resistors, thin film batteries, organic photovoltaics (opvs), organic light emitting diodes (oleds), and other components. however, there are fewer continuous techniques available for producing fully integrated multilayer electronic films, especially those with a large number of layers and electrodes, for example over 100 layers. also, many thin film electronic devices are produced through multiple vapor deposition and patterning steps. accordingly, a need exists for complex electrodes and methods to fabricate them. summary a multilayer interdigitated electrode film, consistent with the present invention, has a first major surface, a second major surface opposite the first major surface, an in-plane direction extending along the first and second major surfaces, and a z-direction extending between the first and second major surfaces. a first plurality of electrode layers are located between the first and second major surfaces along the in-plane direction, and a first bus bar electrically connects and is integrated with the first plurality of electrode layers along the z-direction. a second plurality of electrode layers are located between the first and second major surfaces along the in-plane direction, and a second bus bar electrically connects and is integrated with the second plurality of electrode layers along the z-direction. the first plurality of electrode layers are interdigitated with the second plurality of electrode layers, and insulating layers electrically isolate the first bus bar and the first plurality of electrode layers from the second bus bar and the second plurality of electrode layers. 
a multilane interdigitated electrode film, consistent with the present invention, has a first major surface, a second major surface opposite the first major surface, an in-plane direction extending along the first and second major surfaces, and a z-direction extending between the first and second major surfaces. a first plurality of electrode layers are located between the first and second major surfaces along the z-direction, and a first bus bar electrically connects and is integrated with the first plurality of electrode layers along the in-plane direction. a second plurality of electrode layers are located between the first and second major surfaces along the z-direction, and a second bus bar electrically connects and is integrated with the second plurality of electrode layers along the in-plane direction. the first plurality of electrode layers are interdigitated with the second plurality of electrode layers, and insulating layers electrically isolate the first bus bar and the first plurality of electrode layers from the second bus bar and the second plurality of electrode layers. methods consistent with the present invention include co-extrusion of materials to form the multilayer and multilane interdigitated electrode films. brief description of the drawings the accompanying drawings are incorporated in and constitute a part of this specification and, together with the description, explain the advantages and principles of the invention. in the drawings, fig. 1 is a cross-sectional view of a multilayer interdigitated electrode film with buried electrodes and vertical bus bars; fig. 2 is a perspective view of the film of fig. 1 ; fig. 3 is a cross-sectional view of a multilayer interdigitated electrode film with exposed electrodes and vertical bus bars; fig. 4 is a cross-sectional view of a multilane interdigitated electrode film with horizontal bus bars; fig. 5 is a perspective view of the film of fig. 4 ; fig. 
6 is a cross-sectional view of a multilayer interdigitated electrode film with vertical bus bars before singulation of the film into separate electrode films; fig. 7 is a front view of a feedblock for making an interdigitated electrode film with bus bars; fig. 8 is a front exploded perspective view of the feedblock of fig. 7 ; fig. 9 is a top view block diagram of a system for making an interdigitated electrode film with bus bars; and fig. 10 is a perspective view of the system of fig. 9 . detailed description embodiments of the present invention include multilayer (and multilane) melt-processable polymeric film constructions and fabrication methods to produce them. the advantages of melt processing in the realm of thin film electronics are two-fold: significant decrease in thickness of individual electrically active layers and a significant increase in electrode surface area. melt processing in multilayer form can provide, for example, fully integrated electronic films via a single processing method. additional benefits of multilayer melt processing that are potentially useful for thin film electronics include precise interfacial control, control of adhesion at interfaces, precise thickness control, and high cross- and down-web uniformity. these techniques and combinations thereof can be used to produce a series of structures with alternating electrode layers. a common feature of these multilayer devices is the presence of both vertical and horizontal electrode segments, the vertical segments aligned with the film axis of smallest dimension, and the horizontal segments aligned along the in-plane direction of the extruded film, although the opposite arrangement is also possible as a multilane electrode film. the constructions are compatible with continuous fabrication methods such as multilayer extrusion and multilane extrusion. 
another advantage of these films is the connection between integrated in-plane electrodes and vertical connecting electrodes along the z-direction of the film. the vertical bus bars stabilize the interdigitated electrode structure to make them electrically stable without short circuits, enable a robust connection to the in-plane electrodes, and provide a way of singulating repeating units of the device structure into individual electrode films. stabilizing the interdigitated electrodes with vertical bus bars also helps to prevent variable electrode spacing at the edges and electrical short circuits. another advantage of these films is they can be made by a process that eliminates the need for 3d patterning, printing, or multiple lithographic steps. applications of these electrode films include, for example, actuators, sensors, and capacitors. film constructions figs. 1-6 show exemplary constructions for multilayer and multilane interdigitated electrode films. as shown, these films have a first major surface, a second major surface opposite the first major surface, and an in-plane direction generally along the first and second major surfaces. the films are described with reference to x-, y-, and z-directions. the x-direction is along the length of the film (or down web direction when making the film), the y-direction is along the width of the film, and the z-direction is along a distance between the first and second major surfaces. figs. 1 and 2 are cross-sectional and perspective views, respectively, of a multilayer interdigitated electrode film 10 with buried electrodes and vertical bus bars. film 10 includes electrode layers 12 interdigitated with electrode layers 14 between the major surfaces along the in-plane direction of the film. insulating layers 16 separate electrode layers 12 from electrode layers 14 . 
a bus bar 18 electrically connects and is integrated with electrode layers 12 at one location between the major surfaces of film 10 along the z-direction, for example an edge of film 10 . a bus bar 20 electrically connects and is integrated with electrode layers 14 at another location between the major surfaces of film 10 along the z-direction, for example another edge of film 10 . optional skin layers 15 and 17 can be located on the major surfaces of the film as the outermost layer. film 10 has buried electrodes in that insulating layers 16 cover the electrode layers 12 and 14 on the outermost layers of the major surfaces of film 10 , aside from optional skin layers 15 and 17 . skin layers are layers used to protect the film and can be removed without damaging operation of the film, for example its properties as an electrode. skin layers 15 and 17 can be formed of the same material or different materials. the skin layers should be non-conductive if they are not removed prior to use of the electrode film. fig. 3 is a cross-sectional view of a multilayer interdigitated electrode film 22 with exposed electrodes and vertical bus bars. film 22 includes electrode layers 24 interdigitated with electrode layers 26 between the major surfaces along the in-plane direction. insulating layers 28 separate electrode layers 24 from electrode layers 26 . a bus bar 30 electrically connects and is integrated with electrode layers 24 at one location between the major surfaces of film 22 along the z-direction, for example an edge of film 22 . a bus bar 32 electrically connects and is integrated with electrode layers 26 at another location between the major surfaces of film 22 along the z-direction, for example another edge of film 22 . film 22 has exposed electrodes in that insulating layers 28 do not cover the electrode layers 24 and 26 on the outermost layers of the major surfaces of film 22 . figs. 
4 and 5 are cross-sectional and perspective views, respectively, of a multilane interdigitated electrode film 34 with horizontal bus bars. film 34 includes electrode layers 36 interdigitated with electrode layers 38 between the major surfaces along the z-direction. insulating layers 40 separate electrode layers 36 from electrode layers 38 . a bus bar 44 electrically connects and is integrated with electrode layers 36 on one major surface of film 34 along the in-plane direction, and a bus bar 42 electrically connects and is integrated with electrode layers 38 on the other major surface of film 34 along the in-plane direction. fig. 6 is a cross-sectional view of a multilayer interdigitated electrode film 46 with vertical bus bars before singulation of the film. film 46 includes three sections in this example. the first section includes electrode layers 48 interdigitated with electrode layers 50 between the major surfaces along the in-plane direction. insulating layers 52 separate electrode layers 48 from electrode layers 50 . a bus bar 54 electrically connects and is integrated with electrode layers 48 between the major surfaces along the z-direction, and a bus bar 56 electrically connects and is integrated with electrode layers 50 between the major surfaces along the z-direction. the second section includes electrode layers 58 interdigitated with electrode layers 60 between the major surfaces along the in-plane direction. insulating layers 62 separate electrode layers 58 from electrode layers 60 . bus bar 56 electrically connects and is integrated with electrode layers 58 between the major surfaces along the z-direction, and a bus bar 64 electrically connects and is integrated with electrode layers 60 between the major surfaces along the z-direction. the third section includes electrode layers 66 interdigitated with electrode layers 68 between the major surfaces along the in-plane direction. 
insulating layers 70 separate electrode layers 66 from electrode layers 68 . bus bar 64 electrically connects and is integrated with electrode layers 66 between the major surfaces along the z-direction, and a bus bar 72 electrically connects and is integrated with electrode layers 68 between the major surfaces along the z-direction. film 46 can be singulated by being cut along score lines 74 and 76 at the common bus bars 56 and 64 to produce three separate multilayer interdigitated electrode films from the three sections. more or fewer sections can be used in order to make a desired number of non-singulated electrode films in a single process, for example. in the exemplary films shown in figs. 1-6 , the electrode layers are interdigitated on a one-to-one basis, meaning the adjacent electrode layers alternate between the first and second bus bars. other types of interdigitation are possible, for example every two electrode layers alternating between the first and second bus bars. the type of interdigitation can be based upon, for example, a desired performance or application of the electrode film. the amount of overlap between the interdigitated electrode layers among the first and second bus bars can also be varied to increase or decrease the amount of overlap to affect the performance of the electrode film, for example. the bus bars are integrated with the electrode layers, meaning the bus bars and associated electrode layers are a continuous material, set of materials, or blend of materials with additives. this feature means the bus bars and electrodes can be formed in a single processing method, for example, or with fewer processing steps compared with applying the bus bars to the electrode layers after formation of the electrode film. the electrode layers are implemented with a material having a sufficient electrical conductivity for the film to function as an electrode film. 
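the one-to-one and grouped interdigitation patterns described above can be sketched as a simple layer-sequence generator; the layer names and the `group` parameter are illustrative assumptions used only to show the alternation between the two bus bars with insulating layers in between.

```python
def layer_sequence(n_electrodes, group=1):
    """Generate a multilayer stack: electrode layers alternate between bus
    bars A and B in runs of `group` (group=1 is one-to-one interdigitation,
    group=2 alternates every two layers), with an insulating layer between
    each pair of adjacent electrode layers."""
    layers = []
    for i in range(n_electrodes):
        bus = "A" if (i // group) % 2 == 0 else "B"
        if layers:
            layers.append("insulator")
        layers.append(f"electrode-{bus}")
    return layers

# one-to-one interdigitation: A, B, A, B, ...
print(layer_sequence(4))
```

varying `group` corresponds to the text's observation that the type of interdigitation can be chosen for a desired performance or application.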
the insulating layers are implemented with a material electrically isolating the interdigitated electrode layers in order for the electrode film to operate as desired or intended. for example, the insulating layers can be used to prevent electrical short circuits between the interdigitated electrodes. the insulating layers can be implemented with a single continuous layer of material, for example, electrically isolating the interdigitated electrode layers. alternatively, the insulating layers can be multiple layers of the same or different materials joined together to electrically isolate the interdigitated electrodes. fabrication of the films fabrication of the interdigitated electrode film constructions can be accomplished with multilayer extrusion, multilane extrusion, or a combination thereof. therefore, the exemplary materials to make the films are melt processable, in many cases thermoplastics, and in some cases thermoplastic elastomers. figs. 7 and 8 are front and exploded perspective views, respectively, of a feedblock 80 for making an interdigitated electrode film with bus bars. feedblock 80 has several separate blocks that collectively co-extrude the materials to make an interdigitated electrode film. block 82 extrudes a bus bar, and blocks 84 extrude electrode layers integrated with the bus bar. blocks 82 and 84 have a common opening 85 to extrude the material for the bus bar and electrode layers. block 86 extrudes a bus bar, and blocks 88 extrude electrode layers integrated with the bus bar. blocks 86 and 88 have a common opening 89 to extrude the material for the bus bar and electrode layers. block 90 extrudes insulating layers between and electrically isolating the interdigitated electrode layers and has an opening 91 to extrude material for the insulating layers. block 92 extrudes an optional skin layer and has an opening 93 to extrude the material for the skin layer. 
block 94 extrudes another optional skin layer and has an opening 95 to extrude the material for the other skin layer. the materials from these blocks can be co-extruded simultaneously during at least a portion of the process to make the electrode film as an integrated article. as illustrated in fig. 8 , each of the blocks has a port to receive the materials for extrusion. blocks 82 and 84 have a port 83 . blocks 86 and 88 have a port 87 . block 90 has a port 96 . blocks 92 and 94 have ports 97 and 98 , respectively. only one port is shown for each block for illustrative purposes only. the blocks can have multiple ports fed by a network of pipes behind the feedblock providing the materials for extrusion. the number and location of the ports can be selected to provide, for example, a uniform or desired extrusion of material from each of the blocks. blocks 82 , 86 , and 90 can be adjusted in the z-direction, using the same or similar construction, to provide for more or fewer electrode and insulating layers. also, blocks 82 , 86 , and 90 can be adjusted in the y-direction to provide for more or less overlap between electrode layers or to adjust the width of the electrode film. in use, these blocks are typically held together in a frame to help control layer formation during the co-extrusion of material from the blocks. one or more sources of material provide the materials for the electrode layers, insulating layers, and optional skin layers to the ports of the blocks. the material is provided under process conditions providing for co-extrusion of the materials to form the desired interdigitated electrode film with integrated bus bars. in particular, feedblock 80 can co-extrude a multilayer interdigitated electrode film with optional skin layers to make a film having the exemplary construction shown in figs. 1 and 2 . feedblock 80 can also be used to co-extrude a multilane interdigitated electrode film having the exemplary construction shown in figs. 
4 and 5 , if blocks 92 and 94 for the optional skin layers are removed. the process conditions for co-extrusion can depend upon the materials used for the conductive and insulating layers. generally, extrusion conditions are chosen to adequately feed, melt, mix and pump the material streams in a continuous and stable manner. final melt stream temperatures are chosen within a range which avoids freezing, crystallization or unduly high pressure drops at the low end of the temperature range and which avoids degradation at the high end of the temperature range. figs. 9 and 10 are top and perspective views, respectively, of a block diagram of a system 100 for making multiple unsingulated interdigitated electrode films with integrated bus bars. system 100 includes several feedblocks 102 , 104 , and 106 used to co-extrude melt streams forming interdigitated electrode films 110 , 112 , and 114 , respectively. a material supply 108 provides the materials to feedblocks 102 , 104 , and 106 required to co-extrude the films, and a die 117 combines the individual melt streams (electrode films, 110 , 112 , and 114 ). the co-extruded films 110 , 112 , and 114 from die 117 collectively form an unsingulated film 116 , which can be wound on a take up roll 118 . feedblocks 102 , 104 , and 106 may correspond with feedblock 80 and be joined together in the z-direction at the blocks used to make the bus bars. by combining the feedblocks at those blocks, the feedblocks can collectively co-extrude multiple multilayer interdigitated electrode films 116 having common integrated vertical bus bars at the locations where the feedblocks are joined together. an example of film 116 is shown in the exemplary film construction of fig. 6 , which would result from joining together three of feedblocks 80 . more or fewer feedblocks can be joined together to co-extrude a desired number of unsingulated interdigitated electrode films. 
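the singulation step described above — cutting film 46 along score lines at the bus bars shared by adjacent sections — can be modeled minimally as follows; the data layout is an illustrative assumption, using the reference numerals from fig. 6.

```python
# Model of film 46: three co-extruded sections whose interior bus bars
# (56 and 64) are shared between neighbouring sections.
sections = [
    {"left_bus": "54", "right_bus": "56"},
    {"left_bus": "56", "right_bus": "64"},
    {"left_bus": "64", "right_bus": "72"},
]

def shared_bus_bars(sections):
    """Return the interior bus bars where adjacent sections join - these
    are the locations of the score lines along which the unsingulated
    film is cut into separate electrode films."""
    return [a["right_bus"] for a, b in zip(sections, sections[1:])
            if a["right_bus"] == b["left_bus"]]

print(shared_bus_bars(sections))  # the two common bus bars, cf. score lines 74 and 76
```

joining more feedblocks simply appends more sections to the list, matching the text's note that more or fewer sections can be co-extruded in a single process.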
the optional skin layers can be used to protect film 116 as it is wound on take up roll 118 , for example. also, system 100 can optionally include one or more multipliers, which separate a melt stream and recombine it by stacking the separated portions on one another. an example of a multiplier is disclosed in u.s. pat. no. 6,827,886, which is incorporated herein by reference as if fully set forth. materials for fabrication layer formulations for the interdigitated electrode films may comprise a thermoplastic host polymer that provides common mechanical, physical, and chemical properties to the layers. for example, a single thermoplastic host can be mixed, blended, or otherwise combined with conductive materials (e.g., conductive polymers, conductive nanomaterials, metallic nanomaterials including silver and copper, particles, flakes, wires or whiskers; metal oxide particles, flakes or nanoparticles, nanorods, etc.; carbon nanoparticles, dispersible graphenes or single- or multi-walled carbon nanotubes) to provide a thermoplastic conductor, and this same or another thermoplastic host can be mixed, blended, or otherwise combined with high permittivity nanoparticles (e.g., barium titanate) to provide a superior dielectric material. in another example, a thermoplastic elastomer (e.g. silicone polyoxamide or other tpe) may be combined, mixed or blended with conductive additives to give a compliant conductive extrudable material; and likewise may be mixed with high permittivity additives to give a compliant dielectric material. materials that are useful for film making via extrusion and that could also host conductive or dielectric additives include abs, acrylics, cellulosics, coc, eva, evoh, polyamides, polyesters, polyurethanes, pp, pe, pc, peek, pei, ps, pvc, fluoropolymers (ptfe), polysulfone, san. at least one of the polymeric materials can be elastomeric. 
thermoplastic materials that have elastomeric properties are typically referred to as thermoplastic elastomers (tpes). thermoplastic elastomers are generally defined as materials that exhibit high resilience and low creep as though they were covalently crosslinked at ambient temperatures, yet process similarly to traditional thermoplastics and flow when heated above their softening point. tpes typically have a tg below room temperature, and often below 0° c.; whereas traditional thermoplastics typically have a tg above room temperature, and often near 100° c. thermoplastic elastomeric materials useful in the conductive electrode layers, the nonconductive insulating layers, or both as a first polymeric material or one of a mixture or blend of polymeric materials include, for example, linear, radial, star, and tapered block copolymers such as those described below. examples of such a polymeric material include silicone elastomers, acrylic elastomers, polyurethanes, polybutadienes, thermoplastic elastomers, polybutadiene-acrylonitrile copolymers, materials such as styrene ethylene butadiene styrene sold under the kraton trade name, and combinations thereof. at least one of the polymeric materials can be a thermoplastic. examples of a thermoplastic polymeric material include pressure sensitive adhesives, fluoropolymers and polymers comprising silicone and acrylic moieties, polyesters, pens, pets, polypropylene and polyethylene, and the like. examples of fluoropolymers include homopolymers such as polyvinylidene difluoride (pvdf), copolymers such as polyvinylidene fluoride-trifluoroethylene p(vdf-trfe), and the like. materials for the electrode layers fall into three categories: inherently conductive polymers or mixtures thereof, dispersions of compatibilized conductors (e.g., dispersed graphenes) in a polymer, and conductive nanoparticle filled polymers. 
examples of intrinsically conductive polymers include poly(3,4-ethylenedioxy thiophene), polyaniline, polypyrrole, polythiophene, polyacetylene, and copolymers and physical mixtures thereof. in some cases, these conducting polymers can be melt processed neat, and in other cases they must be blended with traditional thermoplastic or thermoplastic elastomer materials to provide an extrudable composition. the polymeric materials and blends can be made more conductive with optional particles or fillers. for example, a thermoplastic host can be doped with conductive materials (e.g., dispersed graphene, exfoliated graphite, carbon nanofoam, or single walled carbon nanotubes) to provide a thermoplastic conductor. mixtures or blends of polymeric materials can be utilized to form the nonconductive insulating layers. additives to increase the dielectric constant of the insulating layers may be added or compounded with the polymeric material of the nonconductive layers. example additives include batio 3 , lead zirconate titanate (pzt), pt (lead titanate) and pt composites, and combinations thereof. other examples include zirconia, exfoliated clays, and the like. solid polymer electrolytes (spes) are mixtures that may include ionic polymers, ionic liquids, salts, polar polymers, or non-polar polymers. one or more of the polymers in the spe of embodiments of the present invention can be a thermoplastic. spes alone or in combination with conductive additives can increase the conductivity of the resulting composite through the combination of ionic host conductivity and electronic additive conductivity. materials for fabrication are also disclosed in u.s. pat. no. 8,067,094, which is incorporated herein by reference as if fully set forth.
161-938-311-926-069
US
[ "US" ]
H04M15/00
2012-08-31T00:00:00
2012
[ "H04" ]
system and method for off-net planned cost derivation, analysis, and data consolidation
implementations of the present disclosure involve a system and/or method for correlating the cost of ordered telecommunications services. the system receives a customer order that includes various services. the services are matched to available solutions for the services. each solution includes one or more product instances that are used to implement the solution. each product instance may be matched to a service component. each service component may be associated with an on-net service or an off-net service. service components associated with on-net services may be assigned a cost. service components associated with off-net services may be correlated to an off-net service and an estimated cost.
1. a system for correlating costs of telecommunications services comprising: a computing device including a processor coupled to a system memory, the system memory storing instructions for execution on the processor, the instructions configured to cause the processor to: receive a service; match the service to a solution, wherein the solution comprises one or more product instances; correlate each product instance with a service component comprising an off-net component in a correlation database or an on-net component in the correlation database; determine a planned off-net cost for each off-net component according to an external carrier circuit identifier associated with the off-net component; and aggregate the planned off-net cost for each off-net component and a cost for each on-net component into a planned cost. 2. the system of claim 1 , wherein: the service comprises a telecommunications service; the one or more product instances, when combined, are configured to provide the telecommunications service; and the service component operates to provide a second service required by a product instance. 3. the system of claim 1 , wherein the instructions are further configured to cause the processor to: generate a bill for an off-net service; compare the bill for the off-net service to the planned off-net cost; and generate a communication conveying a billing dispute when a difference between the bill and the planned off-net cost exceeds a threshold. 4. the system of claim 3 , wherein the instructions are further configured to cause the processor to generate a message conveying a dispute resolution that offers to end the billing dispute in exchange for a compromise billing amount that is less than the bill for the off-net service, but greater than the planned off-net cost. 5. 
the system of claim 1 , wherein the instructions are further configured to cause the processor to update the planned cost according to a changed product instance provided by at least one of a workflow system, an inventory system, or a provisioning system. 6. the system of claim 1 , wherein the instructions are further configured to cause the processor to: generate at least one search criteria using a keyword in a product instance description; search the correlation database using the at least one search criteria and determine at least one match; generate a ranking for the at least one match according to a similarity of the match to the product instance description; and select a highest ranked match. 7. the system of claim 6 , wherein the instructions are further configured to cause the processor to generate a message describing the at least one search criteria used for ranking the at least one match. 8. a non-transitory computer-readable medium storing instructions for a method of correlating costs of telecommunications services wherein the instructions comprise: receiving a service; matching the service to a solution, wherein the solution comprises one or more product instances; correlating each product instance with a service component comprising an off-net component in a correlation database or an on-net component in the correlation database; determining a planned off-net cost for each off-net component according to an external carrier circuit identifier associated with the off-net component; and aggregating the planned off-net cost for each off-net component and a cost for each on-net component into a planned cost. 9. the non-transitory computer-readable medium of claim 8 , wherein: the service comprises a telecommunications service; the one or more product instances, when combined, are configured to provide the telecommunications service; and the service component operates to provide a second service required by a product instance. 10. 
the non-transitory computer-readable medium of claim 8 , wherein the instructions further comprise: generating a bill for an off-net service; comparing the bill for the off-net service to the planned off-net cost; and generating a communication conveying a billing dispute when a difference between the bill and the planned off-net cost exceeds a threshold. 11. the non-transitory computer-readable medium of claim 10 , wherein the instructions further comprise generating a message conveying a dispute resolution that offers to end the billing dispute in exchange for a compromise billing amount that is less than the bill for the off-net service, but greater than the planned off-net cost. 12. the non-transitory computer-readable medium of claim 8 , wherein the instructions further comprise updating the planned cost according to a changed product instance provided by at least one of a workflow system, an inventory system, or a provisioning system. 13. the non-transitory computer-readable medium of claim 8 , wherein the instructions further comprise: generating at least one search criteria using a keyword in a product instance description; searching the correlation database using the at least one search criteria and determining at least one match; generating a ranking for the at least one match according to a similarity of the match to the product instance description; and selecting a highest ranked match. 14. the non-transitory computer-readable medium of claim 13 , wherein the instructions further comprise generating a message describing the at least one search criteria used for ranking the at least one match. 15. 
a method of correlating costs of telecommunications services comprising: receiving a service; matching the service to a solution, wherein the solution comprises one or more product instances; correlating each product instance with a service component comprising an off-net component in a correlation database or an on-net component in the correlation database; determining a planned off-net cost for each off-net component according to an external carrier circuit identifier associated with the off-net component; and aggregating the planned off-net cost for each off-net component and a cost for each on-net component into a planned cost. 16. the method of claim 15 , wherein: the service comprises a telecommunications service; the one or more product instances, when combined, are configured to provide the telecommunications service; and the service component operates to provide a second service required by a product instance. 17. the method of claim 15 , further comprising: generating a bill for an off-net service; comparing the bill for the off-net service to the planned off-net cost; and generating a communication conveying a billing dispute when a difference between the bill and the planned off-net cost exceeds a threshold. 18. the method of claim 17 , further comprising generating a message conveying a dispute resolution that offers to end the billing dispute in exchange for a compromise billing amount that is less than the bill for the off-net service, but greater than the planned off-net cost. 19. the method of claim 15 , further comprising updating the planned cost according to a changed product instance provided by at least one of a workflow system, an inventory system, or a provisioning system. 20. 
the method of claim 15 , further comprising: generating at least one search criteria using a keyword in a product instance description; searching the correlation database using the at least one search criteria and determining at least one match; generating a ranking for the at least one match according to a similarity of the match to the product instance description; and selecting a highest ranked match. 21. the method of claim 20 , further comprising generating a message describing the at least one search criteria used for ranking the at least one match.
related applications this application claims priority under 35 u.s.c. §119(e) to provisional patent application no. 61/695,826 titled “off-net planned cost derivation, analysis, and data consolidation,” filed on aug. 31, 2012, and provisional patent application no. 61/709,658 titled “off-net planned cost derivation, analysis, and data consolidation,” filed on oct. 4, 2012, which are both hereby incorporated by reference herein. field of the disclosure aspects of the present disclosure involve deriving a cost estimate related to providing services that include on-net and off-net telecommunications services. a customer's order for services is correlated to one or more service components, each with an expected cost. the estimated cost may be updated as the services are provisioned, and any potential billing discrepancies may be identified before actual billing occurs. background in the telecommunications industry, sales agents are tasked with quickly providing accurate cost estimates for services offered by other telecommunications providers. as technology has evolved, telecommunications services have grown from basic telephone services to advanced telephone services (including digital telephone services, caller id, call waiting, voicemail, etc.), internet connectivity, high-speed internet connectivity, dedicated network connections between a company's various locations, voice over ip (voip) services, video streaming and video conferencing services, among others. to provide the services, such providers must maintain and deploy a vast amount of infrastructure. such infrastructure includes switches, routers, repeaters, servers, firewalls, and physical connections between components that make up a network. for example, to provide internet service to an office in a building, the office may need to be connected to a building router that is in turn connected to a network cable running under a street in front of the building. 
the cable running under the street may in turn be connected to the greater internet at a switch located near the city center. accurately estimating the cost for providing services is often challenging because, in many instances, a given telecommunications provider must use other providers' infrastructure. using the above office example, the telecommunications provider may own the infrastructure that connects to the greater internet, but may not own the infrastructure in the building or the cable running under the street. the costs associated with using other telecommunications providers' infrastructure are referred to as off-network or “off-net” costs, while the costs associated with using the provider's infrastructure are referred to as on-network or “on-net” costs. connecting the office to the internet therefore may include off-network costs for using the building's infrastructure and the local infrastructure running under the street, in addition to the on-net costs associated with connecting to the provider's infrastructure. in order to accurately estimate the cost of services, a sales agent must account for both the off-net costs and the on-net costs. a telecommunications company therefore acts as both a provider of services and an intermediary between the customers and the off-net providers. accurately estimating the costs associated with off-net services is important, because incorrect estimates can lead to costly billing disputes with customers. in practice, the sheer volume of invoices often makes it difficult to recognize billing discrepancies. the inevitable discrepancies often lead to lengthy billing disputes that result in the non-payment of balances or partial payments. thus, it is advantageous to recognize any discrepancies between the customer's bill and the estimated cost as early as possible, so that the dispute process can be handled quickly. 
therefore, there is a need for logic that performs end-to-end correlation of a quoted cost and accurately identifies planned off-net access costs, in order to provide a more accurate estimate of the cost of services sold, support audit processes that improve gross margins, optimize billing dispute resolution for all parties, and maximize capacity utilization. at present, no industry standard tools exist to derive off-net planned cost before billing occurs. it is with these issues and problems in mind that various aspects of the present disclosure were developed. summary according to one aspect, a system and method for correlating costs of telecommunications services is provided. the system receives one or more services from a customer order and matches the services to available solutions for providing the services. each solution is made up of product instances that are used to provide that solution. product instances may correspond to service components that designate whether the service is an on-net or off-net service. the cost of each off-net service component may be correlated according to an external carrier circuit identifier associated with the off-net component. the expected cost of the off-net services may be summed to estimate an off-net planned cost. brief description of the drawings example embodiments are illustrated in referenced figures of the drawings. it is intended that the embodiments and figures disclosed herein are to be considered illustrative rather than limiting. fig. 1 depicts a block diagram of an order process from order entry to billing; fig. 2 depicts a method of performing off-net cost correlation of an ordered service; fig. 3 depicts a correlation method for a list of services; fig. 4 depicts a method for updating a cost estimate during the implementation of an order; fig. 5 depicts a general purpose computer that may be used in the implementation of the off-net cost system. 
detailed description aspects of the present disclosure involve a system and method for estimating costs associated with providing a network service or set of network services that include on-net services, off-net services, or both. the services ordered may be analyzed and correlated to past service costs to provide an estimate of the on-net and off-net costs. the estimate may be revised throughout the process of fulfilling the order. any relevant data relating to the installation of a telecommunications system may be consolidated and analyzed to revise cost estimates. the cost correlation system identifies services that will result in an off-net cost by matching orders to solutions made up of one or more product instances. each product instance may be matched to an on-net or off-net cost. the cost correlation system estimates the cost of the off-net service, and consolidates the off-net costs with on-net costs to provide a complete estimate for providing the desired services. the system may update the cost estimate as a workflow for constructing the system is established and as hardware needed to provide the service is provisioned and installed. the process of providing telecommunications services may be broken into a series of steps, starting at the creation and entry of an order and concluding with the delivery of the ordered services and billing. a cost correlation system may be configured to interface with one or more existing systems used for placing and fulfilling orders. for example, referring to fig. 1 , an example order and fulfillment system 100 and cost correlation system 160 are depicted. the order and fulfillment system 100 divides the process of ordering and providing services into order entry, workflow design, inventory, provisioning, and billing. each of these steps in the order process may be implemented using an individual system. 
for example, the depicted order and fulfillment system 100 includes an order entry system 110 , a workflow system 120 , an inventory system 130 , a provisioning system 140 , and a billing system 150 . the cost correlation system 160 may be configured to receive information regarding an order from each of the individual systems as the order progresses from order entry to actual implementation. at each stage in the order process, the cost correlation system 160 may receive information about the order and correlate the information to on-net costs and past costs associated with the use of the off-net services. the cost correlation system 160 then provides feedback to the order and fulfillment system 100 to improve the quote provided to the customer, and to identify whether providing the services will deviate from the quoted cost, allowing the system to identify a potential billing discrepancy before the customer receives a bill. the order system 110 provides a customer with quotes for services and accepts customer orders. customer requirements may be provided to the order entry system 110 and the order entry system matches the customer requirements to services offered by the telecommunications provider. for example, the order system 110 may include a commercially available order entry and quoting system such as siebel, pipeline, or any other order system. the order system 110 may be configured to facilitate the ordering of products and/or services. for example, the order system 110 may be configured to show services that are available in geographic locations, allow for quotes for products/services to be generated, and allow for a customer order to be inputted and an invoice generated. for example, a customer may wish to have a dedicated network connection between two of the company's locations. the order system 110 may also have access to information about network infrastructure that is necessary for generating quotes. 
thus, in order to provide a quote for the dedicated network connection, the order system 110 may retrieve information related to the network infrastructure that could be used to interconnect company locations. quotes for services may be generated after receiving a request for products and/or services offered by the telecommunications provider. for example, a customer may wish to add video conferencing services between two of the customer's locations. the order system receives the locations that the customer wishes to connect and generates a quote. video conferencing services may, for example, require one-time costs of connecting each location to a network and installing hardware at each location, as well as monthly recurring service costs. the cost correlation system 160 receives a service or services and correlates the services to on-net services and off-net services. the correlation is used to estimate the price of each off-net service by using past order data stored in the correlation database 170 . the cost correlation system 160 then provides the expected cost determined by the correlation of the services ordered to on-net and off-net costs to the order system 110 . the order system 110 may then use the estimated off-net costs along with on-net costs to generate a quote. once an order has been received, the workflow system 120 generates an order workflow describing the steps defining how the order will be fulfilled. the order workflow includes each aspect of how the ordered services will be provided to the customer. returning to the example of interconnecting two geographically remote facilities, connecting a new office building without any existing network infrastructure to existing infrastructure involves a workflow that includes the physical installation of any necessary cabling, switches, or other infrastructure and each aspect of the installation. 
for example, the workflow may include laying fiber optic cables and adding a switch that connects to the fiber optic cable at the new building. the workflow may also include a plan for the allocation of existing network infrastructure. for example, once the new network infrastructure has been connected to existing network infrastructure, the workflow may call for an allocation of bandwidth on the existing network infrastructure. the generated workflow may be sent to the cost correlation system 160 which utilizes the correlation database 170 to project the cost of any off-net services that will be utilized by the services indicated in the order workflow. the workflow information allows for the accuracy of the estimate of the off-net costs to be improved by including updated information on how the implementation of the ordered services will be performed. after the order workflow has been generated, the inventory system 130 may be used to determine whether there is sufficient inventory for fulfilling the order. the inventory system 130 may be configured to keep records of the various network elements forming the network infrastructure of the telecommunications provider, as well as the information about the operational status of the network element. for example, the inventory system 130 may keep a record of where each network element is located, whether the network element is operational, the operating capacity of the network element, the level of operating capacity at which the network element is operating, and any other information about each network element. the inventory system also may keep records regarding network elements that are not currently in use or deployed. for example, the inventory system 130 may also include network elements, such as unused cables, routers, switches, firewalls, audio/video equipment, or any other network elements. 
the inventory system 130 may receive an inquiry from the workflow system 120 regarding the availability of various network elements. in some cases, the inventory stipulated by the order workflow may not be available. for example, continuing with the example above, there may be no additional network capacity on the existing network infrastructure. thus, a new workflow would need to be developed by the workflow system 120 to accommodate not using the currently specified existing network infrastructure. the updated workflow may then be sent to the cost correlation system 160 , the off-net costs may be re-estimated, and the total cost may be updated. once the inventory system 130 determines the appropriate inventory is available, the provisioning system 140 is tasked with the implementation of the order workflow. again, using the above example, this may include scheduling the installation of the new infrastructure and the allocation of existing network infrastructure. similar to the inventory system 130 , the provisioning system 140 may determine that a given workflow is not viable. for example, the provisioning system 140 may determine that provisioning the bandwidth is impossible. again, the order workflow may be modified to provide the ordered services without using the previously selected network infrastructure and the modified workflow may be sent to the cost correlation system 160 so that the estimated cost may be updated. once all of the services have been provisioned and are active, the billing system 150 bills the customer for the actual cost of the services. before sending the customer a bill for the services, the billing system 150 may compare the final bill with the quoted and/or updated estimate(s). if the difference between an off-net cost on the final bill and the estimated off-net cost exceeds a threshold, the billing system 150 may be configured to automatically generate a billing dispute with the off-net service provider. referring now to fig. 
2 , a method of estimating service costs including on-net and off-net services is depicted. a customer order or a requested service quote may be provided to the cost correlation system 160 by the order entry system 110 (operation 200 ). the customer order or quote request may include a list of one or more services and may be automatically sent to the cost correlation system 160 when a customer requests a quote, when the order is placed, or may be manually entered by a user. after receiving the customer order or quote request, the cost correlation system 160 may match each item in the order to one or more services offered by the telecommunications company (operation 210 ). for example, a customer order may include internet connectivity, video streaming services, and voice services. each of the services may be analyzed individually and a total cost, including any projected off-net costs, may then be determined. each service may then be matched to a solution that provides the indicated service (operation 220 ). each solution may include one or more product instances that, when combined, are capable of providing the desired service. for example, the ordered internet connectivity and video streaming service may both include network-related product instances such as an allocation of bandwidth from an existing network between the two locations, new additions to the existing network infrastructure to connect one or both of the locations, and network hardware such as firewalls, switches, or any other networking hardware. the cost correlation system 160 matches each product instance to a service component that indicates whether the product instance involves an on-net service or an off-net service (operation 230 ). the cost correlation system 160 then divides the costs into on-net and off-net costs (operation 240 ). 
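the matching flow just described (operations 210 - 240 ) can be sketched in code. this is a minimal illustration, not the patented implementation: the catalog contents, component names, and the SOLUTIONS and SERVICE_COMPONENTS tables below are hypothetical stand-ins for the order system and the correlation database.

```python
# toy catalogs standing in for the order system and correlation database;
# all entries are illustrative assumptions, not data from the disclosure
SOLUTIONS = {
    "internet connectivity": ["bandwidth allocation", "building fiber tail"],
    "video streaming": ["bandwidth allocation", "edge switch"],
}
SERVICE_COMPONENTS = {
    "bandwidth allocation": {"type": "on-net", "cost": 100.0},
    "building fiber tail": {"type": "off-net", "ecckt": "ABC-1234", "cost": 250.0},
    "edge switch": {"type": "on-net", "cost": 40.0},
}

def correlate_order(ordered_services):
    """match each service to a solution's product instances and split the
    resulting service components into on-net and off-net lists."""
    on_net, off_net = [], []
    for service in ordered_services:
        for product_instance in SOLUTIONS[service]:
            component = SERVICE_COMPONENTS[product_instance]
            (off_net if component["type"] == "off-net" else on_net).append(component)
    return on_net, off_net

on_net, off_net = correlate_order(["internet connectivity", "video streaming"])
```

a real system would pull the two tables from the order entry system and the correlation database 170 rather than hard-coding them.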
on-net costs are determined by identifying the current rate that the telecommunications provider is charging for the service component (operation 250 ). off-net costs are estimated by correlating the service components with previously billed amounts for the service component (operation 255 ). each off-net service component may be associated with an external carrier circuit identifier (ecckt) identifying the off-net service. an ecckt is used by off-net vendors as a billing identification code for the services that they offer. the ecckt may include information that is necessary to order and implement the off-net service component, including the off-net vendor name, the cost of the service component, and any other information necessary for the implementation of the off-net service component. the cost of each service component may be aggregated into a planned cost, reported to the order entry system 110 , and stored in a database for future comparison (operation 260 ). the customer may or may not decide to order the quoted services, and the quote process may be repeated several times until a customer decides on the exact services that the customer wishes to purchase. referring to fig. 3 , a method of correlating an off-net service to a cost is depicted. as described above with reference to fig. 2 , estimating the cost of an order may be accomplished by matching orders to services, breaking down the services into one or more product instances, and matching the product instances to service components that have an associated cost. the various service component costs may then be tabulated to form a cost estimate for the order. each service component offered by the telecommunications company may be included in the correlation database 170 . the correlation database 170 may be implemented according to any database standard. the correlation database 170 includes an element for each service component. 
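operations 250 - 260 can be sketched as a small costing step. this is an illustrative sketch only: the rate table, the ecckt billing history, and the use of a simple mean over previously billed amounts are assumptions, since the disclosure does not fix a particular estimation formula.

```python
# hypothetical on-net rate card and ecckt billing history (illustrative values)
ON_NET_RATES = {"bandwidth allocation": 100.0, "edge switch": 40.0}
ECCKT_HISTORY = {"ABC-1234": [240.0, 260.0, 250.0]}  # past billed amounts

def planned_cost(on_net_names, off_net_ecckts):
    """price on-net components at the current rate (operation 250), estimate
    off-net components from billing history keyed by ecckt (operation 255),
    and aggregate everything into a planned cost (operation 260)."""
    on_total = sum(ON_NET_RATES[name] for name in on_net_names)
    off_total = sum(
        sum(ECCKT_HISTORY[e]) / len(ECCKT_HISTORY[e]) for e in off_net_ecckts
    )
    return on_total + off_total

cost = planned_cost(["bandwidth allocation", "edge switch"], ["ABC-1234"])
```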
each element may include various fields that describe the functions of the service component, the location, the performance, or any other data describing the features and functionality of the service component. for example, each element may include an identifier of the service component, such as an ecckt for off-net service components, a cost associated with the service component, historical costs associated with the service component, a description of the service component, key words associated with the service component, and any other relevant information. the correlation database 170 may operate on either the same computing system as the cost correlation system 160 or on a computing system in communication with the cost correlation system 160 . the cost correlation system 160 may receive a service component or a list of service components that are for on-net or off-net services (operation 300 ). each provided service component then may be correlated to an on-net or off-net service component by searching the correlation database 170 . the search may include a keyword search of the service component descriptions (operation 310 ). the information contained in the various service component fields may be used as search criteria. for example, a service component may have a connection speed field designating a minimum bandwidth of 1 gb. a corresponding search of the correlation database 170 may include searching for off-net service components with a connection speed field with a value of at least 1 gb. in addition to using the fields describing the service component, the cost correlation system 160 may also identify keywords from a service component's description to conduct a keyword search of the correlation database 170 . 
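the field-based search of operation 310 can be illustrated with a toy correlation database. the elements, the field name speed_gb, and the ecckt values below are hypothetical; the sketch only shows filtering database elements by a minimum connection speed field, as in the 1 gb example above.

```python
# hypothetical correlation database elements, each a dict of descriptive fields
CORRELATION_DB = [
    {"ecckt": "XYZ-1", "speed_gb": 1.0, "description": "point-to-point video circuit"},
    {"ecckt": "XYZ-2", "speed_gb": 10.0, "description": "wavelength service"},
    {"ecckt": "XYZ-3", "speed_gb": 0.1, "description": "dsl tail"},
]

def search_by_fields(db, min_speed_gb):
    """return database elements whose connection speed field satisfies the
    service component's minimum bandwidth requirement."""
    return [element for element in db if element["speed_gb"] >= min_speed_gb]

matches = search_by_fields(CORRELATION_DB, 1.0)
```

other field criteria (location, performance, and so on) would be applied the same way, narrowing the candidate list before any keyword matching.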
for example, the service component may include a general description field listing a “point-to-point high definition video from location a to location b.” in this example, the cost correlation system 160 may identify the keywords to include “point-to-point,” “high definition video,” “video,” and “location a to location b.” the cost correlation system 160 may initially search the correlation database 170 for on-net services that correspond to the service component, and, if no on-net service corresponding to the service component is identified, then search for matching off-net service components. each search of the correlation database 170 may return a list of identified off-net service component matches along with a corresponding explanation of why each off-net service was matched (operation 320 ). for example, the explanation may include a list of keywords that were found in a matched off-net service component. each of the off-net service components that are matched may also be scored according to how well they correspond with the service component being searched for. the best match may then be selected, and if it is an off-net service component, the corresponding ecckt is determined (operation 330 ). a cost estimate for the off-net service component may then be determined using pricing information associated with the ecckt, including historical cost data such as historical estimates and actual billed costs. as described above, once an order has been entered, the order and fulfillment system 100 works towards the fulfillment of the order by creating a workflow outlining the steps to implement the order, assessing the current available inventory, and provisioning services for the implementation of the order. the solution used for the order may be modified based on various conditions, such as inventory issues and implementation issues. 
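the keyword matching, scoring, and best-match selection described above (operations 320-330) can be sketched as below. the scoring rule (count of matched keywords) and all sample data are assumptions for illustration; the patent does not specify a particular scoring formula.

```python
# hypothetical sketch: score each database element by how many of the
# identified keywords appear in its description, and keep the matched
# keywords as the explanation of why the element was matched.
def keyword_score(description, keywords):
    found = [k for k in keywords if k in description.lower()]
    return len(found), found

def best_match(elements, keywords):
    """return (best element, matched keywords), or None if nothing matches."""
    scored = []
    for e in elements:
        score, found = keyword_score(e["desc"], keywords)
        if score:
            scored.append((score, found, e))
    if not scored:
        return None
    score, found, elem = max(scored, key=lambda t: t[0])
    return elem, found

elements = [
    {"id": "ECCKT-X", "desc": "Point-to-point high definition video circuit"},
    {"id": "ECCKT-Y", "desc": "Best-effort internet access"},
]
elem, why = best_match(elements, ["point-to-point", "high definition video", "video"])
print(elem["id"], why)
```

in this sketch the returned keyword list plays the role of the match explanation; a fuller implementation would first restrict the search to on-net elements and fall back to off-net elements only when no on-net match is found.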
the order itself is not modified due to the conditions, but the solution used to fulfill the order may be modified to provide the same service using available network infrastructure. the workflow system 120 may be configured to determine the sequence of steps required to implement the order. for example, an ordered service may require adding network infrastructure, such as a direct fiber optic line between two locations. adding the fiber optic line may include adding a fiber optic cable between the locations and connecting the locations to the fiber optic cable using networking equipment. the workflow system 120 determines and schedules the construction of the connection. changes may be made to the workflow so that the ordered services may be properly implemented. for example, the original order may have made incorrect assumptions about network capacity or the ability to add new network infrastructure. thus, the estimated cost of an order may be adjusted as deviations from the original quote are made. the workflow constructed by the workflow system 120 may therefore be provided to (or retrieved by) the cost correlation system 160 and the expected cost may be re-correlated to reflect the updated solution to the order. referring to fig. 4 , a method of modifying an estimated service cost is depicted. instead of an order being received by the cost correlation system 160 , as in the case of fig. 2 , a modified solution may be provided to the cost correlation system 160 (operation 400 ). the modified solution is directed towards the same services ordered by the customer, but involves a modification created by the workflow system 120 , the inventory system 130 , or the provisioning system 140 . the product instances associated with the modified solution are matched to service components (operation 410 ) and divided into on-net and off-net services (operation 420 ). 
the costs of on-net services are determined (operation 430 ) and off-net services are estimated by correlating the off-net services (operation 435 ). each off-net service component may be correlated to determine an expected cost. the correlation logic may be modified according to the current stage of the order in the order and fulfillment system 100 . for example, the cost correlation system 160 may divide product instances into workflow-specific product instances and into hybrid product instances that are directed towards products found in both the workflow system 120 and the order system 110 . for example, a workflow-specific product instance may include obtaining a permit to add network infrastructure, while a hybrid product instance may include adding a fiber optic cable. the correlation database 170 may then be searched according to the type of product instance. similarly, updates to the implementation made by the inventory system 130 and the provisioning system 140 may be divided into specific product instances and hybrid product instances. the correlation may be conducted and the costs aggregated to determine an updated estimate for the cost (operation 440 ). thus, the projected cost of providing the services may be updated and tracked at each stage in the order process. the updated total cost may be compared to the quoted cost of the services and the results may be stored for later evaluation (operation 450 ). the difference between each estimated cost and the quoted cost may be compared to a threshold. when the estimated cost deviates too far from the quoted cost, a user may be alerted or the order and fulfillment system 100 may perform a mitigating action to bring the cost estimate within the threshold difference. after all of the services have been provisioned, the billing system 150 determines a final bill using on-net costs and the costs charged by off-net providers. 
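the threshold comparison between the updated estimate and the quoted cost (operation 450) might be sketched as below. the percentage-based threshold is an assumption for illustration; the patent does not specify how the threshold is expressed.

```python
# hypothetical sketch: flag an updated cost estimate that drifts too
# far from the originally quoted cost, so a user can be alerted or a
# mitigating action taken.
def check_deviation(quoted, estimated, threshold_pct=10.0):
    """return (deviation_pct, alert); alert is True when the estimate
    deviates from the quote by more than the threshold percentage."""
    deviation_pct = abs(estimated - quoted) / quoted * 100.0
    return deviation_pct, deviation_pct > threshold_pct

dev, alert = check_deviation(quoted=2000.0, estimated=2300.0)
print(round(dev, 1), alert)  # 15.0 True
```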
the actual amounts billed by the off-net providers may be sent to the cost correlation system 160 and used to update the correlation database 170 as historical cost data for each ecckt. the correlation database 170 may also be updated to include the planned cost for each stage of the order process and the final billed cost. if a bill for an off-net service exceeds the estimated cost by more than a threshold, then a letter or electronic communication conveying a billing dispute may be generated by the billing system 150 and sent to the off-net provider. this billing dispute may be automatically generated and sent or may require user intervention before sending. in addition to generating a billing dispute, the billing system 150 may also automatically generate a dispute resolution that offers to end the dispute in exchange for a compromise billing amount. the compromise billing amount may include any amount between the planned cost and the billed cost. the compromise billing amount may be selected according to a dispute policy for one or more off-net providers or service components. for example, the compromise billing amount for an off-net provider may be halfway between the estimated cost and the billed cost. thus, a billing discrepancy between the off-net provider and the customer may be automatically mitigated immediately following the receipt of a bill for off-net services without customer involvement. fig. 5 illustrates an example general purpose computer 500 that may be useful in implementing the described technology. the example hardware and operating environment of fig. 5 for implementing the described technology includes a general purpose computing device in the form of a personal computer, server, or other type of computing device. in the implementation of fig. 
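the compromise-billing logic described above can be sketched as follows. the function name, the absolute-dollar threshold, and the halfway default are illustrative assumptions; the text only requires the compromise to lie between the planned cost and the billed cost, with halfway given as one example policy.

```python
# hypothetical sketch: when an off-net bill exceeds the planned cost by
# more than a threshold, propose a compromise amount between the planned
# and billed costs (default: halfway, per the example dispute policy).
def compromise_amount(planned, billed, threshold=0.0, fraction=0.5):
    """return a compromise amount, or None if no dispute is warranted."""
    if billed - planned <= threshold:
        return None  # bill within tolerance; no dispute generated
    return planned + fraction * (billed - planned)

print(compromise_amount(planned=800.0, billed=1000.0, threshold=50.0))  # 900.0
```

a per-provider or per-service-component dispute policy could be modeled by varying `threshold` and `fraction` per ecckt.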
5 , for example, the general purpose computer 500 includes a processor 510 , a cache 560 , a system memory 570 , 580 , and a system bus 590 that operatively couples various system components including the cache 560 and the system memory 570 , 580 to the processor 510 . there may be only one or there may be more than one processor 510 , such that the processor of the general purpose computer 500 comprises a single central processing unit (cpu), or a plurality of processing units, commonly referred to as a parallel processing environment. the general purpose computer 500 may be a conventional computer, a distributed computer, or any other type of computer; the invention is not so limited. the system bus 590 may be any of several types of bus structures including a memory bus or memory controller, a peripheral bus, a switched fabric, point-to-point connections, and a local bus using any of a variety of bus architectures. the system memory may also be referred to as simply the memory, and includes read only memory (rom) 570 and random access memory (ram) 580 . a basic input/output system (bios) 572 , containing the basic routines that help to transfer information between elements within the general purpose computer 500 , such as during start-up, is stored in rom 570 . the general purpose computer 500 further includes one or more hard disk drives or flash-based drives 520 for reading from and writing to a persistent memory such as a hard disk, a flash-based drive, and an optical disk drive 530 for reading from or writing to a removable optical disk such as a cd rom, dvd, or other optical media. the hard disk drive 520 and optical disk drive 530 are connected to the system bus 590 . the drives and their associated computer-readable media provide nonvolatile storage of computer-readable instructions, data structures, program engines and other data for the general purpose computer 500 . 
it should be appreciated by those skilled in the art that any type of computer-readable media which can store data that is accessible by a computer, such as magnetic cassettes, flash memory cards, digital video disks, random access memories (rams), read only memories (roms), and the like, may be used in the example operating environment. a number of program engines may be stored on the hard disk 520 , optical disk 530 , rom 570 , or ram 580 , including an operating system 582 , a cost correlation system 584 such as the one described above, one or more application programs 586 , and program data 588 . a user may enter commands and information into the general purpose computer 500 through input devices such as a keyboard and pointing device connected to the usb or serial port 540 . these and other input devices are often connected to the processor 510 through the usb or serial port interface 540 that is coupled to the system bus 590 , but may be connected by other interfaces, such as a parallel port. a monitor or other type of display device may also be connected to the system bus 590 via an interface, such as a video adapter 560 . in addition to the monitor, computers typically include other peripheral output devices (not shown), such as speakers and printers. the general purpose computer 500 may operate in a networked environment using logical connections to one or more remote computers. these logical connections are achieved by a network interface 550 coupled to or a part of the general purpose computer 500 ; the invention is not limited to a particular type of communications device. the remote computer may be another computer, a server, a router, a network pc, a client, a peer device, and typically includes many or all of the elements described above relative to the general purpose computer 500 . the logical connections include a local-area network (lan), a wide-area network (wan), or any other network. 
such networking environments are commonplace in office networks, enterprise-wide computer networks, intranets and the internet, which are all types of networks. the network adapter 550 , which may be internal or external, is connected to the system bus 590 . in a networked environment, programs depicted relative to the general purpose computer 500 , or portions thereof, may be stored in the remote memory storage device. it is appreciated that the network connections shown are examples and other means of and communications devices for establishing a communications link between the computers may be used. the embodiments of the invention described herein are implemented as logical steps in one or more computer systems. the logical operations of the present invention are implemented (1) as a sequence of processor-implemented steps executing in one or more computer systems and (2) as interconnected machine or circuit engines within one or more computer systems. the implementation is a matter of choice, dependent on the performance requirements of the computer system implementing the invention. accordingly, the logical operations making up the embodiments of the invention described herein are referred to variously as operations, steps, objects, or engines. furthermore, it should be understood that logical operations may be performed in any order, unless explicitly claimed otherwise or a specific order is inherently necessitated by the claim language. the foregoing merely illustrates the principles of the invention. various modifications and alterations to the described embodiments will be apparent to those skilled in the art in view of the teachings herein. it will thus be appreciated that those skilled in the art will be able to devise numerous systems, arrangements and methods which, although not explicitly shown or described herein, embody the principles of the invention and are thus within the spirit and scope of the present invention. 
from the above description and drawings, it will be understood by those of ordinary skill in the art that the particular embodiments shown and described are for purposes of illustration only and are not intended to limit the scope of the present invention. references to details of particular embodiments are not intended to limit the scope of the invention.
162-137-508-935-109
JP
[ "JP", "US", "KR" ]
H01L29/786,H01L21/336,H01L21/20,H01L21/02,H01L21/36
2011-03-04T00:00:00
2011
[ "H01" ]
semiconductor device manufacturing method
problem to be solved: to provide a manufacturing method of a highly reliable semiconductor device with less threshold voltage fluctuation. solution: an insulating layer capable of releasing oxygen by heating is formed in contact with an oxide semiconductor layer, and by applying light to a gate electrode or a metal layer formed in a region overlapping the gate electrode, oxygen is added into the oxide semiconductor layer in the region overlapping the gate electrode. accordingly, the problem can be solved by reducing oxygen vacancies and interface states existing in the oxide semiconductor layer in the region overlapping the gate electrode.
1. a method for manufacturing a semiconductor device, comprising the steps of: forming a gate electrode over a substrate having an insulating surface; forming an anti-oxidation layer on and in contact with the gate electrode, the anti-oxidation layer containing at least one of molybdenum nitride, tungsten nitride, titanium nitride, tantalum nitride, and aluminum nitride; forming an insulating layer over the gate electrode and the anti-oxidation layer; forming an oxide semiconductor layer in contact with the insulating layer; and performing a light irradiation treatment on at least the gate electrode, whereby oxygen released from the insulating layer is added to the oxide semiconductor layer. 2. the method for manufacturing a semiconductor device according to claim 1 , wherein the insulating layer is formed by a sputtering method using oxygen or a mixed gas of oxygen and argon. 3. a method for manufacturing a semiconductor device, comprising the steps of: forming an oxide semiconductor layer over a substrate having an insulating surface; forming an insulating layer in contact with the oxide semiconductor layer; forming an anti-oxidation layer over the insulating layer, the anti-oxidation layer containing at least one of molybdenum nitride, tungsten nitride, titanium nitride, tantalum nitride, and aluminum nitride; forming a gate electrode over the insulating layer and on and in contact with the anti-oxidation layer; and performing a light irradiation treatment on at least the gate electrode, whereby oxygen released from the insulating layer is added to the oxide semiconductor layer. 4. the method for manufacturing a semiconductor device according to claim 3 , wherein the insulating layer is formed by a sputtering method using oxygen or a mixed gas of oxygen and argon. 5. 
a method for manufacturing a semiconductor device, comprising the steps of: forming a gate electrode over a substrate having an insulating surface; forming an anti-oxidation layer on and in contact with the gate electrode, the anti-oxidation layer containing at least one of molybdenum nitride, tungsten nitride, titanium nitride, tantalum nitride, and aluminum nitride; forming a gate insulating layer over the substrate and the anti-oxidation layer; forming an oxide semiconductor layer over the gate insulating layer; forming an insulating layer in contact with the oxide semiconductor layer to overlap with the gate electrode; forming a metal layer over the insulating layer to overlap with the insulating layer and the gate electrode; forming a source electrode and a drain electrode in electrical contact with the oxide semiconductor layer; and performing a light irradiation treatment on at least the metal layer, whereby oxygen released from the insulating layer is added to the oxide semiconductor layer. 6. the method for manufacturing a semiconductor device according to claim 5 , wherein a layer having an optical absorptance of 60% or more in a wavelength region from 400 nm to 1000 nm both inclusive, is formed as the metal layer. 7. the method for manufacturing a semiconductor device according to claim 5 , wherein the insulating layer is formed by a sputtering method using oxygen or a mixed gas of oxygen and argon. 8. 
a method for manufacturing a semiconductor device, comprising the steps of: forming an island-shaped metal layer over a substrate having an insulating surface; forming an insulating layer over the island-shaped metal layer; forming an oxide semiconductor layer in contact with the insulating layer; forming a gate insulating layer over the oxide semiconductor layer; forming an anti-oxidation layer over the gate insulating layer, the anti-oxidation layer containing at least one of molybdenum nitride, tungsten nitride, titanium nitride, tantalum nitride, and aluminum nitride; forming a gate electrode over the gate insulating layer and on and in contact with the anti-oxidation layer, so as to overlap with the island-shaped metal layer and the insulating layer; forming a source electrode and a drain electrode in electrical contact with the oxide semiconductor layer; and performing a light irradiation treatment on at least the island-shaped metal layer, whereby oxygen released from the insulating layer is added to the oxide semiconductor layer. 9. the method for manufacturing a semiconductor device according to claim 8 , wherein a layer having an optical absorptance of 60% or more in a wavelength region from 400 nm to 1000 nm both inclusive, is formed as the island-shaped metal layer. 10. the method for manufacturing a semiconductor device according to claim 8 , wherein the insulating layer is formed by a sputtering method using oxygen or a mixed gas of oxygen and argon. 11. the method for manufacturing a semiconductor device according to claim 1 , wherein the insulating layer is a single film or a stacked-layer film containing silicon oxide, aluminum oxide, hafnium oxide, hafnium silicate, hafnium aluminate, zirconium oxide, yttrium oxide, lanthanum oxide, or cerium oxide. 12. 
the method for manufacturing a semiconductor device according to claim 3 , wherein the insulating layer is a single film or a stacked-layer film containing silicon oxide, aluminum oxide, hafnium oxide, hafnium silicate, hafnium aluminate, zirconium oxide, yttrium oxide, lanthanum oxide, or cerium oxide. 13. the method for manufacturing a semiconductor device according to claim 5 , wherein the insulating layer is a single film or a stacked-layer film containing silicon oxide, aluminum oxide, hafnium oxide, hafnium silicate, hafnium aluminate, zirconium oxide, yttrium oxide, lanthanum oxide, or cerium oxide. 14. the method for manufacturing a semiconductor device according to claim 8 , wherein the insulating layer is a single film or a stacked-layer film containing silicon oxide, aluminum oxide, hafnium oxide, hafnium silicate, hafnium aluminate, zirconium oxide, yttrium oxide, lanthanum oxide, or cerium oxide. 15. the method for manufacturing a semiconductor device according to claim 1 , wherein the insulating layer is formed by a sputtering method using oxygen-excess silicon oxide, sio x with x greater than 2, as a target. 16. the method for manufacturing a semiconductor device according to claim 3 , wherein the insulating layer is formed by a sputtering method using oxygen-excess silicon oxide, sio x with x greater than 2, as a target. 17. the method for manufacturing a semiconductor device according to claim 5 , wherein the insulating layer is formed by a sputtering method using oxygen-excess silicon oxide, sio x with x greater than 2, as a target. 18. the method for manufacturing a semiconductor device according to claim 8 , wherein the insulating layer is formed by a sputtering method using oxygen-excess silicon oxide, sio x with x greater than 2, as a target.
background of the invention 1. field of the invention the present invention relates to a manufacturing method of a semiconductor device. 2. description of the related art semiconductor elements such as transistors using silicon for their semiconductor layers (hereinafter abbreviated as silicon semiconductor elements) have been used for a variety of semiconductor devices and have been essential for manufacturing semiconductor devices. to manufacture large-size semiconductor devices, a method in which a material such as glass, which is suitable for increasing in size, is used for substrates and thin-film silicon which can be deposited in a large area is used for semiconductor layers has been widely employed. in such semiconductor elements using thin-film silicon, the semiconductor layers need to be formed at temperatures less than or equal to the upper temperature limits of their substrates, and thus, amorphous silicon and polysilicon which can be formed at relatively low temperatures have been widely used. amorphous silicon has advantages of being able to be deposited in a large area and allowing semiconductor elements having uniform element characteristics to be manufactured by a simple process at relatively low cost; thus, amorphous silicon has been widely used for semiconductor devices with a large area, such as solar batteries. meanwhile, amorphous silicon has a disadvantage of low electron mobility owing to its amorphous structure, which causes scattering of electrons at grain boundaries. to make up for the disadvantage, polysilicon, which has a higher mobility realized by irradiating amorphous silicon with a laser or the like so that it is locally melted and recrystallized, or by crystallization using a catalytic element, has been widely used in semiconductor devices such as liquid crystal displays in which both large area and high carrier mobility need to be achieved. 
in addition, in recent years, oxide semiconductors that are metal oxides having semiconductor characteristics have attracted attention as novel semiconductor layer materials having high mobility, which is an advantage of polysilicon, and uniform element characteristics, which are an advantage of amorphous silicon. as semiconductor devices such as transistors using oxide semiconductors as their semiconductor layers (hereinafter abbreviated as oxide semiconductor devices), for example, as in patent document 1, a thin film transistor manufactured using tin oxide, indium oxide, zinc oxide, or the like has been proposed. reference patent document 1: japanese published patent application no. 2007-123861 summary of the invention however, as for oxide semiconductor devices, a problem of fluctuation of electrical characteristics (low reliability) is known, though there are a variety of advantages as described above. for example, the threshold voltage of the transistor is changed by a bias-temperature test (bt test). note that in this specification, the “threshold voltage” refers to a gate voltage which is needed to turn on the transistor. one cause of the fluctuation of electrical characteristics of the oxide semiconductor device is an oxygen vacancy in the oxide semiconductor layer or an interface state (also called a surface state) which is attributable to lattice mismatch in the interface between the oxide semiconductor layer and the gate insulating film. the oxygen vacancy or the interface state in the oxide semiconductor layer leads to generation of a carrier (excess electron); therefore, the threshold voltage is more likely to be changed as the number of oxygen vacancies or interface states is increased in the oxide semiconductor layer in a region which overlaps with the gate electrode. the present invention was made under the foregoing technical background. 
it is an object of one embodiment of the present invention to provide a method for manufacturing a highly reliable semiconductor device with less change in threshold voltage. in order to solve the above-described problem, in one embodiment of the present invention, an insulating film from which oxygen can be released by heating is formed in contact with an oxide semiconductor layer, and light irradiation treatment is performed on a gate electrode or a metal layer formed in a region which overlaps with the gate electrode. accordingly, the insulating layer in a region which overlaps with the gate electrode is heated and thus adds oxygen into the oxide semiconductor layer in a region which overlaps with the gate electrode. that is, one embodiment of the present invention is a method for manufacturing a semiconductor device in which a gate electrode is formed over a substrate having an insulating surface, an insulating layer from which oxygen can be released by heating is formed over the gate electrode, an oxide semiconductor layer is formed in contact with the insulating layer, and light irradiation treatment is performed on the gate electrode to add oxygen from the insulating layer in a region which overlaps with the gate electrode into the oxide semiconductor layer, whereby the number of oxygen vacancies and the number of interface states in the oxide semiconductor layer in a region which overlaps with the gate electrode are reduced. 
further, one embodiment of the present invention is a method for manufacturing a semiconductor device in which an oxide semiconductor layer is formed over a substrate having an insulating surface, an insulating layer from which oxygen can be released by heating is formed over the oxide semiconductor layer, a gate electrode is formed over the insulating layer, and light irradiation treatment is performed on the gate electrode to add oxygen from the insulating layer in a region which overlaps with the gate electrode to the oxide semiconductor layer, whereby the number of oxygen vacancies and the number of interface states in the oxide semiconductor layer in a region which overlaps with the gate electrode are reduced. according to the above-described embodiment of the present invention, the insulating layer in the region which overlaps with the gate electrode is also heated by the light irradiation treatment on the gate electrode, so that oxygen in the insulating layer is detached. since the insulating layer is formed in contact with the oxide semiconductor layer, detached oxygen can be added into the oxide semiconductor layer in the region which overlaps with the gate electrode, which results in a reduction in the number of oxygen vacancies or interface states in the oxide semiconductor layer in the region which overlaps with the gate electrode. in this manner, according to the above-described embodiment of the present invention, a highly reliable semiconductor device with less change in threshold voltage can be manufactured. further, one embodiment of the present invention is a method for manufacturing a semiconductor device in which an anti-oxidation layer containing at least one of molybdenum nitride, tungsten nitride, titanium nitride, tantalum nitride, and aluminum nitride is formed on a surface of the gate electrode, which is in contact with the insulating layer in the above-described embodiment of the present invention. 
according to the above-described embodiment of the present invention, oxidation of the gate electrode on the side which is in contact with the insulating layer due to oxygen released from the insulating layer by the light irradiation treatment can be suppressed. to reduce the size of semiconductor devices, it is important to reduce the thickness of an insulating layer between a gate electrode and a semiconductor layer; however, a metal oxide film with high resistance formed by oxidation of the gate electrode leads to an increase in the thickness of the insulating layer in some cases. hence, with the anti-oxidation layer which suppresses oxidation of the gate electrode and is formed according to the above-described embodiment of the present invention, a downsized semiconductor device with high reliability can be manufactured. one embodiment of the present invention is a method for manufacturing a semiconductor device in which a gate electrode is formed over a substrate having an insulating surface, a gate insulating layer is formed over the gate electrode, an oxide semiconductor layer is formed over the gate insulating layer, an insulating layer from which oxygen can be released by heating is formed in contact with the oxide semiconductor layer so as to overlap with the gate electrode, a metal layer is formed over the insulating layer so as to overlap with the gate electrode and the insulating layer, and light irradiation treatment is performed on the metal layer to add oxygen from the insulating layer in a region which overlaps with the gate electrode to the oxide semiconductor layer, whereby the number of oxygen vacancies and the number of interface states in the oxide semiconductor layer in a region which overlaps with the gate electrode are reduced. 
further, one embodiment of the present invention is a method for manufacturing a semiconductor device in which an island-shaped metal layer is formed over a substrate having an insulating surface, an insulating layer from which oxygen can be released by heating is formed over the metal layer, an oxide semiconductor layer is formed in contact with the insulating layer, a gate insulating layer is formed over the oxide semiconductor layer, a gate electrode is formed over the gate insulating layer so as to overlap with the metal layer and the insulating layer, and light irradiation treatment is performed on the metal layer to add oxygen from the insulating layer in a region which overlaps with the metal layer to the oxide semiconductor layer, whereby the number of oxygen vacancies and the number of interface states in the oxide semiconductor layer in a region which overlaps with the gate electrode are reduced. according to the above-described embodiment of the present invention, the insulating layer in the region which overlaps with the metal layer is also heated by the light irradiation treatment on the metal layer, so that oxygen in the insulating layer is detached. since the insulating layer is formed in contact with the oxide semiconductor layer, detached oxygen can be added into the oxide semiconductor layer in the region which overlaps with the metal layer, which results in a reduction in the number of oxygen vacancies or interface states in the oxide semiconductor layer in the region which overlaps with the gate electrode. the metal layer is not directly involved in operation of the semiconductor device unlike the gate electrode; thus, any material which generates heat effectively by light irradiation treatment can be used for the metal layer regardless of its resistance or thickness. accordingly, a highly reliable semiconductor device with less change in threshold voltage can be manufactured at lower cost. 
the metal layer also acts to suppress incidence of external light into the oxide semiconductor layer in the region which overlaps with the gate electrode (acts as a so-called light-blocking film) as well as to heat the insulating layer. accordingly, a highly reliable semiconductor device with less change in characteristics due to light incidence from outside can be manufactured. further, one embodiment of the present invention is a method for manufacturing a semiconductor device in which a layer having an optical absorptance of 60% or more in the wavelength region from 400 nm to 1000 nm both inclusive is formed as the metal layer in the above-described embodiment of the present invention. according to the above-described embodiment of the present invention, the metal layer can efficiently absorb irradiated light to generate heat, so that oxygen can be added efficiently into the oxide semiconductor layer in the region which overlaps with the gate electrode even by light irradiation at low energy. accordingly, power consumption and frequency of maintenance of an apparatus for light irradiation can be reduced, and a highly reliable semiconductor device with less change in threshold voltage can be manufactured at lower cost. further, one embodiment of the present invention is a method for manufacturing a semiconductor device in which the insulating layer is formed by a sputtering method using oxygen or a mixed gas of oxygen and argon in the above-described embodiment of the present invention. according to the above-described embodiment of the present invention, a sufficient number of oxygen atoms are contained in the insulating layer, whereby oxygen vacancies or interface states in the oxide semiconductor layer in the region which overlaps with the gate electrode can be effectively reduced by performing the light irradiation treatment to heat the insulating layer.
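as an aside, the 60% absorptance criterion above can be expressed as a small check. the helper and the sample spectra below are illustrative assumptions, not measured data or part of the described method:

```python
# illustrative helper (not part of the patent text): checks whether a sampled
# absorptance spectrum satisfies the ">= 60% over 400-1000 nm" criterion
# stated above for the heat-generating metal layer.
def is_high_absorptivity(spectrum, lo_nm=400, hi_nm=1000, threshold=0.60):
    """spectrum: dict mapping wavelength in nm -> absorptance (0..1).

    returns True only if every sampled point inside [lo_nm, hi_nm]
    meets the threshold.
    """
    points = [a for wl, a in spectrum.items() if lo_nm <= wl <= hi_nm]
    if not points:
        raise ValueError("no data points in the evaluated wavelength range")
    return min(points) >= threshold

# hypothetical spectra for illustration only
tungsten_like = {400: 0.70, 600: 0.66, 800: 0.62, 1000: 0.61}
aluminum_like = {400: 0.08, 600: 0.09, 800: 0.13, 1000: 0.06}

print(is_high_absorptivity(tungsten_like))  # True
print(is_high_absorptivity(aluminum_like))  # False
```

a finer wavelength sampling would tighten the check; the pass/fail logic stays the same.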
accordingly, a highly reliable semiconductor device with less change in threshold voltage can be manufactured. the expression “b is formed over a” or “b is formed on a” explicitly described in this specification, etc. means not only the case where b is formed on and in direct contact with a but also the case where a and b are not in direct contact with each other, i.e., the case where another object is provided between a and b. here, a and b each denote an object (e.g., device, element, circuit, wiring, electrode, terminal, film, or layer). therefore, for example, the explicitly described expression that a layer b is formed on or over a layer a includes both the case where the layer b is formed on and in direct contact with the layer a and the case where another layer (e.g., a layer c or a layer d) is formed on and in direct contact with the layer a and the layer b is formed on and in direct contact with the layer c or the layer d. note that the layer (e.g., the layer c or the layer d) may be a single layer or a plurality of layers. in this specification, ordinal numbers such as “first”, “second”, and “third” are used in order to avoid confusion among elements, and do not give a limitation on the number of elements. functions of a “source” and a “drain” of a transistor in this specification are sometimes switched when a transistor of the opposite polarity is used or when the direction of current flow is changed in circuit operation, for example. therefore, the terms “source” and “drain” can be switched in this specification. according to one embodiment of the present invention, a highly reliable semiconductor device with less change in threshold voltage, in which oxygen vacancies or interface states in an oxide semiconductor layer in a region which overlaps with a gate electrode are reduced, can be provided.

brief description of the drawings

in the accompanying drawings:

figs. 1a and 1b illustrate a structure of a semiconductor device described in embodiment 1;
figs. 2a to 2d illustrate a method for manufacturing the semiconductor device described in embodiment 1;
figs. 3a to 3c illustrate a method for manufacturing the semiconductor device described in embodiment 1;
fig. 4 illustrates a method for manufacturing the semiconductor device described in embodiment 1;
figs. 5a and 5b illustrate a structure of a semiconductor device described in embodiment 2;
figs. 6a to 6c illustrate a method for manufacturing the semiconductor device described in embodiment 2;
figs. 7a and 7b illustrate a method for manufacturing the semiconductor device described in embodiment 2;
figs. 8a and 8b illustrate a structure of a semiconductor device described in embodiment 3;
figs. 9a to 9c illustrate a method for manufacturing the semiconductor device described in embodiment 3;
figs. 10a and 10b illustrate a method for manufacturing the semiconductor device described in embodiment 3;
figs. 11a and 11b illustrate a structure of a semiconductor device described in embodiment 4;
figs. 12a to 12c illustrate a method for manufacturing the semiconductor device described in embodiment 4;
fig. 13 illustrates a method for manufacturing the semiconductor device described in embodiment 4; and
figs. 14a to 14c illustrate modes of an electronic device using a semiconductor device in accordance with one embodiment of the present invention.

detailed description of the invention

hereinafter, embodiments of the present invention are described in detail using the drawings. note that the present invention is not limited to the following description, and it is easily understood by those skilled in the art that various changes and modifications can be made without departing from the spirit and scope of the present invention. therefore, the present invention should not be construed as being limited to the description in the following embodiments.
in the structures of the present invention described below, the same portions or portions having similar functions are denoted by the same reference numerals throughout the drawings, and description thereof is not repeated.

[embodiment 1]

in embodiment 1, a manufacturing method of a semiconductor device according to an embodiment of the present invention is described using figs. 1a and 1b , figs. 2a to 2d , figs. 3a to 3c , and fig. 4 .

<structure of semiconductor device according to this embodiment>

figs. 1a and 1b illustrate an example of a structure of a semiconductor device manufactured according to a method of this embodiment, a bottom-gate transistor 150 : fig. 1a is a top view of the transistor 150 and fig. 1b is a cross-sectional schematic diagram taken along a dashed line a-b in fig. 1a . in the top view of fig. 1a , only patterned film(s) and/or layer(s) are shown for easy understanding of the structure. although the manufacturing method is described for the case where the transistor 150 is an n-channel transistor whose carriers are electrons in this embodiment, the transistor 150 is not limited to the n-channel transistor. the transistor 150 shown in figs.
1a and 1b includes a substrate 100 , a base layer 102 formed over the substrate 100 , a gate electrode 104 formed over the base layer 102 , an anti-oxidation layer 105 formed over the gate electrode 104 , an insulating layer 106 formed over the base layer 102 , the gate electrode 104 , and the anti-oxidation layer 105 , an oxide semiconductor layer 108 which includes a low resistance region 108 a functioning as a source region (or a drain region) and a channel formation region 108 b and is formed over the insulating layer 106 , a first interlayer insulating layer 110 formed over the insulating layer 106 and the oxide semiconductor layer 108 , a second interlayer insulating layer 112 formed over the first interlayer insulating layer 110 , and a source electrode 114 a and a drain electrode 114 b which are electrically connected to the low resistance region 108 a through openings in the first interlayer insulating layer 110 and the second interlayer insulating layer 112 .

<manufacturing method of semiconductor device according to this embodiment>

a method for manufacturing the transistor 150 is described below using figs. 2a to 2d , figs. 3a to 3c , and fig. 4 . first, the base layer 102 is formed over the substrate 100 (see fig. 2a ). any substrate can be used as the substrate 100 as long as the substrate has an insulating surface. for example, a non-alkali glass substrate such as an aluminosilicate glass substrate, an aluminoborosilicate glass substrate, or a barium borosilicate glass substrate can be used. such a glass substrate is suitable for an increase in size, and glass substrates of g10 size (2850 mm×3050 mm), g11 size (3000 mm×3320 mm), and the like have been manufactured; thus, the semiconductor device according to one embodiment of the present invention can be mass-produced at low cost.
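as illustrative arithmetic only, one can estimate how many rectangular device substrates a mother-glass sheet yields. the g10 and g11 dimensions are from the text; the 500 mm × 400 mm device-substrate size is a hypothetical example, and scribe lines and edge exclusion are ignored:

```python
# illustrative layout arithmetic; ignores scribe lines and edge margins.
def panels_per_sheet(sheet_w_mm, sheet_h_mm, panel_w_mm, panel_h_mm):
    # try both panel orientations on the sheet and keep the better grid
    upright = (sheet_w_mm // panel_w_mm) * (sheet_h_mm // panel_h_mm)
    rotated = (sheet_w_mm // panel_h_mm) * (sheet_h_mm // panel_w_mm)
    return max(upright, rotated)

# g10 and g11 mother-glass sizes quoted in the text (mm)
sheets = {"g10": (2850, 3050), "g11": (3000, 3320)}
for name, (w, h) in sheets.items():
    # 500 mm x 400 mm is a hypothetical device-substrate size
    print(name, panels_per_sheet(w, h, 500, 400))  # g10: 42, g11: 48
```

real panelization also accounts for scribe width and edge-quality zones, so actual counts are somewhat lower.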
alternatively, as the substrate 100 , an insulating substrate formed of an insulator, such as a quartz substrate or a sapphire substrate, a semiconductor substrate which is formed using a semiconductor material such as silicon and has a surface covered with an insulating material, or a conductive substrate which is formed of a conductor such as metal or stainless steel and has a surface covered with an insulating material can be used. although there is no particular limitation on the thickness of the substrate 100 , it is preferable that the thickness of the substrate 100 be equal to or less than 3 mm, further preferably equal to or less than 1 mm, for a reduction in thickness and weight of the semiconductor device. it is preferable that the substrate 100 have light-transmitting properties in the case where light irradiation treatment is carried out through the substrate 100 . specifically, it is preferable that the light transmittance of the substrate 100 in the wavelength region from 400 nm to 700 nm both inclusive be greater than or equal to 70%, further preferably greater than or equal to 90%. as an example, a non-alkali glass substrate with a thickness of 0.7 mm and a light transmittance of greater than or equal to 80% in the wavelength region from 400 nm to 700 nm both inclusive may be used as the substrate 100 . the base layer 102 prevents impurity diffusion from the substrate 100 ; for the base layer 102 , silicon oxide (sio 2 ), silicon nitride (sin), silicon oxynitride (sion), silicon nitride oxide (sino), aluminum oxide (al 2 o 3 ), aluminum nitride (aln), aluminum oxynitride (alon), aluminum nitride oxide (alno), or the like may be deposited by a known method such as a cvd method (e.g., a plasma-enhanced cvd method), a pvd method, or a sputtering method. the base layer 102 may have a single-layer structure or a stacked-layer structure.
in the case of a stacked-layer structure, the above films may be stacked to form the base layer 102 . here, silicon oxynitride contains more oxygen than nitrogen in composition: for example, silicon oxynitride contains oxygen, nitrogen, silicon, and hydrogen at concentrations ranging from 50 atomic % to 70 atomic % both inclusive, from 0.5 atomic % to 15 atomic % both inclusive, from 25 atomic % to 35 atomic % both inclusive, and from 0 atomic % to 10 atomic % both inclusive, respectively. the above concentrations are concentrations measured using rutherford backscattering spectrometry (rbs) or hydrogen forward scattering spectrometry (hfs). in addition, the total of the percentages of the constituent elements does not exceed 100 atomic %. there is no particular limitation on the thickness of the base layer 102 ; for example, the base layer 102 preferably has a thickness of greater than or equal to 10 nm and less than or equal to 500 nm. when the base layer 102 is thinner than 10 nm, the base layer 102 might not be formed in part of the region because of in-plane thickness distribution attributed to a deposition apparatus; on the other hand, when the base layer 102 is thicker than 500 nm, the deposition time or the manufacturing cost might increase. as an example, 100-nm-thick silicon oxide or silicon nitride may be deposited by a plasma-enhanced cvd method to form the base layer 102 . the base layer 102 is not necessarily provided in the case where another corresponding anti-impurity-diffusion layer is provided on a surface of the substrate 100 . the same applies in the case where another corresponding anti-impurity-diffusion layer is provided between the oxide semiconductor layer 108 and the substrate 100 . next, the gate electrode 104 and the anti-oxidation layer 105 are formed over the base layer 102 (see fig. 2b ).
the gate electrode 104 and the anti-oxidation layer 105 can be formed as follows: a layer for the gate electrode 104 and a layer for the anti-oxidation layer 105 are formed over the base layer 102 and partly removed by a known method such as a dry etching method or a wet etching method using a photoresist mask. although the gate electrode 104 and the anti-oxidation layer 105 are separately described for convenience of explanation in this specification, etc., the anti-oxidation layer 105 can be regarded as part of the gate electrode 104 . as the gate electrode 104 , for example, a metal film or an alloy film containing tantalum (ta), tungsten (w), titanium (ti), molybdenum (mo), aluminum (al), copper (cu), chromium (cr), or neodymium (nd) as its main component, or a nitride film of such a metal or alloy, formed by a known method such as a sputtering method or an evaporation method may be used. there is no particular limitation on the thickness of the gate electrode 104 ; it preferably has a thickness of greater than or equal to 10 nm and less than or equal to 500 nm, for example. when the gate electrode 104 is thinner than 10 nm, the gate electrode 104 might not be formed in part of the region because of in-plane thickness distribution attributed to a deposition apparatus; on the other hand, when the gate electrode 104 is thicker than 500 nm, the deposition time or the manufacturing cost might increase. as an example, tungsten may be deposited to a thickness of 300 nm by a sputtering method to form the gate electrode 104 . in the case where an element whose light reflectance is high, such as aluminum or copper, is used for the gate electrode 104 , it is necessary to form a layer whose light absorptance is high on the surface to be irradiated with light in the light irradiation treatment in order to suppress reflection of light.
specifically, in the case where a layer whose light reflectance in the wavelength region from 400 nm to 1000 nm both inclusive is greater than or equal to 50% (hereinafter abbreviated as a high-reflectivity layer) is used as part of the gate electrode 104 , it is preferable to provide a layer whose light absorptance in the wavelength region from 400 nm to 1000 nm both inclusive is greater than or equal to 60% (hereinafter abbreviated as a high-absorptivity layer) between the high-reflectivity layer and the base layer 102 and/or between the high-reflectivity layer and the anti-oxidation layer 105 . as an example, a stacked-layer structure in which 100-nm-thick tungsten or 100-nm-thick titanium is deposited on a top surface and a bottom surface of 300-nm-thick aluminum by a sputtering method, a stacked-layer structure in which 100-nm-thick tungsten or 100-nm-thick titanium is deposited on a surface (a light incidence surface at light irradiation treatment) of 300-nm-thick aluminum by a sputtering method, or the like may be used as the gate electrode 104 . in the case where a metal whose heat resistance is low, such as aluminum, is used as part of the gate electrode 104 , a semiconductor device described in this specification needs to be formed at temperatures under the upper temperature limit of the gate electrode 104 . as the anti-oxidation layer 105 , for example, a layer formed of at least one of molybdenum nitride, tungsten nitride, titanium nitride, and tantalum nitride by a known method such as a sputtering method or an evaporation method can be used. there is no particular limitation on the thickness of the anti-oxidation layer 105 ; it is preferable to have a thickness of greater than or equal to 5 nm and less than or equal to 100 nm, for example. when the anti-oxidation layer 105 is thinner than 5 nm, the anti-oxidation layer 105 might not be formed in part of the region because of in-plane thickness distribution attributed to a deposition apparatus. 
further, in the case where the resistance of the anti-oxidation layer 105 is higher than that of the gate electrode 104 , it is preferable that the thickness of the anti-oxidation layer 105 be less than or equal to 100 nm to suppress an increase in resistance. as an example, titanium nitride may be deposited to a thickness of 30 nm by a sputtering method to form the anti-oxidation layer 105 . with the anti-oxidation layer 105 , oxidation of the gate electrode 104 by oxygen detached from the insulating layer 106 heated by light irradiation treatment can be suppressed. to reduce the size of semiconductor devices, it is important to reduce the thickness of an insulating layer between a gate electrode and a semiconductor layer; however, a metal oxide film with high resistance formed by oxidation of the gate electrode leads to an increase in the thickness of the insulating layer in some cases, which adversely affects electrical characteristics. hence, provision of the anti-oxidation layer 105 enables a highly reliable semiconductor device to be manufactured particularly in the case where the size of the semiconductor device is small. although the anti-oxidation layer 105 is formed in any embodiment described in this specification, the anti-oxidation layer 105 is not necessarily provided, which depends on the quality of a material of the gate electrode 104 , a margin of thickness of the insulating layer between the gate electrode 104 and the oxide semiconductor layer 108 , or the like. next, the insulating layer 106 is formed over the base layer 102 , the gate electrode 104 , and the anti-oxidation layer 105 (see fig. 2c ). it is necessary to use a layer from which oxygen can be released by heating as the insulating layer 106 . 
the expression “oxygen can be released by heating” means that the amount of released oxygen which is converted to oxygen atoms is greater than or equal to 1×10 18 atoms/cm 3 according to thermal desorption spectroscopy (tds) by heating at a temperature lower than or equal to 300° c. as the insulating layer 106 , a single film or a stacked-layer film containing silicon oxide, aluminum oxide, hafnium oxide, hafnium silicate, hafnium aluminate, zirconium oxide, yttrium oxide, lanthanum oxide, or cerium oxide as its main component may be formed by a known method such as a cvd method (e.g., a plasma-enhanced cvd method), a pvd method, or a sputtering method. as a method for forming the insulating layer 106 from which oxygen can be released by heating, it is preferable to use a sputtering method capable of keeping the oxygen concentration in a deposition atmosphere high. for example, deposition is performed by a sputtering method using oxygen or a mixed gas of oxygen and a rare gas (e.g., argon) as a deposition gas, whereby the insulating layer 106 from which oxygen can be released by heating can be formed. in the case where a mixed gas of oxygen and a rare gas is used as a deposition gas, the proportion of oxygen is preferably set high; it is preferable to set the proportion of oxygen to 6% or more and less than 100% of the deposition gas. accordingly, the insulating layer 106 can be formed to contain a sufficient amount of oxygen, so that oxygen vacancies or interface states in the oxide semiconductor layer 108 in a region which overlaps with the gate electrode 104 can be more effectively reduced by light irradiation treatment to heat the insulating layer 106 .
the expression “contain a sufficient amount of oxygen” means that the amount of released oxygen which is converted to oxygen atoms is greater than or equal to 1×10 19 atoms/cm 3 , preferably greater than or equal to 3×10 20 atoms/cm 3 according to thermal desorption spectroscopy (tds) by heating at a temperature lower than or equal to 300° c. the insulating layer 106 from which oxygen can be released by heating can also be formed by a sputtering method using oxygen-excess silicon oxide (sio x (x>2)) as a target. in the oxygen-excess silicon oxide (sio x (x>2)), the number of oxygen atoms per unit volume is greater than twice the number of silicon atoms per unit volume. the number of silicon atoms and the number of oxygen atoms per unit volume are measured by rutherford backscattering spectrometry. in the case where such a target material is used, the oxygen concentration in the deposition gas is not limited to the above-described one. the thickness of the insulating layer 106 is preferably greater than or equal to 0.1 nm and less than or equal to 500 nm, for example. when the insulating layer 106 is thinner than 0.1 nm, it is difficult for the insulating layer 106 to keep insulation between the gate electrode 104 and the oxide semiconductor layer 108 . as the insulating layer 106 becomes thicker, the short channel effect increases and the threshold voltage tends to shift more in the negative direction. since the insulating layer 106 from which oxygen can be released by heating is provided, heating of the gate electrode 104 by light irradiation treatment is accompanied by heating of the insulating layer 106 in a region which overlaps with the gate electrode 104 , so that oxygen is released therefrom. consequently, oxygen vacancies or interface states in the oxide semiconductor layer 108 in the region which overlaps with the gate electrode 104 can be reduced.
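the two tds criteria quoted above (oxygen is "released by heating" at ≥ 1×10^18 released oxygen atoms/cm^3, and a "sufficient amount" is contained at ≥ 1×10^19, preferably ≥ 3×10^20 atoms/cm^3, with heating at ≤ 300 °c) can be summarized as a small illustrative classifier. this is a reading aid only, not part of the described process:

```python
# illustrative classifier for the tds thresholds quoted in the text
# (released oxygen converted to oxygen atoms, heating at <= 300 c).
def classify_tds_oxygen(released_atoms_per_cm3):
    if released_atoms_per_cm3 >= 3e20:
        return "sufficient (preferred level)"
    if released_atoms_per_cm3 >= 1e19:
        return "sufficient"
    if released_atoms_per_cm3 >= 1e18:
        return "releases oxygen by heating"
    return "insufficient"

print(classify_tds_oxygen(5e20))  # sufficient (preferred level)
print(classify_tds_oxygen(2e19))  # sufficient
print(classify_tds_oxygen(5e18))  # releases oxygen by heating
print(classify_tds_oxygen(1e17))  # insufficient
```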
accordingly, a highly reliable semiconductor device with less change in threshold voltage can be manufactured. as an example, a mixed gas of oxygen and argon may be introduced into a deposition atmosphere, and silicon oxide may be deposited to a thickness of 30 nm by a sputtering method while keeping the oxygen concentration in the gas at 6% or more to form the insulating layer 106 . such a layer formed by a sputtering method is preferable because the amount of hydrogen, nitrogen, and the like is less. next, the oxide semiconductor layer 108 is formed in contact with the insulating layer 106 (see fig. 2d ). the oxide semiconductor layer 108 can be formed as follows: a layer for the oxide semiconductor layer 108 is formed over the insulating layer 106 and is partly removed by a known method such as a dry etching method or a wet etching method using a photoresist mask. as the layer for the oxide semiconductor layer 108 , for example, a layer containing at least one selected from in, ga, sn, and zn formed by a sputtering method or the like may be used. for example, an oxide of four metal elements, such as an in—sn—ga—zn—o-based oxide semiconductor; an oxide of three metal elements, such as an in—ga—zn—o-based oxide semiconductor, an in—sn—zn—o-based oxide semiconductor, an in—al—zn—o-based oxide semiconductor, a sn—ga—zn—o-based oxide semiconductor, an al—ga—zn—o-based oxide semiconductor layer, or a sn—al—zn—o-based oxide semiconductor; an oxide of two metal elements, such as an in—zn—o-based oxide semiconductor, a sn—zn—o-based oxide semiconductor, an al—zn—o-based oxide semiconductor, a zn—mg—o-based oxide semiconductor, a sn—mg—o-based oxide semiconductor, an in—mg—o-based oxide semiconductor, or an in—ga—o-based material; or an oxide of one metal element, such as an in—o-based oxide semiconductor, a sn—o-based oxide semiconductor, or a zn—o-based oxide semiconductor can be used. 
in addition, any of the above oxide semiconductors may further contain an element or a compound other than in, ga, sn, and zn, for example, sio 2 . for example, the in—ga—zn—o-based oxide semiconductor means an oxide semiconductor containing indium (in), gallium (ga), and zinc (zn), and there is no limitation on the composition ratio thereof. as an example, in the case where an in—zn—o-based layer is formed as the oxide semiconductor layer 108 , a target has a composition ratio of in:zn=50:1 to 1:2 in an atomic ratio (in 2 o 3 :zno=25:1 to 1:4 in a molar ratio), preferably in:zn=20:1 to 1:1 in an atomic ratio (in 2 o 3 :zno=10:1 to 1:2 in a molar ratio), further preferably in:zn=15:1 to 1.5:1 in an atomic ratio (in 2 o 3 :zno=15:2 to 3:4 in a molar ratio). for example, in a target used for formation of an in—zn—o-based oxide semiconductor, the composition ratio is set so that z>1.5x+y, where the atomic ratio of in:zn:o is x:y:z. as the layer for the oxide semiconductor layer 108 , a thin film expressed by a chemical formula inmo 3 (zno) m (m>0) can also be used. here, m represents one or more metal elements selected from zn, ga, al, mn, and co. for example, ga, ga and al, ga and mn, ga and co, or the like can be used as m. as a target used for formation of the layer for the oxide semiconductor layer 108 , for example, a target of metal oxide having a composition ratio of in 2 o 3 :ga 2 o 3 :zno=1:1:1 (in a molar ratio) or a composition ratio of in 2 o 3 :ga 2 o 3 :zno=1:1:2 (in a molar ratio) may be used. as for the target of metal oxide used for formation of the layer for the oxide semiconductor layer 108 , the relative density of the oxide semiconductor in the target is set to preferably 80% or more, further preferably 95% or more, still further preferably 99.9% or more. with the use of such an oxide semiconductor target with high relative density, the oxide semiconductor layer 108 can be formed densely.
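the correspondence above between the atomic ratio in:zn and the molar ratio in 2 o 3 :zno follows from in 2 o 3 carrying two in atoms per formula unit while zno carries one zn atom. a small illustrative conversion, together with the z > 1.5x + y oxygen-excess check, is sketched below; it is arithmetic only, not part of the manufacturing method:

```python
from fractions import Fraction

def atomic_to_molar(in_atoms, zn_atoms):
    # in:zn atomic ratio -> in2o3:zno molar ratio
    # (each in2o3 formula unit holds two in atoms, each zno one zn atom)
    ratio = Fraction(in_atoms) / 2 / Fraction(zn_atoms)
    return ratio.numerator, ratio.denominator

def is_oxygen_excess(x, y, z):
    # the text's criterion for an in-zn-o target, with in:zn:o = x:y:z
    return z > 1.5 * x + y

print(atomic_to_molar(50, 1))   # (25, 1), i.e. in2o3:zno = 25:1
print(atomic_to_molar(1, 2))    # (1, 4),  i.e. in2o3:zno = 1:4
print(atomic_to_molar(1.5, 1))  # (3, 4),  i.e. in2o3:zno = 3:4
```

each quoted pair (50:1 to 25:1, 20:1 to 10:1, 15:1 to 15:2, 1.5:1 to 3:4, 1:2 to 1:4) is reproduced by this conversion.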
as a sputtering gas, a rare gas (typically, argon), oxygen, or a mixed gas of a rare gas and oxygen can be used. it is preferable to use a high-purity gas in which impurities such as hydrogen, water, hydroxyl, or hydride are reduced to a concentration in the order of ppm (further preferably, in the order of ppb). as the sputtering gas used for formation of the layer for the oxide semiconductor layer 108 , for example, oxygen may be supplied at a flow rate of 40 sccm (the proportion of the oxygen flow is 100%) to a sputtering apparatus. at the time of forming the layer for the oxide semiconductor layer 108 , for example, the substrate is held in a process chamber that is maintained at reduced pressure, and the substrate is heated to a temperature higher than or equal to 100° c. and lower than or equal to 600° c., preferably higher than or equal to 200° c. and lower than or equal to 400° c. further, a high-purity gas from which impurities such as hydrogen, water, a hydroxyl group, and hydride are removed is introduced while moisture remaining in the process chamber is removed, whereby the layer is deposited using the above-described target of metal oxide. impurities contained in the layer can be reduced by heating the substrate at the time of deposition. in addition, damage to the layer due to sputtering is suppressed. note that this substrate heating needs to be performed at a temperature lower than or equal to the upper temperature limits of both the gate electrode 104 and the anti-oxidation layer 105 . before the deposition of the layer for the oxide semiconductor layer 108 , it is preferable to perform preheat treatment in order to remove moisture and the like remaining in the sputtering apparatus. for the preheat treatment, a method in which the inside of the process chamber is heated to higher than or equal to 200° c. and lower than or equal to 600° c.
under reduced pressure, a method in which introduction and exhaust of nitrogen or an inert gas are repeated while the inside of the process chamber is heated, and the like can be given. after the preheat treatment, the substrate or the sputtering apparatus is cooled, which is followed by film deposition without exposure to the air. in that case, not water but oil or the like is preferably used as a coolant for the target. although a certain effect can be obtained by repeating introduction and exhaust of nitrogen without heating, it is further preferable to perform the treatment with the inside of the process chamber heated. for removing moisture and the like remaining in the sputtering apparatus before, during, or after the film deposition, an entrapment vacuum pump is preferably used as the vacuum pump provided for the process chamber. for example, a cryopump, an ion pump, a titanium sublimation pump, or the like may be used. a turbo pump provided with a cold trap may be used. since hydrogen, water, and the like can be removed from the process chamber with any of the above pumps, the oxide semiconductor layer 108 can be formed with a lower impurity concentration. the conditions for depositing the layer for the oxide semiconductor layer 108 can be set as follows: the distance between the substrate and the target is 170 mm, the pressure is 0.4 pa, the direct current (dc) power is 0.5 kw, and the atmosphere is an oxygen atmosphere (the proportion of the oxygen flow is 100%). pulsed direct current (dc) power is preferably used because particles can be reduced and the film thickness can be uniform. an appropriate thickness of the layer differs depending on the oxide semiconductor material to be used, the usage, or the like; thus, the thickness thereof may be determined as appropriate in accordance with the material, the usage, or the like.
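the deposition windows stated above can be gathered into a simple illustrative sanity check. the recipe-dictionary layout and field names are assumptions for illustration; only the numeric windows (substrate temperature 100–600 °c, preferably 200–400 °c) and the example conditions (170 mm, 0.4 pa, dc 0.5 kw, 100% oxygen flow) come from the text:

```python
# illustrative sanity check for the deposition windows stated in the text;
# the dict layout is an assumption, not an actual tool or file format.
def check_recipe(recipe):
    notes = []
    t = recipe["substrate_temp_c"]
    if not 100 <= t <= 600:
        notes.append("substrate temperature outside the 100-600 c window")
    elif not 200 <= t <= 400:
        notes.append("substrate temperature outside the preferred 200-400 c window")
    if recipe["o2_flow_percent"] != 100:
        notes.append("the text's example atmosphere is a 100% oxygen flow")
    return notes or ["ok"]

# example conditions quoted in the text
example = {"substrate_temp_c": 300, "distance_mm": 170,
           "pressure_pa": 0.4, "dc_power_kw": 0.5, "o2_flow_percent": 100}
print(check_recipe(example))  # ['ok']
```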
the thickness of the layer for the oxide semiconductor layer 108 is preferably greater than or equal to 3 nm and less than or equal to 50 nm. this is because when the oxide semiconductor layer 108 is too thick (e.g., 100 nm or more), there is a possibility that the short channel effect becomes significant and makes a small transistor normally on. here, “normally on” means a state where a channel formation region exists without voltage application to the gate electrode so that current flows through the transistor. the oxide semiconductor layer 108 can be formed as follows: the layer for the oxide semiconductor layer 108 formed through the above step is partly removed by a known method such as a dry etching method or a wet etching method using a photoresist mask. an example of an etching gas which can be used for the dry etching is a gas containing chlorine (a chlorine-based gas such as chlorine (cl 2 ), boron trichloride (bcl 3 ), silicon tetrachloride (sicl 4 ), or carbon tetrachloride (ccl 4 )). a gas containing fluorine (a fluorine-based gas such as carbon tetrafluoride (cf 4 ), sulfur hexafluoride (sf 6 ), nitrogen trifluoride (nf 3 ), or trifluoromethane (chf 3 )), hydrogen bromide (hbr), oxygen (o 2 ), any of these gases to which a rare gas such as helium (he) or argon (ar) is added, or the like may also be used. examples of an etchant that can be used for the wet etching are a mixed solution of phosphoric acid, acetic acid, and nitric acid; and an ammonia peroxide mixture (hydrogen peroxide solution of 31 wt %:ammonia solution of 28 wt %:water=5:2:2). an etchant such as ito-07n (produced by kanto chemical co., inc.) may also be used. next, light irradiation treatment 130 is performed on the gate electrode 104 (see fig. 3a ). the light is absorbed in the gate electrode 104 and heats the gate electrode 104 .
the insulating layer 106 in the region which overlaps with the gate electrode 104 is accordingly heated to release oxygen contained in the insulating layer 106 . part of the released oxygen is added into the oxide semiconductor layer 108 , especially in the region which overlaps with the gate electrode 104 . although the light irradiation treatment 130 is performed on the gate electrode 104 in this embodiment, the light irradiation treatment 130 may be performed on the entire surface of the substrate. further, although the light irradiation treatment 130 is performed from the bottom surface side as shown in fig. 3a in this embodiment, one embodiment of the present invention is not limited thereto; the light irradiation treatment 130 may be performed from the top surface side (i.e., the oxide semiconductor layer 108 side in fig. 3a ) or from both surface sides. as for a method for heating the gate electrode 104 , the gate electrode 104 may be formed using a magnetic metal and irradiated with an electromagnetic wave such as a microwave, so that the gate electrode 104 is heated by induction heating. as the magnetic metal, for example, a metal film or an alloy film containing iron (fe), cobalt (co), nickel (ni), gadolinium (gd), terbium (tb), dysprosium (dy), holmium (ho), erbium (er), thulium (tm), vanadium (v), chromium (cr), manganese (mn), copper (cu), zinc (zn), palladium (pd), or platinum (pt) as a main component may be used. the bonding in the oxide semiconductor layer 108 has a strong ionic character, and thus oxygen vacancies are likely to be generated. some of the oxygen vacancies form donors and generate electrons that serve as carriers.
therefore, oxygen vacancies in the vicinity of the interface with the insulating layer 106 in the region of the oxide semiconductor layer 108 which overlaps with the gate electrode 104 shift the threshold voltage of the transistor in the negative direction (i.e., make the transistor a so-called normally-on transistor). in view of the foregoing, the above-described light irradiation treatment 130 is performed to add oxygen into the oxide semiconductor layer 108 in the region which overlaps with the gate electrode 104 , so that the number of oxygen vacancies in that region can be reduced, thereby suppressing the shift of the threshold voltage in the negative direction. in addition, such a reduction in oxygen vacancies or interface states in the oxide semiconductor layer 108 in the region which overlaps with the gate electrode 104 by the above-described light irradiation treatment 130 enables trapping of electrical charges caused by operation of the semiconductor device at the interface between the insulating layer 106 and the channel formation region to be sufficiently suppressed. further, oxygen released from the insulating layer 106 toward the gate electrode 104 could oxidize the surface of the gate electrode 104 and thereby increase the effective thickness of the insulating layer; to address this concern, the anti-oxidation layer 105 is formed in this embodiment to suppress the surface oxidation of the gate electrode 104 . the anti-oxidation layer 105 is formed on the top surface of the gate electrode 104 as shown in fig. 3a in this embodiment; however, one embodiment of the present invention is not limited thereto. for example, the anti-oxidation layer 105 may be formed not only on the top surface of the gate electrode 104 but also on the side surface of the gate electrode 104 . the light irradiation treatment 130 can be performed with a laser apparatus, for example.
as the laser, one or more of the following can be used: a gas laser such as an ar laser, a kr laser, or an excimer laser; a laser whose medium is single crystal yag, yvo 4 , forsterite (mg 2 sio 4 ), yalo 3 , or gdvo 4 , or polycrystalline (ceramic) yag, y 2 o 3 , yvo 4 , yalo 3 , or gdvo 4 that is doped with one or more of nd, yb, cr, ti, ho, er, tm, and ta as a dopant; a glass laser; a ruby laser; an alexandrite laser; a ti:sapphire laser; a copper vapor laser; and a gold vapor laser. further, a solid-state laser, whose laser medium is solid, provides the advantages of long maintenance-free operation and relatively stable output. alternatively, instead of the laser apparatus, a discharge lamp typified by a flash lamp (e.g., a xenon flash lamp or a krypton flash lamp), a xenon lamp, or a metal halide lamp; or an exothermic lamp typified by a halogen lamp or a tungsten lamp can be used. the flash lamp repeats emission with extremely high intensity in a short time (longer than or equal to 0.1 msec and shorter than or equal to 10 msec) and can irradiate a large area; thus, efficient heating is possible regardless of the area of the substrate 100 . further, the flash lamp can control heating of the gate electrode 104 by changing the interval of the emission time. moreover, since the life of the flash lamp is long and its stand-by power consumption is low, running cost can be suppressed. as one example, as the light irradiation treatment 130 , the gate electrode 104 may be heated with a xenon flash lamp with an emission time of 1 msec. next, a mask 120 is formed on the region of the oxide semiconductor layer 108 which overlaps with the gate electrode 104 , and then impurity addition treatment 131 is performed on the oxide semiconductor layer 108 .
in this manner, in the oxide semiconductor layer 108 , an impurity is added into a region on which the mask 120 is not positioned, so that the low resistance region 108 a which functions as the source region (or the drain region) and the channel formation region 108 b are formed in a self-aligned manner (see fig. 3b ). the mask 120 may be removed after the low resistance region is formed. the mask 120 can be formed as follows, for example: a known resist material is provided on the oxide semiconductor layer 108 and is subjected to light exposure with the use of a photomask, and then removed as appropriate by a known method such as a dry-etching method or a wet-etching method. there is no particular limitation on the thickness of the mask 120 ; for example, it is preferable that the thickness of the mask 120 be greater than or equal to 0.3 μm and less than or equal to 5 μm. when the thickness of the mask 120 is less than 0.3 μm, the impurity may pass through the mask 120 into the oxide semiconductor layer 108 at the time of the impurity addition treatment 131 . on the other hand, a thickness greater than 5 μm is not preferable in terms of deposition rate and manufacturing cost. the impurity addition treatment 131 can be performed using, for example, one or more selected from a rare gas such as argon (ar), krypton (kr), or xenon (xe), and the elements belonging to group 15 of the periodic table such as nitrogen (n), phosphorus (p), arsenic (as), and antimony (sb) with an ion-doping apparatus or an ion implantation apparatus. a typical example of the ion-doping apparatus is a non-mass-separation type apparatus in which plasma excitation of a process gas is performed and an object to be processed is irradiated with all kinds of ion species generated. in this apparatus, the object is irradiated with ion species of plasma without mass separation. in contrast, the ion implantation apparatus is a mass-separation apparatus.
in the ion implantation apparatus, mass separation of ion species of plasma is performed and an object to be processed is irradiated with ion species having predetermined masses. for example, as the impurity addition treatment 131 , a region including the oxide semiconductor layer 108 may be irradiated with argon (ar) ions with an ion-doping apparatus. in the case of using argon as a source gas, the low resistance region 108 a which functions as the source region (or the drain region) may be formed by performing irradiation with an acceleration voltage in the range from 0.1 kv to 100 kv and a dose in the range from 1×10 14 ions/cm 2 to 1×10 17 ions/cm 2 . the resistivity of the low resistance region 108 a is preferably greater than or equal to 1×10 −4 ω·cm and less than or equal to 3 ω·cm, further preferably greater than or equal to 1×10 −3 ω·cm and less than or equal to 3×10 −1 ω·cm. accordingly, a reduction in the on-state current can be suppressed to increase the on/off ratio. since argon is an inert gas, the gas atmosphere and temperature during ion addition are easily controlled; thus, work efficiency and safety can be improved. the gate electrode 104 is entirely heated by the above-described light irradiation treatment 130 ; therefore, oxygen is released also from the insulating layer 106 on the side portion of the gate electrode 104 in the bottom-gate semiconductor device described in this embodiment, and may be added into the low resistance region 108 a of the oxide semiconductor layer 108 to increase the resistance thereof. however, since the impurity addition treatment 131 is also performed on those regions to sufficiently reduce the resistance, an adverse effect on the electrical characteristics (e.g., a reduction in the on-state current due to high resistance of the low resistance region 108 a ) can be prevented.
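for a rough feel of what the dose window above implies, one can estimate the average impurity concentration. this is a back-of-the-envelope sketch that is not in the source: it assumes the implanted dose distributes uniformly through an assumed 30 nm layer thickness (within the 3–50 nm range given earlier), whereas a real implant profile is not uniform.

```python
def average_concentration(dose_per_cm2, thickness_cm):
    """average volume concentration (ions/cm^3) for a given areal dose,
    assuming (crudely) a uniform distribution through the layer."""
    return dose_per_cm2 / thickness_cm

thickness_cm = 30e-7  # 30 nm in cm; an assumed value, not stated for this step

# dose window from the text: 1e14 to 1e17 ions/cm^2
n_low = average_concentration(1e14, thickness_cm)   # ~3.3e19 ions/cm^3
n_high = average_concentration(1e17, thickness_cm)  # ~3.3e22 ions/cm^3
print(f"{n_low:.2e} to {n_high:.2e} ions/cm^3")
```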
the impurity addition treatment 131 , which is performed on the oxide semiconductor layer 108 in this embodiment, is not necessarily performed. the impurity addition treatment 131 can be skipped as long as ohmic contact can be formed by electrical connection between the oxide semiconductor layer 108 and the source electrode 114 a and between the oxide semiconductor layer 108 and the drain electrode 114 b. next, over the insulating layer 106 and the oxide semiconductor layer 108 , the first interlayer insulating layer 110 and the second interlayer insulating layer 112 are formed (see fig. 3c ). the first interlayer insulating layer 110 may be formed of, for example, a single layer or a stacked layer of an insulating film of silicon oxide, silicon nitride, silicon oxynitride, silicon nitride oxide, aluminum oxide, aluminum nitride, aluminum oxynitride, aluminum nitride oxide, or the like by a cvd method such as a plasma-enhanced cvd method, a pvd method, a sputtering method, or the like. the second interlayer insulating layer 112 may be formed of, for example, an organic insulating material such as polyimide, acrylic, polyamide, or polyimide-amide, or a siloxane resin by a coating method such as a spin-coating method or a dispenser method, a printing method such as a screen printing method, or the like. the siloxane resin corresponds to a resin that contains a si—o—si bond. siloxane includes a skeleton formed from a bond of silicon (si) and oxygen (o). an organic group (such as an alkyl group and an aryl group) or a fluoro group may be used as a substituent thereof. the second interlayer insulating layer 112 is a layer for planarizing unevenness of a top surface of the first interlayer insulating layer 110 ; accordingly, electrodes, wirings, or the like can be formed appropriately on the transistor 150 . 
the second interlayer insulating layer 112 is not necessarily provided, which depends on the surface unevenness after formation of the first interlayer insulating layer 110 . although the first interlayer insulating layer 110 and the second interlayer insulating layer 112 each have a single-layer structure in this embodiment, a stacked-layer structure of two or more layers may be employed as well. as one example, aluminum oxide may be deposited to a thickness of 300 nm by a plasma-enhanced cvd method to form the first interlayer insulating layer, and then, polyimide may be deposited to a thickness of 1.5 μm by a spin-coating method and cured by heat treatment to form the second interlayer insulating layer 112 . next, openings are formed in the first interlayer insulating layer 110 and the second interlayer insulating layer 112 , and then, the source electrode 114 a which is electrically connected to the low resistance region 108 a through the opening and the drain electrode 114 b which is electrically connected to the low resistance region 108 a through the opening are formed ( fig. 4 ). the openings in the first interlayer insulating layer 110 and the second interlayer insulating layer 112 may be formed by selectively removing the first interlayer insulating layer 110 and the second interlayer insulating layer 112 by a known method such as a dry-etching method or a wet-etching method using a photoresist mask. the source electrode 114 a and the drain electrode 114 b can be formed as follows: a conductive layer is formed by a sputtering method, an evaporation method, or the like and etched into a desired shape by a known method such as a dry-etching method or a wet-etching method using a photoresist mask. the source electrode 114 a and the drain electrode 114 b can also be formed by forming a conductive layer in appropriate portions by a droplet discharging method, a printing method, an electrolytic plating method, or the like. 
alternatively, a reflow method or a damascene method may be used. the conductive layer forming the source electrode 114 a and the drain electrode 114 b is formed using a metal such as aluminum (al), gold (au), copper (cu), tungsten (w), tantalum (ta), molybdenum (mo), titanium (ti), or chromium (cr), si, ge, an alloy thereof, or a nitride thereof. a stacked-layer structure of such materials may also be used. as one example, 50-nm-thick titanium, 500-nm-thick aluminum, and 50-nm-thick titanium are stacked by a sputtering method and are patterned by a dry-etching method using a photoresist mask to form the source electrode 114 a and the drain electrode 114 b. through the above-described process, the semiconductor device that is the bottom-gate transistor 150 as shown in fig. 1b can be manufactured, which includes the base layer 102 formed over the substrate 100 , the gate electrode 104 formed over the base layer 102 , the anti-oxidation layer 105 formed over the gate electrode 104 , the insulating layer 106 formed over the base layer 102 , the gate electrode 104 , and the anti-oxidation layer 105 , the oxide semiconductor layer 108 which includes the low resistance region 108 a functioning as the source region (or the drain region) and the channel formation region 108 b and is formed over the insulating layer 106 , the first interlayer insulating layer 110 formed over the insulating layer 106 and the oxide semiconductor layer 108 , the second interlayer insulating layer 112 formed over the first interlayer insulating layer 110 , and the source electrode 114 a and the drain electrode 114 b which are electrically connected to the low resistance region 108 a through the openings in the first interlayer insulating layer 110 and the second interlayer insulating layer 112 .
although not shown, the gate electrode 104 is electrically led over the second interlayer insulating layer 112 through a conductive wiring via a contact hole which is formed in the first interlayer insulating layer 110 and the second interlayer insulating layer 112 . in the oxide semiconductor layer 108 , the carrier concentration is sufficiently low (e.g., less than 1×10 12 /cm 3 , preferably less than 1.45×10 10 /cm 3 ) as compared with the carrier concentration (about 1×10 14 /cm 3 ) of a general silicon wafer. at a drain voltage in the range of from 1 v to 10 v, the off-state current (current flowing between the source and the drain when the gate-source voltage is 0 v or less) is 1×10 −13 a or less, or the off-state current density (a value obtained by dividing an off-state current by a channel width of a transistor) is 10 aa/μm (“a” represents “atto” and denotes a factor of 10 −18 ) or less, preferably 1 aa/μm or less, and further preferably 100 za/μm (“z” represents “zepto” and denotes a factor of 10 −21 ) or less, in the case where the channel length is 10 μm and the total thickness of the oxide semiconductor layer is 30 nm. the resistance at the time when the transistor is off (off-state resistance r) can be calculated according to ohm's law using the values of the off-state current and the drain voltage, and the off-state resistivity ρ can be calculated according to the formula ρ=ra/l (r is the off-state resistance) using the cross-sectional area a of the channel formation region and the channel length l. the off-state resistivity is preferably 1×10 9 ω·m or higher (or 1×10 10 ω·m or higher). the cross-sectional area a can be obtained according to the formula a=dw (d: the thickness of the channel formation region, w: the channel width).
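the off-state resistance and resistivity relations above (ohm's law plus ρ=ra/l with a=dw) can be checked numerically. a minimal sketch; the channel width of 10 μm is an assumed value (the text only gives the off-state current density per μm of channel width), and the function name is illustrative:

```python
def off_state_resistivity(v_drain, i_off, d, w, l):
    """compute off-state resistance r = v/i (ohm's law) and off-state
    resistivity rho = r*a/l with cross-sectional area a = d*w
    (all lengths in meters)."""
    r = v_drain / i_off          # ohm's law
    a = d * w                    # a = dw
    return r, r * a / l          # rho = ra/l

# values from the text: v_d = 1 v, channel length l = 10 um, layer
# thickness d = 30 nm; an off-state current density of 1 aa/um at an
# assumed channel width w = 10 um gives i_off = 10 aa = 1e-17 a.
r, rho = off_state_resistivity(1.0, 1e-17, 30e-9, 10e-6, 10e-6)
print(f"r = {r:.1e} ohm, rho = {rho:.1e} ohm*m")  # rho = 3.0e+09 ohm*m
```

the result, 3×10^9 ω·m, is consistent with the "preferably 1×10^9 ω·m or higher" figure stated above.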
the off-state current of a transistor including amorphous silicon is about 10 −12 a, whereas the off-state current of a transistor including an oxide semiconductor is 1/10000 or less of that of the transistor including amorphous silicon. thus, the transistor 150 , which has excellent off-state current characteristics, can be provided. <effect of semiconductor device manufactured according to this embodiment> in the transistor 150 shown in figs. 1a and 1b , which is manufactured through the above-described process, not only the gate electrode 104 but also the insulating layer 106 in the region which overlaps with the gate electrode 104 is heated by the light irradiation treatment, so that oxygen contained in the insulating layer is released. that oxygen released from the insulating layer 106 can be added into the oxide semiconductor layer 108 , which is in contact with the insulating layer 106 , in the region which overlaps with the gate electrode 104 . accordingly, oxygen vacancies or interface states in the oxide semiconductor layer 108 in the region which overlaps with the gate electrode 104 can be reduced. in this manner, according to the method described in this embodiment, a highly reliable semiconductor device with less change in the threshold voltage can be manufactured. [embodiment 2 ] in embodiment 2, a semiconductor device whose structure is different from embodiment 1 is described using figs. 5a and 5b , figs. 6a to 6c , and figs. 7a and 7b . in the structure of this embodiment described below, portions which are the same as or have functions similar to those in embodiment 1 are denoted by the same reference numerals in the drawings, and the description thereof is not repeated. <structure of semiconductor device according to this embodiment> figs. 5a and 5b illustrate an example of a structure of a semiconductor device manufactured according to a method of this embodiment, a top-gate transistor 550 : fig.
5a is a top view of the transistor 550 and fig. 5b is a cross-sectional schematic diagram taken along a dashed line c-d in fig. 5a . in the top view of fig. 5a , only patterned film(s) and/or layer(s) are shown for easy understanding of the structure. although the manufacturing method is described for the case where the transistor 550 is an n-channel transistor whose carriers are electrons in this embodiment, one embodiment of the present invention is not limited to the case of an n-channel transistor. the transistor 550 shown in figs. 5a and 5b includes a substrate 100 , a base layer 102 formed over the substrate 100 , an oxide semiconductor layer 108 which includes a low resistance region 108 a functioning as a source region (or a drain region) and a channel formation region 108 b and is formed over the base layer 102 , an insulating layer 106 formed over the oxide semiconductor layer 108 , an anti-oxidation layer 105 and a gate electrode 104 which are formed over the insulating layer 106 , a first interlayer insulating layer 110 formed over the insulating layer 106 , the anti-oxidation layer 105 , and the gate electrode 104 , a second interlayer insulating layer 112 formed over the first interlayer insulating layer 110 , and a source electrode 114 a and a drain electrode 114 b which are electrically connected to the low resistance region 108 a through openings in the first interlayer insulating layer 110 , the second interlayer insulating layer 112 , and the insulating layer 106 . <manufacturing method of semiconductor device according to this embodiment> a method for manufacturing the transistor 550 is described below using figs. 6a to 6c and figs. 7a and 7b . first, the base layer 102 is formed over the substrate 100 , the oxide semiconductor layer 108 is formed over the base layer 102 , and the insulating layer 106 from which oxygen can be released by heating is formed over the base layer 102 and the oxide semiconductor layer 108 (see fig. 6a ).
material qualities, characteristics, formation methods, and the like of the substrate 100 , the base layer 102 , the insulating layer 106 , and the oxide semiconductor layer 108 are the same as those in embodiment 1, and thus detailed description thereof is not repeated here. next, the anti-oxidation layer 105 and the gate electrode 104 are formed over the insulating layer 106 (see fig. 6b ). material qualities, characteristics, formation methods, and the like of the anti-oxidation layer 105 and the gate electrode 104 are the same as those in embodiment 1, and thus detailed description thereof is not repeated here. next, light irradiation treatment 130 is performed on the gate electrode 104 (see fig. 6c ). accordingly, oxygen is added into the oxide semiconductor layer 108 in the region which overlaps with the gate electrode 104 , like in embodiment 1. although the light irradiation treatment 130 is performed on the gate electrode 104 in this embodiment, the light irradiation treatment 130 may be performed on the entire surface of the substrate. further, although the light irradiation treatment 130 is performed from the top surface side as shown in fig. 6c in this embodiment, one embodiment of the present invention is not limited thereto; the light irradiation treatment 130 may be performed from the bottom surface side (i.e., the substrate 100 side in fig. 6c ) or from both surface sides. an apparatus, a method, and the like of the light irradiation treatment 130 are the same as those in embodiment 1, and thus detailed description thereof is not repeated here. next, impurity addition treatment 131 is performed on a region including the oxide semiconductor layer 108 , so that the low resistance region 108 a which functions as the source region (or the drain region) and the channel formation region 108 b are formed (see fig. 7a ). 
an apparatus, a method, and the like of the impurity addition treatment 131 are the same as those in embodiment 1, and thus detailed description thereof is not repeated here. in this embodiment, since the gate electrode 104 , which is formed over the oxide semiconductor layer 108 , can be used as a mask for the impurity addition treatment 131 , impurities can be prevented from being added into the oxide semiconductor layer 108 in a region which overlaps with the gate electrode 104 , whereby the low resistance region 108 a which functions as the source region (or the drain region) and the channel formation region 108 b can be formed in a self-aligned manner (see fig. 7a ). accordingly, the manufacturing process can be simplified. consequently, a semiconductor device can be manufactured at lower cost. next, over the gate electrode 104 , the anti-oxidation layer 105 , and the insulating layer 106 , the first interlayer insulating layer 110 and the second interlayer insulating layer 112 are formed (see fig. 7b ). material qualities, characteristics, formation methods, and the like of the first interlayer insulating layer 110 and the second interlayer insulating layer 112 are the same as those in embodiment 1, and thus detailed description thereof is not repeated here. the first interlayer insulating layer 110 , which is formed over the insulating layer 106 , the anti-oxidation layer 105 , and the gate electrode 104 in this embodiment, is not necessarily provided. further, although the first interlayer insulating layer 110 and the second interlayer insulating layer 112 each have a single-layer structure in this embodiment, a stacked-layer structure of two or more layers may be employed as well. a material quality and a structure of an interlayer insulating layer may be selected as appropriate considering the use application or requisite characteristics of the transistor 550 . 
through the above-described process, the semiconductor device that is the top-gate transistor 550 as shown in fig. 5b can be manufactured, which includes the base layer 102 formed over the substrate 100 , the oxide semiconductor layer 108 which includes the low resistance region 108 a functioning as the source region (or the drain region) and the channel formation region 108 b and is formed over the base layer 102 , the insulating layer 106 formed over the base layer 102 and the oxide semiconductor layer 108 , the anti-oxidation layer 105 and the gate electrode 104 which are formed over the insulating layer 106 , the first interlayer insulating layer 110 formed over the insulating layer 106 , the anti-oxidation layer 105 , and the gate electrode 104 , the second interlayer insulating layer 112 formed over the first interlayer insulating layer 110 , and the source electrode 114 a and the drain electrode 114 b which are electrically connected to the low resistance region 108 a through openings in the first interlayer insulating layer 110 , the second interlayer insulating layer 112 , and the insulating layer 106 . although not shown, the gate electrode 104 is electrically led over the second interlayer insulating layer 112 through a conductive wiring via a contact hole which is formed in the first interlayer insulating layer 110 and the second interlayer insulating layer 112 . <effect of semiconductor device manufactured according to this embodiment> in the transistor 550 shown in figs. 5a and 5b , which is manufactured through the above-described process, not only the gate electrode 104 but also the insulating layer 106 in the region which overlaps with the gate electrode 104 is heated by the light irradiation treatment 130 , so that oxygen contained in the insulating layer is released.
that oxygen released from the insulating layer 106 can be added into the oxide semiconductor layer 108 , which is in contact with the insulating layer 106 , in the region which overlaps with the gate electrode 104 . accordingly, oxygen vacancies or interface states in the oxide semiconductor layer 108 in the region which overlaps with the gate electrode 104 can be reduced. in this manner, according to the method described in this embodiment, a highly reliable semiconductor device with less change in the threshold voltage can be manufactured. further, the gate electrode 104 can also be used as a mask for the impurity addition treatment 131 to form the low resistance region 108 a and the channel formation region 108 b in the oxide semiconductor layer 108 in a self-aligned manner, in addition to its function of heating the insulating layer 106 . accordingly, the manufacturing process of a semiconductor device can be simplified. thus, according to the method described in this embodiment, a highly reliable semiconductor device with less change in the threshold voltage can be manufactured at lower cost. [embodiment 3 ] in embodiment 3, a semiconductor device whose structure is different from embodiment 1 is described using figs. 8a and 8b , figs. 9a to 9c , and figs. 10a and 10b . in the structure of this embodiment described below, portions which are the same as or have functions similar to those in embodiment 1 are denoted by the same reference numerals in the drawings, and the description thereof is not repeated. <structure of semiconductor device according to this embodiment> figs. 8a and 8b illustrate an example of a structure of a semiconductor device manufactured according to a method of this embodiment, a bottom-gate transistor 850 : fig. 8a is a top view of the transistor 850 and fig. 8b is a cross-sectional schematic diagram taken along a dashed line e-f in fig. 8a . in the top view of fig.
8a , only patterned film(s) and/or layer(s) are shown for easy understanding of the structure. although the manufacturing method is described for the case where the transistor 850 is an n-channel transistor whose carriers are electrons in this embodiment, one embodiment of the present invention is not limited to the case of an n-channel transistor. the transistor 850 shown in figs. 8a and 8b includes a substrate 100 , a base layer 102 formed over the substrate 100 , a gate electrode 104 formed over the base layer 102 , an anti-oxidation layer 105 formed over the gate electrode 104 , a gate insulating layer 802 formed over the base layer 102 , the gate electrode 104 , and the anti-oxidation layer 105 , an oxide semiconductor layer 108 which includes a low resistance region 108 a functioning as a source region (or a drain region) and a channel formation region 108 b and is formed over the gate insulating layer 802 , an insulating layer 106 formed over the oxide semiconductor layer 108 and the gate insulating layer 802 , a metal layer 804 formed over the insulating layer 106 , a first interlayer insulating layer 110 formed over the insulating layer 106 and the metal layer 804 , a second interlayer insulating layer 112 formed over the first interlayer insulating layer 110 , and a source electrode 114 a and a drain electrode 114 b which are electrically connected to the low resistance region 108 a through openings in the insulating layer 106 , the first interlayer insulating layer 110 , and the second interlayer insulating layer 112 . <manufacturing method of semiconductor device according to this embodiment> a method for manufacturing the transistor 850 is described below using figs. 9a to 9c and figs. 10a and 10b .
first, the base layer 102 is formed over the substrate 100 , the gate electrode 104 and the anti-oxidation layer 105 are formed over the base layer 102 , and the gate insulating layer 802 is formed over the base layer 102 , the gate electrode 104 , and the anti-oxidation layer 105 (see fig. 9a ). material qualities, characteristics, formation methods, and the like of the substrate 100 , the base layer 102 , the gate electrode 104 , and the anti-oxidation layer 105 are the same as those in embodiment 1, and thus detailed description thereof is not repeated here. as the gate insulating layer 802 , a single film or a stacked-layer film containing silicon oxide (sio 2 ), aluminum oxide (al 2 o 3 ), hafnium oxide (hfo 2 ), hafnium silicate (hfsio 2 ), hafnium aluminate (hfalo), zirconium oxide (zro 2 ), yttrium oxide (y 2 o 3 ), lanthanum oxide (la 2 o 3 ), or cerium oxide (ceo 2 ) as its main component may be formed by a known method such as a cvd method such as a plasma-enhanced cvd method, a pvd method, or a sputtering method. alternatively, a single film or a stacked-layer film containing silicon nitride (sin), silicon oxynitride (sion), silicon nitride oxide (sino), hafnium silicate nitride (hfsion), or hafnium aluminate nitride (hfalon) as its main component may be formed by a known method such as a cvd method such as a plasma-enhanced cvd method, a pvd method, or a sputtering method. the gate insulating layer 802 is not limited to a film from which oxygen can be released by heating but can be formed of any kind of film; thus, a variety of high-dielectric constant materials can be used therefor. as an example, hafnium silicate nitride may be deposited to a thickness of 10 nm by a sputtering method to form the gate insulating layer 802 . such a layer formed by a sputtering method is preferable because the amount of hydrogen, nitrogen, and the like is less. 
next, the oxide semiconductor layer 108 is formed over the gate insulating layer 802 , the insulating layer 106 is formed over the gate insulating layer 802 and the oxide semiconductor layer 108 , and the metal layer 804 is formed over the insulating layer 106 to overlap with the gate electrode 104 (see fig. 9b ). material qualities, characteristics, formation methods, and the like of the insulating layer 106 and the oxide semiconductor layer 108 are the same as those in embodiment 1, and thus detailed description thereof is not repeated here. as the metal layer 804 , for example, a layer formed of at least a metal film or an alloy film containing tantalum (ta), tungsten (w), titanium (ti), molybdenum (mo), aluminum (al), copper (cu), chromium (cr), or neodymium (nd) as its main component or a nitride film of such a metal or an alloy thereof by a known method such as a sputtering method or an evaporation method may be used. the metal layer 804 , with which oxygen is added into a region which overlaps with the gate electrode 104 by light irradiation treatment 130 later, needs to be formed to overlap with the gate electrode 104 . other than the above-described materials, a material having an optical absorptance of 60% or more in the wavelength region from 400 nm to 1000 nm both inclusive can also be used. for example, a metal oxide film of titanium oxide, molybdenum oxide, chromium oxide, cobalt oxide, copper oxide, nickel oxide, magnesium oxide, or the like may be formed by a known method such as a cvd method such as a plasma-enhanced cvd method, a pvd method, or a sputtering method. the metal layer 804 , which is not directly involved in operation of the transistor 850 , can be formed using any kind of material having the above-described optical absorptance regardless of its electrical characteristics such as a resistance, whereby irradiation light of the light irradiation treatment 130 can be efficiently converted into heat. 
accordingly, oxygen can be added into the oxide semiconductor layer in the region which overlaps with the gate electrode 104 efficiently even by low-energy light irradiation, which leads to a reduction in the power consumption of an apparatus for the light irradiation and a reduction in the frequency of maintenance thereof. the metal layer 804 may be used as a second gate electrode such that a dual-gate semiconductor device is formed. as an example, molybdenum oxide may be deposited to a thickness of 200 nm by a sputtering method to form the metal layer 804 . next, the light irradiation treatment 130 is performed on the metal layer 804 (see fig. 9c ). the light is absorbed in the metal layer 804 and heats the metal layer 804 . the insulating layer 106 in a region which overlaps with the metal layer 804 is accordingly heated to release oxygen contained in the insulating layer 106 . part of the released oxygen is added into the oxide semiconductor layer 108 , especially in the region which overlaps with the gate electrode 104 , since the metal layer 804 overlaps with the gate electrode 104 and the insulating layer 106 is in contact with the oxide semiconductor layer 108 . although the light irradiation treatment 130 is performed on the metal layer 804 in this embodiment, the light irradiation treatment 130 may be performed on the entire surface of the substrate. an apparatus, a method, and the like of the light irradiation treatment 130 are the same as those in embodiment 1, and thus detailed description thereof is not repeated here. next, impurity addition treatment 131 is performed thereon using the metal layer 804 as a mask, so that the low resistance region 108 a which functions as the source region (or the drain region) and the channel formation region 108 b are formed in the oxide semiconductor layer 108 (see fig. 10a ). an apparatus, a method, and the like of the impurity addition treatment 131 are the same as those in embodiment 1, and thus detailed description thereof is not repeated here.
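the material criterion given above for the metal layer 804 (an optical absorptance of 60% or more over the 400 nm to 1000 nm wavelength region) can be expressed as a simple screening check. a hedged sketch: the function name and the sampled spectra below are illustrative placeholders, not measured data from the source.

```python
# screen a candidate material for the metal layer 804 against the criterion
# in the text: absorptance >= 60% everywhere in the 400-1000 nm window.
def usable_for_metal_layer(absorptance_by_nm, lo=400, hi=1000, threshold=0.60):
    """absorptance_by_nm: {wavelength_nm: absorptance} sampled spectrum."""
    in_window = [a for wl, a in absorptance_by_nm.items() if lo <= wl <= hi]
    return bool(in_window) and min(in_window) >= threshold

# placeholder spectra (illustrative numbers only)
candidate_a = {400: 0.72, 700: 0.68, 1000: 0.65}   # stays above 60%: passes
candidate_b = {400: 0.80, 700: 0.55, 1000: 0.70}   # dips below 60%: fails
print(usable_for_metal_layer(candidate_a), usable_for_metal_layer(candidate_b))
# True False
```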
next, over the insulating layer 106 and the metal layer 804 , the first interlayer insulating layer 110 and the second interlayer insulating layer 112 are formed, openings are formed in the insulating layer 106 , the first interlayer insulating layer 110 , and the second interlayer insulating layer 112 , and then, the source electrode 114 a which is electrically connected to the low resistance region 108 a through the opening and the drain electrode 114 b which is electrically connected to the low resistance region 108 a through the opening are formed ( fig. 10b ). material qualities, characteristics, formation methods, and the like of the first interlayer insulating layer 110 , the second interlayer insulating layer 112 , the source electrode 114 a , and the drain electrode 114 b are the same as those in embodiment 1, and thus detailed description thereof is not repeated here. the first interlayer insulating layer 110 , which is formed over the insulating layer 106 and the metal layer 804 in this embodiment, is not necessarily provided. further, although the first interlayer insulating layer 110 and the second interlayer insulating layer 112 each have a single-layer structure in this embodiment, a stacked-layer structure of two or more layers may be employed as well. a material quality and a structure of an interlayer insulating layer may be selected as appropriate considering the use application or requisite characteristics of the transistor 850 . through the above-described process, the semiconductor device that is the bottom-gate transistor 850 as shown in fig. 
8b can be manufactured, which includes the substrate 100 , the base layer 102 formed over the substrate 100 , the gate electrode 104 formed over the base layer 102 , the anti-oxidation layer 105 formed over the gate electrode 104 , the gate insulating layer 802 formed over the base layer 102 , the gate electrode 104 , and the anti-oxidation layer 105 , the oxide semiconductor layer 108 which includes the low resistance region 108 a functioning as the source region (or the drain region) and the channel formation region 108 b and is formed over the gate insulating layer 802 , the insulating layer 106 formed over the oxide semiconductor layer 108 and the gate insulating layer 802 , the metal layer 804 formed over the insulating layer 106 , the first interlayer insulating layer 110 formed over the insulating layer 106 and the metal layer 804 , the second interlayer insulating layer 112 formed over the first interlayer insulating layer 110 , and the source electrode 114 a and the drain electrode 114 b which are electrically connected to the low resistance region 108 a through the openings in the insulating layer 106 , the first interlayer insulating layer 110 , and the second interlayer insulating layer 112 . although not shown, the gate electrode 104 is electrically led over the second interlayer insulating layer 112 through a conductive wiring via a contact hole which is formed in the gate insulating layer 802 , the insulating layer 106 , the first interlayer insulating layer 110 , and the second interlayer insulating layer 112 . <effect of semiconductor device manufactured according to this embodiment> in the transistor 850 shown in figs. 8a and 8b , which is manufactured through the above-described process, not only the metal layer 804 but also the insulating layer 106 in the region which overlaps with the metal layer 804 is heated by the light irradiation treatment 130 , so that oxygen contained in the insulating layer is released. 
the oxygen released from the insulating layer 106 is added into the oxide semiconductor layer 108, which is in contact with the insulating layer 106, in the region which overlaps with the gate electrode 104, since the gate electrode 104 overlaps with the metal layer 804. accordingly, oxygen vacancies or interface states in the oxide semiconductor layer 108 in the region which overlaps with the gate electrode 104 can be reduced. in this manner, according to the method described in this embodiment, a highly reliable semiconductor device with less change in the threshold voltage can be manufactured. unlike the gate electrode 104, the metal layer 804 is not directly involved in operation of the semiconductor device; thus, any material which generates heat effectively by the light irradiation treatment 130 can be used for the metal layer 804 regardless of its resistance or thickness, which allows the energy of the light irradiation treatment 130 to be low. accordingly, a highly reliable semiconductor device with less change in the threshold voltage can be manufactured at lower cost. further, in addition to its function of heating the insulating layer 106, the metal layer 804 can also be used as a mask for the impurity addition treatment 131 to form, in a self-aligned manner, the low resistance region 108 a which functions as the source region (or the drain region) and the channel formation region 108 b in the oxide semiconductor layer 108. the manufacturing process of the semiconductor device can thus be simplified, so that a highly reliable semiconductor device with less change in the threshold voltage can be manufactured at lower cost. furthermore, the metal layer 804 also acts to suppress incidence of external light into the oxide semiconductor layer in the region which overlaps with the gate electrode 104 (i.e., it acts as a so-called light-blocking film).
accordingly, a highly reliable semiconductor device with less change in characteristics due to light incidence from outside can be manufactured. [embodiment 4 ] in embodiment 4, a semiconductor device in which the position of the metal layer 804 is different from that in embodiment 3 is described using figs. 11a and 11b, figs. 12a to 12c, and fig. 13. in the structure of this embodiment described below, portions which are the same as, or have functions similar to, those in embodiment 1 or 3 are denoted by the same reference numerals in the drawings, and the description thereof is not repeated. <structure of semiconductor device according to this embodiment> figs. 11a and 11b illustrate an example of a structure of a semiconductor device manufactured according to a method of this embodiment, a top-gate transistor 1150: fig. 11a is a top view of the transistor 1150 and fig. 11b is a cross-sectional schematic diagram taken along a dashed line g-h in fig. 11a. in the top view of fig. 11a, only patterned films and layers are shown for easy understanding of the structure. although the manufacturing method is described for the case where the transistor 1150 is an n-channel transistor whose carriers are electrons, one embodiment of the present invention is not limited to an n-channel transistor. the transistor 1150 shown in figs.
11a and 11b includes a substrate 100 , a base layer 102 formed over the substrate 100 , a metal layer 804 formed over the base layer 102 , an insulating layer 106 formed over the base layer 102 and the metal layer 804 , an oxide semiconductor layer 108 which includes a low resistance region 108 a functioning as a source region (or a drain region) and a channel formation region 108 b and is formed over the insulating layer 106 , a gate insulating layer 802 formed over the insulating layer 106 and the oxide semiconductor layer 108 , an anti-oxidation layer 105 formed over the gate insulating layer 802 , a gate electrode 104 formed over the anti-oxidation layer 105 , a first interlayer insulating layer 110 formed over the gate insulating layer 802 , the anti-oxidation layer 105 , and the gate electrode 104 , a second interlayer insulating layer 112 formed over the first interlayer insulating layer 110 , and a source electrode 114 a and a drain electrode 114 b which are electrically connected to the low resistance region 108 a through openings in the first interlayer insulating layer 110 , the second interlayer insulating layer 112 , and the gate insulating layer 802 . <manufacturing method of semiconductor device according to this embodiment> a method for manufacturing the transistor 1150 is described below using figs. 12a to 12c and fig. 13 . first, the base layer 102 is formed over the substrate 100 , the metal layer 804 is formed over the base layer 102 , the insulating layer 106 is formed over the base layer 102 and the metal layer 804 , the oxide semiconductor layer 108 is formed over the insulating layer 106 , the gate insulating layer 802 is formed over the insulating layer 106 and the oxide semiconductor layer 108 , and the anti-oxidation layer 105 and the gate electrode 104 are formed over the gate insulating layer 802 (see fig. 12a ). 
material qualities, characteristics, formation methods, and the like of the substrate 100 , the base layer 102 , the metal layer 804 , the insulating layer 106 , the oxide semiconductor layer 108 , the gate insulating layer 802 , the anti-oxidation layer 105 , and the gate electrode 104 are the same as those in embodiment 3, and thus detailed description thereof is not repeated here. next, light irradiation treatment 130 is performed on the metal layer 804 (see fig. 12b ). accordingly, oxygen is added into the oxide semiconductor layer 108 in a region which overlaps with the gate electrode 104 , like in embodiment 3. although the light irradiation treatment 130 is performed on the metal layer 804 in this embodiment, the light irradiation treatment 130 may be performed on the entire surface of the substrate. an apparatus, a method, and the like of the light irradiation treatment 130 are the same as those in embodiment 1, and thus detailed description thereof is not repeated here. next, impurity addition treatment 131 is performed thereon using the gate electrode 104 as a mask, so that the low resistance region 108 a which functions as the source region (or the drain region) and the channel formation region 108 b are formed in the oxide semiconductor layer 108 (see fig. 12c ). an apparatus, a method, and the like of the impurity addition treatment 131 are the same as those in embodiment 1, and thus detailed description thereof is not repeated here. 
next, over the gate insulating layer 802 , the anti-oxidation layer 105 , and the gate electrode 104 , the first interlayer insulating layer 110 and the second interlayer insulating layer 112 are formed, openings are formed in the first interlayer insulating layer 110 , the second interlayer insulating layer 112 , and the gate insulating layer 802 , and then, the source electrode 114 a which is electrically connected to the low resistance region 108 a through the opening and the drain electrode 114 b which is electrically connected to the low resistance region 108 a through the opening are formed ( fig. 13 ). material qualities, characteristics, formation methods, and the like of the first interlayer insulating layer 110 , the second interlayer insulating layer 112 , the source electrode 114 a , and the drain electrode 114 b are the same as those in embodiment 1, and thus detailed description thereof is not repeated here. the first interlayer insulating layer 110 , which is formed over the gate insulating layer 802 , the anti-oxidation layer 105 , and the gate electrode 104 in this embodiment, is not necessarily provided. further, although the first interlayer insulating layer 110 and the second interlayer insulating layer 112 each have a single-layer structure in this embodiment, a stacked-layer structure of two or more layers may be employed as well. a material quality and a structure of an interlayer insulating layer may be selected as appropriate considering the use application or requisite characteristics of the transistor 1150 . through the above-described process, the semiconductor device that is the top-gate transistor 1150 as shown in fig. 
11b can be manufactured, which includes the substrate 100 , the base layer 102 formed over the substrate 100 , the metal layer 804 formed over the base layer 102 , the insulating layer 106 formed over the base layer 102 and the metal layer 804 , the oxide semiconductor layer 108 which includes the low resistance region 108 a functioning as the source region (or the drain region) and the channel formation region 108 b and is formed over the insulating layer 106 , the gate insulating layer 802 formed over the insulating layer 106 and the oxide semiconductor layer 108 , the anti-oxidation layer 105 formed over the gate insulating layer 802 , the gate electrode 104 formed over the anti-oxidation layer 105 , the first interlayer insulating layer 110 formed over the gate insulating layer 802 , the anti-oxidation layer 105 , and the gate electrode 104 , the second interlayer insulating layer 112 formed over the first interlayer insulating layer 110 , and the source electrode 114 a and the drain electrode 114 b which are electrically connected to the low resistance region 108 a through the openings in the first interlayer insulating layer 110 and the second interlayer insulating layer 112 . <effect of semiconductor device manufactured according to this embodiment> in the transistor 1150 shown in figs. 11a and 11b , which is manufactured through the above-described process, not only the metal layer 804 but also the insulating layer 106 in the region which overlaps with the metal layer 804 is heated by the light irradiation treatment 130 , so that oxygen contained in the insulating layer is released. that oxygen released from the insulating layer 106 is added into the oxide semiconductor layer 108 , which is in contact with the insulating layer 106 , in the region which overlaps with the gate electrode 104 since the gate electrode 104 overlaps with the metal layer 804 . 
accordingly, oxygen vacancies or interface states in the oxide semiconductor layer 108 in the region which overlaps with the gate electrode 104 can be reduced. in this manner, according to the method described in this embodiment, a highly reliable semiconductor device with less change in the threshold voltage can be manufactured. unlike the gate electrode 104, the metal layer 804 is not directly involved in operation of the semiconductor device; thus, any material which generates heat effectively by the light irradiation treatment 130 can be used for the metal layer 804 regardless of its resistance or thickness, which allows the energy of the light irradiation treatment 130 to be low. accordingly, a highly reliable semiconductor device with less change in the threshold voltage can be manufactured at lower cost. further, the gate electrode 104 can also be used as a mask for the impurity addition treatment 131 to form, in a self-aligned manner, the low resistance region 108 a functioning as the source region (or the drain region) and the channel formation region 108 b in the oxide semiconductor layer 108. the manufacturing process of the semiconductor device can thus be simplified, so that a highly reliable semiconductor device with less change in the threshold voltage can be manufactured at lower cost. further, in addition to its function of heating the insulating layer 106, the metal layer 804 also acts to suppress incidence of external light into the oxide semiconductor layer in the region which overlaps with the gate electrode 104 (i.e., it acts as a so-called light-blocking film). accordingly, a highly reliable semiconductor device with less change in characteristics due to light incidence from outside can be manufactured.
[embodiment 5 ] an oxide semiconductor element disclosed in this specification can be applied to a variety of electronic devices (including game machines). examples of the electronic devices are a television device, a monitor of a computer or the like, a camera such as a digital camera or a digital video camera, a digital photo frame, a mobile phone (also referred to as a cellular phone or a cellular phone device), a portable game console, a handheld terminal, an audio reproducing device, a large-sized game machine such as a pachinko machine, and the like. examples of electronic devices including the oxide semiconductor element described in any of the above embodiments are described using figs. 14a to 14c. fig. 14a illustrates a handheld terminal including a main body 1401, a housing 1402, a first display portion 1403 a, and a second display portion 1403 b. the first display portion 1403 a is a touch panel; for example, as shown on the left in fig. 14a, whether “voice input” or “key input” is performed can be selected by a selection button 1404 displayed on the first display portion 1403 a. the selection button can be displayed in a variety of sizes, which makes the terminal easy to use for people of any generation. when “key input” is selected, for example, a keyboard 1405 is displayed on the first display portion 1403 a as shown on the right in fig. 14a; accordingly, letters can be input quickly by keyboard input as in conventional handheld terminals. further, either the first display portion 1403 a or the second display portion 1403 b can be detached from the handheld terminal as shown on the right in fig. 14a. for example, the second display portion 1403 b can function as a lightweight touch panel that is carried around and operated by one hand while the other hand supports the housing 1402, which is very convenient. further, the handheld terminal shown in fig.
14a can be equipped with a function of displaying a variety of information (e.g., a still image, a moving image, and a text image) on the display portion; a function of displaying a calendar, a date, the time, or the like on the display portion; a function of operating or editing the information displayed on the display portion; a function of controlling processing by various kinds of software (programs); and the like. furthermore, an external connection terminal (e.g., an earphone terminal or a usb terminal), a recording medium insertion portion, and the like may be provided on the back surface or the side surface of the housing. the handheld terminal shown in fig. 14a may be structured to transmit and receive data wirelessly. through wireless communication, desired book data or the like can be purchased and downloaded from an electronic book server. further, the housing 1402 shown in fig. 14a may be equipped with an antenna, a microphone function, or a wireless communication function to be used as a mobile phone. fig. 14b illustrates one mode of an image display device. the image display device illustrated in fig. 14b includes a display portion 1411 which functions as a window glass equipped with a touch-input function. since the oxide semiconductor layer 108 used in the method for manufacturing a semiconductor device disclosed in this specification has light-transmitting properties, the display portion 1411 can be formed with a visible-light transmittance (e.g., 50% or more) sufficient to allow the outside scene to be viewed through it, by using a light-transmitting substrate (e.g., alkali-free glass) as the substrate 100 and micro-wiring. thus, for example, the display portion 1411 functions as a window glass in a normal state as shown on the left in fig. 14b, and necessary data can be displayed on the display portion 1411 as shown on the right in fig. 14b by touching a surface of the display portion 1411.
further, a unit for wirelessly transmitting and receiving data (hereinafter abbreviated as a wireless unit) may be provided in part of an internal circuit of the display portion 1411. accordingly, for example, with a piezoelectric vibrator 1412 having a wireless unit provided in the image display device, an audio signal transmitted from the wireless unit in part of the internal circuit of the display portion 1411 can be received by the wireless unit of the piezoelectric vibrator 1412 to vibrate the piezoelectric vibrator 1412, whereby the display portion 1411 can be vibrated to emit sound uniformly with a stable volume. fig. 14c illustrates one mode of a goggle-type display (head mounted display). in the image display device shown in fig. 14c, a left-eye panel 1422 a, a right-eye panel 1422 b, and an image display button 1423 are provided for a main body 1421. since the oxide semiconductor layer 108 used in the method for manufacturing a semiconductor device disclosed in this specification has light-transmitting properties, the panels can be formed with a sufficient visible-light transmittance (e.g., 50% or more) by using a light-transmitting substrate (e.g., alkali-free glass) as the substrate 100 and micro-wiring. consequently, the user can view the outside scene through the left-eye panel 1422 a and the right-eye panel 1422 b as with normal eyeglasses, as shown in the bottom left in fig. 14c. further, to obtain necessary data, the user pushes the image display button 1423, so that an image is displayed on one or both of the left-eye panel 1422 a and the right-eye panel 1422 b as shown in the bottom right in fig. 14c. the methods, structures, and the like described in this embodiment can be combined as appropriate with any of the methods, structures, and the like described in the other embodiments. this application is based on japanese patent application serial no. 2011-047879 filed with the japan patent office on mar.
4, 2011, the entire contents of which are hereby incorporated by reference.
x-ray ct apparatus and image reconstruction method
in order to provide an x-ray ct apparatus that can reduce the calculation time required for an iterative approximation projection data correction process by restricting the range of the process and that can generate low-noise images according to the examination purpose, the calculation device of the x-ray ct apparatus generates correction projection data by performing an iterative approximation projection data correction process for projection data acquired in scanning and reconstructs ct images using the correction projection data. the calculation device determines a range to which the iterative approximation projection data correction process is applied based on scanning conditions and reconstruction conditions. for example, a slice direction application range is determined based on an x-ray beam width, and a channel direction application range is determined based on an fov. the calculation device performs the iterative approximation projection data correction process for projection data corresponding to the determined application range to generate correction projection data.
1 . an x-ray ct apparatus comprising: an x-ray generating device irradiating an x-ray from the surroundings of an object; an x-ray detection device detecting an x-ray transmitted through the object; a data collection device collecting data detected by the x-ray detection device; a calculation device generating projection data by inputting data to be collected by the data collection device and reconstructing a ct image using the projection data; and a display device displaying the ct image, wherein the calculation device is comprised of: an application range determining unit determining an application range for an iterative approximation projection data correction process that is a correction process by the iterative approximation method which uses a smoothing coefficient showing a correction intensity for the projection data; an iterative approximation projection data correction processing unit performing the iterative approximation projection data correction process for projection data that corresponds to the range determined by the application range determining unit to generate correction projection data; and an image reconstruction unit reconstructing a ct image using the correction projection data. 2 . the x-ray ct apparatus according to claim 1 , wherein an application range display unit displaying an application range for the iterative approximation projection data correction process on the ct image is further included. 3 . the x-ray ct apparatus according to claim 1 , wherein the application range determining unit determines a range to apply the iterative approximation projection data correction process based on at least one of scanning condition information, the x-ray irradiation dose information, and image reconstruction condition information. 4 . 
the x-ray ct apparatus according to claim 1 , further comprising: an input unit receiving input of scanning condition information or image reconstruction condition information, wherein the application range determining unit determines a range to apply the iterative approximation projection data correction process based on the input scanning condition information or the input image reconstruction condition information. 5 . the x-ray ct apparatus according to claim 1 , further comprising: an input unit receiving roi setting on a ct image, wherein the application range determining unit determines a range to apply the iterative approximation projection data correction process based on the set roi. 6 . the x-ray ct apparatus according to claim 1 , wherein the application range determining unit determines a range to apply the iterative approximation projection data correction process based on the roi set on a ct image. 7 . the x-ray ct apparatus according to claim 6 , wherein the application range determining unit sets one large roi including a plurality of rois and determines an application range for the iterative approximation projection data correction process based on the large roi. 8 . the x-ray ct apparatus according to claim 1 , further comprising: a measurement unit measuring information about periodic movement of organs during scanning, wherein the application range determining unit calculates a periodic variation of an image based on the information about the periodic movement of organs measured by the measurement unit, determines an optimal time phase for reconstruction based on the calculated variation, and then sets a time range including the determined time phase as a range to which the iterative approximation projection data correction process is applied. 9 . 
the x-ray ct apparatus according to claim 1 , further comprising: a scanning unit for contrast monitoring that performs scanning for contrast monitoring to monitor contrast agent reaching in scanning using a contrast agent, wherein the calculation unit determines a range in which an iterative approximation projection data correction process is applied to monitoring projection data acquired in scanning for contrast monitoring by the application range determining unit and includes a monitor image reconstruction unit that reconstructs a ct image for monitoring using the correction projection data acquired by performing the iterative approximation projection data correction process for the monitoring projection data corresponding to the determined range. 10 . the x-ray ct apparatus according to claim 1 , wherein the application range determining unit restricts a range to which the iterative approximation projection data correction process is applied in at least any one of the body-axis direction, the channel direction, and the time direction from among all the ranges of the projection data. 11 . the x-ray ct apparatus according to claim 2 , wherein margin regions are set around a range determined by the application range determining unit, and the iterative approximation projection data correction process is performed for projection data corresponding to the margin regions by changing the smoothing coefficient continuously. 12 . the x-ray ct apparatus according to claim 11 , wherein the application range display unit displays an application range for the iterative approximation projection data correction process together with the margin regions. 13 . the x-ray ct apparatus according to claim 1 , wherein an operation screen display unit displaying an operation screen for adjusting a range to apply the iterative approximation projection data correction process is further included. 14 . 
an image reconstruction method of using a smoothing coefficient showing a correction intensity to create correction projection data by performing correction processing for projection data by the iterative approximation method and reconstructing a ct image using the correction projection data, wherein a calculation device performs: an application range determination step of determining a range in which correction processing by the iterative approximation method is applied to the projection data, and a correction projection data creation step of performing correction processing by the iterative approximation method for projection data corresponding to the determined range and creating correction projection data.
technical field the present invention relates to an x-ray ct apparatus etc. that obtain ct images by irradiating an x-ray to an object. in particular, the invention relates to a technique in which an x-ray ct apparatus executes projection data correction at a high speed by the iterative approximation method. background art in order to perform ct examination with less exposure dose, an x-ray ct apparatus executing image reconstruction by the iterative approximation method has been developed in recent years. the image reconstruction by the iterative approximation method can obtain ct images with less noise even at a low dose of radiation. in the non-patent literature 1, an iterative approximation projection data correction process, which is one of the iterative approximation methods, is disclosed. an iterative approximation projection data correction process is a projection data correction process performed as preprocessing of image reconstruction. in an iterative approximation projection data correction process, an update formula in which a projection value of projection data is a variable is used. the update formula includes a smoothing coefficient (referred to also as a correction coefficient or a penalty item) showing a correction intensity. also, the update formula includes weighting addition processing between adjacent elements. in an iterative approximation projection data correction process, the above update formula is used to update a projection value repeatedly. then, for each update, the projection value obtained after the update is evaluated using a cost function. until the cost function result becomes satisfactory, the projection value update is repeated for each detection element. the formula (1) shows a cost function used in a conventional iterative approximation projection data correction process. the formula (2) shows an update formula used in a conventional iterative approximation projection data correction process.
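the formulas (1) and (2) themselves do not survive in this text (they appear to have been images in the original publication). a hedged reconstruction, inferred from the variable definitions given in the next paragraph ("p", "y", "β", "d", "i", "n", "w") and from the corresponding formulas (9) and (11) of the non-patent literature 1, would read as follows; the neighborhood set N_i is an assumption and is not stated explicitly in this text:

```latex
% cost function (1): weighted data-fidelity term plus a smoothness penalty
% with smoothing coefficient \beta and weights w_{im} between adjacent elements
\phi(p) = \sum_{i} d_i \, (p_i - y_i)^2
        + \frac{\beta}{2} \sum_{i} \sum_{m \in N_i} w_{im} \, (p_i - p_m)^2
\tag{1}

% update formula (2): per-element update of the projection value at repetition n
p_i^{(n+1)} = \frac{d_i \, y_i + \beta \sum_{m \in N_i} w_{im} \, p_m^{(n)}}
                   {d_i + \beta \sum_{m \in N_i} w_{im}}
\tag{2}
```

here N_i denotes the set of detection elements adjacent to element i; repeatedly applying (2) for each detection element until the value of (1) becomes satisfactory matches the description above of how the projection value update proceeds.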
in the above formulas (1) and (2), “p” is an updated projection value, “y” is an original projection value, “β” is a smoothing coefficient, “d” is a detection characteristic value, “i” is a detection element number, “n” is a repetition number, and “w” is a weight. additionally, the formulas corresponding to the above formulas (1) and (2) are described in the non-patent literature 1: the formula (1) corresponds to the formula (9) described on p. 1274 of the non-patent literature 1, and the formula (2) corresponds to the formula (11) described on p. 1274 of the non-patent literature 1. in the present invention, although the respective formulas are described using a different form and different symbols from those in the non-patent literature 1 so that the explanation follows the main aim of the invention, the contents of the above formulas (1) and (2) are the same as those of the respective formulas described in the non-patent literature 1. citation list non-patent literature nptl 1: jing wang et al., “penalized weighted least-squares approach to sinogram noise reduction and image reconstruction for low-dose x-ray computed tomography”, ieee transactions on medical imaging, vol. 25, no. 10, october 2006, pp. 1272-1283. summary of invention technical problem however, in the process described in the above non-patent literature 1, the iterative approximation projection data correction process is applied to all the detection elements. therefore, there is a problem in that the processing requires an enormous amount of time. the present invention was made in light of the above problem, and its purposes are to reduce the calculation time required for an iterative approximation projection data correction process by limiting the application range of the process and to provide an x-ray ct apparatus etc. capable of generating low-noise images according to the examination purpose.
solution to problem

in order to achieve the above purposes, an x-ray ct apparatus comprises: an x-ray generating device irradiating an x-ray from the surroundings of an object; an x-ray detection device detecting an x-ray transmitted through the object; a data collection device collecting data detected by the x-ray detection device; a calculation device generating projection data by inputting the data collected by the data collection device and reconstructing a ct image using the projection data; and a display device displaying the ct image. the calculation device comprises: an application range determining unit determining an application range for an iterative approximation projection data correction process, which is a correction process by the iterative approximation method using a smoothing coefficient showing a correction intensity for the projection data; an iterative approximation projection data correction processing unit performing the iterative approximation projection data correction process for projection data corresponding to the range determined by the application range determining unit to generate correction projection data; and an image reconstruction unit reconstructing a ct image using the correction projection data.

also, the image reconstruction method performs correction processing by the iterative approximation method for projection data using a smoothing coefficient showing a correction intensity to generate correction projection data, and reconstructs a ct image using the correction projection data. the method comprises an application range determining step in which a calculation device determines a range to which the correction processing by the said iterative approximation method is applied for the projection data, and a correction projection data generating step performing the correction processing by the said iterative approximation method for projection data corresponding to the determined range to generate the correction projection data.
advantageous effects of invention

according to the present invention, by limiting the application range of the iterative approximation projection data correction process, the calculation time required for the iterative approximation projection data correction process can be reduced, and an x-ray ct apparatus etc. capable of generating low-noise images according to the examination purpose can be provided.

brief description of drawings

fig. 1 is an outside view showing the overall configuration of the x-ray ct apparatus 1 .
fig. 2 is a hardware block diagram of the x-ray ct apparatus 1 .
fig. 3 is a functional block diagram of the calculation device 202 .
fig. 4 is a flow chart showing the overall process flow.
fig. 5 is a functional block diagram of the calculation device 202 a in the first embodiment.
fig. 6 is a diagram showing an example of the slice direction application range 1001 .
fig. 7 is a diagram showing an example of the channel direction application range 1002 .
fig. 8 is an example of the channel direction application range 1002 and the application range margin 2002 expressed on the sinogram 1000 .
fig. 9 is a diagram explaining the relationship between a channel direction application range, an application range margin, and a smoothing coefficient change.
fig. 10 is a diagram explaining the relationship between a slice direction application range as well as an application range margin and a smoothing coefficient change.
fig. 11 is a diagram showing an example of the application range setting/display screen 501 a in the first embodiment.
fig. 12 is a flow chart showing the process flow in the first embodiment.
fig. 13 is a functional block diagram of the calculation device 202 b in the second embodiment.
fig. 14 is a diagram showing an example of the application range display screen 501 b in the second embodiment.
fig. 15 is a diagram explaining the rotation direction application range 1003 .
fig.
16 is a diagram showing a relationship between the electrocardiographic information 300 and the irradiation dose variation curve 600 .
fig. 17 is an example of the rotation direction application range 1003 and the application range margin 2003 expressed on the sinogram 1000 b.
fig. 18 is a flow chart showing the process flow in the second embodiment.
fig. 19 is a functional block diagram of the calculation device 202 c in the third embodiment.
fig. 20 is a diagram showing an example of the application range setting/display screen 501 c in the third embodiment.
fig. 21 is a flow chart showing the process flow in the third embodiment.
fig. 22 is a flowchart explaining the details of roi setting.
fig. 23 is a functional block diagram of the calculation device 202 d in the fourth embodiment.
fig. 24 is a diagram showing a relationship between optimal cardiac phases calculated from an image variation and a time direction application range.
fig. 25 is a flow chart showing the process flow in the fourth embodiment.
fig. 26 is a functional block diagram of the calculation device 202 e in the fifth embodiment.
fig. 27 is a flow chart showing the process flow in the fifth embodiment.

description of embodiments

hereinafter, preferred embodiments of the present invention will be described in detail with reference to the diagrams. first, referring to figs. 1 and 2 , the hardware configuration of the x-ray ct apparatus 1 will be described. the x-ray ct apparatus 1 is generally comprised of the scanner 10 and the operation unit 20 . the scanner 10 includes the bed device 101 , the x-ray generating device 102 , the x-ray detection device 103 , the collimator device 104 , the high-voltage generating device 105 , the data collection device 106 , the driving device 107 , etc. the operation unit 20 includes the central control device 200 , the input/output device 201 , the calculation device 202 , etc. an operator inputs scanning conditions, reconstruction conditions, etc.
via the input/output device 201 . the scanning conditions are, for example, an x-ray beam width, a bed sending speed, a tube current, a tube voltage, a scanning range (range in the body-axis direction), the number of scanning views per rotation, etc. the reconstruction conditions are, for example, a region of interest, an fov (field of view), a reconstruction filter function, etc. the input/output device 201 includes the display device 211 displaying a ct image etc., the input device 212 such as a mouse, a trackball, a keyboard, and a touch panel, and the storage device 213 storing data, etc. the central control device 200 inputs the scanning conditions and reconstruction conditions and transmits control signals required for scanning to the respective devices included in the scanner 10 . the collimator device 104 controls the collimator position based on the control signal. when scanning starts after receiving a scanning start signal, the high-voltage generating device 105 applies a tube voltage and a tube current to the x-ray generating device 102 based on a control signal. in the x-ray generating device 102 , electrons of energy according to the applied tube voltage are emitted from a cathode, the emitted electrons strike a target (an anode), and an x-ray of energy according to the electron energy is irradiated to the object 3 . the driving device 107 rotates the gantry 100 , in which the x-ray generating device 102 , the x-ray detection device 103 , etc. are installed, around the object 3 based on a control signal. the bed device 101 controls the bed based on the control signal. the irradiation range of the x-ray irradiated from the x-ray generating device 102 is limited by the collimator. the x-ray is absorbed (attenuated) according to the x-ray absorption coefficient of each tissue in the object 3 , passes through the object 3 , and is then detected by the x-ray detection device 103 disposed in the position opposite to the x-ray generating device 102 .
the x-ray detection device 103 is comprised of a plurality of detection elements arranged in two-dimensional directions (a channel direction and the column direction orthogonal to it). the x-ray received by each detection element is converted into real projection data. that is, the data collection device 106 performs various data processes (such as conversion into digital data, log conversion, and calibration) on the x-ray detected by the x-ray detection device 103 , and the x-ray is collected as raw data to be input into the calculation device 202 . at this time, because the x-ray generating device 102 and the x-ray detection device 103 facing each other rotate around the object 3 , the x-ray generating device 102 irradiates an x-ray from the surroundings of the object 3 , and the x-ray detection device 103 detects the x-ray transmitted through the object 3 . that is, the raw data is collected in the rotation direction at discrete positions of the x-ray tube (and of the detector position opposite to the x-ray tube). the acquisition unit of the projection data at each position of the x-ray tube is referred to as a “view”. the calculation device 202 is comprised of the reconstruction processing device 221 , the image processing device 222 , etc. the input/output device 201 includes the input device 212 , the display device 211 , the storage device 213 , etc. the reconstruction processing device 221 generates projection data by inputting the raw data collected by the data collection device 106 . the reconstruction processing device 221 also performs the iterative approximation projection data correction process on the projection data to generate correction projection data, and then reconstructs ct images using the correction projection data. additionally, the present invention relates to the improvement of the iterative approximation projection data correction process.
the iterative approximation projection data correction process related to the present invention will be described later. the reconstruction processing device 221 stores the generated ct images in the storage device 213 and displays a generated ct image on the display device 211 . alternatively, the image processing device 222 performs image processing on ct images stored in the storage device 213 and displays the ct images after image processing on the display device 211 . as the x-ray ct apparatus 1 , there are multi-slice ct, which uses an x-ray detection device 103 whose detection elements are arranged in two-dimensional directions, and single-slice ct, which uses an x-ray detection device 103 whose detection elements are arranged in one column, i.e., in the one-dimensional direction (a channel direction only). in the multi-slice ct, an x-ray beam spreading in a conical or pyramidal shape according to the x-ray detection device 103 is irradiated from the x-ray generating device 102 , which is the x-ray source. in the single-slice ct, an x-ray beam spreading like a fan is irradiated from the x-ray generating device 102 . normally, in scanning by the x-ray ct apparatus 1 , an x-ray is irradiated while the gantry 100 is rotating around the object 3 placed on the bed (however, positioning scanning is excluded). a scanning mode in which the bed is fixed during scanning and the x-ray generating device 102 rotates around the object 3 in a circular orbit is referred to as axial scanning. a scanning mode in which the bed moves continuously and the x-ray generating device 102 rotates around the object 3 in a spiral orbit is referred to as spiral scanning. in the case of the axial scanning, the bed device 101 keeps the bed stationary during scanning.
also, in the case of the spiral scanning, the bed device 101 moves the bed in parallel along the body-axis direction of the object 3 during scanning according to the bed sending speed, which is one of the scanning conditions. next, referring to fig. 3 , the functional configuration of the x-ray ct apparatus 1 of the present invention will be described. in particular, fig. 3 shows the functional configuration of the calculation device 202 . the calculation device 202 has the application range determining parameter acquisition unit 31 , the application range determining unit 32 , the iterative approximation projection data correction processing unit 33 , the image reconstruction unit 34 , and the application range display region calculation unit 35 as its main functional configuration. additionally, as the premise of the present invention, the calculation device 202 uses the cost function of the formula (3) and the update formula of the formula (4) shown below to perform the iterative approximation projection data correction process for projection data. the formulas (3) and (4) correspond to the cost function disclosed in the non-patent literature 1 (the formula (9) on p. 1274 of the non-patent literature 1) and the update formula (the formula (11) on the same page of the same non-patent literature), respectively. here, “p” is an updated projection value, “y” is an original projection value, “β” is a smoothing coefficient, “d” is a detection characteristic value, “i” is an index for time, “j” is an index for a position (of a detection element), “n” is a repetition number, and “w” is a weight. before performing the iterative approximation projection data correction process, the calculation device 202 determines the range to which the iterative approximation projection data correction process is applied (hereinafter referred to as an application range). the application range is determined according to the examination purpose, the scanning conditions, etc.
the application range includes a position range of a detection element and a time range. as the range for the position of a detection element, there are an application range in the slice direction and an application range in the channel direction. also, because the x-ray ct apparatus 1 obtains projection data from a plurality of angle directions while rotating around the object 3 , the range for time is, in other words, a range in the rotation direction (view angle) of the gantry 100 . the above application range is expressed as ranges of the indexes “i” and “j” of the summation in the update formula and the cost function (the above formulas (4) and (3)) used for the iterative approximation projection data correction process. as described above, “i” is an index for time and “j” is an index for a position (of a detection element). the calculation device 202 calculates the application ranges (of the indexes “i” and “j”) based on the scanning conditions, the examination purpose, etc., and applies the iterative approximation projection data correction process to the projection data in the application ranges. the respective functional units shown in fig. 3 will now be described. the application range determining parameter acquisition unit 31 acquires a parameter (hereinafter referred to as an application range determining parameter) for determining the application range of the iterative approximation projection data correction process. an application range determining parameter may be, for example, scanning condition information set for the x-ray ct apparatus 1 , irradiation dose information, or image reconstruction condition information. the parameter may also be periodic movement information of organs, such as electrocardiographic information in electrocardiographic synchronous scanning, or information obtained by analyzing an image, such as a variation of a contrast monitoring image in contrast-agent imaging.
an application range determining parameter can be acquired from external devices such as the input device 212 , the storage device 213 , and the electrocardiograph 109 , or from the storage regions (such as a ram) in the calculation device 202 . for example, scanning condition information consists of various parameters such as an x-ray beam width and a body-axis direction scanning range. the scanning condition information is input from the input device 212 by an operator before scanning, or is stored in the storage device 213 or the storage region in the calculation device 202 . a tube current and a tube voltage are included in irradiation dose information. an optimal value of the irradiation dose information is calculated by the calculation device 202 based on the scanning conditions, the reconstruction conditions, the physique of the object, etc. and stored in the storage region in the calculation device 202 or in the storage device 213 . image reconstruction condition information such as an roi, an fov, and a body-axis direction range to be reconstructed is input from the input device 212 , or is stored in the storage device 213 . electrocardiographic information is acquired in real time from the electrocardiograph 109 (see fig. 3 ) attached to the object 3 during scanning of the cardiac region etc. a variation of a contrast monitoring image in scanning using a contrast agent can be obtained from analysis results by the calculation device 202 . the application range determining unit 32 acquires the projection data input from the data collection device 106 and acquires an application range determining parameter from the application range determining parameter acquisition unit 31 . then, the application range determining unit 32 determines the application range of the iterative approximation projection data correction process for the acquired projection data.
the application range is a range for improving the image quality. the purpose of the image quality improvement is roughly classified into two. one is a case where ideal image quality cannot be obtained due to scanning at a low exposure dose for exposure dose reduction. the other is to further improve the image quality of a target site scanned with a sufficient exposure dose. the application range determining unit 32 determines the application range of the iterative approximation projection data correction process based on an application range determining parameter such as the scanning conditions. the application range determining unit 32 restricts, from among the entire projection data, the position range of the detection elements to which the correction processing is applied and the time range. the position range of a detection element means a channel direction range and a slice direction range of the detection element. the time range means a range of the rotation angle (view angle) of the detection device. the position range of the detection element corresponds to the range of the index “j” of the summation included in the above cost function (the formula (3)) and update formula (the formula (4)). the time range corresponds to the range of the index “i” of the summation included in the above cost function (the formula (3)) and update formula (the formula (4)). the application range determining unit 32 outputs the determined application range to the iterative approximation projection data correction processing unit 33 and the application range display region calculation unit 35 . the details of the application range determination method using the respective application range determining parameters will be described in the respective embodiments. the application range determining unit 32 also determines the magnitude of the smoothing coefficient included in the formula (4) according to the target image quality and the examination purpose.
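restricting the summation ranges of the indexes “i” (time/view) and “j” (detection element position) can be sketched as follows. this is an illustrative fragment under the same simplifying assumptions as before (a simple quadratic penalty; the function name and neighborhood shape are hypothetical); its point is that projection values outside the application range are left untouched, which is what saves calculation time.

```python
import numpy as np

def correct_in_range(sino, d, beta, i_range, j_range, n_iter=10):
    """Apply a PWLS-style smoothing update only inside the application
    range (a sketch; index names follow the text: "i" = time/view,
    "j" = detection element position).

    sino    : projection data, shape (views, channels)
    d       : detection characteristic values, same shape
    beta    : smoothing coefficient (correction intensity)
    i_range : (start, stop) view indices to correct
    j_range : (start, stop) channel indices to correct
    """
    p = sino.astype(float).copy()
    i0, i1 = i_range
    j0, j1 = j_range
    for _ in range(n_iter):
        sub = p[i0:i1, j0:j1]
        # weighted addition over the four adjacent elements (edge-replicated)
        padded = np.pad(sub, 1, mode="edge")
        neighbor = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                    padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
        dsub = d[i0:i1, j0:j1]
        # update only the application range; everything else stays as-is
        p[i0:i1, j0:j1] = (dsub * sino[i0:i1, j0:j1] + beta * neighbor) / (dsub + beta)
    return p
```

because the loop body only touches the `[i0:i1, j0:j1]` slice, the cost per iteration scales with the size of the application range rather than with the whole sinogram.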
the smoothing coefficient is a coefficient showing the correction intensity. the iterative approximation projection data correction processing unit 33 performs the iterative approximation projection data correction process on the application range determined by the application range determining unit 32 . in the iterative approximation projection data correction process, the calculation device 202 applies the update formula of the formula (4) to the projection data in the application range, and the calculation is repeated until the cost function shown in the formula (3) provides a desirable result. after the calculation, the obtained projection values are output as correction projection data to the image reconstruction unit 34 . the image reconstruction unit 34 reconstructs ct images based on the correction projection data input from the iterative approximation projection data correction processing unit 33 and outputs the reconstructed ct images to the display device 211 . the application range display region calculation unit 35 performs calculation for displaying the application range determined by the application range determining unit 32 ; for example, the application range position on a ct image is calculated. the display device 211 displays, in addition to the ct image reconstructed by the image reconstruction unit 34 , the application range of the iterative approximation projection data correction process. for example, the display device 211 clearly indicates the above application range on the ct image. it may be configured so that the boundary line between the inside and the outside of the application range is displayed. additionally, the mode of displaying the boundary is not limited to a line, and the boundary may be displayed in another mode. next, referring to fig. 4 , the overall process flow of the x-ray ct apparatus 1 of the present invention will be described. first, the x-ray ct apparatus 1 performs positioning scanning for the object 3 .
next, the x-ray ct apparatus 1 performs various condition settings such as the scanning conditions and the reconstruction conditions based on the positioning image generated by the positioning scanning. then, the x-ray ct apparatus 1 performs tomographic scanning (main scanning) to acquire projection data (step s 101 ). the calculation device 202 performs the iterative approximation projection data correction process on the acquired projection data (step s 102 ). in the present invention, as described above, the application range of the iterative approximation projection data correction process is determined before executing the repeated calculation of the iterative approximation projection data correction process. the determining method for the application range of the iterative approximation projection data correction process will be described in each embodiment. the calculation device 202 executes the iterative approximation projection data correction process only for the projection data in the application range. the calculation device 202 performs image reconstruction using the correction projection data corrected by the iterative approximation projection data correction process to generate a ct image (step s 103 ). the calculation device 202 performs image reconstruction by, for example, the iterative approximation method. because in the present invention the part of the projection data within the application range is corrected by the iterative approximation projection data correction process, noise reduction is performed for that part of the correction projection data. therefore, the image quality of the site corresponding to the above application range is improved on a ct image generated from the correction projection data. the calculation device 202 displays the generated ct image (noise-reduced image) on the display device 211 .
also, the calculation device 202 may display, for example, the range to which the iterative approximation projection data correction process was applied on the ct image (step s 104 ). the details of the display mode will be described later.

first embodiment

next, referring to figs. 5 to 12 , the first embodiment will be described. as described above, when the application range of the iterative approximation projection data correction process is restricted, streak artifacts may occur in the boundary region between the area in which the correction processing was performed and the area in which the correction processing was not performed. therefore, in the first embodiment, the calculation device 202 sets margin regions for the application range. also, it is configured so that the smoothing coefficient included in the update formula of the iterative approximation projection data correction process continues smoothly near the boundaries between the inside and the outside of the application range. specifically, the smoothing coefficient applied in the margin regions is changed continuously so that the coefficient becomes smaller gradually from the application range toward the outside of the application range. fig. 5 is a diagram showing the functional configuration of the calculation device 202 a of the first embodiment. the calculation device 202 a of the first embodiment includes the margin setting unit 36 and the smoothing coefficient determination unit 37 in addition to the functional configuration of the calculation device 202 shown in fig. 3 . that is, the calculation device 202 a of the first embodiment has the application range determining parameter acquisition unit 31 a , the application range determining unit 32 a , the margin setting unit 36 , the smoothing coefficient determination unit 37 , the iterative approximation projection data correction processing unit 33 , the image reconstruction unit 34 , and the application range display region calculation unit 35 a.
additionally, the same symbols are used for the configuration elements similar to those shown in figs. 1 , 2 , and 3 , and repeated explanations are omitted. also, although the calculation device 202 a of the first embodiment is hardware similar to the calculation device 202 shown in fig. 2 , its symbol is different from that of the calculation device 202 shown in fig. 2 because the functional configuration is different. the application range determining parameter acquisition unit 31 a of the first embodiment acquires an x-ray beam width and an fov, that is, a scanning range size in a cross section, as application range determining parameters. the x-ray beam width is included in the scanning condition information. the fov is included in the reconstruction condition information. the scanning condition information and the reconstruction condition information may be the contents set via the input device 212 by an operator or the contents preset (stored in the storage device 213 ) for each examination purpose. the application range determining unit 32 a determines the application range of the iterative approximation projection data correction process based on the x-ray beam width θ. specifically, the detection element range in the body-axis direction (slice direction) corresponding to the x-ray beam width θ is calculated, and the calculated detection element range is specified as the slice direction application range 1001 of the correction processing. fig. 6 is a diagram in which the body-axis direction of the object 3 is viewed along the horizontal direction of the diagram. as shown in fig. 6 , the flare angle θ in the body-axis direction of the x-ray beam irradiated from the x-ray tube 102 is the x-ray beam width. the application range determining unit 32 a sets the detection element range in the slice direction corresponding to the x-ray beam width θ as the slice direction application range 1001 of the correction processing.
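the mapping from the beam width θ to a detector row range can be sketched as elementary geometry. the source-detector distance and row pitch used below are hypothetical parameters introduced for illustration only; the patent does not specify this calculation in detail.

```python
import math

def slice_rows_for_beam_width(theta_deg, sdd_mm, row_pitch_mm, n_rows):
    """Hypothetical geometry sketch: detector rows (slice direction)
    covered by an x-ray beam of flare angle theta. sdd_mm
    (source-detector distance) and row_pitch_mm are assumed parameters,
    not values from the patent. Returns an inclusive (first, last) row
    index range centered on the middle of the detector."""
    # half-extent of the beam on the detector surface, in mm
    half_extent = sdd_mm * math.tan(math.radians(theta_deg) / 2.0)
    half_rows = int(half_extent / row_pitch_mm)
    center = n_rows // 2
    first = max(0, center - half_rows)
    last = min(n_rows - 1, center + half_rows)
    return first, last
```

a wider beam width θ yields a wider slice direction range, matching the relationship shown in fig. 6 .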
also, the application range determining unit 32 a determines the application range of the iterative approximation projection data correction process based on the fov. specifically, the application range determining unit 32 a calculates the detection element range in the channel direction corresponding to the fov, and sets the calculated detection element range as the channel direction application range 1002 of the correction processing. fig. 7 is a diagram in which the body-width (x) direction of the object 3 is viewed along the horizontal direction of the diagram and the body-axis direction along the depth direction of the diagram. the range 4 shown by the dot-dash line in fig. 7 is set as the fov. the application range determining unit 32 a sets the detection element range in the channel direction corresponding to the fov as the channel direction application range 1002 of the correction processing. the margin setting unit 36 of fig. 5 sets margin regions for the application range determined by the application range determining unit 32 a . the update formula used for the iterative approximation projection data correction process includes the weighting addition processing between adjacent elements as shown in the above formula (4). the margin setting unit 36 sets margins for the calculation processing based on the adjacent element range over which the weighting addition processing is performed. for example, if the adjacent element range for the weighting addition processing has two elements, one-element margins for the calculation processing are set on both sides. additionally, this is an example, and margins of two or more elements may be set for the calculation processing. also, the margin setting unit 36 extends the application range determined by the application range determining unit 32 a in order to prevent streak artifacts from occurring as described above. in this case, the margins are referred to as application range margins.
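the channel direction counterpart can likewise be sketched with elementary fan-beam geometry. all geometric parameters below (source-to-isocenter distance, total fan angle, equiangular channels) are assumptions for illustration, not values from the patent.

```python
import math

def channels_for_fov(fov_mm, sid_mm, fan_angle_deg, n_channels):
    """Hypothetical fan-beam geometry sketch: channel index range whose
    rays cover a centered fov. sid_mm (source-to-isocenter distance)
    and the total fan angle are assumed parameters. A ray at fan angle
    gamma passes at distance sid*sin(gamma) from the isocenter, so a
    fov of radius fov/2 is covered by |gamma| <= asin(fov/(2*sid))."""
    gamma_max = math.asin(min(1.0, fov_mm / (2.0 * sid_mm)))
    total_fan = math.radians(fan_angle_deg)
    # fraction of the half-fan needed on each side of the center channel
    half_channels = int((gamma_max / (total_fan / 2.0)) * (n_channels / 2.0))
    center = n_channels // 2
    first = max(0, center - half_channels)
    last = min(n_channels - 1, center + half_channels)
    return first, last
```

a larger fov requires a wider channel direction application range, matching the relationship shown in fig. 7 .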
the sizes of the application range margins are desirably set in light of the range of influence of the processes performed after the iterative approximation projection data correction process. for example, when filtering processing is performed after the iterative approximation projection data correction process, the margin setting unit 36 sets application range margins of the number of elements that influence the filtering processing. also, the direction of the application range margins is set according to the application range direction. for example, from among the channel direction and the slice direction, the application range margins are provided in at least one of the directions or in both directions. fig. 8 is the sinogram 1000 of the projection data of one cross section. the horizontal axis shows the channel position of a detection element, and the vertical axis shows the rotation angle. the sinogram 1000 shows the projection value of each detection element at each rotation angle position in grayscale (shading). for example, when the channel direction application range 1002 shown in fig. 7 is expressed on the sinogram 1000 , the range is the one shown in gray in fig. 8 . the margin setting unit 36 sets the channel direction application range margins 2002 on both sides in the channel direction of the channel direction application range 1002 . the smoothing coefficient determination unit 37 in fig. 5 calculates the smoothing coefficient to be applied to the application range and the application range margins. the smoothing coefficient determination unit 37 sets the smoothing coefficient applied to the application range margins so that it becomes continuously smaller from the application range toward the outside of the application range. thus, the smoothing coefficient is changed smoothly at the boundaries (application range margins) between the inside and the outside of the application range, which can reduce streak artifacts.
the smoothing coefficient determination unit 37 changes the smoothing coefficient smoothly at the boundaries between the inside and the outside of the application range for both the channel direction and the body-axis direction. fig. 9 is a diagram showing a change of the smoothing coefficient near the boundaries between the inside and the outside of the application range in the channel direction. as shown in fig. 9 , the application range margins 2002 are set at the boundaries between the channel direction application range 1002 and the outside of the application range. the smoothing coefficient determination unit 37 sets the smoothing coefficient applied to the inside of the application range 1002 to a constant value, and sets the smoothing coefficient applied to the region outside the application range 1002 to “0”. additionally, in the boundary regions between the inside and the outside of the application range (the application range margins 2002 ), the smoothing coefficient is set so that it changes smoothly. similarly, in the slice direction, the margin setting unit 36 sets the slice direction application range margins 2001 for the slice direction application range 1001 (see fig. 10 ), and the smoothing coefficient determination unit 37 sets the smoothing coefficient for the slice direction similarly to the channel direction. fig. 10 is a diagram showing a change of the smoothing coefficient in the slice direction. in the example of fig. 10 , the slice direction application ranges 1001 a and 1001 b are set for a plurality of regions in the body-axis direction. as shown in fig. 10 , the application range margins 2001 a and 2001 b are set respectively at the boundaries between the application ranges 1001 a and 1001 b and the region outside the application ranges. the smoothing coefficient determination unit 37 may set different smoothing coefficients for the respective application ranges 1001 a and 1001 b as shown in fig. 10 .
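the behavior described above, a constant coefficient inside the application range, zero outside, and a smooth decrease across the margins, can be sketched as a one-dimensional profile along either the channel or the slice direction. the linear ramp below is only one possible choice of smooth transition; the patent requires only that the coefficient becomes gradually smaller in the margins.

```python
def smoothing_profile(n, range_start, range_stop, margin, beta):
    """Sketch of a smoothing-coefficient profile along one direction
    (channel or slice): constant beta inside the application range,
    0 outside, and a linear ramp inside the application range margins
    so the coefficient continues smoothly at the boundaries."""
    profile = []
    for k in range(n):
        if range_start <= k < range_stop:
            profile.append(beta)                       # inside the range
        elif range_start - margin <= k < range_start:  # left margin
            profile.append(beta * (k - (range_start - margin)) / margin)
        elif range_stop <= k < range_stop + margin:    # right margin
            profile.append(beta * (range_stop + margin - k) / margin)
        else:
            profile.append(0.0)                        # outside the range
    return profile
```

evaluating the profile per element and substituting it for β in the update formula gives the gradual change of correction intensity shown in figs. 9 and 10 .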
in the case of setting different smoothing coefficients for the respective application ranges 1001 a and 1001 b , the smoothing coefficient may be changed in stages by setting the application range margin 2001 c widely in the intermediate region between the application ranges 1001 a and 1001 b , as shown in fig. 10 . the application range display region calculation unit 35 a in fig. 5 calculates the position on the ct image of the application range determined by the application range determining unit 32 a . in the first embodiment, application range margins are provided around an application range; therefore, it is desirable to calculate the positions on the ct image of both the application range and the application range margins. additionally, it may be configured so that an operator switches by a selection operation whether to display the boundaries between the application range and the application range margins. fig. 11 is a diagram showing an example of the application range setting/display screen 501 a . in the example of fig. 11 , the boundary line 1005 showing an application range and the boundary line 2005 showing an application range margin are displayed on a ct image displayed in the ct image display area 51 . either one of the boundary line 1005 showing the application range and the boundary line 2005 showing the application range margin may be displayed alone. also, it may be configured so that an operator can switch whether to display the boundary lines 1005 and 2005 . also, on the application range setting/display screen 501 a shown in fig. 11 , an input operation unit (the slide bars 55 , 56 , and 57 ) for moving the respective boundary lines 1005 and 2005 or changing their sizes may be provided. intuitive operation can be achieved by using as the input operation unit, for example, a gui for adjusting the sizes and positions of the boundary lines 1005 and 2005 .
when an operator operates the input operation unit to move the positions of the boundary lines 1005 and 2005 or change their sizes, the application range determining unit 32 a and the margin setting unit 36 reset the application range or the application range margin for the iterative approximation projection data correction process to the moved positions or changed sizes. the iterative approximation projection data correction processing unit 33 then performs the iterative approximation projection data correction process again for the reset application range etc. fig. 12 is a flow chart describing the process flow executed by the calculation device 202 a of the first embodiment. the calculation device 202 a acquires projection data from the data collection device 106 (step s 201 ). also, the calculation device 202 a (the application range determining unit 32 a ) acquires scanning condition information etc. (step s 202 ). the scanning condition information to be acquired is the x-ray beam width θ and an fov. the calculation device 202 a calculates the slice direction application range 1001 based on the x-ray beam width as shown in fig. 6 (step s 203 ). then, the channel direction application range 1002 is calculated based on the fov as shown in fig. 7 (step s 204 ). because restricting the application range in the slice direction excludes the largest amount of data, the slice direction application range 1001 is determined first. next, the calculation device 202 a sets the application range margins 2001 and 2002 corresponding to each application range (step s 205 ). the calculation device 202 a then calculates a smoothing coefficient to be applied to the application range and the application range margins (step s 206 ). as shown in figs. 9 and 10 , the smoothing coefficient is changed so that it continues smoothly inside and outside the application range.
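as a rough sketch of the channel range calculation of step s 204 , the channels covering a centered fov can be derived from the fan geometry; an equiangular detector and a centered fov are assumed here, and the function name and parameters are illustrative:

```python
import numpy as np

def channel_range_for_fov(n_ch, ch_pitch_rad, fov_mm, src_iso_mm):
    """channels whose fan angle passes through a centered circular fov.
    required half fan angle: arcsin((fov/2) / source-to-isocenter distance)."""
    half_fan = np.arcsin((fov_mm / 2.0) / src_iso_mm)
    center = (n_ch - 1) / 2.0
    angles = (np.arange(n_ch) - center) * ch_pitch_rad   # fan angle per channel
    idx = np.where(np.abs(angles) <= half_fan)[0]
    return int(idx[0]), int(idx[-1])                     # inclusive channel range
```

for example, with 896 channels at 1 mrad pitch, a 200 mm fov, and a 600 mm source-to-isocenter distance, the range becomes channels 281 to 614, so well over half the detector channels can be excluded from the correction.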
next, the calculation device 202 a applies the smoothing coefficient calculated in step s 206 to the application range and the application range margins determined in the processes of steps s 203 to s 205 in order to perform the iterative approximation projection data correction process (step s 207 ). the application range determined in the processes of steps s 203 to s 204 is expressed as the range of the index "j" for a position from among the indexes "i" and "j" included in the update formula (the above formula (4)) of the iterative approximation projection data correction process. also, the smoothing coefficient corresponds to β included in the update formula. the calculation device 202 a outputs correction projection data as the result of the iterative approximation projection data correction process and sends it to the reconstruction processing device 221 . the reconstruction processing device 221 performs image reconstruction using the correction projection data corrected by the iterative approximation projection data correction process and generates a ct image (step s 208 ). the reconstruction processing device 221 , for example, performs image reconstruction by the iterative approximation method. because, in the present invention, only the application range part of the projection data is corrected by the iterative approximation projection data correction process, noise reduction is performed on that part of the correction projection data. on the ct image generated from the correction projection data, image quality is improved at the site corresponding to the above application range. the calculation device 202 a calculates the display region of the application range on the ct image (step s 209 ). the calculation device 202 a displays the generated ct image on the display device 211 (step s 210 ). at this time, the calculation device 202 a displays the range where the iterative approximation projection data correction process was applied on the ct image as shown in fig. 11 (step s 211 ).
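formula (4) itself is defined earlier in the description; as a schematic stand-in only, restricting an iterative update to the application range through a per-sample β can be sketched as follows. the relaxation toward the channel-neighbour mean below is a placeholder for the actual update term of formula (4), and all names are illustrative:

```python
import numpy as np

def corrected_update(p, beta_map, n_iter=5):
    """schematic stand-in for an iterative projection data correction:
    each sample is relaxed toward the mean of its channel neighbours,
    weighted by the per-sample smoothing coefficient beta_map."""
    p = p.astype(float).copy()        # sinogram: views x channels
    for _ in range(n_iter):
        # mean of the two channel neighbours (old iterate)
        nb = 0.5 * (np.roll(p, 1, axis=1) + np.roll(p, -1, axis=1))
        p += beta_map * (nb - p)      # update weighted by beta (0 = untouched)
    return p
```

where β is 0 (outside the application range) the data is left untouched, so those samples can simply be skipped, which is the source of the processing-time reduction.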
as described above, when performing an iterative approximation process on projection data, the calculation device 202 a of the first embodiment first restricts the range to which the iterative approximation projection data correction process is applied, based on scanning conditions or reconstruction conditions such as the x-ray beam width and the fov. also, application range margins are provided in the regions adjacent to the application range, and the smoothing coefficient is set so that the intensity of the correction processing changes smoothly at the boundary between the inside and the outside of the application range. then, the above smoothing coefficient is applied to the application range and the application range margins to execute the iterative approximation projection data correction process. hence, because the iterative approximation projection data correction process can be limited to a part of the projection data, the processing time can be reduced. also, because the application range is set based on scanning conditions and reconstruction conditions, the processing time can be reduced appropriately according to the purpose of the ct examination. also, because the application range is determined by scanning conditions, reconstruction conditions, etc., the correction processing is performed on the projection data corresponding to the target site; therefore, low-noise images can be generated in a short time. also, margins are provided around the application range, and the smoothing coefficient is set so that the correction intensity becomes gradually smaller with distance from the application range, which can reduce the visible difference in image quality between the inside and the outside of the application range. also, because the boundary line between the application range and the outside of the application range is superimposed on the generated ct image, the region for which the correction processing was performed can be visually recognized on the ct image during observation.
second embodiment

next, referring to figs. 13 to 18 , the second embodiment will be described in detail. in the second embodiment, the x-ray ct apparatus 1 uses irradiation dose information as the parameter to determine the application range of the iterative approximation projection data correction process. the irradiation dose information comprises parameters such as an x-ray tube current and a tube voltage, and is determined based on the scanning conditions, the scanning site, the physique of the object, etc. the calculation device 202 of the x-ray ct apparatus calculates, prior to scanning, a change curve of the optimal dose to be irradiated at each body-axis direction position. normally, an irradiation dose sufficient to meet the target image quality is output at the diagnostic site (target site); on the other hand, for the other sites, a low irradiation dose that is just sufficient for image reconstruction is used, which reduces the exposure dose. in the second embodiment, the application range for the iterative approximation projection data correction process is determined by utilizing the irradiation dose information used in scanning. fig. 13 is a diagram showing the functional configuration of the calculation device 202 b of the second embodiment. in the second embodiment, the irradiation dose information acquisition unit 31 b is provided instead of the application range determining parameter acquisition unit 31 of the calculation device 202 shown in fig. 3 . as shown in fig. 13 , the calculation device 202 b of the second embodiment has the irradiation dose information acquisition unit 31 b , the application range determining unit 32 b , the iterative approximation projection data correction processing unit 33 , the image reconstruction unit 34 , and the application range display region calculation unit 35 . additionally, the same symbols are used for the configuration elements similar to those shown in figs.
1 , 2 , and 3 , and the repeated explanations are omitted. also, although the calculation device 202 b of the second embodiment is hardware similar to the calculation device 202 shown in fig. 2 , the symbols are different from the calculation device 202 in fig. 2 due to the different functional configuration. fig. 14 is an example of the application range display screen 501 b of the second embodiment. the positioning image 601 and the irradiation dose variation curve 600 are displayed on the application range display screen 501 b . also, the body-axis direction position of the positioning image 601 corresponds to that of the irradiation dose variation curve 600 . the irradiation dose information acquisition unit 31 b of the second embodiment acquires irradiation dose information as a parameter to determine an application range. the irradiation dose information is, for example, the irradiation dose variation curve 600 shown in fig. 14 . the irradiation dose variation curve 600 shows a change of an irradiation dose [mas] according to the body-axis direction position. the irradiation dose information that is calculated by the calculation device 202 b based on scanning conditions etc. or that is preset may be utilized. the irradiation dose information may also be generated based on electrocardiographic information to be input from the electrocardiograph 109 in electrocardiographic synchronous scanning etc. in the present embodiment, as an example, description will be made for a case where the whole body (including the heart) is scanned while changing an irradiation dose. however, even in a case other than the whole-body scanning including the heart, the present invention can be applied. 
the application range determining unit 32 b calculates the range in the body-axis direction (slice direction) to which the iterative approximation projection data correction process is applied based on the irradiation dose information input from the irradiation dose information acquisition unit 31 b . for example, the application range determining unit 32 b sets a threshold value for the irradiation dose variation curve 600 . then, a body-axis direction range with an irradiation dose smaller than the threshold value is set as an application range. alternatively, a body-axis direction range with an irradiation dose larger than the threshold value may be set as an application range. setting a range with an irradiation dose smaller than the threshold value as the application range serves to improve image quality by correcting projection data for a range scanned with a small irradiation dose. on the other hand, setting a range with an irradiation dose larger than the threshold value as the application range serves to further improve the image quality of a diagnostic image by correcting projection data of a range including the target site; the target site is normally scanned with a sufficiently large irradiation dose. also, the application range determining unit 32 b may determine an application range based on the variation (derivative value) of the irradiation dose in the body-axis direction. for example, the irradiation dose shown by the irradiation dose variation curve 600 in fig. 14 changes significantly in the range of the calvaria and in the range from the thorax to the abdomen of the object 3 . the application range determining unit 32 b sets the slice direction ranges with a large variation of the irradiation dose as the slice direction application ranges 1001 c and 1001 d of the correction processing. also, the lower extremities are scanned with a small irradiation dose.
a slice direction range that is scanned with an irradiation dose lower than a predetermined threshold value is therefore set as the slice direction application range 1001 e of the correction processing. additionally, the application range determining unit 32 b calculates the range in the rotation direction to which the iterative approximation projection data correction process is applied based on the change of the irradiation dose in the rotation direction. fig. 15 is a diagram expressing the rotation direction application ranges 1003 a and 1003 b . the application range determining unit 32 b sets a threshold value for the irradiation dose, which changes depending on the rotation angle, and restricts the application range of the iterative approximation projection data correction process to the rotation angle range larger or smaller than the threshold value. alternatively, the application range may be restricted according to the variation (derivative value) of the irradiation dose in the rotation angle direction. also, in cardiac scanning, the application range determining unit 32 b may determine an application range on the basis of the characteristic waveform (such as the r wave) of an electrocardiogram. fig. 16 is a diagram showing an electrocardiographic waveform acquired in cardiac synchronous scanning and the irradiation dose determined according to the electrocardiographic waveform. the horizontal axis shows the time. as shown in fig. 16 , in electrocardiographic synchronous scanning (ecg: electrocardiogram), in order to reduce motion artifacts due to cardiac movement, a sufficient irradiation dose is irradiated in a range including the cardiac phase (static cardiac phase) optimal for scanning based on electrocardiographic information. the irradiation dose irradiated in the other phases is low.
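the threshold determination described above can be sketched as follows; the names are illustrative, and ranges above the threshold, or ranges of steep dose variation obtained from the derivative, can be extracted with the same mask-to-ranges step:

```python
import numpy as np

def ranges_below_threshold(dose, thresh):
    """contiguous index ranges where the dose falls below the threshold;
    each half-open (start, stop) pair is a candidate application range."""
    mask = (np.asarray(dose) < thresh).astype(np.int8)
    # pad with zeros so ranges touching either end are closed properly
    edges = np.flatnonzero(np.diff(np.r_[0, mask, 0]))
    return [(int(s), int(e)) for s, e in zip(edges[::2], edges[1::2])]
```

applying the same function to `np.abs(np.diff(dose))` with a derivative threshold yields the ranges of large dose variation used for 1001 c and 1001 d.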
the application range determining unit 32 b sets a rotation direction range (time direction range) in which the irradiation dose is larger than a predetermined threshold value as the rotation direction application range 1003 a of the correction processing. alternatively, a rotation direction range (time direction range) in which the irradiation dose is equal to or less than a predetermined threshold value may be set as the rotation direction application range 1003 b of the correction processing. the rotation direction application ranges 1003 a and 1003 b shown in fig. 15 respectively correspond to those in fig. 16 . fig. 17 is the sinogram 1000 b of projection data. the horizontal axis shows a channel position of a detection element, and the vertical axis shows a rotation angle position. the rotation direction application range 1003 b on the sinogram 1000 b is expressed as the range shown by the dotted lines and the arrows in fig. 17 , for example; a predetermined range in the rotation angle direction is restricted as the application range 1003 . additionally, as described in the first embodiment, the application range margin 2003 may be set also for the rotation angle direction. the application range display region calculation unit 35 b of the second embodiment calculates display data to display the application range determined by the application range determining unit 32 b . fig. 14 is a diagram showing an example of the application range display screen 501 b . in the example of fig. 14 , the irradiation dose variation curve 600 is shown so as to correspond to the body-axis direction position of the positioning image 601 . also, the boundary lines, the arrows, etc. showing the slice direction application ranges 1001 c , 1001 d , and 1001 e are displayed on the positioning image 601 . additionally, as shown in fig.
15 , it may be configured so that the diagram showing the rotation direction application ranges 1003 a and 1003 b is displayed in the application range display screen 501 b. also, when electrocardiographic synchronous scanning is being performed, it may be configured so that the boundary lines and the arrows showing the application ranges 1003 a and 1003 b are displayed on the electrocardiogram 300 and the irradiation dose variation curve 600 as shown in fig. 16 . also, in the application range display screen 501 b of fig. 14 , an input operation unit to move the marks (the boundary lines and the arrows in figs. 14 , 15 , and 16 ) showing application ranges to be displayed and to change the sizes may be provided. when the application range positions are moved or the sizes are changed by the input operation unit, the application range determining unit 32 b resets the application ranges to the moved or changed positions and sizes, and then executes an iterative approximation projection data correction process again. fig. 18 is a flow chart explaining a process flow that the calculation device 202 b of the second embodiment executes. the calculation device 202 b acquires projection data from the data collection device 106 (step s 301 ). the calculation device 202 b (the application range determining unit 32 b ) acquires irradiation dose information (step s 302 ). the calculation device 202 b performs threshold determination for the acquired irradiation dose information or determines a variation (a derivative value) (step s 303 ). then, based on the determination result, the slice direction application range 1001 is first calculated (step s 304 ). for example, the calculation device 202 b calculates the slice direction application ranges 1001 c , 1001 d , and 1001 e according to the irradiation dose change in a slice direction. a correction processing application range in the slice direction is expressed as a range of the index “j” for a position in the update formula. 
as described above, a slice direction range in which the irradiation dose is smaller (or larger) than a predetermined threshold value is set as the slice direction application range 1001 e . alternatively, the slice direction application ranges 1001 c and 1001 d are determined according to the variation (derivative value) of the irradiation dose. then, the calculation device 202 b calculates a rotation direction application range (step s 305 ). the determination method of the rotation direction application range is the same as that of the slice direction application range 1001 . for example, a rotation direction range in which the irradiation dose is smaller (or larger) than a predetermined threshold value is set as the application range. alternatively, the rotation direction application range is restricted according to the variation (derivative value) of the irradiation dose. also, because in electrocardiographic synchronous scanning the irradiation dose is determined based on the electrocardiographic information acquired during scanning, an optimal cardiac phase is set as the rotation direction application range 1003 on the basis of the characteristic waveform (for example, the r wave) of the electrocardiogram. the correction processing application ranges 1003 a and 1003 b in the rotation direction are expressed as ranges of the index "i" for time in the update formula. the process flow after step s 306 is the same as the processes after step s 205 of the first embodiment. the calculation device 202 b sets application range margins corresponding to each application range (step s 306 ). the calculation device 202 b calculates a smoothing coefficient to be applied to the application range and the application range margins (step s 307 ).
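as an illustrative sketch of the electrocardiographic case, the static-phase windows that receive the full dose (and hence serve as rotation/time direction application ranges) can be derived from the r-wave times; the phase fraction and window width below are assumed example values, not values from the description:

```python
def static_phase_ranges(r_times, phase=0.75, width=0.10):
    """time windows centred on a target cardiac phase, given as a
    fraction of each r-r interval; in ecg-synchronous scanning these
    windows can serve as rotation (time) direction application ranges."""
    ranges = []
    for t0, t1 in zip(r_times[:-1], r_times[1:]):
        rr = t1 - t0                           # r-r interval length
        c = t0 + phase * rr                    # centre of the static phase
        ranges.append((c - width * rr / 2.0, c + width * rr / 2.0))
    return ranges
```

converting these time windows to ranges of the index "i" then only requires dividing by the sampling interval of the views.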
next, the calculation device 202 b applies the smoothing coefficient calculated in step s 307 to the application ranges and the application range margins determined in the processes from step s 304 to step s 306 , and then performs the iterative approximation projection data correction process (step s 308 ). the application ranges determined in steps s 304 and s 305 are expressed as ranges of the indexes "i" and "j" included in the update formula (the above formula (4)) of the iterative approximation projection data correction process. also, the smoothing coefficient determined in step s 307 corresponds to β included in the update formula. the calculation device 202 b outputs projection data as the result of the iterative approximation projection data correction process and transmits it to the reconstruction processing device 221 . the reconstruction processing device 221 performs image reconstruction using the correction projection data corrected by the iterative approximation projection data correction process to generate a ct image (step s 309 ). also, the calculation device 202 b calculates display data for displaying the application range (step s 310 ). the calculation device 202 b displays the generated ct image on the display device 211 (step s 311 ). also, the calculation device 202 b displays the range to which the iterative approximation projection data correction process was applied on the application range display screen 501 b as shown in fig. 14 (step s 312 ). as described above, in scanning with an optimal irradiation dose, in which the irradiation dose is changed according to the body-axis direction position or the rotation direction position, the calculation device 202 b of the second embodiment restricts the application range for the iterative approximation projection data correction process in the slice direction and the rotation direction based on the irradiation dose information.
particularly, in cardiac scanning, a variation curve of the irradiation dose is generated based on electrocardiographic information. therefore, the range for applying the iterative approximation projection data correction process based on the electrocardiographic information is restricted in the slice direction and the rotation direction. hence, the processing time of the iterative approximation projection data correction process can be reduced. also, because the application range for the correction processing is displayed together with a positioning image, an image diagram, or electrocardiographic information, an operator can easily distinguish the application range for the iterative approximation projection data correction process. hence, whether or not a noise-reduced image corresponding to the irradiation dose can be obtained can be clarified.

third embodiment

next, referring to figs. 19 to 22 , the third embodiment will be described in detail. in the third embodiment, a method to calculate the application range for the iterative approximation projection data correction process based on roi (region of interest) information set on an image by an operator will be described. fig. 19 is a diagram showing the functional configuration of the calculation device 202 c of the third embodiment. in the third embodiment, the roi information acquisition unit 31 c is provided instead of the application range determining parameter acquisition unit 31 of the calculation device 202 shown in fig. 3 . as shown in fig. 19 , the calculation device 202 c of the third embodiment has the roi information acquisition unit 31 c , the application range determining unit 32 c , the iterative approximation projection data correction processing unit 33 , the image reconstruction unit 34 , and the application range display region calculation unit 35 c . additionally, the same symbols are used for the configuration elements similar to those shown in figs.
1 , 2 , and 3 , and the repeated explanations are omitted. also, although the calculation device 202 c of the third embodiment is hardware similar to the calculation device 202 shown in fig. 2 , the symbols are different from the calculation device 202 in fig. 2 due to the different functional configuration. the roi information acquisition unit 31 c acquires roi information as a parameter to determine an application range for an iterative approximation projection data correction process. the roi information may be set on a ct image by an operator or may be set based on image analysis results. in the present embodiment, an example of determining a range of the iterative approximation projection data correction process based on the roi information set on a ct image by an operator will be described. the roi information acquisition unit 31 c displays the application range setting/display screen 501 c shown in fig. 20 on the display device 211 , for example. the application range setting/display screen 501 c is an operation screen for which an operator sets an roi and an application range for an iterative approximation projection data correction process. the application range setting/display screen 501 c shown in fig. 20 will be described. the application range setting/display screen 501 c has the ct image display area 51 , the rotation direction application range display area 52 , the slice direction application range display area 53 , the electrocardiographic information/irradiation dose information display area 54 , the slide bars 55 , 56 , and 57 , etc. the ct image display area 51 displays a ct image generated based on projection data. additionally, the ct image may be the original image (a ct image reconstructed based on projection data before correction processing) or may be a ct image reconstructed based on correction projection data for which an iterative approximation projection data correction process was performed. 
for example, the original image is displayed for the roi setting immediately after scanning, and a ct image reconstructed based on correction projection data is displayed after the roi setting. it is configured so that an operator can set an roi (i.e., the application range 1005 for the iterative approximation projection data correction process) on the ct image. the rotation direction application range display area 52 displays rotation direction (time direction) application ranges. in the example of fig. 20 , a plurality of rotation direction application ranges 1003 c , 1003 d , and 1003 e are set and displayed. the slice direction application range display area 53 displays a positioning image and the image generation range in the body-axis direction. the image generation range is set as a reconstruction condition and is specified as the slice direction application range 1001 of the iterative approximation projection data correction process. the electrocardiographic information/irradiation dose information display area 54 displays the electrocardiographic information 300 and the irradiation dose information (the irradiation dose variation curve) 600 along the same time axis. also, the application ranges 1003 c , 1003 d , and 1003 e in the time direction (rotation direction) of the iterative approximation projection data correction process are displayed along the same time axis as that of the electrocardiographic information 300 and the irradiation dose information 600 . the slide bars 55 and 56 are operation areas for adjusting the position and size of an application range. for example, by adjusting the slide bar 55 , the positions of the dotted lines and the arrows showing the time direction application ranges 1003 can be adjusted. also, by adjusting the slide bar 56 , the arrow lengths (dotted-line ranges) showing the time direction application ranges 1003 can be adjusted.
the slide bar 57 is an operation area for changing the cross-sectional position of the image displayed in the ct image display area 51 . when an roi set by an operator is used as the application range for the iterative approximation projection data correction process, the roi information acquisition unit 31 c acquires the image generation range in the body-axis direction from the set reconstruction conditions, and also acquires an fov corresponding to the roi. the application range determining unit 32 c calculates the application ranges in the body-axis direction (slice direction) and the channel direction based on the roi information input from the roi information acquisition unit 31 c . the application range display region calculation unit 35 c calculates display data for displaying the application range on the ct image and the positioning image shown on the above application range setting/display screen 501 c , and in the area corresponding to the irradiation dose information etc. fig. 21 is a flow chart explaining the process flow to be executed by the calculation device 202 c of the third embodiment. the calculation device 202 c acquires projection data from the data collection device 106 (step s 401 ). next, the calculation device 202 c (the roi information acquisition unit 31 c ) generates an roi setting image based on the projection data and displays it on the above application range setting/display screen 501 c (step s 402 ). after an operator sets an roi and reconstruction conditions (step s 403 ), the calculation device 202 c acquires the image generation range in the body-axis direction and the fov corresponding to the roi that were set. additionally, on the above application range setting/display screen 501 c , a voi (volume of interest), in which a two-dimensional roi is extended three-dimensionally, may also be settable. the calculation device 202 c calculates the application range in the slice direction based on the acquired image generation range (step s 404 ).
then, the calculation device 202 c calculates the channel direction application range based on the acquired fov (step s 405 ). as shown in the setting screen of fig. 20 , the application range in the slice direction is set as the range corresponding to the image generation range input by the operator (the range of the symbol 1001 set in the slice direction application range display area 53 ). also, the application range for the iterative approximation projection data correction process may be determined not only from the roi information but also by referring to electrocardiographic information and irradiation dose information as shown in the second embodiment. in this case, the calculation device 202 c determines the rotation direction application ranges 1003 c , 1003 d , and 1003 e based on the electrocardiographic information and the irradiation dose information. thus, after setting the application ranges in the slice direction and the rotation direction, the calculation device 202 c determines the range of the index "j" (the range for a position) of the update formula used for the calculation in the iterative approximation projection data correction process based on the slice direction application range. similarly, the range of the index "i" for time is determined based on the rotation direction application range. the process flow after step s 406 is the same as the processes after step s 205 of the first embodiment. the calculation device 202 c sets application range margins corresponding to each application range (step s 406 ). the calculation device 202 c calculates a smoothing coefficient to be applied to the application range and the application range margins (step s 407 ). next, the calculation device 202 c applies the smoothing coefficient calculated in step s 407 to the application range and the application range margins determined in the processes from steps s 404 to s 406 and performs the iterative approximation projection data correction process (step s 408 ).
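as an illustrative sketch of how an roi maps to a channel direction application range, the detector interval covered by a circular roi at a given view angle can be computed as below; a parallel-beam simplification is assumed (a fan-beam geometry adds a view-dependent magnification factor), and all names are hypothetical:

```python
import numpy as np

def roi_channel_range(x, y, r, phi, n_ch, ch_width):
    """channel range covering a circular roi (centre (x, y), radius r)
    at view angle phi; the detector coordinate of the projected roi
    centre is s = x*cos(phi) + y*sin(phi)."""
    s = x * np.cos(phi) + y * np.sin(phi)          # projected roi centre
    center_ch = (n_ch - 1) / 2.0
    lo = int(np.floor(center_ch + (s - r) / ch_width))
    hi = int(np.ceil(center_ch + (s + r) / ch_width))
    return max(lo, 0), min(hi, n_ch - 1)           # clipped inclusive range
```

because an off-center roi projects to a different detector interval at every view angle, the channel range generally has to be evaluated per view (per index "i"), tracing a band through the sinogram.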
the application range determined in steps s 404 and s 405 is expressed as the range of the indexes “i” and “j” included in the update formula (the above formula (4)) of an iterative approximation projection data correction process. also, a smoothing coefficient corresponds to β included in the update formula. the calculation device 202 c outputs correction projection data as a result of the iterative approximation projection data correction process and transmits the data to the reconstruction processing device 221 . the reconstruction processing device 221 performs image reconstruction using the correction projection data corrected by the iterative approximation projection data correction process to generate a ct image (step s 409 ). also, the calculation device 202 c calculates display data for displaying an application range (step s 410 ). the calculation device 202 c displays the generated ct image on the display device 211 (step s 411 ). also, the calculation device 202 c displays a range to which the iterative approximation projection data correction process was applied on the application range setting/display screen 501 c as shown in fig. 20 (step s 412 ). for example, the calculation device 202 c displays the application range for the iterative approximation projection data correction process on the ct image display area 51 and displays the application range 1005 on the displayed ct image. also, the calculation device 202 c displays the rotation direction application ranges 1003 c , 1003 d , and 1003 e respectively in the rotation direction application range display area 52 and the electrocardiographic information/irradiation dose information display area 54 . also, the calculation device 202 c displays the slice direction application range 1001 in the slice direction application range display area 53 . 
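The restriction described above — the update formula's position index “j” and time index “i” limited to the determined application ranges, with β acting as the smoothing weight — can be illustrated with a simplified sketch. The patent's actual update formula (the above formula (4)) is not reproduced in this text, so a generic neighborhood-smoothing update stands in for it; the function and variable names are illustrative assumptions, not the patent's implementation.

```python
import numpy as np

def restricted_correction(proj, i_range, j_range, beta=0.5, n_iter=3):
    """Illustrative stand-in for formula (4): apply an iterative
    neighborhood-smoothing update only inside the application range
    given by view indices i_range (time) and channel indices j_range
    (position). Samples outside the range are left untouched, which is
    what limits the processing time."""
    p = proj.astype(float)          # views x channels sinogram copy
    i0, i1 = i_range                # rotation-direction application range
    j0, j1 = j_range                # channel-direction application range
    for _ in range(n_iter):
        # mean of the 4 neighbors (wrap-around at the edges)
        nb = (np.roll(p, 1, axis=0) + np.roll(p, -1, axis=0) +
              np.roll(p, 1, axis=1) + np.roll(p, -1, axis=1)) / 4.0
        # beta plays the role of the smoothing coefficient
        p[i0:i1, j0:j1] += beta * (nb[i0:i1, j0:j1] - p[i0:i1, j0:j1])
    return p
```

Because the update is only evaluated over the restricted index ranges, the cost scales with the size of the application range rather than with the whole sinogram.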
as described above, when an roi is set by an operator, the calculation device 202 c restricts the range for applying an iterative approximation projection data correction process based on the roi. additionally, in cardiac scanning, the range to which an iterative approximation projection data correction process is applied based on electrocardiographic information is restricted in a slice direction and the rotation direction. hence, the application range for an iterative approximation projection data correction process is restricted, which can reduce the processing time. also, because the application range for the correction processing is displayed together with a positioning image, an image diagram, or electrocardiographic information, an operator can easily distinguish the application range. hence, it can be clarified whether a noise-reduced image corresponding to the irradiation dose can be obtained in the specified roi. additionally, a plurality of rois may be set. such a case will be described below. in a case where a plurality of rois are set on the same ct image, it may be configured so that an application range corresponding to each roi is provided. however, this complicates the application range distribution, and the calculation of an iterative approximation projection data correction process will be complicated. additionally, in the case of providing application range margins etc., the function defining the smoothing coefficient will also be complicated. therefore, in a case where a plurality of rois are set on the same ct image, an roi including the plurality of rois (hereinafter, referred to as a large roi) is reset, and a range in a slice direction and channel direction corresponding to the reset large roi may be set as an application range for an iterative approximation projection data correction process. 
in the following description, an roi set individually is referred to as a small roi, and a range including a plurality of small rois is referred to as a large roi. fig. 22 is a flow chart explaining processes in a case where a plurality of small rois are set. first, the calculation device 202 c (the application range determining unit 32 c ) receives selection of an image on which to set an roi. an operator adjusts the slide bar 57 on the setting screen to select a body-axis direction position of a ct image to be displayed on the ct image display area 51 (step s 501 ). on the selected ct image, rois (small rois) are set (step s 502 ). the small rois may overlap with each other or may be separated. although the shapes of the small rois are desirably circular, they may also be other shapes such as a rectangle or an ellipse. in a case where a plurality of rois are set (step s 503 : yes), the calculation device 202 c uses coordinate information of the small rois set in step s 502 to calculate a region including all the small rois. this region is set as a large roi (step s 504 ). the large roi may include a region that an operator did not set as a small roi. the shape of the large roi is circular. also, the large roi is desirably set so that its center is close to the rotation center of scanning. this is because a region closer to the rotation center becomes an almost linear range on a sinogram, which makes it easy to calculate an application range for an iterative approximation projection data correction process. the processes after setting a large roi are the same as those after the above step s 404 . that is, the calculation device 202 c acquires position information of the large roi set by an operator and calculates a slice direction application range based on the acquired roi information. then, a channel direction application range is calculated. in a case where irradiation dose information etc. has already been acquired, a rotation direction application range is calculated. 
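The large-ROI computation of step s 504 can be sketched as follows. The covering circle below uses the centroid of the small-ROI centers as the large-ROI center, which is a simplifying assumption — the patent only requires a circular region containing all small ROIs, preferably centered near the rotation center — and all names are illustrative.

```python
import math

def enclosing_roi(small_rois):
    """Compute one circular 'large ROI' that covers all small circular
    ROIs given as (cx, cy, r) tuples. The center is the centroid of the
    small-ROI centers (not the true minimal enclosing circle); the
    radius is grown until every small ROI fits entirely inside."""
    cx = sum(roi[0] for roi in small_rois) / len(small_rois)
    cy = sum(roi[1] for roi in small_rois) / len(small_rois)
    # farthest point of any small ROI from the chosen center
    rad = max(math.hypot(x - cx, y - cy) + r for x, y, r in small_rois)
    return cx, cy, rad
```

For two unit-radius ROIs centered at (0, 0) and (4, 0), this yields a circle at (2, 0) with radius 3, which contains both.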
after setting the application ranges in the slice and channel directions, the calculation device 202 c determines a range of the index “j” (a range for a position) of the update formula to be used for calculation in an iterative approximation projection data correction process based on the slice direction application range. similarly, according to the rotation direction application range, a range of the index “i” for time is determined. as described above, in the third embodiment, an application range for an iterative approximation projection data correction process can be restricted in the body-axis direction and channel direction based on an roi set on a ct image by an operator. also, in a case where irradiation dose information etc. have been acquired, an application range for an iterative approximation projection data correction process can be restricted in the rotation direction according to the irradiation dose information. consequently, the processing time for the iterative approximation projection data correction process can be reduced. also, the application range setting/display screen 501 c displays an application range in each direction in a ct image or a positioning image and on an irradiation dose variation curve, etc. therefore, an operator can easily distinguish the application range for the iterative approximation projection data correction process. hence, whether a desired noise-reduced image is obtained in a desired range or not can be easily checked. additionally, in a case where a plurality of rois are set, an application range can be determined by setting one large roi including the plurality of rois. this can prevent process complication and reduce the processing time. hence, the usability and user-friendliness are improved, and the processing time can also be reduced. fourth embodiment next, referring to figs. 23 to 25 , the fourth embodiment will be described in detail. 
in the fourth embodiment, the method for calculating an application range for an iterative approximation projection data correction process based on variation information about moving organs will be described. the moving organs are the heart and the lungs, for example. in the following description, the heart will be described as an example. the x-ray ct apparatus 1 calculates difference values between images with different time phases based on electrocardiographic information measured by the electrocardiograph 109 during scanning to calculate an image variation. then, a time phase with a small image variation is set as an optimal cardiac phase, and a diagnostic image is reconstructed using projection data of the optimal cardiac phase. in the fourth embodiment, the optimal cardiac phase and the neighboring time range are set as an application range for an iterative approximation projection data correction process. fig. 23 is a diagram showing the functional configuration of the calculation device 202 d of the fourth embodiment. in the fourth embodiment, the optimal cardiac phase determination unit 31 d is provided instead of the application range determining parameter acquisition unit 31 of the calculation device 202 shown in fig. 3 . as shown in fig. 23 , the calculation device 202 d of the fourth embodiment has the optimal cardiac phase determination unit 31 d , the application range determining unit 32 d , the iterative approximation projection data correction processing unit 33 , the image reconstruction unit 34 , and the application range display region calculation unit 35 . additionally, the same symbols are used for the configuration elements similar to those shown in figs. 1 , 2 , and 3 , and the repeated explanations are omitted. also, although the calculation device 202 d of the fourth embodiment is hardware similar to the calculation device 202 shown in fig. 2 , the symbols are different from the calculation device 202 in fig. 
2 due to the different functional configuration. the optimal cardiac phase determination unit 31 d calculates difference values between images with different time phases based on electrocardiographic information. then, variations between images in a target time phase are calculated from the sum of difference values. the optimal cardiac phase determination unit 31 d sets a time phase whose variation between images is the smallest as an optimal cardiac phase, for example. additionally, the optimal cardiac phase may be a time phase in which variations between images are the smallest values in all the phases, or may be a time phase in which the respective variations between images are the smallest values in the expansion and contraction phases, in light of the heart expansion/contraction. fig. 24 is a diagram showing an example of the variation curve 700 that shows an image variation change in each time phase. the horizontal axis is the time phase (time), and the vertical axis is the image variation. by obtaining differences between images in neighboring time phases to calculate image variations, the variation transition can be found as shown in fig. 24 . since motion artifacts do not appear so much in a time phase with a small image variation, such a time phase is determined as an optimal cardiac phase, for example. in fig. 24 , the time phases 701 and 702 are set as optimal cardiac phases. the application range determining unit 32 d acquires optimal cardiac phase information determined by the optimal cardiac phase determination unit 31 d . then, a time-phase range including an optimal cardiac phase is set as the time direction (rotation direction) application range 1003 for an iterative approximation projection data correction process. as shown in fig. 24 , for example, a time-phase range in the vicinity of the optimal cardiac phase 702 is set as the application range 1003 f in the time direction. 
similarly, a time-phase range in the vicinity of the optimal cardiac phase 701 is set as the application range 1003 g in the time direction. fig. 25 is a flow chart explaining the process flow to be executed by the calculation device 202 d of the fourth embodiment. the calculation device 202 d acquires projection data and electrocardiographic information (step s 601 and step s 602 ). the reconstruction processing device 221 reconstructs a ct image for each time phase based on the acquired projection data and electrocardiographic information (step s 603 ). the calculation device 202 d (the optimal cardiac phase determination unit 31 d ) calculates difference values between images with different time phases (step s 604 ). then, an image variation in a target phase is calculated from the sum of the difference values (step s 605 ). the calculation device 202 d determines an optimal cardiac phase based on the image variation calculated in step s 605 (step s 606 ). then, the application range determining unit 32 d (the calculation device 202 d ) acquires optimal cardiac phase information determined by the optimal cardiac phase determination unit 31 d . the calculation device 202 d sets a neighboring time-phase range including an optimal cardiac phase as the application ranges 1003 f and 1003 g for an iterative approximation projection data correction process (step s 607 ). the calculation device 202 d determines a range of the index “i” (a range for time) of the update formula to be used for calculation in an iterative approximation projection data correction process based on the above application ranges 1003 f and 1003 g. the process flow after step s 608 is the same as that after step s 205 of the first embodiment. the calculation device 202 d sets application range margins corresponding to each application range (step s 608 ). 
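The optimal-phase selection of steps s 604 to s 606 — difference values between images of adjacent time phases, summed into a per-phase variation, with a minimum-variation phase chosen — can be sketched as below. The function name and the sum-of-absolute-differences metric are illustrative assumptions; the patent does not fix a particular difference metric.

```python
import numpy as np

def optimal_phase(images):
    """Pick the time phase with the smallest image variation.
    `images` is a list of reconstructed CT images, one per time phase;
    the variation of phase k (k >= 1) is the sum of absolute pixel
    differences between the images of phases k and k-1."""
    variation = [float(np.abs(images[k] - images[k - 1]).sum())
                 for k in range(1, len(images))]
    # phase 0 has no predecessor, so variation[0] belongs to phase 1
    return int(np.argmin(variation)) + 1, variation
```

The returned variation list corresponds to the variation curve 700 of fig. 24; in a real cardiac scan one would search for local minima (e.g., one per expansion and contraction phase) rather than only the global minimum.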
the calculation device 202 d calculates a smoothing coefficient to be applied to the application range and the application range margins (step s 609 ). next, the calculation device 202 d applies a smoothing coefficient calculated in step s 609 to the application range and the application range margins determined in the processes of steps s 607 to s 608 and performs an iterative approximation projection data correction process (step s 610 ). the application range determined in step s 607 is expressed as a range of the index “i” included in the update formula (the above formula (4)) of an iterative approximation projection data correction process. also, a smoothing coefficient determined in step s 609 corresponds to β included in the update formula. the calculation device 202 d outputs correction projection data as a result of the iterative approximation projection data correction process and transmits it to the reconstruction processing device 221 . the reconstruction processing device 221 performs image reconstruction using correction projection data corrected by an iterative approximation projection data correction process to generate a ct image (step s 611 ). also, the calculation device 202 d calculates display data to display an application range (step s 612 ). the calculation device 202 d displays the generated ct image on the display device 211 (step s 613 ). also, the calculation device 202 d displays a range to which the iterative approximation projection data correction process was performed on the ct image, the setting screen, etc. (step s 614 ). for example, the calculation device 202 d displays an application range for an iterative approximation projection data correction process as shown in the rotation direction application range display area 52 of fig. 20 . 
as described above, in the fourth embodiment, in case of scanning periodically moving organs, the calculation device 202 d calculates an image variation and determines an optimal phase based on the image variation. then, a time-direction range in the vicinity of the time phase determined as the optimal phase is determined as an application range for an iterative approximation projection data correction process. hence, an application range for correction processing can be restricted in the time direction (i.e., the rotation direction). therefore, the processing time can be reduced. fifth embodiment next, referring to figs. 26 to 27 , the fifth embodiment will be described in detail. in the fifth embodiment, an iterative approximation projection data correction process to an image for contrast-agent monitoring in contrast agent scanning will be described. in the conventional ct examination using a contrast agent, contrast-agent monitor scanning is performed to monitor a contrast agent reaching a region of interest. in the contrast-agent monitor scanning, since whether the contrast agent has reached or not can be monitored in a predetermined monitoring position, scanning is performed at a low dose. in the fifth embodiment, an iterative approximation projection data correction process is performed also for projection data acquired for contrast-agent monitoring at a high speed by restricting a range. this improves image quality of an image for contrast-agent monitoring, which can perform highly accurate concentration monitoring. fig. 26 is a diagram showing the functional configuration of the calculation device 202 e of the fifth embodiment. in the fifth embodiment, the monitor image analysis unit 31 e is provided instead of the application range determining parameter acquisition unit 31 of the calculation device 202 shown in fig. 3 . as shown in fig. 
26 , the calculation device 202 e of the fifth embodiment has the monitor image analysis unit 31 e , the application range determining unit 32 e , the iterative approximation projection data correction processing unit 33 , the monitor image reconstruction unit 34 e , and the application range display region calculation unit 35 e. additionally, the same symbols are used for the configuration elements similar to those shown in figs. 1 , 2 , and 3 , and the repeated explanations are omitted. also, although the calculation device 202 e of the fifth embodiment is hardware similar to the calculation device 202 shown in fig. 2 , the symbols are different from the calculation device 202 in fig. 2 due to the different functional configuration. the monitor image analysis unit 31 e acquires projection data for monitoring scanning (hereinafter, referred to as monitoring projection data) in a cross section to monitor a contrast agent concentration at each predetermined time. then, a monitoring image is reconstructed immediately based on the monitoring projection data. the monitor image analysis unit 31 e generates a time differential image between a monitoring image scanned last time and that scanned this time. the monitor image analysis unit 31 e analyzes the inside of the generated time differential image. in the analysis, a region where relatively large difference values concentrate is searched in the time differential image. then, an roi is set so as to include a region where large difference values concentrate. the roi set for the time differential image is specified as an roi of a monitor image. the roi setting is performed at each time of monitoring scanning. the application range determining unit 32 e acquires roi information set by the monitor image analysis unit 31 e . based on the roi information, the application range determining unit 32 e performs the similar processes to the third embodiment for a monitor image. 
that is, the application range determining unit 32 e , based on the roi set by the monitor image analysis unit 31 e , determines an application range for an iterative approximation projection data correction process for monitoring projection data. application ranges in the body-axis direction (slice direction) and the channel direction are calculated. the iterative approximation projection data correction processing unit 33 e executes an iterative approximation projection data correction process by restricting the monitoring projection data to the application range determined by the application range determining unit 32 e . the monitor image reconstruction unit 34 e reconstructs a monitor image based on correction projection data input from the iterative approximation projection data correction processing unit 33 e . the monitor image reconstruction unit 34 e outputs the reconstructed monitor image to the display device 211 . the application range display region calculation unit 35 e performs calculation to display an application range determined by the application range determining unit 32 e . for example, an application range position on a monitor image is calculated. fig. 27 is a flow chart explaining the process flow executed by the calculation device 202 e of the fifth embodiment. the calculation device 202 e selects a cross section (for contrast monitoring) to monitor a contrast agent concentration (step s 701 ). next, the calculation device 202 e performs monitoring scanning for the cross section for contrast monitoring selected in step s 701 at each predetermined time (step s 702 ). the calculation device 202 e (the monitor image analysis unit 31 e ) reconstructs a ct image using the monitoring projection data acquired by monitoring scanning each time monitoring scanning is performed (step s 703 ). the calculation device 202 e further generates a time differential image between the image scanned last time and that scanned this time (step s 704 ). 
also, the calculation device 202 e (the monitor image analysis unit 31 e ) analyzes the time differential image generated in step s 704 for each monitoring scanning to set an roi (step s 705 ). here, the roi is set so as to include a region where relatively large difference values concentrate in the time differential image. the processes after step s 706 are the same as the process flow after setting an roi in the third embodiment (the processes after step s 403 of fig. 21 ). the calculation device 202 e calculates a slice direction application range and a channel direction application range for monitoring projection data based on the roi information set in step s 705 (step s 706 ). after application ranges in the slice and channel directions are set, the calculation device 202 e determines a range of the index “j” (a range for a position) of the update formula to be used for the calculation in an iterative approximation projection data correction process based on the slice direction application range. similarly, a range of the index “i” for time is determined based on a rotation direction application range. the calculation device 202 e sets application range margins corresponding to each application range (step s 707 ). also, the calculation device 202 e calculates a smoothing coefficient to be applied to an application range and application range margins (step s 708 ). next, the calculation device 202 e applies the smoothing coefficient calculated in step s 708 to the application range and the application range margins determined in the processes of steps s 706 to s 707 and performs an iterative approximation projection data correction process (step s 709 ). the calculation device 202 e outputs correction projection data as a result of the iterative approximation projection data correction process and transmits it to the reconstruction processing device 221 . 
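The ROI analysis of steps s 704 and s 705 — locating the region where large values of the time differential image concentrate — might be sketched as follows. The half-maximum threshold and the circular ROI around the centroid of supra-threshold pixels are illustrative assumptions (the patent only asks for a region where large difference values concentrate), and the sketch assumes the monitored cross section actually contains a contrast change.

```python
import numpy as np

def roi_from_difference(prev_img, curr_img):
    """Set a circular ROI (cx, cy, radius) around the region where
    large values of the time differential image concentrate. Pixels
    above half the maximum difference are treated as the contrast
    spot; the ROI is centered on their centroid and sized to cover
    them all."""
    diff = np.abs(curr_img - prev_img)        # time differential image
    thr = 0.5 * diff.max()                    # assumed threshold choice
    ys, xs = np.nonzero(diff > thr)           # supra-threshold pixels
    cy, cx = float(ys.mean()), float(xs.mean())
    rad = float(np.hypot(ys - cy, xs - cx).max())
    return cx, cy, rad
```

The ROI returned here would then feed the same slice/channel application-range calculation as in the third embodiment (step s 706).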
the reconstruction processing device 221 performs image reconstruction using the correction projection data corrected by the iterative approximation projection data correction process to generate a monitor image (step s 710 ). also, the calculation device 202 e calculates display data to display an application range (step s 711 ). the calculation device 202 e displays the generated monitor image on the display device 211 (step s 712 ). also, the calculation device 202 e displays a range to which an iterative approximation projection data correction process was applied on the monitor image as shown in fig. 11 , for example (step s 713 ). as described above, an iterative approximation projection data correction process can be performed for projection data acquired in contrast agent monitoring scanning during contrast scanning by restricting an application range. therefore, image noise can be reduced on a contrast spot of a monitor image, which can perform highly accurate concentration monitoring in real time. also, a monitoring image can be generated at a low dose by performing the iterative approximation projection data correction process, which can reduce an exposure dose. although the embodiments suitable for the x-ray ct apparatus and the image reconstruction method related to the present invention were described above, the present invention is not limited to the above embodiments. it is apparent that those skilled in the art can consider various changes or modifications within the technical ideas disclosed in the present application, and it is understood that these naturally belong to the technical scope of the present invention. 
description of reference numerals 1 : x-ray ct apparatus, 3 : object, 10 : scanner, 20 : operation unit, 100 : gantry, 101 : bed device, 102 : x-ray generating device, 103 : x-ray detection device, 104 : collimator device, 105 : high-voltage generating device, 106 : data collection device, 107 : driving device, 109 : electrocardiograph, 200 : central control device, 201 : input/output device, 202 : calculation device, 211 : display device, 212 : input device, 213 : storage device, 221 : reconstruction processing device, 222 : image processing device, 31 and 31 a : application range determining parameter acquisition unit, 31 b : irradiation dose information acquisition unit, 31 c : roi information acquisition unit, 31 d : optimal cardiac phase determination unit, 31 e : monitor image analysis unit, 32 and 32 a to 32 e : application range determining unit, 33 : an iterative approximation projection data correction processing unit, 34 : image reconstruction unit, 34 e : monitor image reconstruction unit, 35 and 35 a to 35 e : application range display region calculation unit, 4 : fov, 300 : electrocardiogram, 501 a and 501 c : application range setting/display screen, 501 b : application range display screen, 55 , 56 , and 57 : slide bar (operation input unit), 600 : irradiation dose variation curve, 601 : positioning image, 700 : image variation curve, 701 and 702 : optimal cardiac phase, 1000 and 1000 b : sinogram, 1001 , 1001 a , and 1001 b : slice direction application range, 1002 : channel direction application range, 1003 and 1003 a to 1003 g : rotation direction application range, 2001 and 2001 a to 2001 c : slice direction application range margins, 2002 : channel direction application range margins, 2003 : rotation direction application range margins, 1005 : application range, 2005 : application range margins
164-025-801-126-390
US
[ "US" ]
A43C15/16,A43B5/06
2018-12-20T00:00:00
2018
[ "A43" ]
spike and key system and method
a spike and key system is provided that includes a spike having a conical tip with a hole extending therethrough, and a key member including a first key having a racetrack shaped opening, and a second key having a rotatable key member designed to interact with the hole in the spike.
1 . a spike and key system, comprising: a spike having a conical tip with a hole extending therethrough; and a key including a first key having a racetrack shaped opening and a second key having a key member designed to interact with the hole in the spike. 2 . the spike and key system of claim 1 , wherein the hole is cylindrical and extends entirely through the spike. 3 . the spike and key system of claim 1 , wherein the shape of the hole of the spike is substantially the same as the cross-sectional profile of the key member of the second key portion. 4 . the spike and key system of claim 1 , wherein the spike includes a cylindrical base having threads that circumscribe an exterior surface of the base. 5 . the spike and key system of claim 1 , wherein the key includes a handle portion designed to be grasped by a user and the first key and the second key. 6 . the spike and key system of claim 1 , wherein the key member is provided as a substantially l-shaped body having a first section and a second section, the first section protruding outwardly at a substantially right angle as compared to the second section. 7 . the spike and key system of claim 6 , wherein the first section of the key member includes a tip that is provided in a shape that is matched to the hole in the spike. 8 . the spike and key system of claim 6 , wherein the second section of the key member extends from the first section and terminates at an attachment point, the attachment point being enclosed on the inside of the key member and being designed to rotatably hold the key member in a first configuration, and a second, different configuration. 9 . the spike and key system of claim 8 , wherein the first configuration is a non-use position and the second configuration is an in-use position. 10 . 
a spike for an athletic shoe, comprising: a spike having a cylindrical base with threads circumscribing an exterior surface; a disc member protruding outwardly from the base; and a tapered spike body extending upwardly from the disc member and terminating at a point, the spike body having a lower portion defined by a rectangular cross-sectional profile and having a cylindrical hole extending therethrough, and an upper conical portion. 11 . the spike for an athletic shoe of claim 10 , wherein the hole extends entirely through the spike. 12 . the spike for an athletic shoe of claim 10 , wherein the spike is provided in a length of 1/16 inch, ⅛ inch, ¼ inch, ½ inch, ¾ inch, or 1 inch. 13 . the spike for an athletic shoe of claim 10 , wherein the spike is metal. 14 . the spike for an athletic shoe of claim 10 , wherein the spike is designed to be removed from the athletic shoe using a first key and a second key. 15 . the spike for an athletic shoe of claim 14 , wherein the first key and the second key are integrally provided on a single key body. 16 . the spike for an athletic shoe of claim 15 , wherein the first key includes a racetrack shaped opening designed to receive the rectangular cross-sectional profile of the lower portion of the spike body. 17 . the spike for an athletic shoe of claim 16 , wherein the second key includes a cylindrical key member designed to be received within the cylindrical hole of the spike body. 18 . 
an athletic shoe kit with a plurality of removable spikes and a key, comprising: a shoe having a body and a sole positioned on the underside of the body, the sole further including a plurality of bores designed to receive the plurality of removable spikes; the plurality of removable spikes, each removable spike provided in the form of a cylindrical base having threads that circumscribe the base, and a spike body having a hole extending therethrough; and the key including a first key having a tip provided in a shape that matches each of the holes in the plurality of removable spikes such that the tip extends through each of the holes of the plurality of spikes when the plurality of spikes are being secured or removed from the shoe. 19 . the athletic shoe kit with a plurality of removable spikes and a key of claim 18 further including a second key having a racetrack shaped opening designed to receive each of the plurality of spikes. 20 . the athletic shoe kit with a plurality of removable spikes and a key of claim 19 , wherein the first key and the second key are integrally provided on the same key body.
background individuals that participate in various sports and activities utilize shoes having cleats or spikes worn on the feet of the individual to help the individual retain stability and balance on different types of surfaces. for example, cleats are used in several sports, including, but not limited to, soccer, football, lacrosse and the like. cleats generally protrude from a bottom surface/sole of a shoe and are designed to at least partially extend into the ground or surface when the shoe contacts the surface. the interaction between the cleat and surface provides a gripping interaction, which provides stability to the wearer. in the case of track and field events, cross-country events, and other sports, this same purpose is served by metal spikes which are typically screwed into the bottom surface/sole of the shoe. as a result, track and field shoes, and cross-country event shoes, are commonly referred to as ‘spikes” owing to this feature. in some sports, cleats may be provided as an integral part of the athlete's shoe and are not designed to be removed from the shoe. in this scenario, the cleats may be molded and made from a substantially similar material as the sole of the shoe. in other instances, the cleats may be provided in a different material with respect to the sole of the shoe, but are still integrally attached to the shoe and are not designed to be removed. in other instances, (e.g., track and field or cross-country events) spikes associated with the shoe are designed to be exchanged, replaced, and/or removed from the shoe. for example, an athlete may be running on a softer surface, such as a dirt trail or grass, where a longer spike may be helpful, so the athlete could remove the standard spikes that came with the shoe and replace them with longer or shorter spikes as needed depending on the nature of the running surface. in addition, over time, spikes become worn because of ordinary wear and tear. 
therefore, there are various scenarios where it would be desirable for an individual to have the flexibility to replace one or more spikes in their shoes. a substantial amount of time is consumed by those wishing to replace the spikes using methods known in the prior art. in particular, prior art spikes are typically provided as a substantially unitary piece that includes a thread that is designed to be screwed into a corresponding threaded hole in the bottom of the shoe. in this case, the user may hand thread the spike into the bottom of a shoe or use a specially designed spike key that features a hole designed to be placed onto, or fitted over, the spike. the prior art key then interacts with the spike and may be manually rotated in a clockwise direction to secure the spike onto the bottom of the shoe, or in a counter-clockwise direction to remove the spike from the bottom of the shoe. this process is repeated numerous times until all spikes are either secured to, or removed from, the shoe. the process is also repeated, and the prior art spike key is utilized in the removal of the spikes when the user desires to change the spikes. due to ordinary wear and tear of prior art spikes from repeated use over time, and due to ordinary wear and tear from repeatedly installing/removing the prior art spikes, both the prior art spikes and prior art spike key become stripped or worn to such a degree that the spike key will not fit properly over the spikes. this often makes it difficult for prior art spikes to be installed or removed at all. in these instances, athletes or coaches may use grip pliers or other inconvenient and time-consuming measures to remove spikes from shoes. in extreme instances, a spike may be cut (using a saw or other device) if the spike cannot be removed from the shoe. this scenario can be especially stressful to athletes preparing for a race about to begin when they are unable to remove worn spikes.
for the above instances, it would be desirable to have a spike and key system that allows for easy installation and removal and that overcomes one or more of the aforementioned obstacles. summary a spike and key system is provided that includes a spike having a conical body with a hole extending therethrough, and a key member including a first key having a racetrack shaped opening, and a second key having a rotatable key member designed to interact with the hole in the spike. description of the drawings fig. 1 is a bottom isometric view of a traditional shoe, and a spike and key system according to the prior art; fig. 2 is an isometric view of a shoe, and a spike and key system according to one embodiment; fig. 3 is a side elevational view of a shoe with a plurality of spikes extending downwardly from a bottom surface thereof; fig. 4 is a bottom elevational view of the shoe of fig. 3 , with the plurality of spikes removed therefrom; fig. 5a is a front elevational view of a spike; fig. 5b is a side elevational view of the spike of fig. 5a ; fig. 6 is a front elevational view of a key designed for use with the spikes of figs. 5a and 5b ; fig. 7 is a rear elevational view of a key designed for use with the spikes of figs. 5a and 5b ; fig. 8 is a side elevational view of a key designed for use with the spikes of figs. 5a and 5b ; and fig. 9 is a top view of a key designed for use with the spikes of figs. 5a and 5b . detailed description before any embodiments of the invention are explained in detail, it is to be understood that the invention is not limited in its application to the details of construction and the arrangement of components set forth in the following description or illustrated in the following drawings. the invention is capable of other embodiments and of being practiced or of being carried out in various ways.
also, it is to be understood that the phraseology and terminology used herein is for the purpose of description and should not be regarded as limiting. the use of “including,” “comprising,” or “having” and variations thereof herein is meant to encompass the items listed thereafter and equivalents thereof as well as additional items. unless specified or limited otherwise, the terms “mounted,” “connected,” “supported,” and “coupled” and variations thereof are used broadly and encompass both direct and indirect mountings, connections, supports, and couplings. further, “connected” and “coupled” are not restricted to physical or mechanical connections or couplings. the following discussion is presented to enable a person skilled in the art to make and use embodiments of the invention. various modifications to the illustrated embodiments will be readily apparent to those skilled in the art, and the generic principles herein can be applied to other embodiments and applications without departing from embodiments of the invention. thus, embodiments of the invention are not intended to be limited to embodiments shown, but are to be accorded the widest scope consistent with the principles and features disclosed herein. the following detailed description is to be read with reference to the figures, in which like elements in different figures have like reference numerals. the figures, which are not necessarily to scale, depict selected embodiments and are not intended to limit the scope of embodiments of the invention. skilled artisans will recognize the examples provided herein have many useful alternatives and fall within the scope of embodiments of the invention. fig. 1 illustrates a traditional spike and key system 100 according to the prior art. the system 100 can include a shoe 102 having a bottom surface/sole 104 . the sole 104 includes a plurality of cylindrical openings 106 that are designed to receive a plurality of removable spikes 108 . 
a spike key 110 is provided that is designed to interact with each of the plurality of removable spikes 108 so that the spikes 108 may be installed or removed from the shoe 102 . the key 110 includes a racetrack shaped opening 112 that corresponds to the profile of the spikes 108 . to install or remove each of the spikes 108 , the opening 112 of the key 110 is placed over the spike 108 , and the key 110 is turned in a clockwise direction to install the spikes 108 and a counterclockwise direction to remove the spikes 108 . fig. 2 depicts a shoe and a spike and key system 200 according to one embodiment. the shoe, and spike and key system 200 includes one or more of a shoe 202 , a plurality of spikes 204 , and a key 206 . the shoe 202 , spike(s) 204 , and/or key 206 may be provided as individual components or may be provided as a kit. for example, the spikes 204 and the key 206 may be provided as a kit and may act as a retrofitting kit, whereby the spikes 204 and the key 206 can be utilized with a shoe 202 that is already owned by an individual. in other examples, the spikes 204 and the key 206 may be provided with a pair of shoes 202 that a user may purchase. in further examples, the spikes 204 and/or key 206 may be provided individually. as shown in figs. 3 and 4 , the shoe 202 includes a body 210 with an opening 212 designed to receive a foot (not shown). the shoe 202 may further include laces 214 or another securement mechanism that is designed to help retain the shoe 202 on a user's foot. the shoe 202 further includes a sole 216 positioned on the underside (e.g., bottom surface) of the body 210 . the sole 216 includes a plurality of cylindrical bores 218 (see fig. 4 ) that are designed to receive the spikes 204 . the bores 218 may be positioned at various locations on the sole 216 and, in one instance, may be positioned in a front half of the body 210 of the shoe 202 . 
in other embodiments, the bores 218 may be positioned at other locations on the sole 216 including, for example, in a rear half of the body 210 of the shoe 202 , or covering the substantial entirety of the bottom surface of the shoe 202 . the bores 218 may optionally include a raised surface 220 that circumscribes openings 222 defined by the bores 218 . the bores 218 further include threads (not shown) on an interior surface thereof. the threads are designed to releasably interact with corresponding threads on the spikes 204 , as will be described in more detail below. the sole 216 of the shoe 202 may be provided in any material including, for example, rubber, polymer, and/or other combinations of natural or synthetic materials. at least a portion of the bores 218 , including the interior surface, may be provided as a metal. figs. 5a and 5b depict one embodiment of a spike 204 . the spike 204 includes a cylindrical base 230 having threads 232 that circumscribe an exterior surface of the base 230 . the threads 232 terminate at a disc member 234 that protrudes outwardly from the base 230 . a tapered spike body 236 extends upwardly from the disc member 234 and terminates at a point 238 . the spike body 236 includes a lower portion 240 having a substantially rectangular cross-sectional profile and an upper conical portion 242 . a hole 250 extends through the lower portion 240 of the spike body 236 . in some embodiments, the hole 250 is circular or cylindrical. in other embodiments, the hole 250 may be provided in other shapes/sizes including, for example, triangular, hexagonal, square, and the like. in one embodiment, the hole 250 extends entirely through the lower portion 240 of the spike body 236 from a first surface 252 to a second surface 254 . in another embodiment, the hole 250 may extend only partially through the lower portion 240 . the hole 250 is designed to extend through the lower portion 240 of the spike 204 as opposed to the spike body 236 or base 230 . 
although the hole 250 is depicted extending through the spike 204 along a lateral axis, the hole 250 could be provided along a longitudinal axis. the spikes 204 may be provided in groups or may be provided as a single unit. additionally, the spikes 204 may be provided in various lengths including, for example, 1/16 inch, ⅛ inch, ¼ inch, ½ inch, ¾ inch, 1 inch, and the like. the spikes 204 also may be provided as a metal (e.g., steel), a polymer, or another suitable material. fig. 6 depicts the key 206 designed to be used with the spikes 204 and a shoe 202 . the key 206 includes a body 260 having a handle portion 262 and one or more keyed sections. the handle portion 262 may be contoured to be comfortably grasped by a user. the body 260 may further include a circular opening 263 extending through a central portion of the body 260 . a first key 264 may be provided at an end of the body 260 that includes a racetrack shaped opening 266 designed to interact with spikes that are known in the prior art (see fig. 1 ). a second key 270 may be provided that includes an elongate key member 272 that is shaped to correspond to and fit into the hole 250 of the spikes 204 . for example, if the hole 250 of the spikes 204 is circular, the key member 272 may be provided as a cylindrical member. the key member 272 is preferably sized such that the key member 272 can extend through the hole 250 of the spikes 204 . as best shown in fig. 6 , the key member 272 is provided as a substantially l-shaped body having a first section 274 and a second section 276 . the first section 274 protrudes outwardly at a substantially right angle as compared to the second section 276 . a tip 278 of the first section 274 is provided in a shape that is matched to the hole 250 in the spike 204 . in some instances, the entire profile of the key member 272 may be provided in a uniform cross-section. 
in other embodiments, the tip 278 of the first section 274 is provided and matches the hole 250 in the spikes 204 , irrespective of the cross-sectional profile of the rest of the key member 272 . the second section 276 extends from the first section 274 and terminates at an attachment point 280 . the attachment point 280 is enclosed on the inside of the key member 272 and is designed to rotatably hold the key member 272 in a rest (non-use) position as depicted in fig. 6 and an in-use position (not shown). to use the key member 272 , a user can extend their fingers into the opening 263 to grasp the first section 274 of the key member 272 and rotate the key member 272 upwardly along a pivot axis p formed by the attachment point 280 . although the spikes 204 and key 206 are depicted being used with a specific shoe 202 , it is contemplated that the spikes 204 and/or key 206 could be used with any shoe having the appropriate cylindrical openings (or other attachment mechanism) in the soles thereof. in use, each of the spikes 204 may be positioned adjacent to their respective bores 218 on the sole 216 of a shoe 202 such that the thread 232 of the spikes 204 contacts the threaded surface of the bore 218 . optionally, a user may at least partially attach the spike 204 onto the shoe 202 by manually rotating the spike 204 to engage the threads. a user may grasp the handle portion 262 of the key 206 such that the opening 266 of the first key 264 is positioned adjacent to, and is contacting the upper surface of the disc member 234 of the spike 204 . in this configuration, the spike body 236 of the spike 204 will be positioned within the interior of the first key 264 and surrounded by the racetrack shaped opening 266 . once the key 206 is in position, the user may rotate the key 264 in a clockwise manner until the thread 232 of the spike 204 is completely engaged with the thread of the bore 218 . this process may be repeated until all spikes 204 are installed on the shoe 202 . 
to disengage one or more spikes 204 from the shoe 202 , a user grasps the key 206 adjacent the opening 263 and grasps the first section 274 of the second key 270 . the second key 270 is then rotated about the attachment point p such that the first section 274 of the second key 270 protrudes upwardly from, and extends beyond, a top surface of the key 206 . once the second key 270 is in position, the user can align the tip 278 of the first section 274 of the second key 270 with the hole 250 in the spike body 236 . once positioned, the user can insert the tip 278 and a portion of the first section 274 of the second key 270 into and through the hole 250 , and rotate the second key 270 in a counter-clockwise manner until the thread 232 of the spike 204 is completely disengaged from the thread of the bore 218 . in this way, the second key 270 engages with the hole 250 in the spike body 236 and allows for removal of the spike 204 without stripping the spike 204 . this process may be repeated until all spikes 204 are removed from the shoe 202 . further, although discussed with respect to removal of the spikes 204 from the shoe 202 , the second key 270 may be utilized to install the spikes 204 onto the shoe 202 in the manner discussed above, but rotating the second key 270 in a clockwise manner. this attachment and removal process may be repeated until all spikes 204 are installed on, or removed from, the shoe 202 . further, once the user is finished, the second key 270 is then rotated about the attachment point p again such that the first section 274 of the second key 270 is returned to the non-use position depicted in figs. 6 and 7 . it should be apparent that one advantage of the key 206 is the inclusion of both the first key 264 and the second key 270 that may be utilized in the appropriate situation. in other instances, a key 206 may be provided that only includes the second key 270 .
in a further instance, the second key 270 may be provided as an attachment that is designed to be releasably attached to a key 110 known in the prior art. in this version, the second key 270 may include a pin or other mechanism that allows for releasable attachment between the second key 270 and prior art key 110 . it will be appreciated by those skilled in the art that while the invention has been described above in connection with particular embodiments and examples, the invention is not necessarily so limited, and that numerous other embodiments, examples, uses, modifications and departures from the embodiments, examples and uses are intended to be encompassed by the claims attached hereto. the entire disclosure of each patent and publication cited herein is incorporated by reference, as if each such patent or publication were individually incorporated by reference herein. various features and advantages of the invention are set forth in the following claims.
cancer treatment combinations
there are provided, inter alia, compositions and methods for treatment of cancer. the methods include administering to a subject in need a therapeutically effective amount of a bruton's tyrosine kinase (btk) antagonist and a ror-1 antagonist. further provided are pharmaceutical compositions including a btk antagonist, ror-1 antagonist and a pharmaceutically acceptable excipient. in embodiments, the btk antagonist is ibrutinib and the ror-1 antagonist is cirmtuzumab.
1 . a method of treating cancer in a subject in need thereof, said method comprising administering to said subject a therapeutically effective amount of a bruton's tyrosine kinase (btk) antagonist and a tyrosine kinase-like orphan receptor 1 (ror-1) antagonist. 2 . the method of claim 1 , wherein said btk antagonist is a small molecule. 3 . the method of claim 1 , wherein said btk antagonist is ibrutinib, idelalisib, fostamatinib, acalabrutinib, ono/gs-4059, bgb-3111 or cc-292 (avl-292). 4 . the method of claim 1 , wherein said btk antagonist is ibrutinib. 5 . the method of claim 1 , wherein said ror-1 antagonist is an antibody or a small molecule. 6 . the method of claim 1 , wherein said ror-1 antagonist is an anti-ror-1 antibody. 7 . the method of claim 5 , wherein said antibody comprises a humanized heavy chain variable region and a humanized light chain variable region, wherein said humanized heavy chain variable region comprises the sequences set forth in seq id no:1, seq id no:2, and seq id no:3; and wherein said humanized light chain variable region comprises the sequences set forth in seq id no:4, seq id no:5, and seq id no:6. 8 . the method of claim 5 , wherein said antibody is cirmtuzumab. 9 . the method of claim 5 , wherein said antibody comprises a humanized heavy chain variable region and a humanized light chain variable region, wherein said humanized heavy chain variable region comprises the sequences set forth in seq id no:7, seq id no:8, and seq id no:9; and wherein said humanized light chain variable region comprises the sequences set forth in seq id no:10, seq id no:11, and seq id no:12. 10 . the method of claim 1 , wherein said btk antagonist and said ror-1 antagonist are administered in a combined synergistic amount. 11 . the method of claim 1 , wherein said btk antagonist and said ror-1 antagonist are administered simultaneously or sequentially. 12 . 
the method of claim 1 , wherein said ror-1 antagonist is administered at a first time point and said btk antagonist is administered at a second time point, wherein said first time point precedes said second time point. 13 . the method of claim 1 , wherein said btk antagonist and said ror-1 antagonist are admixed prior to administration. 14 . the method of claim 1 , wherein said btk antagonist is administered at an amount of about 1 mg/kg, 2 mg/kg, 5 mg/kg, 10 mg/kg or 15 mg/kg. 15 . the method of claim 1 , wherein said btk antagonist is administered at an amount of about 5 mg/kg. 16 . the method of claim 1 , wherein said btk antagonist is administered at an amount of about 420 mg. 17 . the method of claim 1 , wherein said ror-1 antagonist is administered at an amount of about 1 mg/kg, 2 mg/kg, 3 mg/kg, 5 mg/kg or 10 mg/kg. 18 . the method of claim 1 , wherein said ror-1 antagonist is administered at an amount of about 2 mg/kg. 19 . the method of claim 1 , wherein said btk antagonist is administered at an amount of about 5 mg/kg and said ror-1 antagonist is administered at about 2 mg/kg. 20 . the method of claim 1 , wherein said btk antagonist is administered at an amount of about 5 mg/kg and said ror-1 antagonist is administered at about 1 mg/kg. 21 . the method of claim 1 , wherein said btk antagonist is administered daily over the course of at least 14 days. 22 . the method of claim 1 , wherein said btk antagonist is administered daily over the course of about 28 days. 23 . the method of claim 1 , wherein said ror-1 antagonist is administered once over the course of about 28 days. 24 . the method of claim 1 , wherein said btk antagonist is administered intravenously. 25 . the method of claim 1 , wherein said ror-1 antagonist is administered intravenously. 26 . the method of claim 1 , wherein said subject is a mammal. 27 . the method of claim 1 , wherein said subject is a human. 28 .
the method of claim 1 , wherein said cancer is lymphoma, leukemia, myeloma, aml, b-all, t-all, renal cell carcinoma, colon cancer, colorectal cancer, breast cancer, epithelial squamous cell cancer, melanoma, stomach cancer, brain cancer, lung cancer, pancreatic cancer, cervical cancer, ovarian cancer, liver cancer, bladder cancer, prostate cancer, testicular cancer, thyroid cancer, head and neck cancer, uterine cancer, adenocarcinoma, or adrenal cancer. 29 . the method of claim 1 , wherein said cancer is chronic lymphocytic leukemia (cll), small lymphocytic lymphoma, marginal cell b-cell lymphoma, burkitt's lymphoma, or b cell leukemia. 30 . a pharmaceutical composition comprising a btk antagonist, a ror-1 antagonist and a pharmaceutically acceptable excipient. 31 . a pharmaceutical composition comprising a btk antagonist, an anti-ror-1 antibody and a pharmaceutically acceptable excipient, wherein said btk antagonist and said anti-ror-1 antibody are present in a combined synergistic amount, wherein said combined synergistic amount is effective to treat cancer in a subject in need thereof. 32 . the pharmaceutical composition of claim 30 , wherein said btk antagonist is a small molecule. 33 . the pharmaceutical composition of claim 30 , wherein said btk antagonist is ibrutinib, idelalisib, fostamatinib, acalabrutinib, ono/gs-4059, bgb-3111 or cc-292 (avl-292). 34 . the pharmaceutical composition of claim 30 , wherein said btk antagonist is ibrutinib. 35 . the pharmaceutical composition of claim 30 , wherein said ror-1 antagonist is an antibody or a small molecule. 36 . the pharmaceutical composition of claim 30 , wherein said ror-1 antagonist is an anti-ror-1 antibody. 37 . 
the pharmaceutical composition of claim 35 , wherein said antibody comprises a humanized heavy chain variable region and a humanized light chain variable region, wherein said humanized heavy chain variable region comprises the sequences set forth in seq id no:1, seq id no:2, and seq id no:3; and wherein said humanized light chain variable region comprises the sequences set forth in seq id no:4, seq id no:5, and seq id no:6. 38 . the pharmaceutical composition of claim 35 , wherein said antibody is cirmtuzumab. 39 . the pharmaceutical composition of claim 35 , wherein said antibody comprises a humanized heavy chain variable region and a humanized light chain variable region, wherein said humanized heavy chain variable region comprises the sequences set forth in seq id no:7, seq id no:8, and seq id no:9; and wherein said humanized light chain variable region comprises the sequences set forth in seq id no:10, seq id no:11, and seq id no:12.
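the per-kilogram dosing recited in the claims above (e.g., about 5 mg/kg of the btk antagonist and about 2 mg/kg of the ror-1 antagonist) implies an absolute dose that scales with the subject's body mass; a minimal illustrative sketch of that arithmetic, assuming a hypothetical 70-kg subject (the function name and example weight are our own illustrations, not part of the claims):

```python
def absolute_dose_mg(dose_mg_per_kg: float, body_mass_kg: float) -> float:
    """convert a per-kilogram dose rate into an absolute dose in milligrams."""
    return dose_mg_per_kg * body_mass_kg

# hypothetical 70-kg subject, using the dose rates recited in claims 15 and 18:
btk_dose = absolute_dose_mg(5, 70)    # 5 mg/kg btk antagonist -> 350 mg
ror1_dose = absolute_dose_mg(2, 70)   # 2 mg/kg ror-1 antagonist -> 140 mg
print(btk_dose, ror1_dose)
```

note that claim 16 instead recites a fixed amount of about 420 mg of the btk antagonist, which does not scale with body mass.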
cross references to related applications this application claims priority to u.s. provisional application no. 62/355,171, filed jun. 27, 2016 which is hereby incorporated by reference in its entirety and for all purposes. statement as to rights to inventions made under federally sponsored research and development this invention was made with government support under grant no. ca081534 awarded by the national institutes of health. the government has certain rights in the invention. reference to a "sequence listing," a table, or a computer program listing appendix submitted as an ascii file the sequence listing written in file 48537-582001us sl/st25.txt, created on june 19, 2017, 10,919 bytes, machine format ibm-pc, ms-windows operating system, is hereby incorporated by reference. background signaling via the bcr (b-cell receptor) is thought to play a role in the pathogenesis and/or progression of disease, e.g., chronic lymphocytic leukemia (cll). moreover, agents that target b-cell receptor (bcr) signaling in lymphoid and leukemia malignancies including ibrutinib and acalabrutinib (4-{8-amino-3-[(2s)-1-(2-butynoyl)-2-pyrrolidinyl]imidazo[1,5-a]pyrazin-1-yl}-n-(2-pyridinyl)benzamide), which inhibit bruton's tyrosine kinase (btk), have shown significant clinical activity. by disrupting b-cell signaling pathways, btk treatment has been associated with a dramatic lymph node response, but eradication of disease and relapse in high risk disease remain challenges. provided here are solutions to these and other problems in the art. summary the compositions and methods provided herein are, inter alia, useful for the treatment of leukemia. for example, provided herein are surprisingly effective methods for using the combination of anti-ror-1 antibody with bcr inhibitors to treat chronic lymphocytic leukemia (cll).
in an aspect is provided a method of treating cancer in a subject in need thereof, the method including administering to the subject a therapeutically effective amount of a bruton's tyrosine kinase (btk) antagonist and a tyrosine kinase-like orphan receptor 1 (ror-1) antagonist. in an aspect is provided a pharmaceutical composition including a btk antagonist, a ror-1 antagonist and a pharmaceutically acceptable excipient. in an aspect is provided a pharmaceutical composition including a btk antagonist, an anti-ror-1 antibody and a pharmaceutically acceptable excipient, wherein the btk antagonist and the anti-ror-1 antibody are present in a combined synergistic amount, wherein the combined synergistic amount is effective to treat cancer in a subject in need thereof. in an aspect, there is provided a method of treating cancer in a subject in need thereof. the method includes administering to the subject a therapeutically effective amount of a bruton's tyrosine kinase (btk) antagonist and an anti-ror-1 antibody. in another aspect, there is provided a pharmaceutical composition including a bruton's tyrosine kinase (btk) antagonist, an anti-ror-1 antibody, and a pharmaceutically acceptable excipient. brief description of the drawings figs. 1a-1d . uc-961 inhibits wnt5a-induced rac1 activation in ibrutinib-treated cll cells. ( fig. 1a ) activated rac1 was measured in cll cells incubated with or without wnt5a and treated with uc-961 or ibrutinib, as indicated on the top of each lane. ( fig. 1b ) wnt5a-induced activation of rac1 in cll cells without treatment or treated with uc-961 (10 μg/ml) and/or ibrutinib (0.5 μm). mean rac1 activation observed in five independent experiments is shown (n=5). ( fig. 1c ) cll cells were collected from ibrutinib treated patients (n=5). activated rac1 was measured in these cll cells treated with or without wnt5a or uc-961 indicated above each lane in vitro. ( fig. 
1d ) rac1 activation was measured in cll cells collected from patients treated with ibrutinib, which were treated with wnt5a and/or uc-961. mean rac1 activation observed in five independent experiments is shown (n=5). the numbers below each lane are ratios of band iod of activated versus total gtpase normalized to untreated samples. data are shown as mean±sem for each group. **p<0.01; ***p<0.001; ****p<0.0001, as calculated using one-way anova with tukey's multiple comparisons test. figs. 2a-2b . uc-961 inhibits wnt5a-enhanced proliferation in ibrutinib-treated cll cells. ( fig. 2a ) cd154-induced proliferation of cfse-labeled cll cells (n=6) with or without wnt5a and treated with uc-961 or ibrutinib. one representative cll sample is shown with the percent of dividing cells. ( fig. 2b ) the bars indicate the mean proportions of cll cells with diminished cfse fluorescence from each of 6 different patients for each culture condition indicated at the bottom. data are shown as mean±sem, *p<0.05; **p<0.01, as determined by one-way anova with tukey's multiple comparisons test. fig. 3 . additive inhibitory effect of uc-961 and ibrutinib in cll patient derived xenograft mice. cll cells were injected into the peritoneal cavity of rag2 −/− γ c −/− mice 1 d before treatment as indicated. peritoneal lavage was collected 7 d after cell injection and subjected to residual cll determination by cell counting and flow cytometry analysis following staining with mab specific for cd5, cd19, and cd45. each bar in the graph represents percentage of residual cll cells harvested from mice after treatment, normalized with respect to cells harvested from mice without treatment. data shown are mean±sem from 3 different patients with 5 mice in each group. p-values were determined by one-way anova with tukey's multiple comparisons test. figs. 4a-4d . uc-961 inhibits wnt5a-enhanced proliferation in ibrutinib-treated ror-1×tcl1 leukemia cells. ( fig.
4a ) activated rac1 was measured in ror-1×tcl1 leukemia cells incubated with or without wnt5a and treated with uc-961 (10 μg/ml) and/or ibrutinib (0.5 μm), as indicated on the top of each lane. the numbers below each lane are ratios of band iod of activated versus total gtpase normalized to untreated samples. ( fig. 4b ) wnt5a-induced activation of rac1 in ror-1×tcl1 leukemia cells without treatment or treated with uc-961 (10 μg/ml) and/or ibrutinib (0.5 μm). mean rac1 activation observed in five independent experiments is shown (n=5). ( fig. 4c ) cd154-induced proliferation of cfse-labeled ror-1×tcl1 leukemia cells (n=6) with or without wnt5a and treated with uc-961 or ibrutinib. one representative ror-1×tcl1 leukemia cell sample is shown with the percent of dividing cells. ( fig. 4d ) the bars indicate the mean proportions of ror-1×tcl1 leukemia cells with diminished cfse fluorescence from each of 5 different mice for each culture condition indicated at the bottom. data are shown as mean±sem; **p<0.01; ***p<0.001; ****p<0.0001, as calculated using one-way anova with tukey's multiple comparisons test. figs. 5a-5c . additive inhibitory effect of uc-961 and ibrutinib in ror-1×tcl1 leukemia xenograft mice. ( fig. 5a ) representative spleens of rag2 −/− γ c −/− mice were shown, which were collected 25 days after receiving an intravenous infusion of 2×10 4 ror-1×tcl1 leukemia cells. ( fig. 5b ) combination of uc-961 and ibrutinib inhibits engraftment of ror-1×tcl1 leukemia cells in rag2 −/− γ c −/− mice. rag2 −/− γ c −/− mice were engrafted with 2×10 4 ror-1×tcl1 leukemia cells and then given a single i.v. injection of 1-mg/kg uc-961 on day 1 or daily doses of 5-mg/kg ibrutinib.
contour plots depicting the fluorescence of splenic lymphocytes harvested on day 25 post adoptive transfer from representative mice (n=5) that received treatment indicated at the top, as determined by light scatter characteristics, after staining the cells with fluorochrome-conjugated mab specific for b220 (abscissa) and human ror-1 (ordinate). the percentages in the top right of each contour plot indicate the proportion of the blood mononuclear cells having cd5 + b220 low ror-1 + phenotype of the leukemia cells. ( fig. 5c ) total number of ror-1×tcl1 leukemia cells in spleens of recipient rag2 −/− γ c −/− mice 25 days after adoptive transfer of 2×10 4 ror-1×tcl1 leukemia cells that received a single injection of 1-mg/kg uc-961 or daily injections of 5-mg/kg ibrutinib, as determined by flow cytometric analysis and cell count. each shape represents the number of leukemia cells found in individual mice. data are shown as mean±sem for each group of animals (n=5); ***p<0.001, as calculated using one-way anova with tukey's multiple comparisons test. figs. 6a-6d . ibrutinib inhibits bcr signaling, but not wnt5a/ror-1 signaling. ( fig. 6a ) activated rac1 was measured in cll cells incubated with or without wnt5a or ibrutinib at concentrations of 0, 0.25, 0.5 or 1.0 μm, as indicated on the top of each lane. ( fig. 6b ) cll cells were treated with increasing doses of ibrutinib for 1 hour and then assayed for occupancy of the btk active site. ( fig. 6c ) anti-μ-induced calcium mobilization in cll cells after treatment with or without different doses of ibrutinib. the relative mean fluorescence intensity in intracellular calcium is plotted as a function of time. the arrow labeled “igm” indicates the time at which the anti-μ was added to the cells. ( fig. 6d ) determination of cell viability by staining with dioc6 and pi.
presented are dot maps of cll cells from a representative patient defining the relative green (dioc6) and red (pi) fluorescence intensities of the leukemia cells on the horizontal and vertical axes, respectively. the vital cell population (dioc6 + pi − ) was determined for cll cells after treatment with different doses of ibrutinib. the percentage of vital cells is displayed in each dot map. figs. 7a-7e . wnt5a induces rac1 activation in cll cells treated with ibrutinib. ( fig. 7a ) cfse assay for cll proliferation induced by wnt5a without cd154. fluorescence of cfse-labeled cll cells (n=6) co-cultured for 5 days with wild-type hela cells without (left panel) or with (right panel) exogenous wnt5a in the presence of il-4/10. the results of assays on one representative cll sample are shown with the percent of dividing cells indicated in the lower left of each panel. ( fig. 7b ) cfse assay for ror-1×tcl1 leukemia cell proliferation induced by wnt5a without cd154. fluorescence of cfse-labeled cll cells (n=6) co-cultured for 5 days with hela cells without (left panel) or with (right panel) exogenous wnt5a in the presence of il-4/10. the results of assays on one representative cll sample are shown with the percent of dividing cells indicated in the lower left of each panel. ( fig. 7c ) rac1 activation was measured in serum-starved tcl1 leukemia cells, which were treated with wnt5a for 30 min. whole-cell lysates were run on parallel gels to examine total rac1. the numbers below each lane are ratios of band iod of activated versus total gtpase, normalized with respect to that of untreated samples. ( fig. 7d ) cfse assay for tcl1 leukemia cell proliferation induced by wnt5a and/or cd154. fluorescence of cfse-labeled tcl1 leukemia cells (n=3) co-cultured for 5 days with wild-type hela or hela cd154 cells without or with exogenous wnt5a in the presence of il-4/10.
the results of assays on one representative tcl1 leukemia sample are shown with the percent of dividing cells indicated in the lower left of each panel. ( fig. 7e ) mean proportions of dividing cll cells from tcl1 leukemia cells (n=3) under conditions indicated at the bottom. data are shown as mean±sem for each group; p-values were calculated using one-way anova with tukey's multiple comparisons test; ns: non-significant. figs. 8a-8b . dose-dependent inhibitory effect of uc-961 or ibrutinib in ror-1×tcl1 leukemia xenograft mice. ( fig. 8a ) dose-dependent inhibitory effect of ibrutinib in ror-1×tcl1 leukemia xenograft mice. ( fig. 8b ) dose-dependent inhibitory effect of uc-961 in ror-1×tcl1 leukemia xenograft mice. each shape represents the number of leukemia cells found in individual mice. data are shown as mean±sem for each group of animals (n=6); *p<0.05; ***p<0.001, as calculated using one-way anova with tukey's multiple comparisons test. fig. 9 . ror-1 was induced by bcr signaling inhibitors. fig. 10 . additive effect of anti-ror-1 antibody combined with ibrutinib on clearing cll cells in a niche-dependent animal model. fig. 11 . additive effect of anti-ror-1 antibody combined with ibrutinib on ror-1×tcl1 mouse leukemia. figs. 12a-12f . cirmtuzumab inhibits wnt5a-induced rac1 activation in ibrutinib-treated cll cells. ( fig. 12a ) activated rac1 was measured in the freshly isolated ibrutinib-treated cll cells or isolated ibrutinib-treated cll cells cultured in serum-free media without or with exogenous wnt5a (200 ng/ml), as indicated on the top of each lane. ( fig. 12b ) activated rac1 was measured in the freshly isolated ibrutinib-treated cll cells or isolated ibrutinib-treated cll cells cultured in serum-free media with or without wnt5a (200 ng/ml). mean rac1 activation observed in four independent experiments is shown (n=4). ( fig. 12c ) cll cells were collected from ibrutinib-treated patients (n=4).
activated rac1 was measured in cll cells treated with or without wnt5a (200 ng/ml) or cirmtuzumab (10 μg/ml), as indicated above each lane of the immunoblot. ( fig. 12d ) rac1 activation was measured in cll cells collected from patients undergoing therapy with ibrutinib, which were treated with wnt5a (200 ng/ml) and/or cirmtuzumab (10 μg/ml). the average rac1 activation observed in five independent experiments is shown (n=5). ( fig. 12e ) activated rac1 was measured in cll cells incubated with or without wnt5a and treated with cirmtuzumab (10 μg/ml) or ibrutinib (0.5 μm), as indicated on the top of each lane. ( fig. 12f ) wnt5a-induced activation of rac1 in cll cells without treatment or treated with cirmtuzumab and/or ibrutinib. mean rac1 activation observed in five independent experiments is shown (n=5). the numbers below each lane are ratios of band iod of activated versus total gtpase normalized to untreated samples. data are shown as mean±sem for each group. **p<0.01; ***p<0.001; ****p<0.0001, as calculated using one-way anova with tukey's multiple comparisons test. figs. 13a-13d . cirmtuzumab inhibits wnt5a-enhanced proliferation in ibrutinib-treated cll cells. ( fig. 13a ) cd154-induced proliferation of cfse-labeled cll cells (n=6) with or without wnt5a and treated with cirmtuzumab (10 μg/ml) or ibrutinib (0.5 μm). one representative cll sample is shown with the percent of dividing cells. ( fig. 13b ) the bars indicate the mean proportions of cll cells with diminished cfse fluorescence from each of 6 different patients for each culture condition indicated at the bottom. ( fig. 13c ) cll cells were co-cultured on hela cd154 in the presence of il-4/10 or wnt5a, and then treated with cirmtuzumab (10 μg/ml) or ibrutinib (0.5 μm) for 4 days, then subjected to cell-cycle analysis following pi staining. one representative cll sample is shown. ( fig. 13d ) the mean fraction of cells in s/g2/m phase for all 4 patients tested is presented.
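the legends above repeatedly cite one-way anova followed by tukey's multiple comparisons test. as an illustrative sketch only (the group values below are invented placeholders, not data from any figure, and this is not the authors' analysis code), the one-way anova f statistic underlying that procedure can be computed as:

```python
# illustrative sketch of the one-way anova f statistic cited in the
# legends; the group values are invented placeholders, not study data.

def one_way_anova_f(groups):
    """return the one-way anova f statistic for a list of groups."""
    k = len(groups)                      # number of treatment groups
    n = sum(len(g) for g in groups)      # total observations
    grand_mean = sum(sum(g) for g in groups) / n
    # between-group sum of squares (k - 1 degrees of freedom)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups)
    # within-group sum of squares (n - k degrees of freedom)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g)
                    for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# hypothetical normalized rac1-activation ratios for three conditions
untreated = [1.0, 1.1, 0.9]
wnt5a_only = [2.8, 3.1, 2.9]
wnt5a_plus_mab = [1.2, 1.3, 1.1]
f_stat = one_way_anova_f([untreated, wnt5a_only, wnt5a_plus_mab])
```

a significant f statistic would then be followed by tukey's hsd for the pairwise comparisons reported as p-values in the legends.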
data are shown as mean±sem, *p<0.05; **p<0.01, as determined by one-way anova with tukey's multiple comparisons test. figs. 14a-14b . effect of treatment with cirmtuzumab and/or ibrutinib on cll patient-derived xenografts. ( fig. 14a ) cll cells were injected into the peritoneal cavity of rag2 −/− γ c −/− mice 1 day before treatment. peritoneal lavage was collected 7 days after cell injection and subjected to residual cll determination by cell counting and flow cytometry analysis following staining with mab specific for cd5, cd19, and cd45. the percentages shown in the top right of each contour plot indicate the proportion of cll cells among the cells harvested from mice after treatment. ( fig. 14b ) each bar in the graph represents the percentage of cll cells among harvested cells from mice after treatment, normalized with respect to the percentage of cll cells among cells harvested from mice without treatment, which was set to 100%. data shown are mean±sem from 3 different patients with 5 mice in each group; ***p<0.001; ****p<0.0001, as calculated using one-way anova with tukey's multiple comparisons test. figs. 15a-15c . cirmtuzumab inhibits wnt5a-enhanced proliferation in ibrutinib-treated ror-1×tcl1 leukemia cells. ( fig. 15a ) activated rac1 was measured in ror-1×tcl1 leukemia cells incubated with or without wnt5a (200 ng/ml) and treated with cirmtuzumab (10 μg/ml) and/or ibrutinib (0.5 μm), as indicated on the top of each lane. the numbers below each lane are ratios of the band densities of activated versus total gtpase, normalized to untreated samples. ( fig. 15b ) activation of rac1 in ror-1×tcl1 leukemia cells treated with wnt5a with or without cirmtuzumab (10 μg/ml) and/or ibrutinib (0.5 μm). the average rac1 activation observed in five independent experiments is shown (n=5). ( fig.
15c ) cd154-induced proliferation of cfse-labeled ror-1×tcl1 leukemia cells (n=5) with or without treatment with wnt5a (200 ng/ml) and/or cirmtuzumab (10 μg/ml) or ibrutinib (0.5 μm). the bars indicate the mean proportions of ror-1×tcl1 leukemia cells from each of 5 different mice that have diminished cfse fluorescence for each culture condition, as indicated at the bottom. data are shown as mean±sem; *p<0.05; **p<0.01; ****p<0.0001, as calculated using one-way anova with tukey's multiple comparisons test. figs. 16a-16c . additive inhibitory effect of treatment with cirmtuzumab and ibrutinib in immunodeficient mice engrafted with histocompatible ror-1 + leukemia. ( fig. 16a ) representative spleens of rag2 −/− γ c −/− mice are shown, which were collected 25 days after receiving an intravenous infusion of 2×10 4 ror-1×tcl1 leukemia cells. ( fig. 16b ) combination of cirmtuzumab and ibrutinib inhibits engraftment of ror-1×tcl1 leukemia cells in rag2 −/− γ c −/− mice. rag2 −/− γ c −/− mice were engrafted with 2×10 4 ror-1×tcl1 leukemia cells and then given a single intravenous injection of 1 mg/kg cirmtuzumab on day 1, or daily doses of 5 mg/kg ibrutinib via oral gavage. contour plots depicting the fluorescence of lymphocytes harvested on day 25 post adoptive transfer from representative mice (n=5) that received treatment, as indicated at the top, after staining the cells with fluorochrome-conjugated mab specific for b220 (abscissa) and human ror-1 (ordinate). the percentages in the top right of each contour plot indicate the proportion of the blood mononuclear cells having cd5 + b220 low ror-1 + phenotype of the leukemia cells. ( fig. 16c ) total number of ror-1×tcl1 leukemia cells in spleens of recipient rag2 −/− γ c −/− mice 25 days after adoptive transfer of 2×10 4 ror-1×tcl1 leukemia cells that received a single injection of 1 mg/kg cirmtuzumab or daily injections of 5 mg/kg ibrutinib. each symbol represents the number of leukemia cells found in individual mice.
data are shown as mean±sem for each group of animals (n=5); *p<0.05, **p<0.01, ***p<0.001, as calculated using one-way anova with tukey's multiple comparisons test. figs. 17a-17c . additive inhibitory effect of treatment with cirmtuzumab and ibrutinib in immunocompetent mice engrafted with histocompatible ror-1 + leukemia. ( fig. 17a ) representative spleens of ror-1-tg mice are shown, which were collected 25 days after receiving an intravenous infusion of 2×10 4 ror-1×tcl1 leukemia cells. ( fig. 17b ) combination of cirmtuzumab and ibrutinib inhibits engraftment of ror-1×tcl1 leukemia cells in ror-1-tg mice. ror-1-tg mice were engrafted with 2×10 4 ror-1×tcl1 leukemia cells and then given weekly intravenous injections of 10 mg/kg cirmtuzumab or daily doses of 5 mg/kg ibrutinib via oral gavage. contour plots depicting the fluorescence of lymphocytes harvested 25 days after adoptive transfer from representative mice (n=6) that received treatment, as indicated at the top, after staining the cells with fluorochrome-conjugated mab specific for b220 (abscissa) and human ror-1 (ordinate). the percentages in the top right of each contour plot indicate the proportion of the blood mononuclear cells having cd5 + b220 low ror-1 + phenotype of the leukemia cells. ( fig. 17c ) total number of ror-1×tcl1 leukemia cells in spleens of recipient ror-1-tg mice 28 days after adoptive transfer of 2×10 4 ror-1×tcl1 leukemia cells that received weekly injections of 10 mg/kg cirmtuzumab and/or daily doses of ibrutinib (at 5 mg/kg). each symbol represents the number of leukemia cells found in individual mice. data are shown as mean±sem for each group of animals (n=6); *p<0.05, **p<0.01, as calculated using one-way anova with tukey's multiple comparisons test. figs. 18a-18b . cfse assay for cll proliferation induced by wnt5a without cd154. ( fig. 18a ) gating strategy for dividing cll cells.
cells were first gated on size and singularity followed by pi exclusion to identify live cells for further analysis. live cd5 and cd19 cll cells were examined for fluorescence after staining with cfse. the percentages of dividing cll cells were calculated by computing the proportion of cells that had lower cfse fluorescence. ( fig. 18b ) fluorescence of cfse-labeled cll cells (n=6) co-cultured for 5 days with wild-type hela cells without (top panel) or with (lower panel) exogenous wnt5a in the presence of il-4/10. the results of one representative cll sample are shown with the percent of dividing cells indicated in the lower left corner of each histogram. figs. 19a-19b . cell-cycle analysis of cll cells treated with cirmtuzumab or ibrutinib, with or without exogenous wnt5a. ( fig. 19a ) leukemia cells were co-cultured on hela cd154 in the presence of il-4/10 or wnt5a, and then treated with cirmtuzumab (10 μg/ml) or ibrutinib (0.5 μm) for 4 days, then subjected to cell-cycle analysis following pi staining. one representative leukemia sample is shown. ( fig. 19b ) the mean proportions of leukemia cells in s/g2/m phase for all 3 samples tested are presented. data are shown as mean±sem, *p<0.05; **p<0.01, as determined by one-way anova with tukey's multiple comparisons test. figs. 20a-20b . dose-dependent inhibitory effect of cirmtuzumab or ibrutinib on ror-1×tcl1 leukemia-engrafted mice. ( fig. 20a ) dose-dependent inhibitory effect of ibrutinib in ror-1×tcl1 leukemia-engrafted mice. ( fig. 20b ) dose-dependent inhibitory effect of cirmtuzumab in ror-1×tcl1 leukemia-engrafted mice. each symbol represents the number of leukemia cells found in individual mice. data are shown as mean±sem for each group of animals (n=6); *p<0.05; ***p<0.001, as calculated using one-way anova with tukey's multiple comparisons test. fig. 21 . cfse assay for tcl1 leukemia cell proliferation induced by wnt5a and/or cd154.
fluorescence of cfse-labeled tcl1 leukemia cells (n=3) co-cultured for 5 days with wild-type hela or hela cd154 cells without or with exogenous wnt5a in the presence of il-4/10. data are shown as mean±sem for each group; p-values were calculated using one-way anova with tukey's multiple comparisons test. figs. 22a-22c . antigen expression in primary mcl and wnt5a level in mcl patient plasma. ( fig. 22a ) gating on the mcl cells, which express cd5 and cd19 (top left). the shaded histograms show the fluorescence of the gated mcl cells stained with fluorochrome-conjugated mab specific for other surface antigens. in contrast to cll cells, the mcl cells failed to stain with a mab specific for cd200 (top right) or cd23 (bottom left). similar to cll, mcl expresses high levels of ror-1 (bottom right). the open histograms depict fluorescence of cells stained with an isotype control antibody. ( fig. 22b ) δmfi of ror-1 in mcl vs cll. ns=not significant. ( fig. 22c ) plasma wnt5a in patients with mcl vs. age-matched control subjects (n=4 per group; p<0.05, student's t test). figs. 23a-23d . analysis of rac1 activation and cell-cycle in mcl cells. ( fig. 23a ) activated rac1 was measured in mcl cells treated with or without wnt5a (200 ng/ml), with or without ibrutinib (0.5 μm) or with or without cirmtuzumab (10 μg/ml), as indicated above each lane of the immunoblot. the numbers below each lane are ratios of band iod of activated versus total gtpase normalized to untreated samples. ( fig. 23b ) wnt5a-induced activation of rac1 in cll cells without treatment or treated with cirmtuzumab and/or ibrutinib. mean rac1 activation observed in five independent experiments is shown (n=3). the numbers below each lane are ratios of band iod of activated versus total gtpase normalized to untreated samples. data are shown as mean±sem for each group. ****p<0.0001, as calculated using one-way anova with tukey's multiple comparisons test. ( fig.
23c ) mcl cells were co-cultured on hela cd154 in the presence of il-4/10 or wnt5a, and then treated with cirmtuzumab (10 μg/ml) or ibrutinib (0.5 μm) for 4 days, then subjected to cell-cycle analysis following pi staining. one representative cll sample is shown. ( fig. 23d ) the mean fraction of cells in s/g2 phase for all mcl patients tested is presented (n=3). data are shown as mean±sem; *p<0.05; **p<0.01, as determined by one-way anova with tukey's multiple comparisons test. ns=not significant.
detailed description
definitions
while various embodiments and aspects of the present invention are shown and described herein, it will be obvious to those skilled in the art that such embodiments and aspects are provided by way of example only. numerous variations, changes, and substitutions will now occur to those skilled in the art without departing from the invention. it should be understood that various alternatives to the embodiments of the invention described herein may be employed in practicing the invention. the section headings used herein are for organizational purposes only and are not to be construed as limiting the subject matter described. all documents, or portions of documents, cited in the application including, without limitation, patents, patent applications, articles, books, manuals, and treatises are hereby expressly incorporated by reference in their entirety for any purpose. the abbreviations used herein have their conventional meaning within the chemical and biological arts. the chemical structures and formulae set forth herein are constructed according to the standard rules of chemical valency known in the chemical arts. where substituent groups are specified by their conventional chemical formulae, written from left to right, they equally encompass the chemically identical substituents that would result from writing the structure from right to left, e.g., —ch 2 o— is equivalent to —och 2 —.
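the left-to-right versus right-to-left equivalence noted above (—ch 2 o— versus —och 2 —) can be made concrete with a small sketch; the list-of-units representation below is a toy convention invented for this illustration, not output of any chemistry software:

```python
# toy sketch: a divalent linker written left-to-right denotes the same
# substituent as its right-to-left form; canonicalize by taking the
# lexicographically smaller of the two reading directions. the atom-unit
# list representation is an invented convention for illustration only.

def canonical_linker(units):
    """return a direction-independent canonical form of a linear linker."""
    forward = tuple(units)
    backward = tuple(reversed(units))
    return min(forward, backward)

# -CH2O- and -OCH2- canonicalize to the same tuple
assert canonical_linker(["CH2", "O"]) == canonical_linker(["O", "CH2"])
```

the same comparison extends to longer linkers, e.g. —ch 2 ch 2 o— versus —och 2 ch 2 —.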
the term “alkyl,” by itself or as part of another substituent, means, unless otherwise stated, a straight (i.e., unbranched) or branched non-cyclic carbon chain (or carbon), or combination thereof, which may be fully saturated, mono- or polyunsaturated and can include di- and multivalent radicals, having the number of carbon atoms designated (i.e., c 1 -c 10 means one to ten carbons). examples of saturated hydrocarbon radicals include, but are not limited to, groups such as methyl, ethyl, n-propyl, isopropyl, n-butyl, t-butyl, isobutyl, sec-butyl, (cyclohexyl)methyl, homologs and isomers of, for example, n-pentyl, n-hexyl, n-heptyl, n-octyl, and the like. an unsaturated alkyl group is one having one or more double bonds or triple bonds. examples of unsaturated alkyl groups include, but are not limited to, vinyl, 2-propenyl, crotyl, 2-isopentenyl, 2-(butadienyl), 2,4-pentadienyl, 3-(1,4-pentadienyl), ethynyl, 1- and 3-propynyl, 3-butynyl, and the higher homologs and isomers. an alkoxy is an alkyl attached to the remainder of the molecule via an oxygen linker (—o—). an alkyl moiety may be an alkenyl moiety. an alkyl moiety may be an alkynyl moiety. an alkyl moiety may be fully saturated. an alkenyl may include more than one double bond and/or one or more triple bonds in addition to the one or more double bonds. an alkynyl may include more than one triple bond and/or one or more double bonds in addition to the one or more triple bonds. the term “alkylene,” by itself or as part of another substituent, means, unless otherwise stated, a divalent radical derived from an alkyl, as exemplified, but not limited by, —ch 2 ch 2 ch 2 ch 2 —. typically, an alkyl (or alkylene) group will have from 1 to 24 carbon atoms, with those groups having 10 or fewer carbon atoms being preferred in the present invention. a “lower alkyl” or “lower alkylene” is a shorter chain alkyl or alkylene group, generally having eight or fewer carbon atoms.
the term “alkenylene,” by itself or as part of another substituent, means, unless otherwise stated, a divalent radical derived from an alkene. the term “heteroalkyl,” by itself or in combination with another term, means, unless otherwise stated, a stable straight or branched non-cyclic chain, or combinations thereof, including at least one carbon atom and at least one heteroatom (e.g. o, n, p, si, and s), and wherein the nitrogen and sulfur atoms may optionally be oxidized, and the nitrogen heteroatom may optionally be quaternized. the heteroatom(s) (e.g. o, n, p, s, and si) may be placed at any interior position of the heteroalkyl group or at the position at which the alkyl group is attached to the remainder of the molecule. examples include, but are not limited to: —ch 2 —ch 2 —o—ch 3 , —ch 2 —ch 2 —nh—ch 3 , —ch 2 —ch 2 —n(ch 3 )—ch 3 , —ch 2 —s—ch 2 —ch 3 , —ch 2 —ch 2 —s(o)—ch 3 , —ch 2 —ch 2 —s(o) 2 —ch 3 , —ch═ch—o—ch 3 , —si(ch 3 ) 3 , —ch 2 —ch═n—och 3 , —ch═ch—n(ch 3 )—ch 3 , —o—ch 3 , —o—ch 2 —ch 3 , and —cn. up to two or three heteroatoms may be consecutive, such as, for example, —ch 2 —nh—och 3 and —ch 2 —o—si(ch 3 ) 3 . a heteroalkyl moiety may include one heteroatom (e.g., o, n, s, si, or p). a heteroalkyl moiety may include two optionally different heteroatoms (e.g., o, n, s, si, or p). a heteroalkyl moiety may include three optionally different heteroatoms (e.g., o, n, s, si, or p). a heteroalkyl moiety may include four optionally different heteroatoms (e.g., o, n, s, si, or p). a heteroalkyl moiety may include five optionally different heteroatoms (e.g., o, n, s, si, or p). a heteroalkyl moiety may include up to 8 optionally different heteroatoms (e.g., o, n, s, si, or p). the term “heteroalkenyl,” by itself or in combination with another term, means, unless otherwise stated, a heteroalkyl including at least one double bond.
a heteroalkenyl may optionally include more than one double bond and/or one or more triple bonds in addition to the one or more double bonds. the term “heteroalkynyl,” by itself or in combination with another term, means, unless otherwise stated, a heteroalkyl including at least one triple bond. a heteroalkynyl may optionally include more than one triple bond and/or one or more double bonds in addition to the one or more triple bonds. similarly, the term “heteroalkylene,” by itself or as part of another substituent, means, unless otherwise stated, a divalent radical derived from heteroalkyl, as exemplified, but not limited by, —ch 2 —ch 2 —s—ch 2 —ch 2 — and —ch 2 —s—ch 2 —ch 2 —nh—ch 2 —. for heteroalkylene groups, heteroatoms can also occupy either or both of the chain termini (e.g., alkyleneoxy, alkylenedioxy, alkyleneamino, alkylenediamino, and the like). still further, for alkylene and heteroalkylene linking groups, no orientation of the linking group is implied by the direction in which the formula of the linking group is written. for example, the formula —c(o) 2 r′— represents both —c(o) 2 r′— and —r′c(o) 2 —. as described above, heteroalkyl groups, as used herein, include those groups that are attached to the remainder of the molecule through a heteroatom, such as —c(o)r′, —c(o)nr′, —nr′r″, —or′, —sr′, and/or —so 2 r′. where “heteroalkyl” is recited, followed by recitations of specific heteroalkyl groups, such as —nr′r″ or the like, it will be understood that the terms heteroalkyl and —nr′r″ are not redundant or mutually exclusive. rather, the specific heteroalkyl groups are recited to add clarity. thus, the term “heteroalkyl” should not be interpreted herein as excluding specific heteroalkyl groups, such as —nr′r″ or the like.
the terms “cycloalkyl” and “heterocycloalkyl,” by themselves or in combination with other terms, mean, unless otherwise stated, non-aromatic cyclic versions of “alkyl” and “heteroalkyl,” respectively, wherein the carbons making up the ring or rings do not necessarily need to be bonded to a hydrogen due to all carbon valencies participating in bonds with non-hydrogen atoms. additionally, for heterocycloalkyl, a heteroatom can occupy the position at which the heterocycle is attached to the remainder of the molecule. examples of cycloalkyl include, but are not limited to, cyclopropyl, cyclobutyl, cyclopentyl, cyclohexyl, 1-cyclohexenyl, 3-cyclohexenyl, cycloheptyl, 3-hydroxy-cyclobut-3-enyl-1,2-dione, 1h-1,2,4-triazolyl-5(4h)-one, 4h-1,2,4-triazolyl, and the like. examples of heterocycloalkyl include, but are not limited to, 1-(1,2,5,6-tetrahydropyridyl), 1-piperidinyl, 2-piperidinyl, 3-piperidinyl, 4-morpholinyl, 3-morpholinyl, tetrahydrofuran-2-yl, tetrahydrofuran-3-yl, tetrahydrothien-2-yl, tetrahydrothien-3-yl, 1-piperazinyl, 2-piperazinyl, and the like. a “cycloalkylene” and a “heterocycloalkylene,” alone or as part of another substituent, means a divalent radical derived from a cycloalkyl and heterocycloalkyl, respectively. a heterocycloalkyl moiety may include one ring heteroatom (e.g., o, n, s, si, or p). a heterocycloalkyl moiety may include two optionally different ring heteroatoms (e.g., o, n, s, si, or p). a heterocycloalkyl moiety may include three optionally different ring heteroatoms (e.g., o, n, s, si, or p). a heterocycloalkyl moiety may include four optionally different ring heteroatoms (e.g., o, n, s, si, or p). a heterocycloalkyl moiety may include five optionally different ring heteroatoms (e.g., o, n, s, si, or p). a heterocycloalkyl moiety may include up to 8 optionally different ring heteroatoms (e.g., o, n, s, si, or p).
the terms “halo” or “halogen,” by themselves or as part of another substituent, mean, unless otherwise stated, a fluorine, chlorine, bromine, or iodine atom. additionally, terms such as “haloalkyl” are meant to include monohaloalkyl and polyhaloalkyl. for example, the term “halo(c 1 -c 4 )alkyl” includes, but is not limited to, fluoromethyl, difluoromethyl, trifluoromethyl, 2,2,2-trifluoroethyl, 4-chlorobutyl, 3-bromopropyl, and the like. the term “acyl” means, unless otherwise stated, —c(o)r where r is a substituted or unsubstituted alkyl, substituted or unsubstituted cycloalkyl, substituted or unsubstituted heteroalkyl, substituted or unsubstituted heterocycloalkyl, substituted or unsubstituted aryl, or substituted or unsubstituted heteroaryl. the term “aryl” means, unless otherwise stated, a polyunsaturated, aromatic, hydrocarbon substituent, which can be a single ring or multiple rings (preferably from 1 to 3 rings) that are fused together (i.e., a fused ring aryl) or linked covalently. a fused ring aryl refers to multiple rings fused together wherein at least one of the fused rings is an aryl ring. the term “heteroaryl” refers to aryl groups (or rings) that contain at least one heteroatom such as n, o or s, wherein the nitrogen and sulfur atoms are optionally oxidized, and the nitrogen atom(s) are optionally quaternized. thus, the term “heteroaryl” includes fused ring heteroaryl groups (i.e., multiple rings fused together wherein at least one of the fused rings is a heteroaromatic ring). a 5,6-fused ring heteroarylene refers to two rings fused together, wherein one ring has 5 members and the other ring has 6 members, and wherein at least one ring is a heteroaryl ring. likewise, a 6,6-fused ring heteroarylene refers to two rings fused together, wherein one ring has 6 members and the other ring has 6 members, and wherein at least one ring is a heteroaryl ring.
and a 6,5-fused ring heteroarylene refers to two rings fused together, wherein one ring has 6 members and the other ring has 5 members, and wherein at least one ring is a heteroaryl ring. a heteroaryl group can be attached to the remainder of the molecule through a carbon or heteroatom. non-limiting examples of aryl and heteroaryl groups include phenyl, 1-naphthyl, 2-naphthyl, 4-biphenyl, 1-pyrrolyl, 2-pyrrolyl, 3-pyrrolyl, 3-pyrazolyl, 2-imidazolyl, 4-imidazolyl, pyrazinyl, 2-oxazolyl, 4-oxazolyl, 2-phenyl-4-oxazolyl, 5-oxazolyl, 3-isoxazolyl, 4-isoxazolyl, 5-isoxazolyl, 2-thiazolyl, 4-thiazolyl, 5-thiazolyl, 2-furyl, 3-furyl, 2-thienyl, 3-thienyl, 2-pyridyl, 3-pyridyl, 4-pyridyl, 2-pyrimidyl, 4-pyrimidyl, 5-benzothiazolyl, purinyl, 2-benzimidazolyl, 5-indolyl, 1-isoquinolyl, 5-isoquinolyl, 2-quinoxalinyl, 5-quinoxalinyl, 3-quinolyl, and 6-quinolyl. substituents for each of the above noted aryl and heteroaryl ring systems are selected from the group of acceptable substituents described below. an “arylene” and a “heteroarylene,” alone or as part of another substituent, mean a divalent radical derived from an aryl and heteroaryl, respectively. non-limiting examples of aryl and heteroaryl groups include pyridinyl, pyrimidinyl, thiophenyl, thienyl, furanyl, indolyl, benzoxadiazolyl, benzodioxolyl, benzodioxanyl, thianaphthanyl, pyrrolopyridinyl, indazolyl, quinolinyl, quinoxalinyl, pyridopyrazinyl, quinazolinonyl, benzoisoxazolyl, imidazopyridinyl, benzofuranyl, benzothienyl, benzothiophenyl, phenyl, naphthyl, biphenyl, pyrrolyl, pyrazolyl, imidazolyl, pyrazinyl, oxazolyl, isoxazolyl, thiazolyl, furyl, thienyl, pyridyl, pyrimidyl, benzothiazolyl, purinyl, benzimidazolyl, isoquinolyl, thiadiazolyl, oxadiazolyl, pyrrolyl, diazolyl, triazolyl, tetrazolyl, benzothiadiazolyl, isothiazolyl, pyrazolopyrimidinyl, pyrrolopyrimidinyl, benzotriazolyl, benzoxazolyl, or quinolyl.
the examples above may be substituted or unsubstituted and divalent radicals of each heteroaryl example above are non-limiting examples of heteroarylene. a heteroaryl moiety may include one ring heteroatom (e.g., o, n, or s). a heteroaryl moiety may include two optionally different ring heteroatoms (e.g., o, n, or s). a heteroaryl moiety may include three optionally different ring heteroatoms (e.g., o, n, or s). a heteroaryl moiety may include four optionally different ring heteroatoms (e.g., o, n, or s). a heteroaryl moiety may include five optionally different ring heteroatoms (e.g., o, n, or s). an aryl moiety may have a single ring. an aryl moiety may have two optionally different rings. an aryl moiety may have three optionally different rings. an aryl moiety may have four optionally different rings. a heteroaryl moiety may have one ring. a heteroaryl moiety may have two optionally different rings. a heteroaryl moiety may have three optionally different rings. a heteroaryl moiety may have four optionally different rings. a heteroaryl moiety may have five optionally different rings. a fused ring heterocycloalkyl-aryl is an aryl fused to a heterocycloalkyl. a fused ring heterocycloalkyl-heteroaryl is a heteroaryl fused to a heterocycloalkyl. a fused ring heterocycloalkyl-cycloalkyl is a heterocycloalkyl fused to a cycloalkyl. a fused ring heterocycloalkyl-heterocycloalkyl is a heterocycloalkyl fused to another heterocycloalkyl. fused ring heterocycloalkyl-aryl, fused ring heterocycloalkyl-heteroaryl, fused ring heterocycloalkyl-cycloalkyl, or fused ring heterocycloalkyl-heterocycloalkyl may each independently be unsubstituted or substituted with one or more of the substituents described herein. the term “oxo,” as used herein, means an oxygen that is double bonded to a carbon atom. the term “alkylsulfonyl,” as used herein, means a moiety having the formula —s(o 2 )—r′, where r′ is a substituted or unsubstituted alkyl group as defined above.
r′ may have a specified number of carbons (e.g., “c 1 -c 4 alkylsulfonyl”). each of the above terms (e.g., “alkyl”, “heteroalkyl”, “cycloalkyl”, “heterocycloalkyl”, “aryl”, and “heteroaryl”) includes both substituted and unsubstituted forms of the indicated radical. preferred substituents for each type of radical are provided below. substituents for the alkyl and heteroalkyl radicals (including those groups often referred to as alkylene, alkenyl, heteroalkylene, heteroalkenyl, alkynyl, cycloalkyl, heterocycloalkyl, cycloalkenyl, and heterocycloalkenyl) can be one or more of a variety of groups selected from, but not limited to, —or′, ═o, ═nr′, ═n—or′, —nr′r″, —sr′, -halogen, —sir′r″r′″, —oc(o)r′, —c(o)r′, —co 2 r′, —conr′r″, —oc(o)nr′r″, —nr″c(o)r′, —nr′—c(o)nr″r′″, —nr″c(o) 2 r′, —nr—c(nr′r″r′″)═nr′″, —nr′c(nr′r″)═nr′″, —s(o)r′, —s(o) 2 r′, —s(o) 2 nr′r″, —nrso 2 r′, —nr′nr″r′″, —onr′r″, —nr′c═(o)nr″nr′″r″″, —cn, —no 2 , in a number ranging from zero to (2m′+1), where m′ is the total number of carbon atoms in such radical. r, r′, r″, r′″, and r″″ each preferably independently refer to hydrogen, substituted or unsubstituted heteroalkyl, substituted or unsubstituted cycloalkyl, substituted or unsubstituted heterocycloalkyl, substituted or unsubstituted aryl (e.g., aryl substituted with 1-3 halogens), substituted or unsubstituted heteroaryl, substituted or unsubstituted alkyl, alkoxy, or thioalkoxy groups, or arylalkyl groups. when a compound of the invention includes more than one r group, for example, each of the r groups is independently selected as are each r′, r″, r′″, and r″″ group when more than one of these groups is present. when r′ and r″ are attached to the same nitrogen atom, they can be combined with the nitrogen atom to form a 4-, 5-, 6-, or 7-membered ring. for example, —nr′r″ includes, but is not limited to, 1-pyrrolidinyl and 4-morpholinyl. 
from the above discussion of substituents, one of skill in the art will understand that the term “alkyl” is meant to include groups including carbon atoms bound to groups other than hydrogen groups, such as haloalkyl (e.g., —cf 3 and —ch 2 cf 3 ) and acyl (e.g., —c(o)ch 3 , —c(o)cf 3 , —c(o)ch 2 och 3 , and the like). similar to the substituents described for the alkyl radical, substituents for the aryl and heteroaryl groups are varied and are selected from, for example: —or′, —nr′r″, —sr′, -halogen, —sir′r″r′″, —oc(o)r′, —c(o)r′, —co 2 r′, —conr′r″, —oc(o)nr′r″, —nr″c(o)r′, —nr′—c(o)nr″r′″, —nr″c(o) 2 r′, —nr—c(nr′r″r′″)═nr″″, —nr—c(nr′r′)═nr′″, —s(o)r′, —s(o) 2 r′, —s(o) 2 nr′r″, —nrso 2 r′, —nr′nr″r″′, —onr′r″, —nr′c═(o)nr″nr″′r″″, —cn, —no 2 , —r′, —n 3 , —ch(ph) 2 , fluoro(c 1 -c 4 )alkoxy, and fluoro(c 1 -c 4 )alkyl, in a number ranging from zero to the total number of open valences on the aromatic ring system; and where r′, r″, r″′, and r″″ are preferably independently selected from hydrogen, substituted or unsubstituted alkyl, substituted or unsubstituted heteroalkyl, substituted or unsubstituted cycloalkyl, substituted or unsubstituted heterocycloalkyl, substituted or unsubstituted aryl, and substituted or unsubstituted heteroaryl. when a compound of the invention includes more than one r group, for example, each of the r groups is independently selected as are each r′, r″, r″′, and r″″ groups when more than one of these groups is present. two or more substituents may optionally be joined to form aryl, heteroaryl, cycloalkyl, or heterocycloalkyl groups. such so-called ring-forming substituents are typically, though not necessarily, found attached to a cyclic base structure. in embodiments, the ring-forming substituents are attached to adjacent members of the base structure. for example, two ring-forming substituents attached to adjacent members of a cyclic base structure create a fused ring structure. 
in another embodiment, the ring-forming substituents are attached to a single member of the base structure. for example, two ring-forming substituents attached to a single member of a cyclic base structure create a spirocyclic structure. in yet another embodiment, the ring-forming substituents are attached to non-adjacent members of the base structure. two of the substituents on adjacent atoms of the aryl or heteroaryl ring may optionally form a ring of the formula —t—c(o)—(crr′) q —u—, wherein t and u are independently —nr—, —o—, —crr′—, or a single bond, and q is an integer of from 0 to 3. alternatively, two of the substituents on adjacent atoms of the aryl or heteroaryl ring may optionally be replaced with a substituent of the formula —a—(ch 2 ) r —b—, wherein a and b are independently —crr′—, —o—, —nr—, —s—, —s(o)—, —s(o) 2 —, —s(o) 2 nr′—, or a single bond, and r is an integer of from 1 to 4. one of the single bonds of the new ring so formed may optionally be replaced with a double bond. alternatively, two of the substituents on adjacent atoms of the aryl or heteroaryl ring may optionally be replaced with a substituent of the formula —(crr′) s —x′—(c″r″r′″) d —, where s and d are independently integers of from 0 to 3, and x′ is —o—, —nr′—, —s—, —s(o)—, —s(o) 2 —, or —s(o) 2 nr′—. the substituents r, r′, r″, and r″′ are preferably independently selected from hydrogen, substituted or unsubstituted alkyl, substituted or unsubstituted heteroalkyl, substituted or unsubstituted cycloalkyl, substituted or unsubstituted heterocycloalkyl, substituted or unsubstituted aryl, and substituted or unsubstituted heteroaryl. as used herein, the terms “heteroatom” or “ring heteroatom” are meant to include oxygen (o), nitrogen (n), sulfur (s), phosphorus (p), and silicon (si).
a “substituent group,” as used herein, means a group selected from the following moieties:
(a) oxo, halogen, —cf 3 , —cn, —oh, —nh 2 , —cooh, —conh 2 , —no 2 , —sh, —so 3 h, —so 4 h, —so 2 nh 2 , —nhnh 2 , —onh 2 , —nhc═(o)nhnh 2 , —nhc═(o)nh 2 , —nhso 2 h, —nhc═(o)h, —nhc(o)—oh, —nhoh, —ocf 3 , —ochf 2 , unsubstituted alkyl, unsubstituted heteroalkyl, unsubstituted cycloalkyl, unsubstituted heterocycloalkyl, unsubstituted aryl, unsubstituted heteroaryl, and
(b) alkyl, heteroalkyl, cycloalkyl, heterocycloalkyl, aryl, heteroaryl, substituted with at least one substituent selected from:
  (i) oxo, halogen, —cf 3 , —cn, —oh, —nh 2 , —cooh, —conh 2 , —no 2 , —sh, —so 3 h, —so 4 h, —so 2 nh 2 , —nhnh 2 , —onh 2 , —nhc═(o)nhnh 2 , —nhc═(o)nh 2 , —nhso 2 h, —nhc═(o)h, —nhc(o)—oh, —nhoh, —ocf 3 , —ochf 2 , unsubstituted alkyl, unsubstituted heteroalkyl, unsubstituted cycloalkyl, unsubstituted heterocycloalkyl, unsubstituted aryl, unsubstituted heteroaryl, and
  (ii) alkyl, heteroalkyl, cycloalkyl, heterocycloalkyl, aryl, heteroaryl, substituted with at least one substituent selected from:
    (a) oxo, halogen, —cf 3 , —cn, —oh, —nh 2 , —cooh, —conh 2 , —no 2 , —sh, —so 3 h, —so 4 h, —so 2 nh 2 , —nhnh 2 , —onh 2 , —nhc═(o)nhnh 2 , —nhc═(o)nh 2 , —nhso 2 h, —nhc═(o)h, —nhc(o)—oh, —nhoh, —ocf 3 , —ochf 2 , unsubstituted alkyl, unsubstituted heteroalkyl, unsubstituted cycloalkyl, unsubstituted heterocycloalkyl, unsubstituted aryl, unsubstituted heteroaryl, and
    (b) alkyl, heteroalkyl, cycloalkyl, heterocycloalkyl, aryl, heteroaryl, substituted with at least one substituent selected from: oxo, halogen, —cf 3 , —cn, —oh, —nh 2 , —cooh, —conh 2 , —no 2 , —sh, —so 3 h, —so 4 h, —so 2 nh 2 , —nhnh 2 , —onh 2 , —nhc═(o)nhnh 2 , —nhc═(o)nh 2 , —nhso 2 h, —nhc═(o)h, —nhc(o)—oh, —nhoh, —ocf 3 , —ochf 2 , unsubstituted alkyl, unsubstituted heteroalkyl, unsubstituted cycloalkyl, unsubstituted heterocycloalkyl, unsubstituted aryl, unsubstituted heteroaryl.
a “size-limited substituent” or “size-limited substituent group,” as used herein, means a group selected from all of the substituents described above for a “substituent group,” wherein each substituted or unsubstituted alkyl is a substituted or unsubstituted c 1 -c 20 alkyl, each substituted or unsubstituted heteroalkyl is a substituted or unsubstituted 2 to 20 membered heteroalkyl, each substituted or unsubstituted cycloalkyl is a substituted or unsubstituted c 3 -c 8 cycloalkyl, each substituted or unsubstituted heterocycloalkyl is a substituted or unsubstituted 3 to 8 membered heterocycloalkyl, each substituted or unsubstituted aryl is a substituted or unsubstituted c 6 -c 10 aryl, and each substituted or unsubstituted heteroaryl is a substituted or unsubstituted 5 to 10 membered heteroaryl. a “lower substituent” or “lower substituent group,” as used herein, means a group selected from all of the substituents described above for a “substituent group,” wherein each substituted or unsubstituted alkyl is a substituted or unsubstituted c 1 -c 8 alkyl, each substituted or unsubstituted heteroalkyl is a substituted or unsubstituted 2 to 8 membered heteroalkyl, each substituted or unsubstituted cycloalkyl is a substituted or unsubstituted c 3 -c 7 cycloalkyl, each substituted or unsubstituted heterocycloalkyl is a substituted or unsubstituted 3 to 7 membered heterocycloalkyl, each substituted or unsubstituted aryl is a substituted or unsubstituted c 6 -c 10 aryl, and each substituted or unsubstituted heteroaryl is a substituted or unsubstituted 5 to 9 membered heteroaryl. in some embodiments, each substituted group described in the compounds herein is substituted with at least one substituent group.
more specifically, in some embodiments, each substituted alkyl, substituted heteroalkyl, substituted cycloalkyl, substituted heterocycloalkyl, substituted aryl, substituted heteroaryl, substituted alkylene, substituted heteroalkylene, substituted cycloalkylene, substituted heterocycloalkylene, substituted arylene, and/or substituted heteroarylene described in the compounds herein are substituted with at least one substituent group. in other embodiments, at least one or all of these groups are substituted with at least one size-limited substituent group. in other embodiments, at least one or all of these groups are substituted with at least one lower substituent group. in other embodiments of the compounds herein, each substituted or unsubstituted alkyl may be a substituted or unsubstituted c 1 -c 20 alkyl, each substituted or unsubstituted heteroalkyl is a substituted or unsubstituted 2 to 20 membered heteroalkyl, each substituted or unsubstituted cycloalkyl is a substituted or unsubstituted c 3 -c 8 cycloalkyl, each substituted or unsubstituted heterocycloalkyl is a substituted or unsubstituted 3 to 8 membered heterocycloalkyl, each substituted or unsubstituted aryl is a substituted or unsubstituted c 6 -c 10 aryl, and/or each substituted or unsubstituted heteroaryl is a substituted or unsubstituted 5 to 10 membered heteroaryl. 
in some embodiments of the compounds herein, each substituted or unsubstituted alkylene is a substituted or unsubstituted c 1 -c 20 alkylene, each substituted or unsubstituted heteroalkylene is a substituted or unsubstituted 2 to 20 membered heteroalkylene, each substituted or unsubstituted cycloalkylene is a substituted or unsubstituted c 3 -c 8 cycloalkylene, each substituted or unsubstituted heterocycloalkylene is a substituted or unsubstituted 3 to 8 membered heterocycloalkylene, each substituted or unsubstituted arylene is a substituted or unsubstituted c 6 -c 10 arylene, and/or each substituted or unsubstituted heteroarylene is a substituted or unsubstituted 5 to 10 membered heteroarylene. in some embodiments, each substituted or unsubstituted alkyl is a substituted or unsubstituted c 1 -c 8 alkyl, each substituted or unsubstituted heteroalkyl is a substituted or unsubstituted 2 to 8 membered heteroalkyl, each substituted or unsubstituted cycloalkyl is a substituted or unsubstituted c 3 -c 7 cycloalkyl, each substituted or unsubstituted heterocycloalkyl is a substituted or unsubstituted 3 to 7 membered heterocycloalkyl, each substituted or unsubstituted aryl is a substituted or unsubstituted c 6 -c 10 aryl, and/or each substituted or unsubstituted heteroaryl is a substituted or unsubstituted 5 to 9 membered heteroaryl. 
in some embodiments, each substituted or unsubstituted alkylene is a substituted or unsubstituted c 1 -c 8 alkylene, each substituted or unsubstituted heteroalkylene is a substituted or unsubstituted 2 to 8 membered heteroalkylene, each substituted or unsubstituted cycloalkylene is a substituted or unsubstituted c 3 -c 7 cycloalkylene, each substituted or unsubstituted heterocycloalkylene is a substituted or unsubstituted 3 to 7 membered heterocycloalkylene, each substituted or unsubstituted arylene is a substituted or unsubstituted c 6 -c 10 arylene, and/or each substituted or unsubstituted heteroarylene is a substituted or unsubstituted 5 to 9 membered heteroarylene. in some embodiments, the compound is a chemical species set forth in the examples section, figures, or tables below. the term “pharmaceutically acceptable salts” is meant to include salts of the active compounds that are prepared with relatively nontoxic acids or bases, depending on the particular substituents found on the compounds described herein. when compounds of the present invention contain relatively acidic functionalities, base addition salts can be obtained by contacting the neutral form of such compounds with a sufficient amount of the desired base, either neat or in a suitable inert solvent. examples of pharmaceutically acceptable base addition salts include sodium, potassium, calcium, ammonium, organic amino, or magnesium salt, or a similar salt. when compounds of the present invention contain relatively basic functionalities, acid addition salts can be obtained by contacting the neutral form of such compounds with a sufficient amount of the desired acid, either neat or in a suitable inert solvent. 
examples of pharmaceutically acceptable acid addition salts include those derived from inorganic acids like hydrochloric, hydrobromic, nitric, carbonic, monohydrogencarbonic, phosphoric, monohydrogenphosphoric, dihydrogenphosphoric, sulfuric, monohydrogensulfuric, hydriodic, or phosphorous acids and the like, as well as the salts derived from relatively nontoxic organic acids like acetic, propionic, isobutyric, maleic, malonic, benzoic, succinic, suberic, fumaric, lactic, mandelic, phthalic, benzenesulfonic, p-tolylsulfonic, citric, tartaric, methanesulfonic, and the like. also included are salts of amino acids such as arginate and the like, and salts of organic acids like glucuronic or galacturonic acids and the like (see, e.g., berge et al., journal of pharmaceutical science 66:1-19 (1977)). certain specific compounds of the present invention contain both basic and acidic functionalities that allow the compounds to be converted into either base or acid addition salts. other pharmaceutically acceptable carriers known to those of skill in the art are suitable for the present invention. salts tend to be more soluble in aqueous or other protonic solvents than are the corresponding free base forms. in other cases, the preparation may be a lyophilized powder in 1 mm-50 mm histidine, 0.1%-2% sucrose, 2%-7% mannitol at a ph range of 4.5 to 5.5, that is combined with buffer prior to use. thus, the compounds of the present invention may exist as salts, such as with pharmaceutically acceptable acids. the present invention includes such salts. examples of such salts include hydrochlorides, hydrobromides, sulfates, methanesulfonates, nitrates, maleates, acetates, citrates, fumarates, tartrates (e.g., (+)-tartrates, (−)-tartrates, or mixtures thereof including racemic mixtures), succinates, benzoates, and salts with amino acids such as glutamic acid. these salts may be prepared by methods known to those skilled in the art.
the neutral forms of the compounds are preferably regenerated by contacting the salt with a base or acid and isolating the parent compound in the conventional manner. the parent form of the compound differs from the various salt forms in certain physical properties, such as solubility in polar solvents. provided herein are agents (e.g. compounds, drugs, therapeutic agents) that may be in a prodrug form. prodrugs of the compounds described herein are those compounds that readily undergo chemical changes under select physiological conditions to provide the final agents (e.g. compounds, drugs, therapeutic agents). additionally, prodrugs can be converted to agents (e.g. compounds, drugs, therapeutic agents) by chemical or biochemical methods in an ex vivo environment. prodrugs described herein include compounds that readily undergo chemical changes under select physiological conditions to provide agents (e.g. compounds, drugs, therapeutic agents) to a biological system (e.g. in a subject). certain compounds of the present invention can exist in unsolvated forms as well as solvated forms, including hydrated forms. in general, the solvated forms are equivalent to unsolvated forms and are encompassed within the scope of the present invention. certain compounds of the present invention may exist in multiple crystalline or amorphous forms. in general, all physical forms are equivalent for the uses contemplated by the present invention and are intended to be within the scope of the present invention. as used herein, the term “salt” refers to acid or base salts of the compounds used in the methods of the present invention. illustrative examples of acceptable salts are mineral acid (hydrochloric acid, hydrobromic acid, phosphoric acid, and the like) salts, organic acid (acetic acid, propionic acid, glutamic acid, citric acid and the like) salts, quaternary ammonium (methyl iodide, ethyl iodide, and the like) salts. 
certain compounds of the present invention possess asymmetric carbon atoms (optical or chiral centers) or double bonds; the enantiomers, racemates, diastereomers, tautomers, geometric isomers, stereoisomeric forms that may be defined, in terms of absolute stereochemistry, as (r)- or (s)-, or as (d)- or (l)- for amino acids, and individual isomers are encompassed within the scope of the present invention. the compounds of the present invention do not include those which are known in the art to be too unstable to synthesize and/or isolate. the present invention is meant to include compounds in racemic and optically pure forms. optically active (r)- and (s)-, or (d)- and (l)-isomers may be prepared using chiral synthons or chiral reagents, or resolved using conventional techniques. when the compounds described herein contain olefinic bonds or other centers of geometric asymmetry, and unless specified otherwise, it is intended that the compounds include both e and z geometric isomers. as used herein, the term “isomers” refers to compounds having the same number and kind of atoms, and hence the same molecular weight, but differing in respect to the structural arrangement or configuration of the atoms. the term “tautomer,” as used herein, refers to one of two or more structural isomers which exist in equilibrium and which are readily converted from one isomeric form to another. it will be apparent to one skilled in the art that certain compounds of this invention may exist in tautomeric forms, all such tautomeric forms of the compounds being within the scope of the invention. unless otherwise stated, structures depicted herein are also meant to include all stereochemical forms of the structure; i.e., the r and s configurations for each asymmetric center. therefore, single stereochemical isomers as well as enantiomeric and diastereomeric mixtures of the present compounds are within the scope of the invention.
unless otherwise stated, structures depicted herein are also meant to include compounds which differ only in the presence of one or more isotopically enriched atoms. for example, compounds having the present structures except for the replacement of a hydrogen by a deuterium or tritium, or the replacement of a carbon by 13 c- or 14 c-enriched carbon are within the scope of this invention. the compounds of the present invention may also contain unnatural proportions of atomic isotopes at one or more of the atoms that constitute such compounds. for example, the compounds may be radiolabeled with radioactive isotopes, such as for example tritium ( 3 h), iodine-125 ( 125 i), or carbon-14 ( 14 c). all isotopic variations of the compounds of the present invention, whether radioactive or not, are encompassed within the scope of the present invention. “analog” and “analogue” are used interchangeably and are used in accordance with their plain ordinary meaning within chemistry and biology and refer to a chemical compound that is structurally similar to another compound (i.e., a so-called “reference” compound) but differs in composition, e.g., in the replacement of one atom by an atom of a different element, or in the presence of a particular functional group, or the replacement of one functional group by another functional group, or the absolute stereochemistry of one or more chiral centers of the reference compound, including isomers thereof. accordingly, an analog is a compound that is similar or comparable in function and appearance but not in structure or origin to a reference compound. the symbol “ ” denotes the point of attachment of a chemical moiety to the remainder of a molecule or chemical formula. in embodiments, a compound as described herein may include multiple instances of r 2 and/or other variables. in such embodiments, each variable may optionally be different and be appropriately labeled to distinguish each group for greater clarity.
for example, where each r 2 is different, they may be referred to, for example, as r 2.1 , r 2.2 , r 2.3 , and/or r 2.4 respectively, wherein the definition of r 2 is assumed by r 2.1 , r 2.2 , r 2.3 , and/or r 2.4 . the variables used within a definition of r 2 and/or other variables that appear at multiple instances and are different may similarly be appropriately labeled to distinguish each group for greater clarity. in some embodiments, the compound is a compound described herein (e.g., in an aspect, embodiment, example, claim, table, scheme, drawing, or figure). the terms “a” or “an,” as used herein, mean one or more. in addition, the phrase “substituted with a[n],” as used herein, means the specified group may be substituted with one or more of any or all of the named substituents. for example, where a group, such as an alkyl or heteroaryl group, is “substituted with an unsubstituted c 1 -c 20 alkyl, or unsubstituted 2 to 20 membered heteroalkyl,” the group may contain one or more unsubstituted c 1 -c 20 alkyls, and/or one or more unsubstituted 2 to 20 membered heteroalkyls. where a moiety is substituted with an r substituent, the group may be referred to as “r-substituted.” where a moiety is r-substituted, the moiety is substituted with at least one r substituent and each r substituent is optionally different. for example, where a moiety herein is r 12 -substituted or unsubstituted alkyl, a plurality of r 12 substituents may be attached to the alkyl moiety wherein each r 12 substituent is optionally different. where an r-substituted moiety is substituted with a plurality of r substituents, each of the r-substituents may be differentiated herein using a prime symbol (′) such as r′, r″, etc. for example, where a moiety is r 12 -substituted or unsubstituted alkyl, and the moiety is substituted with a plurality of r 12 substituents, the plurality of r 12 substituents may be differentiated as r 12 ′, r 12 ″, r 12 ′″, etc.
in embodiments, the plurality of r substituents is 3. in embodiments, the plurality of r substituents is 2. in embodiments, a compound as described herein may include multiple instances of r 1 , r 2 , r 3 , r 4 , r 5 , r 6 , r 7 , r 8 , r 9 , r 10 , r 11 , r 12 , r 13 , r 14 , and/or other variables. in such embodiments, each variable may optionally be different and be appropriately labeled to distinguish each group for greater clarity. for example, where each r 1 , r 2 , r 3 , r 4 , r 5 , r 6 , r 7 , r 8 , r 9 , r 10 , r 11 , r 12 , r 13 , and/or r 14 , is different, they may be referred to, for example, as r 1.1 , r 1.2 , r 1.3 , r 1.4 , r 2.1 , r 2.2 , r 2.3 , r 2.4 , r 3.1 , r 3.2 , r 3.3 , r 3.4 , r 4.1 , r 4.2 , r 4.3 , r 4.4 , r 5.1 , r 5.2 , r 5.3 , r 5.4 , r 6.1 , r 6.2 , r 6.3 , r 6.4 , r 7.1 , r 7.2 , r 7.3 , r 7.4 , r 9.1 , r 9.2 , r 9.3 , r 9.4 , r 10.1 , r 10.2 , r 10.3 , r 10.4 , r 11.1 , r 11.2 , r 11.3 , r 11.4 , r 12.1 , r 12.2 , r 12.3 , r 12.4 , r 13.1 , r 13.2 , r 13.3 , r 13.4 , r 14.1 , r 14.2 , r 14.3 , and/or r 14.4 , respectively, wherein the definition of r 1 is assumed by r 1.1 , r 1.2 , r 1.3 , and/or r 1.4 , the definition of r 2 is assumed by r 2.1 , r 2.2 , r 2.3 , and/or r 2.4 , the definition of r 3 is assumed by r 3.1 , r 3.2 , r 3.3 , and/or r 3.4 , the definition of r 4 is assumed by r 4.1 , r 4.2 , r 4.3 , and/or r 4.4 , the definition of r 5 is assumed by r 5.1 , r 5.2 , r 5.3 , and/or r 5.4 , the definition of r 6 is assumed by r 6.1 , r 6.2 , r 6.3 , and/or r 6.4 , the definition of r 7 is assumed by r 7.1 , r 7.2 , r 7.3 , and/or r 7.4 , the definition of r 9 is assumed by r 9.1 , r 9.2 , r 9.3 , and/or r 9.4 , the definition of r 10 is assumed by r 10.1 , r 10.2 , r 10.3 , and/or r 10.4 , the definition of r 11 is assumed by r 11.1 , r 11.2 , r 11.3 , and/or r 11.4 , the definition of r 12 is assumed by r 12.1 , r 12.2 , r 12.3 , and/or r 12.4 , the definition of r 13 is assumed by r 13.1 , r 13.2 , r 13.3 , and/or r 13.4 ,
the definition of r 14 is assumed by r 14.1 , r 14.2 , r 14.3 , and/or r 14.4 . the variables used within a definition of r 1 , r 2 , r 3 , r 4 , r 5 , r 6 , r 7 , r 9 , r 10 , r 11 , r 12 , r 13 and/or r 14 , and/or other variables that appear at multiple instances and are different may similarly be appropriately labeled to distinguish each group for greater clarity. descriptions of compounds of the present invention are limited by principles of chemical bonding known to those skilled in the art. accordingly, where a group may be substituted by one or more of a number of substituents, such substitutions are selected so as to comply with principles of chemical bonding and to give compounds which are not inherently unstable and/or would be known to one of ordinary skill in the art as likely to be unstable under ambient conditions, such as aqueous, neutral, and several known physiological conditions. for example, a heterocycloalkyl or heteroaryl is attached to the remainder of the molecule via a ring heteroatom in compliance with principles of chemical bonding known to those skilled in the art thereby avoiding inherently unstable compounds. antibodies are large, complex molecules (molecular weight of ˜150,000 or about 1320 amino acids) with intricate internal structure. a natural antibody molecule contains two identical pairs of polypeptide chains, each pair having one light chain and one heavy chain. each light chain and heavy chain in turn consists of two regions: a variable (“v”) region involved in binding the target antigen, and a constant (“c”) region that interacts with other components of the immune system. the light and heavy chain variable regions come together in 3-dimensional space to form a variable region that binds the antigen (for example, a receptor on the surface of a cell). within each light or heavy chain variable region, there are three short segments (averaging 10 amino acids in length) called the complementarity determining regions (“cdrs”).
the six cdrs in an antibody variable domain (three from the light chain and three from the heavy chain) fold up together in 3-dimensional space to form the actual antibody binding site which docks onto the target antigen. the position and length of the cdrs have been precisely defined by kabat, e. et al., sequences of proteins of immunological interest, u.s. department of health and human services, 1983, 1987. the part of a variable region not contained in the cdrs is called the framework (“fr”), which forms the environment for the cdrs. the term “antibody” is used according to its commonly known meaning in the art. antibodies exist, e.g., as intact immunoglobulins or as a number of well-characterized fragments produced by digestion with various peptidases. thus, for example, pepsin digests an antibody below the disulfide linkages in the hinge region to produce f(ab)′ 2 , a dimer of fab which itself is a light chain joined to v h —c h1 by a disulfide bond. the f(ab)′ 2 may be reduced under mild conditions to break the disulfide linkage in the hinge region, thereby converting the f(ab)′ 2 dimer into an fab′ monomer. the fab′ monomer is essentially fab with part of the hinge region (see fundamental immunology (paul ed., 3d ed. 1993)). while various antibody fragments are defined in terms of the digestion of an intact antibody, one of skill will appreciate that such fragments may be synthesized de novo either chemically or by using recombinant dna methodology. thus, the term antibody, as used herein, also includes antibody fragments either produced by the modification of whole antibodies, or those synthesized de novo using recombinant dna methodologies (e.g., single chain fv) or those identified using phage display libraries (see, e.g., mccafferty et al., nature 348:552-554 (1990)).
for preparation of monoclonal or polyclonal antibodies, any technique known in the art can be used (see, e.g., kohler & milstein, nature 256:495-497 (1975); kozbor et al., immunology today 4:72 (1983); cole et al., pp. 77-96 in monoclonal antibodies and cancer therapy (1985)). “monoclonal” antibodies (mab) refer to antibodies derived from a single clone. techniques for the production of single chain antibodies (u.s. pat. no. 4,946,778) can be adapted to produce antibodies to polypeptides of this invention. also, transgenic mice, or other organisms such as other mammals, may be used to express humanized antibodies. alternatively, phage display technology can be used to identify antibodies and heteromeric fab fragments that specifically bind to selected antigens (see, e.g., mccafferty et al., nature 348:552-554 (1990); marks et al., biotechnology 10:779-783 (1992)). the epitope of a mab is the region of its antigen to which the mab binds. two antibodies bind to the same or overlapping epitope if each competitively inhibits (blocks) binding of the other to the antigen. that is, a 1×, 5×, 10×, 20× or 100× excess of one antibody inhibits binding of the other by at least 30% but preferably 50%, 75%, 90% or even 99% as measured in a competitive binding assay (see, e.g., junghans et al., cancer res. 50:1495, 1990). alternatively, two antibodies have the same epitope if essentially all amino acid mutations in the antigen that reduce or eliminate binding of one antibody reduce or eliminate binding of the other. two antibodies have overlapping epitopes if some amino acid mutations that reduce or eliminate binding of one antibody reduce or eliminate binding of the other. antibodies exist, for example, as intact immunoglobulins or as a number of well-characterized fragments produced by digestion with various peptidases. 
thus, for example, pepsin digests an antibody below the disulfide linkages in the hinge region to produce f(ab)′2, a dimer of fab which itself is a light chain joined to vh—ch1 by a disulfide bond. the f(ab)′2 may be reduced under mild conditions to break the disulfide linkage in the hinge region, thereby converting the f(ab)′2 dimer into an fab′ monomer. the fab′ monomer is essentially the antigen binding portion with part of the hinge region (see fundamental immunology (paul ed., 3d ed. 1993)). while various antibody fragments are defined in terms of the digestion of an intact antibody, one of skill will appreciate that such fragments may be synthesized de novo either chemically or by using recombinant dna methodology. thus, the term antibody, as used herein, also includes antibody fragments either produced by the modification of whole antibodies, or those synthesized de novo using recombinant dna methodologies (e.g., single chain fv) or those identified using phage display libraries (see, e.g., mccafferty et al., nature 348:552-554 (1990)). a single-chain variable fragment (scfv) is typically a fusion protein of the variable regions of the heavy (vh) and light chains (vl) of immunoglobulins, connected with a short linker peptide of 10 to about 25 amino acids. the linker is usually rich in glycine for flexibility, as well as serine or threonine for solubility. the linker can either connect the n-terminus of the vh with the c-terminus of the vl, or vice versa. for preparation of suitable antibodies of the invention and for use according to the invention, e.g., recombinant, monoclonal, or polyclonal antibodies, many techniques known in the art can be used (see, e.g., kohler & milstein, nature 256:495-497 (1975); kozbor et al., immunology today 4: 72 (1983); cole et al., pp. 77-96 in monoclonal antibodies and cancer therapy, alan r. liss, inc.
(1985); coligan, current protocols in immunology (1991); harlow & lane, antibodies, a laboratory manual (1988); and goding, monoclonal antibodies: principles and practice (2d ed. 1986)). the genes encoding the heavy and light chains of an antibody of interest can be cloned from a cell, e.g., the genes encoding a monoclonal antibody can be cloned from a hybridoma and used to produce a recombinant monoclonal antibody. gene libraries encoding heavy and light chains of monoclonal antibodies can also be made from hybridoma or plasma cells. random combinations of the heavy and light chain gene products generate a large pool of antibodies with different antigenic specificity (see, e.g., kuby, immunology (3rd ed. 1997)). techniques for the production of single chain antibodies or recombinant antibodies (u.s. pat. no. 4,946,778, u.s. pat. no. 4,816,567) can be adapted to produce antibodies to polypeptides of this invention. also, transgenic mice, or other organisms such as other mammals, may be used to express humanized or human antibodies (see, e.g., u.s. pat. nos. 5,545,807; 5,545,806; 5,569,825; 5,625,126; 5,633,425; 5,661,016, marks et al., bio/technology 10:779-783 (1992); lonberg et al., nature 368:856-859 (1994); morrison, nature 368:812-13 (1994); fishwild et al., nature biotechnology 14:845-51 (1996); neuberger, nature biotechnology 14:826 (1996); and lonberg & huszar, intern. rev. immunol. 13:65-93 (1995)). alternatively, phage display technology can be used to identify antibodies and heteromeric fab fragments that specifically bind to selected antigens (see, e.g., mccafferty et al., nature 348:552-554 (1990); marks et al., biotechnology 10:779-783 (1992)). antibodies can also be made bispecific, i.e., able to recognize two different antigens (see, e.g., wo 93/08829, traunecker et al., embo j. 10:3655-3659 (1991); and suresh et al., methods in enzymology 121:210 (1986)). 
antibodies can also be heteroconjugates, e.g., two covalently joined antibodies, or immunotoxins (see, e.g., u.s. pat. no. 4,676,980, wo 91/00360; wo 92/200373; and ep 03089). methods for humanizing or primatizing non-human antibodies are well known in the art (e.g., u.s. pat. nos. 4,816,567; 5,530,101; 5,859,205; 5,585,089; 5,693,761; 5,693,762; 5,777,085; 6,180,370; 6,210,671; and 6,329,511; wo 87/02671; ep patent application 0173494; jones et al. (1986) nature 321:522; and verhoyen et al. (1988) science 239:1534). humanized antibodies are further described in, e.g., winter and milstein (1991) nature 349:293. generally, a humanized antibody has one or more amino acid residues introduced into it from a source which is non-human. these non-human amino acid residues are often referred to as import residues, which are typically taken from an import variable domain. humanization can be essentially performed following the method of winter and co-workers (see, e.g., morrison et al., pnas usa, 81:6851-6855 (1984), jones et al., nature 321:522-525 (1986); riechmann et al., nature 332:323-327 (1988); morrison and oi, adv. immunol., 44:65-92 (1988), verhoeyen et al., science 239:1534-1536 (1988) and presta, curr. op. struct. biol. 2:593-596 (1992), padlan, molec. immun., 28:489-498 (1991); padlan, molec. immun., 31(3):169-217 (1994)), by substituting rodent cdrs or cdr sequences for the corresponding sequences of a human antibody. accordingly, such humanized antibodies are chimeric antibodies (u.s. pat. no. 4,816,567), wherein substantially less than an intact human variable domain has been substituted by the corresponding sequence from a non-human species. in practice, humanized antibodies are typically human antibodies in which some cdr residues and possibly some fr residues are substituted by residues from analogous sites in rodent antibodies. 
for example, polynucleotides comprising a first sequence coding for humanized immunoglobulin framework regions and a second sequence set coding for the desired immunoglobulin complementarity determining regions can be produced synthetically or by combining appropriate cdna and genomic dna segments. human constant region dna sequences can be isolated in accordance with well known procedures from a variety of human cells. a “chimeric antibody” is an antibody molecule in which (a) the constant region, or a portion thereof, is altered, replaced or exchanged so that the antigen binding site (variable region) is linked to a constant region of a different or altered class, effector function and/or species, or an entirely different molecule which confers new properties to the chimeric antibody, e.g., an enzyme, toxin, hormone, growth factor, drug, etc.; or (b) the variable region, or a portion thereof, is altered, replaced or exchanged with a variable region having a different or altered antigen specificity. the preferred antibodies of, and for use according to the invention include humanized and/or chimeric monoclonal antibodies. techniques for conjugating therapeutic agents to antibodies are well known (see, e.g., arnon et al., “monoclonal antibodies for immunotargeting of drugs in cancer therapy”, in monoclonal antibodies and cancer therapy, reisfeld et al. (eds.), pp. 243-56 (alan r. liss, inc. 1985); hellstrom et al., “antibodies for drug delivery” in controlled drug delivery (2nd ed.), robinson et al. (eds.), pp. 623-53 (marcel dekker, inc. 1987); thorpe, “antibody carriers of cytotoxic agents in cancer therapy: a review” in monoclonal antibodies ‘84: biological and clinical applications, pinchera et al. (eds.), pp. 475-506 (1985); and thorpe et al., “the preparation and cytotoxic properties of antibody-toxin conjugates”, immunol. rev., 62:119-58 (1982)). 
as used herein, the term “antibody-drug conjugate” or “adc” refers to a therapeutic agent conjugated or otherwise covalently bound to an antibody. a “therapeutic agent” as referred to herein, is a composition useful in treating or preventing a disease such as cancer. the phrase “specifically (or selectively) binds” to an antibody or “specifically (or selectively) immunoreactive with,” when referring to a protein or peptide, refers to a binding reaction that is determinative of the presence of the protein, often in a heterogeneous population of proteins and other biologics. thus, under designated immunoassay conditions, the specified antibodies bind to a particular protein at least two times the background and more typically more than 10 to 100 times background. specific binding to an antibody under such conditions requires an antibody that is selected for its specificity for a particular protein. for example, polyclonal antibodies can be selected to obtain only a subset of antibodies that are specifically immunoreactive with the selected antigen and not with other proteins. this selection may be achieved by subtracting out antibodies that cross-react with other molecules. a variety of immunoassay formats may be used to select antibodies specifically immunoreactive with a particular protein. for example, solid-phase elisa immunoassays are routinely used to select antibodies specifically immunoreactive with a protein (see, e.g., harlow & lane, using antibodies, a laboratory manual (1998) for a description of immunoassay formats and conditions that can be used to determine specific immunoreactivity). a “ligand” refers to an agent, e.g., a polypeptide or other molecule, capable of binding to a receptor. “contacting” is used in accordance with its plain ordinary meaning and refers to the process of allowing at least two distinct species (e.g. chemical compounds including biomolecules or cells) to become sufficiently proximal to react, interact or physically touch. 
it should be appreciated, however, that the resulting reaction product can be produced directly from a reaction between the added reagents or from an intermediate from one or more of the added reagents which can be produced in the reaction mixture. the term “contacting” may include allowing two species to react, interact, or physically touch, wherein the two species may be, for example, a pharmaceutical composition as provided herein and a cell. in embodiments, contacting includes, for example, allowing a pharmaceutical composition as described herein to interact with a cell or a patient. unless defined otherwise, technical and scientific terms used herein have the same meaning as commonly understood by a person of ordinary skill in the art. see, e.g., singleton et al., dictionary of microbiology and molecular biology 2nd ed., j. wiley & sons (new york, n.y. 1994); sambrook et al., molecular cloning, a laboratory manual, cold springs harbor press (cold springs harbor, n.y. 1989). any methods, devices and materials similar or equivalent to those described herein can be used in the practice of this invention. the following definitions are provided to facilitate understanding of certain terms used frequently herein and are not meant to limit the scope of the present disclosure. “nucleic acid” refers to deoxyribonucleotides or ribonucleotides and polymers thereof in either single-or double-stranded form, and complements thereof. the term “polynucleotide” refers to a linear sequence of nucleotides. the term “nucleotide” typically refers to a single unit of a polynucleotide, i.e., a monomer. nucleotides can be ribonucleotides, deoxyribonucleotides, or modified versions thereof. examples of polynucleotides contemplated herein include single and double stranded dna, single and double stranded rna (including sirna), and hybrid molecules having mixtures of single and double stranded dna and rna. 
nucleic acid as used herein also refers to nucleic acid analogues that have the same basic chemical structure as a naturally occurring nucleic acid. such analogues have modified sugars and/or modified ring substituents, but retain the same basic chemical structure as the naturally occurring nucleic acid. a nucleic acid mimetic refers to chemical compounds that have a structure that is different from the general chemical structure of a nucleic acid, but that functions in a manner similar to a naturally occurring nucleic acid. examples of such analogues include, without limitation, phosphorothiolates, phosphoramidates, methyl phosphonates, chiral-methyl phosphonates, 2-o-methyl ribonucleotides, and peptide-nucleic acids (pnas). the terms “polypeptide,” “peptide” and “protein” are used interchangeably herein to refer to a polymer of amino acid residues, wherein the polymer may in embodiments be conjugated to a moiety that does not consist of amino acids. the terms apply to amino acid polymers in which one or more amino acid residue is an artificial chemical mimetic of a corresponding naturally occurring amino acid, as well as to naturally occurring amino acid polymers and non-naturally occurring amino acid polymers. a “fusion protein” refers to a chimeric protein encoding two or more separate protein sequences that are recombinantly expressed as a single moiety. the term “peptidyl” and “peptidyl moiety” means a monovalent peptide. the term “amino acid” refers to naturally occurring and synthetic amino acids, as well as amino acid analogs and amino acid mimetics that function in a manner similar to the naturally occurring amino acids. naturally occurring amino acids are those encoded by the genetic code, as well as those amino acids that are later modified, e.g., hydroxyproline, γ-carboxyglutamate, and o-phosphoserine. 
amino acid analogs refers to compounds that have the same basic chemical structure as a naturally occurring amino acid, i.e., an alpha carbon that is bound to a hydrogen, a carboxyl group, an amino group, and an r group, e.g., homoserine, norleucine, methionine sulfoxide, methionine methyl sulfonium. such analogs have modified r groups (e.g., norleucine) or modified peptide backbones, but retain the same basic chemical structure as a naturally occurring amino acid. amino acid mimetics refers to chemical compounds that have a structure that is different from the general chemical structure of an amino acid, but that functions in a manner similar to a naturally occurring amino acid. the terms “non-naturally occurring amino acid” and “unnatural amino acid” refer to amino acid analogs, synthetic amino acids, and amino acid mimetics which are not found in nature. amino acids may be referred to herein by either their commonly known three letter symbols or by the one-letter symbols recommended by the iupac-iub biochemical nomenclature commission. nucleotides, likewise, may be referred to by their commonly accepted single-letter codes. “conservatively modified variants” applies to both amino acid and nucleic acid sequences. with respect to particular nucleic acid sequences, “conservatively modified variants” refers to those nucleic acids that encode identical or essentially identical amino acid sequences. because of the degeneracy of the genetic code, a number of nucleic acid sequences will encode any given protein. for instance, the codons gca, gcc, gcg and gcu all encode the amino acid alanine. thus, at every position where an alanine is specified by a codon, the codon can be altered to any of the corresponding codons described without altering the encoded polypeptide. such nucleic acid variations are “silent variations,” which are one species of conservatively modified variations. 
every nucleic acid sequence herein which encodes a polypeptide also describes every possible silent variation of the nucleic acid. one of skill will recognize that each codon in a nucleic acid (except aug, which is ordinarily the only codon for methionine, and tgg, which is ordinarily the only codon for tryptophan) can be modified to yield a functionally identical molecule. accordingly, each silent variation of a nucleic acid which encodes a polypeptide is implicit in each described sequence. as to amino acid sequences, one of skill will recognize that individual substitutions, deletions or additions to a nucleic acid, peptide, polypeptide, or protein sequence which alters, adds or deletes a single amino acid or a small percentage of amino acids in the encoded sequence is a “conservatively modified variant” where the alteration results in the substitution of an amino acid with a chemically similar amino acid. conservative substitution tables providing functionally similar amino acids are well known in the art. such conservatively modified variants are in addition to and do not exclude polymorphic variants, interspecies homologs, and alleles of the invention. the following eight groups each contain amino acids that are conservative substitutions for one another: 1) alanine (a), glycine (g); 2) aspartic acid (d), glutamic acid (e); 3) asparagine (n), glutamine (q); 4) arginine (r), lysine (k); 5) isoleucine (i), leucine (l), methionine (m), valine (v); 6) phenylalanine (f), tyrosine (y), tryptophan (w); 7) serine (s), threonine (t); and 8) cysteine (c), methionine (m) (see, e.g., creighton, proteins (1984)). the terms “numbered with reference to” or “corresponding to,” when used in the context of the numbering of a given amino acid or polynucleotide sequence, refer to the numbering of the residues of a specified reference sequence when the given amino acid or polynucleotide sequence is compared to the reference sequence. 
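the silent-variation and conservative-substitution conventions above can be sketched in a short python snippet. this is an illustrative sketch only: the names `ALANINE_CODONS`, `CONSERVATIVE_GROUPS`, and `is_conservative` are hypothetical helpers, not part of the specification; the groups encode the eight sets listed above.

```python
# Illustrative sketch of silent (synonymous) codon variation and the eight
# conservative-substitution groups described above. All names are hypothetical.

ALANINE_CODONS = {"GCA", "GCC", "GCG", "GCU"}  # all four encode alanine (A)

CONSERVATIVE_GROUPS = [
    {"A", "G"},            # 1) alanine, glycine
    {"D", "E"},            # 2) aspartic acid, glutamic acid
    {"N", "Q"},            # 3) asparagine, glutamine
    {"R", "K"},            # 4) arginine, lysine
    {"I", "L", "M", "V"},  # 5) isoleucine, leucine, methionine, valine
    {"F", "Y", "W"},       # 6) phenylalanine, tyrosine, tryptophan
    {"S", "T"},            # 7) serine, threonine
    {"C", "M"},            # 8) cysteine, methionine
]

def is_conservative(aa1: str, aa2: str) -> bool:
    """True if the two one-letter residues fall within the same group."""
    return any(aa1 in g and aa2 in g for g in CONSERVATIVE_GROUPS)

# Swapping any codon in ALANINE_CODONS for another is a silent variation:
# the encoded residue (alanine) is unchanged.
print(is_conservative("I", "V"))  # True: both in group 5
print(is_conservative("I", "D"))  # False: groups 5 and 2
```

substituting within a group (e.g., isoleucine for valine) yields a conservatively modified variant; substituting across groups generally does not.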
an amino acid residue in a protein “corresponds” to a given residue when it occupies the same essential structural position within the protein as the given residue. one skilled in the art will immediately recognize the identity and location of residues corresponding to a specific position in a protein (e.g., ror-1) in other proteins with different numbering systems. for example, by performing a simple sequence alignment with a protein (e.g., ror-1) the identity and location of residues corresponding to specific positions of said protein are identified in other protein sequences aligning to said protein. for example, a selected residue in a selected protein corresponds to glutamic acid at position 138 when the selected residue occupies the same essential spatial or other structural relationship as a glutamic acid at position 138. in some embodiments, where a selected protein is aligned for maximum homology with a protein, the position in the aligned selected protein aligning with glutamic acid 138 is said to correspond to glutamic acid 138. instead of a primary sequence alignment, a three dimensional structural alignment can also be used, e.g., where the structure of the selected protein is aligned for maximum correspondence with the glutamic acid at position 138, and the overall structures compared. in this case, an amino acid that occupies the same essential position as glutamic acid 138 in the structural model is said to correspond to the glutamic acid 138 residue. “percentage of sequence identity” is determined by comparing two optimally aligned sequences over a comparison window, wherein the portion of the polynucleotide or polypeptide sequence in the comparison window may comprise additions or deletions (i.e., gaps) as compared to the reference sequence (which does not comprise additions or deletions) for optimal alignment of the two sequences. 
the percentage is calculated by determining the number of positions at which the identical nucleic acid base or amino acid residue occurs in both sequences to yield the number of matched positions, dividing the number of matched positions by the total number of positions in the window of comparison and multiplying the result by 100 to yield the percentage of sequence identity. the terms “identical” or percent “identity,” in the context of two or more nucleic acids or polypeptide sequences, refer to two or more sequences or subsequences that are the same or have a specified percentage of amino acid residues or nucleotides that are the same (i.e., 60% identity, optionally 65%, 70%, 75%, 80%, 85%, 90%, 95%, 98%, or 99% identity over a specified region, e.g., of the entire polypeptide sequences of the invention or individual domains of the polypeptides of the invention), when compared and aligned for maximum correspondence over a comparison window, or designated region as measured using one of the following sequence comparison algorithms or by manual alignment and visual inspection. such sequences are then said to be “substantially identical.” this definition also refers to the complement of a test sequence. optionally, the identity exists over a region that is at least about 50 nucleotides in length, or more preferably over a region that is 100 to 500 or 1000 or more nucleotides in length. for sequence comparison, typically one sequence acts as a reference sequence, to which test sequences are compared. when using a sequence comparison algorithm, test and reference sequences are entered into a computer, subsequence coordinates are designated, if necessary, and sequence algorithm program parameters are designated. default program parameters can be used, or alternative parameters can be designated. the sequence comparison algorithm then calculates the percent sequence identities for the test sequences relative to the reference sequence, based on the program parameters. 
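the calculation just described (matched positions divided by the total positions in the comparison window, multiplied by 100) can be illustrated with a minimal sketch; the function name `percent_identity` is hypothetical and the sequences are assumed to be pre-aligned to equal length.

```python
def percent_identity(seq_a: str, seq_b: str) -> float:
    """Percent identity of two pre-aligned, equal-length sequences over a
    comparison window: matched positions / window length * 100, as described
    above. Gap characters ('-') occupy positions but never count as matches."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to equal length")
    matches = sum(1 for a, b in zip(seq_a, seq_b) if a == b and a != "-")
    return 100.0 * matches / len(seq_a)

# 5 of 7 window positions match:
print(round(percent_identity("GATTACA", "GACTATA"), 1))  # 71.4
```

under the thresholds recited above, two sequences are "substantially identical" when this value meets the stated percentage (e.g., 60% to 99%) over the designated region.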
a “comparison window”, as used herein, includes reference to a segment of any one of the number of contiguous positions selected from the group consisting of, e.g., a full length sequence or from 20 to 600, about 50 to about 200, or about 100 to about 150 amino acids or nucleotides in which a sequence may be compared to a reference sequence of the same number of contiguous positions after the two sequences are optimally aligned. methods of alignment of sequences for comparison are well-known in the art. optimal alignment of sequences for comparison can be conducted, e.g., by the local homology algorithm of smith and waterman (1981) adv. appl. math. 2:482, by the homology alignment algorithm of needleman and wunsch (1970) j. mol. biol. 48:443, by the search for similarity method of pearson and lipman (1988) proc. nat'l. acad. sci. usa 85:2444, by computerized implementations of these algorithms (gap, bestfit, fasta, and tfasta in the wisconsin genetics software package, genetics computer group, 575 science dr., madison, wis.), or by manual alignment and visual inspection (see, e.g., ausubel et al., current protocols in molecular biology (1995 supplement)). an example of an algorithm that is suitable for determining percent sequence identity and sequence similarity are the blast and blast 2.0 algorithms, which are described in altschul et al. (1997) nuc. acids res. 25:3389-3402, and altschul et al. (1990) j. mol. biol. 215:403-410, respectively. software for performing blast analyses is publicly available through the national center for biotechnology information (website at ncbi.nlm.nih.gov/). this algorithm involves first identifying high scoring sequence pairs (hsps) by identifying short words of length w in the query sequence, which either match or satisfy some positive-valued threshold score t when aligned with a word of the same length in a database sequence. t is referred to as the neighborhood word score threshold (altschul et al., supra). 
these initial neighborhood word hits act as seeds for initiating searches to find longer hsps containing them. the word hits are extended in both directions along each sequence for as far as the cumulative alignment score can be increased. cumulative scores are calculated using, for nucleotide sequences, the parameters m (reward score for a pair of matching residues; always >0) and n (penalty score for mismatching residues; always <0). for amino acid sequences, a scoring matrix is used to calculate the cumulative score. extension of the word hits in each direction are halted when: the cumulative alignment score falls off by the quantity x from its maximum achieved value; the cumulative score goes to zero or below, due to the accumulation of one or more negative-scoring residue alignments; or the end of either sequence is reached. the blast algorithm parameters w, t, and x determine the sensitivity and speed of the alignment. the blastn program (for nucleotide sequences) uses as defaults a word length (w) of 11, an expectation (e) of 10, m=5, n=-4 and a comparison of both strands. for amino acid sequences, the blastp program uses as defaults a word length of 3, and expectation (e) of 10, and the blosum62 scoring matrix (see henikoff and henikoff (1992) proc. natl. acad. sci. usa 89:10915) alignments (b) of 50, expectation (e) of 10, m=5, n=-4, and a comparison of both strands. the blast algorithm also performs a statistical analysis of the similarity between two sequences (see, e.g., karlin and altschul (1993) proc. natl. acad. sci. usa 90:5873-5877). one measure of similarity provided by the blast algorithm is the smallest sum probability (p(n)), which provides an indication of the probability by which a match between two nucleotide or amino acid sequences would occur by chance. 
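the seed-and-extend step described above can be sketched as a toy one-directional extension using the nucleotide reward/penalty (m&gt;0, n&lt;0) and the drop-off quantity x. this is a simplified illustration only: the function name and parameters are hypothetical, and real blast extends in both directions and adds gapped extension and karlin-altschul statistics.

```python
def extend_hit(query: str, subject: str, q_start: int, s_start: int,
               w: int, match: int = 5, mismatch: int = -4, x: int = 10):
    """Toy rightward extension of an exact length-w word hit (a sketch of the
    BLAST-style extension described above, not the real implementation).
    Halts when the cumulative score drops to zero or below, or falls more
    than x below its maximum achieved value, or a sequence end is reached.
    Returns (best cumulative score, extended hit length in the query)."""
    score = best = w * match          # the seed word matches exactly
    best_len = w
    i, j = q_start + w, s_start + w
    while i < len(query) and j < len(subject):
        score += match if query[i] == subject[j] else mismatch
        if score <= 0 or best - score >= x:
            break                     # drop-off X or zero-score halt
        if score > best:
            best, best_len = score, i - q_start + 1
        i += 1
        j += 1
    return best, best_len

# Seed of length 4 extends through 3 more matches, then a mismatch:
print(extend_hit("ACGTACGT", "ACGTACGA", 0, 0, 4))  # (35, 7)
```

the parameters w (word length), the reward/penalty pair, and x here play the roles of the w, m, n, and x parameters recited above; a trailing mismatch reduces the running score without immediately discarding the best-scoring extension.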
for example, a nucleic acid is considered similar to a reference sequence if the smallest sum probability in a comparison of the test nucleic acid to the reference nucleic acid is less than about 0.2, more preferably less than about 0.01, and most preferably less than about 0.001. an indication that two nucleic acid sequences or polypeptides are substantially identical is that the polypeptide encoded by the first nucleic acid is immunologically cross reactive with the antibodies raised against the polypeptide encoded by the second nucleic acid, as described below. thus, a polypeptide is typically substantially identical to a second polypeptide, for example, where the two peptides differ only by conservative substitutions. another indication that two nucleic acid sequences are substantially identical is that the two molecules or their complements hybridize to each other under stringent conditions, as described below. yet another indication that two nucleic acid sequences are substantially identical is that the same primers can be used to amplify the sequence. the term “isolated,” when applied to a protein, denotes that the protein is essentially free of other cellular components with which it is associated in the natural state. it is preferably in a homogeneous state although it can be in either a dry or aqueous solution. purity and homogeneity are typically determined using analytical chemistry techniques such as polyacrylamide gel electrophoresis or high performance liquid chromatography. a protein that is the predominant species present in a preparation is substantially purified. the term “purified” denotes that a protein gives rise to essentially one band in an electrophoretic gel. particularly, it means that the protein is at least 85% pure, more preferably at least 95% pure, and most preferably at least 99% pure. 
the phrase “specifically (or selectively) binds” to an antibody or “specifically (or selectively) immunoreactive with,” when referring to a protein or peptide, refers to a binding reaction that is determinative of the presence of the protein in a heterogeneous population of proteins and other biologics. thus, under designated immunoassay conditions, the specified antibodies bind to a particular protein at least two times the background and do not substantially bind in a significant amount to other proteins present in the sample. typically a specific or selective reaction will be at least twice background signal or noise and more typically more than 10 to 100 times background. a “cell” as used herein, refers to a cell carrying out metabolic or other function sufficient to preserve or replicate its genomic dna. a cell can be identified by well-known methods in the art including, for example, presence of an intact membrane, staining by a particular dye, ability to produce progeny or, in the case of a gamete, ability to combine with a second gamete to produce a viable offspring. cells may include prokaryotic and eukaryotic cells. prokaryotic cells include but are not limited to bacteria. eukaryotic cells include but are not limited to yeast cells and cells derived from plants and animals, for example mammalian, insect (e.g., spodoptera) and human cells. as defined herein, the term “inhibition”, “inhibit”, “inhibiting” and the like in reference to a protein-inhibitor (e.g., an receptor antagonist or a signaling pathway inhibitor) interaction means negatively affecting (e.g., decreasing) the activity or function of the protein (e.g., decreasing the activity of a receptor or a protein) relative to the activity or function of the protein in the absence of the inhibitor. in some embodiments, inhibition refers to reduction of a disease or symptoms of disease (e.g., cancer). 
thus, inhibition includes, at least in part, partially or totally blocking stimulation, decreasing, preventing, or delaying activation, or inactivating, desensitizing, or down-regulating signal transduction or enzymatic activity or the amount of a protein (e.g., a receptor). similarly an “inhibitor” is a compound or protein that inhibits a receptor or another protein, e.g., by binding, partially or totally blocking, decreasing, preventing, delaying, inactivating, desensitizing, or down-regulating activity (e.g., a receptor activity or a protein activity). the term “btk antagonist” as provided herein refers to a substance capable of inhibiting btk activity compared to a control. the inhibited activity of btk can be 10%, 20%, 30%, 40%, 50%, 60%, 70%, 80%, 90% or less than that in a control. in certain instances, the inhibition is 1.5-fold, 2-fold, 3-fold, 4-fold, 5-fold, 10-fold, or more in comparison to a control. a btk antagonist inhibits btk activity e.g., by at least in part, partially or totally blocking stimulation, decreasing, preventing, or delaying activation, or inactivating, desensitizing, or down-regulating signal transduction, activity or amount of btk relative to the absence of the btk antagonist. the term “ror-1 antagonist” as provided herein refers to a substance capable of inhibiting ror-1 activity compared to a control. the inhibited activity of ror-1 can be 10%, 20%, 30%, 40%, 50%, 60%, 70%, 80%, 90% or less than that in a control. in certain instances, the inhibition is 1.5-fold, 2-fold, 3-fold, 4-fold, 5-fold, 10-fold, or more in comparison to a control. a ror-1 antagonist inhibits ror-1 activity e.g., by at least in part, partially or totally blocking stimulation, decreasing, preventing, or delaying activation, or inactivating, desensitizing, or down-regulating signal transduction, activity or amount of ror-1 relative to the absence of the ror-1 antagonist. in embodiments, the ror-1 antagonist is an antibody or small molecule. 
the term “antagonist” may alternatively be used herein as inhibitor. by “therapeutically effective dose or amount” as used herein is meant a dose that produces effects for which it is administered (e.g. treating or preventing a disease). the exact dose and formulation will depend on the purpose of the treatment, and will be ascertainable by one skilled in the art using known techniques (see, e.g., lieberman, pharmaceutical dosage forms (vols. 1-3, 1992); lloyd, the art, science and technology of pharmaceutical compounding (1999); remington: the science and practice of pharmacy, 20th edition, gennaro, editor (2003), and pickar, dosage calculations (1999)). for example, for the given parameter, a therapeutically effective amount will show an increase or decrease of at least 5%, 10%, 15%, 20%, 25%, 40%, 50%, 60%, 75%, 80%, 90%, or at least 100%. therapeutic efficacy can also be expressed as “-fold” increase or decrease. for example, a therapeutically effective amount can have at least a 1.2-fold, 1.5-fold, 2-fold, 5-fold, or more effect over a standard control. a therapeutically effective dose or amount may ameliorate one or more symptoms of a disease. a therapeutically effective dose or amount may prevent or delay the onset of a disease or one or more symptoms of a disease when the effect for which it is being administered is to treat a person who is at risk of developing the disease. “anti-cancer agent” is used in accordance with its plain ordinary meaning and refers to a composition (e.g., compound, drug, antagonist, inhibitor, modulator) having antineoplastic properties or the ability to inhibit the growth or proliferation of cells. in some embodiments, an anti-cancer agent is a chemotherapeutic. in some embodiments, an anti-cancer agent is an agent identified herein having utility in methods of treating cancer. 
in some embodiments, an anti-cancer agent is an agent approved by the fda or similar regulatory agency of a country other than the usa, for treating cancer. examples of anti-cancer agents include, but are not limited to, mek (e.g. mek1, mek2, or mek1 and mek2) inhibitors (e.g. xl518, ci-1040, pd035901, selumetinib/azd6244, gsk1120212/trametinib, gdc-0973, arry-162, arry-300, azd8330, pd0325901, u0126, pd98059, tak-733, pd318088, as703026, bay 869766), alkylating agents (e.g., cyclophosphamide, ifosfamide, chlorambucil, busulfan, melphalan, mechlorethamine, uramustine, thiotepa, nitrosoureas, nitrogen mustards (e.g., mechloroethamine, cyclophosphamide, chlorambucil, meiphalan), ethylenimine and methylmelamines (e.g., hexamethlymelamine, thiotepa), alkyl sulfonates (e.g., busulfan), nitrosoureas (e.g., carmustine, lomusitne, semustine, streptozocin), triazenes (decarbazine)), anti-metabolites (e.g., 5-azathioprine, leucovorin, capecitabine, fludarabine, gemcitabine, pemetrexed, raltitrexed, folic acid analog (e.g., methotrexate), or pyrimidine analogs (e.g., fluorouracil, floxouridine, cytarabine), purine analogs (e.g., mercaptopurine, thioguanine, pentostatin), etc.), plant alkaloids (e.g., vincristine, vinblastine, vinorelbine, vindesine, podophyllotoxin, paclitaxel, docetaxel, etc.), topoisomerase inhibitors (e.g., irinotecan, topotecan, amsacrine, etoposide (vp16), etoposide phosphate, teniposide, etc.), antitumor antibiotics (e.g., doxorubicin, adriamycin, daunorubicin, epirubicin, actinomycin, bleomycin, mitomycin, mitoxantrone, plicamycin, etc.), platinum-based compounds or platinum containing agents (e.g. 
cisplatin, oxaloplatin, carboplatin), anthracenedione (e.g., mitoxantrone), substituted urea (e.g., hydroxyurea), methyl hydrazine derivative (e.g., procarbazine), adrenocortical suppressant (e.g., mitotane, aminoglutethimide), epipodophyllotoxins (e.g., etoposide), antibiotics (e.g., daunorubicin, doxorubicin, bleomycin), enzymes (e.g., l-asparaginase), inhibitors of mitogen-activated protein kinase signaling (e.g. u0126, pd98059, pd184352, pd0325901, arry-142886, sb239063, sp600125, bay 43-9006, wortmannin, or ly294002, syk inhibitors, mtor inhibitors, antibodies (e.g., rituxan), gossyphol, genasense, polyphenol e, chlorofusin, all trans-retinoic acid (atra), bryostatin, tumor necrosis factor-related apoptosis-inducing ligand (trail), 5-aza-2′-deoxycytidine, all trans retinoic acid, doxorubicin, vincristine, etoposide, gemcitabine, imatinib (gleevec®), geldanamycin, 17-n-allylamino-17-demethoxygeldanamycin (17-aag), flavopiridol, ly294002, bortezomib, trastuzumab, bay 11-7082, pkc412, pd184352, 20-epi-1, 25 dihydroxyvitamin d3; 5-ethynyluracil; abiraterone; aclarubicin; acylfulvene; adecypenol; adozelesin; aldesleukin; all-tk antagonists; altretamine; ambamustine; amidox; amifostine; aminolevulinic acid; amrubicin; amsacrine; anagrelide; anastrozole; andrographolide; angiogenesis inhibitors; antagonist d; antagonist g; antarelix; anti-dorsalizing morphogenetic protein-1; antiandrogen, prostatic carcinoma; antiestrogen; antineoplaston; antisense oligonucleotides; aphidicolin glycinate; apoptosis gene modulators; apoptosis regulators; apurinic acid; ara-cdp-dl-ptba; arginine deaminase; asulacrine; atamestane; atrimustine; axinastatin 1; axinastatin 2; axinastatin 3; azasetron; azatoxin; azatyrosine; baccatin iii derivatives; balanol; batimastat; bcr/abl antagonists; benzochlorins; benzoylstaurosporine; beta lactam derivatives; beta-alethine; betaclamycin b; betulinic acid; bfgf inhibitor; bicalutamide; bisantrene; bisaziridinylspermine; bisnafide; bistratene a; 
bizelesin; breflate; bropirimine; budotitane; buthionine sulfoximine; calcipotriol; calphostin c; camptothecin derivatives; canarypox il-2; capecitabine; carboxamide-amino-triazole; carboxyamidotriazole; carest m3; carn 700; cartilage derived inhibitor; carzelesin; casein kinase inhibitors (icos); castanospermine; cecropin b; cetrorelix; chlorins; chloroquinoxaline sulfonamide; cicaprost; cis-porphyrin; cladribine; clomifene analogues; clotrimazole; collismycin a; collismycin b; combretastatin a4; combretastatin analogue; conagenin; crambescidin 816; crisnatol; cryptophycin 8; cryptophycin a derivatives; curacin a; cyclopentanthraquinones; cycloplatam; cypemycin; cytarabine ocfosfate; cytolytic factor; cytostatin; dacliximab; decitabine; dehydrodidemnin b; deslorelin; dexamethasone; dexifosfamide; dexrazoxane; dexverapamil; diaziquone; didemnin b; didox; diethylnorspermine; dihydro-5-azacytidine; 9-dioxamycin; diphenyl spiromustine; docosanol; dolasetron; doxifluridine; droloxifene; dronabinol; duocarmycin sa; ebselen; ecomustine; edelfosine; edrecolomab; eflornithine; elemene; emitefur; epirubicin; epristeride; estramustine analogue; estrogen agonists; estrogen antagonists; etanidazole; etoposide phosphate; exemestane; fadrozole; fazarabine; fenretinide; filgrastim; finasteride; flavopiridol; flezelastine; fluasterone; fludarabine; fluorodaunorubicin hydrochloride; forfenimex; formestane; fostriecin; fotemustine; gadolinium texaphyrin; gallium nitrate; galocitabine; ganirelix; gelatinase inhibitors; gemcitabine; glutathione inhibitors; hepsulfam; heregulin; hexamethylene bisacetamide; hypericin; ibandronic acid; idarubicin; idoxifene; idramantone; ilmofosine; ilomastat; imidazoacridones; imiquimod; immunostimulant peptides; insulin-like growth factor-1 receptor inhibitor; interferon agonists; interferons; interleukins; iobenguane; iododoxorubicin; ipomeanol, 4-; iroplact; irsogladine; isobengazole; isohomohalichondrin b; itasetron; jasplakinolide; kahalalide f; 
lamellarin-n triacetate; lanreotide; leinamycin; lenograstim; lentinan sulfate; leptolstatin; letrozole; leukemia inhibiting factor; leukocyte alpha interferon; leuprolide+estrogen+progesterone; leuprorelin; levamisole; liarozole; linear polyamine analogue; lipophilic disaccharide peptide; lipophilic platinum compounds; lissoclinamide 7; lobaplatin; lombricine; lometrexol; lonidamine; losoxantrone; lovastatin; loxoribine; lurtotecan; lutetium texaphyrin; lysofylline; lytic peptides; maitansine; mannostatin a; marimastat; masoprocol; maspin; matrilysin inhibitors; matrix metalloproteinase inhibitors; menogaril; merbarone; meterelin; methioninase; metoclopramide; mif inhibitor; mifepristone; miltefosine; mirimostim; mismatched double stranded rna; mitoguazone; mitolactol; mitomycin analogues; mitonafide; mitotoxin fibroblast growth factor-saporin; mitoxantrone; mofarotene; molgramostim; monoclonal antibody, human chorionic gonadotrophin; monophosphoryl lipid a+mycobacterium cell wall sk; mopidamol; multiple drug resistance gene inhibitor; multiple tumor suppressor 1-based therapy; mustard anticancer agent; mycaperoxide b; mycobacterial cell wall extract; myriaporone; n-acetyldinaline; n-substituted benzamides; nafarelin; nagrestip; naloxone+pentazocine; napavin; naphterpin; nartograstim; nedaplatin; nemorubicin; neridronic acid; neutral endopeptidase; nilutamide; nisamycin; nitric oxide modulators; nitroxide antioxidant; nitrullyn; o6-benzylguanine; octreotide; okicenone; oligonucleotides; onapristone; ondansetron; oracin; oral cytokine inducer; ormaplatin; osaterone; oxaliplatin; oxaunomycin; palauamine; palmitoylrhizoxin; pamidronic acid; panaxytriol; panomifene; parabactin; pazelliptine; pegaspargase; peldesine; pentosan polysulfate sodium; pentostatin; pentrozole; perflubron; perfosfamide; perillyl alcohol; phenazinomycin; phenylacetate; phosphatase inhibitors; picibanil; pilocarpine hydrochloride; pirarubicin; piritrexim; placetin a; placetin b; 
plasminogen activator inhibitor; platinum complex; platinum compounds; platinum-triamine complex; porfimer sodium; porfiromycin; prednisone; propyl bis-acridone; prostaglandin j2; proteasome inhibitors; protein a-based immune modulator; protein kinase c inhibitor; protein kinase c inhibitors, microalgal; protein tyrosine phosphatase inhibitors; purine nucleoside phosphorylase inhibitors; purpurins; pyrazoloacridine; pyridoxylated hemoglobin polyoxyethylene conjugate; raf antagonists; raltitrexed; ramosetron; ras farnesyl protein transferase inhibitors; ras inhibitors; ras-gap inhibitor; retelliptine demethylated; rhenium re 186 etidronate; rhizoxin; ribozymes; rii retinamide; rogletimide; rohitukine; romurtide; roquinimex; rubiginone b1; ruboxyl; safingol; saintopin; sarcnu; sarcophytol a; sargramostim; sdi 1 mimetics; semustine; senescence derived inhibitor 1; sense oligonucleotides; signal transduction inhibitors; signal transduction modulators; single chain antigen-binding protein; sizofuran; sobuzoxane; sodium borocaptate; sodium phenylacetate; solverol; somatomedin binding protein; sonermin; sparfosic acid; spicamycin d; spiromustine; splenopentin; spongistatin 1; squalamine; stem cell inhibitor; stem-cell division inhibitors; stipiamide; stromelysin inhibitors; sulfinosine; superactive vasoactive intestinal peptide antagonist; suradista; suramin; swainsonine; synthetic glycosaminoglycans; tallimustine; tamoxifen methiodide; tauromustine; tazarotene; tecogalan sodium; tegafur; tellurapyrylium; telomerase inhibitors; temoporfin; temozolomide; teniposide; tetrachlorodecaoxide; tetrazomine; thaliblastine; thiocoraline; thrombopoietin; thrombopoietin mimetic; thymalfasin; thymopoietin receptor agonist; thymotrinan; thyroid stimulating hormone; tin ethyl etiopurpurin; tirapazamine; titanocene bichloride; topsentin; toremifene; totipotent stem cell factor; translation inhibitors; tretinoin; triacetyluridine; triciribine; trimetrexate; triptorelin; tropisetron; 
turosteride; tyrosine kinase inhibitors; tyrphostins; ubc inhibitors; ubenimex; urogenital sinus-derived growth inhibitory factor; urokinase receptor antagonists; vapreotide; variolin b; vector system, erythrocyte gene therapy; velaresol; veramine; verdins; verteporfin; vinorelbine; vinxaltine; vitaxin; vorozole; zanoterone; zeniplatin; zilascorb; zinostatin stimalamer, adriamycin, dactinomycin, bleomycin, vinblastine, cisplatin, acivicin; aclarubicin; acodazole hydrochloride; acronine; adozelesin; aldesleukin; altretamine; ambomycin; ametantrone acetate; aminoglutethimide; amsacrine; anastrozole; anthramycin; asparaginase; asperlin; azacitidine; azetepa; azotomycin; batimastat; benzodepa; bicalutamide; bisantrene hydrochloride; bisnafide dimesylate; bizelesin; bleomycin sulfate; brequinar sodium; bropirimine; busulfan; cactinomycin; calusterone; caracemide; carbetimer; carboplatin; carmustine; carubicin hydrochloride; carzelesin; cedefingol; chlorambucil; cirolemycin; cladribine; crisnatol mesylate; cyclophosphamide; cytarabine; dacarbazine; daunorubicin hydrochloride; decitabine; dexormaplatin; dezaguanine; dezaguanine mesylate; diaziquone; doxorubicin; doxorubicin hydrochloride; droloxifene; droloxifene citrate; dromostanolone propionate; duazomycin; edatrexate; eflornithine hydrochloride; elsamitrucin; enloplatin; enpromate; epipropidine; epirubicin hydrochloride; erbulozole; esorubicin hydrochloride; estramustine; estramustine phosphate sodium; etanidazole; etoposide; etoposide phosphate; etoprine; fadrozole hydrochloride; fazarabine; fenretinide; floxuridine; fludarabine phosphate; fluorouracil; fluorocitabine; fosquidone; fostriecin sodium; gemcitabine; gemcitabine hydrochloride; hydroxyurea; idarubicin hydrochloride; ifosfamide; ilmofosine; interleukin ii (including recombinant interleukin ii, or ril2), interferon alfa-2a; interferon alfa-2b; interferon alfa-n1; interferon alfa-n3; interferon beta-1a; interferon gamma-1b; iproplatin; irinotecan 
hydrochloride; lanreotide acetate; letrozole; leuprolide acetate; liarozole hydrochloride; lometrexol sodium; lomustine; losoxantrone hydrochloride; masoprocol; maytansine; mechlorethamine hydrochloride; megestrol acetate; melengestrol acetate; melphalan; menogaril; mercaptopurine; methotrexate; methotrexate sodium; metoprine; meturedepa; mitindomide; mitocarcin; mitocromin; mitogillin; mitomalcin; mitomycin; mitosper; mitotane; mitoxantrone hydrochloride; mycophenolic acid; nocodazole; nogalamycin; ormaplatin; oxisuran; pegaspargase; peliomycin; pentamustine; peplomycin sulfate; perfosfamide; pipobroman; piposulfan; piroxantrone hydrochloride; plicamycin; plomestane; porfimer sodium; porfiromycin; prednimustine; procarbazine hydrochloride; puromycin; puromycin hydrochloride; pyrazofurin; riboprine; rogletimide; safingol; safingol hydrochloride; semustine; simtrazene; sparfosate sodium; sparsomycin; spirogermanium hydrochloride; spiromustine; spiroplatin; streptonigrin; streptozocin; sulofenur; talisomycin; tecogalan sodium; tegafur; teloxantrone hydrochloride; temoporfin; teniposide; teroxirone; testolactone; thiamiprine; thioguanine; thiotepa; tiazofurin; tirapazamine; toremifene citrate; trestolone acetate; triciribine phosphate; trimetrexate; trimetrexate glucuronate; triptorelin; tubulozole hydrochloride; uracil mustard; uredepa; vapreotide; verteporfin; vinblastine sulfate; vincristine sulfate; vindesine; vindesine sulfate; vinepidine sulfate; vinglycinate sulfate; vinleurosine sulfate; vinorelbine tartrate; vinrosidine sulfate; vinzolidine sulfate; vorozole; zeniplatin; zinostatin; zorubicin hydrochloride, agents that arrest cells in the g2-m phases and/or modulate the formation or stability of microtubules (e.g. taxol™ (i.e. paclitaxel), taxotere™, compounds comprising the taxane skeleton, erbulozole (i.e. r-55104), dolastatin 10 (i.e. dls-10 and nsc-376128), mivobulin isethionate (i.e. as ci-980), vincristine, nsc-639829, discodermolide (i.e. 
as nvp-xx-a-296), abt-751 (abbott, i.e. e-7010), altorhyrtins (e.g. altorhyrtin a and altorhyrtin c), spongistatins (e.g. spongistatin 1, spongistatin 2, spongistatin 3, spongistatin 4, spongistatin 5, spongistatin 6, spongistatin 7, spongistatin 8, and spongistatin 9), cemadotin hydrochloride (i.e. lu-103793 and nsc-d-669356), epothilones (e.g. epothilone a, epothilone b, epothilone c (i.e. desoxyepothilone a or depoa), epothilone d (i.e. kos-862, depob, and desoxyepothilone b), epothilone e, epothilone f, epothilone b n-oxide, epothilone a n-oxide, 16-aza-epothilone b, 21-aminoepothilone b (i.e. bms-310705), 21-hydroxyepothilone d (i.e. desoxyepothilone f and depof), 26-fluoroepothilone), auristatin pe (i.e. nsc-654663), soblidotin (i.e. tzt-1027), vincristine sulfate, cryptophycin 52 (i.e. ly-355703), vitilevuamide, tubulysin a, canadensol, centaureidin (i.e. nsc-106969), oncocidin al (i.e. bto-956 and dime), fijianolide b, laulimalide, narcosine (also known as nsc-5366), nascapine, hemiasterlin, vanadocene acetylacetonate, monsatrol, inanocine (i.e. nsc-698666), eleutherobins (such as desmethyleleutherobin, desacetyleleutherobin, isoeleutherobin a, and z-eleutherobin), caribaeoside, caribaeolin, halichondrin b, diazonamide a, taccalonolide a, diozostatin, (−)-phenylahistin (i.e. 
nscl-96f037), myoseverin b, resverastatin phosphate sodium, steroids (e.g., dexamethasone), finasteride, aromatase inhibitors, gonadotropin-releasing hormone agonists (gnrh) such as goserelin or leuprolide, adrenocorticosteroids (e.g., prednisone), progestins (e.g., hydroxyprogesterone caproate, megestrol acetate, medroxyprogesterone acetate), estrogens (e.g., diethylstilbestrol, ethinyl estradiol), antiestrogen (e.g., tamoxifen), androgens (e.g., testosterone propionate, fluoxymesterone), antiandrogen (e.g., flutamide), immunostimulants (e.g., bacillus calmette-guérin (bcg), levamisole, interleukin-2, alpha-interferon, etc.), monoclonal antibodies (e.g., anti-cd20, anti-her2, anti-cd52, anti-hla-dr, and anti-vegf monoclonal antibodies), immunotoxins (e.g., anti-cd33 monoclonal antibody-calicheamicin conjugate, anti-cd22 monoclonal antibody-pseudomonas exotoxin conjugate, etc.), radioimmunotherapy (e.g., anti-cd20 monoclonal antibody conjugated to 111in, 90y, or 131i, etc.), triptolide, homoharringtonine, dactinomycin, doxorubicin, epirubicin, topotecan, itraconazole, vindesine, cerivastatin, vincristine, deoxyadenosine, sertraline, pitavastatin, irinotecan, clofazimine, 5-nonyloxytryptamine, vemurafenib, dabrafenib, erlotinib, gefitinib, egfr inhibitors, epidermal growth factor receptor (egfr)-targeted therapy or therapeutic (e.g. gefitinib (iressa™), erlotinib (tarceva™), cetuximab (erbitux™), lapatinib (tykerb™), panitumumab (vectibix™), vandetanib (caprelsa™), afatinib/bibw2992, ci-1033/canertinib, neratinib/hki-272, cp-724714, tak-285, ast-1306, arry334543, arry-380, ag-1478, dacomitinib/pf299804, osi-420/desmethyl erlotinib, azd8931, aee788, pelitinib/ekb-569, cudc-101, wz8040, wz4002, wz3146, ag-490, xl647, pd153035, bms-599626), sorafenib, imatinib, sunitinib, dasatinib, hormonal therapies, or the like. 
the term “ibrutinib,” also known as imbruvica®, pci-32765 or the like, refers in the usual and customary sense to 1-[(3r)-3-[4-amino-3-(4-phenoxyphenyl)-1h-pyrazolo[3,4-d]pyrimidin-1-yl]piperidin-1-yl]prop-2-en-1-one (cas registry number 936563-96-1). in embodiments, the btk antagonist is any one of the compounds disclosed in u.s. pat. nos. 7,514,444; 8,008,309; 8,497,277; 8,476,284; 8,697,711; and 8,703,780, which are incorporated by reference herein in their entirety and for all purposes. the term “idelalisib,” also known as cal101, gs-1101, zydelig® or the like, refers in the usual and customary sense to 5-fluoro-3-phenyl-2-[(1s)-1-(7h-purin-6-ylamino)propyl]-4(3h)-quinazolinone (cas registry number 870281-82-6). in embodiments, the btk antagonist is any one of the compounds disclosed in u.s. pat. nos. 9,469,643; 9,492,449; 8,139,195; 8,492,389; 8,865,730; and 9,149,477, which are incorporated by reference herein in their entirety and for all purposes. the term “r406” or the like refers, in the usual and customary sense, to 6-((5-fluoro-2-((3,4,5-trimethoxyphenyl)amino)pyrimidin-4-yl)amino)-2,2-dimethyl-2h-pyrido[3,2-b][1,4]oxazin-3(4h)-one benzenesulfonate. the term “fostamatinib” or the like refers in the usual and customary sense to 6-((5-fluoro-2-((3,4,5-trimethoxyphenyl)amino)pyrimidin-4-yl)amino)-2,2-dimethyl-2h-pyrido[3,2-b][1,4]oxazin-3(4h)-one benzenesulfonate (cas registry number 901119-35-5 or 1025687-58-4 (disodium salt)). fostamatinib is a prodrug of r406. in embodiments, the btk antagonist is any one of the compounds disclosed in u.s. pat. no. 7,449,458, which is incorporated by reference herein in its entirety and for all purposes. the term “acalabrutinib,” also known as acp-196 or the like, refers in the usual and customary sense to 4-[8-amino-3-[(2s)-1-but-2-ynoylpyrrolidin-2-yl]imidazo[1,5-a]pyrazin-1-yl]-n-pyridin-2-ylbenzamide (cas registry number 1420477-60-6). in embodiments, the btk antagonist is any one of the compounds disclosed in us pat. 
application nos. 20140155385, 20160151364, and 20160159810, which are incorporated by reference herein in their entirety and for all purposes. the term “ono/gs-4059” or the like refers in the usual and customary sense to 6-amino-7-(4-phenoxyphenyl)-9-[(3s)-1-prop-2-enoylpiperidin-3-yl]purin-8-one (cas registry number 1351636-18-4). in embodiments, the btk antagonist is any one of the compounds disclosed in u.s. pat. nos. 8,940,725 and 8,557,803 and u.s. patent application no. 20150094299, which are incorporated by reference herein in their entirety and for all purposes. the term “bgb-3111” or the like refers in the usual and customary sense to 2-(4-phenoxyphenyl)-7-(1-prop-2-enoylpiperidin-4-yl)-1,5,6,7-tetrahydropyrazolo[1,5-a]pyrimidine-3-carboxamide (cas registry number 1633350-06-7). in embodiments, the btk antagonist is any one of the compounds disclosed in u.s. patent application nos. 20150259354 and 20160083392, which are incorporated by reference herein in their entirety and for all purposes. the term “cc-292,” also known as avl-292, spebrutinib or the like, refers in the usual and customary sense to n-[3-[[5-fluoro-2-[4-(2-methoxyethoxy)anilino]pyrimidin-4-yl]amino]phenyl]prop-2-enamide (cas registry number 1202757-89-8). in embodiments, the btk antagonist is any one of the compounds disclosed in u.s. pat. no. 8,338,439, which is incorporated by reference herein in its entirety and for all purposes. the terms “cirmtuzumab”, “uc-961”, and “99961.1” refer to a humanized monoclonal antibody capable of binding the extracellular domain of the human receptor tyrosine kinase-like orphan receptor 1 (ror-1). in embodiments, cirmtuzumab is any one of the antibodies or fragments thereof disclosed in u.s. patent application ser. no. 14/422,519, which is incorporated by reference herein in its entirety and for all purposes. 
as used herein, the term “about” means a range of values including the specified value, which a person of ordinary skill in the art would consider reasonably similar to the specified value. in embodiments, about means within a standard deviation using measurements generally acceptable in the art. in embodiments, about means a range extending to +/−10% of the specified value. in embodiments, about means the specified value. the terms “disease” or “condition” refer to a state of being or health status of a patient or subject capable of being treated with a compound, pharmaceutical composition, or method provided herein. in embodiments, the disease is cancer (e.g. lung cancer, ovarian cancer, osteosarcoma, bladder cancer, cervical cancer, liver cancer, kidney cancer, skin cancer (e.g., merkel cell carcinoma), testicular cancer, leukemia, lymphoma, head and neck cancer, colorectal cancer, prostate cancer, pancreatic cancer, melanoma, breast cancer, neuroblastoma). the disease may be an autoimmune, inflammatory, cancer, infectious, metabolic, developmental, cardiovascular, liver, intestinal, endocrine, neurological, or other disease. as used herein, the term “cancer” refers to all types of cancer, neoplasm or malignant tumors found in mammals, including leukemias, lymphomas, melanomas, neuroendocrine tumors, carcinomas and sarcomas. exemplary cancers that may be treated with a compound, pharmaceutical composition, or method provided herein include lymphoma, sarcoma, bladder cancer, bone cancer, brain tumor, cervical cancer, colon cancer, esophageal cancer, gastric cancer, head and neck cancer, kidney cancer, myeloma, thyroid cancer, leukemia, prostate cancer, breast cancer (e.g. 
triple negative, er positive, er negative, chemotherapy resistant, herceptin resistant, her2 positive, doxorubicin resistant, tamoxifen resistant, ductal carcinoma, lobular carcinoma, primary, metastatic), ovarian cancer, pancreatic cancer, liver cancer (e.g., hepatocellular carcinoma), lung cancer (e.g. non-small cell lung carcinoma, squamous cell lung carcinoma, adenocarcinoma, large cell lung carcinoma, small cell lung carcinoma, carcinoid, sarcoma), glioblastoma multiforme, glioma, melanoma, prostate cancer, castration-resistant prostate cancer, breast cancer, triple negative breast cancer, glioblastoma, ovarian cancer, lung cancer, squamous cell carcinoma (e.g., head, neck, or esophagus), colorectal cancer, leukemia, acute myeloid leukemia, lymphoma, b cell lymphoma, or multiple myeloma. additional examples include cancer of the thyroid, endocrine system, brain, breast, cervix, colon, head & neck, esophagus, liver, kidney, lung, non-small cell lung, melanoma, mesothelioma, ovary, sarcoma, stomach, uterus or medulloblastoma, hodgkin's disease, non-hodgkin's lymphoma, multiple myeloma, neuroblastoma, glioma, glioblastoma multiforme, ovarian cancer, rhabdomyosarcoma, primary thrombocytosis, primary macroglobulinemia, primary brain tumors, cancer, malignant pancreatic insulinoma, malignant carcinoid, urinary bladder cancer, premalignant skin lesions, testicular cancer, lymphomas, thyroid cancer, neuroblastoma, esophageal cancer, genitourinary tract cancer, malignant hypercalcemia, endometrial cancer, adrenal cortical cancer, neoplasms of the endocrine or exocrine pancreas, medullary thyroid cancer, medullary thyroid carcinoma, melanoma, colorectal cancer, papillary thyroid cancer, hepatocellular carcinoma, paget's disease of the nipple, phyllodes tumors, lobular carcinoma, ductal carcinoma, cancer of the pancreatic stellate cells, cancer of the hepatic stellate cells, or prostate cancer. 
the term “leukemia” refers broadly to progressive, malignant diseases of the blood-forming organs and is generally characterized by a distorted proliferation and development of leukocytes and their precursors in the blood and bone marrow. leukemia is generally clinically classified on the basis of (1) the duration and character of the disease: acute or chronic; (2) the type of cell involved: myeloid (myelogenous), lymphoid (lymphogenous), or monocytic; and (3) the increase or non-increase in the number of abnormal cells in the blood: leukemic or aleukemic (subleukemic). exemplary leukemias that may be treated with a compound, pharmaceutical composition, or method provided herein include, for example, acute nonlymphocytic leukemia, chronic lymphocytic leukemia, acute granulocytic leukemia, chronic granulocytic leukemia, acute promyelocytic leukemia, adult t-cell leukemia, aleukemic leukemia, aleukocythemic leukemia, basophilic leukemia, blast cell leukemia, bovine leukemia, chronic myelocytic leukemia, leukemia cutis, embryonal leukemia, eosinophilic leukemia, gross' leukemia, hairy-cell leukemia, hemoblastic leukemia, hemocytoblastic leukemia, histiocytic leukemia, stem cell leukemia, acute monocytic leukemia, leukopenic leukemia, lymphatic leukemia, lymphoblastic leukemia, lymphocytic leukemia, lymphogenous leukemia, lymphoid leukemia, lymphosarcoma cell leukemia, mast cell leukemia, megakaryocytic leukemia, micromyeloblastic leukemia, monocytic leukemia, myeloblastic leukemia, myelocytic leukemia, myeloid granulocytic leukemia, myelomonocytic leukemia, naegeli leukemia, plasma cell leukemia, multiple myeloma, plasmacytic leukemia, promyelocytic leukemia, rieder cell leukemia, schilling's leukemia, stem cell leukemia, subleukemic leukemia, or undifferentiated cell leukemia. 
the term “sarcoma” generally refers to a tumor which is made up of a substance like the embryonic connective tissue and is generally composed of closely packed cells embedded in a fibrillar or homogeneous substance. sarcomas that may be treated with a compound, pharmaceutical composition, or method provided herein include a chondrosarcoma, fibrosarcoma, lymphosarcoma, melanosarcoma, myxosarcoma, osteosarcoma, abernethy's sarcoma, adipose sarcoma, liposarcoma, alveolar soft part sarcoma, ameloblastic sarcoma, botryoid sarcoma, chloroma sarcoma, choriocarcinoma, embryonal sarcoma, wilms' tumor sarcoma, endometrial sarcoma, stromal sarcoma, ewing's sarcoma, fascial sarcoma, fibroblastic sarcoma, giant cell sarcoma, granulocytic sarcoma, hodgkin's sarcoma, idiopathic multiple pigmented hemorrhagic sarcoma, immunoblastic sarcoma of b cells, lymphoma, immunoblastic sarcoma of t-cells, jensen's sarcoma, kaposi's sarcoma, kupffer cell sarcoma, angiosarcoma, leukosarcoma, malignant mesenchymoma sarcoma, parosteal sarcoma, reticulocytic sarcoma, rous sarcoma, serocystic sarcoma, synovial sarcoma, or telangiectatic sarcoma. the term “melanoma” is taken to mean a tumor arising from the melanocytic system of the skin and other organs. melanomas that may be treated with a compound, pharmaceutical composition, or method provided herein include, for example, acral-lentiginous melanoma, amelanotic melanoma, benign juvenile melanoma, cloudman's melanoma, s91 melanoma, harding-passey melanoma, juvenile melanoma, lentigo maligna melanoma, malignant melanoma, nodular melanoma, subungual melanoma, or superficial spreading melanoma. the term “carcinoma” refers to a malignant new growth made up of epithelial cells tending to infiltrate the surrounding tissues and give rise to metastases. 
exemplary carcinomas that may be treated with a compound, pharmaceutical composition, or method provided herein include, for example, medullary thyroid carcinoma, familial medullary thyroid carcinoma, acinar carcinoma, acinous carcinoma, adenocystic carcinoma, adenoid cystic carcinoma, carcinoma adenomatosum, carcinoma of adrenal cortex, alveolar carcinoma, alveolar cell carcinoma, basal cell carcinoma, carcinoma basocellulare, basaloid carcinoma, basosquamous cell carcinoma, bronchioalveolar carcinoma, bronchiolar carcinoma, bronchogenic carcinoma, cerebriform carcinoma, cholangiocellular carcinoma, chorionic carcinoma, colloid carcinoma, comedo carcinoma, corpus carcinoma, cribriform carcinoma, carcinoma en cuirasse, carcinoma cutaneum, cylindrical carcinoma, cylindrical cell carcinoma, duct carcinoma, ductal carcinoma, carcinoma durum, embryonal carcinoma, encephaloid carcinoma, epidermoid carcinoma, carcinoma epitheliale adenoides, exophytic carcinoma, carcinoma ex ulcere, carcinoma fibrosum, gelatiniform carcinoma, gelatinous carcinoma, giant cell carcinoma, carcinoma gigantocellulare, glandular carcinoma, granulosa cell carcinoma, hair-matrix carcinoma, hematoid carcinoma, hepatocellular carcinoma, hurthle cell carcinoma, hyaline carcinoma, hypernephroid carcinoma, infantile embryonal carcinoma, carcinoma in situ, intraepidermal carcinoma, intraepithelial carcinoma, krompecher's carcinoma, kulchitzky-cell carcinoma, large-cell carcinoma, lenticular carcinoma, carcinoma lenticulare, lipomatous carcinoma, lobular carcinoma, lymphoepithelial carcinoma, carcinoma medullare, medullary carcinoma, melanotic carcinoma, carcinoma molle, mucinous carcinoma, carcinoma muciparum, carcinoma mucocellulare, mucoepidermoid carcinoma, carcinoma mucosum, mucous carcinoma, carcinoma myxomatodes, nasopharyngeal carcinoma, oat cell carcinoma, carcinoma ossificans, osteoid carcinoma, papillary carcinoma, periportal carcinoma, preinvasive carcinoma, prickle cell carcinoma, 
pultaceous carcinoma, renal cell carcinoma of kidney, reserve cell carcinoma, carcinoma sarcomatodes, schneiderian carcinoma, scirrhous carcinoma, carcinoma scroti, signet-ring cell carcinoma, carcinoma simplex, small-cell carcinoma, solanoid carcinoma, spheroidal cell carcinoma, spindle cell carcinoma, carcinoma spongiosum, squamous carcinoma, squamous cell carcinoma, string carcinoma, carcinoma telangiectaticum, carcinoma telangiectodes, transitional cell carcinoma, carcinoma tuberosum, tubular carcinoma, tuberous carcinoma, verrucous carcinoma, or carcinoma villosum. as used herein, the terms “metastasis,” “metastatic,” and “metastatic cancer” can be used interchangeably and refer to the spread of a proliferative disease or disorder, e.g., cancer, from one organ to another non-adjacent organ or body part. cancer occurs at an originating site, e.g., breast, which site is referred to as a primary tumor, e.g., primary breast cancer. some cancer cells in the primary tumor or originating site acquire the ability to penetrate and infiltrate surrounding normal tissue in the local area and/or the ability to penetrate the walls of the lymphatic system or vascular system, circulating through the system to other sites and tissues in the body. a second clinically detectable tumor formed from cancer cells of a primary tumor is referred to as a metastatic or secondary tumor. when cancer cells metastasize, the metastatic tumor and its cells are presumed to be similar to those of the original tumor. thus, if lung cancer metastasizes to the breast, the secondary tumor at the site of the breast consists of abnormal lung cells and not abnormal breast cells. the secondary tumor in the breast is referred to as metastatic lung cancer. thus, the phrase metastatic cancer refers to a disease in which a subject has or had a primary tumor and has one or more secondary tumors. 
the phrases “non-metastatic cancer” or “subjects with cancer that is not metastatic” refer to diseases in which subjects have a primary tumor but not one or more secondary tumors. for example, metastatic lung cancer refers to a disease in a subject with, or with a history of, a primary lung tumor and with one or more secondary tumors at a second location or multiple locations, e.g., in the breast. the term “associated” or “associated with” in the context of a substance or substance activity or function associated with a disease (e.g., diabetes, cancer (e.g. prostate cancer, renal cancer, metastatic cancer, melanoma, castration-resistant prostate cancer, breast cancer, triple negative breast cancer, glioblastoma, ovarian cancer, lung cancer, squamous cell carcinoma (e.g., head, neck, or esophagus), colorectal cancer, leukemia, acute myeloid leukemia, lymphoma, b cell lymphoma, or multiple myeloma)) means that the disease (e.g. lung cancer, ovarian cancer, osteosarcoma, bladder cancer, cervical cancer, liver cancer, kidney cancer, skin cancer (e.g., merkel cell carcinoma), testicular cancer, leukemia, lymphoma, head and neck cancer, colorectal cancer, prostate cancer, pancreatic cancer, melanoma, breast cancer, neuroblastoma) is caused by (in whole or in part), or a symptom of the disease is caused by (in whole or in part), the substance or substance activity or function. “patient” or “subject in need thereof” refers to a living organism suffering from or prone to a disease or condition that can be treated by administration of a composition or pharmaceutical composition as provided herein. non-limiting examples include humans, other mammals, bovines, rats, mice, dogs, monkeys, goats, sheep, cows, deer, and other non-mammalian animals. in some embodiments, a patient is human.
methods
the methods provided herein are, inter alia, useful for the treatment of cancer. 
in embodiments, the methods and compositions as described herein provide effective treatment for cancers expressing ror-1. in an aspect is provided a method of treating cancer in a subject in need thereof, the method including administering to the subject a therapeutically effective amount of a bruton's tyrosine kinase (btk) antagonist and a tyrosine kinase-like orphan receptor 1 (ror-1) antagonist. in another aspect, there is provided a method of treating cancer in a subject in need thereof. the method includes administering to the subject a therapeutically effective amount of a bruton's tyrosine kinase (btk) antagonist and an anti-ror-1 antibody. the term “bruton's tyrosine kinase,” also known as tyrosine-protein kinase btk, as used herein refers to any of the recombinant or naturally-occurring forms of bruton's tyrosine kinase (btk) or variants or homologs thereof that maintain btk activity (e.g., within at least 50%, 80%, 90%, 95%, 96%, 97%, 98%, 99% or 100% activity compared to btk). in some aspects, the variants or homologs have at least 90%, 95%, 96%, 97%, 98%, 99% or 100% amino acid sequence identity across the whole sequence or a portion of the sequence (e.g., a 50, 100, 150 or 200 continuous amino acid portion) compared to a naturally occurring btk protein. in embodiments, the btk protein is substantially identical to the protein identified by the uniprot reference number q01687 or a variant or homolog having substantial identity thereto. in embodiments, the btk antagonist is a small molecule. in embodiments, the btk antagonist is ibrutinib, idelalisib, fostamatinib, acalabrutinib, ono/gs-4059, bgb-3111 or cc-292 (avl-292). in embodiments, the btk antagonist is ibrutinib. in embodiments, the btk antagonist is idelalisib. in embodiments, the btk antagonist is fostamatinib. in embodiments, the btk antagonist is acalabrutinib. in embodiments, the btk antagonist is ono/gs-4059. in embodiments, the btk antagonist is bgb-3111. 
in embodiments, the btk antagonist is cc-292 (avl-292). in embodiments, the btk antagonist is r406. the term “ror-1” as used herein refers to any of the recombinant or naturally-occurring forms of tyrosine kinase-like orphan receptor 1 (ror-1) or variants or homologs thereof that maintain ror-1 activity (e.g., within at least 50%, 80%, 90%, 95%, 96%, 97%, 98%, 99% or 100% activity compared to ror-1). in some aspects, the variants or homologs have at least 90%, 95%, 96%, 97%, 98%, 99% or 100% amino acid sequence identity across the whole sequence or a portion of the sequence (e.g., a 50, 100, 150 or 200 continuous amino acid portion) compared to a naturally occurring ror-1 protein. in embodiments, the ror-1 protein is substantially identical to the protein identified by accession no. np_005003.1 or a variant or homolog having substantial identity thereto. in embodiments, the ror-1 protein includes the amino acid sequence of seq id no:13. in embodiments, the ror-1 protein is the amino acid sequence of seq id no:13. in embodiments, the ror-1 protein includes the amino acid sequence of seq id no:14. in embodiments, the ror-1 protein includes the amino acid sequence of seq id no:15. in the instance where the ror-1 antagonist is an antibody, the antibody specifically binds to a ror-1 polypeptide. thus, in embodiments, the ror-1 antagonist is an anti-ror-1 antibody. in embodiments, the anti-ror-1 antibody is a humanized antibody. the anti-ror-1 antibody may include amino acid sequences (e.g., cdrs) allowing it to bind portions of a ror-1 polypeptide or a fragment thereof. therefore, in embodiments, the antibody includes a humanized heavy chain variable region and a humanized light chain variable region, wherein the humanized heavy chain variable region includes the sequences set forth in seq id no:1, seq id no:2, and seq id no:3; and wherein the humanized light chain variable region includes the sequences set forth in seq id no:4, seq id no:5, and seq id no:6. 
in embodiments, the antibody is cirmtuzumab. cirmtuzumab, as defined herein, is also referred to as uc-961 or 99961.1. the development and structure of cirmtuzumab are disclosed in u.s. patent application ser. no. 14/422,519, which is incorporated by reference herein in its entirety and for all purposes. in embodiments, the antibody includes a humanized heavy chain variable region and a humanized light chain variable region, wherein the humanized heavy chain variable region includes the sequences set forth in seq id no:7, seq id no:8, and seq id no:9; and wherein the humanized light chain variable region includes the sequences set forth in seq id no:10, seq id no:11, and seq id no:12. an antibody including the amino acid sequences (i.e., cdrs) set forth by seq id nos:7, 8, 9, 10, 11, 12 may be referred to herein as antibody d10. the development and use of antibody d10 are disclosed in u.s. pat. no. 9,217,040, which is incorporated by reference herein in its entirety and for all purposes. in embodiments, the antibody binds to amino acids 130-160 of ror-1 or a fragment thereof. in embodiments, the antibody binds a peptide including a glutamic acid at a position corresponding to position 138 of ror-1. in embodiments, the antibody specifically binds either the 3′ or middle ig-like region of the extracellular domain of the ror-1 protein. in embodiments, the antibody binds the 3′ end of the ig-like region of the extracellular domain of ror-1 protein from position 1-147. in embodiments, the antibody inhibits metastasis. in embodiments, the antibody is an antibody fragment. in embodiments, the antibody is human. in embodiments, the antibody is humanized. in embodiments, the antibody is a chimeric antibody. in embodiments, the antibody is a single chain antibody. in embodiments, the antibody has a binding affinity of about 500 pm to about 6 nm. in embodiments, the antibody has a binding affinity of about 550 pm to about 6 nm.
in embodiments, the antibody has a binding affinity of about 600 pm to about 6 nm. in embodiments, the antibody has a binding affinity of about 650 pm to about 6 nm. in embodiments, the antibody has a binding affinity of about 700 pm to about 6 nm. in embodiments, the antibody has a binding affinity of about 750 pm to about 6 nm. in embodiments, the antibody has a binding affinity of about 800 pm to about 6 nm. in embodiments, the antibody has a binding affinity of about 850 pm to about 6 nm. in embodiments, the antibody has a binding affinity of about 900 pm to about 6 nm. in embodiments, the antibody has a binding affinity of about 950 pm to about 6 nm. in embodiments, the antibody has a binding affinity of about 1 nm to about 6 nm. in embodiments, the antibody has a binding affinity of about 1.5 nm to about 6 nm. in embodiments, the antibody has a binding affinity of about 2 nm to about 6 nm. in embodiments, the antibody has a binding affinity of about 2.5 nm to about 6 nm. in embodiments, the antibody has a binding affinity of about 3 nm to about 6 nm. in embodiments, the antibody has a binding affinity of about 3.5 nm to about 6 nm. in embodiments, the antibody has a binding affinity of about 4 nm to about 6 nm. in embodiments, the antibody has a binding affinity of about 4.5 nm to about 6 nm. in embodiments, the antibody has a binding affinity of about 5 nm to about 6 nm. in embodiments, the antibody has a binding affinity of about 5.5 nm to about 6 nm. in embodiments, the antibody has a binding affinity of 500 pm to 6 nm. in embodiments, the antibody has a binding affinity of 550 pm to 6 nm. in embodiments, the antibody has a binding affinity of 600 pm to 6 nm. in embodiments, the antibody has a binding affinity of 650 pm to 6 nm. in embodiments, the antibody has a binding affinity of 700 pm to 6 nm. in embodiments, the antibody has a binding affinity of 750 pm to 6 nm.
in embodiments, the antibody has a binding affinity of 800 pm to 6 nm. in embodiments, the antibody has a binding affinity of 850 pm to 6 nm. in embodiments, the antibody has a binding affinity of 900 pm to 6 nm. in embodiments, the antibody has a binding affinity of 950 pm to 6 nm. in embodiments, the antibody has a binding affinity of 1 nm to 6 nm. in embodiments, the antibody has a binding affinity of 1.5 nm to 6 nm. in embodiments, the antibody has a binding affinity of 2 nm to 6 nm. in embodiments, the antibody has a binding affinity of 2.5 nm to 6 nm. in embodiments, the antibody has a binding affinity of 3 nm to 6 nm. in embodiments, the antibody has a binding affinity of 3.5 nm to 6 nm. in embodiments, the antibody has a binding affinity of 4 nm to 6 nm. in embodiments, the antibody has a binding affinity of 4.5 nm to 6 nm. in embodiments, the antibody has a binding affinity of 5 nm to 6 nm. in embodiments, the antibody has a binding affinity of 5.5 nm to 6 nm. in embodiments, the antibody has a binding affinity of about 500 pm to about 5.5 nm. in embodiments, the antibody has a binding affinity of about 500 pm to about 5 nm. in embodiments, the antibody has a binding affinity of about 500 pm to about 4.5 nm. in embodiments, the antibody has a binding affinity of about 500 pm to about 4 nm. in embodiments, the antibody has a binding affinity of about 500 pm to about 3.5 nm. in embodiments, the antibody has a binding affinity of about 500 pm to about 3 nm. in embodiments, the antibody has a binding affinity of about 500 pm to about 2.5 nm. in embodiments, the antibody has a binding affinity of about 500 pm to about 2 nm. in embodiments, the antibody has a binding affinity of about 500 pm to about 1.5 nm.
in embodiments, the antibody has a binding affinity of about 500 pm to about 1 nm. in embodiments, the antibody has a binding affinity of about 500 pm to about 950 pm. in embodiments, the antibody has a binding affinity of about 500 pm to about 900 pm. in embodiments, the antibody has a binding affinity of about 500 pm to about 850 pm. in embodiments, the antibody has a binding affinity of about 500 pm to about 800 pm. in embodiments, the antibody has a binding affinity of about 500 pm to about 750 pm. in embodiments, the antibody has a binding affinity of about 500 pm to about 700 pm. in embodiments, the antibody has a binding affinity of about 500 pm to about 650 pm. in embodiments, the antibody has a binding affinity of about 500 pm to about 600 pm. in embodiments, the antibody has a binding affinity of about 500 pm to about 550 pm. in embodiments, the antibody has a binding affinity of 500 pm to 5.5 nm. in embodiments, the antibody has a binding affinity of 500 pm to 5 nm. in embodiments, the antibody has a binding affinity of 500 pm to 4.5 nm. in embodiments, the antibody has a binding affinity of 500 pm to 4 nm. in embodiments, the antibody has a binding affinity of 500 pm to 3.5 nm. in embodiments, the antibody has a binding affinity of 500 pm to 3 nm. in embodiments, the antibody has a binding affinity of 500 pm to 2.5 nm. in embodiments, the antibody has a binding affinity of 500 pm to 2 nm. in embodiments, the antibody has a binding affinity of 500 pm to 1.5 nm. in embodiments, the antibody has a binding affinity of 500 pm to 1 nm. in embodiments, the antibody has a binding affinity of 500 pm to 950 pm. in embodiments, the antibody has a binding affinity of 500 pm to 900 pm. in embodiments, the antibody has a binding affinity of 500 pm to 850 pm.
in embodiments, the antibody has a binding affinity of 500 pm to 800 pm. in embodiments, the antibody has a binding affinity of 500 pm to 750 pm. in embodiments, the antibody has a binding affinity of 500 pm to 700 pm. in embodiments, the antibody has a binding affinity of 500 pm to 650 pm. in embodiments, the antibody has a binding affinity of 500 pm to 600 pm. in embodiments, the antibody has a binding affinity of 500 pm to 550 pm. in embodiments, the antibody has a binding affinity of about 500 pm. in embodiments, the antibody has a binding affinity of 500 pm. in embodiments, the antibody has a binding affinity of about 550 pm. in embodiments, the antibody has a binding affinity of 550 pm. in embodiments, the antibody has a binding affinity of about 600 pm. in embodiments, the antibody has a binding affinity of 600 pm. in embodiments, the antibody has a binding affinity of about 650 pm. in embodiments, the antibody has a binding affinity of 650 pm. in embodiments, the antibody has a binding affinity of about 700 pm. in embodiments, the antibody has a binding affinity of 700 pm. in embodiments, the antibody has a binding affinity of about 750 pm. in embodiments, the antibody has a binding affinity of 750 pm. in embodiments, the antibody has a binding affinity of about 800 pm. in embodiments, the antibody has a binding affinity of 800 pm. in embodiments, the antibody has a binding affinity of about 850 pm. in embodiments, the antibody has a binding affinity of 850 pm. in embodiments, the antibody has a binding affinity of about 900 pm. in embodiments, the antibody has a binding affinity of 900 pm. in embodiments, the antibody has a binding affinity of about 950 pm. in embodiments, the antibody has a binding affinity of 950 pm. in embodiments, the antibody has a binding affinity of about 1 nm. in embodiments, the antibody has a binding affinity of 1 nm.
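because the affinity and dissociation-constant values above mix picomolar and nanomolar units, a small sketch can normalize them and test membership in a given window (e.g., the 500 pm to 6 nm range recited above). the helper names are hypothetical and illustrative only:

```python
def kd_in_nm(value: float, unit: str) -> float:
    """Convert a dissociation constant to nanomolar (1 nM = 1000 pM)."""
    if unit == "pM":
        return value / 1000.0
    if unit == "nM":
        return value
    if unit == "uM":
        return value * 1000.0
    raise ValueError(f"unknown unit: {unit}")

def within_affinity_window(kd_nm: float,
                           low_nm: float = 0.5,
                           high_nm: float = 6.0) -> bool:
    """True if a measured value falls inside the window, here defaulting
    to the 500 pM - 6 nM range recited above."""
    return low_nm <= kd_nm <= high_nm
```

for example, 850 pm converts to 0.85 nm and falls inside the 500 pm to 6 nm window, whereas 40 nm falls outside it.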
in embodiments, the antibody has a binding affinity of about 1.5 nm. in embodiments, the antibody has a binding affinity of 1.5 nm. in embodiments, the antibody has a binding affinity of about 2 nm. in embodiments, the antibody has a binding affinity of 2 nm. in embodiments, the antibody has a binding affinity of about 2.5 nm. in embodiments, the antibody has a binding affinity of 2.5 nm. in embodiments, the antibody has a binding affinity of about 3 nm. in embodiments, the antibody has a binding affinity of 3 nm. in embodiments, the antibody has a binding affinity of about 3.5 nm. in embodiments, the antibody has a binding affinity of 3.5 nm. in embodiments, the antibody has a binding affinity of about 4 nm. in embodiments, the antibody has a binding affinity of 4 nm. in embodiments, the antibody has a binding affinity of about 4.5 nm. in embodiments, the antibody has a binding affinity of 4.5 nm. in embodiments, the antibody has a binding affinity of about 5 nm. in embodiments, the antibody has a binding affinity of 5 nm. in embodiments, the antibody has a binding affinity of about 5.5 nm. in embodiments, the antibody has a binding affinity of 5.5 nm. in embodiments, the antibody has a binding affinity of about 6 nm. in embodiments, the antibody has a binding affinity of 6 nm. in embodiments, the antibody binds to an ror-1 protein with a kd of less than about 40 nm (e.g., 35, 30, 25, 20, 15, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0.5, 0.25, 0.1 nm). in embodiments, the antibody binds to an ror-1 protein with a kd of less than 40 nm (e.g., 35, 30, 25, 20, 15, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0.5, 0.25, 0.1 nm). in embodiments, the antibody binds to an ror-1 protein with a kd of less than about 35 nm. in embodiments, the antibody binds to an ror-1 protein with a kd of less than 35 nm. in embodiments, the antibody binds to an ror-1 protein with a kd of less than about 30 nm. in embodiments, the antibody binds to an ror-1 protein with a kd of less than 30 nm.
in embodiments, the antibody binds to an ror-1 protein with a kd of less than about 25 nm. in embodiments, the antibody binds to an ror-1 protein with a kd of less than 25 nm. in embodiments, the antibody binds to an ror-1 protein with a kd of less than about 20 nm. in embodiments, the antibody binds to an ror-1 protein with a kd of less than 20 nm. in embodiments, the antibody binds to an ror-1 protein with a kd of less than about 15 nm. in embodiments, the antibody binds to an ror-1 protein with a kd of less than 15 nm. in embodiments, the antibody binds to an ror-1 protein with a kd of less than about 10 nm. in embodiments, the antibody binds to an ror-1 protein with a kd of less than 10 nm. in embodiments, the antibody binds to an ror-1 protein with a kd of less than about 9 nm. in embodiments, the antibody binds to an ror-1 protein with a kd of less than 9 nm. in embodiments, the antibody binds to an ror-1 protein with a kd of less than about 8 nm. in embodiments, the antibody binds to an ror-1 protein with a kd of less than 8 nm. in embodiments, the antibody binds to an ror-1 protein with a kd of less than about 7 nm. in embodiments, the antibody binds to an ror-1 protein with a kd of less than 7 nm. in embodiments, the antibody binds to an ror-1 protein with a kd of less than about 6 nm. in embodiments, the antibody binds to an ror-1 protein with a kd of less than 6 nm. in embodiments, the antibody binds to an ror-1 protein with a kd of less than about 5 nm. in embodiments, the antibody binds to an ror-1 protein with a kd of less than 5 nm. in embodiments, the antibody binds to an ror-1 protein with a kd of less than about 4 nm. in embodiments, the antibody binds to an ror-1 protein with a kd of less than 4 nm. in embodiments, the antibody binds to an ror-1 protein with a kd of less than about 3 nm. in embodiments, the antibody binds to an ror-1 protein with a kd of less than 3 nm.
in embodiments, the antibody binds to an ror-1 protein with a kd of less than about 2 nm. in embodiments, the antibody binds to an ror-1 protein with a kd of less than 2 nm. in embodiments, the antibody binds to an ror-1 protein with a kd of less than about 1 nm. in embodiments, the antibody binds to an ror-1 protein with a kd of less than 1 nm. in embodiments, the antibody binds to an ror-1 protein with a kd of less than about 0.5 nm. in embodiments, the antibody binds to an ror-1 protein with a kd of less than 0.5 nm. in embodiments, the antibody binds to an ror-1 protein with a kd of less than about 0.25 nm. in embodiments, the antibody binds to an ror-1 protein with a kd of less than 0.25 nm. in embodiments, the antibody binds to an ror-1 protein with a kd of less than about 0.1 nm. in embodiments, the antibody binds to an ror-1 protein with a kd of less than 0.1 nm. in embodiments, the antibody is cirmtuzumab, also referred to herein as 99961.1 or uc-961. in embodiments, the antibody is d10. in embodiments, the btk antagonist and the ror-1 antagonist are administered in a combined synergistic amount. in embodiments, the btk antagonist and anti-ror-1 antibody are administered in a combined synergistic amount. a “combined synergistic amount” as used herein refers to the sum of a first amount (e.g., an amount of a btk antagonist) and a second amount (e.g., an amount of a ror-1 antagonist) that results in a synergistic effect (i.e. an effect greater than an additive effect). therefore, the terms “synergy”, “synergism”, “synergistic”, “combined synergistic amount”, and “synergistic therapeutic effect”, which are used herein interchangeably, refer to a measured effect of compounds administered in combination where the measured effect is greater than the sum of the individual effects of each of the compounds administered alone as a single agent.
in embodiments, a synergistic amount may be about 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0, 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9, 2.0, 2.1, 2.2, 2.3, 2.4, 2.5, 2.6, 2.7, 2.8, 2.9, 3.0, 3.1, 3.2, 3.3, 3.4, 3.5, 3.6, 3.7, 3.8, 3.9, 4.0, 4.1, 4.2, 4.3, 4.4, 4.5, 4.6, 4.7, 4.8, 4.9, 5.0, 5.1, 5.2, 5.3, 5.4, 5.5, 5.6, 5.7, 5.8, 5.9, 6.0, 6.1, 6.2, 6.3, 6.4, 6.5, 6.6, 6.7, 6.8, 6.9, 7.0, 7.1, 7.2, 7.3, 7.4, 7.5, 7.6, 7.7, 7.8, 7.9, 8.0, 8.1, 8.2, 8.3, 8.4, 8.5, 8.6, 8.7, 8.8, 8.9, 9.0, 9.1, 9.2, 9.3, 9.4, 9.5, 9.6, 9.7, 9.8, 9.9, 10.0, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, or 99% of the amount of the btk antagonist when used separately from the ror-1 antagonist. in embodiments, a synergistic amount may be about 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0, 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9, 2.0, 2.1, 2.2, 2.3, 2.4, 2.5, 2.6, 2.7, 2.8, 2.9, 3.0, 3.1, 3.2, 3.3, 3.4, 3.5, 3.6, 3.7, 3.8, 3.9, 4.0, 4.1, 4.2, 4.3, 4.4, 4.5, 4.6, 4.7, 4.8, 4.9, 5.0, 5.1, 5.2, 5.3, 5.4, 5.5, 5.6, 5.7, 5.8, 5.9, 6.0, 6.1, 6.2, 6.3, 6.4, 6.5, 6.6, 6.7, 6.8, 6.9, 7.0, 7.1, 7.2, 7.3, 7.4, 7.5, 7.6, 7.7, 7.8, 7.9, 8.0, 8.1, 8.2, 8.3, 8.4, 8.5, 8.6, 8.7, 8.8, 8.9, 9.0, 9.1, 9.2, 9.3, 9.4, 9.5, 9.6, 9.7, 9.8, 9.9, 10.0, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, or 99% of the amount of the ror-1 antagonist when used separately from the btk antagonist. 
the synergistic effect may be a btk activity decreasing effect and/or a ror-1 activity decreasing effect. in embodiments, synergy between the btk antagonist and the ror-1 antagonist may result in about 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0, 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9, 2.0, 2.1, 2.2, 2.3, 2.4, 2.5, 2.6, 2.7, 2.8, 2.9, 3.0, 3.1, 3.2, 3.3, 3.4, 3.5, 3.6, 3.7, 3.8, 3.9, 4.0, 4.1, 4.2, 4.3, 4.4, 4.5, 4.6, 4.7, 4.8, 4.9, 5.0, 5.1, 5.2, 5.3, 5.4, 5.5, 5.6, 5.7, 5.8, 5.9, 6.0, 6.1, 6.2, 6.3, 6.4, 6.5, 6.6, 6.7, 6.8, 6.9, 7.0, 7.1, 7.2, 7.3, 7.4, 7.5, 7.6, 7.7, 7.8, 7.9, 8.0, 8.1, 8.2, 8.3, 8.4, 8.5, 8.6, 8.7, 8.8, 8.9, 9.0, 9.1, 9.2, 9.3, 9.4, 9.5, 9.6, 9.7, 9.8, 9.9, 10.0, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, or 100% greater decrease (e.g., decrease of btk activity or decrease of ror-1 activity) than the sum of the decrease of the btk antagonist or the ror-1 antagonist when used individually and separately. 
in embodiments, synergy between the btk antagonist and the ror-1 antagonist may result in 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0, 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9, 2.0, 2.1, 2.2, 2.3, 2.4, 2.5, 2.6, 2.7, 2.8, 2.9, 3.0, 3.1, 3.2, 3.3, 3.4, 3.5, 3.6, 3.7, 3.8, 3.9, 4.0, 4.1, 4.2, 4.3, 4.4, 4.5, 4.6, 4.7, 4.8, 4.9, 5.0, 5.1, 5.2, 5.3, 5.4, 5.5, 5.6, 5.7, 5.8, 5.9, 6.0, 6.1, 6.2, 6.3, 6.4, 6.5, 6.6, 6.7, 6.8, 6.9, 7.0, 7.1, 7.2, 7.3, 7.4, 7.5, 7.6, 7.7, 7.8, 7.9, 8.0, 8.1, 8.2, 8.3, 8.4, 8.5, 8.6, 8.7, 8.8, 8.9, 9.0, 9.1, 9.2, 9.3, 9.4, 9.5, 9.6, 9.7, 9.8, 9.9, 10.0, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, or 100% greater inhibition of the btk protein and/or the ror-1 protein than the sum of the inhibition of the btk antagonist or the ror-1 antagonist when used individually and separately. the synergistic effect may be a cancer-treating effect such as a lymphoma (i.e. a lymphoma-treating synergistic effect), leukemia (i.e. a leukemia-treating synergistic effect), myeloma (i.e. a myeloma-treating synergistic effect), aml (i.e. an aml-treating synergistic effect), b-all (i.e. a b-all-treating synergistic effect), t-all (i.e. a t-all-treating synergistic effect), renal cell carcinoma (i.e. a renal cell carcinoma-treating synergistic effect), colon cancer (i.e. a colon cancer-treating synergistic effect), colorectal cancer (i.e. a colorectal cancer-treating synergistic effect), breast cancer (i.e. a breast cancer-treating synergistic effect), epithelial squamous cell cancer (i.e., an epithelial squamous cell cancer-treating synergistic effect), melanoma (i.e., a melanoma-treating synergistic effect), stomach cancer (i.e. a stomach cancer-treating synergistic effect), brain cancer (i.e. a brain cancer-treating synergistic effect), lung cancer (i.e. a lung cancer-treating synergistic effect), pancreatic cancer (i.e. a pancreatic cancer-treating synergistic effect), cervical cancer (i.e. a cervical cancer-treating synergistic effect), ovarian cancer (i.e. an ovarian cancer-treating synergistic effect), liver cancer (i.e. a liver cancer-treating synergistic effect), bladder cancer (i.e. a bladder cancer-treating synergistic effect), prostate cancer (i.e. a prostate cancer-treating synergistic effect), testicular cancer (i.e. a testicular cancer-treating synergistic effect), thyroid cancer (i.e. a thyroid cancer-treating synergistic effect), head and neck cancer (i.e. a head and neck cancer-treating synergistic effect), uterine cancer (i.e. a uterine cancer-treating synergistic effect), adenocarcinoma (i.e. an adenocarcinoma-treating synergistic effect), adrenal cancer (i.e. an adrenal cancer-treating synergistic effect), chronic lymphocytic leukemia (i.e. a chronic lymphocytic leukemia-treating synergistic effect), small lymphocytic lymphoma (i.e. a small lymphocytic lymphoma-treating synergistic effect), marginal cell b-cell lymphoma (i.e. a marginal cell b-cell lymphoma-treating synergistic effect), burkitt's lymphoma (i.e. a burkitt's lymphoma-treating synergistic effect), and b cell leukemia (i.e. a b cell leukemia-treating synergistic effect). the btk antagonist and the ror-1 antagonist may be administered in combination either simultaneously (e.g., as a mixture), separately but simultaneously (e.g., via separate intravenous lines) or sequentially (e.g., one agent is administered first followed by administration of the second agent). thus, the term combination is used to refer to concomitant, simultaneous or sequential administration of the btk antagonist and the ror-1 antagonist.
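the working definition of synergy above — a combined measured effect strictly greater than the sum of the single-agent effects — reduces to a simple comparison. the sketch below uses hypothetical function names and toy effect values purely to illustrate that definition:

```python
def is_synergistic(effect_combined: float,
                   effect_a: float,
                   effect_b: float) -> bool:
    """Per the definition above: the measured combined effect must exceed
    the sum of the individual single-agent effects (the additive sum)."""
    return effect_combined > effect_a + effect_b

def excess_over_additive_pct(effect_combined: float,
                             effect_a: float,
                             effect_b: float) -> float:
    """How much greater (in percent) the combined effect is than the
    additive sum, matching the 0.1-100% 'greater decrease' language above."""
    additive = effect_a + effect_b
    return 100.0 * (effect_combined - additive) / additive
```

for example, if each agent alone produces a 20% decrease in activity and the combination produces a 50% decrease, the combination is synergistic, exceeding the additive sum by 25%.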
in embodiments, the btk antagonist and the ror-1 antagonist are administered simultaneously or sequentially. in embodiments, the btk antagonist and the ror-1 antagonist are administered simultaneously. in embodiments, the btk antagonist and the ror-1 antagonist are administered sequentially. during the course of treatment the btk antagonist and ror-1 antagonist may at times be administered sequentially and at other times be administered simultaneously. in embodiments, where the btk antagonist and the ror-1 antagonist are administered sequentially, the ror-1 antagonist is administered at a first time point and the btk antagonist is administered at a second time point, wherein the first time point precedes the second time point. alternatively, in embodiments, where the btk antagonist and the ror-1 antagonist are administered sequentially, the btk antagonist is administered at a first time point and the ror-1 antagonist is administered at a second time point, wherein the first time point precedes the second time point. in embodiments, the btk antagonist and the anti-ror-1 antibody are administered simultaneously or sequentially. in embodiments, the btk antagonist and the anti-ror-1 antibody are administered simultaneously. in embodiments, the btk antagonist and the anti-ror-1 antibody are administered sequentially. during the course of treatment the btk antagonist and anti-ror-1 antibody may at times be administered sequentially and at other times be administered simultaneously. in embodiments, where the btk antagonist and the anti-ror-1 antibody are administered sequentially, the anti-ror-1 antibody is administered at a first time point and the btk antagonist is administered at a second time point, wherein the first time point precedes the second time point. 
alternatively, in embodiments, where the btk antagonist and the anti-ror-1 antibody are administered sequentially, the btk antagonist is administered at a first time point and the anti-ror-1 antibody is administered at a second time point, wherein the first time point precedes the second time point. the course of treatment is best determined on an individual basis depending on the particular characteristics of the subject and the type of treatment selected. treatments, such as those disclosed herein, can be administered to the subject on a daily, twice daily, bi-weekly, monthly or any applicable basis that is therapeutically effective. the treatment can be administered alone or in combination with any other treatment disclosed herein or known in the art. the additional treatment can be administered simultaneously with the first treatment, at a different time, or on an entirely different therapeutic schedule (e.g., the first treatment can be daily, while the additional treatment is weekly). in instances where the btk antagonist and ror-1 antagonist are administered simultaneously, the btk antagonist and ror-1 antagonist may be administered as a mixture. thus, in embodiments, the btk antagonist and the ror-1 antagonist are admixed prior to administration. in embodiments, the btk antagonist is administered at an amount of about 1 mg/kg, 2 mg/kg, 5 mg/kg, 10 mg/kg or 15 mg/kg. in embodiments, the btk antagonist is administered at an amount of about 1 mg/kg. in embodiments, the btk antagonist is administered at an amount of 1 mg/kg. in embodiments, the btk antagonist is administered at an amount of about 2 mg/kg. in embodiments, the btk antagonist is administered at an amount of 2 mg/kg. in embodiments, the btk antagonist is administered at an amount of about 5 mg/kg. in embodiments, the btk antagonist is administered at an amount of 5 mg/kg. in embodiments, the btk antagonist is administered at an amount of about 10 mg/kg.
in embodiments, the btk antagonist is administered at an amount of 10 mg/kg. in embodiments, the btk antagonist is administered at an amount of about 15 mg/kg. in embodiments, the btk antagonist is administered at an amount of 15 mg/kg. in embodiments, the btk antagonist is administered at an amount of about 420 mg. in embodiments, the btk antagonist is administered at an amount of 420 mg. in embodiments, the ror-1 antagonist is administered at an amount of about 1 mg/kg, 2 mg/kg, 3 mg/kg, 5 mg/kg or 10 mg/kg. in embodiments, the ror-1 antagonist is administered at an amount of about 1 mg/kg. in embodiments, the ror-1 antagonist is administered at an amount of 1 mg/kg. in embodiments, the ror-1 antagonist is administered at an amount of about 2 mg/kg. in embodiments, the ror-1 antagonist is administered at an amount of 2 mg/kg. in embodiments, the ror-1 antagonist is administered at an amount of about 3 mg/kg. in embodiments, the ror-1 antagonist is administered at an amount of 3 mg/kg. in embodiments, the ror-1 antagonist is administered at an amount of about 5 mg/kg. in embodiments, the ror-1 antagonist is administered at an amount of 5 mg/kg. in embodiments, the ror-1 antagonist is administered at an amount of about 10 mg/kg. in embodiments, the ror-1 antagonist is administered at an amount of 10 mg/kg. in embodiments, the btk antagonist is administered at an amount of about 5 mg/kg and the ror-1 antagonist is administered at about 2 mg/kg. in embodiments, the btk antagonist is administered at an amount of 5 mg/kg and the ror-1 antagonist is administered at 2 mg/kg. in embodiments, the btk antagonist is administered at an amount of about 5 mg/kg and the ror-1 antagonist is administered at about 1 mg/kg. in embodiments, the btk antagonist is administered at an amount of 5 mg/kg and the ror-1 antagonist is administered at 1 mg/kg. 
in embodiments, the btk antagonist is administered daily over the course of at least 14 days (e.g., 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 40, 45, or 50 days). in embodiments, the btk antagonist is administered daily over the course of at least 15 days. in embodiments, the btk antagonist is administered daily over the course of at least 16 days. in embodiments, the btk antagonist is administered daily over the course of at least 17 days. in embodiments, the btk antagonist is administered daily over the course of at least 18 days. in embodiments, the btk antagonist is administered daily over the course of at least 19 days. in embodiments, the btk antagonist is administered daily over the course of at least 20 days. in embodiments, the btk antagonist is administered daily over the course of at least 21 days. in embodiments, the btk antagonist is administered daily over the course of at least 22 days. in embodiments, the btk antagonist is administered daily over the course of at least 23 days. in embodiments, the btk antagonist is administered daily over the course of at least 24 days. in embodiments, the btk antagonist is administered daily over the course of at least 25 days. in embodiments, the btk antagonist is administered daily over the course of at least 26 days. in embodiments, the btk antagonist is administered daily over the course of at least 27 days. in embodiments, the btk antagonist is administered daily over the course of at least 28 days. in embodiments, the btk antagonist is administered daily over the course of at least 29 days. in embodiments, the btk antagonist is administered daily over the course of at least 30 days. in embodiments, the btk antagonist is administered daily over the course of at least 31 days. in embodiments, the btk antagonist is administered daily over the course of at least 32 days. in embodiments, the btk antagonist is administered daily over the course of at least 33 days. 
in embodiments, the btk antagonist is administered daily over the course of at least 34 days. in embodiments, the btk antagonist is administered daily over the course of at least 35 days. in embodiments, the btk antagonist is administered daily over the course of at least 40 days. in embodiments, the btk antagonist is administered daily over the course of at least 45 days. in embodiments, the btk antagonist is administered daily over the course of at least 50 days. in embodiments, the btk antagonist is administered daily over the course of about 28 days. in embodiments, the btk antagonist is administered daily over the course of 28 days. in embodiments, the ror-1 antagonist is administered once over the course of about 28 days. in embodiments, the ror-1 antagonist is administered once over the course of 28 days. in embodiments, the btk antagonist is administered intravenously. in embodiments, the ror-1 antagonist is administered intravenously. in embodiments, the subject is a mammal. in embodiments, the subject is a human. as mentioned above, the methods and compositions provided herein including embodiments thereof are useful for the treatment of cancer, and specifically cancers expressing ror-1. in embodiments, the cancer is lymphoma, leukemia, myeloma, aml, b-all, t-all, renal cell carcinoma, colon cancer, colorectal cancer, breast cancer, epithelial squamous cell cancer, melanoma, stomach cancer, brain cancer, lung cancer, pancreatic cancer, cervical cancer, ovarian cancer, liver cancer, bladder cancer, prostate cancer, testicular cancer, thyroid cancer, head and neck cancer, uterine cancer, adenocarcinoma, or adrenal cancer. in embodiments, the cancer is chronic lymphocytic leukemia (cll), small lymphocytic lymphoma, marginal cell b-cell lymphoma, burkitt's lymphoma, or b cell leukemia. the administered combination of btk antagonist and ror-1 antagonist as provided herein, including embodiments thereof, may be varied. 
for example, a specific btk antagonist (e.g., ibrutinib) may be administered in combination with a specific ror-1 antagonist (e.g., cirmtuzumab). thus, in embodiments, the btk antagonist ibrutinib is administered in combination with the ror-1 antagonist cirmtuzumab. in embodiments, the btk antagonist idelalisib is administered in combination with the ror-1 antagonist cirmtuzumab. in embodiments, the btk antagonist fostamatinib is administered in combination with the ror-1 antagonist cirmtuzumab. in embodiments, the btk antagonist acalabrutinib is administered in combination with the ror-1 antagonist cirmtuzumab. in embodiments, the btk antagonist ono/gs-4059 is administered in combination with the ror-1 antagonist cirmtuzumab. in embodiments, the btk antagonist bgb-3111 is administered in combination with the ror-1 antagonist cirmtuzumab. in embodiments, the btk antagonist cc-292 (avl-292) is administered in combination with the ror-1 antagonist cirmtuzumab. in embodiments, the btk antagonist r406 is administered in combination with the ror-1 antagonist cirmtuzumab. in embodiments, the btk antagonist ibrutinib is administered in combination with the ror-1 antagonist cirmtuzumab, and ibrutinib is administered intravenously at an amount of 1 mg/kg and cirmtuzumab is administered intravenously at an amount of 1 mg/kg. in one further embodiment, ibrutinib is administered daily over the course of 28 days and cirmtuzumab is administered once over the course of 28 days. in embodiments, the btk antagonist ibrutinib is administered in combination with the ror-1 antagonist cirmtuzumab, and ibrutinib is administered intravenously at an amount of 1 mg/kg and cirmtuzumab is administered intravenously at an amount of 2 mg/kg. in one further embodiment, ibrutinib is administered daily over the course of 28 days and cirmtuzumab is administered once over the course of 28 days. 
in embodiments, the btk antagonist ibrutinib is administered in combination with the ror-1 antagonist cirmtuzumab, and ibrutinib is administered intravenously at an amount of 1 mg/kg and cirmtuzumab is administered intravenously at an amount of 3 mg/kg. in one further embodiment, ibrutinib is administered daily over the course of 28 days and cirmtuzumab is administered once over the course of 28 days. in embodiments, the btk antagonist ibrutinib is administered in combination with the ror-1 antagonist cirmtuzumab, and ibrutinib is administered intravenously at an amount of 1 mg/kg and cirmtuzumab is administered intravenously at an amount of 5 mg/kg. in one further embodiment, ibrutinib is administered daily over the course of 28 days and cirmtuzumab is administered once over the course of 28 days. in embodiments, the btk antagonist ibrutinib is administered in combination with the ror-1 antagonist cirmtuzumab, and ibrutinib is administered intravenously at an amount of 1 mg/kg and cirmtuzumab is administered intravenously at an amount of 10 mg/kg. in one further embodiment, ibrutinib is administered daily over the course of 28 days and cirmtuzumab is administered once over the course of 28 days. in embodiments, the btk antagonist ibrutinib is administered in combination with the ror-1 antagonist cirmtuzumab, and ibrutinib is administered intravenously at an amount of 2 mg/kg and cirmtuzumab is administered intravenously at an amount of 1 mg/kg. in one further embodiment, ibrutinib is administered daily over the course of 28 days and cirmtuzumab is administered once over the course of 28 days. in embodiments, the btk antagonist ibrutinib is administered in combination with the ror-1 antagonist cirmtuzumab, and ibrutinib is administered intravenously at an amount of 2 mg/kg and cirmtuzumab is administered intravenously at an amount of 2 mg/kg. 
in one further embodiment, ibrutinib is administered daily over the course of 28 days and cirmtuzumab is administered once over the course of 28 days. in embodiments, the btk antagonist ibrutinib is administered in combination with the ror-1 antagonist cirmtuzumab, and ibrutinib is administered intravenously at an amount of 2 mg/kg and cirmtuzumab is administered intravenously at an amount of 3 mg/kg. in one further embodiment, ibrutinib is administered daily over the course of 28 days and cirmtuzumab is administered once over the course of 28 days. in embodiments, the btk antagonist ibrutinib is administered in combination with the ror-1 antagonist cirmtuzumab, and ibrutinib is administered intravenously at an amount of 2 mg/kg and cirmtuzumab is administered intravenously at an amount of 5 mg/kg. in one further embodiment, ibrutinib is administered daily over the course of 28 days and cirmtuzumab is administered once over the course of 28 days. in embodiments, the btk antagonist ibrutinib is administered in combination with the ror-1 antagonist cirmtuzumab, and ibrutinib is administered intravenously at an amount of 2 mg/kg and cirmtuzumab is administered intravenously at an amount of 10 mg/kg. in one further embodiment, ibrutinib is administered daily over the course of 28 days and cirmtuzumab is administered once over the course of 28 days. in embodiments, the btk antagonist ibrutinib is administered in combination with the ror-1 antagonist cirmtuzumab, and ibrutinib is administered intravenously at an amount of 5 mg/kg and cirmtuzumab is administered intravenously at an amount of 1 mg/kg. in one further embodiment, ibrutinib is administered daily over the course of 28 days and cirmtuzumab is administered once over the course of 28 days. 
in embodiments, the btk antagonist ibrutinib is administered in combination with the ror-1 antagonist cirmtuzumab, and ibrutinib is administered intravenously at an amount of 5 mg/kg and cirmtuzumab is administered intravenously at an amount of 2 mg/kg. in one further embodiment, ibrutinib is administered daily over the course of 28 days and cirmtuzumab is administered once over the course of 28 days. in embodiments, the btk antagonist ibrutinib is administered in combination with the ror-1 antagonist cirmtuzumab, and ibrutinib is administered intravenously at an amount of 5 mg/kg and cirmtuzumab is administered intravenously at an amount of 3 mg/kg. in one further embodiment, ibrutinib is administered daily over the course of 28 days and cirmtuzumab is administered once over the course of 28 days. in embodiments, the btk antagonist ibrutinib is administered in combination with the ror-1 antagonist cirmtuzumab, and ibrutinib is administered intravenously at an amount of 5 mg/kg and cirmtuzumab is administered intravenously at an amount of 5 mg/kg. in one further embodiment, ibrutinib is administered daily over the course of 28 days and cirmtuzumab is administered once over the course of 28 days. in embodiments, the btk antagonist ibrutinib is administered in combination with the ror-1 antagonist cirmtuzumab, and ibrutinib is administered intravenously at an amount of 5 mg/kg and cirmtuzumab is administered intravenously at an amount of 10 mg/kg. in one further embodiment, ibrutinib is administered daily over the course of 28 days and cirmtuzumab is administered once over the course of 28 days. in embodiments, the btk antagonist ibrutinib is administered in combination with the ror-1 antagonist cirmtuzumab, and ibrutinib is administered intravenously at an amount of 10 mg/kg and cirmtuzumab is administered intravenously at an amount of 1 mg/kg. 
in one further embodiment, ibrutinib is administered daily over the course of 28 days and cirmtuzumab is administered once over the course of 28 days. in embodiments, the btk antagonist ibrutinib is administered in combination with the ror-1 antagonist cirmtuzumab, and ibrutinib is administered intravenously at an amount of 10 mg/kg and cirmtuzumab is administered intravenously at an amount of 2 mg/kg. in one further embodiment, ibrutinib is administered daily over the course of 28 days and cirmtuzumab is administered once over the course of 28 days. in embodiments, the btk antagonist ibrutinib is administered in combination with the ror-1 antagonist cirmtuzumab, and ibrutinib is administered intravenously at an amount of 10 mg/kg and cirmtuzumab is administered intravenously at an amount of 3 mg/kg. in one further embodiment, ibrutinib is administered daily over the course of 28 days and cirmtuzumab is administered once over the course of 28 days. in embodiments, the btk antagonist ibrutinib is administered in combination with the ror-1 antagonist cirmtuzumab, and ibrutinib is administered intravenously at an amount of 10 mg/kg and cirmtuzumab is administered intravenously at an amount of 5 mg/kg. in one further embodiment, ibrutinib is administered daily over the course of 28 days and cirmtuzumab is administered once over the course of 28 days. in embodiments, the btk antagonist ibrutinib is administered in combination with the ror-1 antagonist cirmtuzumab, and ibrutinib is administered intravenously at an amount of 10 mg/kg and cirmtuzumab is administered intravenously at an amount of 10 mg/kg. in one further embodiment, ibrutinib is administered daily over the course of 28 days and cirmtuzumab is administered once over the course of 28 days. 
in embodiments, the btk antagonist ibrutinib is administered in combination with the ror-1 antagonist cirmtuzumab, and ibrutinib is administered intravenously at an amount of 15 mg/kg and cirmtuzumab is administered intravenously at an amount of 1 mg/kg. in one further embodiment, ibrutinib is administered daily over the course of 28 days and cirmtuzumab is administered once over the course of 28 days. in embodiments, the btk antagonist ibrutinib is administered in combination with the ror-1 antagonist cirmtuzumab, and ibrutinib is administered intravenously at an amount of 15 mg/kg and cirmtuzumab is administered intravenously at an amount of 2 mg/kg. in one further embodiment, ibrutinib is administered daily over the course of 28 days and cirmtuzumab is administered once over the course of 28 days. in embodiments, the btk antagonist ibrutinib is administered in combination with the ror-1 antagonist cirmtuzumab, and ibrutinib is administered intravenously at an amount of 15 mg/kg and cirmtuzumab is administered intravenously at an amount of 3 mg/kg. in one further embodiment, ibrutinib is administered daily over the course of 28 days and cirmtuzumab is administered once over the course of 28 days. in embodiments, the btk antagonist ibrutinib is administered in combination with the ror-1 antagonist cirmtuzumab, and ibrutinib is administered intravenously at an amount of 15 mg/kg and cirmtuzumab is administered intravenously at an amount of 5 mg/kg. in one further embodiment, ibrutinib is administered daily over the course of 28 days and cirmtuzumab is administered once over the course of 28 days. in embodiments, the btk antagonist ibrutinib is administered in combination with the ror-1 antagonist cirmtuzumab, and ibrutinib is administered intravenously at an amount of 15 mg/kg and cirmtuzumab is administered intravenously at an amount of 10 mg/kg. 
in one further embodiment, ibrutinib is administered daily over the course of 28 days and cirmtuzumab is administered once over the course of 28 days. in embodiments, the btk antagonist ibrutinib is administered in combination with the ror-1 antagonist cirmtuzumab, and ibrutinib is administered intravenously at an amount of 420 mg and cirmtuzumab is administered intravenously at an amount of 1 mg/kg. in one further embodiment, ibrutinib is administered daily over the course of 28 days and cirmtuzumab is administered once over the course of 28 days. in embodiments, the btk antagonist ibrutinib is administered in combination with the ror-1 antagonist cirmtuzumab, and ibrutinib is administered intravenously at an amount of 420 mg and cirmtuzumab is administered intravenously at an amount of 2 mg/kg. in one further embodiment, ibrutinib is administered daily over the course of 28 days and cirmtuzumab is administered once over the course of 28 days. in embodiments, the btk antagonist ibrutinib is administered in combination with the ror-1 antagonist cirmtuzumab, and ibrutinib is administered intravenously at an amount of 420 mg and cirmtuzumab is administered intravenously at an amount of 3 mg/kg. in one further embodiment, ibrutinib is administered daily over the course of 28 days and cirmtuzumab is administered once over the course of 28 days. in embodiments, the btk antagonist ibrutinib is administered in combination with the ror-1 antagonist cirmtuzumab, and ibrutinib is administered intravenously at an amount of 420 mg and cirmtuzumab is administered intravenously at an amount of 5 mg/kg. in one further embodiment, ibrutinib is administered daily over the course of 28 days and cirmtuzumab is administered once over the course of 28 days. 
in embodiments, the btk antagonist ibrutinib is administered in combination with the ror-1 antagonist cirmtuzumab, and ibrutinib is administered intravenously at an amount of 420 mg and cirmtuzumab is administered intravenously at an amount of 10 mg/kg. in one further embodiment, ibrutinib is administered daily over the course of 28 days and cirmtuzumab is administered once over the course of 28 days. 

pharmaceutical compositions 

the compositions including a btk antagonist and a ror-1 antagonist as provided herein, including embodiments thereof, are further contemplated as pharmaceutical compositions. thus, in an aspect is provided a pharmaceutical composition including a btk antagonist, a ror-1 antagonist and a pharmaceutically acceptable excipient. in another aspect, there is provided a pharmaceutical composition including a bruton's tyrosine kinase (btk) antagonist, an anti-ror-1 antibody and a pharmaceutically acceptable excipient. in embodiments, the btk antagonist and the anti-ror-1 antibody are present in a combined synergistic amount, wherein the combined synergistic amount is effective to treat cancer in a subject in need thereof. the btk antagonist and ror-1 antagonist included in the pharmaceutical compositions provided herein may be any one of the btk antagonists and/or ror-1 antagonists described herein including embodiments thereof. for example, the btk antagonist may be ibrutinib and the ror-1 antagonist may be cirmtuzumab. likewise, pharmaceutical compositions provided herein may be formulated such that the administered amount of btk antagonist and ror-1 antagonist is any one of the amounts as described herein. for example, the ibrutinib may be present in an amount such that administration of the composition results in a dosage of about 5 mg/kg or 420 mg and cirmtuzumab may be present in an amount that results in a dosage of about 2 mg/kg. the provided compositions are, inter alia, suitable for formulation and administration in vitro or in vivo. 
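the weight-based (mg/kg) and fixed (mg) dosing recited above reduces to simple arithmetic. the sketch below is illustrative only — the function names, dose table, and example body weight are not part of this disclosure — and enumerates the administered amounts for the ibrutinib/cirmtuzumab dose levels recited in the embodiments:

```python
# illustrative sketch (hypothetical names): compute administered doses for the
# ibrutinib/cirmtuzumab combinations recited in the embodiments above.

def administered_dose_mg(dose, weight_kg):
    """return the administered dose in mg; `dose` is ('mg/kg', x) or ('mg', x)."""
    unit, amount = dose
    return amount * weight_kg if unit == "mg/kg" else amount

# dose levels recited in the embodiments
ibrutinib_doses = [("mg/kg", x) for x in (1, 2, 5, 10, 15)] + [("mg", 420)]
cirmtuzumab_doses = [("mg/kg", x) for x in (1, 2, 3, 5, 10)]

weight_kg = 70  # hypothetical subject body weight

for ib in ibrutinib_doses:
    for cz in cirmtuzumab_doses:
        ib_mg = administered_dose_mg(ib, weight_kg)
        cz_mg = administered_dose_mg(cz, weight_kg)
        # ibrutinib daily and cirmtuzumab once, each over a 28-day course
        print(f"ibrutinib {ib_mg} mg/day x 28 days; cirmtuzumab {cz_mg} mg once")
```

for a 70 kg subject, the 5 mg/kg ibrutinib level corresponds to 350 mg, while the fixed 420 mg level is weight-independent.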
suitable carriers and excipients and their formulations are described in remington: the science and practice of pharmacy, 21st edition, david b. troy, ed., lippincott williams & wilkins (2005). by pharmaceutically acceptable carrier is meant a material that is not biologically or otherwise undesirable, i.e., the material is administered to a subject without causing undesirable biological effects or interacting in a deleterious manner with the other components of the pharmaceutical composition in which it is contained. if administered to a subject, the carrier is optionally selected to minimize degradation of the active ingredient and to minimize adverse side effects in the subject. pharmaceutical compositions provided herein include compositions wherein the active ingredient (e.g. compositions described herein, including embodiments or examples) is contained in a therapeutically effective amount, i.e., in an amount effective to achieve its intended purpose. the actual amount effective for a particular application will depend, inter alia, on the condition being treated. when administered in methods to treat a disease, the recombinant proteins described herein will contain an amount of active ingredient effective to achieve the desired result, e.g., modulating the activity of a target molecule, and/or reducing, eliminating, or slowing the progression of disease symptoms. determination of a therapeutically effective amount of a compound of the invention is well within the capabilities of those skilled in the art, especially in light of the detailed disclosure herein. provided compositions can include a single agent or more than one agent. the compositions for administration will commonly include an agent as described herein dissolved in a pharmaceutically acceptable carrier, preferably an aqueous carrier. a variety of aqueous carriers can be used, e.g., buffered saline and the like. these solutions are sterile and generally free of undesirable matter. 
these compositions may be sterilized by conventional, well known sterilization techniques. the compositions may contain pharmaceutically acceptable auxiliary substances as required to approximate physiological conditions such as ph adjusting and buffering agents, toxicity adjusting agents and the like, for example, sodium acetate, sodium chloride, potassium chloride, calcium chloride, sodium lactate and the like. the concentration of active agent in these formulations can vary widely, and will be selected primarily based on fluid volumes, viscosities, body weight and the like in accordance with the particular mode of administration selected and the subject's needs. solutions of the active compounds as free base or pharmacologically acceptable salt can be prepared in water suitably mixed with a surfactant, such as hydroxypropylcellulose. dispersions can also be prepared in glycerol, liquid polyethylene glycols, and mixtures thereof and in oils. under ordinary conditions of storage and use, these preparations can contain a preservative to prevent the growth of microorganisms. pharmaceutical compositions can be delivered via intranasal or inhalable solutions or sprays, aerosols or inhalants. nasal solutions can be aqueous solutions designed to be administered to the nasal passages in drops or sprays. nasal solutions can be prepared so that they are similar in many respects to nasal secretions. thus, the aqueous nasal solutions usually are isotonic and slightly buffered to maintain a ph of 5.5 to 6.5. in addition, antimicrobial preservatives, similar to those used in ophthalmic preparations and appropriate drug stabilizers, if required, may be included in the formulation. various commercial nasal preparations are known and can include, for example, antibiotics and antihistamines. 
oral formulations can include excipients as, for example, pharmaceutical grades of mannitol, lactose, starch, magnesium stearate, sodium saccharine, cellulose, magnesium carbonate and the like. these compositions take the form of solutions, suspensions, tablets, pills, capsules, sustained release formulations or powders. in some embodiments, oral pharmaceutical compositions will comprise an inert diluent or assimilable edible carrier, or they may be enclosed in hard or soft shell gelatin capsule, or they may be compressed into tablets, or they may be incorporated directly with the food of the diet. for oral therapeutic administration, the active compounds may be incorporated with excipients and used in the form of ingestible tablets, buccal tablets, troches, capsules, elixirs, suspensions, syrups, wafers, and the like. such compositions and preparations should contain at least 0.1% of active compound. the percentage of the compositions and preparations may, of course, be varied and may conveniently be between about 2 to about 75% of the weight of the unit, or preferably between 25-60%. the amount of active compounds in such compositions is such that a suitable dosage can be obtained. for parenteral administration in an aqueous solution, for example, the solution should be suitably buffered and the liquid diluent first rendered isotonic with sufficient saline or glucose. aqueous solutions, in particular, sterile aqueous media, are especially suitable for intravenous, intramuscular, subcutaneous and intraperitoneal administration. for example, one dosage could be dissolved in 1 ml of isotonic nacl solution and either added to 1000 ml of hypodermoclysis fluid or injected at the proposed site of infusion. sterile injectable solutions can be prepared by incorporating the active compounds or constructs in the required amount in the appropriate solvent followed by filtered sterilization. 
generally, dispersions are prepared by incorporating the various sterilized active ingredients into a sterile vehicle which contains the basic dispersion medium. vacuum-drying and freeze-drying techniques, which yield a powder of the active ingredient plus any additional desired ingredients, can be used to prepare sterile powders for reconstitution of sterile injectable solutions. the preparation of more, or highly, concentrated solutions for direct injection is also contemplated. dmso can be used as solvent for extremely rapid penetration, delivering high concentrations of the active agents to a small area. the formulations of compounds can be presented in unit-dose or multi-dose sealed containers, such as ampules and vials. thus, the composition can be in unit dosage form. in such form the preparation is subdivided into unit doses containing appropriate quantities of the active component. thus, the compositions can be administered in a variety of unit dosage forms depending upon the method of administration. for example, unit dosage forms suitable for oral administration include, but are not limited to, powder, tablets, pills, capsules and lozenges. the dosage and frequency (single or multiple doses) administered to a mammal can vary depending upon a variety of factors, for example, whether the mammal suffers from another disease, and its route of administration; size, age, sex, health, body weight, body mass index, and diet of the recipient; nature and extent of symptoms of the disease being treated (e.g. symptoms of cancer and severity of such symptoms), kind of concurrent treatment, complications from the disease being treated or other health-related problems. other therapeutic regimens or agents can be used in conjunction with the methods and compounds of the invention. adjustment and manipulation of established dosages (e.g., frequency and duration) are well within the ability of those skilled in the art. 
for any composition (e.g., the cell-penetrating conjugate provided) described herein, the therapeutically effective amount can be initially determined from cell culture assays. target concentrations will be those concentrations of active compound(s) that are capable of achieving the methods described herein, as measured using the methods described herein or known in the art. as is well known in the art, effective amounts for use in humans can also be determined from animal models. for example, a dose for humans can be formulated to achieve a concentration that has been found to be effective in animals. the dosage in humans can be adjusted by monitoring effectiveness and adjusting the dosage upwards or downwards, as described above. adjusting the dose to achieve maximal efficacy in humans based on the methods described above and other methods is well within the capabilities of the ordinarily skilled artisan. dosages may be varied depending upon the requirements of the patient and the compound being employed. the dose administered to a patient, in the context of the present invention, should be sufficient to effect a beneficial therapeutic response in the patient over time. the size of the dose also will be determined by the existence, nature, and extent of any adverse side-effects. determination of the proper dosage for a particular situation is within the skill of the practitioner. generally, treatment is initiated with smaller dosages which are less than the optimum dose of the compound. thereafter, the dosage is increased by small increments until the optimum effect under the circumstances is reached. dosage amounts and intervals can be adjusted individually to provide levels of the administered compound effective for the particular clinical indication being treated. this will provide a therapeutic regimen that is commensurate with the severity of the individual's disease state. 
utilizing the teachings provided herein, an effective prophylactic or therapeutic treatment regimen can be planned that does not cause substantial toxicity and yet is effective to treat the clinical symptoms demonstrated by the particular patient. this planning should involve the careful choice of active compound by considering factors such as compound potency, relative bioavailability, patient body weight, presence and severity of adverse side effects, and preferred mode of administration. the terms “pharmaceutically acceptable excipient” and “pharmaceutically acceptable carrier” refer to a substance that aids the administration of an active agent to and absorption by a subject and can be included in the compositions of the present invention without causing a significant adverse toxicological effect on the patient. non-limiting examples of pharmaceutically acceptable excipients include water, nacl, normal saline solutions, lactated ringer's, normal sucrose, normal glucose, binders, fillers, disintegrants, lubricants, coatings, sweeteners, flavors, salt solutions (such as ringer's solution), alcohols, oils, gelatins, carbohydrates such as lactose, amylose or starch, fatty acid esters, hydroxymethylcellulose, polyvinyl pyrrolidone, and colors, and the like. such preparations can be sterilized and, if desired, mixed with auxiliary agents such as lubricants, preservatives, stabilizers, wetting agents, emulsifiers, salts for influencing osmotic pressure, buffers, coloring, and/or aromatic substances and the like that do not deleteriously react with the compounds of the invention. one of skill in the art will recognize that other pharmaceutical excipients are useful in the present invention. 
the term “pharmaceutically acceptable salt” refers to salts derived from a variety of organic and inorganic counter ions well known in the art and include, by way of example only, sodium, potassium, calcium, magnesium, ammonium, tetraalkylammonium, and the like; and when the molecule contains a basic functionality, salts of organic or inorganic acids, such as hydrochloride, hydrobromide, tartrate, mesylate, acetate, maleate, oxalate and the like. the term “preparation” is intended to include the formulation of the active compound with encapsulating material as a carrier providing a capsule in which the active component with or without other carriers, is surrounded by a carrier, which is thus in association with it. similarly, cachets and lozenges are included. tablets, powders, capsules, pills, cachets, and lozenges can be used as solid dosage forms suitable for oral administration. in embodiments, the pharmaceutical composition consists of ibrutinib, cirmtuzumab, and a pharmaceutically acceptable excipient. in embodiments, the pharmaceutical composition consists of idelalisib, cirmtuzumab, and a pharmaceutically acceptable excipient. in embodiments, the pharmaceutical composition consists of fostamatinib, cirmtuzumab, and a pharmaceutically acceptable excipient. in embodiments, the pharmaceutical composition consists of acalabrutinib, cirmtuzumab, and a pharmaceutically acceptable excipient. in embodiments, the pharmaceutical composition consists of ono/gs-4059, cirmtuzumab, and a pharmaceutically acceptable excipient. in embodiments, the pharmaceutical composition consists of bgb-3111, cirmtuzumab, and a pharmaceutically acceptable excipient. in embodiments, the pharmaceutical composition consists of cc-292 (avl-292), cirmtuzumab, and a pharmaceutically acceptable excipient. in embodiments, the pharmaceutical composition consists of r406, cirmtuzumab, and a pharmaceutically acceptable excipient. 
in embodiments, the pharmaceutical composition includes an amount of ibrutinib equivalent to a dose of 1 mg/kg and an amount of cirmtuzumab equivalent to a dose of 1 mg/kg. in embodiments, the pharmaceutical composition includes an amount of ibrutinib equivalent to a dose of 1 mg/kg and an amount of cirmtuzumab equivalent to a dose of 2 mg/kg. in embodiments, the pharmaceutical composition includes an amount of ibrutinib equivalent to a dose of 1 mg/kg and an amount of cirmtuzumab equivalent to a dose of 3 mg/kg. in embodiments, the pharmaceutical composition includes an amount of ibrutinib equivalent to a dose of 1 mg/kg and an amount of cirmtuzumab equivalent to a dose of 5 mg/kg. in embodiments, the pharmaceutical composition includes an amount of ibrutinib equivalent to a dose of 1 mg/kg and an amount of cirmtuzumab equivalent to a dose of 10 mg/kg. in embodiments, the pharmaceutical composition includes an amount of ibrutinib equivalent to a dose of 2 mg/kg and an amount of cirmtuzumab equivalent to a dose of 1 mg/kg. in embodiments, the pharmaceutical composition includes an amount of ibrutinib equivalent to a dose of 2 mg/kg and an amount of cirmtuzumab equivalent to a dose of 2 mg/kg. in embodiments, the pharmaceutical composition includes an amount of ibrutinib equivalent to a dose of 2 mg/kg and an amount of cirmtuzumab equivalent to a dose of 3 mg/kg. in embodiments, the pharmaceutical composition includes an amount of ibrutinib equivalent to a dose of 2 mg/kg and an amount of cirmtuzumab equivalent to a dose of 5 mg/kg. in embodiments, the pharmaceutical composition includes an amount of ibrutinib equivalent to a dose of 2 mg/kg and an amount of cirmtuzumab equivalent to a dose of 10 mg/kg. in embodiments, the pharmaceutical composition includes an amount of ibrutinib equivalent to a dose of 5 mg/kg and an amount of cirmtuzumab equivalent to a dose of 1 mg/kg. 
in embodiments, the pharmaceutical composition includes an amount of ibrutinib equivalent to a dose of 5 mg/kg and an amount of cirmtuzumab equivalent to a dose of 2 mg/kg. in embodiments, the pharmaceutical composition includes an amount of ibrutinib equivalent to a dose of 5 mg/kg and an amount of cirmtuzumab equivalent to a dose of 3 mg/kg. in embodiments, the pharmaceutical composition includes an amount of ibrutinib equivalent to a dose of 5 mg/kg and an amount of cirmtuzumab equivalent to a dose of 5 mg/kg. in embodiments, the pharmaceutical composition includes an amount of ibrutinib equivalent to a dose of 5 mg/kg and an amount of cirmtuzumab equivalent to a dose of 10 mg/kg. in embodiments, the pharmaceutical composition includes an amount of ibrutinib equivalent to a dose of 10 mg/kg and an amount of cirmtuzumab equivalent to a dose of 1 mg/kg. in embodiments, the pharmaceutical composition includes an amount of ibrutinib equivalent to a dose of 10 mg/kg and an amount of cirmtuzumab equivalent to a dose of 2 mg/kg. in embodiments, the pharmaceutical composition includes an amount of ibrutinib equivalent to a dose of 10 mg/kg and an amount of cirmtuzumab equivalent to a dose of 3 mg/kg. in embodiments, the pharmaceutical composition includes an amount of ibrutinib equivalent to a dose of 10 mg/kg and an amount of cirmtuzumab equivalent to a dose of 5 mg/kg. in embodiments, the pharmaceutical composition includes an amount of ibrutinib equivalent to a dose of 10 mg/kg and an amount of cirmtuzumab equivalent to a dose of 10 mg/kg. in embodiments, the pharmaceutical composition includes an amount of ibrutinib equivalent to a dose of 15 mg/kg and an amount of cirmtuzumab equivalent to a dose of 1 mg/kg. in embodiments, the pharmaceutical composition includes an amount of ibrutinib equivalent to a dose of 15 mg/kg and an amount of cirmtuzumab equivalent to a dose of 2 mg/kg. 
in embodiments, the pharmaceutical composition includes an amount of ibrutinib equivalent to a dose of 15 mg/kg and an amount of cirmtuzumab equivalent to a dose of 3 mg/kg. in embodiments, the pharmaceutical composition includes an amount of ibrutinib equivalent to a dose of 15 mg/kg and an amount of cirmtuzumab equivalent to a dose of 5 mg/kg. in embodiments, the pharmaceutical composition includes an amount of ibrutinib equivalent to a dose of 15 mg/kg and an amount of cirmtuzumab equivalent to a dose of 10 mg/kg. in embodiments, the pharmaceutical composition includes an amount of ibrutinib equivalent to a dose of 420 mg and an amount of cirmtuzumab equivalent to a dose of 1 mg/kg. in embodiments, the pharmaceutical composition includes an amount of ibrutinib equivalent to a dose of 420 mg and an amount of cirmtuzumab equivalent to a dose of 2 mg/kg. in embodiments, the pharmaceutical composition includes an amount of ibrutinib equivalent to a dose of 420 mg and an amount of cirmtuzumab equivalent to a dose of 3 mg/kg. in embodiments, the pharmaceutical composition includes an amount of ibrutinib equivalent to a dose of 420 mg and an amount of cirmtuzumab equivalent to a dose of 5 mg/kg. in embodiments, the pharmaceutical composition includes an amount of ibrutinib equivalent to a dose of 420 mg and an amount of cirmtuzumab equivalent to a dose of 10 mg/kg. 

examples 

signaling via the b-cell receptor (bcr) is thought to play a role in the pathogenesis and/or progression of disease, e.g., chronic lymphocytic leukemia (cll). the importance of this cascade in cll biology appears underscored by clinical trials demonstrating clinical activity with small-molecule kinase inhibitors intended to block bcr-signaling. however, almost none of the inhibitors intended to block bcr-signaling produce a complete response (cr), suggesting that other mechanisms that counterbalance bcr signaling may prevent a cr in cll following treatment with bcr-signaling inhibitors. 
applicants found, inter alia, that ror-1, a survival signal for cll, was induced by the inhibitors and can account for this effect. receptor tyrosine kinase-like orphan receptor 1 (ror-1) is an oncoembryonic antigen that is expressed on the cell surface of lymphoma and leukemia cells from patients with chronic lymphocytic leukemia (cll) and mantle cell lymphoma (mcl), but not on normal b-cells or other postpartum tissues. the binding of the ligand wnt5a to ror-1 results in the recruitment of guanine exchange factors (gefs), which activate rac1 and rhoa and promote disease-related chemotaxis and proliferation. targeting the bcr and ror-1 signaling pathways with simultaneous inhibition of btk and ror-1 has not yet been reported. the work presented here evaluated the activity of ibrutinib combined with the novel and selective anti-ror-1 antibody cirmtuzumab in primary cll samples. treatment with both the btk inhibitor and anti-ror-1 antibodies further reduced cll cell survival when compared to treatment with the single agents alone, and in combination induced synergistic growth inhibition as the result of further disrupted ligand-induced signaling. hence, simultaneous targeting of these pathways may significantly increase clinical activity. moreover, the enhanced efficacy observed with the combination treatment of anti-ror-1 and ibrutinib was an unexpected benefit. specifically, combining ibrutinib with anti-cd20 antibodies that display cell-mediated anti-tumor reactivities did not display enhanced efficacy. in fact, it was shown that ibrutinib interfered with the activity of the cd20 antibodies. example 1 combination of cirmtuzumab (uc-961) with ibrutinib for treatment of chronic lymphocytic leukemia abstract.
ibrutinib, a small molecule that irreversibly inhibits bruton's tyrosine kinase (btk), has shown efficacy in the treatment of patients with chronic lymphocytic leukemia (cll) by blocking b-cell receptor (bcr) signaling, but does not induce complete responses (cr) or durable remissions. receptor tyrosine kinase-like orphan receptor 1 (ror-1) is a receptor for wnt5a and plays an important role in non-canonical wnt signaling in cll progression. in this study, applicants tested the effects of ibrutinib on wnt5a/ror-1 signaling-mediated activities in cll cells. applicants found that wnt5a can induce rac1 activation in cll cells treated with ibrutinib and that, although ibrutinib treatment can inhibit cll proliferation in the absence of wnt5a, this inhibition was reversed by wnt5a stimulation. such effects were blocked by a humanized anti-ror-1 monoclonal antibody (mab), cirmtuzumab (uc-961). moreover, combinatory treatment with uc-961 and ibrutinib significantly inhibited cll proliferation in vitro and engraftment of ror-1 + leukemia cells in vivo, and was more effective than each agent alone. the outcomes of this study provide rationale to combine uc-961 and ibrutinib as therapy for patients with cll and other ror-1-expressing b cell tumors. introduction. signaling via the b cell receptor (bcr) plays an important role in the pathogenesis and progression of cll. crosslinking of the bcr leads to phosphorylation of cd79a/b and the src family kinase lyn, resulting in the recruitment and activation of the tyrosine kinase syk, which induces a cascade of downstream signaling events, leading to enhanced b-cell survival. the importance of this cascade in cll biology appears underscored by the therapeutic effects of small-molecule inhibitors of kinases such as syk, akt and btk, which are important in bcr-signaling. ibrutinib is an inhibitor of btk and can induce durable clinical responses in most patients, provided that they continue therapy indefinitely.
however, most patients generally achieve only partial responses (pr). moreover, patients virtually never achieve complete responses (cr) lacking detectable minimal residual disease (mrd), even after prolonged single-agent therapy. the failure of ibrutinib to achieve deep crs could be due to the presence of alternative survival signaling pathways that are not blocked by inhibitors of btk. one such pathway is that induced by signaling via ror-1, an oncoembryonic antigen expressed on cll cells, but not on normal postpartum tissues. applicants found that ror-1 could serve as a receptor for wnt5a, which could induce non-canonical wnt-signaling leading to activation of rho gtpases, such as rac1, and enhanced leukemia-cell proliferation and survival. activation of rac1 by wnt5a could be inhibited by an anti-ror-1 mab, uc-961, which is a first-in-class humanized monoclonal antibody currently undergoing evaluation in clinical trials for patients with cll. in this study, applicants investigated wnt5a/ror-1 signaling in the presence of the bcr signaling inhibitor ibrutinib, and examined the combinatory effect of ibrutinib and a humanized anti-ror-1 monoclonal antibody (mab), cirmtuzumab (uc-961), for cll treatment in vitro and in vivo. results. uc-961 inhibits wnt5a-induced rac1 activation in cll cells in the presence of ibrutinib. wnt5a can induce activation of rac1 in a variety of cell types, including cll cells. applicants evaluated whether wnt5a could induce rac1 activation in cll cells treated with ibrutinib. for this, applicants treated cll cells with ibrutinib at concentrations of 0, 0.25, 0.5 or 1.0 μm for 2 hours and then treated the cells with exogenous wnt5a for 30 minutes. immunoblot analysis showed that treatment with wnt5a induced rac1 activation and that such activation could not be blocked by ibrutinib, even at concentrations of 1 μm ( fig. 6a ), a concentration considered super-physiologic and one that caused 100% btk occupancy ( fig.
6b ), consistent with previous reports. moreover, concentrations of ibrutinib as low as 0.25 μm caused complete inhibition of calcium flux induced by bcr-ligation with anti-μ ( fig. s11c ), without acutely affecting cll-cell viability ( fig. 6d ). the maximal concentration of ibrutinib in cll patient plasma is approximately 0.5 μm. as such, applicants treated cll cells with 0.5 μm ibrutinib in subsequent studies. applicants next examined wnt5a-induced rac1 activation with or without uc-961. applicants pretreated cll cells with ibrutinib, uc-961 or the combination of ibrutinib and uc-961 for 2 hours, and then treated them with or without wnt5a recombinant protein for 30 minutes. ibrutinib did not inhibit wnt5a-induced rac1 activation; treatment with uc-961, however, reduced wnt5a-induced rac1 activation to levels comparable to those observed in cll cells that were not treated with exogenous wnt5a ( figs. 1a-1b ). furthermore, the combination of uc-961 with ibrutinib also inhibited wnt5a-induced rac1 activation to basal levels ( figs. 1a-1b ). applicants examined whether cll cells of patients undergoing therapy with ibrutinib could be stimulated with wnt5a. for this, blood mononuclear cells were collected from patients undergoing treatment with ibrutinib at the standard therapeutic dose of 420 mg qd. the isolated cll cells were incubated with or without wnt5a and/or uc-961. western blot analysis showed that wnt5a induced rac1 activation in cll cells from these patients, whereas incubation with uc-961 inhibited the wnt5a-induced rac1 activation; the level of active rac1 was similar to that in samples without treatment ( figs. 1c-1d ). these results demonstrate that ibrutinib does not inhibit wnt5a-induced rac1 activation. uc-961 inhibits wnt5a-enhanced proliferation of cll cells in the presence of ibrutinib. activation of rac1-gtpase can enhance proliferation.
applicants induced proliferation of cll cells by co-culturing with cells expressing cd154 (hela cd154) in the presence of exogenous interleukin (il)-4 and il-10. addition of exogenous wnt5a to such cultures significantly enhanced the proportion of dividing cells deduced from the fluorescence intensity of cells labeled with carboxyfluorescein succinimidyl ester (cfse); this effect could be inhibited by uc-961 ( fig. 2a ). in contrast, cll cells co-cultured with wild-type hela cells were not induced to proliferate, even in the presence of il-4/10 and/or wnt5a ( figs. 7a-7b ). applicants' results demonstrated that the proliferation induced by wnt5a could be inhibited by uc-961 to levels comparable to those observed in cultures without wnt5a. treatment with ibrutinib could inhibit cd154-induced cll-cell proliferation; however, exogenous wnt5a still could enhance the proportion of dividing cells in the presence of ibrutinib ( fig. 2a ); this could be inhibited by uc-961 ( fig. 2a ). the same effects were observed using cll cells of different patients (n=6) ( fig. 2b ). collectively, these data demonstrate that uc-961 could block wnt5a-signaling that was not affected by treatment with ibrutinib. combination of uc-961 and ibrutinib in cll patient derived xenograft. applicants transferred cll cells into the peritoneal cavity of immune-deficient rag2 −/− γ c −/− mice to generate xenografts. applicants examined the capacity of the combination of uc-961 with ibrutinib to deplete cll cells in such xenografts. for this, applicants injected 1×10 7 viable primary cll cells in aim-v medium into the peritoneal cavity of each mouse. one day later, the mice were provided no treatment or daily doses of 15 mg/kg ibrutinib via gavage, and/or a single dose of uc-961 at 1 mg/kg. after 7 days, the cll cells were harvested via peritoneal lavage (pl) and were examined by flow cytometry.
the calculated cll cell numbers per pl were significantly lower in the groups of mice treated with uc-961 or ibrutinib than the numbers collected from control non-treated mice. however, animals treated with both uc-961 and ibrutinib had significantly fewer cll cells per pl than each other group, including those treated with single-agent ibrutinib or uc-961 ( fig. 3 ). these data demonstrate an additive effect of uc-961 on ibrutinib in clearing leukemia cells in this xenograft model. uc-961 inhibits wnt5a-enhanced rac1 activation and proliferation of ror-1×tcl1 leukemia cells in the presence of ibrutinib. ror-1×tcl1 leukemia cells were isolated from ror-1×tcl1 double-transgenic mice that developed leukemia. applicants pretreated ror-1×tcl1 leukemia cells with ibrutinib or uc-961 for 2 hours and then treated the cells with or without wnt5a recombinant protein for 30 minutes. similar to applicants' findings with human cll cells, wnt5a-induced rac1 activation could be inhibited by uc-961 but not by ibrutinib ( figs. 4a-4b ). the combination of uc-961 with ibrutinib also inhibited wnt5a-induced activation of rac1 to basal levels ( figs. 4a-4b ). however, wnt5a treatment could not induce rac1 activation in leukemia cells derived from single-transgenic tcl1 mice that lack expression of ror-1 ( fig. 8a ). again, ror-1×tcl1 leukemia cells could be induced to proliferate upon culture with hela cd154 cells in the presence of exogenous il-4 and il-10. addition of exogenous wnt5a significantly enhanced the proportion of dividing cells and the numbers of cell divisions that could be deduced from the fluorescence of cells labeled with cfse ( figs. 4c-4d ). similar to human cll cells, ror-1×tcl1 leukemia cells co-cultured with wild-type hela cells were not induced to proliferate, even in the presence of il-4/10 and/or wnt5a ( fig. 8b ). treatment with ibrutinib could partially inhibit cd154-induced ror-1×tcl1 leukemia-cell proliferation.
uc-961, but not ibrutinib, could inhibit the capacity of wnt5a to enhance ror-1×tcl1 leukemia-cell proliferation in response to cd154 and il-4/10 ( figs. 4c-4d ). on the other hand, wnt5a did not enhance the proliferation of ror-1-negative tcl1-leukemia cells co-cultured with hela cd154 cells and il-4/10. combination of uc-961 and ibrutinib in ror-1×tcl1 leukemia engrafted mice. applicants examined the capacity of the combination of uc-961 with ibrutinib to inhibit engraftment of ror-1×tcl1 leukemia cells (cd5 + b220 low ror-1 + ) in rag2 −/− γ c −/− mice. applicants engrafted rag2 −/− γ c −/− mice each with 2×10 4 ror-1×tcl1 leukemia cells and then treated the animals daily with 15, 5, or 1.67 mg/kg ibrutinib, or with a single dose of 10, 3, or 1 mg/kg of uc-961. after 25 days, the animals were sacrificed and the spleens were examined. ibrutinib and uc-961 inhibited the expansion of ror-1×tcl1 leukemia cells in a dose-dependent manner. applicants selected the 1-mg/kg single dose of uc-961 and the 5-mg/kg daily dose of ibrutinib for combination studies. while mice treated with uc-961 or ibrutinib alone had significantly smaller spleens than did littermates without treatment, the combination treatment of uc-961 and ibrutinib caused the greatest reduction in spleen size ( fig. 5a ). applicants examined the proportions of ror-1×tcl1 leukemia cells in the spleens via flow cytometry ( fig. 5b ). the percentage and total cell numbers of ror-1×tcl1 leukemia cells per spleen were significantly lower in mice treated with uc-961 or ibrutinib compared to mice that did not receive treatment. however, the animals treated with both uc-961 and ibrutinib had significantly fewer ror-1×tcl1 leukemia cells per spleen than all other groups, including those treated with single-agent ibrutinib or uc-961 ( fig. 5c ). discussion. cll is characterized by the expansion of monoclonal, mature cd5 + b cells that proliferate in tissue compartments such as the lymph node (ln) and bone marrow (bm).
the differences in tumor proliferation likely account for the heterogeneous clinical course of cll and reflect genetic differences among the malignant lymphocytes as well as the activity of external signals that drive tumor proliferation. cll cells depend on interactions with cells and soluble factors present in the tumor microenvironment for proliferation and survival. among the pathways that may support cll proliferation and survival in vivo, bcr signaling appears to be one of the most important. btk is involved in bcr signaling and is vital for many aspects of cll development. in the present study, applicants demonstrated that treatment with ibrutinib caused 100% inhibition of btk, inhibited igm-induced bcr signaling such as calcium influx, and reduced cd154-mediated cll proliferation. cellular pathways operate more like networks than superhighways. cancers use a diversity of pathological signaling and gene regulatory mechanisms to promote their survival, proliferation, and malignant phenotypes. applicants have reported that ror-1 is expressed in cll and contributes to cll progression. the functional study revealed that wnt5a, a ligand of ror-1, stimulated ror-1 to activate rac1 in cll cells and that wnt5a/ror-1 signaling is important for cll progression. applicants examined the effects of ibrutinib on the function of wnt5a/ror-1 signaling, which has been shown to be important for rac1 activation and cll proliferation. applicants found that even though ibrutinib can inhibit cd154-induced cll proliferation, which is consistent with previously reported data, it was not able to inhibit wnt5a-induced rac1 activation or wnt5a-enhanced cll proliferation upon co-culture with hela cd154 cells in the presence of exogenous il-4/10. moreover, wnt5a significantly induced rac1 activation in cll cells of patients on ibrutinib treatment.
cll patients can show primary resistance to ibrutinib because of resistant clones of cll cells; this might be explained by the fact that ibrutinib does not block wnt5a-induced signaling, which is important for cll cell biology, especially in the ln and bm microenvironments. it has been reported that ibrutinib blocks fcγr-mediated calcium signaling and cytokine production, but it has no effect on rac activation, which is responsible for actin polymerization and phagocytosis. combination therapies are often needed to effectively treat many tumors, since there are multiple redundancies, or alternate routes, that may be activated in response to the inhibition of a pathway and result in drug resistance and clinical relapse. researchers have pursued combination therapy using ibrutinib with other drugs for leukemia treatment. increased bcl-2 protein with a decline in mcl-1 and bcl-xl has also been observed and suggested as a survival mechanism for ibrutinib-treated cll cells. combination of ibrutinib with a bcl-2 inhibitor (abt-199) showed synergistic effects on proliferation inhibition and apoptosis in mantle cell lymphoma cells through perturbation of the btk and bcl-2 pathways. since btk and pi3k differentially regulate bcr signaling, combination therapy with ibrutinib and a pi3k inhibitor (idelalisib) results in a more prominent mobilization of mcl and cll cells from their proliferation- and survival-promoting niches. moreover, an ibrutinib and anti-cd20 mab combination study showed that ibrutinib substantially reduced cd20 expression on cll cells and subsequently diminished complement-mediated cell killing. this negative interaction between ibrutinib and anti-cd20 mabs might reduce the efficacy of the combination therapy. all these studies indicate that it is critical to identify possible interactions or crosstalk between bcr signaling and alternative signaling pathways when pursuing combination therapy with ibrutinib.
applicants demonstrated here that uc-961 showed significant inhibitory activity against wnt5a-induced rac1 activation and thereby inhibited wnt5a-enhanced cll proliferation. moreover, administering a combination of uc-961 and ibrutinib eliminated leukemia cells in the recipient rag2 −/− γ c −/− mice due to an additive effect, which was greater than that caused by either agent alone. uc-961 and ibrutinib reinforce each other's effects. consequently, combination therapy is expected to result in not only a reduced proliferation rate in their growth-promoting niches, but also a more prominent mobilization of cll cells from these niches. from the perspective of drug clearance and protein turnover, the effect may also be stronger and prolonged because btk and ror-1 do not have to be fully occupied when the combination is used. furthermore, lower doses can be given, which might be beneficial for the efficacy/toxicity ratio. of major importance, however, targeting more than one key component of a pathway may overcome innate resistance and prevent or overcome acquired resistance to (mono)therapy. for example, uc-961 may still be beneficial in ibrutinib-treated patients with mutation at the ibrutinib-binding site on btk, mutation in additional molecules in the btk axis such as plcγ2, or sf3b1 mutation, which is associated with poor prognosis. taken together, applicants' use of uc-961 in in vitro and in vivo systems using cll or ror-1×tcl1 leukemia cells supports the potential of uc-961 as a therapeutic drug and deserves further investigation of its combination therapy with ibrutinib for cll and possibly other ror-1-expressing b cell malignancies that depend upon active bcr signaling and/or the tumor microenvironment. methods. cells and sample preparation. cll specimens. blood samples were collected from cll patients at the university of california san diego moores cancer center.
pbmcs were isolated by density centrifugation with ficoll-paque plus (ge healthcare life sciences), and suspended in 90% fetal bovine serum (fbs) (omega scientific) and 10% dmso (sigma-aldrich) for viable storage in liquid nitrogen. samples with >95% cd19 + cd5 + cll cells were used without further purification throughout this study. ibrutinib occupancy assay. cll cells were treated with increasing concentrations of ibrutinib (0, 0.25, 0.5 or 1 μm) for 1 hour. cells were then washed in phosphate buffered saline and stored at −80° c. until a btk occupancy assay was performed as described. btk occupancy was compared using graphpad prism version 6.0 (graphpad, san diego, calif.). calcium flux assay. cll cells were incubated with 0, 0.25, 0.5 or 1.0 μm ibrutinib for 30 min, and then were loaded with 2 mm fluo-4am (molecular probes) in hanks balanced salt solution (hbss), lacking ca 2+ and mg 2+ . cells were kept at 37° c. for stimulation with anti-human igm f(ab) 2 . calcium release was monitored by flow cytometry analysis, as described. cell proliferation assay. the primary cll or ror-1×tcl1 leukemia cell proliferation assay was performed as described. leukemia cells were labeled with carboxyfluorescein succinimidyl ester (cfse, life technologies) and plated at 1.5×10 6 /well/ml in a 24-well tray on a layer of irradiated hela cd154 cells (8000 rad; 80 gray) at a cll/hela cd154 cell ratio of 15:1 in complete rpmi-1640 medium supplemented with 5 ng/ml of recombinant human interleukin (il)-4 (r&d systems) and 15 ng/ml recombinant human il-10 (r&d systems). wnt5a (200 ng/ml, r&d systems) or uc-961 (10 μg/ml) was added as indicated in the text. cfse-labeled cll cells were analyzed by flow cytometry; modfit lt software (version 3.0, verity software house) was used for analysis of cell proliferation as previously described. rac1 activation assay. rac1 activation assay reagents were purchased from cytoskeleton and used as per the manufacturer's instructions.
briefly, gtp-bound active rac1 was pulled down with pak-pbd beads, and then subjected to immunoblot analysis. immunoblots of whole-cell lysates were used to assess total rac1. the integrated optical density (iod) of bands was evaluated by densitometry and analyzed using gel-pro analyzer 4.0 software (media cybernetics, md). immunoblot analysis. western blot analysis was performed as described. equal amounts of total protein from each sample were fractionated by sds-page and blotted onto polyvinylidene difluoride membrane. western blot analysis was performed using a primary mab specific for rac1, which was detected using secondary antibodies conjugated with horseradish peroxidase (cell signaling technology). human cll patient derived xenograft study. six- to eight-week-old rag2 −/− γ c −/− mice (initially obtained from catriona jamieson, university of california san diego) were housed in laminar-flow cabinets under specific pathogen-free conditions and fed ad libitum. applicants injected 2×10 7 viable primary cll cells in aim-v medium into the peritoneal cavity of each mouse. on the following day, one mg/kg uc-961 was injected once i.p. and ibrutinib was administered daily at 15 mg/kg by oral gavage. seven days later, peritoneal lavage (pl) was extracted by injecting the cavity with a total volume of 12 ml of dulbecco's pbs. total recovery of the pl cells was determined by guava counting. subsequently, cells were blocked with both mouse and human fc blocker for 30 min at 4° c., stained for various human cell-surface markers (e.g., cd19, cd5, cd45), and then processed for flow cytometric analysis. applicants calculated the number of cll cells in each pl by multiplying the percentage of cll cells in the pl by the total pl cell counts. residual leukemia cells from human igg-treated mice were set as baseline at 100%. each treatment group included at least 6 mice, and the data were presented as mean±sem. ror-1×tcl1 leukemia adoptive transfer study.
applicants evaluated the anti-leukemia activity of the combination of uc-961 with ibrutinib in immune-deficient rag2 −/− γ c −/− mice. ror-1×tcl1 leukemia b cells (cd5 + b220 low ror-1 + ) were isolated from the spleen, enriched via density gradient centrifugation, suspended in sterile pbs, and injected i.v. into rag2 −/− γ c −/− recipient mice at 2×10 4 cells per animal. samples used for transplantation were verified by flow cytometry to be >95% leukemia b cells. for dose-dependent therapy with uc-961, recipient mice received either no treatment or a single i.v. injection of 10 mg/kg, 3 mg/kg or 1 mg/kg of uc-961 on day 1. for dose-dependent therapy with ibrutinib, recipient mice received either no treatment or daily p.o. doses of 15 mg/kg, 5 mg/kg or 1.67 mg/kg of ibrutinib, beginning on day 1. for combination therapy, recipient mice received either no treatment, or a single i.v. injection of 1 mg/kg of uc-961 and/or daily p.o. doses of 5 mg/kg of ibrutinib, beginning on day 1. all mice were sacrificed on day 25 and single-cell suspensions of splenocytes were purged of red blood cells by hypotonic lysis in ammonium-chloride-potassium (ack) lysis solution, washed, suspended in 2% (wt/vol) bsa (sigma) in pbs (ph=7.4) and stained for surface expression of cd3 (17a2), cd5 (53-7.3), b220 (ra3-6b2), and ror-1 (4a5) using optimized concentrations of fluorochrome-conjugated mabs. cells were examined by four-color, multiparameter flow cytometry using a dual-laser facscalibur (bd) and the data were analyzed using flowjo software (treestar). the total number of leukemia cells per spleen was calculated by determining the percent of cd5 + b220 low ror-1 + cells of total lymphocytes by flow cytometry and multiplying this number by the total spleen cell count. statistics. data are presented as mean±sem as indicated, for data sets that satisfied conditions for a normal distribution, as determined by the kolmogorov-smirnov test.
the statistical significance of the difference between means was assessed by one-way anova with tukey's multiple comparisons test. p values less than 0.05 were considered significant. analysis for significance was performed with graphpad prism 6.0 (graphpad software inc.). example 2 combination studies. cll cells were treated with different bcr inhibitors and examined for ror-1 expression; ror-1 expression was significantly induced following bcr-inhibitor treatment. applicants cultured cll cells that had increased ror-1 expression induced by bcr signaling inhibitors in the peritoneal cavity of immunodeficient rag2/common-gamma-chain knockout mice (rag2 −/− γ c −/− ), which subsequently were treated with control ig, anti-ror-1 antibody, ibrutinib, or a combination of anti-ror-1 antibody and ibrutinib. cll cells were more sensitive to treatment with the combination of anti-ror-1 antibody and ibrutinib than to treatment with anti-ror-1 antibody or ibrutinib only. the capacity of the combination of anti-ror-1 antibody and ibrutinib to inhibit the adoptive transfer of human-ror-1-expressing murine leukemia cells was tested in immunodeficient recipient mice. six rag2 −/− γc −/− mice were injected intravenously with 1 mg/kg of the humanized anti-human ror-1 mab uc-961. two hours later, all mice were given an intravenous injection of human ror-1 + murine leukemia cells derived from a ror-1×tcl1 transgenic mouse. daily ibrutinib treatment was started on the day after leukemia xenograft. when compared to control animals and animals treated with a single agent, the combination treatment of anti-ror-1 antibody and ibrutinib resulted in an over 90% reduction of leukemic cells in the spleen, the major organ of accumulation for these malignant cells. as depicted in fig. 9 , cll cells were treated with different bcr inhibitors and examined for ror-1 expression; ror-1 expression was significantly induced following bcr inhibitor treatment. as depicted in fig.
10 , applicants cultured cll cells that had increased ror-1 expression induced by bcr signaling inhibitors in the peritoneal cavity of immunodeficient rag2/common-gamma-chain knockout mice (rag2 −/− γ c −/− ), which subsequently were treated with control ig, anti-ror-1 antibody, ibrutinib, or a combination of anti-ror-1 antibody and ibrutinib. cll cells were more sensitive to treatment with the combination of anti-ror-1 antibody and ibrutinib than to treatment with anti-ror-1 antibody or ibrutinib only. as depicted in fig. 11 , rag2 −/− γ c −/− mice were given an intravenous injection of 1×10 4 cd5 + b220 lo human ror-1 + murine leukemia cells derived from a ror-1×tcl1 transgenic mouse, and subsequently were treated with control ig, anti-ror-1 antibody, ibrutinib, or a combination of anti-ror-1 antibody and ibrutinib the next day. the combination treatment of anti-ror-1 antibody and ibrutinib resulted in a significant reduction of leukemic cells in the spleen, the major organ of accumulation for these malignant cells, compared with treatment with anti-ror-1 antibody or ibrutinib only. example 3 cirmtuzumab inhibits wnt5a-induced rac1 activation in chronic lymphocytic leukemia treated with ibrutinib abstract. signaling via the b cell receptor (bcr) plays an important role in the pathogenesis and progression of chronic lymphocytic leukemia (cll). this is underscored by the clinical effectiveness of ibrutinib, an inhibitor of bruton's tyrosine kinase (btk) that can block bcr-signaling. however, ibrutinib cannot induce complete responses (cr) or durable remissions without continued therapy, suggesting alternative pathways also contribute to cll growth/survival that are independent of bcr-signaling. ror-1 is a receptor for wnt5a, which can promote activation of rac1 to enhance cll-cell proliferation and survival. in this study, applicants found that cll cells of patients treated with ibrutinib had activated rac1.
moreover, wnt5a could induce rac1 activation and enhance proliferation of cll cells treated with ibrutinib at concentrations that were effective in completely inhibiting btk and bcr-signaling. wnt5a-induced rac1 activation could be blocked by cirmtuzumab (uc-961), an anti-ror-1 mab. applicants found that treatment with cirmtuzumab and ibrutinib was significantly more effective than treatment with either agent alone in clearing leukemia cells in vivo. this study indicates that cirmtuzumab may enhance the activity of ibrutinib in the treatment of patients with cll or other ror-1 + b-cell malignancies. introduction. cll cells depend on interactions with cells and soluble factors present in the tumor microenvironment for proliferation and survival. among the pathways that may support cll proliferation and survival in vivo, bcr-signaling plays a prominent role. crosslinking of the bcr leads to phosphorylation of cd79α/β and src family kinase lyn, resulting in the recruitment and activation of the tyrosine kinase syk, which induces a cascade of downstream signaling events, leading to enhanced b-cell survival. the importance of this cascade in cll biology appears underscored by the clinical activity of small-molecule inhibitors of intracellular kinases, which play critical roles in bcr-signaling, such as syk, phosphoinositide 3-kinase (pi3k), or bruton's tyrosine kinase (btk). ibrutinib is a small molecule inhibitor of btk that has proven highly effective in the treatment of patients with cll. however, despite having excellent clinical activity, ibrutinib generally cannot eradicate the disease or induce durable responses in the absence of continuous therapy. the failure of ibrutinib to induce complete responses could be due to alternative survival-signaling pathways, which are not blocked by inhibitors of btk. one such pathway is that induced by signaling through ror-1, an oncoembryonic antigen expressed on cll cells, but not on normal postpartum tissues. 
applicants found that ror-1 could serve as a receptor for wnt5a, which could induce non-canonical wnt-signaling that activates rho gtpases, such as rac1, and enhance leukemia-cell proliferation and survival. activation of rac1 by wnt5a could be inhibited by an anti-ror-1 mab, cirmtuzumab (uc-961), which is a first-in-class humanized monoclonal antibody currently undergoing evaluation in clinical trials for patients with cll. in this study, applicants investigated whether wnt5a/ror-1 signaling was affected by treatment with ibrutinib and examined the activity of ibrutinib and cirmtuzumab on cll cells in vitro and in vivo. materials and methods. blood samples and animals. blood samples were collected from cll patients at the university of california san diego moores cancer center who satisfied diagnostic and immunophenotypic criteria for common b-cell cll, and who provided written, informed consent, in compliance with the declaration of helsinki and the institutional review board (irb) of the university of california san diego (irb approval number 080918). pbmcs were isolated as described. all experiments with mice were conducted in accordance with the guidelines of the national institutes of health for the care and use of laboratory animals, and a study protocol approved by the university of california san diego. all mice were age and sex matched. btk-occupancy assay. cll cells were treated with increasing concentrations of ibrutinib (0, 0.25, 0.5 or 1 μm) for 1 hour. cells were then washed in phosphate buffered saline and stored at −80° c. until a btk occupancy assay was performed as described. btk occupancy was compared using graphpad prism version 6.0 (graphpad, san diego, calif.). calcium flux assay. cll cells were incubated with 0, 0.25, 0.5, or 1.0 μm ibrutinib for 30 min, and then were loaded with 2 mm fluo-4am (molecular probes) in hanks balanced salt solution (hbss), lacking ca 2+ and mg 2+ . cells were kept at 37° c. for stimulation with anti-human igm f(ab) 2 .
calcium release was monitored by flow cytometry analysis, as described. rac1 activation assay. reagents for the rac1 activation assay were made in applicants' laboratory, as described previously. the rac1 pull-down and immunoblot analyses were performed as described. cell proliferation assay. applicants performed the leukemia-cell proliferation assay as previously described. for these analyses applicants gated on viable cd5 − cd19 + cells using their characteristic light scatter and capacity to exclude pi ( figs. 18a-18b ). cell cycle analyses. leukemia cells (1×10 7 ) were suspended in 100 μl of pbs and fixed overnight at 4° c. by adding 1 ml cold ethanol. cells were spun at 700×g for 2 min and washed twice with pbs containing 1% bsa. the pelleted cells were then suspended in 500 μl of pbs containing 1% bsa; boiled rnase a (1 μl; 100 mg/ml) was added to digest rna, followed by pi-staining solution (60 μl; 0.5 mg/ml in 38 mm sodium citrate, ph 7.0), and the cells were incubated in the dark for 1 hour at room temperature. immediately thereafter, the cells were analyzed via flow cytometry using a facsarray (becton dickinson), and data were analyzed using flowjo software (tree star inc.). cll patient-derived xenografts. six- to eight-week-old rag2 −/− γ c −/− mice (initially obtained from catriona jamieson, university of california san diego) were housed in laminar-flow cabinets under specific pathogen-free conditions and fed ad libitum. applicants injected 2×10 7 viable primary cll cells in aim-v medium into the peritoneal cavity of each mouse. on the following day, cirmtuzumab (1 mg/kg) was injected once i.p., and ibrutinib was administered daily at 15 mg/kg by oral gavage. seven days later, peritoneal lavage (pl) was extracted by injecting the cavity with a total volume of 12 ml of dulbecco's pbs. total recovery of the pl cells was determined by guava counting.
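in the study, dna-content histograms were analyzed with flowjo; purely as an illustration of how the pi readout above yields cell-cycle fractions, the following python sketch classifies events by normalized dna content. the gate positions (g1 peak at 1.0, g2/m near 2.0, gate width 0.15) are hypothetical assumptions, not values from the study.

```python
def cell_cycle_fractions(dna_content, g1_peak=1.0, g2_peak=2.0, width=0.15):
    """classify each event by normalized dna content and return the
    fraction of cells in g0/g1, s, and g2/m (illustrative gates)."""
    g1 = s = g2m = 0
    for x in dna_content:
        if x < g1_peak + width:      # at or near the g1 peak
            g1 += 1
        elif x < g2_peak - width:    # intermediate dna content = s phase
            s += 1
        else:                        # near twice the g1 content = g2/m
            g2m += 1
    n = len(dna_content)
    return {"G0/G1": g1 / n, "S": s / n, "G2/M": g2m / n}

# ten illustrative events on a normalized dna-content scale (g1 peak = 1.0)
events = [1.0, 1.05, 0.95, 1.1, 1.0, 1.02, 1.5, 1.6, 2.0, 1.95]
fractions = cell_cycle_fractions(events)   # {"G0/G1": 0.6, "S": 0.2, "G2/M": 0.2}
```

in practice the s/g2/m fraction reported in the figures corresponds to the sum of the "S" and "G2/M" bins of such a gating.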
subsequently, cells were blocked with both mouse and human fc blocker for 30 min at 4° c., stained with various human cell-surface markers (e.g., cd19, cd5, cd45), and then processed for flow cytometric analysis. applicants calculated the number of cll cells in each pl by multiplying the percentage of cll cells in the pl by the total pl cell counts. residual leukemia cells from human igg-treated mice were set as baseline at 100%. each treatment group included at least 5 mice, and the data were presented as mean±sem. ror-1×tcl1 leukemia adoptive transfer study. applicants evaluated the anti-leukemia activity of the combination of cirmtuzumab with ibrutinib in immunodeficient rag2 −/− γ c −/− or immunocompetent ror-1-transgenic mice, as previously described. statistical analyses. data are shown as mean±sem. normal distribution of data sets was determined by the kolmogorov-smirnov test. the statistical significance of the difference between means was assessed by one-way anova with tukey's multiple comparisons test. applicants used graphpad prism 6.0 (graphpad software inc.) to calculate the level of significance using the statistical method described in the text. a p-value ≤0.05 was considered significant. results ibrutinib fails to inhibit wnt5a-induced rac1 activation in cll. applicants examined the blood mononuclear cells of patients who were taking ibrutinib at the standard dose of 420 mg per day. freshly isolated cll cells had activated rac1, which diminished over time in culture in serum-free media unless provided with exogenous wnt5a ( figs. 12a-12b ), as noted for the cll cells of patients not taking ibrutinib. moreover, the cll cells from ibrutinib-treated patients were incubated with or without wnt5a and/or cirmtuzumab. immunoblot analysis showed that wnt5a induced rac1 activation in cll cells from all patients examined, whereas treatment with cirmtuzumab inhibited wnt5a-induced rac1 activation ( figs. 12c-12d ).
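the leukemia-burden arithmetic described above (percent cll cells by flow cytometry multiplied by the total pl cell count, with the higg control group set as 100% baseline) can be sketched in python as follows; the percentages and cell counts here are illustrative numbers only, not data from the study.

```python
def cll_cells_per_lavage(percent_cll, total_cells):
    """absolute cll-cell count = flow-cytometry percentage x total pl count."""
    return percent_cll / 100.0 * total_cells

def percent_of_baseline(counts, baseline):
    """residual leukemia relative to the higg control group (= 100%)."""
    return [100.0 * c / baseline for c in counts]

# illustrative numbers only (not data from the study)
baseline = cll_cells_per_lavage(40.0, 5_000_000)    # higg control mouse
treated = [cll_cells_per_lavage(10.0, 4_000_000),   # e.g., combination
           cll_cells_per_lavage(25.0, 4_800_000)]   # e.g., single agent
residual = percent_of_baseline(treated, baseline)   # [20.0, 60.0]
```

in the study this per-mouse value was then averaged within each treatment group (at least 5 mice) and reported as mean±sem.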
these results indicate that therapy with ibrutinib does not inhibit ror-1-dependent, wnt5a-induced rac1 activation. applicants examined whether treatment of cll cells with ibrutinib in vitro could inhibit wnt5a-induced rac1 activation in cll. for this, applicants incubated cll cells collected from untreated patients with ibrutinib at concentrations of 0, 0.25, 0.5, or 1.0 μm for 2 hours and then treated the cells with exogenous wnt5a for 30 minutes. immunoblot analysis demonstrated that ibrutinib could not block wnt5a-induced rac1 activation, even at ibrutinib concentrations of 1 μm ( fig. 6a ), which is in large excess of what is required to achieve 100% occupancy of btk and inhibition of btk activity ( fig. 6b ). on the other hand, applicants noted that ibrutinib at concentrations as low as 0.25 μm inhibited the calcium flux induced by anti-igm ( fig. 6c ), without acutely affecting cll-cell viability ( fig. 6d ). the peak plasma concentration of ibrutinib in patients treated with this drug is approximately 0.5 μm, a concentration that can achieve 100% occupancy and inhibition of btk. therefore, ibrutinib was used at 0.5 μm for subsequent studies. applicants then examined wnt5a-induced rac1 activation with or without ibrutinib and/or cirmtuzumab. cll cells were cultured with ibrutinib, cirmtuzumab, or both ibrutinib and cirmtuzumab for 2 hours, and then stimulated with exogenous wnt5a for 30 minutes. for comparison, cells from the same cll sample were cultured without wnt5a in parallel. treatment of cll cells with wnt5a induced activation of rac1 to levels that were significantly higher than that of cll cells that were not treated with wnt5a ( figs. 12e-12f ). treatment with cirmtuzumab, but not ibrutinib, could inhibit wnt5a-induced rac1 activation in cll cells ( figs. 12e-12f ). as expected, ibrutinib did not block the capacity of cirmtuzumab to inhibit wnt5a-induced rac1 activation ( figs. 12e-12f ).
cirmtuzumab inhibits wnt5a-enhanced proliferation of cll cells treated with ibrutinib. activation of rac1-gtpase can enhance proliferation, whereas loss of rac1 results in impaired hematopoietic-cell growth. applicants induced proliferation of cll cells by co-culturing leukemia cells with hela cells expressing cd154 (helacd154) and recombinant interleukin (il)-4 and il-10. addition of exogenous wnt5a to co-cultures of cll cells with helacd154 cells and il-4/10 significantly enhanced the proportion of dividing cll cells. treatment of the cll cells with cirmtuzumab, but not ibrutinib, could block wnt5a-enhanced proliferation of cll cells ( fig. 13a ). the same effects were observed for cll cells of different patients (n=6) ( fig. 13b ). il-4/10 and/or wnt5a alone could not induce cll-cell proliferation ( figs. 18a-18b ). furthermore, cell-cycle analysis on permeabilized leukemia cells with propidium iodide (pi) demonstrated that wnt5a enhanced the fraction of cd154-stimulated leukemia cells in s/g2/m ( figs. 13c-13d ). the capacity of wnt5a to enhance the proportion of cells in s/g2/m could be inhibited by treatment with cirmtuzumab, but not ibrutinib ( figs. 13c-13d ). activity of cirmtuzumab and/or ibrutinib in cll patient-derived xenografts. applicants transferred cll cells into the peritoneal cavity of immunodeficient rag2 −/− γ c −/− mice, and examined whether treatment with ibrutinib and/or cirmtuzumab could deplete cll cells in vivo. for this, applicants injected 1×10 7 viable primary cll cells in aim-v medium into the peritoneal cavity of each mouse. one day later, the mice were provided no treatment or daily doses of ibrutinib at 15 mg/kg via oral gavage, and/or a single dose of cirmtuzumab at 1 mg/kg via i.p. injection. after 7 days, the cll cells were harvested via peritoneal lavage (pl) and the proportions of cll cells in the harvested peritoneal cells were examined by flow cytometry ( fig. 14a ).
the percentages and total numbers of cll cells in pl were significantly lower in mice treated with cirmtuzumab or ibrutinib than in mice that did not receive any treatment. however, significantly fewer cll cells were found in the pl of mice treated with cirmtuzumab and ibrutinib than in the pl of mice treated with either agent alone ( fig. 14b ). cirmtuzumab, but not ibrutinib, inhibits wnt5a-enhanced rac1 activation and proliferation of ror-1×tcl1 leukemia cells. ror-1×tcl1 leukemia cells were isolated from ror-1×tcl1 double-transgenic mice that developed ror-1 + leukemia. applicants pretreated ror-1×tcl1 leukemia cells with ibrutinib or cirmtuzumab for 2 hours and then cultured the cells with or without wnt5a for 30 minutes. similar to findings with human cll cells, wnt5a-induced rac1 activation could be inhibited by cirmtuzumab, but not by ibrutinib ( figs. 15a-15b ). the combination of cirmtuzumab with ibrutinib also inhibited wnt5a-induced activation of rac1 to levels observed in untreated cells ( figs. 15a-15b ). however, wnt5a treatment could not induce activation of rac1 in the leukemia cells of single-transgenic tcl1 mice, which develop a leukemia that lacks expression of ror-1 ( fig. 7c ). again, applicants induced proliferation of ror-1×tcl1 leukemia cells by co-culturing the cells with helacd154 in the presence of recombinant il-4/10. exogenous wnt5a significantly enhanced the percentage of dividing cells ( fig. 15c ). as with human cll cells, wnt5a and/or il-4/10 alone could not induce proliferation of ror-1 + leukemia cells of ror-1×tcl1 transgenic mice ( fig. 15c ), indicating a dependency on cd154 for this effect. in agreement with earlier studies, wnt5a did not enhance the proliferation of ror-1-negative tcl1-leukemia cells co-cultured with helacd154 cells and il-4/10 ( fig. 21a ), indicating a dependency on ror-1 for this effect.
treatment with ibrutinib could not inhibit the capacity of wnt5a to enhance cd154-induced ror-1×tcl1 leukemia-cell proliferation. on the other hand, cirmtuzumab blocked the capacity of wnt5a to enhance ror-1×tcl1 leukemia-cell proliferation in response to cd154 and il-4/10 ( fig. 15c ). as noted for human cll cells, cell-cycle analysis on permeabilized ror-1×tcl1 leukemia cells using pi demonstrated that wnt5a could increase the fraction of cd154-stimulated ror-1 + leukemia cells in s/g2/m ( figs. 19a-19b ). moreover, the capacity of wnt5a to enhance the fraction of ror-1 + leukemia cells in s/g2/m could be inhibited by treatment with cirmtuzumab, but not ibrutinib ( figs. 19a-19b ). treatment of immunodeficient mice engrafted with ror-1×tcl1 leukemia with cirmtuzumab and/or ibrutinib. applicants examined the capacity of cirmtuzumab and/or ibrutinib to inhibit ror-1×tcl1 leukemia-cell engraftment in rag2 −/− γ c −/− mice. applicants engrafted each animal with 2×10 4 ror-1×tcl1 leukemia cells and then administered daily ibrutinib at 15, 5, or 1.67 mg/kg via gavage, or provided a single dose of cirmtuzumab at 1, 3, or 10 mg/kg via intravenous injection. after 25 days, the animals were sacrificed and the spleen of each animal was examined. ibrutinib ( fig. 20a ) or cirmtuzumab ( fig. 20b ) reduced the numbers of splenic leukemia cells in a dose-dependent manner. applicants selected the cirmtuzumab dose of 1 mg/kg and a daily ibrutinib dose of 5 mg/kg for combination studies. while the engrafted mice treated with cirmtuzumab or ibrutinib alone had significantly smaller spleens than the engrafted animals that did not receive any treatment, the mice treated with the combination of cirmtuzumab and ibrutinib had the greatest reductions in spleen size ( fig. 16a ).
furthermore, the mean proportion and number of leukemia cells in the spleen were significantly lower in mice treated with cirmtuzumab or ibrutinib compared to engrafted mice that did not receive treatment ( figs. 16b-16c ). however, the engrafted animals that were treated with cirmtuzumab and ibrutinib had significantly lower proportions and numbers of leukemia cells per spleen than all other groups ( figs. 16b-16c ). treatment of immunocompetent mice engrafted with ror-1×tcl1 leukemia with cirmtuzumab and/or ibrutinib. applicants examined the capacity of cirmtuzumab and/or ibrutinib to inhibit engraftment of ror-1×tcl1 leukemia cells (cd5 + b220 low ror-1 + ) in immunocompetent human-ror-1 transgenic (ror-1-tg) mice. applicants injected 2×10 4 ror-1×tcl1 leukemia cells into ror-1-tg mice, and administered no treatment, daily doses of ibrutinib at 5 mg/kg via gavage, or weekly doses of cirmtuzumab at 10 mg/kg via intravenous injection. after 28 days, the animals were sacrificed and the spleen of each animal was examined. while the engrafted mice treated with cirmtuzumab or ibrutinib alone had significantly smaller spleens than the engrafted animals that did not receive any treatment, the mice treated with the combination of cirmtuzumab and ibrutinib had the greatest reductions in spleen size ( fig. 17a ). furthermore, the mean proportion and number of leukemia cells in the spleen were significantly lower in mice treated with cirmtuzumab or ibrutinib than in mice that did not receive treatment ( figs. 17b-17c ). however, the engrafted animals that were treated with cirmtuzumab and ibrutinib had significantly lower proportions and numbers of leukemia cells per spleen than all other groups ( figs. 17b-17c ). discussion in this study, applicants examined the cll cells of patients undergoing treatment with ibrutinib, which is highly effective at inhibiting bcr-signaling through its capacity to inhibit btk.
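per the methods, group means such as the splenic leukemia-cell counts above were compared by one-way anova with tukey's multiple-comparisons test (run in graphpad prism). purely as an illustration of the first step of that analysis, the following pure-python sketch computes the one-way anova f-statistic on made-up counts for three groups; the tukey post-hoc step is not shown, and the numbers are not data from the study.

```python
def one_way_anova_f(groups):
    """one-way anova f statistic:
    f = (between-group ss / (k - 1)) / (within-group ss / (n - k))."""
    all_vals = [x for g in groups for x in g]
    n, k = len(all_vals), len(groups)
    grand_mean = sum(all_vals) / n
    group_means = [sum(g) / len(g) for g in groups]
    # between-group sum of squares, weighted by group size
    ss_between = sum(len(g) * (m - grand_mean) ** 2
                     for g, m in zip(groups, group_means))
    # within-group sum of squares around each group mean
    ss_within = sum((x - m) ** 2
                    for g, m in zip(groups, group_means) for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# made-up splenic leukemia-cell counts (arbitrary units) for three groups:
# untreated, single agent, combination
groups = [[100, 110, 105], [60, 65, 55], [20, 25, 15]]
f_stat = one_way_anova_f(groups)   # ≈ 217 for these illustrative data
```

a large f relative to the f-distribution with (k-1, n-k) degrees of freedom indicates that at least one group mean differs; the pairwise tukey comparisons then identify which groups differ.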
first, applicants noted that the cll cells of patients treated with ibrutinib had activated rac1, which diminished over time in culture in serum-free media unless applicants supplemented the media with exogenous wnt5a. moreover, applicants found that wnt5a could induce cll cells to activate rac1, as noted in a variety of cell types, including cll cells. subsequent studies showed that wnt5a could induce rac1 activation even in cll cells that were treated with ibrutinib at supra-physiologic concentrations, which exceeded the levels required to achieve 100% occupancy and inhibition of btk and bcr-signaling. the wnt5a-signaling noted in this study was dependent upon ror-1, as indicated by the capacity of cirmtuzumab to inhibit wnt5a-induced activation of rac1. applicants conclude that ibrutinib cannot block ror-1-dependent, wnt5a-induced activation of rac1, which serves as an intracellular signal transducer that can influence multiple signaling pathways. activated rac1 might mitigate the effectiveness of anti-cancer therapy. prior studies found that activated rac1 can enhance resistance of cll cells to cytotoxic drugs. one study found that activated t cells and fibroblasts could induce cll cells to activate rac1 and acquire resistance to the cytotoxic effects of fludarabine monophosphate; inhibition of activated rac1 could restore the sensitivity of these cll cells to this drug. in another study, rac1 was found to interact with and enhance the function of bcl-2, which is over-expressed in cll. another study involving acute leukemia cells found that treatment with nsc-23766, an inhibitor of activated rac1, could enhance the cytotoxicity of bcl-2 antagonists for leukemia cells. finally, loss of p53 in lymphoma cells has been associated with increased activation of rac1, which could be inhibited by nsc-23766 or a dominant-negative form of rac1, rac1n17, leading to a dose-dependent increase in the rate of spontaneous or drug-induced apoptosis.
conceivably, the activated rac1 observed in cll cells of patients treated with ibrutinib provides an ancillary signal, which enhances the survival of leukemia cells of patients treated with ibrutinib. furthermore, wnt5a-signaling also could promote leukemia-cell proliferation in patients treated with ibrutinib. the functional consequences of wnt5a-signaling are demonstrated in part by the ability of wnt5a to enhance proliferation induced by cd154, which can induce cll proliferation in vitro in the presence of exogenous il-4/10 or il-21. although ibrutinib could partially inhibit cd154-induced cll-cell proliferation, possibly due to its capacity to inhibit bcr and bcr-independent pathways, applicants found that ibrutinib could not inhibit the capacity of wnt5a to enhance cd154-induced cll proliferation via ror-1-dependent signaling, which could, however, be blocked by treatment with cirmtuzumab. wnt5a most likely is produced by cells in the cll microenvironment, but the plasma of patients with cll also has high levels of wnt5a. wnt5a also might be produced by the cll cells themselves, allowing for autocrine activation. indeed, one study found that cll cells that may express high levels of wnt5a apparently have increased motility and chemotactic responses, presumably due to wnt5a-autocrine signaling. applicants also noted in an earlier study that wnt5a could enhance the migration of cll cells toward chemokine via activation of rhoa. however, because btk plays a prominent role in cll signaling via chemokine receptors such as cxcr4, applicants focused attention on the capacity of wnt5a to activate rac1, which could enhance proliferation induced by cd154 via signaling pathways that are relatively independent of btk. because the wnt5a-ror-1 signaling pathway appears intact in cll cells treated with ibrutinib, applicants examined for additive, if not synergistic, effects of treatment with ibrutinib and cirmtuzumab.
for mice engrafted with histocompatible ror-1 + leukemia, or human cll xenografts, applicants found that treatment with both cirmtuzumab and ibrutinib was significantly more effective than treatment with either agent alone in clearing leukemia cells in vivo. this study indicates that cirmtuzumab may enhance the activity of ibrutinib in the treatment of patients with cll or other ror-1 + b-cell malignancies. combination therapies are often more effective in treating patients with cancer. investigations are ongoing to evaluate the activity of ibrutinib in combination with other drugs, such as venetoclax or anti-cd20 mabs. because cirmtuzumab and ibrutinib target independent signaling pathways, they have apparent synergistic effects in clearing leukemia cells from the mouse models. by targeting more than one signaling pathway leading to leukemia-cell growth/survival, combined therapy with cirmtuzumab and ibrutinib also could mitigate the risk of acquiring resistance to inhibitors of btk, as sometimes occurs in patients who receive ibrutinib monotherapy. taken together, from the perspective of therapeutic efficacy and drug resistance, these preclinical observations provide a rationale for combination therapy with cirmtuzumab and ibrutinib, or other inhibitors of btk such as acalabrutinib, for patients with cll or other b-cell malignancies that express ror-1. example 4 combination of anti-ror-1 antibody and ibrutinib for mantle cell lymphoma a recent study by applicants' group demonstrated that cll cells of patients treated with ibrutinib had activated rac1. moreover, wnt5a could induce rac1 activation and enhance proliferation of cll cells treated with ibrutinib at concentrations that were effective in completely inhibiting btk and bcr-signaling. wnt5a-induced rac1 activation could be blocked by cirmtuzumab (uc-961), an anti-ror-1 mab.
applicants found that treatment with cirmtuzumab and ibrutinib was significantly more effective than treatment with either agent alone in clearing leukemia cells in vivo. this study indicates that cirmtuzumab may enhance the activity of ibrutinib in the treatment of patients with cll or other ror-1 + b-cell malignancies. thus, applicants examined primary lymphoma cells of patients with mcl for wnt5a-induced ror-1-dependent activation of rac1. mcl cells were cultured with ibrutinib, cirmtuzumab or both ibrutinib and cirmtuzumab for 2 h, and then stimulated with exogenous wnt5a for 30 min. for comparison, cells from the same mcl sample were cultured without wnt5a in parallel. as noted for cll cells, wnt5a induced activation of primary mcl cells in a ror-1-dependent fashion. for example, wnt5a induced rac1 activation in the primary mcl cells ( fig. 23a ). cirmtuzumab, but not ibrutinib, could inhibit the capacity of wnt5a to induce rac1 activation in primary mcl cells, similar to what applicants observed in primary cll cells. activation of rac1-gtpase can enhance proliferation, whereas loss of rac1 results in impaired hematopoietic-cell growth. propidium iodide (pi) is the most commonly used dye for dna content/cell cycle analysis. to evaluate the responsiveness of mcl cells to cd40 ligation and il-4 exposure, applicants induced proliferation of primary mcl cells by co-culturing the lymphoma cells with hela cells expressing cd154 (helacd154) and recombinant il-4 and il-10. addition of exogenous wnt5a to co-cultures of mcl cells with helacd154 cells and il-4/10 significantly enhanced the proportion of mcl cells in s/g2 phase, as assessed using pi-based cell cycle studies, as noted for cll cells. applicants also performed cell-cycle analysis on permeabilized mcl cells using pi and found that wnt5a stimulation significantly increased the fraction of cd154-stimulated mcl cells in s/g2 ( fig. 23b ). 
the capacity of wnt5a to enhance the proportion of primary mcl cells in s/g2 could be inhibited by treatment with cirmtuzumab, but not with ibrutinib, as noted previously for cll cells. these data demonstrate the functional importance of ror-1 signaling in mcl and the ability of cirmtuzumab to inhibit ror-1-mediated oncogenic activity in this lymphoma. the activity of cirmtuzumab in mcl is identical to what applicants observed in cll, for which applicants found cirmtuzumab to have synergistic anti-tumor activity with ibrutinib in clearing leukemia cells in 3 different animal models. example 5 candidate drugs for the treatment of chronic lymphocytic leukemia and b-cell non-hodgkin lymphoma the novel btk inhibitor ibrutinib and phosphatidyl-4-5-biphosphate 3-kinase-δ inhibitor idelalisib (cal-101) are candidate drugs for the treatment of chronic lymphocytic leukemia and b-cell non-hodgkin lymphoma, either alone or in combination with anti-cd20 antibodies. pretreatment with ibrutinib for 1 hour did not increase direct cell death of cell lines or chronic lymphocytic leukemia samples mediated by anti-cd20 antibodies. pre-treatment with ibrutinib did not inhibit complement activation or complement-mediated lysis. in contrast, ibrutinib strongly inhibited all cell-mediated mechanisms induced by the anti-cd20 antibodies rituximab, ofatumumab or obinutuzumab, either in purified systems or whole blood assays. activation of natural killer cells, and antibody-dependent cellular cytotoxicity by these cells, as well as phagocytosis by macrophages or neutrophils can be inhibited by ibrutinib with a half maximal effective concentration of 0.3-3 μm. analysis of anti-cd20-mediated activation of natural killer cells isolated from patients on continued oral ibrutinib treatment suggests that repeated drug dosing inhibits these cells in vivo.
it has been shown that the phosphatidyl-4-5-biphosphate 3-kinase-δ inhibitor idelalisib similarly inhibits the immune cell-mediated mechanisms induced by anti-cd20 antibodies, although the effects of this drug at 10 μm were weaker than those observed with ibrutinib at the same concentration. without wishing to be bound by any theory, it is believed that the design of combined treatment schedules of anticd20 antibodies with these kinase inhibitors should consider the multiple negative interactions between these two classes of drugs. references stevenson f k, krysov s, davies a j, steele a j, packham g. b-cell receptor signaling in chronic lymphocytic leukemia. blood 2011 oct. 20; 118: 4313-4320. burger j a, tedeschi a, barr p m, robak t, owen c, ghia p, et al. ibrutinib as initial therapy for patients with chronic lymphocytic leukemia. n engl j med 2015 dec. 17; 373: 2425-2437. byrd j c, furman r r, coutre s e, flinn i w, burger j a, blum k a, et al. targeting btk with ibrutinib in relapsed chronic lymphocytic leukemia. n engl j med 2013 jul. 4; 369: 32-42. byrd j c, o'brien s, james d f. ibrutinib in relapsed chronic lymphocytic leukemia. n engl j med 2013 sep. 26; 369: 1278-1279. komarova n l, burger j a, wodarz d. evolution of ibrutinib resistance in chronic lymphocytic leukemia (cll). proc natl acad sci usa 2014 sep. 23; 111: 13906-13911. fukuda t, chen l, endo t, tang l, lu d, castro j e, et al. antisera induced by infusions of autologous ad-cd154-leukemia b cells identify ror-1 as an oncofetal antigen and receptor for wnt5a. proc natl acad sci usa 2008 feb. 26; 105: 3047-3052. widhopf g f, 2nd, cui b, ghia e m, chen l, messer k, shen z, et al. ror-1 can interact with tcl1 and enhance leukemogenesis in emu-tcl1 transgenic mice. proc natl acad sci usa 2014 jan. 14; 111: 793-798. hofbauer s w, krenn p w, ganghammer s, asslaber d, pichler u, oberascher k, et al. 
tiam1/rac1 signals contribute to the proliferation and chemoresistance, but not motility, of chronic lymphocytic leukemia cells. blood 2014 apr. 3; 123: 2181-2188. kaucka m, plevova k, pavlova s, janovska p, mishra a, verner j, et al. the planar cell polarity pathway drives pathogenesis of chronic lymphocytic leukemia by the regulation of b-lymphocyte migration. cancer res 2013 mar. 1; 73: 1491-1501. yu j, chen l, cui b, widhopf g f, 2nd, shen z, wu r, et al. wnt5a induces ror-1/ror2 heterooligomerization to enhance leukemia chemotaxis and proliferation. j clin invest 2015 dec. 21. choi m y, widhopf g f, 2nd, wu c c, cui b, lao f, sadarangani a, et al. pre-clinical specificity and safety of uc-961, a first-in-class monoclonal antibody targeting ror-1. clin lymphoma myeloma leuk 2015 june; 15 suppl: s167-169. nishita m, itsukushima s, nomachi a, endo m, wang z, inaba d, et al. ror2/frizzled complex mediates wnt5a-induced ap-1 activation by regulating dishevelled polymerization. mol cell biol 2010 july; 30: 3610-3619. naskar d, maiti g, chakraborty a, roy a, chattopadhyay d, sen m. wnt5a-rac1-nf-kappab homeostatic circuitry sustains innate immune functions in macrophages. j immunol 2014 may 1; 192: 4386-4397. zhu y, shen t, liu j, zheng j, zhang y, xu r, et al. rab35 is required for wnt5a/dvl2-induced rac1 activation and cell migration in mcf-7 breast cancer cells. cell signal 2013 may; 25: 1075-1085. ren l, campbell a, fang h, gautam s, elavazhagan s, fatehchand k, et al. analysis of the effects of the bruton's tyrosine kinase (btk) inhibitor ibrutinib on monocyte fcgamma receptor (fcgammar) function. j biol chem 2016 feb. 5; 291: 3043-3052. honigberg l a, smith a m, sirisawad m, verner e, loury d, chang b, et al. the bruton tyrosine kinase inhibitor pci-32765 blocks b-cell activation and is efficacious in models of autoimmune disease and b-cell malignancy. proc natl acad sci usa 2010 jul. 20; 107: 13075-13080.
rushworth s a, murray m y, zaitseva l, bowles k m, macewan d j. identification of bruton's tyrosine kinase as a therapeutic target in acute myeloid leukemia. blood 2014 feb. 20; 123: 1229-1238. di paolo j a, huang t, balazs m, barbosa j, barck k h, bravo b j, et al. specific btk inhibition suppresses b cell- and myeloid cell-mediated arthritis. nat chem biol 2011 january; 7: 41-50. de jong j, sukbuntherng j, skee d, murphy j, o'brien s, byrd j c, et al. the effect of food on the pharmacokinetics of oral ibrutinib in healthy participants and patients with chronic lymphocytic leukemia. cancer chemother pharmacol 2015 may; 75: 907-916. advani r h, buggy j j, sharman j p, smith s m, boyd t e, grant b, et al. bruton tyrosine kinase inhibitor ibrutinib (pci-32765) has significant activity in patients with relapsed/refractory b-cell malignancies. j clin oncol 2013 jan. 1; 31: 88-94. etienne-manneville s, hall a. rho gtpases in cell biology. nature 2002 dec. 12; 420: 629-635. gu y, filippi m d, cancelas j a, siefring j e, williams e p, jasti a c, et al. hematopoietic cell regulation by rac1 and rac2 guanosine triphosphatases. science 2003 oct. 17; 302: 445-449. fecteau j f, corral l g, ghia e m, gaidarova s, futalan d, bharati i s, et al. lenalidomide inhibits the proliferation of cll cells via a cereblon/p21(waf1/cip1)-dependent mechanism independent of functional p53. blood 2014 sep. 4; 124: 1637-1644. zhang s, wu c c, fecteau j f, cui b, chen l, zhang l, et al. targeting chronic lymphocytic leukemia cells with a humanized monoclonal antibody specific for cd44. proc natl acad sci usa 2013 apr. 9; 110: 6127-6132. herishanu y, perez-galan p, liu d, biancotto a, pittaluga s, vire b, et al. the lymph node microenvironment promotes b-cell receptor signaling, nf-kappab activation, and tumor proliferation in chronic lymphocytic leukemia. blood 2011 jan. 13; 117: 563-574. burger j a. nurture versus nature: the microenvironment in chronic lymphocytic leukemia.
hematology am soc hematol educ program 2011; 2011: 96-103. chiorazzi n, rai k r, ferrarini m. chronic lymphocytic leukemia. n engl j med 2005 feb. 24; 352: 804-815. rossi d, spina v, bomben r, rasi s, dal-bo m, bruscaggin a, et al. association between molecular lesions and specific b-cell receptor subsets in chronic lymphocytic leukemia. blood 2013 jun. 13; 121: 4902-4905. ghia p, chiorazzi n, stamatopoulos k. microenvironmental influences in chronic lymphocytic leukaemia: the role of antigen stimulation. j intern med 2008 december; 264: 549-562. herishanu y, katz b z, lipsky a, wiestner a. biology of chronic lymphocytic leukemia in different microenvironments: clinical and therapeutic implications. hematol oncol clin north am 2013 april; 27: 173-206. woyach j a, bojnik e, ruppert a s, stefanovski m r, goettl v m, smucker k a, et al. bruton's tyrosine kinase (btk) function is important to the development and expansion of chronic lymphocytic leukemia (cll). blood 2014 feb. 20; 123: 1207-1213. herman s e, mustafa r z, gyamfi j a, pittaluga s, chang s, chang b, et al. ibrutinib inhibits bcr and nf-kappab signaling and reduces tumor proliferation in tissue-resident cells of patients with cll. blood 2014 may 22; 123: 3286-3295. cheng s, ma j, guo a, lu p, leonard j p, coleman m, et al. btk inhibition targets in vivo cll proliferation through its effects on b-cell receptor signaling activity. leukemia 2014 march; 28: 649-657. mathews griner l a, guha r, shinn p, young r m, keller j m, liu d, et al. high-throughput combinatorial screening identifies drugs that cooperate with ibrutinib to kill activated b-cell-like diffuse large b-cell lymphoma cells. proc natl acad sci usa 2014 feb. 11; 111: 2349-2354. guo a, lu p, galanina n, nabhan c, smith s m, coleman m, et al. heightened btk-dependent cell proliferation in unmutated chronic lymphocytic leukemia confers increased sensitivity to ibrutinib. oncotarget 2016 jan. 26; 7: 4598-4610. woodcock j, griffin j p, behrman r e. 
development of novel combination therapies. n engl j med 2011 mar. 17; 364: 985-987. cervantes-gomez f, lamothe b, woyach j a, wierda w g, keating m j, balakrishnan k, et al. pharmacological and protein profiling suggests venetoclax (abt-199) as optimal partner with ibrutinib in chronic lymphocytic leukemia. clin cancer res 2015 aug. 15; 21: 3705-3715. zhao x, bodo j, sun d, durkin l, lin j, smith m r, et al. combination of ibrutinib with abt-199: synergistic effects on proliferation inhibition and apoptosis in mantle cell lymphoma cells through perturbation of btk, akt and bcl2 pathways. br j haematol 2015 march; 168: 765-768. de rooij m f, kuil a, kater a p, kersten m j, pals s t, spaargaren m. ibrutinib and idelalisib synergistically target bcr-controlled adhesion in mcl and cll: a rationale for combination therapy. blood 2015 apr. 2; 125: 2306-2309. da roit f, engelberts p j, taylor r p, breij e c, gritti g, rambaldi a, et al. ibrutinib interferes with the cell-mediated anti-tumor activities of therapeutic cd20 antibodies: implications for combination therapy. haematologica 2015 january; 100: 77-86. skarzynski m, niemann c u, lee y s, martyr s, maric i, salem d, et al. interactions between ibrutinib and anti-cd20 antibodies: competing effects on the outcome of combination therapy. clin cancer res 2016 jan. 1; 22: 86-95. woyach j a, furman r r, liu t m, ozer h g, zapatka m, ruppert a s, et al. resistance mechanisms for the bruton's tyrosine kinase inhibitor ibrutinib. n engl j med 2014 jun. 12; 370: 2286-2294. yuan y, shen h, franklin d s, scadden d t, cheng t. in vivo self-renewing divisions of haematopoietic stem cells are increased in the absence of the early g1-phase inhibitor, p18ink4c. nat cell biol 2004 may; 6: 436-442. shen h, yu h, liang p h, cheng h, xufeng r, yuan y, et al. an acute negative bystander effect of gamma-irradiated recipients on transplanted hematopoietic stem cells. blood 2012 apr. 12; 119: 3629-3637.
ghia p, chiorazzi n, stamatopoulos k. microenvironmental influences in chronic lymphocytic leukaemia: the role of antigen stimulation. j intern med 2008 december; 264: 549-562. herishanu y, katz b z, lipsky a, wiestner a. biology of chronic lymphocytic leukemia in different microenvironments: clinical and therapeutic implications. hematol oncol clin north am 2013 april; 27: 173-206. burger j a. nurture versus nature: the microenvironment in chronic lymphocytic leukemia. hematology am soc hematol educ program 2011; 2011: 96-103. stevenson f k, krysov s, davies a j, steele a j, packham g. b-cell receptor signaling in chronic lymphocytic leukemia. blood 2011 oct. 20; 118: 4313-4320. burger j a, tedeschi a, barr p m, robak t, owen c, ghia p, et al. ibrutinib as initial therapy for patients with chronic lymphocytic leukemia. n engl j med 2015 dec. 17; 373: 2425-2437. byrd j c, furman r r, coutre s e, flinn i w, burger j a, blum k a, et al. targeting btk with ibrutinib in relapsed chronic lymphocytic leukemia. n engl j med 2013 jul. 4; 369: 32-42. byrd j c, o'brien s, james d f. ibrutinib in relapsed chronic lymphocytic leukemia. n engl j med 2013 sep. 26; 369: 1278-1279. komarova n l, burger j a, wodarz d. evolution of ibrutinib resistance in chronic lymphocytic leukemia (cll). proc natl acad sci usa 2014 sep. 23; 111: 13906-13911. fukuda t, chen l, endo t, tang l, lu d, castro j e, et al. antisera induced by infusions of autologous ad-cd154-leukemia b cells identify ror-1 as an oncofetal antigen and receptor for wnt5a. proc natl acad sci usa 2008 feb. 26; 105: 3047-3052. widhopf g f, 2nd, cui b, ghia e m, chen l, messer k, shen z, et al. ror-1 can interact with tcl1 and enhance leukemogenesis in emu-tcl1 transgenic mice. proc natl acad sci usa 2014 jan. 14; 111: 793-798. hofbauer s w, krenn p w, ganghammer s, asslaber d, pichler u, oberascher k, et al. 
tiam1/rac1 signals contribute to the proliferation and chemoresistance, but not motility, of chronic lymphocytic leukemia cells. blood 2014 apr. 3; 123: 2181-2188. kaucka m, plevova k, pavlova s, jan.ovska p, mishra a, verner j, et al. the planar cell polarity pathway drives pathogenesis of chronic lymphocytic leukemia by the regulation of b-lymphocyte migration. cancer res 2013 mar. 1; 73: 1491-1501. yu j, chen l, cui b, widhopf g f, 2nd, shen z, wu r, et al. wnt5a induces ror-1/ror2 heterooligomerization to enhance leukemia chemotaxis and proliferation. j clin invest 2015 dec. 21. choi m y, widhopf g f, 2nd, wu c c, cui b, lao f, sadarangani a, et al. pre-clinical specificity and safety of uc-961, a first-in-class monoclonal antibody targeting ror-1. clin lymphoma myeloma leuk 2015 june; 15 suppl: s167-169. honigberg l a, smith a m, sirisawad m, verner e, loury d, chang b, et al. the bruton tyrosine kinase inhibitor pci-32765 blocks b-cell activation and is efficacious in models of autoimmune disease and b-cell malignancy. proc natl acad sci usa 2010 jul. 20; 107: 13075-13080. rushworth s a, murray m y, zaitseva l, bowles k m, macewan d j. identification of bruton's tyrosine kinase as a therapeutic target in acute myeloid leukemia. blood 2014 feb. 20; 123: 1229-1238. ren l, campbell a, fang h, gautam s, elavazhagan s, fatehchand k, et al. analysis of the effects of the bruton's tyrosine kinase (btk) inhibitor ibrutinib on monocyte fcgamma receptor (fcgammar) function. j biol chem 2016 feb. 5; 291: 3043-3052. di paolo j a, huang t, balazs m, barbosa j, barck k h, bravo b j, et al. specific btk inhibition suppresses b cell-and myeloid cell-mediated arthritis. nat chem biol 2011 january; 7: 41-50. de jong j, sukbuntherng j, skee d, murphy j, o'brien s, byrd j c, et al. the effect of food on the pharmacokinetics of oral ibrutinib in healthy participants and patients with chronic lymphocytic leukemia. cancer chemother pharmacol 2015 may; 75: 907-916. 
advani r h, buggy j j, sharman j p, smith s m, boyd t e, grant b, et al. bruton tyrosine kinase inhibitor ibrutinib (pci-32765) has significant activity in patients with relapsed/refractory b-cell malignancies. j clin oncol 2013 jan. 1; 31: 88-94. etienne-manneville s, hall a. rho gtpases in cell biology. nature 2002 dec. 12; 420: 629-635. gu y, filippi md, cancelas j a, siefring j e, williams e p, jasti a c, et al. hematopoietic cell regulation by rac1 and rac2 guanosine triphosphatases. science 2003 oct. 17; 302: 445-449. fecteau j f, corral l g, ghia e m, gaidarova s, futalan d, bharati i s, et al. lenalidomide inhibits the proliferation of cll cells via a cereblon/p21(waf1/cip1)-dependent mechanism independent of functional p53. blood 2014 sep. 4; 124: 1637-1644. zhang s, wu c c, fecteau j f, cui b, chen l, zhang l, et al. targeting chronic lymphocytic leukemia cells with a humanized monoclonal antibody specific for cd44. proc natl acad sci usa 2013 apr. 9; 110: 6127-6132. herman s e, mustafa r z, gyamfi j a, pittaluga s, chang s, chang b, et al. ibrutinib inhibits bcr and nf-kappab signaling and reduces tumor proliferation in tissue-resident cells of patients with cll. blood 2014 may 22; 123: 3286-3295. cheng s, ma j, guo a, lu p, leonard j p, coleman m, et al. btk inhibition targets in vivo cll proliferation through its effects on b-cell receptor signaling activity. leukemiab 2014 march; 28: 649-657. nishita m, itsukushima s, nomachi a, endo m, wang z, inaba d, et al. ror2/frizzled complex mediates wnt5a-induced ap-1 activation by regulating dishevelled polymerization. mol cell biol 2010 july; 30: 3610-3619. naskar d, maiti g, chakraborty a, roy a, chattopadhyay d, sen m. wnt5a-rac1-nf-kappab homeostatic circuitry sustains innate immune functions in macrophages. j immunol 2014 may 1; 192: 4386-4397. zhu y, shen t, liu j, zheng j, zhang y, xu r, et al. rab35 is required for wnt5a/dv12-induced rac1 activation and cell migration in mcf-7 breast cancer cells. 
cell signal 2013 may; 25: 1075-1085. velaithan r, kang j, hirpara j l, loh t, goh b c, le bras m, et al. the small gtpase rac1 is a novel binding partner of bcl-2 and stabilizes its antiapoptotic activity. blood 2011 jun. 9; 117: 6214-6226. roberts a w, seymour j f, brown j r, wierda w g, kipps t j, khaw s l, et al. substantial susceptibility of chronic lymphocytic leukemia to bcl2 inhibition: results of a phase i study of navitoclax in patients with relapsed or refractory disease. j clin oncol 2012 feb. 10; 30: 488-496. mizukawa b, wei j, shrestha m, wunderlich m, chou f s, griesinger a, et al. inhibition of rac gtpase signaling and downstream prosurvival bcl-2 proteins as combination targeted therapy in mll-af9 leukemia. blood 2011 nov. 10; 118: 5235-5245. bosco e e, ni w, wang l, guo f, johnson j f, zheng y. rac1 targeting suppresses p53 deficiency-mediated lymphomagenesis. blood 2010 apr. 22; 115: 3320-3328. pascutti m f, jak m, tromp j m, derks i a, remmerswaal e b, thij ssen r, et al. il-21 and cd40l signals from autologous t cells can induce antigen-independent proliferation of cll cells. blood 2013 oct. 24; 122: 3010-3019. guo a, lu p, galanina n, nabhan c, smith s m, coleman m, et al. heightened btk-dependent cell proliferation in unmutated chronic lymphocytic leukemia confers increased sensitivity to ibrutinib. oncotarget 2016 jan. 26; 7: 4598-4610. janovska p, poppova l, plevova k, plesingerova h, behal m, kaucka m, et al. autocrine signaling by wnt-5a deregulates chemotaxis of leukemic cells and predicts clinical outcome in chronic lymphocytic leukemia. clin cancer res 2016 jan. 15; 22: 459-469. o'hayre m, salanga c l, kipps t j, messmer d, dorrestein p c, handel t m. elucidating the cxcl12/cxcr4 signaling network in chronic lymphocytic leukemia through phosphoproteomics analysis. plos one 2010; 5: e11716. woodcock j, griffin j p, behrman r e. development of novel combination therapies. n engl j med 2011 mar. 17; 364: 985-987. 
da roit f, engelberts p j, taylor r p, breij e c, gritti g, rambaldi a, et al. ibrutinib interferes with the cell-mediated anti-tumor activities of therapeutic cd20 antibodies: implications for combination therapy. haematologica 2015 january; 100: 77-86. skarzynski m, niemann c u, lee y s, martyr s, maric i, salem d, et al. interactions between ibrutinib and anti-cd20 antibodies: competing effects on the outcome of combination therapy. clin cancer res 2016 jan. 1; 22: 86-95. woyach j a, furman r r, liu t m, ozer h g, zapatka m, ruppert a s, et al. resistance mechanisms for the bruton's tyrosine kinase inhibitor ibrutinib. n engl j med 2014 jun. 12; 370: 2286-2294. wu j, zhang m, liu d. acalabrutinib (acp-196): a selective second-generation btk inhibitor. j hematol oncol 2016; 9: 21. byrd j c, harrington b, o'brien s, jones j a, schuh a, devereux s, et al. acalabrutinib (acp-196) in relapsed chronic lymphocytic leukemia. n engl j med 2016 jan. 28; 374: 323-332. zhang b, chernoff j, zheng y. interaction of rac 1 with gtpase-activating proteins and putative effectors. a comparison with cdc42 and rhoa. j biol chem 1998 apr. 10; 273: 8776-8782. yuan y, shen h, franklin d s, scadden d t, cheng t. in vivo self-renewing divisions of haematopoietic stem cells are increased in the absence of the early g1-phase inhibitor, p18ink4c. nat cell biol 2004 may; 6: 436-442. shen h, yu h, liang p h, cheng h, xufeng r, yuan y, et al. an acute negative bystander effect of gamma-irradiated recipients on transplanted hematopoietic stem cells. blood 2012 apr. 12; 119: 3629-3637. honigberg l a, smith a m, sirisawad m, verner e, loury d, chang b, et al. the bruton tyrosine kinase inhibitor pci-32765 blocks b-cell activation and is efficacious in models of autoimmune disease and b-cell malignancy. proc natl acad sci usa 2010 jul. 20; 107: 13075-13080. di paolo j a, huang t, balazs m, barbosa j, barck k h, bravo b j, et al. 
specific btk inhibition suppresses b cell-and myeloid cell-mediated arthritis. nat chem biol 2011 january; 7: 41-50. zhang b, chernoff j, zheng y. interaction of rac 1 with gtpase-activating proteins and putative effectors. a comparison with cdc42 and rhoa. j biol chem 1998 apr. 10; 273: 8776-8782. yu j, chen l, cui b, widhopf g f, 2nd, shen z, wu r, et al. wnt5a induces ror-1/ror2 heterooligomerization to enhance leukemia chemotaxis and proliferation. j clin invest 2015 dec. 21. fecteau j f, corral l g, ghia e m, gaidarova s, futalan d, bharati i s, et al. lenalidomide inhibits the proliferation of cll cells via a cereblon/p21(waf1/cip1)-dependent mechanism independent of functional p53. blood 2014 sep. 4; 124: 1637-1644. yuan y, shen h, franklin d s, scadden d t, cheng t. in vivo self-renewing divisions of haematopoietic stem cells are increased in the absence of the early g1-phase inhibitor, p18ink4c. nat cell biol 2004 may; 6: 436-442. shen h, yu h, liang p h, cheng h, xufeng r, yuan y, et al. an acute negative bystander effect of gamma-irradiated recipients on transplanted hematopoietic stem cells. blood 2012 apr. 12; 119: 3629-3637. castillo r, mascarenhas j, telford w, chadburn a, friedman s m, schattner e j. proliferative response of mantle cell lymphoma cells stimulated by cd40 ligation and il-4. leukemia 2000 february; 14(2): 292-298. visser h p, tewis m, willemze r, kluin-nelemans j c. mantle cell lymphoma proliferates upon il-10 in the cd40 system. leukemia 2000 august; 14(8): 1483-1489. byrd j c, furman r r, coutre s e, flinn i w, burger j a, blum k a, et al. targeting btk with ibrutinib in relapsed chronic lymphocytic leukemia. n engl j med 2013 jul. 4; 369(1): 32-42. de rooij m f, kuil a, geest c r, eldering e, chang b y, buggy j j, et al. the clinically active btk inhibitor pci-32765 targets b-cell receptor-and chemokine-controlled adhesion and migration in chronic lymphocytic leukemia. blood 2012 mar. 15; 119(11): 2590-2594. 
chang b y, francesco m, de rooij m f, magadala p, steggerda s m, huang m m, et al. egress of cd19(+)cd5(+) cells into peripheral blood following treatment with the bruton tyrosine kinase inhibitor ibrutinib in mantle cell lymphoma patients. blood 2013 oct. 3; 122(14): 2412-2424. spaargaren m, de rooij m f, kater a p, eldering e. btk inhibitors in chronic lymphocytic leukemia: a glimpse to the future. oncogene 2015 may 7; 34(19): 2426-2436. wang m l, rule s, martin p, goy a, auer r, kahl b s, et al. targeting btk with ibrutinib in relapsed or refractory mantle-cell lymphoma. n engl j med 2013 aug. 8; 369(6): 507-516. woyach j a, furman r r, liu t m, ozer h g, zapatka m, ruppert a s, et al. resistance mechanisms for the bruton's tyrosine kinase inhibitor ibrutinib. n engl j med 2014 jun. 12; 370(24): 2286-2294. byrd j c, brown j r, o'brien s, barrientos j c, kay n e, reddy n m, et al. ibrutinib versus ofatumumab in previously treated chronic lymphoid leukemia. n engl j med 2014 jul. 17; 371(3): 213-223. fukuda t, chen l, endo t, tang l, lu d, castro j e, et al. antisera induced by infusions of autologous ad-cd154-leukemia b cells identify ror-1 as an oncofetal antigen and receptor for wnt5a. proc natl acad sci usa 2008 feb. 26; 105(8): 3047-3052. hofbauer s w, krenn p w, ganghammer s, asslaber d, pichler u, oberascher k, et al. tiam1/rac1 signals contribute to the proliferation and chemoresistance, but not motility, of chronic lymphocytic leukemia cells. blood 2014 apr. 3; 123(14): 2181-2188. kaucka m, plevova k, pavlova s, jan.ovska p, mishra a, verner j, et al. the planar cell polarity pathway drives pathogenesis of chronic lymphocytic leukemia by the regulation of b-lymphocyte migration. cancer res 2013 mar. 1; 73(5): 1491-1501. yu j, chen l, cui b, widhopf g f, 2nd, shen z, wu r, et al. wnt5a induces ror-1/ror2 heterooligomerization to enhance leukemia chemotaxis and proliferation. j clin invest 2015 dec. 21. yu j, chen l, cui b, wu c, choi m y, chen y, et al. 
informal sequence listing
99961.1 cdr h1 (seq id no: 1): gyaftayd
99961.1 cdr h2 (seq id no: 2): fdpydggs
99961.1 cdr h3 (seq id no: 3): gwyyfdy
99961.1 cdr l1 (seq id no: 4): ksisky
99961.1 cdr l2 (seq id no: 5): sgs
99961.1 cdr l3 (seq id no: 6): qqhdespy
d10 cdr h1 (seq id no: 7): gfsltsyg
d10 cdr h2 (seq id no: 8): iwaggft
d10 cdr h3 (seq id no: 9): rgssysmdy
d10 cdr l1 (seq id no: 10): snvsy
d10 cdr l2 (seq id no: 11): eis
d10 cdr l3 (seq id no: 12): qqwnyplit
full-length human ror-1 protein (seq id no: 13): meirprrrgtrppllallaalllaargaaaqetelsvsaelvptsswnisselnkdsyltldepmnnittslgqtaelhckvsgnppptirwfkndapvvqeprrlsfrstiygsrlrirnldttdtgyfqcvatngkevvsstgvlfvkfgppptaspgysdeyeedgfcqpyrgiacarfignrtvymeslhmqgeienqitaaftmigtsshlsdkcsqfaipslchyafpycdetssvpkprdlcrdeceilenvlcqteyifarsnpmilmrlklpncedlpqpespeaancirigipmadpinknhkcynstgvdyrgtvsvtksgrqcqpwnsqyphthtftalrfpelngghsycrnpgnqkeapwcftldenfksdlcdipacdskdskeknkmeilyilvpsvaiplaiallffficvcrnnqksssapvqrqpkhvrgqnvemsmlnaykpkskakelplsavrfmeelgecafgkiykghlylpgmdhaqlvaiktlkdynnpqqwmefqqeaslmaelhhpnivcllgavtqeqpvcmlfeyinqgdlheflimrsphsdvgcssdedgtvkssldhgdflhiaiqiaagmeylsshffvhkdlaarniligeqlhvkisdlglsreiysadyyrvqsksllpirwmppeaimygkfssdsdiwsfgvvlweifsfglqpyygfsnqeviemvrkrqllpcsedcpprmyslmtecwneipsrrprfkdihvrlrsweglsshtssttpsggnattqttslsaspvsnlsnprypnymfpsqgitpqgqiagfigppipqnqrfipingypippgyaafpaahyqptgpprviqhcpppksrspssasgststghvtslpssgsnqeanipllphmsipnhpggmgitvfgnksqkpykidskqasllgdanieghtesmisael
21 amino acid stretch of human ror-1 including glutamic acid at position 138 (seq id no: 14): vatngkevvsstgvlfvkfgp
15 amino acid stretch of human ror-1 including glutamic acid at position 138 (seq id no: 15): evvsstgvlfvkfgp
p embodiments
embodiment p1. 
a method of treating cancer in a subject in need thereof, said method comprising administering to said subject a therapeutically effective amount of a bruton's tyrosine kinase (btk) antagonist and an anti-ror-1 antibody. embodiment p2. the method according to embodiment p1, wherein said btk antagonist is cal101, r406 or ibrutinib. embodiment p3. the method according to one of embodiments p1-p2, wherein said btk antagonist is ibrutinib. embodiment p4. the method according to one of embodiments p1-p3, wherein said anti-ror-1 antibody is cirmtuzumab. embodiment p5. the method according to one of embodiments p1-p4, wherein said btk antagonist and anti-ror-1 antibody are administered in a combined synergistic amount. embodiment p6. the method according to one of embodiments p1-p5, wherein said btk antagonist and anti-ror-1 antibody are administered simultaneously or sequentially. embodiment p7. the method according to one of embodiments p1-p6, wherein said cancer is a lymphoma or an adenocarcinoma. embodiment p8. the method according to one of embodiments p1-p7, wherein said lymphoma is chronic lymphocytic leukemia, small lymphocytic lymphoma, marginal cell b-cell lymphoma, or burkitt's lymphoma. embodiment p9. the method according to one of embodiments p1-p8, wherein said adenocarcinoma is colon adenocarcinoma or breast adenocarcinoma. embodiment p10. a pharmaceutical composition comprising a bruton's tyrosine kinase (btk) antagonist, an anti-ror-1 antibody and a pharmaceutically acceptable excipient, wherein said btk antagonist and said anti-ror-1 antibody are present in a combined synergistic amount, wherein said combined synergistic amount is effective to treat cancer in a subject in need thereof. embodiments embodiment 1. 
a method of treating cancer in a subject in need thereof, said method comprising administering to said subject a therapeutically effective amount of a bruton's tyrosine kinase (btk) antagonist and a receptor tyrosine kinase-like orphan receptor 1 (ror-1) antagonist. embodiment 2. the method of embodiment 1, wherein said btk antagonist is a small molecule. embodiment 3. the method of embodiment 1 or 2, wherein said btk antagonist is ibrutinib, idelalisib, fostamatinib, acalabrutinib, ono/gs-4059, bgb-3111 or cc-292 (avl-292). embodiment 4. the method of one of embodiments 1-3, wherein said btk antagonist is ibrutinib. embodiment 5. the method of one of embodiments 1-4, wherein said ror-1 antagonist is an antibody or a small molecule. embodiment 6. the method of one of embodiments 1-5, wherein said ror-1 antagonist is an anti-ror-1 antibody. embodiment 7. the method of one of embodiments 5-6, wherein said antibody comprises a humanized heavy chain variable region and a humanized light chain variable region, wherein said humanized heavy chain variable region comprises the sequences set forth in seq id no:1, seq id no:2, and seq id no:3; and wherein said humanized light chain variable region comprises the sequences set forth in seq id no:4, seq id no:5, and seq id no:6. embodiment 8. the method of one of embodiments 3-7, wherein said antibody is cirmtuzumab. embodiment 9. the method of one of embodiments 5-6, wherein said antibody comprises a humanized heavy chain variable region and a humanized light chain variable region, wherein said humanized heavy chain variable region comprises the sequences set forth in seq id no:7, seq id no:8, and seq id no:9; and wherein said humanized light chain variable region comprises the sequences set forth in seq id no:10, seq id no:11, and seq id no:12. embodiment 10. the method of one of embodiments 1-9, wherein said btk antagonist and said ror-1 antagonist are administered in a combined synergistic amount. embodiment 11. 
the method of one of embodiments 1-10, wherein said btk antagonist and said ror-1 antagonist are administered simultaneously or sequentially. embodiment 12. the method of one of embodiments 1-11, wherein said ror-1 antagonist is administered at a first time point and said btk antagonist is administered at a second time point, wherein said first time point precedes said second time point. embodiment 13. the method of one of embodiments 1-12, wherein said btk antagonist and said ror-1 antagonist are admixed prior to administration. embodiment 14. the method of one of embodiments 1-13, wherein said btk antagonist is administered at an amount of about 1 mg/kg, 2 mg/kg, 5 mg/kg, 15 mg/kg or 10 mg/kg. embodiment 15. the method of one of embodiments 1-14, wherein said btk antagonist is administered at an amount of about 5 mg/kg. embodiment 16. the method of one of embodiments 1-14, wherein said btk antagonist is administered at an amount of about 420 mg. embodiment 17. the method of one of embodiments 1-16, wherein said ror-1 antagonist is administered at an amount of about 1 mg/kg, 2 mg/kg, 3 mg/kg, 5 mg/kg or 10 mg/kg. embodiment 18. the method of one of embodiments 1-17, wherein said ror-1 antagonist is administered at an amount of about 2 mg/kg. embodiment 19. the method of one of embodiments 1-15 or 17-18, wherein said btk antagonist is administered at an amount of about 5 mg/kg and said ror-1 antagonist is administered at about 2 mg/kg. embodiment 20. the method of one of embodiments 1-15 or 17, wherein said btk antagonist is administered at an amount of about 5 mg/kg and said ror-1 antagonist is administered at about 1 mg/kg. embodiment 21. the method of one of embodiments 1-20, wherein said btk antagonist is administered daily over the course of at least 14 days. embodiment 22. the method of one of embodiments 1-21, wherein said btk antagonist is administered daily over the course of about 28 days. embodiment 23. 
the method of one of embodiments 1-22, wherein said ror-1 antagonist is administered once over the course of about 28 days. embodiment 24. the method of one of embodiments 1-23, wherein said btk antagonist is administered intravenously. embodiment 25. the method of one of embodiments 1-24, wherein said ror-1 antagonist is administered intravenously. embodiment 26. the method of one of embodiments 1-25, wherein said subject is a mammal. embodiment 27. the method of one of embodiments 1-26, wherein said subject is a human. embodiment 28. the method of one of embodiments 1-27, wherein said cancer is lymphoma, leukemia, myeloma, aml, b-all, t-all, renal cell carcinoma, colon cancer, colorectal cancer, breast cancer, epithelial squamous cell cancer, melanoma, stomach cancer, brain cancer, lung cancer, pancreatic cancer, cervical cancer, ovarian cancer, liver cancer, bladder cancer, prostate cancer, testicular cancer, thyroid cancer, head and neck cancer, uterine cancer, adenocarcinoma, or adrenal cancer. embodiment 29. the method of one of embodiments 1-28, wherein said cancer is chronic lymphocytic leukemia (cll), small lymphocytic lymphoma, marginal cell b-cell lymphoma, burkitt's lymphoma, or b cell leukemia. embodiment 30. a pharmaceutical composition comprising a btk antagonist, a ror-1 antagonist and a pharmaceutically acceptable excipient. embodiment 31. a pharmaceutical composition comprising a btk antagonist, an anti-ror-1 antibody and a pharmaceutically acceptable excipient, wherein said btk antagonist and said anti-ror-1 antibody are present in a combined synergistic amount, wherein said combined synergistic amount is effective to treat cancer in a subject in need thereof. embodiment 32. the pharmaceutical composition of embodiment 30 or 31, wherein said btk antagonist is a small molecule. embodiment 33. 
the pharmaceutical composition of one of embodiments 30-32, wherein said btk antagonist is ibrutinib, idelalisib, fostamatinib, acalabrutinib, ono/gs-4059, bgb-3111 or cc-292 (avl-292). embodiment 34. the pharmaceutical composition of one of embodiments 30-33, wherein said btk antagonist is ibrutinib. embodiment 35. the pharmaceutical composition of one of embodiments 30-34, wherein said ror-1 antagonist is an antibody or a small molecule. embodiment 36. the pharmaceutical composition of one of embodiments 30-35, wherein said ror-1 antagonist is an anti-ror-1 antibody. embodiment 37. the pharmaceutical composition of embodiment 35 or 36, wherein said antibody comprises a humanized heavy chain variable region and a humanized light chain variable region, wherein said humanized heavy chain variable region comprises the sequences set forth in seq id no:1, seq id no:2, and seq id no:3; and wherein said humanized light chain variable region comprises the sequences set forth in seq id no:4, seq id no:5, and seq id no:6. embodiment 38. the pharmaceutical composition of one of embodiments 35-37, wherein said antibody is cirmtuzumab. embodiment 39. the pharmaceutical composition of embodiment 35 or 36, wherein said antibody comprises a humanized heavy chain variable region and a humanized light chain variable region, wherein said humanized heavy chain variable region comprises the sequences set forth in seq id no:7, seq id no:8, and seq id no:9; and wherein said humanized light chain variable region comprises the sequences set forth in seq id no:10, seq id no:11, and seq id no:12.
165-855-856-083-868
US
[ "US", "CN", "DE" ]
H02K9/193,B60K1/00,B60K11/02,H02K3/24,H02K5/20,H02K5/04,H02K7/116,H02K9/19
2020-07-27T00:00:00
2020
[ "H02", "B60" ]
end covers configured to direct fluid for thermal management of electric machine for electrified vehicle
this disclosure relates to thermal management for an electric machine, such as an electric motor, of an electrified vehicle. an example assembly includes a stator having a core and a jacket at least partially surrounding the core. the jacket radially encloses a slot and is configured to permit fluid to flow within the slot from a first face of the stator to a second face of the stator. the assembly further includes a first end cover covering the first face of the stator. the first end cover has an inlet port and is configured to direct fluid from the inlet into the slot. further, the assembly includes a second end cover covering the second face of the stator and configured to direct fluid that exits the slot.
1. an assembly for an electric machine of an electrified vehicle, comprising: a stator including a core and a jacket at least partially encapsulating the core, wherein the jacket radially encloses a slot and is configured to permit fluid to flow within the slot from a first face of the stator to a second face of the stator, wherein the jacket includes a channel, and wherein the channel is on a radially opposite side of the core as the slot; a first end cover covering the first face of the stator, wherein the first end cover includes an inlet port and is configured to direct fluid from the inlet port into the slot; and a second end cover covering the second face of the stator and configured to direct fluid that exits the slot, wherein the first end cover includes a divider, wherein the inlet port is sized to permit fluid entering the first end cover to flow on opposite radial sides of the divider, wherein fluid entering the first end cover on a radially inner side of the divider flows toward the second end cover via the slot, and wherein fluid entering the first end cover on a radially outer side of the divider flows toward the second end cover via the channel. 2. the assembly as recited in claim 1 , wherein the jacket is made of epoxy and the core is made of iron. 3. the assembly as recited in claim 1 , wherein the second end cover is configured to direct fluid exiting the slot into the channel. 4. the assembly as recited in claim 1 , wherein: the first end cover includes an outlet port, and the first end cover includes a divider radially between the inlet port and the outlet port. 5. the assembly as recited in claim 1 , wherein: the inlet port is configured to direct fluid into the channel and the slot, and the second end cover includes an outlet port in fluid communication with fluid exiting the channel and the slot. 6. the assembly as recited in claim 1 , wherein the slot is one of a plurality of slots. 7. 
the assembly as recited in claim 1 , wherein coil windings are arranged in the slot. 8. the assembly as recited in claim 1 , wherein: the first end cover includes a projection and the first face of the stator includes a recess receiving the projection of the first end cover, and the second end cover includes a projection and the second face of the stator includes a recess receiving the projection of the second end cover. 9. the assembly as recited in claim 1 , further comprising a rotor configured to rotate within the stator. 10. the assembly as recited in claim 1 , wherein the electric machine is an electric motor. 11. the assembly as recited in claim 1 , wherein the jacket is formed over the core by a molding process such that the jacket and core provide an integrated structure following the molding process. 12. the assembly as recited in claim 1 , wherein the portion of the jacket radially enclosing the slot is integrally connected to the portion of the jacket forming the channel. 13. 
an assembly for an electric machine of an electrified vehicle, comprising: a stator including a core and a jacket at least partially surrounding the core, wherein the jacket radially encloses a slot and is configured to permit fluid to flow within the slot from a first face of the stator to a second face of the stator; a first end cover covering the first face of the stator, wherein the first end cover includes an inlet port and is configured to direct fluid from the inlet port into the slot; and a second end cover covering the second face of the stator and configured to direct fluid that exits the slot, wherein the jacket includes a channel, and wherein the channel is on a radially opposite side of the stator as the slot, wherein the channel is one of at least two channels, wherein the inlet port is configured to direct fluid into the slot and a first channel of the at least two channels, wherein the second end cover is configured to direct fluid exiting the slot and the first channel into a second channel of the at least two channels, and wherein the first end cover includes an outlet port in fluid communication with the second channel. 14. the assembly as recited in claim 13, wherein the at least two channels includes three channels circumferentially spaced-apart from one another by mounting tabs of the stator. 15. 
an assembly for an electric machine of an electrified vehicle, comprising: a stator including a core and a jacket at least partially encapsulating the core, wherein the jacket radially encloses a slot and is configured to permit fluid to flow within the slot from a first face of the stator to a second face of the stator, wherein the jacket includes a channel, and wherein the channel is on a radially opposite side of the core as the slot; a first end cover covering the first face of the stator, wherein the first end cover includes an inlet port and is configured to direct fluid from the inlet port into the slot; and a second end cover covering the second face of the stator and configured to direct fluid that exits the slot, wherein the first end cover includes a divider, wherein the inlet port is sized to permit fluid entering the first end cover to flow on opposite radial sides of the divider, and wherein the inlet port is bisected by the divider.
technical field this disclosure relates to thermal management for an electric machine, such as an electric motor, of an electrified vehicle. background the need to reduce fuel consumption and emissions in vehicles is well known. therefore, vehicles are being developed that reduce or completely eliminate reliance on internal combustion engines. electrified vehicles are one type of vehicle being developed for this purpose. in general, electrified vehicles differ from conventional motor vehicles because they are selectively driven by one or more battery powered electric machines. the electric machines may need to be thermally managed (i.e., heated or cooled) during operation of the electrified vehicle. summary an assembly for an electric machine of an electrified vehicle according to an exemplary aspect of the present disclosure includes, among other things, a stator including a core and a jacket at least partially surrounding the core. the jacket radially encloses a slot and is configured to permit fluid to flow within the slot from a first face of the stator to a second face of the stator. the assembly further includes a first end cover covering the first face of the stator. the first end cover includes an inlet port and is configured to direct fluid from the inlet into the slot. further, the assembly includes a second end cover covering the second face of the stator and configured to direct fluid that exits the slot. in a further non-limiting embodiment of the foregoing assembly, the jacket is made of epoxy and the core is made of iron. in a further non-limiting embodiment of any of the foregoing assemblies, the jacket includes a channel, and the channel is on a radially opposite side of the stator as the slot. in a further non-limiting embodiment of any of the foregoing assemblies, the second end cover is configured to direct fluid exiting the slot into the channel. 
in a further non-limiting embodiment of any of the foregoing assemblies, the first end cover includes an outlet port, and the first end cover includes a divider radially between the inlet port and the outlet port. in a further non-limiting embodiment of any of the foregoing assemblies, the inlet port is configured to direct fluid into the channel and the slot, and the second end cover includes an outlet port in fluid communication with fluid exiting the channel and the slot. in a further non-limiting embodiment of any of the foregoing assemblies, the channel is one of at least two channels, the inlet port is configured to direct fluid into the slot and a first channel of the at least two channels, the second end cover is configured to direct fluid exiting the slot and the first channel into a second channel of the at least two channels, and the first end cover includes an outlet port in fluid communication with the second channel. in a further non-limiting embodiment of any of the foregoing assemblies, the at least two channels includes three channels circumferentially spaced-apart from one another by mounting tabs of the stator. in a further non-limiting embodiment of any of the foregoing assemblies, the first end cover includes a divider, and the inlet is sized to permit fluid entering the first end cover to flow on opposite radial sides of the divider. in a further non-limiting embodiment of any of the foregoing assemblies, fluid entering the first end cover on a radially inner side of the divider flows toward the second end cover via the slot, and fluid entering the first end cover on a radially outer side of the divider flows toward the second end cover via the channel. in a further non-limiting embodiment of any of the foregoing assemblies, the inlet is bisected by the divider. in a further non-limiting embodiment of any of the foregoing assemblies, the slot is one of a plurality of slots. 
in a further non-limiting embodiment of any of the foregoing assemblies, coil windings are arranged in the slot. in a further non-limiting embodiment of any of the foregoing assemblies, the first end cover includes a projection and the first face of the stator includes a recess receiving the projection of the first end cover, and the second end cover includes a projection and the second face of the stator includes a recess receiving the projection of the second end cover. in a further non-limiting embodiment of any of the foregoing assemblies, the assembly includes a rotor configured to rotate within the stator. in a further non-limiting embodiment of any of the foregoing assemblies, the electric machine is an electric motor. a method according to an exemplary aspect of the present disclosure includes, among other things, directing fluid from a first end cover of a stator toward a second end cover of the stator through a slot formed in a jacket covering a core of the stator. in a further non-limiting embodiment of the foregoing method, the method includes using the second end cover to direct fluid exiting the slot toward the first end cover through a channel in the jacket. in a further non-limiting embodiment of any of the foregoing methods, the slot and channel are arranged on opposite radial sides of the stator. in a further non-limiting embodiment of any of the foregoing methods, the method includes directing fluid from the first end cover to the second end cover through the slot and through a channel in the jacket. brief description of the drawings fig. 1 schematically illustrates an example powertrain of an electrified vehicle. fig. 2 schematically illustrates a portion of an example electric machine in cross-section. fig. 3 is an exploded view of an example assembly configured to thermally manage the electric machine. fig. 4 is an assembled view of the assembly of fig. 3 . fig. 5 is an end view of a core of a stator. fig.
6 is an end view of the stator with a jacket covering the core. fig. 7 is a close-up view of a slot of the stator. fig. 8a is an inner end view of a first embodiment of a first end cover. fig. 8b is an end view of the stator. fig. 8c is an inner end view of a first embodiment of a second end cover. fig. 9a is an inner end view of a second embodiment of the first end cover. fig. 9b is an end view of the stator. fig. 9c is an inner end view of a second embodiment of the second end cover. fig. 10a is an inner end view of a third embodiment of the first end cover. fig. 10b is an end view of the stator. fig. 10c is an inner end view of a third embodiment of the second end cover. fig. 11a is an inner end view of a fourth embodiment of the first end cover. fig. 11b is an end view of the stator. fig. 11c is an inner end view of a fourth embodiment of the second end cover. fig. 12a is a cross-sectional view illustrating a first example interface between the first end cover and the stator. fig. 12b is a cross-sectional view illustrating a second example interface between the first end cover and the stator. detailed description this disclosure relates to thermal management for an electric machine, such as an electric motor, of an electrified vehicle. an example assembly includes a stator having a core and a jacket at least partially surrounding the core. the jacket radially encloses a slot and is configured to permit fluid to flow within the slot from a first face of the stator to a second face of the stator. the assembly further includes a first end cover covering the first face of the stator. the first end cover has an inlet port and is configured to direct fluid from the inlet into the slot. further, the assembly includes a second end cover covering the second face of the stator and configured to direct fluid that exits the slot. this disclosure has a number of other benefits which will be appreciated from the following description. 
among them, this disclosure directs fluid (i.e., coolant) on radially inner and outer sides of a stator, which enhances heat transfer. the assembly of this disclosure is also relatively easily manufactured. fig. 1 schematically illustrates an example powertrain 10 for an electrified vehicle 12 (“vehicle 12 ”), which in this example is a hybrid electric vehicle (hev). the powertrain 10 may be referred to as a hybrid transmission. although depicted as an hev, it should be understood that the concepts described herein are not limited to hevs and could extend to other electrified vehicles, including, but not limited to, plug-in hybrid electric vehicles (phevs), and battery electric vehicles (bevs). this disclosure also extends to various types of hybrid vehicles including full hybrids, parallel hybrids, series hybrids, mild hybrids, micro hybrids, and plug-in hybrids. further, the vehicle 12 is depicted schematically in fig. 1 , but it should be understood that this disclosure is not limited to any particular type of vehicle, and extends to cars, trucks, sport utility vehicles (suvs), vans, etc. in the embodiment of fig. 1 , the powertrain 10 is a power-split transmission that employs a first drive system and a second drive system. the first drive system includes a combination of an engine 14 and a generator 18 (i.e., a first electric machine). the second drive system includes at least a motor 22 (i.e., a second electric machine) and a battery assembly 24 (which may be referred to simply as a “battery”). in this example, the second drive system is considered an electric drive system of the powertrain 10 . the first and second drive systems generate torque to drive one or more sets of vehicle drive wheels 28 of the electrified vehicle 12 . although a power-split configuration is shown, this disclosure extends to any hybrid or electric vehicle including full hybrids, parallel hybrids, series hybrids, mild hybrids or micro hybrids. 
the engine 14 , which in one embodiment is an internal combustion engine, and the generator 18 may be connected through a power transfer unit 30 , such as a planetary gear set. of course, other types of power transfer units, including other gear sets and transmissions, may be used to connect the engine 14 to the generator 18 . in one non-limiting embodiment, the power transfer unit 30 is a planetary gear set that includes a ring gear 32 , a sun gear 34 , and a carrier assembly 36 . the generator 18 can be driven by the engine 14 through the power transfer unit 30 to convert kinetic energy to electrical energy. the generator 18 can alternatively function as a motor to convert electrical energy into kinetic energy, thereby outputting torque to a shaft 38 connected to the power transfer unit 30 . because the generator 18 is operatively connected to the engine 14 , the speed of the engine 14 can be controlled by the generator 18 . the ring gear 32 of the power transfer unit 30 may be connected to a shaft 40 , which is connected to vehicle drive wheels 28 through a second power transfer unit 44 . the second power transfer unit 44 may include a gear set having a plurality of gears 46 . other power transfer units may also be suitable. the gears 46 transfer torque from the engine 14 to a differential 48 to ultimately provide traction to the vehicle drive wheels 28 . the differential 48 may include a plurality of gears that enable the transfer of torque to the vehicle drive wheels 28 . in one embodiment, the second power transfer unit 44 is mechanically coupled to an axle 50 through the differential 48 to distribute torque to the vehicle drive wheels 28 . the motor 22 can also be employed to drive the vehicle drive wheels 28 by outputting torque to a shaft 52 that is also connected to the second power transfer unit 44 . 
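the speed coupling between the engine 14 , the generator 18 , and the output described above follows the standard willis relation for a simple planetary gear set. the sketch below is an editorial illustration, not part of the disclosure: it assumes the common power-split assignment (engine 14 on the carrier assembly 36 , generator 18 on the sun gear 34 , ring gear 32 to the output), and the tooth counts are hypothetical.

```python
def generator_speed(engine_rpm, ring_rpm, z_sun=30, z_ring=78):
    """Willis equation for a simple planetary gear set:

        z_sun * w_sun + z_ring * w_ring = (z_sun + z_ring) * w_carrier

    Solved here for the sun-gear (generator) speed, assuming the engine
    drives the carrier and the ring gear is the output. Tooth counts
    z_sun and z_ring are hypothetical, not from the disclosure.
    """
    return ((z_sun + z_ring) * engine_rpm - z_ring * ring_rpm) / z_sun

# Vehicle stationary (ring gear held at 0 rpm): the generator absorbs
# all of the engine's rotation.
print(generator_speed(1800, 0))  # 6480.0 rpm
```

under these assumptions, commanding a generator load fixes the sun-gear speed for a given ring-gear (vehicle) speed, which is consistent with the statement above that the speed of the engine 14 can be controlled by the generator 18 .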
in one embodiment, the motor 22 and the generator 18 cooperate as part of a regenerative braking system in which both the motor 22 and the generator 18 can be employed as motors to output torque. for example, the motor 22 and the generator 18 can each output electrical power to the battery assembly 24 . the battery assembly 24 is an example type of electrified vehicle battery. the battery assembly 24 may include a high voltage traction battery pack that includes a plurality of battery cells capable of outputting electrical power to operate the motor 22 and the generator 18 . other types of energy storage devices and/or output devices can also be used to electrically power the electrified vehicle 12 . in one non-limiting embodiment, the vehicle 12 has two basic operating modes. the vehicle 12 may operate in an electric vehicle (ev) mode where the motor 22 is used (generally without assistance from the engine 14 ) for vehicle propulsion, thereby depleting the battery assembly 24 state of charge up to its maximum allowable discharging rate under certain driving patterns/cycles. the ev mode is an example of a charge depleting mode of operation for the vehicle 12 . during ev mode, the state of charge of the battery assembly 24 may increase in some circumstances, for example due to a period of regenerative braking. the engine 14 is generally off under a default ev mode but could be operated as necessary based on a vehicle system state or as permitted by the operator. the electrified vehicle 12 may additionally operate in a hybrid (hev) mode in which the engine 14 and the motor 22 are both used for vehicle propulsion. the hev mode is an example of a charge sustaining mode of operation for the electrified vehicle 12 . during the hev mode, the electrified vehicle 12 may reduce the motor 22 propulsion usage in order to maintain the state of charge of the battery assembly 24 at a constant or approximately constant level by increasing the engine 14 propulsion usage. 
the electrified vehicle 12 may be operated in other operating modes in addition to the ev and hev modes within the scope of this disclosure. fig. 2 schematically illustrates a portion of an electric machine 60 in cross-section. the electric machine 60 is representative of any electric machine within the electrified vehicle 12 , such as either the generator 18 or the motor 22 . in fig. 2 , the electric machine 60 includes a rotor 62 received within a stator 64 . the rotor 62 is configured to rotate relative to the stator 64 about a central axis a of the electric machine 60 . the stator 64 is stationary and does not rotate during operation of the electric machine 60 . the rotor 62 is directly connected to a shaft 66 , in this example. if the electric machine 60 is used as a motor, rotation of the rotor 62 produces torque which is delivered elsewhere in the vehicle 12 via the shaft 66 . if the electric machine 60 is used as a generator, rotation of the rotor 62 about the axis a can generate electric power. the rotor 62 could rotate in response to a torque input from regenerative braking, for example. fig. 3 is an exploded view of an assembly 68 for the electric machine 60 . specifically, the assembly 68 could be used with either the generator 18 or the motor 22 , or both. while the assembly 68 is shown and described herein relative to an electric machine, this disclosure may be applicable to other electric devices that would benefit from thermal control. fig. 4 is an assembled view of the assembly 68 . the rotor 62 and shaft 66 are not shown in most drawings of this disclosure for ease of reference; however, they would be arranged as in fig. 2 . the assembly 68 includes some components of the electric machine 60 . thus, a reference to the assembly 68 is also a reference to some components of the electric machine 60 . in particular, the assembly 68 includes the stator 64 . the stator 64 includes a core 70 ( fig. 5 ) and a jacket 72 ( fig.
6 ) at least partially surrounding the core 70 . as will be explained below, the jacket 72 is configured to direct fluid relative to the core 70 to thermally manage the stator 64 , and in turn the electric machine 60 . the assembly 68 further includes a first end cover 74 and a second end cover 76 configured to cover opposing first and second axial end faces 78 , 80 of the stator 64 , respectively. the first and second end covers 74 , 76 are configured to direct fluid f from a fluid source 82 and through the jacket 72 . the fluid source 82 provides the fluid f. example fluids include refrigerants, oil, or water. although shown schematically, one would understand that fluid f is directed from the fluid source 82 to and from the assembly 68 via various conduits or passages such as tubes, hoses, pipes, etc. one would also understand that various valves, pumps, seals, and gaskets could be used to direct the fluid f in a particular manner. in the example of figs. 3 and 4 , the first end cover 74 includes an inlet 84 fluidly coupled to (i.e., in fluid communication with) the fluid source 82 . the first end cover 74 is configured to direct fluid f to flow from the inlet 84 and through the jacket 72 toward the second end cover 76 . the second end cover 76 is configured to direct fluid f that has passed through the jacket 72 . the term direct is used herein to mean to cause to turn or move to follow a specific course. this disclosure includes a number of embodiments in which the first and second end covers 74 , 76 direct the fluid f in different manners. in the example of figs. 3 and 4 , the second end cover 76 is configured to direct fluid f that has exited the jacket 72 such that it flows through the jacket 72 again in an opposite direction toward the first end cover 74 . the fluid f exiting the jacket 72 for a second time is directed out an outlet 86 in the first end cover 74 , in this example. that fluid f then flows to another location, such as a heat exchanger. 
the stator 64 and the first and second end covers 74 , 76 will now be described in more detail. fig. 5 is a view of the core 70 of the stator 64 along the axis a. the core 70 may be made of a metallic material, such as iron or steel. the core 70 includes a radially outer surface 88 from which a plurality of mounting tabs 90 (sometimes called mounting ears) project. mounting tabs are not required in all examples. the stator 64 could be mounted in another manner, such as by using a slot-key mechanism, for example. the mounting tabs 90 are equally spaced apart from one another about the circumference of the core 70 , in this example. the core 70 further includes a plurality of teeth 92 which project radially inward toward the axis a. the teeth 92 are circumferentially spaced-apart from one another and define slots 94 therebetween. the slots 94 are formed adjacent a radially inner-most surface of the stator 64 . the terms “radial” and “circumferential” are used herein with reference to the axis a. fig. 6 is a view of the stator 64 including the jacket 72 . the jacket 72 is made of a polymer material, in one example. in a particular example, the jacket 72 is made of epoxy. the first and second end covers 74 , 76 may be made of the same material as the jacket 72 or a different material. the jacket 72 is formed over the core 70 by molding, in one example. in particular, the jacket 72 is formed over the core 70 using a transfer molding and/or an overmolding process. overmolding is the process of adding material over already-existing pieces or parts using a molding process. the result is an integrated component including the original piece or pieces and the additional material added via the overmolding process. here, the core 70 is placed into a mold and the jacket 72 is molded over the core 70 to form a new, combined structure of the stator 64 . the jacket 72 at least partially encapsulates the core 70 .
in this example, the jacket 72 encapsulates the majority of the core 70 , with the exception of the mounting tabs 90 . before the jacket 72 is added to the core 70 , the slots 94 are radially open, meaning there is a circumferential gap between radially-inner ends of the teeth 92 . the jacket 72 is formed such that the slots 94 are radially enclosed. in fig. 7 , an example slot 94 is shown. the slot 94 is radially enclosed between a radially inner boundary 96 and a radially outer boundary 98 . the radially inner boundary 96 is provided by the jacket 72 , which circumferentially spans the gap 100 (shown in phantom) between adjacent teeth 92 (shown in phantom) of the core 70 . the slot 94 is circumferentially bound by the jacket 72 covering adjacent teeth 92 of the core 70 . the slot 94 extends axially throughout the entire stator 64 , specifically from the first axial end face 78 to the second axial end face 80 . further, in this example, coil windings 102 are received in the slot 94 . gaps exist between the coil windings 102 and the radial and circumferential boundaries of the slot 94 such that fluid can flow through the slot 94 from one axial end of the stator 64 to another, in either axial direction. the slot 94 illustrated in fig. 7 is representative of each of the slots 94 of the stator 64 . the jacket 72 also includes at least one channel configured to permit fluid to flow therethrough. in fig. 6 , there are three channels 104 a- 104 c provided by the jacket 72 . the channels 104 a- 104 c are on a radially opposite side of the stator 64 as the slots 94 , and in particular are formed outward of the radially outer surface 88 of the core 70 . the channels 104 a- 104 c are circumferentially spaced-apart from one another about the circumference of the stator 64 . in the example of fig. 6 , adjacent channels 104 a- 104 c are spaced-apart from one another by a mounting tab 90 arranged circumferentially therebetween. 
while three channels 104 a- 104 c are shown, this disclosure extends to arrangements with one or more channels. further, if mounting tabs 90 are not present, there may be additional channels, and the channels may be circumferentially closer to one another. while not shown, the channels 104 a- 104 c could include turbulators to induce turbulent flow, thereby leading to enhanced heat transfer. the channels 104 a- 104 c extend axially along the entire stator 64 , specifically from the first axial end face 78 to the second axial end face 80 . taking the channel 104 a as representative, each channel is radially bound by a radially inner boundary 106 , which is a surface of the jacket 72 applied over the radially outer surface 88 of the core 70 , and a radially outer boundary 108 . the radially outer boundary 108 is spaced-apart from the radially inner boundary 106 by radially-extending walls 110 , 112 , which are circumferentially spaced-apart from one another and provide circumferential boundaries of the channel 104 a. fluid is configured to flow through the channels 104 a- 104 c from one axial side of the stator 64 to another in either axial direction. the first and second end covers 74 , 76 are configured to direct fluid relative to the slots 94 and channels 104 a- 104 c in a particular manner in order to thermally manage the stator 64 , and in turn the electric machine 60 . with reference to figs. 8a-8c , the first and second end covers 74 , 76 each include a radially inner wall 113 , 115 and a radially outer wall 114 , 116 projecting axially toward the stator 64 from an axial face 118 , 119 ( fig. 3 ). axial ends of the radially outer walls 114 , 116 directly contact a respective first and second axial end face 78 , 80 to define fluid plenums on opposite axial sides of the stator 64 .
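whether flow in a passage such as the channel 104 a is laminar or turbulent (the condition the turbulators mentioned above are meant to encourage) is conventionally judged by the reynolds number based on the hydraulic diameter of the cross-section. the sketch below uses those standard formulas; the channel dimensions, flow velocity, and water-like coolant properties are purely hypothetical and do not appear in the disclosure.

```python
def hydraulic_diameter(width_m, height_m):
    """D_h = 4*A/P for a rectangular duct cross-section."""
    area = width_m * height_m
    perimeter = 2.0 * (width_m + height_m)
    return 4.0 * area / perimeter

def reynolds(velocity_m_s, d_h_m, density=997.0, viscosity=0.89e-3):
    """Re = rho * V * D_h / mu; defaults approximate water at 25 C."""
    return density * velocity_m_s * d_h_m / viscosity

# Hypothetical 20 mm x 5 mm channel carrying coolant at 0.5 m/s:
d_h = hydraulic_diameter(0.020, 0.005)   # 0.008 m
re = reynolds(0.5, d_h)                  # roughly 4.5e3
print(d_h, re)
```

a reynolds number near or above the commonly cited ~4000 threshold suggests the channel flow is already transitional-to-turbulent; well below it, turbulators would be doing most of the heat-transfer enhancement.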
for instance, a first plenum is defined axially between the axial face 118 of the first end cover 74 and the first axial end face 78 , and is radially bound by the radially inner and outer walls 113 , 114 . a second plenum on the opposite axial side of the stator 64 is defined axially between the axial face 119 of the second end cover 76 and the second end face 80 , and is radially bound by the radially inner and outer walls 115 , 116 . the radially inner walls 113 , 115 of the first and second end covers 74 , 76 are configured to contact the first and second end faces 78 , 80 adjacent the radially inner-most surfaces thereof, namely adjacent the gaps 100 and the radially inner-most ends of the teeth 92 . the radially outer walls 114 , 116 are configured to contact the first and second end faces 78 , 80 at radially-outer surfaces thereof, including adjacent the radially outer surface 88 of the core 70 and adjacent the radial outer boundaries of the channels 104 a- 104 c. in figs. 8a and 8c , the first and second end covers 74 , 76 each include three sections 120 a- 120 c, 122 a- 122 c projecting radially outward from a remainder of the respective first and second end cover 74 , 76 corresponding to the channels 104 a- 104 c. each of the sections 120 a- 120 c, 122 a- 122 c is configured such that the respective radially outer wall 114 , 116 directly contacts the boundaries of one of the channels 104 a- 104 c. either or both of the first and second end covers 74 , 76 may include a divider configured to direct fluid to the slots 94 and/or the channels 104 a- 104 c in a particular manner. in fig. 8a , the first end cover 74 includes a divider 124 radially between the radially inner wall 113 and the radially outer wall 114 . the divider 124 projects axially from the axial end face 118 by the same distance as the radially inner and outer walls 113 , 114 . a free end of the divider 124 contacts the first axial end face 78 .
the inlet 84 and outlet 86 are on opposite radial sides of the divider 124 . in this example, the divider 124 provides a complete radial boundary between the inlet 84 and the section 120 c. the divider 124 does, however, include a circumferential gap 126 circumferentially between the sections 120 a and 120 b , permitting fluid to flow from the inlet 84 to the sections 120 a, 120 b. in the example of figs. 8a-8c , fluid f enters the inlet 84 and is directed through the slots 94 toward the second end cover 76 . fluid f also flows from the inlet 84 , through the gap 126 , and through the channels 104 a, 104 b toward the second end cover 76 . the second end cover 76 directs fluid exiting the slots 94 and channels 104 a, 104 b back toward the first end cover 74 through the channel 104 c via section 122 c, where the fluid f ultimately flows out the outlet 86 . while this disclosure references inlets and outlets, the flows could be reversed. with reference to figs. 8a-8c , fluid f could flow into the outlet 86 , through the channel 104 c, and back through the stator 64 via the channels 104 a, 104 b and the slots 94 , and out the inlet 84 . thus, the terms inlet and outlet are not intended to be limiting in any of the embodiments in this disclosure. figs. 9a-9c illustrate another example embodiment. in this embodiment, the first end cover 74 includes a divider 128 without any circumferential gaps. the divider 128 is configured such that fluid f can flow, within the first end cover 74 , between the sections 120 b and 120 c, but section 120 a is fluidly isolated from the sections 120 b and 120 c. the second end cover 76 may be configured as in fig. 8c or, as shown in fig. 9c , may include a divider 130 with a circumferential gap 132 circumferentially between sections 122 a and 122 b, similar to how the divider 124 is arranged in fig. 8a .
further, in this example, the first end cover includes an inlet 134 which is larger than the inlet 84 , and is radially bisected by the divider 128 . thus, fluid f entering the inlet 134 flows on both radial sides of the divider 128 . in particular, in an example, fluid f entering the inlet 134 flows through the channel 104 a and the slots 94 toward the second end cover 76 . the second end cover 76 then directs the fluid back toward the first end cover 74 through channels 104 b and 104 c, where it flows out the outlet 136 . in another embodiment, in figs. 10a-10c , the first and second end covers 74 , 76 include dividers 138 , 140 without circumferential gaps. further, the first end cover 74 includes an inlet 142 bisected by divider 138 and the second end cover 76 includes an outlet 144 bisected by divider 140 . the sections 120 a- 120 c are fluidly coupled together on the radially outer side of the divider 138 , and the sections 122 a- 122 c are fluidly coupled together on the radially outer side of divider 140 . fluid f entering the inlet 142 on the radially outer side of the divider 138 flows toward the second end cover 76 via the channels 104 a- 104 c, and exits the second end cover 76 via the outlet 144 on the radially outer side of the divider 140 . fluid entering the inlet 142 on the radially inner side of the divider 138 flows toward the second end cover 76 via the slots 94 and exits the second end cover 76 via the outlet 144 on the radially inner side of the divider 140 . in yet another embodiment, in figs. 11a-11c , the first end cover 74 includes a divider 146 without any circumferential gaps. the sections 120 a- 120 c are fluidly coupled together on the radially outer side of the divider 146 . the second end cover 76 includes a divider 148 with two circumferential gaps 150 , 152 to permit fluid f to flow radially across the divider 148 .
in this embodiment, fluid f entering an inlet 154 on a radially inner side of the divider 146 flows toward the second end cover 76 via the slots 94 . the second end cover 76 directs fluid f through the gaps 150 , 152 and through the channels 104 a- 104 c back toward the first end cover 74 , where the fluid f flows out an outlet 156 on the radially outer side of the divider 146 . figs. 12a and 12b are illustrations representative of example interfaces between the first and second end covers 74 , 76 and the stator 64 . only the first end cover 74 is shown in figs. 12a and 12b , however. for instance, in fig. 12a , each of the radially inner wall 113 , the radially outer wall 114 , and the divider 124 includes a free end having a projection resembling a t-shape in cross-section. relative to the radially inner wall 113 , the free end includes a projection 158 and notches 160 , 162 on opposite radial sides of the projection. the notches 160 , 162 directly abut the first axial end face 78 , and the projection 158 is received in a recess 164 in the first axial end face 78 . the interfaces may be arranged differently, and, for example, in fig. 12b the radially inner and outer walls 113 , 114 include projections that resemble an l-shape in cross-section. in particular, relative to the radially inner wall 113 , the free end includes a rim 166 and a notch 168 on a radially outer side thereof. the first axial end face 78 includes a notch 170 in a radially inner corner surface thereof, which receives the rim 166 . the first axial end face 78 directly abuts the notch 168 . in fig. 12b , the radially outer wall 114 is arranged similar to the radially inner wall 113 , except it includes a rim received in a notch in the radially outer corner surface of the first axial end face 78 . the divider 124 in fig. 12b is arranged as in fig. 12a . these interfaces are exemplary. this disclosure extends to other interface arrangements, including combinations of various interface arrangements.
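viewed together, the four end-cover embodiments of figs. 8-11 differ only in how they chain the slots 94 and the channels 104 a- 104 c between inlet and outlet. the following python sketch is an editorial tabulation of those flow paths; it merely paraphrases the description above and is not part of the disclosure.

```python
# Editorial summary of the four end-cover routing embodiments
# (figs. 8-11): each list entry is one axial pass through the stator 64,
# with "+" marking passages fed in parallel on that pass.
routings = {
    "figs. 8a-8c": ["slots 94 + channels 104a, 104b",
                    "channel 104c (return to outlet 86)"],
    "figs. 9a-9c": ["slots 94 + channel 104a",
                    "channels 104b, 104c (return to outlet 136)"],
    "figs. 10a-10c": ["slots 94 + channels 104a-104c "
                      "(single pass to outlet 144)"],
    "figs. 11a-11c": ["slots 94",
                      "channels 104a-104c (return to outlet 156)"],
}
for embodiment, passes in routings.items():
    print(embodiment, "->", " then ".join(passes))
```

the tabulation makes the design trade visible: figs. 10a-10c is the only single-pass scheme, while the others return the fluid to the first end cover so that inlet and outlet sit on the same axial side.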
the first and second end covers 74 , 76 may be attached to the stator 64 using known techniques, including welding or gluing, and the interfaces prevent fluid leakage between the components. it should be understood that directional terms such as “axial,” “radial,” and “circumferential” are used for purposes of explanation in the context of the components being described, and should not otherwise be considered limiting. further, terms such as “generally,” “substantially,” and “about” are not intended to be boundaryless terms, and should be interpreted consistent with the way one skilled in the art would interpret those terms. although the different examples have the specific components shown in the illustrations, embodiments of this disclosure are not limited to those particular combinations. it is possible to use some of the components or features from one of the examples in combination with features or components from another one of the examples. in addition, the various figures accompanying this disclosure are not necessarily to scale, and some features may be exaggerated or minimized to show certain details of a particular component or arrangement. one of ordinary skill in this art would understand that the above-described embodiments are exemplary and non-limiting. that is, modifications of this disclosure would come within the scope of the claims. accordingly, the following claims should be studied to determine their true scope and content.
166-654-906-594-267
US
[ "US" ]
G10D13/00,G10D13/02,G10D13/06,G10D13/11,G10D13/08,G10D13/10,G10D13/18
2018-05-30T00:00:00
2018
[ "G10" ]
system and method for compact bass chamber with internal beater and hi-hat apparatus
a versatile cajón having a compact footprint that incorporates actuators for an internal bass-beater and an external hi-hat. the cajón may further serve as a base for supporting additional percussive instruments, such as snare drums, tom drums, cymbals, and latin percussion. in an embodiment, a bass drum pedal may be secured inside the cajón and having a rotating shaft protruding through a side wall of the cajón. the shaft protrusion may be coupled to a foot pedal in an actuating manner. as such, when a percussionist presses down on the foot pedal (e.g., with a foot action), the shaft rotates the beater head to strike an internal wall of the cajón, thereby producing a bass-like percussive sound. similarly, the system may include a hi-hat pedal and shaft combination that is also attached directly to one or more external side walls of the cajón.
1 . a percussive instrument, comprising: a bass chamber having substantially flat walls forming an internal cavity; an actuator pedal attached to the bass chamber external to the internal cavity; and a beater attached to the bass chamber internal to the internal cavity and configured to strike at least one wall among the substantially flat walls when the externally attached actuator pedal is actuated. 2 . the percussive instrument of claim 1 , wherein the substantially flat walls comprise a top wall, a bottom wall, a left wall, a right wall, a front wall and a beater wall, such that the top wall is contiguous with the left wall, the right wall, the front wall and the beater wall, but separate from the bottom wall and such that the bottom wall is contiguous with the left wall, the right wall, the front wall and the beater wall but separate from the top wall. 3 . the percussive instrument of claim 2 , wherein the actuator pedal is attached to the right wall and the beater is configured to strike the beater wall. 4 . the percussive instrument of claim 1 , further comprising an externally attached biasing member configured to bias the actuator pedal to a resting position after an actuation. 5 . the percussive instrument of claim 1 , further comprising percussive snares disposed inside the internal cavity and configured to enhance a sound produced by the beater striking the at least one wall. 6 . the percussive instrument of claim 5 , further comprising a snare switch having an externally attached actuator for engaging or disengaging the percussive snares with at least one wall of the internal cavity. 7 . the percussive instrument of claim 1 , further comprising a second actuator pedal attached to the bass chamber external to the internal cavity, the second actuator pedal configured to actuate a hi-hat cymbal. 8 .
the percussive instrument of claim 1 , further comprising a second chamber having a second internal cavity, the second chamber smaller than the bass chamber, the second chamber disposed contiguous with a top wall of the bass chamber. 9 . the percussive instrument of claim 1 , further comprising at least one tom drum attached to a mount configured to attach to a top wall of the bass chamber. 10 . the percussive instrument of claim 1 , further comprising at least one cymbal mount attached to at least one wall of the bass chamber. 11 . the percussive instrument of claim 1 , further comprising a first resonance port disposed through a first wall of the bass chamber and a second resonance port disposed through the first wall, wherein the first and second resonance ports are disposed adjacent to each other forming a narrow portion of the first wall suited to be grasped by a human hand. 12 . a cajón, comprising: a bass chamber having substantially flat walls forming an internal cavity; an actuator pedal attached to the bass chamber external to the internal cavity; and a beater attached to the bass chamber internal to the internal cavity and configured to strike at least one wall among the substantially flat walls when the externally attached actuator pedal is actuated. 13 . the cajón of claim 12 , wherein the substantially flat walls comprise a top wall, a bottom wall, a left wall, a right wall, a front wall and a beater wall, such that the top wall is contiguous with the left wall, the right wall, the front wall and the beater wall, but separate from the bottom wall and such that the bottom wall is contiguous with the left wall, the right wall, the front wall and the beater wall but separate from the top wall; wherein the front wall comprises an area that is less than an area of the beater wall. 14 . the cajón of claim 13 , wherein the actuator pedal is attached to the right wall and the beater is configured to strike the beater wall. 15 . 
the cajón of claim 12 , further comprising: percussive snares disposed inside the internal cavity and configured to enhance a sound produced by the beater striking the at least one wall; and a snare switch having an externally attached actuator for engaging or disengaging the percussive snares with at least one wall of the internal cavity. 16 . the cajón of claim 12 , further comprising a second actuator pedal attached to the bass chamber external to the internal cavity, the second actuator pedal configured to actuate a hi-hat cymbal. 17 . the cajón of claim 12 , further comprising a second chamber having a second internal cavity, the second chamber smaller than the bass chamber, the second chamber disposed contiguous with a top wall of the bass chamber. 18 . the cajón of claim 12 , further comprising a first resonance port disposed through a first wall of the bass chamber and a second resonance port disposed through the first wall, wherein the first and second resonance ports are disposed adjacent to each other forming a narrow portion of the first wall suited to be grasped by a human hand. 19 . a percussion system, comprising: a bass chamber having a top wall, a bottom wall, a left wall, a right wall, a front wall, and a beater wall that collectively form a substantially rectangular internal cavity; a first actuator pedal attached to the bass chamber external to the internal cavity; a beater attached to the bass chamber internal to the internal cavity and configured to strike at least one of the walls when the first actuator pedal is actuated; at least one tom drum attached to a mount configured to attach to a top wall of the bass chamber; a second actuator pedal attached to the bass chamber external to the internal cavity, the second actuator pedal configured to actuate a hi-hat cymbal; and at least one cymbal mount attached to at least one wall of the bass chamber. 20 . 
the percussion system of claim 19 , further comprising: a hi-hat storage mount disposed on an external wall of the bass chamber and configured to store the hi-hat cymbal when not in use; and a cymbal storage mount disposed on an external wall of the bass chamber and configured to store the cymbal mount when not in use.
priority claim this application claims the benefit of u.s. provisional application no. 62/678,109, entitled “system and method for compact bass chamber with internal beater and hi-hat apparatus” filed may 30, 2018, which is incorporated by reference in its entirety herein for all purposes. background modern drum sets are exhibiting smaller and smaller profiles for percussionists, who can then occupy a small space on a stage or have a drum kit that is easily put away, transported, and redeployed in various locations and venues. as a result, drum kits have trended toward a smaller overall profile to be highly portable and take up a small footprint of space. one persistent problem has been the space necessary for a proper bass drum sound, as the bass drum tends to be quite large in order to produce the low bass percussive sound. in the past, a cajón, which is a small box-like resonant chamber, has been used by percussionists to produce a low-bass percussive sound by slapping or pounding on one external surface. such an instrument may sometimes double as a seat for the percussionist as well, otherwise known as a throne. a problem with a manually sounded cajón is that producing the sound involves a hand slap or pound, thereby occupying one or both of the percussionist's hands. some solutions have included use of a bass-drum beater pedal that simulates the pound or slap of a percussionist's hand with a bass-drum beater actuated by a foot pedal. this allows the percussionist to produce the bass drum sound with a foot and allows both hands to be freed up for other uses (e.g., snare drum, hi-hat, cymbals) utilizing drum sticks. with space and compactness as a goal, a need has arisen for additional efficiency of space use in and around a drum kit using a cajón or other similar resonant-chamber percussive instruments. 
brief description of the drawings embodiments of the subject matter disclosed herein in accordance with the present disclosure will be described with reference to the drawings, in which: fig. 1 is an isometric view of a compact drum kit utilizing a cajón according to an embodiment of the subject matter disclosed herein; fig. 2 is a rear view of the compact drum kit of fig. 1 showing additional details inside the cajón according to an embodiment of the subject matter disclosed herein; fig. 3 is a second isometric diagram of the compact drum kit of fig. 1 showing the hi-hat side in greater detail according to an embodiment of the subject matter disclosed herein; and fig. 4 is a front view of the compact drum kit of fig. 1 showing a front wall covering installed having resonance ports according to an embodiment of the subject matter disclosed herein. note that the same numbers are used throughout the disclosure and figures to reference like components and features. detailed description the subject matter of embodiments disclosed herein is described here with specificity to meet statutory requirements, but this description is not necessarily intended to limit the scope of the claims. the claimed subject matter may be embodied in other ways, may include different elements or steps, and may be used in conjunction with other existing or future technologies. this description should not be interpreted as implying any particular order or arrangement among or between various steps or elements except when the order of individual steps or arrangement of elements is explicitly described. embodiments will be described more fully hereinafter with reference to the accompanying drawings, which form a part hereof, and which show, by way of illustration, exemplary embodiments by which the systems and methods described herein may be practiced. 
these systems and methods may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will satisfy the statutory requirements and convey the scope of the subject matter to those skilled in the art. by way of an overview, the systems and methods discussed herein may be directed to a versatile cajón having a compact footprint that incorporates actuators for an internal bass-beater and an external hi-hat. the cajón may further serve as a base for supporting additional percussive instruments, such as snare drums, tom drums, cymbals, and latin percussion. in an embodiment, a bass drum pedal may be secured inside the cajón, with a rotating shaft protruding through a side wall of the cajón. the shaft protrusion may be coupled to a foot pedal in an actuating manner. as such, when a percussionist presses down on the foot pedal (e.g., with a foot action), the shaft rotates the beater head to strike an internal wall of the cajón, thereby producing a bass-like percussive sound. similarly, the system may include a hi-hat pedal and shaft combination that is also attached directly to one or more external side walls of the cajón. these and other aspects are described in greater detail below with respect to figs. 1-4 . fig. 1 is a diagram of a compact drum kit 100 utilizing a versatile cajón 101 according to an embodiment of the subject matter disclosed herein. a cajón (spanish: [ka'xon]; “box”, “crate” or “drawer”) is a box-shaped percussion instrument originally from peru, played by slapping the front or rear faces (generally thin plywood) with the hands, fingers, or sometimes various implements such as brushes, mallets, or sticks. cajones are primarily played in afro-peruvian music, as well as contemporary styles of flamenco and jazz among other genres. 
the term cajón is also applied to other box drums used in latin american music such as the cajón de rumba used in cuban rumba and the cajón de tapeo used in mexican folk music. in fig. 1 , the improved versatile cajón 101 is shown having attached foot pedals on either side of the cajón. a first foot pedal 110 is configured to actuate a bass drum beater 119 that is internally secured within the cajón 101 . a second foot pedal (shown in fig. 3 as element 310 ) may be used to actuate a hi-hat 150 . further, the cajón 101 may be used to support additional percussion, such as a snare drum 130 , tom drums 135 and 136 , and cymbals (not shown). as a skilled artisan understands, the foot pedals 110 and 310 could be swapped from left to right depending on the preference of the percussionist. for example, a right-handed percussionist typically prefers the hi-hat pedal 310 on the left side and the bass pedal 110 on the right side, while a left-handed percussionist prefers the reverse arrangement. with the compact nature of the versatile cajón 101 , cymbal stands (not shown) or other stands may be stationed close to the versatile cajón 101 for reducing the overall footprint size of the drum kit 100 . the cajón 101 typically comprises a bass chamber 105 that includes six walls that form an internal cavity 107 . these walls may be substantially flat walls, though other shapes and contours are possible. further, the internal cavity may be formed by more or fewer than six walls. for brevity, the six-wall configuration is discussed herein. the six walls may typically comprise a top wall (obscured by other drums), a bottom wall 108 b , a left wall 108 d , a right wall 108 a , a front wall (not visible in this perspective) and a beater wall 108 c (shown as transparent hatched wall so as to reveal internal components). 
with this arrangement, the top wall is contiguous with the left wall 108 d , the right wall 108 a , the front wall and the beater wall 108 c , but separate from the bottom wall 108 b and the bottom wall 108 b is contiguous with the left wall 108 d , the right wall 108 a , the front wall and the beater wall 108 c but separate from the top wall. looking deeper into the aspect of the bass beater pedal 110 , fig. 1 shows that the external bass beater pedal 110 is attached to the right wall 108 a . the bass beater pedal 110 may include a pedal base portion 111 that may be attached to the right wall 108 a and flush with the bottom wall 108 b . the pedal base portion 111 is rotatably attached to a main pedal portion 112 that is suited to engage a human foot. the main pedal portion 112 may be tapered outward such that the main pedal portion 112 becomes wider the farther away it is from the pedal base portion 111 . the beater pedal assembly further includes an externally attached biasing member 115 (e.g., a spring) configured to bias the actuator pedal to a resting position after an actuation. the biasing member 115 may be coupled to the main pedal portion 112 through first and second beater pedal linkage members 113 and 114 . turning attention to the internal cavity 107 of the bass chamber 105 , additional components of the overall bass chamber beater assembly 110 are disposed therein. specifically, the internal structure includes a beater shaft 117 that is configured to rotate in a first rotational direction when the bass beater pedal is actuated downward and to rotate back in an opposite rotational direction when the pedal is forced back upward by the biasing member 115 . in order to facilitate the lateral motion of an attached beater 119 , the beater 119 may be attached to the beater shaft 117 through a beater shaft linkage member 118 . 
the fluid rotational motion of the beater shaft 117 may be further facilitated by beater shaft mounts 120 that hold the beater shaft 117 securely, yet rotatably affixed within a single axis. further yet, aspects of the actuation motion may be adjusted by shifting the linkage point on an adjustment wheel 116 affixed to the exterior of the bass chamber 105 on the right-side wall. the compact nature of the bass chamber 105 allows for a custom compact drum kit to be realized in conjunction with additional attachable and/or separate components. in one embodiment, the bass chamber will include stabilization spikes (not shown) that may be mounted to right and left walls and extendable at an angle toward the floor to assist with supporting the bass chamber to prevent overall movement when being played. further, the bass chamber 105 may have a small footprint with dimensions that are considered small in the industry. in one embodiment, the dimensions of the bass chamber are 22″ in width×18.5″ in height×16″ in depth with an overall weight of 16.9 pounds. the compact nature and lightweight design are improved by the use of custom machined parts made from aircraft-grade aluminum. additional percussive instruments may attach to the top wall of the bass chamber 105 of the cajón 101 . in this embodiment, one can see a snare drum 130 and two tom-tom drums 135 and 136 mounted to the top of the cajón 101 . further, an additional resonator chamber 106 may be attached to the top of the bass chamber 105 of the cajón 101 to provide added resonance for the bass and snare hits as well as height to place the snare 130 and toms 135 and 136 at useful positions. this second chamber 106 includes a second internal cavity wherein the second chamber 106 is smaller than the bass chamber 105 and may typically be contiguous with a top wall of the bass chamber 105 . the additional resonator chamber 106 may include one or more resonance holes (not visible in fig. 1 ) in one or more walls. 
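as a sanity check on the compactness figures above, the internal air volume of the bass chamber can be estimated from the stated 22″×18.5″×16″ external dimensions. the sketch below is illustrative only and not part of the disclosure; the function name and the uniform 0.5-inch wall thickness (chosen from the 0.5 to 0.75 inch range mentioned elsewhere in the description) are assumptions.

```python
# Illustrative sketch (not part of the disclosure): estimate the internal
# cavity volume of the bass chamber from its external dimensions, assuming
# a uniform 0.5-inch wall thickness on all six walls.

def internal_volume_cubic_inches(width, height, depth, wall_thickness):
    """Approximate internal volume of a six-walled box, in cubic inches."""
    inner_w = width - 2 * wall_thickness
    inner_h = height - 2 * wall_thickness
    inner_d = depth - 2 * wall_thickness
    return inner_w * inner_h * inner_d

# Stated external dimensions: 22" wide x 18.5" high x 16" deep.
print(internal_volume_cubic_inches(22.0, 18.5, 16.0, 0.5))  # 5512.5
```

a thicker wall (e.g., 0.75″) would shave roughly 10% off this estimate, which is why the disclosure's thin-walled construction matters for resonance in such a small footprint.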
further, other percussion, such as cymbals, auxiliary percussion, and latin percussion (e.g., cowbell, tambourine, claves, wood blocks) may be attached as well (not shown in fig. 1 for clarity). these may be attached by means of dedicated mounts that are affixed to an external wall of the bass chamber 105 (again, not shown for clarity). further yet, the external walls of the bass chamber may include additional storage mounts for storing various components for transport and storage. fig. 2 is a rear view of the compact drum kit of fig. 1 showing additional details inside the cajón according to an embodiment of the subject matter disclosed herein. in this view, one can see directly inside the cajón 101 as the beater wall ( 108 c of fig. 1 ) is not shown. typically, the cajón walls comprise sheets of wood 0.5 to 0.75 inches thick (e.g., solid wood or plywood). in some embodiments, a thinner sheet of plywood is used as the side wall intended to be the striking surface or head on the beater wall 108 c . the striking surface of the cajón drum is commonly referred to as the “tapa”. in some embodiments, one or more resonance holes (shown in greater detail with respect to fig. 4 ) are cut into one or more other walls (e.g., typically the side opposite the striking surface). the top edges can often be left unattached and can be slapped against the box (e.g., like closing a lid on a box). in further embodiments, the cajón may have supports or feet made of rubber or other resilient substance, and may include several adjustors (e.g., screws) at one or more sides for adjusting percussive timbre. in the embodiment of fig. 2 , one can see the internal shaft 117 that protrudes out of a side wall, attaching to a bass beater pedal 110 that may be actuated up and down to rotate the shaft 117 . the rotating shaft 117 may be attached to an internal bass drum beater 119 that strikes the beater wall (removed in this view; e.g., the striking surface) when rotated. 
generally speaking, the harder one pushes down on the pedal, the harder the bass drum beater 119 will strike the striking surface. the cajón may also include stretched cords or snares 223 pressed against the one or more walls for a buzz-like effect or tone. these effects may be adjusted through external actuators 222 attached to the stretched cords or snares. in other embodiments, guitar strings, rattles, or maracas may serve this purpose. bells may also be installed inside near the snares 223 . in this embodiment, the bass drum beater pedal 110 that actuates the internal beater 119 is disposed on the right-hand side (player perspective) of the cajón (the view in fig. 2 is from the rear). this is typical for right-handed players. in other embodiments, the bass drum beater pedal 110 that actuates the internal beater 119 is disposed on the left-hand side of the cajón, which is typical for left-handed players. further, the internal drum beater head 121 may be swapped in and out with different configurations and styles of beater heads. still referring to fig. 2 , the embodiment shown includes a pedal on the left-hand side for a hi-hat (again, this is a view from the rear side of the cajón). the hi-hat is external but attached to the cajón to reduce the overall footprint of the drum kit. as before, this feature may be on the right side for left-handed players. fig. 3 is a second isometric diagram of the compact drum kit 100 of fig. 1 showing the hi-hat side in greater detail according to an embodiment of the subject matter disclosed herein. in this view, one can see that the beater wall 108 c is affixed to the front of the bass chamber 105 . in some embodiments, the beater wall 108 c is removably fixed with fasteners (e.g., bolts, screws, rivets, and the like). in other embodiments, the beater wall 108 c is more permanently attached via wood glue and interlocking protrusions (not shown). as such, the beater wall 108 c is in place for the internal drum actuator (not seen in fig. 
3 ) to strike the beater wall 108 c when the exterior beater pedal actuator 110 is actuated. the beater wall 108 c may be provided in different styles, including a tapa faceplate or a bass drum batter head. the material may be wood, mylar, leather or other suitable batter-head or beater-wall material. looking deeper into the aspect of the hi-hat pedal 310 , fig. 3 shows that the external hi-hat pedal 310 is attached to the left wall 108 d . the hi-hat pedal 310 may include a pedal base portion 311 that may be attached to the left wall 108 d and flush with the bottom wall 108 b . the pedal base portion 311 is rotatably attached to a main pedal portion 312 that is suited to engage a human foot. the main pedal portion 312 may be tapered outward such that the main pedal portion 312 becomes wider the farther away it is from the pedal base portion 311 . the hi-hat pedal 310 assembly further includes an externally attached biasing member 315 (e.g., a spring) configured to bias the actuator pedal 312 to a resting position after an actuation. the biasing member 315 may be coupled to the main pedal portion 312 through first and second beater pedal linkage members 313 and 314 . one can see that the biasing member 315 allows the pedal 312 to return to a first position after each hi-hat actuation to be ready for the next actuation. this mechanism allows the pedal 312 to pull two cymbals of a hi-hat together and then return to a first position after each clasp to be ready for the next actuation. thus, the rotating shaft is actuated by the pedal 312 via human foot action but then returned to the first position by the potential energy stored in the force transfer mechanism (typically another reciprocating spring 315 ). the pedal tension (e.g., the force of the spring) is adjustable as is the linkage to the pedal. further, the pedal may be detachable at the left wall 108 d coupling for storage and transport. fig. 4 is a front view of the compact drum kit of fig. 
1 showing a front wall 108 e installed having resonance ports 470 and 471 according to an embodiment of the subject matter disclosed herein. in this view, the bass chamber front wall 108 e further includes a pair of resonance ports 470 and 471 . resonance ports are used with percussive instruments to provide a means for air to be pushed out of the resonance chamber (e.g., the bass chamber 105 ) so that the internal reverberation or resonance can be more audibly heard outside of the percussive instruments. some instruments may include smaller or larger resonance ports and more or fewer ports than shown here. further, the shape and style of resonance ports may vary. in this embodiment, there are two oblong, oval-shaped ports that are disposed side-by-side on the front wall 108 e . these are disposed in a manner such that a first resonance port 470 is cut through the front wall 108 e , as is the second resonance port 471 . the first and second resonance ports 470 and 471 are disposed adjacent to each other forming a narrow portion 472 of the front wall 108 e suited to be grasped by a human hand. that is, the two ports 470 and 471 form a handle so that one can carry the bass chamber like a suitcase. all references, including publications, patent applications, and patents, cited herein are hereby incorporated by reference to the same extent as if each reference were individually and specifically indicated to be incorporated by reference and/or were set forth in its entirety herein. the use of the terms “a” and “an” and “the” and similar referents in the specification and in the following claims are to be construed to cover both the singular and the plural, unless otherwise indicated herein or clearly contradicted by context. the terms “having,” “including,” “containing” and similar referents in the specification and in the following claims are to be construed as open-ended terms (e.g., meaning “including, but not limited to,”) unless otherwise noted. 
recitation of ranges of values herein is merely intended to serve as a shorthand method of referring individually to each separate value inclusively falling within the range, unless otherwise indicated herein, and each separate value is incorporated into the specification as if it were individually recited herein. all methods described herein can be performed in any suitable order unless otherwise indicated herein or clearly contradicted by context. the use of any and all examples, or exemplary language (e.g., “such as”) provided herein, is intended merely to better illuminate embodiments and does not pose a limitation to the scope of the disclosure unless otherwise claimed. no language in the specification should be construed as indicating any non-claimed element as essential to each embodiment of the present disclosure. different arrangements of the components depicted in the drawings or described above, as well as components and steps not shown or described are possible. similarly, some features and sub-combinations are useful and may be employed without reference to other features and sub-combinations. embodiments have been described for illustrative and not restrictive purposes, and alternative embodiments will become apparent to readers of this patent. accordingly, the present subject matter is not limited to the embodiments described above or depicted in the drawings, and various embodiments and modifications can be made without departing from the scope of the claims below.
167-135-233-518-295
US
[ "US" ]
C25B11/04,C25B1/04,C25B11/03,C25B11/051,C25B11/031,C25B11/057,C25B11/061,C25B11/075
2020-03-12T00:00:00
2020
[ "C25" ]
noble metal free catalyst for hydrogen generation
a method for generating hydrogen including contacting a catalyst with a proton source, the catalyst having a catalytic component with a first surface comprising a plurality of catalytic sites and a carbon component provided as a layer on the first surface, wherein the carbon component comprises a plurality of pores. also provided are catalysts for catalyzing the hydrogen evolution reaction and methods of making the same.
1 . a catalyst for hydrogen generation, the catalyst comprising: a catalytic component having a first surface, the first surface comprising a plurality of catalytic sites, and a carbon component provided as a layer on the first surface, wherein the carbon component comprises a plurality of pores. 2 . the catalyst according to claim 1 , wherein the catalytic component comprises nickel. 3 . the catalyst according to claim 1 , wherein the carbon component comprises graphene. 4 . the catalyst according to claim 1 , wherein the catalytic component is provided in a form selected from the group consisting of a particle and a sheet. 5 . the catalyst according to claim 1 , wherein the plurality of pores are configured for at least one proton to traverse therethrough. 6 . the catalyst according to claim 1 , wherein the carbon component is provided a first distance from the first surface, the first distance being configured for a presence and/or passage of hydrogen between the catalytic component and the carbon component. 7 . the catalyst according to claim 6 , wherein the carbon component interacts with the first surface via at least van der waals forces. 8 . the catalyst according to claim 1 , wherein the layer of carbon component comprises discrete islands. 9 . the catalyst according to claim 1 , wherein the layer of carbon component on the first surface of the catalytic component provides a degree of coverage of less than 1 langmuir (l). 10 . a method for generating hydrogen, the method comprising contacting a catalyst with a proton source, wherein the catalyst comprises: a catalytic component having a first surface, the first surface comprising a plurality of catalytic sites, and a carbon component provided as a layer on the first surface, wherein the carbon component comprises a plurality of pores. 11 . the method according to claim 10 , wherein the catalytic component comprises nickel. 12 . 
the method according to claim 10 , wherein the carbon component comprises graphene. 13 . the method according to claim 10 , wherein the catalytic component is provided in a form selected from the group consisting of a particle and a sheet. 14 . the method according to claim 10 , wherein the plurality of pores are configured for at least one proton to traverse therethrough. 15 . the method according to claim 10 , wherein the carbon component is provided a first distance from the first surface, the first distance being configured for a presence and/or passage of hydrogen between the catalytic component and the carbon component. 16 . the method according to claim 15 , wherein the carbon component interacts with the first surface via at least van der waals forces. 17 . a method for preparing a catalyst, the method comprising: providing a catalytic component having a first surface, the first surface comprising a plurality of catalytic sites, and forming a sub-coverage layer of a carbon component on the first surface such that the carbon component comprises a plurality of pores. 18 . a method for preparing a catalyst, the method comprising: providing a catalytic component having a first surface, the first surface comprising a plurality of catalytic sites, providing a full-coverage layer of a carbon component on the first surface, and creating one or more pores in the full-coverage layer such that the carbon component comprises a plurality of pores. 19 . the method according to claim 18 , wherein creating the one or more pores in the full-coverage layer comprises applying a shadow mask to the full-coverage layer and separating a portion of the full-coverage layer from the catalytic component to provide the one or more pores. 20 . the method according to claim 18 , wherein creating the one or more pores in the full-coverage layer comprises etching the full-coverage layer with an etching agent to provide the one or more pores.
technical field the present disclosure is directed to a method of hydrogen generation via the hydrogen evolution reaction and catalysts useful for the same. background of the disclosure hydrogen is a promising candidate for an environmentally-friendly fuel option with various potential applications. methods for producing hydrogen have therefore generated considerable interest. however, many materials currently used to catalyze the hydrogen evolution reaction, such as noble metals, are expensive and/or provide an unacceptably low catalytic efficiency. there is thus a need in the art for methods of hydrogen generation via the hydrogen evolution reaction, and in particular, new catalysts capable of catalyzing such reactions. brief description of the disclosure the present disclosure is directed to a method of generating hydrogen, and in particular, to a method of hydrogen generation via the hydrogen evolution reaction (her). the method comprises providing a catalyst for hydrogen generation, the catalyst comprising a catalytic component and a carbon component having a plurality of pores. the method may comprise contacting the catalyst with a proton source such that protons traverse the plurality of pores and adsorb to the catalytic component, wherein at least a portion of the protons take part in the her. the present disclosure is also directed to a catalyst as described herein as well as methods of making the same. brief description of the drawings fig. 1a shows an example catalyst according to aspects of the present disclosure. fig. 1b shows an example catalyst according to aspects of the present disclosure. detailed description of the disclosure the present disclosure is directed to a method of generating hydrogen, and in particular, a method of hydrogen generation via the her. the method comprises providing a catalyst for hydrogen generation, the catalyst comprising a catalytic component and a carbon component having a plurality of pores. 
the method may comprise contacting the catalyst with a proton source such that protons traverse the plurality of pores and adsorb to the catalytic component, wherein at least a portion of the protons take part in the her. the present disclosure is also directed to a catalyst as described herein as well as methods of making the same. as used herein, the term “hydrogen evolution reaction” or “her” refers to a chemical reaction that produces hydrogen. it should be understood that “hydrogen” may refer to atomic hydrogen (h), that is, an atom having one proton and one electron. alternatively or additionally, “hydrogen” may refer to molecular hydrogen (h 2 ), that is, a diatomic molecule having two protons and two electrons. the method comprises providing a catalyst configured to catalyze the her, the catalyst comprising a catalytic component and a carbon component. the catalytic component may comprise a catalytic material. as used herein, the term “catalytic material” refers to a material capable of catalyzing the her, and in particular, a material having one or more catalytic sites configured for proton adsorption thereto. according to some aspects, the catalytic material may comprise an electrode material, that is, a material capable of conducting electrical charge. examples of catalytic materials useful according to the present disclosure include, but are not limited to, metals such as nickel (ni), platinum (pt), alloys thereof, and combinations thereof. according to some aspects, the catalytic material is free of noble metals. the catalytic component may be provided in any form capable of catalyzing the her as disclosed herein. for example, the catalytic component may be provided as a particle (such as a nanoparticle), a sheet (such as a metal foil), a porous surface, or any combination thereof. it should be understood that the catalytic component should have a form such that at least a first surface containing catalytic sites is provided. 
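for context, the her referred to above is conventionally summarized by the following textbook electrochemistry (standard reaction steps, not language from the disclosure itself):

```latex
% Conventional HER mechanism under acidic conditions (textbook chemistry):
\begin{align*}
\text{overall:}\quad   & 2\,\mathrm{H^{+}} + 2\,e^{-} \longrightarrow \mathrm{H_{2}} \\
\text{Volmer:}\quad    & \mathrm{H^{+}} + e^{-} \longrightarrow \mathrm{H_{ads}} \\
\text{Heyrovsky:}\quad & \mathrm{H_{ads}} + \mathrm{H^{+}} + e^{-} \longrightarrow \mathrm{H_{2}} \\
\text{Tafel:}\quad     & 2\,\mathrm{H_{ads}} \longrightarrow \mathrm{H_{2}}
\end{align*}
```

the adsorbed intermediate h_ads in these steps corresponds to the protons adsorbing at the catalytic sites described above.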
the catalyst also comprises a carbon component. examples of carbon components useful according to the present disclosure include, but are not limited to, graphene, including monolayer graphene, bilayer graphene, and multi-layer graphene. however, it should be understood that any carbon component may be used so long as it enables the passage of protons therethrough, as described herein. according to some aspects, the carbon component may be provided as a layer on at least one surface of the catalytic component, and in particular, on the at least first surface of the catalytic component containing the catalytic sites, as described herein. the carbon component as described herein comprises a plurality of pores. as used herein, the term “pore” refers to an opening extending completely through a material. for example, according to some aspects of the present disclosure, a “pore” may correspond to an opening formed by a discontinuous surface, for example, a discontinuous layer of graphene. it should be understood that in some examples, the plurality of pores may be provided by a carbon component that does not completely cover the catalytic component. for example, in the case where the carbon component is provided as discrete islands on the catalytic component, the plurality of pores may refer to the space between the islands. the plurality of pores may be provided proximal to the at least first surface of the catalytic component, wherein at least a portion of the plurality of pores are sized so as to enable the passage of protons therethrough, and thus, through the carbon component. 
according to some aspects, the carbon component may be provided on the at least first surface of the catalytic component such that coverage of the first surface of the catalytic component is less than about 1 langmuir (l), also referred to herein as “sub-coverage.” in this example, the carbon component will enable the passage of protons to the at least first surface of the catalytic component where the carbon component does not cover the catalytic component. fig. 1 shows two example catalyst configurations according to aspects of the present disclosure. in particular, fig. 1a shows a catalyst 11 having a catalytic component 12 and a first layer of carbon component 13 . as shown in fig. 1a , the catalytic component 12 may be provided in the form of a foil having a first surface 15 proximal the first layer of carbon component 13 . the first layer of carbon component 13 may have a plurality of pores 14 proximal the first surface 15 of the catalytic component 12 , the plurality of pores 14 being sized so as to enable passage of protons through the first layer of carbon component 13 . as described herein, the first surface 15 of the catalytic component 12 may be provided with one or more catalytic sites capable of adsorbing protons thereto, as described herein. as shown in fig. 1a , the catalytic component 12 may have a second surface 16 also having one or more catalytic sites as described herein. the second surface 16 may be proximal a second layer of carbon component 17 having a plurality of pores 18 as described herein. fig. 1b shows another example catalyst configuration as described herein. in particular, fig. 1b shows a catalyst 11 having a catalytic component 19 and a first layer of carbon component 110 . as shown in fig. 1b , the catalytic component 19 may be provided in the form of a particle having a first surface 111 proximal the first layer of carbon component 110 . 
as described herein, the first surface 111 of the catalytic component 19 may be provided with one or more catalytic sites. similar to the catalyst shown in fig. 1a , fig. 1b shows the first layer of carbon component 110 having a plurality of pores 112 being sized so as to enable the passage of protons through the first layer of carbon component 110 . it should be understood that each pore of the plurality of pores will provide a pathway from an outer surface of the catalyst (i.e., the surface of the carbon component opposite the catalytic component) to the surface of the carbon component proximal the catalytic component. according to some aspects, the pathway may be substantially straight, which in this context, may mean a pathway that spans from the outer surface of the catalyst to the surface of the carbon component proximal the catalytic component without curves, bends, etc. it should be understood that the surface area of the first surface provided with the carbon component will not be completely covered by the carbon component due to the presence of the plurality of pores. that is, the plurality of pores will expose at least a portion of the surface area of the first surface provided with the carbon component. according to some aspects, the plurality of pores may expose at least about 10% of the surface area of the first surface provided with the carbon component, optionally at least about 20%, optionally at least about 30%, optionally at least about 40%, optionally at least about 50%, optionally at least about 60%, optionally at least about 70%, optionally at least about 80%, optionally at least about 90%. according to some aspects, the percent of the surface area of the first surface covered by the carbon component (referred to herein as the degree of coverage) may correspond with, for example, the carbon component growth conditions, including, but not limited to, growth time, rate of carbon source introduction, growth temperature, or a combination thereof. 
according to some aspects, the carbon component may be provided on at least about 50% of the surface area of the first surface of the catalytic component (i.e., a degree of coverage of about 50%), optionally at least about 60%, optionally at least about 70%, optionally at least about 80%, optionally at least about 90%, optionally at least about 95%, and optionally about 100%. according to some aspects, the degree of coverage of the carbon coating on the first surface of the catalytic component may be less than 1 langmuir (l). as shown in figs. 1a and 1b , at least the first layer of carbon component 13 , 110 may be provided a certain distance d 1 from the first surface 15 , 111 of the catalytic component 12 , 19 . according to some aspects, distance d 1 may be sized so as to enable the presence and/or passage of hydrogen 113 in the space between the carbon component 13 , 110 and the catalytic component 12 , 19 . for example, distance d 1 may be sized such that hydrogen 113 is able to diffuse from the first surface 15 , 111 of the catalytic component 12 , 19 as it is generated via the her. according to some aspects, the second layer of carbon component 17 may be provided a second distance d 2 from the second surface 16 , wherein distance d 2 is sized such that hydrogen is able to diffuse from the second surface 16 of the catalytic component 12 as it is generated via the her, as described herein. distance d 2 may be the same or different from distance d 1 . according to some aspects, distance d 1 and/or distance d 2 may result from the one or more forces maintaining the carbon component in its position relative to the catalytic component. for example, the catalytic component and the carbon component may be at least partially maintained in their positions relative to one another by one or more chemical interactions, and in particular, by van der waals forces. 
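the sub-coverage criterion above is stated in langmuirs; as a hedged aside (the conversion is standard surface-science usage, not stated in this disclosure), a langmuir is a unit of gas exposure, 1 l = 10⁻⁶ torr·s, so the exposure for a given carbon-source pressure and growth time can be sketched as follows. the pressure and time values in the example are hypothetical.

```python
# Illustrative sketch (not from the disclosure): the langmuir (L) is a unit
# of gas exposure, 1 L = 1e-6 torr * s. A sub-coverage carbon layer as
# described corresponds to an exposure below about 1 L. The pressure and
# time values below are hypothetical examples.

TORR_SECONDS_PER_LANGMUIR = 1e-6

def exposure_langmuir(pressure_torr: float, time_s: float) -> float:
    """Gas exposure in langmuirs for a constant pressure held over a time."""
    return pressure_torr * time_s / TORR_SECONDS_PER_LANGMUIR

# e.g. a carbon source at 1e-8 torr for 50 s:
dose = exposure_langmuir(1e-8, 50.0)
print(dose)         # 0.5 L, below the ~1 L sub-coverage threshold
print(dose < 1.0)   # True
```

under these assumptions, lowering the carbon-source pressure or shortening the growth time keeps the exposure below the ~1 l sub-coverage threshold discussed above.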
it should be understood that van der waals forces comprise forces provided by the attraction and repulsion between atoms, molecules, and/or surfaces, such as the attraction and repulsion between atoms, molecules, and/or surfaces comprised by the carbon component and the catalytic component. the method may comprise contacting the catalyst as described herein with a proton source such that protons traverse the plurality of pores and adsorb to the catalytic component at the catalytic sites. according to some aspects, the proton source may comprise water with or without additives. as used herein, the term “additive” refers to a substance contained by the proton source at a concentration of less than 50% w/v. example additives include, but are not limited to, detergents, such as sodium dodecyl sulfate. it should be understood that the catalyst as described herein is configured such that, after protons traverse the plurality of pores and adsorb to the catalytic sites of the catalytic component, they may combine with electrons transmitted by the catalytic component to form hydrogen atoms. according to some aspects, two hydrogen atoms may combine to form molecular hydrogen via associative desorption from the catalytic sites of the catalytic component. it should be understood that this adsorption, formation of hydrogen, and/or associative desorption may be collectively referred to herein as the her. it is believed that the rate of the her may be determined by a rate determining step selected from the adsorption of protons to the catalytic sites of the catalytic component and the associative desorption of molecular hydrogen from the catalytic component. according to some aspects, the catalyst as described herein is configured to enhance the rate of the her by lowering the energy required by the rate determining step. 
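the adsorption and associative-desorption steps described above correspond to what the electrochemistry literature calls the volmer and tafel steps of the her; written out in standard notation (a general statement of the reactions, not taken from this disclosure):

```latex
\begin{align*}
\text{volmer (adsorption):} \quad & \mathrm{H^+} + e^- \longrightarrow \mathrm{H_{ads}} \\
\text{tafel (associative desorption):} \quad & 2\,\mathrm{H_{ads}} \longrightarrow \mathrm{H_2}
\end{align*}
```

on this reading, the rate determining step discussed above is whichever of the two reactions requires the most energy on a given catalytic surface.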
for example, in the case wherein the rate determining step is the associative desorption of molecular hydrogen from the catalytic component, the catalyst as described herein may be configured to reduce the adsorption energy between hydrogen and the catalytic component at least in part due to the presence of the carbon component. in this way, the catalytic efficiency of the catalyst may be enhanced as compared with the catalytic efficiency of other known catalysts used for the her, such as catalysts containing the catalytic component without the carbon component. the present disclosure is also directed to a catalyst as described herein. for example, the catalyst may comprise a catalytic component having at least a first surface comprising one or more catalytic sites as described herein and a carbon component provided as a layer on at least the first surface. as described herein, the carbon component may have a plurality of pores configured to enable passage of protons from a proton source through the carbon component to at least the first surface of the catalytic component so that the her may take place at the one or more catalytic sites. as described herein, the carbon component may be provided a certain distance from the first surface of the catalytic component, the distance being sized so as to enable the presence and/or passage of hydrogen in the space between the carbon component and the catalytic component. the present disclosure is also directed to methods of making the catalyst as described herein. according to some aspects, the method comprises providing a catalytic component as described herein and forming a carbon component thereon as described herein. it should be understood that providing the catalytic component may comprise any method for preparing a catalytic component as described herein that is suitable for use with the present disclosure. for example, u.s. pat. no. 
8,163,263, incorporated by reference herein in its entirety, describes methods for providing supported catalyst nanoparticles, which may be used for providing the catalytic component as described herein. in another example, u.s. pat. no. 6,974,492, incorporated by reference herein in its entirety, describes methods for producing metal nanoparticles, which may be used for providing the catalytic component as described herein. according to some aspects, forming the carbon component on the catalytic component may comprise providing the carbon component as a sub-coverage layer or providing the carbon component as a sub- or full-coverage layer and subsequently providing a plurality of pores as described herein. according to some aspects, providing the carbon component as a full-coverage layer may be performed using the methods disclosed in u.s. pat. no. 10,273,574, which is incorporated by reference herein in its entirety. according to some aspects, providing the carbon component as a sub-coverage layer may be performed by varying, for example, the rate of carbon source introduction, growth temperature, or a combination thereof, as described herein. according to some aspects, subsequently providing a plurality of pores as described herein may be performed by applying a shadow mask to the formed carbon component and separating the carbon component from the catalytic component in order to provide one or more pores. additionally or alternatively, subsequently providing a plurality of pores as described herein may be performed by etching the formed carbon component with an etching agent in order to provide one or more pores. example etching agents include, but are not limited to, water, oxygen, and a combination thereof. 
this detailed description uses examples to present the disclosure, including the preferred aspects and variations, and also to enable any person skilled in the art to practice the disclosed aspects, including making and using any devices or systems and performing any incorporated methods. the patentable scope of the disclosure is defined by the claims, and may include other examples that occur to those skilled in the art. such other examples are intended to be within the scope of the claims if they have structural elements that do not differ from the literal language of the claims, or if they include equivalent structural elements with insubstantial differences from the literal language of the claims. aspects from the various embodiments described, as well as other known equivalents for each such aspect, can be mixed and matched by one of ordinary skill in the art to construct additional embodiments and techniques in accordance with principles of this application. while the aspects described herein have been described in conjunction with the example aspects outlined above, various alternatives, modifications, variations, improvements, and/or substantial equivalents, whether known or that are or may be presently unforeseen, may become apparent to those having at least ordinary skill in the art. accordingly, the example aspects, as set forth above, are intended to be illustrative, not limiting. various changes may be made without departing from the spirit and scope of the disclosure. therefore, the disclosure is intended to embrace all known or later-developed alternatives, modifications, variations, improvements, and/or substantial equivalents. 
reference to an element in the singular is not intended to mean “one and only one” unless specifically so stated, but rather “one or more.” all structural and functional equivalents to the elements of the various aspects described throughout this disclosure that are known or later come to be known to those of ordinary skill in the art are expressly incorporated herein by reference. moreover, nothing disclosed herein is intended to be dedicated to the public. further, the word “example” is used herein to mean “serving as an example, instance, or illustration.” any aspect described herein as “example” is not necessarily to be construed as preferred or advantageous over other aspects. unless specifically stated otherwise, the term “some” refers to one or more. combinations such as “at least one of a, b, or c,” “at least one of a, b, and c,” and “a, b, c, or any combination thereof” include any combination of a, b, and/or c, and may include multiples of a, multiples of b, or multiples of c. specifically, combinations such as “at least one of a, b, or c,” “at least one of a, b, and c,” and “a, b, c, or any combination thereof” may be a only, b only, c only, a and b, a and c, b and c, or a and b and c, where any such combinations may contain one or more member or members of a, b, or c. as used herein, the term “about” and “approximately” are defined to being close to as understood by one of ordinary skill in the art. in one non-limiting embodiment, the term “about” and “approximately” are defined to be within 10%, preferably within 5%, more preferably within 1%, and most preferably within 0.5%.
167-247-189-339-295
US
[ "AU", "US", "CA", "WO", "EP" ]
G07F17/32,G06F12/00,A63F9/24,A63F13/00,G06F19/00
2012-01-13T00:00:00
2012
[ "G07", "G06", "A63" ]
automated discovery of gaming preferences
systems and methods for automated discovery of gaming preferences and delivery of gaming choices based on gaming preferences are disclosed. the systems and methods may operate in real time and may detect and analyze data representing various game features and/or game player behavior and match the data with predetermined models, profiles or game player types. game choices may then be presented to the game player based on the analysis of the data. systems and methods to analyze and categorize the game player behavior are also disclosed, including mining data in a cluster model based analysis to identify and develop the models, profiles or game player types and to select the games to be provided for each of the identified models, profiles or game player types. a different collection of games may be provided for each of the identified models, profiles or game player types.
1 - 19 . (canceled) 20 . a computer implemented method for analyzing a set of data representing game player behavior, comprising the steps of: partitioning, with a processor, a set of data into one or more game play periods, the set of data being related to one or more game factors; analyzing, with a processor, the set of data within each game play period, and creating, with a processor, at least one game player type based at least in part on the analysis of data from one or more game play periods. 21 . the method of claim 20 , further comprising the steps of: prior to the step of partitioning, collecting, with a processor, the set of data related to one or more game factors. 22 . the method of claim 20 , further comprising the steps of: identifying a selection of games for a game player type based on data related to the games and the data from the game player type. 23 . the method of claim 22 , further comprising the steps of: collecting a second set of data related to game factors for game play in an ongoing game by a current game player; analyzing the second set of data; determining at least one game player type for the current game player based on the analysis of the second set of data; and displaying, on a video display, the selection of games identified for the determined game player type. 24 . the method of claim 20 , wherein the step of analyzing the set of data comprises: performing a cluster analysis of the set of data. 25 . the method of claim 20 , wherein the step of analyzing the set of data comprises: detecting indicators from within each game play period. 26 . the method of claim 20 , wherein the step of analyzing the set of data comprises: detecting trends within the set of data. 27 . the method of claim 20 , wherein the step of analyzing the set of data comprises: detecting trends within one or more game play periods. 28 . 
the method of claim 23 , wherein the step of analyzing the second set of data comprises: detecting indicators from within the second set of data. 29 . the method of claim 23 , wherein the step of analyzing the second set of data comprises: matching the second set of data to at least one game player type. 30 . the method of claim 23 , wherein the step of analyzing the second set of data comprises: detecting trends within the second set of data. 31 . the method of claim 20 , wherein the data represents at least one feature selected from the group consisting of: game session length, play behavior, game behavior, game language, game location, game selection, elapsed time with one game, wagering behavior, game type, game theme, wager amounts, wager denominations, play rates, typical bonus values, game brand, prize distributions, amounts of incremental wagers, frequency of wagering, for instance the presence or absence of multiple rounds of wagering in a game, the number of rounds of wagers permitted in a game, maximum wager amounts permitted, minimum wager amounts permitted, amount of wagering, elapsed time between selected events for instance starting a new game, reaction to bonus rounds, reaction to progressive outputs, pay table features, amount of incremental wagers, frequency of wagering, elapsed time for player reaction, amount of wagering, elapsed time between wagers, frequency of player action, game rules, game complexity, ability for a player to control or have an effect on a game outcome, whether an outcome is predetermined, whether parallel wagering is provided, average game speed, average wager amounts, average wager rate, presence or frequency of bonus rounds, presence and frequency of progressive outputs, payout percentages, win rates, win percentages, loss rates, loss percentages, use of special features, frequency of use of special features, number of lines played, total amount wagered, and type of payment received. 32 . 
the method of claim 23 , wherein the step of collecting a second set of data related to game factors involves collecting data from the moment that the player begins to play a game or indicates a desire to play a game. 33 . the method of claim 23 , wherein the step of collecting a second set of data related to game factors involves collecting data for a predetermined length of time. 34 . the method of claim 23 , further comprising the step of: receiving a selection of a game to play from the game player. 35 . the method of claim 23 , wherein the game player is unregistered. 36 . the method of claim 23 , wherein the game player has no player account or other information identifying the player to a wagering game system. 37 . the method of claim 23 , where the second set of data is collected during a game play session and there is no historical data related to the game player's game player type prior to the game play session. 38 . the method of claim 23 , further comprising the step of providing a module to perform the step of collecting a second set of data related to game factors. 39 . the method of claim 38 , wherein the module is provided on one or more of: a gaming device, a controller in a gaming venue, a local system in the gaming venue, a system in a data center, a system in a social media network or in a private cloud, public cloud, hybrid cloud or community cloud. 40 . the method of claim 39 , wherein the gaming device is a video lottery terminal, electronic gaming machine, personal computer, laptop computer, tablet, mobile phone, or a functional equivalent of one of the foregoing. 41 . the method of claim 23 , further comprising the steps of: receiving a selection of one of the displayed games from a player; and presenting the selected game to the player on the video display. 42 . the method of claim 23 , where the second set of data is collected during a game play session and the game player type for the game player is previously identified. 43 . 
the method of claim 42 , wherein the step of determining the at least one game player type for the current game player includes factoring in the previously identified game player type. 44 . the method of claim 42 , further comprising the step of: updating the previously identified game player type for the current game player. 45 . the method of claim 42 , wherein the step of determining at least one game player type for the current game player based on the analysis of the second set of data involves determining at least one updated game player type that is different from a previously identified game player type; and the method further comprises the step of: changing the previously identified game player type for the current game player to the updated game player type. 46 . the method of claim 20 , further comprising modifying at least one game player type based at least in part on the analysis of data from an additional set of one or more game play periods. 47 . a wagering game system comprising: an electronic gaming machine configured to collect a set of data related to one or more game factors, and a modeling module to receive the set of data and configured to: partition the set of data into one or more game play periods; analyze the set of data within each game play period, and create at least one game player type based at least in part on data from at least one game play period. 48 . a system for modeling game player behavior comprising: a modeling module to receive a set of data related to one or more game factors and configured to: partition the set of data into one or more game play periods; analyze the set of data within each game play period, and create at least one game player type based at least in part on data from at least one game play period. 49 . the system of claim 48 , further comprising: an electronic gaming machine configured to collect a set of data related to one or more game factors. 50 - 53 . (canceled)
prior applications this application is a divisional of u.s. patent application ser. no. 13/738,790 filed jan. 10, 2013 which claims the benefit of u.s. provisional patent application no. 61/586,547 filed jan. 13, 2012, the entire disclosures of which are expressly incorporated herein by reference. field of the invention the disclosure relates to systems and methods for providing wagering games on electronic gaming machines. in particular, systems and methods are provided for automated identification of gaming preferences and presentation of a customized set of games to a player based on the identified gaming preferences. background with the emergence of server network and cloud based gaming in the wagering gaming industry a known approach is to download a library of game content to gaming machines from a centralized system. the library of game content is typically not personalized or targeted to a player's preferences, behavior, changing habits or to different types of player segments. the library of game content may be specifically targeted based on a fixed gaming property or history, so that the library is tailored to specific player types based on market research, player research or focus group studies, for instance marketing studies. furthermore, the operators of the gaming machines and game suppliers may waste time, money and other resources developing and downloading games to thousands of machines in casinos and other venues and these games may not satisfy the unique needs and desired player experience or behavior for specific players. to enhance the game playing experience, it may be beneficial to personalize the selection of games that are offered to individual game players. one method of personalizing a game selection is based on player identification. the method provides a choice of game content that matches the player's previous game selections or demographic information specific to the identified player. 
us20080032787a1 discloses a system that recommends games to a player, where the recommendation is based on personal game selection information including demographic information and/or historical game play information specific to the player. us20100298040a1 discloses a gaming recommender system where games are recommended based on theme, brand, game player demographic, past games played by the player, and length of play of games. both of these disclosures describe the use of historical data that is tied to a specifically identified player. these systems depend on player identification and cannot provide good recommendations for new players for whom there is no historical data or even for regular players when new sets of games are introduced to the gaming system. instead, the tendency with these systems is for the player to play the same games over and over. us20070219000a1 discloses a gaming system that recommends specific games where the recommendation data is determined by the operators of the gaming system. this forces the player to select a game from among choices of games provided by the game operators. the game player preferences are secondary to the selections of game operators. a significant problem with this approach is that the game operator recommended games might not match the player's preferences. us20070054738a1 and wo2009/097538a1 describe game selections being based on a keyword provided by a player. these systems may not provide suitable selections since they are dependent on matching algorithms that work against the player provided keyword. therefore, there is a need for gaming systems and methods invoking new ways to provide game recommendations to regular players and to new players. summary new systems and methods for automatic discovery of gaming preferences are provided herein. in certain embodiments, the systems and methods provide personalized content for a game player in real-time. 
the systems and methods allow gaming machines to dynamically and in real time predict and offer game content that satisfies the real player experience as opposed to pre-loaded or downloaded games based on market research or focus group studies. the systems and methods disclosed herein provide recommendations to a player without the need for historical preferences or demographic information about the player. player behavioral data monitoring and analysis is performed with anonymous player data, that is, the player need not be specifically identified. further, the data can be collected and analyzed during live game play. in certain embodiments, the systems and methods monitor the player behavior in real-time and then offer game content to match, track or reflect the behavior and even mood of the player at or near that particular instant in time. in one embodiment, a computer implemented method of operating a wagering game is provided. preferably the method operates in real time, that is, during live, actual game play by a game player. the method includes the steps of: using a processor to collect a first set of data related to game factors for game play in an ongoing game by a current game player; analyzing, with a processor, the first set of data; determining, with a processor, at least one game player type from among a set of predefined game player types for the current game player based on the analysis of the first set of data; and displaying, on a video display, a selection of games identified for the determined at least one game player type. in another embodiment, a computer implemented method of analyzing a set of data representing game player behavior is provided. 
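the real-time flow summarized above (collect session data, analyze it, determine a game player type from among predefined types, display that type's games) can be sketched as follows. this is a minimal illustration, not the patent's implementation: the feature names, player types, centroids, and game titles are all hypothetical, and a nearest-centroid match stands in for whatever analysis the deployed system performs.

```python
# Hypothetical sketch of the real-time matching step. Two session features
# are assumed: average wager and wagers per minute. Type names, centroid
# values, and game titles are invented for illustration.
import math

PLAYER_TYPES = {
    # type name -> (centroid of [avg_wager, wagers_per_min], game selection)
    "cautious": ((1.0, 2.0), ["Lucky 7s", "Gold Rush"]),
    "high_roller": ((25.0, 6.0), ["Diamond Max", "Royal Flush Pro"]),
}

def determine_player_type(session_features):
    """Return the predefined type whose centroid is closest to the session."""
    return min(
        PLAYER_TYPES,
        key=lambda t: math.dist(PLAYER_TYPES[t][0], session_features),
    )

def games_for_session(session_features):
    """Look up the game selection identified for the determined type."""
    return PLAYER_TYPES[determine_player_type(session_features)][1]

# a session with large, frequent wagers matches the "high_roller" type:
print(games_for_session((22.0, 5.0)))  # ['Diamond Max', 'Royal Flush Pro']
```

note that nothing here identifies the player: only anonymous session features are used, consistent with the anonymous-data point made above.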
preferably the method comprises the steps of: partitioning, with a processor, a set of data into one or more game play periods, the set of data being related to one or more game factors; analyzing, with a processor, the set of data within each game play period, and creating, with a processor, at least one game player type based at least in part on the analysis of data from one or more game play periods. in another embodiment, a wagering game system is provided. the system comprises: an electronic gaming machine configured to collect a set of data related to one or more game factors, and a modeling module to receive the set of data. the modeling module is configured to: partition the set of data into one or more game play periods; analyze the set of data within each game play period, and create at least one game player type based at least in part on data from at least one game play period. in another embodiment, a system for modeling game player behavior is provided. the system comprises: a modeling module to receive a set of data related to one or more game factors. the modeling module is configured to: partition the set of data into one or more game play periods; analyze the set of data within each game play period, and create at least one game player type based at least in part on data from at least one game play period. in yet another embodiment, a wagering game system is provided comprising: an electronic gaming machine configured to provide a selection of wagering games to a game player having a wagering game system registration; and a processor configured to analyze a set of data and determine at least one game player type from among a set of predefined game player types for a game player based on the analysis of a set of data related to game play by the game player. in another embodiment, a non-transitory computer readable medium having instructions stored thereon is provided. 
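the modeling module's steps (partition a set of data into game play periods, analyze the data within each period, create player types) can be sketched roughly as follows. everything here is an assumption for illustration: the idle-gap partitioning rule, the average-wager summary, and the one-dimensional split standing in for a genuine cluster analysis.

```python
# Hypothetical sketch of the modeling module. Events are (timestamp, wager)
# pairs; a gap longer than max_gap seconds starts a new game play period.
# The "cluster analysis" is reduced to a one-dimensional split on average
# wager purely for illustration.

def partition_periods(events, max_gap=300.0):
    """Split (timestamp, wager) events into periods at gaps > max_gap."""
    periods, current = [], []
    for t, wager in sorted(events):
        if current and t - current[-1][0] > max_gap:
            periods.append(current)
            current = []
        current.append((t, wager))
    if current:
        periods.append(current)
    return periods

def summarize(period):
    """Summarize a period by its average wager."""
    wagers = [w for _, w in period]
    return sum(wagers) / len(wagers)

def create_player_types(events, split=10.0):
    """Toy stand-in for cluster analysis: one type per side of a split."""
    summaries = [summarize(p) for p in partition_periods(events)]
    return {
        "low_stakes": [s for s in summaries if s < split],
        "high_stakes": [s for s in summaries if s >= split],
    }

events = [(0, 1), (60, 2), (1000, 20), (1030, 30)]
print(create_player_types(events))
# {'low_stakes': [1.5], 'high_stakes': [25.0]}
```

a production module would presumably replace the split with a proper clustering method and summarize periods over many of the game factors listed in the claims, but the partition-analyze-create structure is the same.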
when executed, the instructions are operable to cause a computerized wagering game system to: collect, with a processor, a first set of data related to game factors for game play in an ongoing game by a current game player; analyze, with a processor, the first set of data; determine, with a processor, at least one game player type from among a set of predefined game player types for the current game player based on the analysis of the first set of data; and display, on a video display, a selection of games identified for the determined at least one game player type.

brief description of the figures

certain embodiments of the invention are illustrated in the figures of the accompanying drawings in which: fig. 1 is a block diagram depicting a wagering game network according to one embodiment of the invention. fig. 2 is a block diagram depicting components of a system that generates the gaming and play behavior model according to certain embodiments of the invention. fig. 3 illustrates a representation of play data in accordance with an embodiment of the invention. fig. 4 is a block diagram depicting components of an exemplary system for automatic discovery of gaming preferences in accordance with an embodiment of the invention. fig. 5 is a flowchart illustrating a method of determining and providing a selection of games for a player. fig. 6 is a flowchart illustrating another method of determining and providing a selection of games for a player. fig. 7 is a flowchart illustrating a method of creating a game player type and identifying games suitable for the created game player type.

detailed description

for simplicity and illustrative purposes, the principles of the present invention are described by referring mainly to various exemplary embodiments thereof.
although the preferred embodiments of the invention are particularly disclosed herein, one of ordinary skill in the art will readily recognize that the same principles are equally applicable to, and can be implemented in other systems, and that any such variation would be within such modifications as do not depart from the true spirit and scope of the present invention. before explaining the disclosed embodiments of the present invention in detail, it is to be understood that the invention is not limited in its application to the details of any particular arrangement shown, since the invention is capable of other embodiments. throughout this description, certain acronyms and shorthand notations are used. these acronyms and shorthand notations are intended to assist in communicating the ideas expressed herein and are not intended to limit the scope of the present invention. other terminology used herein is for the purpose of description and not of limitation. methods and systems for providing automated discovery of gaming preferences are provided. the gaming preferences can then be used to assemble individualized recommendations of suitable games for a player. the system may operate anonymously, for instance, where the game player is unidentified or unrecognized by the gaming system. alternatively, the game player may be identified to the gaming system, for instance through a game player account, a responsible gaming account, a social network account, or other suitable indicia of identification. in one embodiment, player game session data may be used to build a gaming and play behavior model that represents different aspects such as play, game and wagering behavior. as used herein, gaming and play behavior is represented by data related to any one or more of a plurality of different game features.
game features may include, for instance: game session length; wager denominations, play rates (number of games played per time segment), typical bonus values, and other features as described below. for example, the model could include a cluster of games that are suited to players that like to play games for a shorter time with large amounts of money wagered. another cluster includes games that are more suitable for players that like to play for longer times with smaller amounts of money. in one embodiment, when a player begins to play a game, data related to the player's game playing behavior is detected and analyzed. based on the analysis of this data, the player can be classified, in real time, into one of the existing clusters. once classified, the games associated with the most relevant cluster are suggested to the player. the suggested games can be offered to the player in any of a variety of ways, for instance on the main game screen, on a service window or on a banner on the top, bottom or side of the screen. the suggested games can also be offered in an online gaming system. components of an exemplary system 10 for automatic discovery of gaming preferences are shown in fig. 1 . these include a central system 12 having a gaming server 14 and a recommendation server 16 . the central system 12 may be connected by a network 18 to various gaming devices 20 a , 20 b , . . . 20 n . the network 18 may include a social media network or other suitable network such as a wan or lan. game play data may be collected from the gaming devices 20 a , 20 b , . . . 20 n and sent through the network 18 infrastructure back to the central system 12 . the gaming devices may be wired or wireless mobile gaming devices in any type of gaming setting, for instance dedicated electronic gaming machines as are commonly found in casinos and other venues. fig.
2 shows the main components of the system 30 that generates the gaming and play behavior model 32 , including a preprocessor 34 , a feature extractor 36 , and an analytic module 38 . in certain embodiments the analytic module 38 is configured to perform a clustering function, as described below. the system 30 of fig. 2 may be provided with access to two databases, a games database 40 and a play data database 42 . the play data database 42 may include two sub-components: (a) raw historical transaction records collected from gaming devices during past sessions and (b) a cluster model of the raw player data. in one embodiment, the data for the historical transaction records may be stored in the form of journal files and includes historical raw play data. in particular, the raw historical transaction records may include data related to player wagering and other real-time game play characteristics including game selection; amounts of incremental wagers; wagering frequency; elapsed time; reaction to bonus rounds; reaction to progressive output as well as others. the games database 40 includes information on game titles available to players along with game data and features such as themes, denominations, characteristics, etc. game characteristics that may be stored in the games database 40 may include average game speed; average wager amounts; average wager rate; presence and frequency of bonus rounds; presence and frequency of progressive outputs; odds of winning; prize distributions, and others. in this embodiment, the system 30 performs a training process to generate the gaming and play behavior model 32 using a play data database 42 . this training may use a temporal representation of the raw historical transaction records within the play data database 42 . one embodiment of a temporal representation of the raw play data is depicted in fig. 3 .
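as a rough sketch of how raw play records might be partitioned into sessions of continuous play for such a temporal representation, the following python fragment groups timestamped records into sessions wherever the gap between successive records stays below a threshold. the record format, the inactivity threshold, and the function name are illustrative assumptions, not part of the disclosure:

```python
from datetime import timedelta

# hypothetical sketch: split raw historical transaction records into
# sessions of "continuous" game play. any gap longer than max_gap
# starts a new session.
def partition_sessions(records, max_gap=timedelta(minutes=5)):
    """records: list of (timestamp, event) tuples, sorted by time."""
    sessions = []
    current = []
    last_time = None
    for ts, event in records:
        if last_time is not None and ts - last_time > max_gap:
            sessions.append(current)
            current = []
        current.append((ts, event))
        last_time = ts
    if current:
        sessions.append(current)
    return sessions
```

a session could equally be cut by a fixed time period or a fixed number of rounds, as the disclosure also contemplates.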
in this exemplary process, the raw data within the historical transaction records is pre-processed and partitioned into different sessions 50 a , 50 b . in this embodiment, each session represents a continuous game play, meaning a series of games that were played in a generally uninterrupted fashion. alternately, each session might represent a particular time period of game play, for instance 15 minutes, 30 minutes, an hour, or another suitable time period. in another alternative, each session may represent a particular number of rounds of a game, for instance 5, 10, 20 or another suitable number of rounds of a game. as shown in fig. 3 , the play data may be represented using a window style or other graphical approach which includes a variety of different “game features” ( fig. 3 , y-axis, f 0 . . . fn). in one embodiment, data for 28 different game features is tracked for each session. exemplary game features include: game session length, play behavior, game behavior, game language, game location, game selection, elapsed time with one game, wagering behavior, game type, game theme, wager amounts, wager denominations, play rates, typical bonus values, game brand, prize distributions, amounts of incremental wagers, frequency of wagering, for instance the presence or absence of multiple rounds of wagering in a game, the number of rounds of wagers permitted in a game, maximum wager amounts permitted, minimum wager amounts permitted, amount of wagering, elapsed time between selected events for instance starting a new game, reaction to bonus rounds, reaction to progressive outputs, pay table features, amount of incremental wagers, frequency of wagering, elapsed time for player reaction, amount of wagering, elapsed time between wagers, frequency of player action, game rules, game complexity, ability for a player to control or have an effect on a game outcome, whether an outcome is predetermined, whether parallel wagering is provided, average game speed, average wager 
amounts, average wager rate, presence or frequency of bonus rounds, presence and frequency of progressive outputs, payout percentages, win rates, win percentages, loss rates, loss percentages, use of special features, frequency of use of special features, number of lines played, total amount wagered, and type of payment received. as shown in fig. 3 , the x-axis represents time in the game session. the game features may be organized into time windows w 0 , w 1 showing the occurrence of the features over time. collectively the representation of the data as shown in fig. 3 allows for analysis and detection of “play patterns” through the data and through the various sessions. the size of the window is adjustable and defines a minimum number of incidents necessary to categorize behavior. for instance, in one embodiment, the window size may be set to, for instance, 12 play actions, so that whenever there are 12 play actions in a session the feature may be used as part of the characterization of the game play behavior. this representation has several advantages: 1) captures behavior as temporal patterns of the play features; 2) variations in session length are not a factor (so long as sessions meet the minimum length); 3) game titles can be introduced to map player behavior into game preferences. referring back to fig. 2 , the analytic module 38 is a software application or program used to perform a statistical data analysis. in one embodiment, the analytic module 38 is configured to perform a cluster analysis, for instance to group play data into different clusters. additionally, the analytic module 38 may be configured to analyze the play data and identify the different clusters based on this analysis, before grouping the data into the different clusters. any suitable clustering algorithm may be used for performing the statistical data analysis and grouping the data into appropriate clusters to form a cluster model.
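as a rough sketch of the kind of cluster analysis the analytic module might perform, the following stdlib-only python fragment implements a minimal k-means grouping of per-session feature vectors. a deployed system would instead use a scalable, streaming-capable clustering library; the deterministic initialization, fixed iteration count, and function name are illustrative assumptions:

```python
import math

# minimal k-means sketch: group per-session feature vectors into k
# clusters by repeatedly assigning each vector to its nearest
# centroid and recomputing centroids as group means.
def kmeans(vectors, k, iterations=20):
    centroids = [tuple(v) for v in vectors[:k]]  # deterministic init
    for _ in range(iterations):
        groups = [[] for _ in range(k)]
        for v in vectors:
            i = min(range(k), key=lambda c: math.dist(v, centroids[c]))
            groups[i].append(v)
        centroids = [
            tuple(sum(col) / len(g) for col in zip(*g)) if g else centroids[i]
            for i, g in enumerate(groups)
        ]
    return centroids
```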
preferably, a scalable clustering approach that allows for a selection of the number of clusters and support for automatic feature selection is used. in one embodiment, a cluster model is developed automatically using clustering techniques operative for handling and working with large datasets. preferably the data analysis techniques support streaming (i.e., where the cluster model is updated as new data supports development or modification to the clusters, for instance based on drift in the underlying game play data and behavioral concepts). as used herein, the cluster model includes the identification of different clusters as well as the features relied on to distinguish these clusters. in one embodiment a two stage hierarchical training process is employed. the analytic module 38 generates a gaming and play behavior model. the model includes a number of clusters where each cluster represents a set of game features. suitable game features are described throughout this disclosure. groups of clusters may be assembled and assigned to particular gaming trends or behaviors. for instance, a group of clusters may be assembled to identify game players that prefer short games with relatively low wagers. another group may be assembled for game players that prefer games with multiple rounds of betting or larger wager amounts. as an alternative to or in addition to clustering, the statistical analysis may employ other data analytic techniques such as factor or regression analysis. fig. 4 depicts components of an exemplary system for automatic discovery of gaming preferences 60 . in this embodiment, a player actions collector 58 collects data related to actions taken by a player during game play. this data may include various game features; suitable game features are described throughout this disclosure.
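the assembly of clusters into gaming trends, such as short sessions with large wagers, might be sketched as follows. the two assumed centroid features (average session minutes, average wager) and the cutoff values are illustrative only and are not taken from the disclosure:

```python
# hypothetical grouping of a cluster centroid into a behavioral
# trend label, using two assumed feature positions:
# (average session minutes, average wager amount).
def label_cluster(centroid, length_cut=15.0, wager_cut=2.0):
    minutes, wager = centroid
    length = "short" if minutes < length_cut else "long"
    stake = "high" if wager >= wager_cut else "low"
    return f"{length}-session/{stake}-wager"
```

clusters sharing a label can then be treated as one group assigned to a particular gaming trend or behavior.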
in one embodiment, the player actions collector 58 collects player data from the moment a player inserts a player card or begins a wagering game, for instance by inserting a wager, or pressing a start button or otherwise providing an indication of a player's desire to play a wagering game. in certain embodiments, the player data comes directly from the gaming device 62 . the system may be configured to collect data for a predetermined or preset length of time, which time period may be adjustable by the game operator. software in the system may be configured to perform a preprocessing step, involving cleaning the data collected by the player actions collector 58 with a preprocessor 64 . cleaning the data may involve any one or more of the following subtasks: noise reduction or removal, identification and removal of outlying data entries, and resolving inconsistencies in the data. cleaning may also refer to taking data in a raw or uncleaned state or form and converting the data into a form that is better suited for a mining or modeling task. for instance, cleaning may include processing or removal of extraneous or unnecessary data such as meta data, tags, or empty fields. software in the system may also be configured to filter the data from the player actions collector 58 with a features extractor 66 . in this context, filtering refers to a specific approach to feature extraction where redundancies (i.e., attributes carrying less information) are eliminated by a function or ranking process. other techniques for data manipulation may also be used or they may be used in the alternative, for instance wrapper, embedded and search based models of data management and manipulation. the preprocessing step and feature extracting steps may be performed separately, in sequence or in parallel, or they may be performed together. similarly, the software module or engine(s) that perform these steps may be provided separately or together. 
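a hedged sketch of the preprocessing and filter steps might look like the following, where cleaning drops empty fields and filtering ranks features by variance as a simple stand-in for the information-based ranking described above. the function names and the variance criterion are assumptions:

```python
import statistics

# cleaning sketch: drop empty or missing fields from a raw record.
def clean(record):
    return {k: v for k, v in record.items() if v not in (None, "", [])}

# filter sketch: rank numeric features by population variance and
# keep the most informative ones (low-variance features carry the
# least information and are eliminated).
def filter_features(rows, keep=2):
    names = sorted(set().union(*(r.keys() for r in rows)))
    ranked = sorted(
        names,
        key=lambda n: statistics.pvariance([r.get(n, 0.0) for r in rows]),
        reverse=True,
    )
    return ranked[:keep]
```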
in one embodiment, in a pre-defined time period, for instance a time period beginning from the start of game play, the system for automatic discovery of gaming preferences 60 begins to attempt to match the player's session gaming behavior with one or more specific clusters of game content that have previously been identified by the data mining steps, described herein (those steps involved in cluster model generation or other suitable analysis). the results of this matching are used to determine which of the one or more previously identified clusters of game content are most closely matched with the player and game wagering behavior. in one embodiment, each previously identified cluster of game content is matched to at least one unique game player type. in this way, the player may be assigned one of several game player types. the matching and determination of a game player type may be determined by a classifier 68 in a classifying or determining step where the game player is classified into a game player type. in another embodiment, a player may provide and the system may receive a selection of a game to play from the game player. this selection may be used in the determination of the at least one game player type. in another embodiment, a player may provide or the system may receive (either from the player or otherwise) geographical data related to the location of the game. this geographical data may be provided by the game operator. this geographical data may be used in the determination of the at least one game player type. in another embodiment, a player may provide or the system may receive (either from the player or otherwise) data related to the language of the game. this language data may be provided by the game operator or a game itself. this language data may be used in the determination of the at least one game player type.
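the classifying step performed by the classifier 68 could be sketched as a nearest-centroid match of the live session's feature vector against previously learned clusters, each mapped to a game player type. the data shapes and names here are illustrative assumptions:

```python
import math

# classifier sketch: find the learned cluster centroid nearest to the
# current session's feature vector, then return the game player type
# associated with that cluster.
def classify_player(session_vector, centroids, cluster_to_type):
    """centroids: dict cluster_id -> feature vector;
    cluster_to_type: dict cluster_id -> game player type."""
    best = min(centroids,
               key=lambda cid: math.dist(session_vector, centroids[cid]))
    return cluster_to_type[best]
```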
after or responsive to the determination of a game player type, the player is provided with a plurality of games from which to choose. the plurality of games may be provided to the player (chosen via a game selector process 70 ) that is better matched to the game player type through a real-time window on the gaming machine. the player may be offered a choice on whether they would like to be informed of new games before initiating the first game play. the recommender system may send an alert message to the gaming machine during the game play or at the end of a game. the alert message may provide new or different game selections expected to satisfy the player experience for the identified game player type. alternatively, or additionally, the selection of different games may be provided to the player on a video screen between rounds of a game. alternatively, the player may be offered a choice of selected games based on the identified game play type through a separate area on the screen of the gaming machine. in such an embodiment, the new game may run and operate and be displayed in the same separate area on the screen of the gaming machine. in this embodiment, the player has the option of playing the pre-loaded game on the machine and, at the same time, trying out one or more games suggested based on the identified game player type. the new games suggested to the player could be of different themes and genres (linked, community, social, progressive, tournament, episodic etc.) than the pre-loaded games on the gaming machines. in addition, in another embodiment the system may recommend games based on one or more time slices, where a time slice represents a discrete duration of activity, such as game play. for instance analysis of a 7 day time slice may provide a different game player type and selection of games than an analysis for the same player based on a longer time slice, for instance a 10 day time slice.
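the time slices described above might be produced as in the following sketch, which partitions timestamped play events into fixed-duration slices (for instance 1-hour or 1-day slices chosen by the operator). the epoch-seconds representation and the function name are assumptions:

```python
# sketch: partition timestamped play events into fixed-duration time
# slices so recent slices can be analyzed or weighted separately.
def time_slices(events, slice_seconds):
    """events: list of (epoch_seconds, event), sorted by time."""
    slices = {}
    for ts, event in events:
        index = int(ts // slice_seconds)
        slices.setdefault(index, []).append(event)
    return [slices[i] for i in sorted(slices)]
```

a 7 day and a 10 day analysis would then simply run the same model over different numbers of trailing slices.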
the system may be configured to calculate the differences between the two analyses (the 7 day time slice and the 10 day time slice). the system may then recommend games based wholly or in part on only the more recent or longer duration time slice. alternatively, the system may recommend games based on a combination of the recent time slice match and the longer time slice match. further, the system is configured to have the ability to store and partition data into later-defined, time-slice-based patterns. in this instance, the system is configured to allow for time slicing a data set into discrete time slices, as an example, 1 hour slices, or 1 day slices, or whatever time period is deemed desirable by the game operator. in another embodiment, a player, either unregistered or registered, may be prompted, at least once, by an electronic gaming machine, to agree to the system monitoring his game playing. alternatively, or additionally, the player may be prompted to agree to the system collecting game play data related to the activity of the player. accordingly, the methods may include the steps of: receiving an indication of agreement to monitoring of game play from the game player, and/or receiving an indication of agreement to collection of game play data from the game player. according to subsequent live game playing data collection and analysis, the player may then be presented with a set of games selected to match the player's gaming preferences. further, the system may update or change the player's game player type based on live or near live game playing data or metrics. in one embodiment, the system may update the player's game player type after a predetermined number of games are played or after a predetermined length of time.
the predetermined number of games or predetermined length of time may be set by a game operator, for instance a casino or electronic gaming machine operator, or by the game player, for instance by requesting that the game player input how often or frequently they would like to be presented with a new selection of games. the unregistered player may be prompted again to agree to the system monitoring his game playing at another electronic gaming machine within the same establishment (for instance a casino or a video lottery terminal system with geographical limits, or within geographical limits, for instance, by an online gaming system). in another embodiment, a registered player having an account or other method by which the player might be identifiable to a gaming system is logged into the system, for instance with an electronic gaming machine, or online, and is prompted for approval at least once, at the electronic gaming machine or online, to agree to the system monitoring his game playing. according to subsequent live game playing data collection and analysis the player may then be presented with a set of games selected to match the player's gaming preferences. for instance, the system may have previously assigned the player a game player type based on historical game play data. further, the system may update or change the player's game player type based on live or near live game playing data or metrics. in one embodiment, the system may update the player's game player type after a predetermined number of games are played or after a predetermined length of time. the predetermined number of games or predetermined length of time may be set by a game operator, for instance a casino or electronic gaming machine operator, or by the game player, for instance by requesting that the game player input how often they would like to be presented with a new selection of games. in one embodiment, a registered player has a responsible gaming account or profile.
in such an embodiment, the system is configured to consider data or other information from the responsible gaming account in determining the profile for the player or in adjusting a game selection previously offered to a player or previously determined without consideration of the existence of a responsible gaming account or data associated with that account. in adjusting a game selection, the system may take a selection of games based on a determined player profile and then add or remove games, the addition or subtraction of games being based on the data associated with or the presence of the player registration or the responsible gaming account. in one embodiment, the system may recommend a selection of games in whole or in part also due to the existence of the responsible gaming profile of the player, in addition to, or as an alternative to, data associated with the responsible gaming profile. the responsible gaming data may be processed by the system but stored separately, for instance in a separate responsible gaming database or module. in one embodiment, the methods include the step of determining that the wagering game system has responsible gaming data related to registered game players and including the responsible gaming data in the determination of the at least one game player type. additionally, the system may recommend at least one game to the current player that has previously been recommended to registered game players having the same game player type, a similar game player type or a substantially similar game player type. for a non-registered player, or a player that is unidentified to the gaming system, if the player profile resulting from a live session based analysis falls within a particular risk category, or otherwise identifies certain risk factors, then the system may, in part or in whole, recommend a selection of games which it would otherwise recommend to registered players also having that risk category.
in another embodiment, the player may request to be presented with a new selection of games, for instance at any time during game play. in one such embodiment, the player would press a button or other indicator to cause the machine to present a new selection of games based on recent or historical game play behavior. referring to fig. 5 , in a method that may either be performed as a separate embodiment of the inventive concepts of this disclosure or as a continuation of the steps described below to create a game player type, or identify a selection of games, from a collection of data related to game play, a set of method steps 80 may be used to discover the gaming preferences of a game player and to present the player with a selection of games predicted to match those gaming preferences. the steps including collecting at or near real-time data 82 representative of ongoing game play, analyzing this data 84 , optionally determining a game player type 86 and then presenting the game player with a set of games to play 88 , where the set of games is selected based on preferences detected from the player's unique behaviors or preferences detected from the data representative of ongoing game play. additionally, the gaming preferences of a player and even game player type may be derived or obtained from a player's social networking accounts. in this instance, the system would customarily request permission to access the player's social networking account. this embodiment where social networking information or data is factored into the selection of games or the determination of the game player type may be used only with registered players, or it may also be used with players that are unregistered or unidentified or even those that do not have a player account. in such instance, the wagering game system may hold, or have access to, no information or player account identifying the player to the wagering game system.
such a social networking persona may be derived through proprietary software or available third-party software. the persona may be used in part to recommend games to registered players or even to players which have patterns similar to registered players being offered the selections. in addition, eligible games offered for selection may include non-wagering games, online games as well as wagering games for electronic gaming machines. referring again to fig. 5 , in a method similar to that described above, that may be performed as a separate embodiment of the inventive concepts of this disclosure, a set of method steps may be used to discover the gaming preferences of a game player 80 . the steps including collecting at or near real-time data representative of ongoing game play 82 , analyzing this data 84 , optionally determining a game player type 86 and then optionally presenting the game player with a set of games to play 88 , where the set of games is selected based on preferences detected from the player's unique behaviors or preferences detected from the data representative of ongoing game play. in this method, the gaming preferences of a game player may be determined to some extent even without the steps of determining a game player type 86 and displaying a selection of games 88 . the method includes the step of collecting a set of data related to game factors for game play in an ongoing game by a current game player 82 . this collection of data is performed during a game player's actual game play, in real time or near real time. these game factors may be the same as or a larger set or subset of the game factors described above with respect to analyzing the larger data set used to generate game player types.
a separate software module may be provided to handle collection of the data and this module may be provided in any suitable location or device, for instance, a gaming device, a controller in a gaming venue, a local system in the gaming venue, a system in a data center, a system in a social media network or in a private cloud, public cloud, hybrid cloud or community cloud. the method also includes the step of analyzing the collected set of data 84 . certain game factors, or indicators, may be weighted or the dimensions of measurement adjusted so that they are more important or less important than other factors in the overall analysis of the data. in one embodiment, the data analysis is performed using a cluster analysis of the collected set of data. additionally, or alternatively, the analysis may simply involve identification of particular game factors, the frequency of these game factors, any trends in the appearance of the game factors (for instance, whether particular factors tend to appear closer together in time), or a combination of these different indicators. the method may also include determining at least one game player type 86 for the current game player based on the analysis of the collected set of data. as described above, for instance, the analysis may reveal that a game player continually selects different games. the system may, for instance, interpret and determine this as an indicator that the player does not favor games of the type that he stopped playing and use this information to assign the player an appropriate game player type. in another example, if the game player continues to play longer games with multiple rounds of wagers, then the system would identify a game player type that exhibits these features. the system may then display, on a video display, the selection of games 88 identified for the game player type determined by the analysis of the collected set of data.
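the weighting of game factors mentioned above could be sketched as a weighted match of observed factor values against predefined player-type profiles, where the highest weighted total selects the game player type. the profiles, weights, and names below are illustrative assumptions:

```python
# sketch: score each predefined player-type profile by summing the
# observed value of every factor in the profile, scaled by an
# operator-set weight (default weight 1.0), and pick the best match.
def score_types(observed, type_profiles, weights):
    scores = {}
    for name, profile in type_profiles.items():
        scores[name] = sum(
            weights.get(f, 1.0) * observed.get(f, 0.0) * profile.get(f, 0.0)
            for f in profile
        )
    return max(scores, key=scores.get)
```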
the player may then make a selection of one of the displayed games and the game machine presents the selected game to the player. for instance, the selection of games may be presented on a video lottery terminal, electronic gaming machine, personal computer, laptop computer, tablet, mobile phone, or a functional equivalent of one of the foregoing. fig. 6 shows another embodiment of a method 100 to discover the gaming preferences of a game player and to present the player with a selection of games predicted to match those gaming preferences. the method of fig. 6 includes steps of collecting data 102 representative of ongoing game play, analyzing this data 104 , optionally determining a game player type 106 and then presenting the game player with a set of games to play 108 , similar to the steps described above with reference to fig. 5 . additionally, fig. 6 shows the step of collecting a second or additional set of data 110 related to game factors for game play in an ongoing game by a current game player. specifically the collection of data 102 and analysis of this data 104 may be performed in a manner similar to that described above with reference to fig. 5 . thus, this collection of data 110 is performed at or near real time during ongoing actual game play by a game player. the second or additional set of data may be provided in a time period separate from (for instance after) or overlapping the first set of data. the second set of data may relate to a longer period of time than the first set of data. alternatively, the second set of data may relate to a different set of game factors than the first set of data. these game factors may be the same as or a larger set or subset of the game factors described above with respect to analyzing the larger data set used to generate game player types and suitable game factors are described throughout this disclosure. additionally, the second set of data may be larger than the first set of data.
the method may also include the step of analyzing the second set of data 112 . certain game factors may be weighted or the dimensions of measurement adjusted so that they are more important or less important than other factors in the overall analysis of the data. in one embodiment, the data analysis is performed using a cluster analysis of the second set of data. additionally, or alternatively, the analysis may simply involve identification of particular game factors, the frequency of these game factors, any trends in the appearance of the game factors (for instance, whether particular factors tend to appear closer together in time), or a combination of these different indicators. the method may also include determining at least one game player type 114 for the current game player based on the analysis of the second set of data. for instance, the analysis may reveal that a game player continually selects different games. the system may, for instance, interpret and determine this as an indicator that the player does not favor games of the type that he stopped playing and use this information to assign the player an appropriate game player type. in another example, if the game player continues to play longer games with multiple rounds of wagers, then the system would identify a game player type that exhibits these features. thus, in this way, the system may continually monitor, collect data, and update a current game player's previously-determined game player type. in one embodiment, the step of determining the at least one game player type for the current game player includes factoring and/or updating a previously identified game player type. in one embodiment, the step of determining at least one game player type for the current game player based on the analysis of the second set of data involves determining that at least one updated game player type is different from a previously identified game player type. 
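one way to picture "factoring and/or updating a previously identified game player type" is a running blend of the player's stored factor profile with the analysis of the second data set, after which the blended profile can be re-classified. the blend factor and the profile values below are assumptions made for the sketch, not values from the disclosure.

```python
# assumed sketch: exponentially weighted blend of a stored per-factor
# profile with a newly analyzed observation. alpha=0.3 is arbitrary.

def update_profile(profile, new_obs, alpha=0.3):
    """blend the new observation into the running factor profile."""
    return tuple((1 - alpha) * p + alpha * n for p, n in zip(profile, new_obs))

profile = (1.0, 2.0, 5.0)      # previously determined factor profile
second_set = (2.0, 9.0, 1.0)   # summary of the second set of data

profile = update_profile(profile, second_set)
print(tuple(round(x, 2) for x in profile))  # (1.3, 4.1, 3.8)
```

re-running the player-type classification on the blended profile would then either confirm the previously identified type or yield an updated one.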
in this embodiment, the method may further include the step of changing the previously identified game player type for the current game player to the updated game player type. in another embodiment, a game player type may be updated based on an analysis of an additional set of not just one, but a plurality of game play periods, data sets, factors, or a combination of any of the foregoing. the system can then optionally make a determination as to whether to update the game player type 116 . in certain embodiments, the system default may be set to update the game player type and no separate determination step is necessary. in an instance where the game player type is updated, the method may proceed to display a selection of games 108 associated with the newly identified, updated, game player type. in an instance where the game player type remains unchanged, the process may continue to collect a new or the same second set of data 110 and then work back through the steps of analyzing the new or updated second set of data 112 and a subsequent determination of the game player type 114 . alternatively, where the game player type remains unchanged, the method may end (not shown). in another embodiment, the system may request and receive feedback from the game player related to the player's rating of the recently played game. for instance the system may be configured so that a player can assign a numeric rating to the recently played game. data from this rating may be combined with data about the recently played game to update a previously-determined game player type. in another embodiment, the method involves updating a previously-determined at least one game player type based on an additional set of data, the additional set of data related to game player feedback reflecting a player indication of how often the player would play the game. 
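the feedback mechanism described above, where a player's rating or indication of how often he would play a game feeds back into his game player type, can be sketched as a score adjustment. the response categories, scores, and weight below are illustrative assumptions, not taken from the disclosure.

```python
# hypothetical mapping of "how often would you play this game" feedback
# onto a score used to nudge a stored game/type affinity value.

FEEDBACK_SCORE = {"often": 1.0, "sometimes": 0.5, "never": 0.0}

def adjust_affinity(current, feedback, weight=0.25):
    """move a stored affinity part-way toward the feedback score."""
    target = FEEDBACK_SCORE[feedback]
    return (1 - weight) * current + weight * target

print(round(adjust_affinity(0.8, "never"), 2))  # 0.6
print(round(adjust_affinity(0.4, "often"), 2))  # 0.55
```

a numeric rating could be folded in the same way by first normalizing it to the 0-1 range used for the target score.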
the indication of how often the player would play the game may be received from the player in the form of a selected set of responses, for instance indicating the player would play often, sometimes, or never. the system may then display 108 , on a video display, the selection of games identified for the game player type determined by the analysis of the second set of data. the player may then make a selection of one of the displayed games and the game machine presents the selected game to the player. referring now to fig. 7 , in another embodiment, a computer implemented method 120 is provided for creating a set of game player types for use in operating a wagering game. the method may include a first step (not shown) of collecting a set of data related to one or more game factors or game features, for instance based on actual, simulated or historical game play. in another embodiment of the method, the set of data related to one or more game factors may be previously available so that the step of collecting the data may not be required for the inventive method. suitable game factors, also referred to herein as game features, are described throughout this disclosure. an optional step involves partitioning the set of data 122 into one or more game play periods. each game play period may represent a continuous or relatively continuous period of game play, for instance, a series of consecutive games played by a player in one sitting at an electronic gaming machine. this step may be combined with the step of collecting the data and it may also be combined with the step of analyzing the data 124 . in addition, gaming data may be held in a central repository and be partitioned based on geo zones which may reflect local or country based partitioning. the system may offer a mix of selections from within various partitions based upon language and geo zone, as well as time-sliced processed data. 
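partitioning the set of data into game play periods, as described above, amounts to splitting a stream of game events into sittings. a minimal sketch is to cut wherever the gap between consecutive timestamps exceeds a threshold; the 30-minute boundary below is an assumption for illustration.

```python
# minimal sketch: split sorted, timestamped game events into "game play
# periods" wherever the idle gap exceeds an assumed session boundary.

GAP = 30 * 60  # assumed session boundary, in seconds

def partition(timestamps, gap=GAP):
    """group sorted event times into continuous play periods."""
    periods, current = [], [timestamps[0]]
    for t in timestamps[1:]:
        if t - current[-1] > gap:
            periods.append(current)
            current = []
        current.append(t)
    periods.append(current)
    return periods

events = [0, 60, 120, 7200, 7260, 20000]
print([len(p) for p in partition(events)])  # [3, 2, 1]
```

partitioning by geo zone or language would layer a simple grouping key on top of (or instead of) the time-gap split shown here.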
the data is analyzed 124 to identify instances of the game factors described above, including the frequency of appearance of the game factors and their distribution within the data set; clusters, trends or other patterns are also identified. certain game factors, or indicators, may be weighted or the dimensions of measurement adjusted so that they are more important or less important than other factors in the overall analysis of the data. in one embodiment, the data analysis is performed using a cluster analysis of the set of data within each game play period. additionally, or alternatively, the analysis may be performed against the set of data without partitioning into game play periods. the data analysis allows the system to create at least one game player type 126 . in one embodiment, the game player type is an association or collection of one or more game factors, such as those described above. this association or collection may represent a particular model of game player. for instance, the data analysis may show that certain players prefer games that are quickly resolved (from start to finish) and have small wager amounts. data suggesting this trend could be used to create a game player type based on this trend. in one embodiment, the game player type is a collection of data including an identifier that allows the system to identify the collection of data and, optionally, to recognize that the data provides a game player type. the game player type may also include data which indicates the game factors defining the particular features of the games to be affiliated with the game player type. these features may be identified in the affirmative, for instance as features that should or are preferably present in the games to be affiliated with the game player type. alternatively, or additionally, some features may be identified in the negative, for instance features that should not be or are preferably not present in the games to be affiliated with the game player type. 
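the game player type described above, an identifier plus game factors stated in the affirmative and in the negative, maps naturally onto a small record. the field names and values below are assumptions made for the sketch.

```python
# illustrative data structure for a game player type: an identifier
# plus features affiliated games should have ("prefer") and should
# lack ("avoid"). names are assumed, not from the disclosure.

from dataclasses import dataclass, field

@dataclass
class PlayerType:
    type_id: str
    prefer: set = field(default_factory=set)  # affirmative features
    avoid: set = field(default_factory=set)   # negative features

# the "quickly resolved, small wager" trend from the example above
quick_small = PlayerType(
    type_id="quick-resolve-small-wager",
    prefer={"fast_resolution", "low_min_wager"},
    avoid={"multi_round_wagers"},
)
print(quick_small.type_id)
```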
in one embodiment, the method may include the system selecting games for the game player type based at least in part on the analysis of data from one or more game play periods or from analysis of the data set at large, without any partitioning or consideration of partitioning of the data into game play periods. the method may also include the step of identifying a selection of games for a game player type 128 . the identification is based on data related to the games and the information or data from the game player type. for instance, if the game player type is for players that like longer games with multiple rounds of wagers, then the system would identify a selection of games that exhibit these features. data related to a game could be provided manually or it could be generated in a separate data analysis step, for instance analysis of data representative of game play activity, for instance, live, virtual or historical play of a given game. the data related to the games could include a combination of data entered manually, for instance game theme data, as well as other data collected or assembled through analysis of game play activity. alternatively, the system may identify a selection of games based directly on the analysis of the set of data, without any creation of a game player type. in this embodiment, the selection of games may be based directly on the results of the cluster or trend analysis. the above-described embodiments of the present invention can be implemented in any of numerous ways. for example, the embodiments may be implemented using hardware, software or a suitable combination thereof. when implemented in software, the software code can be executed on any suitable processor or collection of processors, whether provided in a single computer or distributed among multiple computers. such processors may be implemented as integrated circuits, with one or more processors in an integrated circuit component. 
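identifying a selection of games for a game player type, as described above, can be sketched as scoring each game by the preferred features it has minus the avoided features it has, then ranking. the game names and feature data below are invented for illustration.

```python
# hedged sketch: rank games for a player type by matching game feature
# data against the type's preferred and avoided features. all data is
# illustrative.

games = {
    "ruby_rush":   {"fast_resolution", "low_min_wager"},
    "marathon_21": {"multi_round_wagers", "high_min_wager"},
    "quick_keno":  {"fast_resolution"},
}

prefer = {"fast_resolution", "low_min_wager"}   # affirmative features
avoid = {"multi_round_wagers"}                  # negative features

def score(features):
    """preferred features present minus avoided features present."""
    return len(features & prefer) - len(features & avoid)

selection = sorted(games, key=lambda g: score(games[g]), reverse=True)
print(selection)  # ['ruby_rush', 'quick_keno', 'marathon_21']
```

the top-scoring games would then be the selection displayed to a player assigned this game player type.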
further, a processor may be implemented using circuitry in any suitable format. it should be appreciated that a computer may be embodied in any of a number of forms, such as a rack-mounted computer, a desktop computer, a laptop computer, or a tablet computer. additionally, a computer may be embedded in a device perhaps not generally regarded as a computer but with suitable processing capabilities, including an electronic gaming machine, a web tv, a personal digital assistant (pda), a smart phone or any other suitable portable or fixed electronic device. also, a computer may have one or more input and output devices. these devices can be used, among other things, to present a user interface. examples of output devices that can be used to provide a user interface include printers or display screens for visual presentation of output and speakers or other sound generating devices for audible presentation of output. examples of input devices that can be used for a user interface include keyboards, and pointing devices, such as mice, touch pads, and digitizing tablets. as another example, a computer may receive input information through speech recognition or in other audible format. such computers may be interconnected by one or more networks in any suitable form, including as a local area network or a wide area network, such as an enterprise network or the internet. such networks may be based on any suitable technology and may operate according to any suitable protocol and may include wireless networks, wired networks or fiber optic networks. as used herein, the term “online” refers to such networked systems, including computers networked using, e.g., dedicated lines, telephone lines, cable or isdn lines as well as wireless transmissions. online systems include remote computers using, e.g., a local area network (lan), a wide area network (wan), the internet, as well as various combinations of the foregoing. 
suitable user devices may connect to a network for instance, any computing device that is capable of communicating over a network, such as a desktop, laptop or notebook computer, a mobile station or terminal, an entertainment appliance, a set-top box in communication with a display device, a wireless device such as a phone or smartphone, a game console, etc. the term “online gaming” refers to those systems and methods that make use of such a network to allow a game player to make use of and engage in gaming activity through networked, or online systems, both remote and local. for instance, “online gaming” includes gaming activity that is made available through a website on the internet. also, the various methods or processes outlined herein may be coded as software that is executable on one or more processors that employ any one of a variety of operating systems or platforms. additionally, such software may be written using any of a number of suitable programming languages and/or programming or scripting tools, and also may be compiled as executable machine language code or intermediate code that is executed on a framework or virtual machine. in this respect, the invention may be embodied as a tangible, non-transitory computer readable storage medium (or multiple computer readable storage media) (e.g., a computer memory, one or more floppy discs, compact discs (cd), optical discs, digital video disks (dvd), magnetic tapes, flash memories, circuit configurations in field programmable gate arrays or other semiconductor devices, or other non-transitory, tangible computer-readable storage media) encoded with one or more programs that, when executed on one or more computers or other processors, perform methods that implement the various embodiments of the invention discussed above. 
the computer readable medium or media can be transportable, such that the program or programs stored thereon can be loaded onto one or more different computers or other processors to implement various aspects of the present invention as discussed above. as used herein, the term “non-transitory computer-readable storage medium” encompasses only a computer-readable medium that can be considered to be a manufacture (i.e., article of manufacture) or a machine and excludes transitory signals. the terms “program” or “software” are used herein in a generic sense to refer to any type of computer code or set of computer-executable instructions that can be employed to program a computer or other processor to implement various aspects of the present invention as discussed above. additionally, it should be appreciated that according to one aspect of this embodiment, one or more computer programs that when executed perform methods of the present invention need not reside on a single computer or processor, but may be distributed in a modular fashion amongst a number of different computers or processors to implement various aspects of the present invention. computer-executable instructions may be in many forms, such as program modules, executed by one or more computers or other devices. generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. typically the functionality of the program modules may be combined or distributed as desired in various embodiments. also, data structures may be stored in computer-readable media in any suitable form. for simplicity of illustration, data structures may be shown to have fields that are related through location in the data structure. such relationships may likewise be achieved by assigning storage for the fields with locations in a computer-readable medium that conveys relationship between the fields. 
however, any suitable mechanism may be used to establish a relationship between information in fields of a data structure, including through the use of pointers, tags, addresses or other mechanisms that establish relationship between data elements. various aspects of the present invention may be used alone, in combination, or in a variety of arrangements not specifically discussed in the embodiments described in the foregoing and the concepts described herein are therefore not limited in their application to the details and arrangement of components set forth in the foregoing description or illustrated in the drawings. for example, aspects described in one embodiment may be combined in any manner with aspects described in other embodiments. also, the invention may be embodied as a method, of which several examples have been provided. the acts performed as part of the method may be ordered in any suitable way. accordingly, embodiments may be constructed in which acts are performed in an order different than illustrated, which may include performing some acts simultaneously, even though shown as sequential acts in illustrative embodiments. while the invention has been described with reference to certain exemplary embodiments thereof, those skilled in the art may make various modifications to the described embodiments of the invention without departing from the true spirit and scope of the invention. the terms and descriptions used herein are set forth by way of illustration only and not meant as limitations. in particular, although the present invention has been described by way of examples, a variety of devices would practice the inventive concepts described herein. 
although the invention has been described and disclosed in various terms and certain embodiments, the scope of the invention is not intended to be, nor should it be deemed to be, limited thereby and such other modifications or embodiments as may be suggested by the teachings herein are particularly reserved, especially as they fall within the breadth and scope of the claims here appended. those skilled in the art will recognize that these and other variations are possible within the spirit and scope of the invention as defined in the following claims and their equivalents.
167-566-310-079-068
US
[ "US", "WO", "EP", "AU", "PT", "SG", "ES", "NZ", "DK", "CA", "ZA", "LT" ]
F16G11/02,B23P11/00,D07B1/02,F16G11/03,F16G11/14,B65H69/00,D07B7/00
2011-11-18T00:00:00
2011
[ "F16", "B23", "D07", "B65" ]
method of terminating a stranded synthetic filament cable
a method for straightening, constraining, cutting and terminating a multi-stranded, non-parallel cable. the method includes: (1) dividing the cable into smaller components which are in the size range suitable for the prior art termination technology; (2) creating a termination on the end of each of the smaller components; (3) providing a collector which reassembles the individual terminations back into a single unit; and (4) maintaining alignment between the terminations and the smaller components while the terminations and the collector are in a connected state.
1. a method for terminating a cable comprising: a. providing a cable having a central axis, said cable including a plurality of strands made of synthetic tension-carrying filaments; b. splaying said plurality of strands apart in order to gain access to an end on each strand of said plurality of strands; c. providing a plurality of anchors; d. attaching each anchor of said plurality of anchors to a single strand of said plurality of strands, so that each one of said strands is connected to only one of said anchors; e. providing a collector, including i. an outer perimeter ii. said outer perimeter of said collector opening into a plurality of anchor receivers, iii. each of said anchor receivers including an aligned cable receiver; f. after each anchor of said plurality of anchors has been attached to each strand of said plurality of strands, attaching each anchor of said plurality of anchors to said collector by moving each anchor laterally inward through said outer perimeter of said collector and into one of said anchor receivers in said collector; g. providing an alignment fixture, said alignment fixture including an internal passage configured to surround said plurality of strands and guide each of said strands in said plurality of strands along a smoothly diverging path away from said central axis of said cable; h. attaching said alignment fixture to said collector; i. wherein said plurality of strands passes through said internal passage of said alignment fixture; j. wherein said alignment fixture is configured to be removable from said collector while said plurality of anchors remains attached to said collector; and k. providing an attachment feature on said collector. 2. the method for terminating a cable as recited in claim 1 , wherein said internal passage urges said plurality of strands inward toward said central axis. 3. the method for terminating a cable as recited in claim 2 , wherein said internal passage comprises an arcuate shoulder. 4. 
the method for terminating a cable as recited in claim 2 , wherein said alignment fixture is bolted to said collector. 5. the method for terminating a cable as recited in claim 1 , wherein said internal passage includes a curved shoulder. 6. the method for terminating a cable as recited in claim 5 , wherein said internal passage comprises a revolved profile. 7. the method for terminating a cable as recited in claim 1 , wherein said internal passage includes a shoulder. 8. the method for terminating a cable as recited in claim 1 , wherein said attaching of each anchor of said plurality of anchors to said collector is made by: a. providing a plurality of anchor receivers in said collector; and b. placing each anchor of said plurality of anchors in an anchor receiver of said plurality of anchor receivers. 9. the method for terminating a cable as recited in claim 1 , wherein said internal passage comprises a revolved profile. 10. the method for terminating a cable as recited in claim 1 , wherein said alignment fixture is bolted to said collector. 11. the method for terminating a cable as recited in claim 1 , further comprising providing a binder proximate said cut end of at least one strand prior to attaching one of said anchors to said at least one strand. 12. a method for terminating a cable, comprising: a. providing a cable having a central axis, said cable including a plurality of strands made of synthetic tension-carrying filaments, said strands being grouped tightly around said central axis; b. providing a plurality of anchors; c. attaching said plurality of anchors to said plurality of strands, so that each one of said strands is connected to only one of said anchors; d. providing a collector, including i. an outer perimeter ii. said outer perimeter of said collector opening into a plurality of anchor receivers; e. 
attaching said plurality of anchors to said collector by moving each anchor laterally inward through said outer perimeter of said collector and into one of said anchor receivers in said collector; f. providing an alignment fixture, said alignment fixture including an internal passage configured to surround said plurality of strands and guide each strand of said plurality of strands along a smoothly diverging path wherein each strand of said plurality of strands diverges away from said central axis of said cable; g. attaching said alignment fixture to said collector; h. wherein said plurality of strands passes through said internal passage of said alignment fixture; and i. wherein said alignment fixture is configured to be removable from said collector. 13. the method for terminating a cable as recited in claim 12 , wherein said internal passage is configured to urge said plurality of strands inward toward said central axis. 14. the method for terminating a cable as recited in claim 13 , wherein said internal passage comprises a curved shoulder. 15. the method for terminating a cable as recited in claim 13 , wherein said alignment fixture is bolted to said collector. 16. the method for terminating a cable as recited in claim 12 , wherein said internal passage includes a shoulder. 17. the method for terminating a cable as recited in claim 16 , wherein said internal passage comprises a revolved profile. 18. the method for terminating a cable as recited in claim 12 , wherein said internal passage comprises a revolved profile. 19. the method for terminating a cable as recited in claim 18 , wherein said revolved profile includes a shoulder. 20. the method for terminating a cable as recited in claim 12 , wherein said attachment between each anchor of said plurality of anchors and said collector is made by: a. providing a plurality of anchor receivers in said collector; and b. placing said plurality of anchors in said plurality of anchor receivers. 21. 
the method for terminating a cable as recited in claim 12 , wherein said alignment fixture is bolted to said collector. 22. the method for terminating a cable as recited in claim 12 , further comprising providing a binder proximate said cut end of at least one strand prior to attaching one of said anchors to said at least one strand.
cross-references to related applications this non-provisional patent application claims the benefit of an earlier-filed provisional application. the provisional application was assigned ser. no. 61/561,514. this non-provisional patent application is also a continuation-in-part of application ser. no. 12/889,981. all three applications list the same inventors. statement regarding federally sponsored research or development not applicable. microfiche appendix not applicable. background of the invention 1. field of the invention this invention relates to the field of synthetic cable terminations. more specifically, the invention comprises a method for terminating a large, multi-stranded cable having at least a partially non-parallel construction. 2. description of the related art synthetic rope/cable materials have become much more common in recent years. these materials have the potential to replace many traditional wire rope assemblies. examples of synthetic fibers used in cables include kevlar, twaron, technora, spectra, dyneema, zylon/pbo, vectran/lcp, nylon, polyester, glass, and carbon (fiber). such fibers offer a significant increase in tensile strength over traditional materials. however, the unique attributes of the synthetic materials can, in some circumstances, make direct replacement of traditional materials difficult. this is particularly true for larger cables. as those skilled in the art will know, it is not practical to simply scale up termination technology used in small synthetic cables and expect it to work on large synthetic cables. this disclosure will employ consistent terminology for the components of a synthetic cable. the reader should note, however, that the terminology used within the industry itself is not consistent. this is particularly apparent when referring to cables of differing sizes. a component of a small cable will be referred to by one name whereas the analogous component in a larger cable will be referred to by a different name. 
in other instances, the same name will be used for one component in a small cable and an entirely different component in a large cable. in order to avoid confusion, the applicants will present a naming convention for the components disclosed in this application and will use that naming convention throughout. thus, terms within the claims should be interpreted according to the naming convention presented. first, the terms “rope” and “cable” are synonymous within this disclosure. no particular significance should be attached to the use of one term versus the other. the smallest monolithic component of a synthetic cable will be referred to as a filament. a grouping of such filaments will be referred to as a “strand.” the filaments comprising a strand may be twisted, braided, or otherwise gathered together. strands are grouped together to form a cable in one or more stages. as an example, strands may be grouped together into “strand groups” with the strand groups then being grouped together to form a cable. additional layers of complexity may be present for larger cables. a particularly large cable might be grouped as follows (from smallest to largest): filament, strand, strand group, strand group group, cable. the term “strand group” is generally only used for massive cables. however, it is not used consistently in the industry. in any event, the term “strand” is always used to indicate some portion of a cable that is less than the entire cable itself. many different subdivisions of a cable may appropriately be called a strand. the filaments and strands will normally be tension-carrying elements. however, some cables include other elements, such as one or more strands intended to measure strain. the invention is by no means limited to cables including only tension-carrying elements. the process of grouping filaments, strands, or strand groups together commonly involves weaving, braiding, twisting, or wrapping. 
for example, it is common to wrap six twisted strands around a twisted straight “core” strand in a helical pattern. some examples of cable construction will aid the reader's understanding. fig. 1 shows a prior art cable 10 comprised of seven strands 12 . a single “core” strand is placed in the center. six outer strands are then helically wrapped about the core strand to form the pattern shown. fig. 2 shows an individual strand 12 . strand 12 is comprised of many individual filaments 16 which are also wrapped in a helical pattern. jacket 14 surrounds and encapsulates the filaments in this particular example. a jacket is included on some strands and not on others. a jacket may assume many forms. some are an extruded covering. some are a helical wrapping. still others are a braided or woven layer of filaments which surround the core filaments. the scale of the strand and filaments of fig. 2 is significant to understanding the present invention. each individual filament is quite small, having a diameter which is typically less than the diameter of a human hair. the filaments shown in fig. 2 are larger in comparison to the overall cable diameter than is typical for synthetic cables. the larger filament diameter is shown for purposes of visual clarity. strand 12 in fig. 2 might have an overall diameter between 1 and 15 mm. several such strands may be grouped directly together to make a cable as shown in fig. 1 . fig. 3 shows a cable having three levels of grouping. filaments are grouped together to make strands 12 . seven strands are then grouped together to form a strand group 19 . seven such strand groups 19 are then grouped together to form cable 10 . as explained previously, the term “strand group” may also be referred to as a “strand” (since it is a subdivision of the entire cable). note that the entire cable may be encompassed by a jacket 14 . as for the smaller levels, the jacket may assume many forms. 
the reader will note that the cable as a whole has central axis 30 running down its center. the strands 12 generally run in the direction of central axis 30 but they are not all parallel to it. for the example of fig. 3 , each strand 12 is wrapped in a helical fashion (except the core strand of each strand group). strand groups 19 are shown as being nearly parallel to central axis 30 . however, in other examples the strand groups may be helically wrapped around the central axis as well. in still other examples they may be braided or woven. in the context of this disclosure, the term “non-parallel” simply means that a strand is not parallel to the cable's overall central axis. the strand may, on average, follow the central axis. but, at any given point a normal vector of the strand's cross section is not parallel to the overall central axis of the cable. the strand follows a curved path (formed by processes such as twisting, braiding, etc.). most prior art cables made using synthetic filaments are relatively small. the example of fig. 1 might have an overall diameter between 1 mm and 15 mm. of course, the individual filaments within the strands are very small. a synthetic filament is analogous to a single steel wire in a bundled wire rope. however, the individual synthetic filament behaves very differently in comparison to a piece of steel wire. when such a comparison is made, the synthetic filament is: (1) significantly smaller in diameter; (2) much less stiff (having very little resistance to buckling and quite vulnerable to bending-induced deformation); and (3) slicker (the synthetic strand has a much lower coefficient of friction). of these differences, the lower stiffness inherent in the use of synthetic filaments is the most significant. another significant difference between the individual filaments comprising a synthetic cable and the steel wires commonly used in wire ropes is the scalability of the most basic component. 
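the non-parallel, helical construction described above can be made concrete with a standard helix-length calculation (general geometry, not taken from this disclosure): unrolling one turn of a helix gives a right triangle whose legs are the circumference at the pitch radius and the lay length, so a helically wrapped strand is necessarily longer than the cable it forms. the example radius and lay length below are assumptions.

```python
# worked example (general geometry, not from the patent): per lay
# length p, a strand at pitch radius r travels the hypotenuse of the
# unrolled helix, i.e. sqrt((2*pi*r)**2 + p**2).

from math import pi, hypot

def strand_length_per_pitch(r, p):
    """length of one helix turn at pitch radius r with lay length p."""
    return hypot(2 * pi * r, p)

# e.g. a strand at an assumed 5 mm radius with a 60 mm lay length
print(round(strand_length_per_pitch(5.0, 60.0), 1))  # about 67.7 mm
```

this extra length is one reason the strands of a twisted or braided cable cannot simply be treated as parallel tension members when designing a termination.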
steel wire is typically created by a drawing process. this allows the wire to be created in a wide range of sizes. a small diameter steel wire is used to make a small wire rope and a large diameter steel wire is used to make a large wire rope. the most basic component of a wire rope—the steel wire—may be easily scaled to match the size of the wire rope. this is not true for the use of synthetic filaments. a synthetic filament having suitable properties is limited to a fairly narrow range of diameters. thus, the basic component of a synthetic cable is not scalable. a very fine filament must be used for a small synthetic cable and essentially the same size of filament must be used for a large synthetic cable. in order to carry a useful tensile load any cable material must have a termination (typically on its end but in rare occasions at some intermediate point). the word “termination” means a load-transferring element attached to the cable that allows the cable to be attached to something else. a portion of the cable itself will typically lie within the termination. for a traditional cable made of steel wire, a termination is often created by passing the cable around a thimble (with an eye in the middle) and clamping or braiding it back to itself. for higher load situations, the end of a wire rope may be terminated using a socket. the word “socket” in the context of wire rope terminations means a generally cylindrical steel structure with a conical cavity. the sheared end of the wire rope is placed in the cavity and the individual wires are then splayed apart. molten zinc is then poured into the cavity and allowed to solidify (epoxy resins and other synthetic materials may now be substituted for the zinc). such a socket commonly includes an eye or other feature allowing the cable to be attached to an external component. a variation on the socket approach has been successfully employed for synthetic cables having a relatively small diameter.
the device actually placed on the end of a synthetic cable in order to create a termination is commonly referred to as an “anchor.” figs. 4-6 show one process for creating a termination on a synthetic cable using such an anchor. in fig. 4 , cable 10 has been cut to a desired length. the individual strands are very flexible. accordingly, binder 20 has been added some distance back from the cut end. this distance is labeled “set-back distance” 36 . the set-back distance is roughly equal to the length of filaments which will be placed within the cavity in a termination. free filaments 26 are unbound and free to flex. the binder wraps around the cable and primarily helps it retain a compressed or otherwise bound cross section to better control filament movements during processing. the use of a binder is preferred. splayed filaments 34 are placed within the cavity of an anchor. they are generally splayed apart before they are placed in the anchor cavity, but they may also be splayed apart after they are placed in the anchor cavity. in a traditional potting process, the cavity is then filled with a liquid potting compound. the term “potting compound” means any substance which transitions from a liquid to a solid over time. a common example is a two-part epoxy. the two epoxy components are mixed and poured or injected into the cavity before they have cross-linked and hardened. other compounds are cured via exposure to ultraviolet light, moisture, or other conditions. fig. 5 shows a section view through such a termination after the potting compound has hardened into a solid. anchor 24 includes a tapered cavity through its center. a length of filaments is locked into potted region 28 by the hardened potting compound. free filaments 26 rest outside the anchor. in the example of fig. 5 , a single strand has attached to a single anchor. this is not the only possibility and the invention is not limited to just this one possibility. 
it is possible to attach multiple strands to a single anchor (such as by potting a three-strand twisted rope into a single anchor). this would be a connection between a single anchor and a strand group. it is also possible to divide a single strand into a plurality of substrands and attach each of the sub-strands to an anchor. thus, one strand could be attached to two or more anchors. an anchor attached to a cable typically includes a load-transmitting feature designed to transmit a tensile load on the cable to some external component. this could be a hook or an external thread. as such features are well understood in the art, they have not been illustrated. those skilled in the art will know that an anchor may be attached to a cable by many means other than potting. another well-known example is a frictional engagement where the splayed strands are compressed between two adjacent surfaces. a “spike and cone” connection, sometimes referred to as a “barrel and socket” connection, attaches an anchor to a cable using this approach. an example of such a connection is shown in fig. 39 (and described in more detail subsequently). another approach to creating a termination is to cast a composite “plug” on the end filaments of a cable. the plug is preferably cast in a desirable shape that allows it to be easily attached to an external component. the cable of fig. 5 is relatively small—having a diameter between 1 mm and 10 mm. the potting process and other mechanical termination means work fairly well for such cables. fig. 6 shows a perspective view of a completed assembly where the anchor is attached via potting. anchor 24 and potted region 28 collectively form termination 32 on one end of cable 10 . the reader should note that cable 10 is parallel to anchor 24 . the filaments within the cable may be non-parallel (they may for example be helically wrapped or braided). however, the overall centerline of the cable is parallel to the centerline of the anchor. 
this constraint is significant, because the ultimate strength of synthetic cables decreases significantly if the freely flexing portion of the cable is angularly offset with respect to the anchor. the desired alignment becomes a more difficult problem for larger cables—as will be seen. fig. 7 shows a larger cable 10. the example shown has a diameter of 50 mm (even larger synthetic cables are presently in use). braided jacket 18 surrounds and encloses the smaller strand components and strand groups, and ultimately the individual filaments. binder 20 is placed around the cable and the jacket is removed for loose portion 22. for a cable of this size, loose portion 22 is comprised of tens of thousands to millions of individual filaments. the filaments are very flexible, having a stiffness that is similar to human hair. the loose portion is akin to the head of a mop—though it is in reality even less organized and much more flexible than the head of a mop. it is very difficult to employ the prior art termination process for the synthetic filament cable shown in fig. 7. fig. 8 shows an anchor 24 which is sized for this cable. the anchor has a diameter of approximately 150 mm. unlike larger steel wires used in the prior art, the loose filaments are not stiff enough to remain organized when they are placed in the cavity within anchor 24. it is very difficult to maintain any type of organization while the liquid potting compound is added to the cavity (or when any other type of termination technology is used, with the “spike and cone” frictional type of anchor being another example). the filaments tend to lose the aligned orientation needed to produce a consistent termination. in this potted termination example, the filaments when oriented upward tend to become a disorganized tangle, and are generally inconsistent in alignment. the alignment issue worsens with increasing scale as the filament volume and termination length both increase.
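a rough cross-sectional area estimate shows why loose portion 22 of a 50 mm cable can contain filaments numbering into the millions. the sketch below is an order-of-magnitude illustration only; the 50 mm cable diameter comes from fig. 7, while the filament diameter and packing fraction are assumed values, not figures from this disclosure.

```python
import math

cable_d_mm = 50.0        # overall cable diameter, per Fig. 7
filament_d_mm = 0.02     # assumed ~20 micron filament, finer than a human hair
packing = 0.7            # assumed fraction of the cross section that is fiber

# Compare the fiber-filled portion of the cable cross section
# to the cross section of a single filament.
cable_area = math.pi * (cable_d_mm / 2) ** 2
filament_area = math.pi * (filament_d_mm / 2) ** 2
filament_count = int(packing * cable_area / filament_area)

print(f"~{filament_count:,} filaments")  # on the order of millions
```

under these assumptions the count comes out at several million, consistent with the "tens of thousands to millions" range stated above; with coarser filaments the same calculation lands in the tens of thousands.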
the result is a termination which commonly fails well below the ultimate tensile strength of the cable—obviously an undesirable result. in addition, the disorganized nature of the strands within the cavity produces a substantial variation in strength from one termination to the next. in other words, the process of terminating a large synthetic cable is not predictable nor is it repeatable. one prior art approach to this problem has been to subdivide the cavity within anchor 24 using some type of insert. the insert subdivides the tapered cavity into several wedge-shaped sections. the available filaments are then divided evenly among the wedge-shaped sections. this approach helps improve certain performance characteristics but does not address the majority of significant processing challenges inherent with large synthetic cables. the present invention solves the problem of larger cables by (1) dividing the cable into smaller components which are in the size range suitable for the prior art termination technology; (2) providing a collector which reassembles the individual terminations back into a single unit; and (3) maintaining reasonable alignment between the terminations and the smaller cable components while the terminations are “captured” within the collector. the goal of maintaining alignment between the terminations and the smaller cable components is significant. some additional explanation regarding the need for good alignment between the strands and the anchors used to terminate them may aid the reader's understanding. figs. 9 and 10 illustrate the result of flexing a strand 12 before or during the termination process. in fig. 9 , strand 12 has been flexed. jacket 14 has slipped somewhat with respect to filaments 16 it contains. filaments have also slipped with respect to each other. in fig. 10 , the same strand has been straightened. the reader will observe that some of the filament slippage remains. 
this is the result of the fact that synthetic filaments have very low stiffness. when they slip relative to one another, there is no significant restoring force. a bend or kink may exist in an individual filament, but little restoring force is produced. for a prior art wire cable, the bending or kinking of a wire produces a significant restoring force. when a wire rope bends it generally returns to the same state once the bend is removed. this is not the case for cables made of synthetic filaments. the alignment issues occur with or without a jacket around the strand. further, the alignment differential increases as the size of the cable increases. the reader will thereby perceive the importance of keeping a synthetic cable and/or its component strands straight in the vicinity of the end when creating a termination. it is also important to maintain alignment between a strand and the anchor used to terminate it. the region where the filaments exit the anchor (often called the “neck” of the anchor) is significant. if the freely flexing portion of the synthetic-filament strand is bent with respect to the anchor when loaded, a large stress riser will form in the neck region. the freely flexing portion bends quite easily and it is not able to withstand significant lateral loads without badly reducing the overall strength of the strand/termination. maintaining the desired alignment for these large cables is a more complex problem—with processing and performance issues increasing with increasing scale. the present invention presents a solution to these problems. the present invention seeks to improve both processing and performance issues. the main processing advantage of the invention is the fact that it allows the use of well-developed and repeatable “small cable” termination technologies to be used with larger cables. 
the main performance advantages of the invention result from the fact that “small cable” terminations produce good repeatability and good overall strength, along with the fact that non-uniform loads are decoupled and/or alignment between the strands and their respective terminations are improved. the invention “collects” multiple small cable terminations into a single collector, thereby allowing the advantages of a small cable termination to exist in a larger cable. brief summary of the present invention the present invention comprises a method for terminating a multi-stranded, non-parallel cable. the method includes: (1) dividing the cable into smaller components which are in the size range suitable for the prior art termination technology; (2) creating a termination on the end of each of the smaller components; (3) providing a collector which reassembles the individual terminations back into a single unit; and (4) maintaining reasonable alignment between the terminations and the smaller components while the terminations and the collector are in a connected state. the collector acts as a unified termination for the cable as a whole. however, each strand or group of strands has been cut, positioned, and locked into a relatively small termination for which strand/anchor alignment is maintained. the relatively large cable is broken into smaller components so that consistent and repeatable termination technology known for use in small cables can be applied to create a termination for a much larger cable. the collector reassembles the smaller components in a manner that minimizes bending stresses in the transition from each anchor to its respective strand/strand-group/sub-strand. brief description of the several views of the drawings fig. 1 is a perspective view, showing a prior art cable made of seven strands. fig. 2 is a perspective view, showing an individual strand comprised of many synthetic filaments encased within a jacket. fig. 
3 is a perspective view, showing a prior art cable made of seven strand groups, each of which strand groups includes seven strands wrapped in a helical pattern. fig. 4 is an elevation view, showing a small synthetic cable during the prior art termination process. fig. 5 is a sectional elevation view, showing the synthetic cable of fig. 4 after it has been potted into an anchor. fig. 6 is a perspective view, showing the completed termination. fig. 7 is a perspective view, showing a larger prior art cable. fig. 8 is a sectional elevation view, showing an attempt to use prior art termination technology on the cable of fig. 7 . fig. 9 is a perspective view, showing a prior art strand being flexed. fig. 10 is a perspective view, showing the prior art strand of fig. 9 after it has been straightened. fig. 11 is a perspective view of a prior art synthetic cable. fig. 12 is a perspective view, showing the cable of fig. 11 after a binder has been added. fig. 13 is a perspective view, showing the separation of the strands between the cut end of the cable and the binder. fig. 14 is a perspective view, showing the addition of a binder to each strand of the cable of fig. 13 . fig. 15 is a detailed perspective view, showing the addition of an anchor to each of the individual strands of the cable of fig. 14 . fig. 16 is a perspective view, showing the cable of fig. 14 after an anchor has been added to each individual strand. fig. 17 is a perspective view, showing a collector configured for use with the cable of fig. 16 . fig. 18a is an elevation view, showing the collector of fig. 17 . fig. 18b is a sectional elevation view, showing the collector of fig. 17 . fig. 19 is a perspective view, showing the collector and the cable assembled together. fig. 20 is a sectional view, showing a strand misaligned with its anchor. fig. 21 is an elevation view, showing an alignment fixture for use in the potting process. fig. 
22 is an elevation view, showing an alignment fixture for use in the potting process. fig. 23 is an elevation view, showing the assembly of fig. 19. fig. 24 is a perspective view, showing another embodiment of the invention. fig. 25 is a sectional view, showing details of the stem ball assembly used in the embodiment of fig. 24. fig. 26 is a sectional view, showing the stem ball of fig. 25 placed in the collector. fig. 27 is a perspective view, showing the assembly of fig. 24 with most of the helically wrapped strands removed. fig. 28 is a perspective view, showing another embodiment for the collector. fig. 29 is an exploded perspective view, showing yet another embodiment in which the anchors combine to actually form the collector. fig. 30 is a perspective view, showing the embodiment of fig. 29 in an assembled state. fig. 31 is a perspective view, showing still another embodiment for the collector. fig. 32 is a perspective view, showing still another embodiment for the collector. fig. 33 is a perspective view, showing the embodiment of fig. 32 from another vantage point. fig. 34 is a perspective view, showing the use of a helical wrap to create a jacket. fig. 35 is a sectional view, showing the inclusion of a fillet near the throat of an anchor. fig. 36 is a sectional view, showing the inclusion of a flexible extension near the throat of an anchor. fig. 37 is a perspective view, showing another embodiment of a collector and an alignment fixture. fig. 38 is a sectional view of the assembly of fig. 37, showing some internal details. fig. 39 is a sectional view, showing a spike-and-cone anchor configured for use in the present invention. fig. 40 is a perspective view, showing a different embodiment of a collector. fig. 41 is an elevation view, showing the assembly of fig. 40. fig. 42 is a perspective view, showing the assembly of figs. 39 and 40 from a different vantage point.
reference numerals in the drawings

10 cable
12 strand
14 jacket
16 filament
18 braided jacket
19 strand group
20 binder
22 loose portion
24 anchor
26 free filaments
28 potted region
30 central axis
32 termination
34 splayed filaments
36 set-back distance
40 cut end
42 collector
44 anchor receiver
46 cable receiver
48 attachment feature
50 centerline
52 alignment fixture
54 threaded engagement
56 coupler
58 threaded engagement
62 stem
64 ball
66 spherical socket
68 channel
70 core
72 pivot joint
74 pivot joint
76 receiver
78 fastener
80 socket
82 slot
84 threaded shaft
86 alignment channel
88 alignment fixture
90 core strand
92 injection passage
94 strand cavity
96 arcuate shoulder
98 arcuate shoulder
100 internal passage
102 bolt
104 fillet
106 flexible extension
108 cone
110 loading flange
112 hex head
114 threaded engagement
116 nut

detailed description of the invention

the inventive method can be used for a synthetic cable of almost any size, but it is most advantageous for cables having a medium to large diameter (as the processing and performance benefits over the prior art increase with increasing scale). in the context of synthetic cables, this would be an overall diameter of approximately 15 mm or more. the invention is most advantageous for use with cables having at least a partially non-parallel structure. however, the invention offers some advantages for cables having even a 100% parallel construction. while many variations are possible, figs. 11 through 19 explain the basic steps of the process. fig. 11 shows a seven strand synthetic cable. in this construction, six outer strands 12 are helically wrapped around a single core strand. the resulting cable 10 therefore has a substantially non-parallel construction, meaning that the outer strands are not parallel to the central axis of the cable as a whole. the present invention seeks to attach anchors to a substantial portion of the strands and in most instances attach anchors to all of the load-bearing strands. in the specific example of fig.
11 , anchors will be attached to the six outer strands but not the core strand (which will be attached directly to another component instead). fig. 12 shows the same cable with binder 20 in position a set-back distance 36 from cut end 40 . the binder is simplistically represented as two blocks clamped together over the cable. it may assume many different forms, so long as it limits or reduces the ability of the individual strands 12 to move with respect to each other during processing (in larger cables it may restrict the movement of strand groups or groups of strand groups). the binder may assume many different forms, including tape, string, an extruded or overbraided jacket, and even an adhesive infused into a limited section of the cable. fig. 34 shows one example of a binder. jacket 14 is helically wrapped around all of or a portion of the cable. the jacket in this example is an adhesive tape capable of applying some compression to the strand, thereby limiting filament movement during processing by reducing unwanted strand movement. in some instances a strand will come with a binder already installed in the form of a compressive jacket. in those cases a binder will not need to be added. rather, a portion of the existing binder in proximity to the termination may need to be removed. it is not practical to add a termination to the cut end of each of the individual strands 12 while they are still grouped together. fig. 13 shows the same assembly after the cut ends of the six outer strands have been urged apart and away from the core strand. once urged apart, each individual strand is essentially a small synthetic cable to which the prior art termination methods can be applied (such as potting). fig. 14 shows a closer view of the ends of the strands 12 . a binder 20 has been applied to each—with the binder being separated from the cut end by set-back distance 36 . 
these smaller binders, like the larger binder used for the cable itself, are intended to help maintain filament alignment during processing. strand 12 , for example, may be a braided strand group requiring some form of added binder to prevent unwanted filament movement during processing. the free filaments are placed within an anchor cavity and splayed to create splayed filaments 34 . liquid potting compound is then placed within the cavity and allowed to harden. of course, if the binder was a part of the strand structure and applied to the entire strand (such as an extruded thermoplastic jacket) a length of jacket material would simply be removed from the end. fig. 15 shows the same strands 12 after anchors 24 have been installed on all except core strand 90 . binder 20 has been applied to core strand 90 but it has not yet been potted (for reasons which will be explained subsequently). the reader will observe that the outer strands retain a somewhat-relaxed helical configuration even when they have been urged away from the core strand. this fact is important. if the anchors were reoriented to be parallel with the overall centerline of the cable, then each of the outer strands would have to bend as it entered the anchor. as described previously, such a bend under load is undesirable. once the anchor is attached, the alignment between the filaments entering the anchor and the anchor itself is established. movement may then be allowed. however, while the anchors and the collector are in a connected state, proper alignment is preferably maintained between the filaments, the anchor, and the collector. the present invention divides the cable into smaller constituents in order to apply repeatable and strong “small cable” termination techniques to the smaller constituents. the smaller constituents are then recombined using a collector. 
the nature of this collector is important as it must accomplish the recombination without introducing unwanted bending stresses or strand misalignment. fig. 17 shows a collector 42 which is configured for use with the cable of fig. 16. the collector includes six anchor receivers 44 around its perimeter. each anchor receiver 44 is joined to a cable receiver 46. both the anchor receivers 44 and the cable receivers 46 intersect the exterior of the collector. fig. 18a is an elevation view of collector 42. the reader will observe that each anchor receiver and cable receiver is concentric about a centerline 50. the reader will also observe that each centerline 50 is angularly offset from the axis of radial symmetry for the collector as a whole. this offset makes each anchor receiver and cable receiver parallel to the helical path of one of the outer strands it is positioned to receive. a helical path has a defined “helix angle” at any given cross section along its length. if the centerline 50 of each anchor receiver 44 is aligned with this helix angle, then the anchor placed within that anchor receiver will be aligned with the strand to which it is attached. the reader should bear in mind that some small errors in the angles employed are permissible. for example, depending on the cable design, a 1-5 degree misalignment will not typically degrade the cable's performance to any significant extent. however, the goal is to maximize alignment between each anchor and the strand to which it is attached. fig. 18b shows a sectional elevation view through collector 42—taken through the center of attachment feature 48. strand cavity 94 is included in the portion of the collector opposite the attachment feature. this strand cavity is configured to receive the splayed strands of core strand 90 so that the core strand can be terminated directly into the collector itself. injection passage 92 is provided so that liquid potting compound can be injected into strand cavity 94.
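the helix angle that sets the offset of each centerline 50 follows from simple geometry: over one lay (pitch) length the strand advances axially while traveling one circumference, so the strand is tilted from the cable axis by the arctangent of their ratio. the sketch below illustrates that calculation; the radius and lay length used are assumed example values, not dimensions from this disclosure.

```python
import math

def helix_angle_deg(radius_mm: float, lay_length_mm: float) -> float:
    """Angle between a helically wrapped strand's path and the cable's
    central axis: one turn covers 2*pi*r circumferentially while
    advancing one lay length axially."""
    return math.degrees(math.atan2(2 * math.pi * radius_mm, lay_length_mm))

# assumed example: outer strands at 15 mm radius with a 300 mm lay length
angle = helix_angle_deg(15.0, 300.0)
print(round(angle, 1))  # ~17.4 degrees
```

under these assumed dimensions, each anchor receiver centerline would be offset roughly 17 degrees from the collector's axis of symmetry; a 1-5 degree error on top of that would, per the discussion above, typically be tolerable.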
the air within the cavity can be vented out the open end of the strand cavity during the potting process. optionally, a separate vent can be provided. the reader should note that the concept of attaching some strands to separate anchors and at least one strand directly to the collector itself is somewhat unusual. it would be more typical for this embodiment to provide anchors for all strands and attach the anchors to the collector. however, the attachment of one or more strands directly to the collector is certainly within the scope of the present invention. also within the scope of the present invention is the concept of employing different attachment mechanisms for different strands being gathered into a single collector. for example, one strand could be attached to the collector via its anchor sliding into a pocket in the collector. a second strand could be attached to the collector via its anchor having a threaded stud that passes through a hole in the collector—with a nut being attached to the protruding portion of the threaded stud. the connections between anchor and collector could occur in multiple configurations and at multiple different levels. a first ring of anchors could be attached to a portion of the collector nearest the freely-flexing portion of the cable. a second ring of anchors could be attached to the collector in a position further away from the freely flexing portion of the cable. this could require strands of slightly differing lengths, but the need for differing lengths can be accommodated in the manufacturing and assembly processes. turning to fig. 19, the assembly of the collector to the cable for this particular embodiment will now be explained in detail. collector 42 is placed in the center of the six outer strands and moved toward the tightly wrapped portion of the cable (binder 20—as shown in fig. 13—is preferably left in place to help stabilize the strands during this process). returning to fig.
17, the reader will note that if the collector is moved toward the tightly wrapped portion of the cable until anchors 24 lie around attachment feature 48, the user can press the strands inward and into the six cable receivers 46. if the collector is then urged away from the tightly wrapped portion of the cable (in the direction of attachment feature 48) then anchors 24 will slide into anchor receivers 44 and become trapped therein. fig. 19 shows this state. each anchor 24 is trapped within collector 42. from the geometry seen, it is apparent that so long as tension is maintained on the cable the anchors will stay in place. some additional attachment or entrapment features or mechanisms—such as an overall enclosure body, mounting brackets, interlocking features, adhesives, or clips—can be used to ensure that they remain in position even when no tension is present. fig. 19 obviously represents a simple version of a collector, but it serves well to illustrate the operative principles of the invention. those skilled in the art will recognize that the embodiment of fig. 19 illustrates one possible approach to connecting the anchors to the collector, and that many other approaches will occur to someone skilled in mechanical design. for example, the collector may simply include a series of holes while the anchors may include a threaded shaft sized to fit through these holes and then be secured with a nut. with the collector and the outer strands now joined in a suitable fashion, core strand 90 can be potted into strand cavity 94 within the collector. the completed assembly then acts as a unified whole. still looking at fig. 19, the reader should note that the individual strands are aligned with each individual anchor 24 at the point where the freely flexing portion of the strands enters the anchor. this feature is important to reducing the unwanted bending stresses. the order of operations is not particularly significant.
one could just as easily pot the core strand into the collector first. fig. 20 shows a sectional view through a prior art anchor 24 with a potted portion 28 of strand 12 locked therein. if the strand is bent as shown with respect to the anchor, one may easily see how stress will rise in the “throat” region where free filaments 26 pass into potted region 28. the reader will also perceive the advantages of the collector shown in the preceding figures in this respect. it eliminates or at least largely reduces such bending stresses. returning to fig. 17, the reader should be aware that attachment feature 48 can assume many different forms. anything that facilitates the connection of the collector to an external component serves this purpose. an eye is shown. the concept of an attachment feature would also include an externally threaded boss, a boss with a hole having internal threads, external threads on the outer surface of the collector itself, multiple threaded holes on the collector, and even a simple flange on the collector which could bear against an external surface. during the process of locking each strand into an anchor it is preferable to maintain the proper alignment. the termination process shown in the examples provided is a typical potting process, but any termination process may be used. other common examples are mechanical interlocks such as a “spike and cone” fastener, external compression devices, and hybrid resin/compression devices. fig. 39 shows a spike-and-cone termination configured for use in the present invention. anchor 24 includes a tapered strand cavity 94 as for the potted versions. however, rather than securing the filaments within the cavity using potting compound, the filaments are compressed and frictionally engaged by screwing cone 108 into the cavity. the strands are further compressed and frictionally engaged by applying tension to the cable (and thereby further “seating” the cone).
cone 108 is linked to anchor 24 by threaded engagement 114. the user employs a separate tool to engage and turn hex head 112—thereby securing the anchor to the end of strand 12. any suitable feature may be used to transmit tensile forces from the anchor to the collector. an external thread is one example. loading flange 110 is another example. for any of these approaches, alignment within each of the terminated strand group components is important (particularly in the region where the flexible filaments interact with the inflexible anchor). the desired alignment can be created in a wide variety of ways. another type of alignment that may be added in the practice of the present invention is the alignment of the filaments within a strand and the anchor being attached to the strand during the process of creating the termination. fig. 21 shows a simplified depiction of an alignment fixture. alignment fixture 52 is designed to engage strand 12 in the freely flexing portion and to engage binder 20. this fixture holds the binder and the strand in proximity to splayed filaments 34 in alignment. the alignment fixture preferably restricts relative movement in all six degrees of freedom (x, y, z, roll, pitch, and yaw). a more comprehensive version of alignment fixture 52 is shown in fig. 22. this version grips the freely flexing portion of strand 12, binder 20, and anchor 24. placing the fixture as shown ensures alignment of the critical components during the termination process. if for example potting compound is used, the alignment fixture is preferably retained in position until the potting compound has transitioned into a solid. although the use of an alignment fixture during the process of affixing an anchor to the end of a cable offers advantages in certain circumstances, the reader should bear in mind that the present invention may be carried out without the use of such a fixture. in many embodiments, no alignment fixture will be used.
of course, once the termination process for an individual strand is completed, the fixture can be removed. while one would not wish to repeatedly bend the strand after the anchor is in place, it is much more able to withstand bending. once suitable terminations are added to the strands, the strands are placed within a collector. fig. 23 shows a view of the anchors 24 after they have been placed in collector 42 (for the specific embodiment of figs. 12-19 ). the reader will note again how the angular displacement of centerline 50 generally aligns the anchor with the free portion of the strand. this minimizes bending stresses and allows the maximum performance (in terms of tensile strength) from the completed assembly. the embodiment shown in figs. 11-19 serves to illustrate the components and exemplary steps of the proposed invention. however, many different and widely varied embodiments will be needed in actual applications, and the embodiment that is suitable for a particular application will depend greatly on the nature of the cable to be terminated and the overall termination design. figs. 24-27 show another embodiment that is useful for cases where the strands on the cable's exterior are routed and collected in one manner and the strands near the cable's core are routed and collected in a different manner. cable 10 in fig. 24 has a relatively large core consisting of a braided strand group that is independently jacketed and wrapped by 16 helical strands. the two major components (core strands and helical strands) respond differently when tension is applied to the cable. tension on the core strands with this particular construction will not generally produce a resulting torque. tension on the outer helical strands, on the other hand, will produce significant torque and will also tend to vary the helix angle. this phenomenon can make the determination of the precise angular offset for each anchor within collector 42 difficult.
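The nominal helix angle that such an angular offset would have to match can be estimated directly from the cable geometry. The short calculation below is illustrative only; the wrap radius and lay length are hypothetical values, not taken from the specification:

```python
import math

def helix_angle_deg(helix_radius, lay_length):
    """Angle between a helically wrapped strand and the cable axis.

    A strand wrapped at radius r, completing one full turn per lay
    length L, travels 2*pi*r around the axis while advancing L along
    it, so its angle from the axis is atan(2*pi*r / L).
    """
    return math.degrees(math.atan2(2 * math.pi * helix_radius, lay_length))

# Hypothetical cable: strands wrapped at a 20 mm radius, 250 mm lay length.
offset = helix_angle_deg(20.0, 250.0)
print(f"anchor bore offset from the cable axis: {offset:.1f} degrees")
```

Because tension tends to vary the helix angle, a fixed bore machined at this angle can only match one load state, which is why the ball and socket joints described next let the angle float instead.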
in the embodiment shown, the problem is solved by using ball and socket joints for each anchor. these allow the helix angle to “float” within the range of motion allowed by the ball and socket joint. collector 42 has a central section which includes attachment feature 48 (in this case a large boss with an external thread). the collector also has a large flange which is used to attach the numerous anchors in a radial array. each anchor is attached to the flange using a ball and socket joint so that the angle between the collector and each individual strand can vary as needed to prevent bending. the reader will note in this example that the collector gathers strands of differing sizes and configurations. the braided core strand or strand group may be potted as a whole into an internal cavity within the collector (or potted into another object that attaches to the collector). the helical strands are significantly smaller and each lies in its own unique orientation with respect to the collector. fig. 25 is a detailed sectional view through one termination used for a helical strand in the embodiment of fig. 24 . strand 12 is potted into anchor 24 as explained previously. anchor 24 is joined to coupler 56 by threaded engagement 54 . stem 62 is joined to coupler 56 by threaded engagement 58 . finally, ball 64 is provided on the end of stem 62 . fig. 26 shows the assembly of fig. 25 attached to the collector. stem 62 is placed into spherical socket 66 with its threaded portion sticking out through channel 68 . ball 64 bears against spherical socket 66 in collector 42 . the threaded portion of stem 62 is threaded into coupler 56 to complete the assembly. the reader will note how the ball and socket joint allows the angle between strand 12 and collector 42 to vary within a modest range. the reader will also note how the threaded engagement between stem 62 and coupler 56 allows the tension of each of the helically wrapped strands to be adjusted individually.
fig. 27 shows the assembly with all but one of the helically wrapped strands removed in order to aid visualization. core 70 is potted collectively into a central cavity within collector 42 . each helical strand 12 is then attached to the collector 42 using the previously described ball and socket joints. once the appropriate tension is applied to the helical strands, the cable will act as a unified whole. adding a tensioning or length adjustment feature to better align certain strand positions may be preferred in some cases. for example, those skilled in the art will realize that both the particular cable used and the termination method(s) used will entail some reasonable manufacturing tolerances, and these tolerances may need to be accounted for in locking the terminations into the collector. the inclusion of adjustment features allows the proper balancing of loads among the strands. among other advantages, the ability to individually adjust the tension on each of the helical strands allows the termination to compensate for manufacturing tolerances and ensure that the cable is loaded correctly and evenly. it is even possible to “load set” such an assembly. for some complex assemblies it is preferable to apply a significant amount of tension and then readjust the tension adjustments on each of the helical strands. this operation may even be performed iteratively for a large cable. the ball and socket joints allow the helical strands to adjust themselves so that alignment is maintained between the freely flexing portions of the strands and the anchor into which each strand is terminated. core 70 has been illustrated as a unified collection of parallel strands or filaments. in other embodiments the core may be a grouping of strands of differing configuration (braided, twisted, etc.) and even differing sizes.
of course, other mechanical attachment devices can be used to ensure the desired alignment between the individual strands and the anchors that are used to attach them to the collector. fig. 28 shows another embodiment. in this embodiment, each anchor for the exterior strands is attached to collector 42 by a joint which pivots in two perpendicular axes. the reader will observe how each attachment includes pivot joint 72 and pivot joint 74 . these two pivot joints accommodate any needed angular displacement to ensure that each strand enters its anchor in an aligned state. figs. 29 and 30 show a completely different approach to the unification of the anchors and the collector. in this embodiment, the anchor is a portion of the collector. in fig. 29 , the reader will observe that three individual strands 12 are each potted into an anchor 24 . each anchor 24 includes attachment features allowing it to be joined to a neighboring anchor. each anchor includes a threaded receiver 76 and a through-hole sized to accommodate a fastener 78 . three such fasteners 78 can be used to join the three anchors 24 together into a unified collector. fig. 30 shows this embodiment with the three anchors 24 joined to form a collector 42 . binder 20 may be left in place to help secure the transition between the helically wrapped portion of the cable and the straight strands leading to collector 42 . optionally, the potting cavity within each anchor could be given an angular offset so that the helical path of the strands is generally maintained into the anchors themselves. fig. 31 shows still another embodiment for the anchors and the collector. the cable in this case is a twisted assembly of three strands (having no core). this “core-less” construction is similar to many braided ropes. in fig. 31 , each anchor 24 has a spherical exterior. collector 42 has three sockets 80 and slots 82 . the three anchors 24 are placed into the three sockets 80 , with the strands passing through slots 82 . 
when tension is applied to the cable, anchors 24 will naturally be urged toward the center of collector 42 and will thereby be retained in position. an advantage of de-coupling the strands from the core is the ability to create independent alignment at the strand level. several examples of this advantage have been described previously, including the ability to maintain a helix angle for an anchor connected to a helically-wrapped strand. it is also possible to provide strand-level alignment for other geometries, including nested and counter-rotating helices. the same principles apply to braided ropes, twisted ropes, served ropes, and any other constructions where tensions and/or alignment may vary between strands. this de-coupling of load components can create significant performance advantages, particularly in large ropes with non-uniformly parallel strands and/or dynamic applications where loads may vary from strand to strand during use. it is also possible to provide an embodiment where a helical strand path is gradually modified into a path which is parallel to the overall centerline of the cable. a gradual transition from a helical path to a straight one can be made without introducing unacceptable stress. however, it has traditionally been difficult to create the desired gradual transition. the present invention is able to create such a gradual transition using many different features and combinations of features, and this is a significant advantage. fig. 32 shows an embodiment in which helically-wrapped strands are gradually transitioned to a parallel path and aligned with an anchor (as the anchor lies within the collector) before being joined to a collector. collector 42 includes alignment fixture 88 . the reader will observe that the alignment fixture includes a plurality of radially-spaced alignment channels 86 .
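One way to picture such a channel is as a curve whose turn rate about the cable axis eases smoothly from the cable's helix rate down to zero. The sketch below is purely illustrative; the cubic blend function and all dimensions are assumptions for visualization, not taken from the specification:

```python
import math

def channel_point(z, length, radius, lay_length, theta0=0.0):
    """Point on a channel centerline that eases a helix into a straight path.

    At z = 0 the turn rate matches the cable helix (2*pi per lay length);
    by z = length it has decayed to zero via a smoothstep-style blend, so
    the strand leaves the channel parallel to the cable axis.
    """
    s = z / length
    # swept angle = integral of (2*pi/lay_length) * (1 - (3s^2 - 2s^3)) dz
    theta = theta0 + (2 * math.pi / lay_length) * length * (s - s**3 + s**4 / 2)
    return (radius * math.cos(theta), radius * math.sin(theta), z)

# Hypothetical channel: 100 mm long, 20 mm radius, 250 mm lay length.
entry = channel_point(0.0, 100.0, 20.0, 250.0)
exit_pt = channel_point(100.0, 100.0, 20.0, 250.0)
```

Near the exit, the lateral motion per unit of axial travel approaches zero with a continuous rate of change, which is the "gradual transition" property such a fixture is meant to provide.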
each alignment channel gradually straightens from a helical path into a parallel one so that rotational cable movements (torsion) can be unified or otherwise restricted to axial movements at the termination of each strand. in the embodiment shown, alignment fixture 88 is preferably spaced a distance apart from the attachment flange on the collector itself. each helically wrapped strand is passed through an alignment channel before being connected to the flange. this form of strand positioning prevents each strand from altering its path at the anchor point during loading. as with other forms of strand alignment, those skilled in the art will know that such positioning can be carried out in many different ways. each anchor is attached to the flange using a tension nut on the end of a threaded shaft 84 . this feature allows the tension on each individual strand to be adjusted independently. fig. 33 shows the same assembly from a different vantage point. the reader will note that each tension nut is accessible. an attachment feature is provided in the center of collector 42 . as for the prior embodiments, the completed assembly acts like a unified whole. a user need only attach the cable using the attachment feature without having any concern for the operation of the internal components. in this example it would typically be preferable to include additional retaining or entrapment features for the strands to help ensure they maintain position under low load. an example of this would be an attached plate with holes that the strands are pre-fed into prior to terminating. fig. 33 also provides a good example of how the tension on each strand may be individually adjusted. if even tension is desired, a torque wrench may be used to sequentially tighten the nuts shown. another approach is to provide an annular “washer-style” load cell beneath each nut. each of these load cells can then transmit strain information to a data collection unit.
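With per-strand load-cell readings available, the even-tensioning procedure lends itself to a simple iterative routine. The sketch below is a simplified model assumed for illustration (each nut turn changes only its own strand's tension, and the individual corrections cancel so the total load is conserved); it is not a procedure specified in the patent:

```python
def balance_step(tensions, gain=0.5):
    """One adjustment round: turn each nut toward the mean tension.

    The corrections sum to zero, so the total load on the cable is
    preserved under this simplified linear model.
    """
    mean = sum(tensions) / len(tensions)
    return [t + gain * (mean - t) for t in tensions]

def balance_tensions(tensions, tol=0.02, max_rounds=50):
    """Repeat adjustment rounds until every strand is within tol of the mean."""
    for _ in range(max_rounds):
        mean = sum(tensions) / len(tensions)
        if max(abs(t - mean) for t in tensions) <= tol * mean:
            break
        tensions = balance_step(tensions)
    return tensions

# Hypothetical load-cell readings (kN) from four strands after assembly:
balanced = balance_tensions([120.0, 80.0, 100.0, 100.0])
```

In a real assembly the strands are mechanically coupled through the collector, so each round of readings would be retaken from the load cells rather than computed, but the converging loop structure is the same.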
this information assists in properly tightening the cable and in monitoring the cable's loading conditions over time. the ability to individually adjust the tension on the strands allows some of the inventive process to be simplified. as explained previously, it is generally preferable to straighten a portion of a cable before it is cut to length. it is also preferable to provide a binder that secures the cable and prevents unwanted slippage between filaments and strands during the cutting and terminating processes. while the use of a binder on a straight cable is certainly ideal, the ability to individually tighten the strands allows these steps to be eliminated in some circumstances. one can simply cut the cable in whatever state it presently lies. the terminations are then added to each strand and gathered into the collector. it is very likely that placing a load on the cable will then produce significantly different strand-to-strand loads. however, a user can iteratively tighten the tension-adjusting devices in order to even out the load. thus, even though the cable may start in an “unbalanced” state, the ability to individually adjust the tension on each strand allows the user to achieve balance. in the illustrations of figs. 32 and 33 , the strands are exposed as they travel from alignment fixture 88 to collector 42 . this is desirable for purposes of illustration, but may be undesirable in actual operation. for example, the assembly of fig. 32 might be used for a large mooring line. such a line might be dragged laterally across an abrasive concrete surface while in use. thus, it is preferable to contain the strands, anchors, and other hardware within a protective enclosure. a shroud may be provided for this purpose. in the embodiment of fig. 32 , the shroud might assume the form of a cylindrical enclosure encompassing the strands, anchors, couplers, etc. 
the shroud might be part of the collector or might be a separate piece that is secured in place using bolts or other means. in some embodiments the collector itself may contain a portion of the guiding geometry and the alignment fixture may contain a portion of the guiding geometry. the alignment function can be performed in the collector, the anchor, a separate alignment fixture, or some combination among these. figs. 37 and 38 show an embodiment in which some of the alignment function occurs in the collector and some occurs in a separate alignment fixture. in fig. 37 , cable 10 consists of four twisted strands (with no core). an anchor 24 is affixed to the end of each strand. collector 42 collects all four anchors 24 and transmits a tensile load via attachment fixture 48 . collector 42 includes four anchor receivers 44 (two are visible in fig. 37 ). a cable receiver 46 extends out the bottom of each anchor receiver. fig. 38 shows a sectional view through the center of the assembly of fig. 37 . the assembly process starts with alignment fixture 88 being disconnected from collector 42 . the four anchors are passed through the lower portion of internal passage 100 through alignment fixture 88 . each anchor is then placed in an anchor receiver 44 (sliding each strand laterally inward through a cable receiver 46 ). the reader will note that each anchor receiver includes a shelf that transmits load to the anchor. in other words, if tension is placed on the cable the anchor cannot move downward in the orientation shown in the view. once the four anchors are in place, alignment fixture 88 is moved upward against the base of collector 42 . the alignment fixture is preferably secured to the base of the collector using conventional devices—such as bolts 102 . collector 42 and alignment fixture 88 contain features intended to guide the cable through a smooth transition as described previously. each cable receiver 46 includes an arcuate shoulder 96 .
likewise, internal passage 100 in alignment fixture 88 contains arcuate shoulder 98 (the arcuate shoulder is a revolved profile that defines the shape of the internal passage). these two arcuate shoulders—in combination—guide each strand from its exit from the anchor to the point where it joins the twisted cable. a smooth transition is thereby created. some embodiments may also include misalignment-accommodating features in the anchor itself. such features relieve or reduce bending stresses and may be used solely to produce the needed strand alignment, to reduce the complexity of accompanying alignment devices, or to simply minimize stresses resulting from some other misalignment occurring when a particular design is loaded. figs. 35 and 36 provide examples of such features. in fig. 35 , large exit fillet 104 has been added to the anchor in the region where the strand exits the anchor. in the event of a misalignment, free filaments 26 will not be forced against a sharp corner. instead, they will be able to bend gently around the radius of the fillet. the reader will note that the bending region is distal to the transition region. this separation allows for bending. in effect, the anchor itself is controlling the bending region and the geometry is controlling the bending stresses. fig. 36 shows an anchor in which flexible extension 106 has been added. the flexible extension is made of a pliable material (similar to a strain relief used in electrical cords). when a misalignment occurs, flexible extension 106 prevents the formation of concentrated stress near the critical transition between the filaments lying within the anchor and the freely flexing filaments. figs. 40-42 show still another embodiment of the present invention that optionally incorporates misalignment-accommodating features such as those shown in figs. 35 and 36 . fig. 40 shows a collector 42 that is essentially a flat plate. the plate includes 8 through holes that accommodate 8 anchors 24 .
each anchor incorporates a threaded stud. these protrude through collector 42 . a nut 116 is attached to each threaded stud on each anchor. the nuts are tightened in order to adjust the tension on the strands attached to each anchor. fig. 41 shows the same assembly in an elevation view. the reader will observe that collector 42 is not completely flat. instead, it incorporates a domed shape on one side. the through-hole sized to accept each anchor is drilled in a direction that is normal to the surface of the domed side. this fact causes the eight anchors 24 to be angled inward toward the central axis of the cable. fig. 42 shows the same assembly from the cable side of collector 42 . the reader will note that each of the anchors 24 includes a fillet in the area where the strand exits the anchor. the fillet is analogous to the one shown in the section view of fig. 35 . the presence of these fillets allows the angle of the individual strands to vary somewhat without placing undue stress on the strand. the cable shown in this example is generally referred to as an “8-strand hollow braid.” it is a braided assembly of 8 strands having no core element. when such a cable is loaded, the angle formed between each strand and the collector will vary. the presence of a fillet in each anchor (or other suitable bend-accommodating feature) is therefore preferable. looking still at fig. 42 , the reader may wish to know how the collector is connected to an external device. the central passage shown through the middle of collector 42 is useful for making external connections. a large threaded stud equipped with a flange can be attached to collector 42 by passing the threaded stud through the central passage in the collector and bringing the flange attached to the threaded stud up against the flat surface facing the viewer in fig. 42 . looking again at fig. 41 , those skilled in the art will realize that the dome shape provided for this particular example is not essential.
one could instead use a completely flat plate. the angled holes would still be made for the anchors (using the same angles shown for the domed example). a shoulder for each nut 116 could then be created by counter-boring each of the holes to a small depth using a square-end mill. the nuts would then bear against these shoulders rather than the exterior surface of the collector itself. accordingly, the reader will understand that the proposed invention allows a relatively large cable made of synthetic filaments to be terminated using conventional methods suitable for small cables. the inventive method and hardware involves: (1) dividing the cable into smaller components which are in the size range suitable for the prior art termination technology; (2) creating a termination on the end of each of the smaller components; (3) providing a collector which reassembles the individual terminations back into a single unit; and (4) maintaining alignment between the terminations and the smaller cable components while the terminations are “captured” within the collector. the embodiments disclosed achieve these objectives. however, those skilled in the art will realize that many other forms of hardware could be used to carry out the invention. although the preceding description contains significant detail, it should not be construed as limiting the scope of the invention but rather as providing illustrations of the preferred embodiments of the invention. thus, the language used in the claims shall define the invention rather than the specific embodiments provided.
167-721-693-069-330
DE
[ "EP", "DE", "JP", "US" ]
C08F2/48,G03F7/021,G03F7/038,G03F7/09
1986-05-09T00:00:00
1986
[ "C08", "G03" ]
photosensitive composition and photosensitive recording material prepared therefrom
a photosensitive mixture that contains a photosensitive compound, for example, a photoinitiator or a diazo compound, and a reaction product of a polymer containing active hydrogen with an olefinically unsaturated compound represented by the formula ##str1## wherein x and y are the same or different and denote oxygen or sulfur, r.sub.1 is an olefinically unsaturated aliphatic radical containing 2 to 8 carbon atoms and r.sub.2 is a saturated aliphatic radical containing 1 to 8 carbon atoms or an aryl radical containing 6 to 10 carbon atoms, is suitable for producing photoresists and printing plates.
1. a photosensitive mixture which contains a polymeric compound with lateral olefinically unsaturated radicals and a photosensitive compound as essential constituents, wherein the polymeric compound is a reaction product of an olefinically unsaturated compound of the general formula i wherein x and y are the same or different and denote oxygen or sulfur, r1 is an olefinically unsaturated aliphatic radical containing 2 to 8 carbon atoms and r2 is a saturated aliphatic radical containing 1 to 8 carbon atoms or an aryl radical containing 6 to 10 carbon atoms, with a polymer containing active hydrogen. 2. a photosensitive mixture as claimed in claim 1, wherein r 1 is an olefinically unsaturated aliphatic radical containing 2 to 4 carbon atoms. 3. a photosensitive mixture as claimed in claim 1 or 2, wherein r 2 is a saturated aliphatic radical containing 1 to 2 carbon atoms or an optionally substituted phenyl radical. 4. a photosensitive mixture as claimed in any of claims 1 to 3, wherein r 1 is a vinyl group and x and y are oxygen atoms. 5. a photosensitive mixture as claimed in claim 1, wherein the polymer containing active hydrogen is a polymer containing hydroxyl or amino groups. 6. a photosensitive mixture as claimed in claim 5, wherein the polymer containing hydroxyl or amino groups is a polymer containing vinyl-alcohol, allyl-alcohol, hydroxyalkyl-acrylate or hydroxyalkyl-methacrylate units, an epoxy resin, cellulose ether, cellulose ester or polyester containing free hydroxyl groups or a polyamine, polyamide or polyurethane. 7. a photosensitive mixture as claimed in claim 1, wherein the photosensitive compound is a photoinitiator for the polymerization or crosslinking of the olefinically unsaturated groups. 8. a photosensitive mixture as claimed in claim 1, wherein the photosensitive compound is a compound which cross-links on exposure. 9. 
a photosensitive mixture as claimed in claim 8, wherein the compound which crosslinks on exposure is a diazonium salt, a p-quinone diazide or an organic azido compound. 10. a photosensitive mixture as claimed in claim 7 or 8 which further contains a low-molecular polymerizable compound containing at least one olefinic double bond. 11. a photosensitive mixture as claimed in claim 1 or 10, which further contains a polymerization initiator which is inactive at room temperature and active at elevated temperature. 12. a photosensitive recording material comprising a layer support and a photosensitive layer, wherein the photosensitive layer is formed of a mixture as claimed in any of claims 1 to 11.
background of the invention the present invention relates to a photosensitive mixture that contains a polymeric compound with lateral olefinically unsaturated radicals and a photosensitive compound. the photosensitive mixture of the present invention is outstandingly suitable for producing printed circuits via resist technique and in particular, for producing lithographic printing plates used in offset printing. photosensitive mixtures are described in german offenlegungsschrift no. 2,053,364 that contain reaction products of polymers containing hydroxyl or amino groups and unsaturated sulfonyl isocyanates and also an initiator and optionally further polymerizable compounds. reaction products of the same type are described, in combination with diazonium salt polycondensates or low molecular azides, in german offenlegungsschrift no. 3,036,077. a disadvantage of these mixtures is, on the one hand, the complicated and expensive production of the necessary alkenylsulfonyl isocyanates. moreover, the printing plates produced therefrom exhibit an inadequate ink receptivity, so that an unacceptably high output of waste paper is produced in the proofing process and after a prolonged stoppage. in particular, mixtures according to german offenlegungsschrift no. 2,053,364, which contain exclusively a photoinitiator and the light-curable polymer described, have only a very low photosensitivity and a poor resistance to abrasion. consequently, long-run lithographic printing plates cannot be produced with these materials. from german offenlegungsschrift no. 2,053,363 mixtures are known that contain, as binders, reaction products of polymers containing hydroxyl or amino groups and at least one saturated alkyl-, alkoxy-, aryl- or aryloxysulfonyl isocyanate. the binder is in this case processed in combination with diazonium salt condensation products or photopolymerizable mixtures to form photosensitive layers. 
the mixtures obtained in this process can, however, be developed under aqueous and alkaline conditions only if the binders used have high acid numbers, as a result of which the abrasion resistance and the printing properties of the cured layer are adversely influenced. summary of the invention it is therefore an object of the present invention to provide a photosensitive mixture that displays the known advantages of the photopolymerizable compounds prepared from sulfonyl isocyanates but, in addition, yields photosensitive layers with better abrasion resistance after exposure and printing plates with better ink receptivity. it is also an object of the present invention to provide a photosensitive mixture, useful in producing photoresists and printing plates, that can be produced from readily available starting materials. it is a further object of the present invention to provide a photosensitive recording material displaying abrasion resistance, ink receptivity and other properties that are enhanced over known, comparable materials. in accomplishing the foregoing objects, there has been provided, in accordance with one aspect of the present invention, a photosensitive mixture comprising (a) a polymeric compound having lateral olefinically unsaturated radicals and (b) a photosensitive compound, wherein the polymeric compound is a reaction product of (i) an olefinically unsaturated compound represented by the formula ##str2## wherein x and y are the same or different, and each is oxygen or sulfur, r.sub.1 is an olefinically unsaturated aliphatic radical containing 2 to 8 carbon atoms and r.sub.2 is a saturated aliphatic radical containing 1 to 8 carbon atoms or an aryl radical containing 6 to 10 carbon atoms, with (ii) a polymer containing active hydrogen. in a preferred embodiment, substituent r.sub.1 of formula (i) is an olefinically unsaturated aliphatic radical containing 2 to 4 carbon atoms. 
there has also been provided, in accordance with another aspect of the present invention, a photosensitive recording material comprising a layer support and a photosensitive layer comprised of the above-described photosensitive mixture. other objects, features and advantages of the present invention will become apparent from the following detailed description. it should be understood, however, that the detailed description and the specific examples, while indicating preferred embodiments of the invention, are given by way of illustration only, since various changes and modifications within the spirit and scope of the invention will become apparent to those skilled in the art from this detailed description. detailed description of the preferred embodiments the unsaturated polymeric compounds contained in a mixture according to the present invention have lateral, olefinically unsaturated (thio)phosphinylurethane, (thio)phosphinylthiourethane, (thio)phosphinylurea or (thio)phosphinylthiourea functional groups. these compounds are produced by reacting olefinically unsaturated compounds of formula i with polymers containing active hydrogen, for example, polymers containing hydroxyl or amino groups, in which the active hydrogen is supplied by the hydroxyl or amino groups. if r.sub.1 is an aliphatic or cycloaliphatic radical, it generally contains one or two olefinic double bonds. examples are vinyl, propenyl, allyl, 1-buten-4-yl, 4-chlorobutadienyl, cyclohexen-1-yl, 3,5-dimethylcyclohexen-1-yl and 3-vinylhexyl radicals. aliphatic radicals containing 2 to 6 carbon atoms, particularly 2 to 3 carbon atoms, are preferred. preferred examples are vinyl, allyl, methallyl and crotyl radicals. halogen atoms, preferably chlorine, are suitable as substituents. r.sub.2 can be an alkyl radical containing 1 to 4 carbon atoms, in particular 1 or 2 carbon atoms, or a substituted phenyl radical.
halogen atoms, in particular chlorine, alkyl radicals or alkoxy radicals containing 1 to 4 carbon atoms are suitable as substituents. the aromatic radicals r.sub.2 can contain 1 to 3 substituents, preferably 1 or 2 substituents, or can be unsubstituted. preferred unsaturated (thio)phosphinic acid iso(thio)cyanates for the present invention include: allylmethylphosphinic acid isocyanate, allylmethylthiophosphinic acid isocyanate, allylmethylphosphinic acid isothiocyanate, allylmethylthiophosphinic acid isothiocyanate, crotylmethylphosphinic acid isocyanate, .beta.-methallylmethylphosphinic acid isocyanate, .beta.-methallylmethylphosphinic acid isothiocyanate, methylvinylphosphinic acid isocyanate, methylvinylthiophosphinic acid isocyanate, methylvinylphosphinic acid isothiocyanate, methylvinylthiophosphinic acid isothiocyanate, ethylvinylphosphinic acid isocyanate, butylvinylphosphinic acid isocyanate, phenylvinylphosphinic acid isocyanate, phenylvinylthiophosphinic acid isocyanate, phenylvinylphosphinic acid isothiocyanate, and phenylvinylthiophosphinic acid isothiocyanate. the (thio)phosphinic acid derivatives mentioned above can be prepared from the corresponding (thio)phosphinic acid chlorides of the general formula ##str3## wherein r.sub.1, r.sub.2 and x have the meanings specified above, by reaction with inorganic cyanates or thiocyanates. the preparation of these intermediate products is also described in the concurrently filed u.s. patent application ser. no. 047,103. the polymers that are capable of reacting with these compounds and that contain active hydrogen are preferably polymers having hydroxyl or amino groups.
among these are compounds that bear hydroxyl groups, which compounds are preferred to those containing amino groups, since the photosensitive polymers containing urethane groups have substantially better solubility in aqueous and alkaline developer solutions than do those that have urea groups. in addition, the reaction products containing urea groups are frequently more brittle and more difficult to process. the good solubility of the polymers having (thio)urethane or (thio)urea groups in aqueous-alkaline solutions results from the acidic character of the hydrogen atoms bonded to the nitrogen as a consequence of activation by adjacent carbonyl or phosphinic acid groups. for example, vinyl or allyl alcohol polymers can be used as starting polymers containing hydroxyl groups. preferably, vinyl alcohol polymers are used, in particular partially acetalated or partially esterified polyvinyl alcohols. among these polyvinyl alcohol polymers, polyvinylacetals are preferred, with polyvinylformals and polyvinylbutyrals having mean molecular weights between 5,000 and 200,000 or more, preferably of 10,000 to 100,000, and containing 5 to 30% by weight of vinyl alcohol units, being particularly preferred. for example, allyl alcohol copolymers can contain, as comonomer units, styrene or substituted styrene units, the allyl alcohol component preferably amounting to between 10 and 40% by weight. in addition, copolymers of vinyl or allyl alcohol with vinyl esters, vinyl ethers, acrylates, methacrylates or (meth)acrylonitrile can be used as starting materials. homopolymers or copolymers of hydroxyalkyl(meth)acrylates or glycerol mono(meth)acrylates with other known (meth)acrylates, such as methyl(meth)acrylate, hexyl(meth)acrylate, (meth)acrylonitrile and the like, can also be used with favorable results as starting polymers which contain hydroxyl groups, if they contain more than 10% by weight of hydroxyalkyl(meth)acrylate or glycerol mono(meth)acrylate units.
epoxy resins, for example, condensation products of 2,2-bis(4-hydroxyphenyl)propane and epichlorohydrin, and reaction products of partially reacted glycidyl(meth)acrylates, can also be used with advantage in the present invention, provided these polymers have an adequate number of free reactive hydroxyl groups and have molecular weights between 2,000 and 200,000. cellulose ethers and cellulose esters are likewise suitable, provided they have unreacted hydroxyl groups. particularly useful in this context are partial esters with low aliphatic carboxylic acids, such as cellulose acetate. suitable cellulose ethers are, for example, mixed alkyl hydroxyalkyl ethers of the celluloses. preferably between 0.3 and 2.3 free hydroxyl groups should be present per glucose unit. the condensable polyesters which are useful in the present invention include those compounds that are not completely esterified and that contain hydroxyl groups resulting from branchings or terminal hydroxyl groups. for those compounds the degree of branching should not be too high. finally, compounds containing amino groups can also be used as starting polymers, in accordance with the present invention, for the reaction with olefinically unsaturated (thio)phosphinic acid isocyanates and isothiocyanates of formula i. the polymers resulting from such reactions are not preferred, however, by virtue of their frequently observable brittleness. in addition, they often cannot be developed rapidly enough with aqueous and alkaline developer solutions, and the polymers can adhere too strongly to the metallic substrate. nevertheless, copolymers of n-vinyl-n-methylamine, and also of polyamides, such as polycaprolactam, and of polyurethanes, including reaction products of diisocyanates with dihydric or polyhydric alcohols, and of polyamideimides can be used to advantage in the present invention.
if a photosensitive mixture containing an unsaturated polymer in accordance with the present invention is employed in the offset field, then reaction products with polyvinylacetals are especially preferred. copolymers of hydroxyalkyl(meth)acrylate having low molecular weight are especially suitable when a mixture of the present invention is used in the resist technique. the quantitative proportion of the unsaturated polymers in the mixture according to the invention is generally 20% to 95% by weight, preferably 40% to 90% by weight, based on the nonvolatile constituents in the mixture. the production of the unsaturated polymers suitable for use in the present invention is not difficult, since the unsaturated (thio)phosphinic acid isocyanates and isothiocyanates of formula i react exceedingly easily with groupings that have active hydrogen atoms. consequently, it is frequently possible to dispense with an addition of catalysts or an increase in the reaction temperature. in general, to produce the unsaturated polymers, 2% to 25% by weight of the polymer containing active hydrogen is dissolved in a suitable inert solvent such as dioxane, tetrahydrofuran, ethylene glycol dimethyl ether, ethylene glycol diacetate or butanone, and the corresponding isocyanate, preferably dissolved in the same solvent, is added dropwise at room temperature. under these conditions a slight rise in temperature of the reaction mixture is typically observed. if the isothiocyanates of formula i are reacted, it is expedient to add a catalyst, such as diazabicyclo[2.2.2]octane, to the mixture described above, and/or to heat the mixture. 0.4 to 1.4 mol of (thio)phosphinic acid isocyanate and 0.4 to 1.7 mol of (thio)phosphinic acid isothiocyanate are preferably added per mol of active hydrogen, since an excess of the formula i compounds is necessary for a quantitative reaction of all the reactive groups of the starting polymer.
the unsaturated polymers may be processed further in the reaction solution, optionally after destroying excess isocyanate or isothiocyanate by adding an alcohol like ethanol. for characterization purposes or for special applications, the polymer can be isolated by adding it dropwise to 10 times the quantity of a nonsolvent, preferably slightly acidified water, under which circumstances it is provided as a colorless-to-slightly-yellowish, amorphous product which, in general, can readily be filtered. before being used in the photosensitive mixture according to the present invention, the product should be adequately dried. numerous substances can be used as photoinitiators in photosensitive mixtures according to the present invention. examples of suitable photoinitiators are benzoins, benzoin ethers, polynuclear quinones such as 2-ethylanthraquinone, acridine derivatives such as 9-phenylacridine and benzacridine, phenazine derivatives such as 9,10-dimethylbenz[o]phenazine, quinoxaline and quinoline derivatives such as 2,3-bis(4-methoxyphenyl)quinoxaline or 2-styrylquinoline, quinazoline compounds and acylphosphine oxide compounds. photoinitiators of this type are described in german patent nos. 2,027,467 and 2,039,861, and in european patent application no. 11,786. also suitable are hydrazones, mercapto compounds, pyrylium and thiopyrylium salts, synergistic mixtures with ketones or hydroxy ketones and dyestuff redox systems. especially preferred are photoinitiators having trihalomethyl groups that can be split by light, in which connection mention should be made, in particular, of corresponding compounds from the triazine and thiazoline series. such compounds are described in german offenlegungsschriften nos. 2,718,259, 3,333,450 and 3,337,024. a particularly preferred triazine photoinitiator is 2-(4'-methoxystyryl)-4,6-bistrichloromethyl-s-triazine. 
the photoinitiators used in the present invention are generally added in quantitative proportions of 0.1% to 15% by weight, preferably 0.5% to 10% by weight, based on the nonvolatile constituents of the mixture. the unsaturated polymers used according to the present invention have very high photosensitivities in combination with a suitable photoinitiator and, optionally, a polymerizable monomer, and provide adequate curing of the exposed layer regions. in combination with the most varied, negative-functioning photosensitive substances, such as diazonium salt polycondensates, azido derivatives and p-quinonediazides, they produce layers that can be developed readily and without scumming in aqueous and alkaline media and that are notable for maximum abrasion resistance and excellent thermal stability. the mixtures obtained in this way can therefore also be used in numerous applications, in which connection particular mention should again be made of the production of lithographic plates and photoresists. it is surprising that it is possible, pursuant to the present invention, to combine unsaturated polymers, photoinitiators and other photosensitive substances in photosensitive mixtures without, for example, the storage capability of the mixtures being impaired appreciably as a result. in addition, the unsaturated polymers can be used in combination with other photosensitive substances as nonphotoactive binders, it being possible to dispense with the addition of a photoinitiator. such photosensitive mixtures have the advantage that a thermal curing of the developed layer, if desirable, can be effected. in this case it may be advantageous to add to the layer thermally activatable compounds, for example, epoxy compounds and n-methylolamides like hexamethylol and hexamethoxymethylmelamine, that are capable of crosslinking. thermally activatable polymerization initiators, such as organic peroxides, are also suitable for this purpose. 
if, as described above, the unsaturated polymers are used as binders, they may be combined also with saturated polymers suitable for the same purpose. polymerizable compounds can likewise be used as additives in a mixture according to the present invention, in particular in the field of printed circuits. suitable polymerizable compounds are known from u.s. pat. nos. 2,760,683 and 3,060,023. examples are esters of acrylic acid or methacrylic acid, such as diglycerol diacrylate, guaiacol glycerol ether acrylate, neopentylglycol diacrylate, 2,2-dimethylolbutan-3-ol diacrylate, pentaerythritol tri- and tetraacrylate, and also the corresponding methacrylates. furthermore, acrylates or methacrylates that contain urethane groups are suitable, as are acrylates and methacrylates of polyesters that contain hydroxyl groups. finally, prepolymers containing allyl or vinyl groups are also suitable, those monomers or oligomers being preferred that contain at least two polymerizable groups per molecule. the polymerizable compounds can in general be present in a mixture of the present invention in a quantity of up to 50% by weight, preferably of 10% to 35% by weight, based on the nonvolatile constituents in the mixture. in such photopolymerizable mixtures, it is expedient to increase the initiator concentration, within the range specified above, in accordance with the unsaturated proportion of the photopolymerizable monomers or oligomers added. virtually all known, negatively-functioning photosensitive substances can be used in the present invention, provided they are compatible with the polymer matrix. for instance, diazonium salt polycondensation products, such as condensation products of condensable aromatic diazonium salts with aldehydes, are very well suited. exemplary of such condensation products are the products of diphenylamine-4-diazonium salts with formaldehyde. 
advantageously, cocondensation products are used that, in addition to the diazonium salt units, contain nonphotosensitive units derived from condensable compounds. such condensation products are known from german offenlegungsschriften nos. 2,024,242, 2,024,243 and 2,024,244. generally, all the diazonium salt condensates described in german offenlegungsschrift no. 2,739,774 are suitable. for certain applications, low- and higher molecular azido derivatives are especially suitable as photosensitive compounds, with low-molecular azido compounds containing at least two azido groups per molecule being preferred. illustrative compounds of this type are 4,4'-diazidostilbenes, 4,4'-diazidobenzophenones, 4,4'-diazidobenzalacetophenones, 4,4'-diazidobenzalacetones and 4,4'-diazidobenzalcyclohexanones. the photosensitivity of such azido compounds may be reinforced by the optional use of suitable sensitizers, for example, 1,2-benzanthraquinone. furthermore, those polyfunctional azides are also suitable that display an individual absorption so displaced by conjugation with double bonds in the molecule that no additional sensitization is necessary during exposure. additional suitable azido compounds are known from british published application no. 790,131, german patent no. 950,618 and u.s. pat. no. 2,848,328. moreover, low-molecular diazo compounds, such as p-quinone diazides and p-iminoquinone diazides, can be used as photosensitive compounds in the present invention. such mixtures are, however, not preferred because of their relatively low photosensitivity. the quantity of photosensitive, cross-linkable compound contained in a photosensitive mixture of the present invention is generally between 5% and 60% by weight, preferably between 10% and 40% by weight, based on the nonvolatile constituents of the mixture. the mixtures according to the present invention can be processed in a conventional manner, according to their application.
for this purpose, the unsaturated reaction product is dissolved in a suitable solvent or solvent mixture, such as ethylene glycol monomethyl ether, ethylene glycol monoacetate, dioxane, tetrahydrofuran or butanone. a coating can then be obtained from the resulting solution, which is optionally mixed with a photoinitiator that is soluble in the mixture; optionally, the mixture can be sensitized by the addition of additional photosensitive substances, which are likewise taken into solution. as a function of the nature of the photosensitive compound employed, furthermore, the following additives can be added to the photosensitive coating solution: (a) for sensitizing with diazo compounds, for example, p-quinone diazides or diazonium salt condensates: a dyestuff for rendering the photosensitive layer visible on the support material; an acid, preferably phosphoric acid, for stabilizing the diazonium salt; and a contrast agent which produces an intensification of the color change in the layer on exposure, (b) for sensitization with azido compounds: a dyestuff that contributes to rendering the photosensitive layer visible and to increasing the sensitivity of the photo-crosslinking compound in the desired spectral range, and (c) for sensitizing with photopolymerizable substances: photoinitiators that initiate the polymerization step on exposure, and inhibitors that suppress any thermally initiated polymerization. other additives, such as plasticizers, pigments, further resin components, etc., can be added to a photosensitive mixture of the present invention if they prove suitable for the particular application. such additives, their action and possibilities of use are known to those skilled in the art. the solutions obtained as described above are filtered for the purpose of further processing, in order to remove any undissolved constituents, and are applied to a suitable support material in a known manner, for example, by knife-coating or spinning.
the coating thus applied is then dried. support materials that are suitable for the coating include, among others, aluminum which has been roughened up mechanically or electrochemically and optionally anodized and post-treated; aluminum clad foil or foil otherwise rendered hydrophilic; foil copper-coated in vacuum and multimetal foils. in this context, the type of application depends to a considerable extent on the desired layer thickness of the photosensitive layer, the preferred layer thicknesses of the dried layer being between 0.5 and 200 .mu.m. after adequate drying, the materials can be converted to their respective application form, in a known manner, by imagewise exposure using a negative film original or, with suitable sensitization, using a laser beam and subsequent development. in this connection, the development is preferably carried out with aqueous-alkaline developer solutions having a ph value in the range between about 8 and 13. optionally, the developers can contain additives that contribute to an accelerated, technically appropriate development operation. particularly suitable additives for this purpose include surfactants and small quantities of a low-volatility organic solvent, such as benzyl alcohol. the composition of suitable developer solutions for the photosensitive layers according to the present invention is primarily dependent on the particular application; however, they should contain, as a rule, more than 75% by weight of water and less than 5% by weight of an organic solvent. suitable developer solutions are known, for example, from german offenlegungsschriften nos. 2,809,774, 3,100,259 and 3,140,186. the mixtures according to the present invention make it possible to produce very high-run lithographic printing plates. these exhibit a favorable ink receptivity while retaining favorable copying properties. 
for this field of application, reaction products of polyvinylacetals and phosphinic acid isocyanates of formula i are particularly suitable as unsaturated polymers. the lithographic printing plates obtained from photosensitive mixtures containing these polymers exhibit, in addition, a capability for easy, scum-free development with aqueous-alkaline developer solutions, and a very favorable storage stability. moreover, photoresist stencils with excellent resolution can be obtained with the mixtures according to the present invention. for this application, reaction products of copolymers of hydroxyalkyl(meth)acrylate with phosphinic acid isocyanates of formula i are preferred unsaturated polymers. the examples below are intended to explain the present invention and its possible applications in more detail. parts by weight and parts by volume are in the ratio of g/cm.sup.3 ; percentages and quantitative ratios are to be understood in parts by weight, unless otherwise specified.

example 1

100 g (0.8 mol) of methylvinylphosphinic acid chloride were dissolved in 100 ml of acetonitrile. 52.2 g (0.8 mol) of sodium cyanate were added thereto in small amounts, with vigorous stirring. the temperature was kept at a maximum of approximately 40.degree. c. by cooling. the precipitate was filtered off by suction after stirring for 24 hours and washed with acetonitrile. the filtrate was concentrated in vacuo. the residue thus produced was distilled at a temperature of 58.degree.-60.degree. c. and a pressure of 52.6 pa. 88 g of methylvinylphosphinic acid isocyanate (83.5% of theory) were obtained.

c.sub.4 h.sub.6 no.sub.2 p (131)
calculated: 36.6% c, 4.6% h, 10.7% n, 23.7% p
found: 35.9% c, 4.6% h, 10.3% n, 24.0% p

20 parts by weight of a polyvinylbutyral with a molecular weight of about 80,000, containing 71% by weight of vinylbutyral, 2% by weight of vinyl-acetate and 27% by weight of vinyl-alcohol units, were dissolved in 350 parts by weight of tetrahydrofuran.
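as an illustrative aside (not part of the patent text), the "calculated" line of the elemental analysis for methylvinylphosphinic acid isocyanate, c.sub.4 h.sub.6 no.sub.2 p (mw 131), can be reproduced directly from the molecular formula. the short python sketch below shows the arithmetic; the atomic masses are standard values, so small rounding differences against the patent's figures are to be expected.

```python
# Theoretical elemental analysis from a molecular formula (illustrative sketch;
# the formula C4H6NO2P and the target percentages come from example 1).
ATOMIC_MASS = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999, "P": 30.974}

def elemental_analysis(counts):
    """Return (mass percentage per element, molecular weight)."""
    mw = sum(ATOMIC_MASS[el] * n for el, n in counts.items())
    pct = {el: 100.0 * ATOMIC_MASS[el] * n / mw for el, n in counts.items()}
    return pct, mw

pct, mw = elemental_analysis({"C": 4, "H": 6, "N": 1, "O": 2, "P": 1})
print(round(mw))                  # 131
for el in ("C", "H", "N", "P"):
    # close to the "calculated" 36.6% c, 4.6% h, 10.7% n, 23.7% p
    print(el, round(pct[el], 1))
```

the same helper reproduces, within rounding, the other "calculated" lines quoted in examples 2, 3, 5 and 10.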
to this solution were added dropwise, at room temperature, 12 parts by weight of methylvinylphosphinic acid isocyanate in 50 parts by weight of tetrahydrofuran. the mixture was stirred for 4 hours, mixed with 50 parts by weight of ethanol and then added dropwise to 5,000 parts by weight of vigorously stirred distilled water. a fibrous amorphous product was produced which was filtered off by suction and dried. 27 parts by weight of a white polymer was obtained which had an acid number of 75 and a composition of c: 53.8%, h: 8.3%, n: 3.2% and p: 6.6%. a coating solution was prepared that comprised 3.2 parts by weight of the reaction product described above, 0.32 part by weight of victoria pure blue fga (c.i. basic blue 81) and 0.32 part by weight of 2-(4'-methoxystyryl)-4,6-bis-trichloromethyl-s-triazine in 150.0 parts by weight of ethylene glycol monomethyl ether. the filtered solution was applied to a 0.3 mm-thick aluminum foil, which had been roughened up by brushing with an aqueous grinding-agent suspension and then pretreated with a 0.1% aqueous solution of polyvinylphosphonic acid, and then dried. the copying layer thus obtained, which had a dry layer weight of 0.76 g/m.sup.2, was exposed for 35 seconds, by the use of a negative original, with a metal halide lamp having a power of 5 kw. the exposed layer, which exhibited a clear differentiation between the exposed and unexposed layer regions, was treated with a developer solution of the following composition: 1 part by weight of trisodium phosphate dodecahydrate and 1 part by weight of sodium octylsulfate in 98 parts by weight of distilled water by means of a plush tampon, the nonexposed layer regions being removed within 15 seconds after being wetted by the developer liquid. rinsing was then carried out with water, followed by drying. in the resulting copy, step 4 of a silver-film continuous-tone step wedge, with a density range from 0.05 to 3.05 and density increments of 0.15, was fully imaged. 
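for orientation (an illustrative aside, not patent text): the silver-film continuous-tone step wedge described above runs from density 0.05 to 3.05 in increments of 0.15, i.e. 21 steps; assuming step 1 lies at density 0.05, the fully imaged step 4 corresponds to a density of 0.50.

```python
# Step-wedge arithmetic for the wedge of example 1.
# Assumption (not stated in the patent): step 1 lies at density 0.05.
d_min, d_max, increment = 0.05, 3.05, 0.15

n_steps = round((d_max - d_min) / increment) + 1

def step_density(n):
    """Density of step n, counting from step 1 at d_min."""
    return d_min + (n - 1) * increment

print(n_steps)                      # 21 steps in total
print(f"{step_density(4):.2f}")     # 0.50 -- the covered step 4
```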
even ultrafine rasters and lines in the original were clearly reproduced. the printing plate thus treated was blackened with protective ink, which was immediately taken up by the image areas.

example 2

106 g (0.57 mol) of phenylvinylphosphinic acid chloride were dissolved in 150 ml of acetonitrile and stirred with 38 g (0.59 mol) of sodium cyanate at a temperature of 30.degree. c. after 23 hours the precipitate was filtered off by suction and washed with acetonitrile. the filtrate was concentrated and the residue distilled at a temperature of 107.degree.-110.degree. c. and a pressure of 33 pa. 84 g of phenylvinylphosphinic acid isocyanate (77% of theory) were obtained.

c.sub.9 h.sub.8 no.sub.2 p (193)
calculated: 56.0% c, 4.2% h, 7.3% n, 16.0% p
found: 54.9% c, 4.0% h, 7.1% n, 16.0% p

20 parts by weight of the polyvinylbutyral described in example 1 were reacted with 14 parts by weight of phenylvinylphosphinic acid isocyanate, likewise as specified in example 1. a colorless product was obtained with an acid number of 58 and a composition of c: 59.8%, h: 9.4%, n: 1.8% and p: 4.3%. a coating solution was prepared from 3.2 parts by weight of the reaction product described above, 0.32 part by weight of victoria pure blue fga (c.i. basic blue 81) and 0.32 part by weight of 2-(4'-styrylphenyl)-4,6-bis-trichloromethyl-s-triazine in 150 parts by weight of ethylene glycol monomethyl ether and applied to an aluminum foil which had been electrochemically roughened up in nitric acid, then anodized and treated with polyvinylphosphonic acid. the coating solution was dried to a layer weight of 0.8 g/m.sup.2. the exposure and development procedure was similar to that specified in example 1. in this case, too, a covered step 4 was obtained in the continuous-tone step wedge described above. the lithographic printing plate obtained was clamped in a sheet-fed offset machine and a run of up to 140,000 impressions was printed.
after the printing trial had been discontinued, it was found that the plate used was still in satisfactory condition.

example 3

100 g (0.8 mol) of methylvinylphosphinic acid chloride were dissolved in 160 ml of acetonitrile. 61 g (0.8 mol) of ammonium thiocyanate were added thereto in small amounts, with vigorous stirring. the temperature was kept at a maximum of 30.degree. c. by cooling slightly. after 3 days of stirring, the precipitate was filtered off by suction and post-washed with acetonitrile. the filtrate was concentrated in vacuo. the residue thus produced was distilled at a temperature of 70.degree. c. and a pressure of 6.7 pa. 65 g of methylvinylphosphinic acid isothiocyanate (55.1% of theory) were obtained.

c.sub.4 h.sub.6 nops (147)
calculated: 32.6% c, 4.1% h, 9.5% n, 21.2% p, 21.8% s
found: 32.4% c, 4.1% h, 9.3% n, 21.2% p, 21.9% s

20 parts by weight of the polyvinylbutyral described in example 1 were dissolved in 350 parts by weight of tetrahydrofuran. 0.2 part by weight of diazabicyclo[2.2.2]octane was added to the clear solution. then 12 parts by weight of methylvinylphosphinic acid isothiocyanate in 50 parts by weight of tetrahydrofuran were added dropwise; in this process, no increase in temperature occurred. the mixture was brought to reflux temperature and left for 5 hours at this temperature. during this time the clear solution became yellow in color. after cooling, the solution was added dropwise to 5,000 parts by weight of water and the yellow fibrous product was filtered off by suction. it had a composition of c: 57.8%, h: 9.0%, n: 1.4%, p: 3.7% and s: 3.4%. a coating solution was prepared from 3.2 parts by weight of the reaction product described above, 0.1 part by weight of crystal violet (c.i. 42555), 0.1 part by weight of phenylazodiphenylamine and 0.25 part by weight of 2-(4'-ethoxynaphth-1-yl)-4,6-bistrichloromethyl-s-triazine in 150 parts by weight of ethylene glycol monomethyl ether.
the coating solution was applied to an aluminum plate as in example 2. the dry layer weight was 0.78 g/m.sup.2. the exposed layer was treated, as in example 1, with a developer of the following composition: 5 parts by weight of sodium lauryl sulfate and 3 parts by weight of sodium metasilicate pentahydrate in 92 parts by weight of distilled water. testing of the lithographic printing plate in a sheet-fed offset machine produced many thousands of satisfactory impressions.

example 4

20 parts by weight of a polyvinylbutyral with a molecular weight of over 80,000, which contained 77% to 80% by weight of vinylbutyral, 2% by weight of vinylacetate and 18% to 21% by weight of vinyl-alcohol units, were reacted, as in example 1, with 16 parts by weight of methylvinylthiophosphinic acid isocyanate in tetrahydrofuran. the product thus obtained was faintly yellowish in color and had a phosphorus content of 6.7% and a sulfur component of 6.1%. as in the previous example, a coating solution was prepared in which 2-(4'-trichloromethylbenzoylmethylene)-3-ethylbenzothiazoline was used as a photoinitiator. after filtration the solution was applied to an electrochemically roughened-up aluminum foil which had been anodized and post-treated with polyvinylphosphonic acid, and dried to a layer weight of 1.05 g/m.sup.2. the foil was then exposed for 50 seconds through an original and treated with a developer of the following composition: 3.0 parts by weight of sodium octyl sulfate, 2.0 parts by weight of potassium oxalate, 4.0 parts by weight of disodium hydrogenphosphate dodecahydrate, 1.5 parts by weight of trisodium phosphate dodecahydrate and 0.2 part by weight of sodium metasilicate nonahydrate in 89.3 parts by weight of distilled water. it was possible to remove the unexposed layer regions easily and without scumming by means of a plush tampon soaked in developer solution. the original was faultlessly reproduced.
the printing plate was clamped in a sheet-fed offset machine and supplied more than 150,000 good impressions.

example 5

200 g (1.23 mol) of ethyl crotylmethylphosphinate were dissolved in 150 ml of dichloromethane, and phosgene was fed in while cooling at a temperature of 12.degree.-15.degree. c. and stirring for 3.5 hours. stirring was continued for a further hour at 25.degree. c., followed by distillation. 182 g of crotylmethylphosphinic acid chloride (97% of theory), with a boiling point of 83.degree.-85.degree. c. (260 pa), were obtained.

c.sub.5 h.sub.10 clop (153)
calculated: 39.4% c, 6.6% h, 20.3% p
found: 39.9% c, 6.5% h, 20.1% p

170 g (1.1 mol) of crotylmethylphosphinic acid chloride were dissolved in 300 ml of acetonitrile and heated with 73 g (1.1 mol) of sodium cyanate to 55.degree. c. while stirring. after 4.5 hours, the precipitate was filtered off by suction and washed with acetonitrile, the filtrate was concentrated, and the residue distilled at a temperature of 76.degree.-78.degree. c. and at a pressure of 27 pa. 123 g of crotylmethylphosphinic acid isocyanate (70% of theory) were obtained.

c.sub.6 h.sub.10 no.sub.2 p (159)
calculated: 45.3% c, 6.3% h, 8.8% n, 19.5% p
found: 45.0% c, 6.2% h, 8.6% n, 19.4% p

11.3 parts by weight of the polyvinylbutyral described in example 1 were dissolved in 160 parts by weight of tetrahydrofuran. 7.2 parts by weight of crotylmethylphosphinic acid isocyanate in 25 parts by weight of tetrahydrofuran were added dropwise to the clear solution and left for 4 hours at room temperature. the product, subsequently precipitated in water and isolated, had an acid number of 87.
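as a quick plausibility check (illustrative, not patent text), the "70% of theory" reported for the isocyanate step of example 5 follows from the 1.1 mol of chloride employed and the molar mass of 159 for c.sub.6 h.sub.10 no.sub.2 p:

```python
# Percent-of-theory yield for the second step of example 5 (illustrative check).
def percent_of_theory(mass_obtained_g, mol_limiting, molar_mass_g_per_mol):
    """Yield relative to the theoretical maximum from the limiting reagent."""
    return 100.0 * mass_obtained_g / (mol_limiting * molar_mass_g_per_mol)

# 123 g of product (MW 159) from 1.1 mol of the acid chloride
y = percent_of_theory(123.0, 1.1, 159.0)
print(f"{y:.0f}% of theory")   # 70% of theory, as stated in the example
```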
a coating solution was prepared from 48.5 parts by weight of the reaction product described above, 24.1 parts by weight of a diazonium salt polycondensation product prepared from 1 mol of 3-methoxydiphenylamine-4-diazonium sulfate and 1 mol of 4,4'-bis(methoxymethyl)diphenyl ether in 85% phosphoric acid and isolated as mesitylenesulfonate, 2.4 parts by weight of phosphoric acid (85%), 1.8 parts by weight of victoria pure blue fga (c.i. basic blue 81) and 0.9 part by weight of phenylazodiphenylamine in 1,700.0 parts by weight of ethylene glycol monomethyl ether and 518.0 parts by weight of tetrahydrofuran. the filtered solution was applied to an aluminum foil which had been pretreated as in example 2, and the solution was then dried to a dry weight of 1.0 g/m.sup.2. the copying layer was exposed for 35 seconds under a negative original. a solution consisting of 5.0 parts by weight of sodium lauryl sulfate, 1.5 parts by weight of sodium metasilicate pentahydrate and 1.0 part by weight of trisodium phosphate dodecahydrate in 92.5 parts by weight of demineralized water was used for development. in the copy obtained, stage 5 of the continuous-tone step wedge was still fully imaged. after the printing plate had been clamped in a sheet-fed offset machine, more than 130,000 qualitatively satisfactory impressions were obtained.

example 6

to the coating solution of example 5 were added 2.6 parts by weight of 2-(4'-styrylphenyl)-4,6-bis-trichloromethyl-s-triazine. for the same layer weight, the photosensitivity of the copy produced using the resulting solution corresponded to that of the previous example. in a printing trial conducted as described above, a run of more than 170,000 qualitatively good impressions was obtained.
example 7

a coating solution was prepared from 27.2 parts by weight of an 8.4% solution of the reaction product described in example 1 in tetrahydrofuran, 0.5 part by weight of the diazonium salt polycondensation product described in example 5, 0.1 part by weight of phosphoric acid (85%), 0.03 part by weight of phenylazodiphenylamine, 0.06 part by weight of 2-(4'-methoxystyryl)-4,6-bistrichloromethyl-s-triazine and 0.03 part by weight of a blue azo dyestuff obtained by coupling 2,4-dinitro-6-chlorobenzene-diazonium salt to 2-methoxy-5-acetylamino-n-cyanoethyl-n-hydroxyethylaniline in 150.0 parts by weight of ethylene glycol monomethyl ether. the filtered solution was applied to an aluminum foil, which had been electrolytically roughened up in hydrochloric acid, then anodized and post-treated, and the solution then dried to a layer weight of 1.1 g/m.sup.2. the layer thus prepared was exposed through a negative original for 35 seconds, a covered step 5 being obtained. the development was carried out as described in example 5. after the printing plate had been clamped in a sheet-fed offset machine, it was possible to observe that the printing plate took up the ink presented very rapidly, so that only a small amount of waste paper was produced. the printing trial was discontinued after 150,000 impressions, the lithographic printing plate used not exhibiting any abrasion effects.

example 8

the solutions described in examples 5 and 6 were each applied to five aluminum plates which had been electrochemically roughened up, anodized and treated with polyvinylphosphonic acid. the 10 samples in each case had a layer weight of between 1.03 and 1.11 g/m.sup.2. the plates obtained were stored in pairs (a sample as in example 5 and 6 in each case) in a drying oven heated to 100.degree. c. for one hour or two to five hours. after cooling down, the plates were exposed for 35 seconds under a test original and processed with the developer from example 5.
the printing plates made as in both examples, when stored for one or two hours, showed no change compared with a copy processed normally. the layer of example 5 likewise largely corresponded to the reference copy after being stored for three hours, while the layer of example 6 could be developed with a slight delay and showed an extension of the continuous-tone step wedge by about 2 steps. this effect was exhibited also by the layer of example 5 stored for 4 hours, while the corresponding layer of example 6 could be developed only with the formation of flakes. when clamped in the printing machine, the latter layer also showed a tendency to take up ink at the non-image areas. this applied also, but to a markedly greater extent, to the plates stored for five hours. all in all, it was possible to deduce from the present example that the quite good storage stability of the layer of example 5 was surprisingly little influenced by the addition of a photoinitiator (example 6).

example 9

a terpolymer which comprised 50% by weight of hydroxyethyl methacrylate, 20% by weight of methyl methacrylate and 30% by weight of hexyl methacrylate was mixed with an excess of methylvinylphosphinic acid isocyanate (for preparation, see example 1) such that all the hydroxyl groups were converted into unsaturated phosphinylurethane groups. the polymer obtained had a mean molecular weight of about 32,000. a solution was prepared from 6.5 parts by weight of the reaction product described above, 5.6 parts by weight of an industrial mixture of pentaerythritol tri- and tetramethacrylate, 0.3 part by weight of 2-(4'-methoxystyryl)-4,6-bis-trichloromethyl-s-triazine and 0.03 part by weight of the azo dyestuff described in example 7 in 25.0 parts by weight of butanone, 2.0 parts by weight of ethanol and 1.0 part by weight of butylacetate, which solution was spun onto a biaxially stretched and thermally fixed, 25 .mu.m-thick polyethylene terephthalate film.
a layer weight of 35 g/m.sup.2 was obtained after drying at 100.degree. c. the dry resist film produced in this way was laminated with a laminator at 120.degree. c. onto a phenoplast-laminated plate that was clad with 35 .mu.m-thick copper foil. the resist was then exposed for about 25 seconds under a commercial exposure unit. a line original with line widths and spacings of down to 80 .mu.m was used as the original. after exposure, the polyester film was pulled off and the layer obtained was developed for 90 seconds in a developer solution with a composition of 3.0 parts by weight of sodium metasilicate nonahydrate and 0.04 part by weight of nonionic wetting agent (coconut fat alcohol polyoxyethylene ether containing approximately 8 oxyethylene units) in 96.96 parts by weight of demineralized water in a spray development unit. the plate was then rinsed for 30 seconds with tap water, slightly etched for 30 seconds with a 15% ammonium peroxydisulfate solution, and finally electroplated in the following electrolytic baths: (1) for 30 minutes in a copper electrolyte made by schloetter, geislingen/steige, "glanzkupferbad" type, current density: 2.5 a/dm.sup.2, metal buildup: about 15 .mu.m. (2) 10 minutes in a nickel bath made by schloetter, geislingen/steige, "norma" type, current density: 4 a/dm.sup.2, metal buildup: 9 .mu.m. the plate showed no damage or undercutting. the stripping was carried out with 5% koh solution at 50.degree. c. the exposed copper was etched away with the usual etching media. example 10 140 g (0.86 mol) of ethyl .beta.-methallylmethylphosphinate were dissolved in 120 ml of dichloromethane, and phosgene was fed in at a temperature of 15.degree.-20.degree. c. for 3 hours. stirring was continued for an additional 2 hours at 25.degree. c., and distillation was then carried out. 127 g of .beta.-methallylmethylphosphinic acid chloride (97% of theory), with a boiling point of 66.degree.-68.degree. c. (27 pa), were obtained.
c.sub.5 h.sub.10 clop (153) calculated: 39.4% c, 6.6% h, 20.3% p found: 39.8% c, 6.8% h, 20.6% p 127 g (0.83 mol) of .beta.-methallylmethylphosphinic acid chloride were dissolved in 200 ml of acetonitrile and heated, with stirring at 50.degree.-55.degree. c., together with 54 g (0.83 mol) of sodium cyanate. after 5 hours, the precipitate was filtered off by suction and washed with acetonitrile, and the filtrate was concentrated and distilled at a temperature of 70.degree.-75.degree. c. and at a pressure of 13 pa. 115 g of .beta.-methallylmethylphosphinic acid isocyanate (87% of theory) were obtained. c.sub.6 h.sub.10 no.sub.2 p (159) calculated: 45.3% c, 6.3% h, 8.8% n, 19.5% p found: 45.2% c, 6.4% h, 8.6% n, 18.9% p 20 parts by weight of an epoxy resin (epikote 1007, made by shell) were dissolved in 300 parts by weight of tetrahydrofuran. 12 parts by weight of .beta.-methallylmethylphosphinic acid isocyanate in 30 parts by weight of tetrahydrofuran were added dropwise to the clear solution and left for 4 hours at room temperature. a coating solution consisting of 22.6 parts by weight of an 8.84% solution of the reaction product described above in tetrahydrofuran, 1.5 parts by weight of trimethylolpropane trimethacrylate, 0.8 part by weight of an industrial mixture of pentaerythritol tri- and tetraacrylate, 0.2 parts by weight of 2-(4'-trichloromethylbenzoylmethylene)-3-ethylbenzothiazolin and 0.01 part by weight of p-methoxyphenol in 60.0 parts by weight of ethylene glycol monomethyl ether was applied to an aluminum foil which had been electrochemically roughened up, anodized and post-treated, to a thickness of 3.2 .mu.m. the dried layer was coated with polyvinyl alcohol (about 1 g/m.sup.2) and imagewise exposed under a negative original. an excellently resolving printing plate was obtained, which plate achieved a very high run in a sheet-fed offset machine. 
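The "calculated" elemental percentages quoted in these analyses (e.g. 45.3% c, 6.3% h, 8.8% n, 19.5% p for c.sub.6 h.sub.10 no.sub.2 p, m ≈ 159) follow directly from the molecular formula and standard atomic masses. A minimal sketch of that arithmetic (the helper function is illustrative, not part of the patent):

```python
# Theoretical elemental composition from a molecular formula,
# as quoted in the "calculated" lines of the elemental analyses.
ATOMIC_MASS = {"C": 12.011, "H": 1.008, "N": 14.007,
               "O": 15.999, "P": 30.974, "Cl": 35.453, "S": 32.06}

def composition(formula):
    """Mass percent of each element, rounded to 0.1%.
    formula: dict mapping element symbol to atom count."""
    total = sum(ATOMIC_MASS[el] * n for el, n in formula.items())
    return {el: round(100 * ATOMIC_MASS[el] * n / total, 1)
            for el, n in formula.items()}

# beta-methallylmethylphosphinic acid isocyanate, C6H10NO2P (M ~ 159):
calc = composition({"C": 6, "H": 10, "N": 1, "O": 2, "P": 1})
# reproduces the quoted values: 45.3% C, 6.3% H, 8.8% N, 19.5% P
```

The same helper reproduces the figures for the acid chloride c.sub.5 h.sub.10 clop (39.4% c, 6.6% h, 20.3% p).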
example 11 a coating solution was prepared from 20.0 parts by weight of the reaction product described in example 2 of a polyvinylacetal and phenylvinylphosphinic acid isocyanate (for preparation, see example 2), 18.0 parts by weight of 2,6-bis(4'-azidobenzal)cyclohexanone, 2.5 parts by weight of rhodamine 6 gdn extra (c.i. 45160) and 1.9 parts by weight of 2-(4'-styrylphenyl)-4,6-bistrichloromethyl-s-triazine in 900.0 parts by weight of ethylene glycol monomethyl ether and 450.0 parts by weight of tetrahydrofuran. this solution was applied to an aluminum plate to a dry layer weight of 0.9 g/m.sup.2, as in example 2. after exposure under a negative original, development was carried out in a developer with the composition of 5 parts by weight of triethanolammonium lauryl sulfate, 1 part by weight of sodium metasilicate pentahydrate and 1 part by weight of trisodium phosphate dodecahydrate, in 93 parts by weight of water, followed by preservation. with the printing plate obtained in this manner, a run of more than 170,000 qualitatively perfect impressions was achieved on a sheet-fed offset machine. example 12 a coating solution was prepared from 53.5 parts by weight of the polymeric reaction product described in example 10 as an 8.8% solution in tetrahydrofuran, 4.4 parts by weight of 4,4'-diazostilbene-3,3'-disulfonic acid, 0.4 part by weight of rhodamine 6 gdn extra, 0.3 part by weight of 2-benzoylmethylene-1-methyl-.beta.-naphthothiazoline and 0.2 part by weight of 2-ethylanthraquinone in 200 parts by weight of ethylene glycol monomethyl ether and 80 parts by weight of an industrial mixture of butanol/butyl acetate (15:85). the solution was applied to the support described in example 2 to give a dry layer weight of 0.8 g/m.sup.2. the copying layer was exposed for 35 seconds under a negative original, a positive dark-red image being produced. the layer was processed with the developer solution from example 3. in the resulting copy, step 5 was fully covered.
a run of 130,000 sheets was obtained, as described above, the lithographic printing plate taking up ink immediately on the image areas, even after prolonged standing. example 13 a coating solution was prepared from 3.0 parts by weight of the polymeric reaction product described in example 1, 1.0 part by weight of the diazonium salt condensation product described in example 5, 0.2 part by weight of victoria pure blue fga, 0.3 part by weight of an industrial mixture of pentaerythritol tri- and tetraacrylate, 0.3 part by weight of 2-(4'-methoxystyryl)-4,6-bis-trichloromethyl-s-triazine and 0.1 part by weight of phosphoric acid (85%) in 100 parts by weight of ethylene glycol monomethyl ether and 80 parts by weight of tetrahydrofuran and applied to an aluminum foil treated as in example 2. the dry layer weight was 1.6 g/m.sup.2. the copying layer thus obtained was exposed for 30 seconds under a negative original, a covered stage 5 of the half-tone step wedge being obtained. a mixture consisting of 0.2 part by weight of sodium metasilicate nonahydrate, 3.9 parts by weight of disodium hydrogenphosphate dodecahydrate, 3.5 parts by weight of trisodium phosphate dodecahydrate, 1.5 parts by weight of potassium tetraborate tetrahydrate and 2.9 parts by weight of sodium octyl sulfate in 88.0 parts by weight of demineralized water was used as developer. far more than 250,000 impressions were achieved with the developed printing plate in a sheet-fed offset machine. examples 14-17 as in example 1, 20 parts by weight of the polyvinylbutyral described there were reacted in each case with 8 parts by weight of the phosphinic acid isocyanates below: example 14: methylvinylphosphinic acid isocyanate (for preparation see example 1) acid number: 54; analysis: c: 55.8%, h: 8.3%, n: 1.9%, p: 3.9%. example 15: allylmethylphosphinic acid isocyanate acid number: 46; analysis: c: 58.3%, h: 9.1%, n: 1.6%, p: 3.6%. 
example 16: crotylmethylphosphinic acid isocyanate (for preparation see example 5) acid number: 50; analysis: c: 58.1%, h: 8.5%, n: 2.0%, p: 3.8%. example 17: .beta.-methallylmethylphosphinic acid isocyanate (for preparation see example 10) acid number: 42; analysis: c: 57.7%, h: 9.0%, n: 1.5%, p: 3.7%. preparation of allylmethylphosphinic acid isocyanate: 166 g (1.2 mol) of allylmethylphosphinic acid chloride were dissolved in 300 ml of acetonitrile and heated with 78 g (1.2 mol) of sodium cyanate to 50.degree.-55.degree. c. while stirring. after 5.5 hours, the precipitate was filtered off by suction and washed with acetonitrile. the filtrate was then concentrated and the residue distilled at a temperature of 78.degree.-80.degree. c. and at a pressure of 25 pa. 123 g of allylmethylphosphinic acid isocyanate (71% of theory) were obtained. c.sub.5 h.sub.8 no.sub.2 p (145) calculated: 41.4% c, 5.6% h, 9.7% n, 21.4% p found: 41.6% c, 5.7% h, 9.5% n, 21.0% p as in example 1, coating solutions were prepared from these polymers and applied to an aluminum foil which had been electrochemically roughened up, anodized and pretreated with polyvinylphosphonic acid. the coatings were each dried to a mean dry layer weight of approximately 0.8 g/m.sup.2. after the exposure, the nonimage areas were removed with the developer composition specified in example 1. in order to achieve a practical differentiation between image and nonimage areas, the layer of example 14 had to be exposed for 24 seconds, that of example 15 for 35 seconds and that of examples 16 and 17 for 50 seconds. 
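The percent-of-theory yields quoted throughout the preparations (e.g. 123 g of allylmethylphosphinic acid isocyanate, 71% of theory, from 1.2 mol of the acid chloride) are simple molar-yield arithmetic for a 1:1 conversion. A sketch, with an illustrative helper name:

```python
# Percent-of-theory yield for a 1:1 molar conversion, as quoted
# in the preparation examples.
def percent_of_theory(mass_obtained_g, moles_started, molar_mass_product):
    theoretical_g = moles_started * molar_mass_product  # 1:1 stoichiometry
    return round(100 * mass_obtained_g / theoretical_g)

# allylmethylphosphinic acid isocyanate, C5H8NO2P, M = 145 g/mol:
# 1.2 mol of acid chloride gave 123 g of distilled product.
yield_pct = percent_of_theory(123, 1.2, 145)  # 71, matching "71% of theory"
```

The same check reproduces the 97% figure for the acid chloride of example 10 (127 g from 0.86 mol, M = 153).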
example 18 after cleaning the copper surface with an abrasive and then rinsing with acetone, a plastic-material board clad with thin copper foil was coated with a solution of the following composition: 13.0 parts by weight of the polymeric reaction product described in example 1, 11.0 parts by weight of 4,4'-bis(.beta.-acryloyloxyethoxy)-diphenyl ether, 0.4 part by weight of 2-(4'-methoxystyryl)-4,6-bis-trichloromethyl-s-triazine and 0.1 part by weight of crystal violet in 150 parts by weight of butanone, 40 parts by weight of ethanol and 76 parts by weight of butyl acetate. a layer weight (dry) of 3 g/m.sup.2 was established. exposure was carried out through a negative which represented a circuit pattern, and development was carried out as in example 9. a printed circuit was obtained by etching away the copper in the exposed areas with 40% aqueous iron(iii) chloride solution. examples 19 and 20 two coating solutions were prepared, as in example 1, by the use of the polymers of examples 14 and 16, respectively, and applied to an electrochemically pretreated aluminum foil. the layer weight was in both cases about 1 g/m.sup.2. after exposure and development, the copies obtained were preserved and heated for 5 minutes at 200.degree. c. both printing plates achieved high runs of qualitatively perfect impressions in the sheet-fed offset machine. example 21 140 g (0.86 mol) of ethyl .beta.-methallylmethylphosphinate were dissolved in 120 ml of dichloromethane and phosgene was fed in for 3 hours at a temperature of 15.degree.-20.degree. c. stirring was continued for a further 2 hours at 25.degree. c., followed by distillation. 127 g of .beta.-methallylmethylphosphinic acid chloride (97% of theory) with a boiling point of 66.degree.-68.degree. c. (27 pa), were obtained. 
c.sub.5 h.sub.10 clop (153) calculated: 39.4% c, 6.6% h, 20.3% p found: 39.8% c, 6.8% h, 20.6% p 83 g (0.55 mol) of .beta.-methallylmethylphosphinic acid chloride were mixed, with slight cooling, in 200 ml of acetonitrile with 42 g (0.55 mol) of ammonium thiocyanate; the mixture was then stirred at a temperature of about 30.degree. c. after 3 days the precipitate was filtered off by suction and was washed with acetonitrile. the filtrate was then concentrated and distilled. 72 g of .beta.-methallylmethylphosphinic acid isothiocyanate (75% of theory) with a boiling point of 98.degree.-100.degree. c. (66 pa), were obtained. c.sub.6 h.sub.10 nops (175) calculated: 41.1% c, 5.8% h, 8.0% n, 17.7% p, 18.3% s found: 41.0% c, 5.6% h, 8.6% n, 17.4% p, 18.0% s 20 parts by weight of the polyvinylbutyral described in example 1 were dissolved in 300 parts by weight of tetrahydrofuran and mixed with 0.2 parts by weight of diazabicyclo(2.2.2)octane. 16 parts by weight of .beta.-methallylmethylphosphinic acid isothiocyanate in 50 parts by weight of tetrahydrofuran were added dropwise, and the resulting mixture was kept under reflux for 6 hours. the yellow solution thus obtained was added dropwise to 6,000 parts by weight of water, yielding 34 parts by weight of a yellow fibrous polymer which contained 7.2% of phosphorus and 6.1% of sulfur. a coating solution was prepared from 25 parts by weight of the reaction product described above, 25 parts by weight of the diazonium salt polycondensate described in example 5, 2.5 parts by weight of phosphoric acid (85%), 2.0 parts by weight of victoria pure blue fga and 0.8 part by weight of phenylazodiphenylamine in 1600 parts by weight of ethylene glycol monomethyl ether. pursuant to the above-described procedures and after exposure through a negative original, a clearly differentiated image of the original was obtained with the developer used in example 13. 
in the printing machine, the ink was taken up even during the first 6 impressions, when a printing plate prepared with the coating solution was employed. a run performance of markedly more than 100,000 impressions was achieved. example 22 20 parts by weight of a polyamide (ultramid lc.rtm., basf) were swelled for about 12 hours in 200 parts by weight of phosphoric acid tris(dimethylamide), and then were dissolved over the course of 4 hours with vigorous stirring. 15 parts by weight of phenylvinylphosphinic acid isocyanate (for preparation, see example 2) were then added dropwise to the clear solution at 30.degree. c. stirring of the mixture was continued for 6 hours and then precipitation was carried out in a mixture of 10% ethanol and 90% water. the precipitate was filtered, washed and dried. a coating solution was prepared from 8.0 parts by weight of the reaction product described above, 7.5 parts by weight of an industrial mixture of pentaerythritol tri- and tetraacrylate, 0.5 part by weight of 2-(4'-methoxystyryl)-4,6-bis-trichloromethyl-s-triazine, 1.3 parts by weight of a copper phthalocyanine pigment and 0.1 part by weight of 4-methoxyphenol in 200.0 parts by weight of ethylene glycol monomethyl ether and 150.0 parts by weight of dimethylformamide. the coating was carried out as described in example 10. after exposure and development with a developer mixture consisting of 4 parts by weight of benzyl alcohol, 3 parts by weight of sodium lauryl sulfate, 3 parts by weight of trisodium phosphate and 1 part by weight of sodium hydroxide in 89 parts by weight of water, a technically appropriate printing plate was obtained with which a very high run performance was achieved.
168-742-618-781-650
US
[ "CA", "AU", "EP", "KR", "US", "JP", "WO", "BR", "CN", "DE", "MX" ]
G05B23/02,B23K9/095,B23K9/10,B23K9/173,B23K26/06,B23K26/20,B23K26/21,G06Q10/06,B23K37/00,G07C3/06
2013-03-15T00:00:00
2013
[ "G05", "B23", "G06", "G07" ]
welding resource performance comparison system and method
metal fabrication systems, such as welding systems and related equipment, may be analyzed and their performance compared by collecting parameter data from the systems during welding operations via a web based system. the data is stored and analyzed upon request by a user. a user viewable page may be provided that allows for selection of systems and groups of systems of interest. parameters to be used as the basis for comparison may also be selected. pages illustrating the comparisons may be generated and transmitted to the user based upon the selections.
1. a metal fabrication resource performance comparison method, comprising: via at least one computer processor, accessing data representative of a parameter of metal fabrication operations performed on a plurality of metal fabrication resources; via a user viewing device, presenting to a user a listing of the plurality of metal fabrication resources configured to allow for user selection of multiple of the plurality of metal fabrication resources for analysis; via the at least one computer processor, analyzing the parameter for each of the user selected metal fabrication resources to compare performance between the user selected metal fabrication resources; via the at least one computer processor, populating a user viewable report page with graphical indicia representative of the analysis of each of the user selected metal fabrication resources; and via the at least one computer processor, transmitting the user viewable report page to the user viewing device. 2. the method of claim 1 , wherein the plurality of metal fabrication resources are displayed individually and in configurable groups, and wherein the analysis is selectably performed on at least one individual metal fabrication resource and at least one group of metal fabrication resources. 3. the method of claim 1 , wherein the user viewable report page includes graphical indicia, numerical indicia, and indicia identifying each of the plurality of metal fabrication resources. 4. the method of claim 1 , wherein the data is stored in a cloud resource. 5. the method of claim 1 , wherein the analysis and the user viewable report page population are performed in a cloud resource. 6. the method of claim 1 , wherein the user viewable report page comprises a web page viewable in a browser. 7. the method of claim 1 , wherein the data is stored for multiple time periods, and wherein at least one user viewable page is configured to receive a user selection of a particular time period for the analysis. 8. 
the method of claim 1 , wherein the data is representative of multiple parameters of metal fabrication operations, and wherein the user viewable report page transmitted to the user viewing device is configured to receive a user selection of a particular metal fabrication operation parameter for the user viewable report page. 9. the method of claim 8 , wherein the user viewable report page comprises indicia identifying the selected metal fabrication operation parameter. 10. a metal fabrication resource performance comparison system, comprising: a communications component that in operation accesses data representative of a plurality of parameters sampled during metal fabrication operations of a plurality of metal fabrication resources and transmits a user viewable report page to a user viewing device; and at least one computer processor that in operation analyzes a user selected parameter of the plurality of parameters to compare performance between the plurality of metal fabrication resources, and populates the user viewable report page with graphical indicia representative of each of the analyses for each of the plurality of metal fabrication resources, wherein the user viewable report page includes indicia identifying the plurality of multiple metal fabrication resources, and wherein the analysis is performed and the report populated for multiple metal fabrication resources of the plurality of metal fabrication resources selected by the user via the identifying indicia. 11. the system of claim 10 , wherein the plurality of parameters comprises at least one of arc on time, arc starts, and deposition. 12. the system of claim 10 , wherein the data is accessed from a cloud-based data storage system. 13. the system of claim 10 , wherein the at least one computer processor comprises a cloud-based system. 14. 
a metal fabrication resource performance comparison interface, comprising: at least one user viewable report page transmitted to a user viewing device, the report page comprising user viewable indicia identifying a plurality of metal fabrication resources, a time period of interest, and at least one graphical indicia of a user selected parameter of a plurality of parameters sampled during metal fabrication operations performed on the plurality of metal fabrication resources for the time period of interest, the graphical indicia comparing performance between multiple selected metal fabrication resources of the plurality of metal fabrication resources, wherein the user viewable report page is configured to receive user selections of the multiple metal fabrication resources of the plurality of metal fabrication resources from a listing of the plurality of metal fabrication resources. 15. the interface of claim 14 , wherein the at least one user viewable report page comprises code that is executable by a processor for viewing in a general purpose browser. 16. the interface of claim 14 , wherein the plurality of metal fabrication resources are displayed individually and in configurable groups, and wherein the analysis is selectably performed on at least one individual metal fabrication resource and at least one group of metal fabrication resources. 17. the interface of claim 14 , wherein the interface comprises an electronic display. 18. 
a non-transitory tangible computer readable medium comprising executable instructions that when executed cause a processor to: access data representative of a parameter of metal fabrication operations performed on a plurality of metal fabrication resources; transmit a listing of the plurality of metal fabrication resources to a user viewing device; receive a user selection of multiple of the plurality of metal fabrication resources for analysis; analyze the parameter for each of the multiple user selected metal fabrication resources to compare performance between the multiple user selected metal fabrication resources; populate a user viewable report page with graphical indicia representative of the analysis of each of the user selected metal fabrication resources; and transmit the user viewable report page to the user viewing device. 19. the non-transitory tangible computer readable medium of claim 18 , wherein the instructions comprise instructions that when executed cause the processor to initiate display of the plurality of metal fabrication resources individually and in configurable groups, and to selectably perform the analysis on at least one individual metal fabrication resource and at least one group of metal fabrication resources. 20. the non-transitory tangible computer readable medium of claim 18 , wherein the instructions comprise instructions to store the data in a cloud resource. 21. the non-transitory tangible computer readable medium of claim 18 , wherein the instructions comprise instructions to perform the analysis and populate the user viewable report page in a cloud resource. 22. the non-transitory tangible computer readable medium of claim 18 , wherein the instructions comprise instructions that when executed cause the processor to display the user viewable report page via a web page viewable in a browser. 23. 
the non-transitory tangible computer readable medium of claim 18 , wherein the instructions comprise instructions that when executed cause the processor to store the data for multiple time periods, and to receive a user selection of a particular time period for the analysis via at least one user viewable page. 24. the non-transitory tangible computer readable medium of claim 18 , wherein the instructions comprise instructions that when executed cause the processor to receive a user selection of a particular metal fabrication operation parameter for the user viewable report page, wherein the data is representative of multiple parameters of metal fabrication operations.
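The method of claim 1 amounts to a select-analyze-report loop: access parameter data for a set of resources, accept a user selection, compare the selected resources on one parameter, and populate a report with graphical indicia. A minimal sketch of that flow (all names hypothetical; this is not the patented implementation, and the "graphical indicia" are reduced to text bars):

```python
# Sketch of the claimed comparison flow: look up one parameter for the
# user-selected resources and build report entries with a simple
# graphical indicium (a bar scaled to the best performer).
def compare_resources(data, parameter, selected_ids):
    """data: {resource_id: {parameter_name: value, ...}}"""
    values = {rid: data[rid][parameter] for rid in selected_ids}
    best = max(values.values()) or 1  # guard against all-zero values
    return {rid: {"value": v, "bar": "#" * round(10 * v / best)}
            for rid, v in values.items()}

data = {"welder-01": {"arc_on_time_h": 6.2},
        "welder-02": {"arc_on_time_h": 3.1}}
report = compare_resources(data, "arc_on_time_h", ["welder-01", "welder-02"])
# welder-01 receives a full-length bar, welder-02 a half-length bar
```

In the claimed system this report would be rendered as a user viewable web page and transmitted to the user's viewing device.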
background the invention relates generally to welding systems and support equipment for welding operations. in particular, the invention relates to techniques for monitoring and analytical comparison of performance of welding resources. a wide range of welding systems have been developed, along with ancillary and support equipment for various fabrication, repair, and other applications. for example, welding systems are ubiquitous throughout industry for assembling parts, structures and sub-structures, frames, and many components. these systems may be manual, automated or semi-automated. a modern manufacturing and fabrication entity may use a large number of welding systems, and these may be grouped by location, task, job, and so forth. smaller operations may use welding systems from time to time, but these are often nevertheless critical to their operations. for some entities and individuals, welding systems may be stationary or mobile, such as mounted on carts, trucks, and repair vehicles. in all of these scenarios it is increasingly useful to set performance criteria, monitor performance, analyze performance, and, where possible, report performance to the operator and/or to management teams and engineers. such analysis allows for planning of resources, determinations of prices and profitability, scheduling of resources, and enterprise-wide accountability, among many other uses. systems designed to gather, store, analyze and report welding system performance have not, however, reached a point where they are easily and effectively utilized. in some entities, limited tracking of welds, weld quality, and system and operator performance may be available. however, these do not typically allow for any significant degree of analysis, tracking or comparison. improvements are needed in such tools.
more specifically, improvements would be useful that allow for data to be gathered at one or multiple locations and from one or multiple systems, analysis performed, and reports generated and presented at the same or other locations. other improvements might include the ability to retrospectively review performance, and to see performance compared to goals and similar systems across groups and entities. brief description the present disclosure sets forth systems and methods designed to respond to such needs. in accordance with certain aspects of the disclosure, a metal fabrication resource performance comparison method comprises, via a web based system, accessing data representative of a parameter of metal fabrication operations performed on a plurality of metal fabrication resources. via at least one computer processor, the parameter is analyzed for each of the plurality of metal fabrication resources to compare performance of the metal fabrication resources, and a user viewable report page is populated with graphical indicia representative of the analysis. then via the web based system, the user viewable report page is transmitted to a user. also disclosed is a metal fabrication resource performance comparison system, comprising a web based communications component that in operation accesses data representative of a parameter of metal fabrication operations of a plurality of metal fabrication resources. at least one computer processor analyzes the parameter to compare performance of the metal fabrication resources, and populates a user viewable report page with graphical indicia representative of the analysis for each of the metal fabrication resources. a web based transmission component transmits the user viewable report page to a user.
drawings these and other features, aspects, and advantages of the present invention will become better understood when the following detailed description is read with reference to the accompanying drawings in which like characters represent like parts throughout the drawings, wherein: fig. 1 is a diagrammatical representation of an exemplary monitoring system for gathering information, storing information, analyzing the information, and presenting analysis results in accordance with aspects of the present disclosure, here applied to a large manufacturing and fabrication entity; fig. 2 is a diagrammatical view of an application of the system for a single or mobile welding system with which the techniques may be applied; fig. 3 is a diagrammatical representation of an exemplary cloud-based implementation of the system; fig. 4 is a diagrammatical view of an exemplary welding system of the type that might be monitored and analyzed in accordance with the techniques; fig. 5 is a diagrammatical representation of certain functional components of the monitoring and analysis system; fig. 6 is an exemplary web page view for reporting of goals and performance of welding systems via the system; fig. 7 is another exemplary web page view illustrating an interface for setting such goals; fig. 8 is a further exemplary web page view of a goal setting interface; fig. 9 is an exemplary web page view of an interface for tracing parameters of a particular weld or system; fig. 10 is an exemplary web page view listing historical welds that may be analyzed and presented; fig. 11 is an exemplary web page view of historical traces available via the system; fig. 12 is an exemplary web page view of a status interface allowing for selection of systems and groups of systems for comparison; and fig. 13 is an exemplary web page view of a comparison of systems and groups of systems selected via the interface of fig. 12 . detailed description as illustrated generally in fig.
1 , a monitoring system 10 allows for monitoring and analysis of one or multiple metal fabrication systems and support equipment. in this view, multiple welding systems 12 and 14 may be interacted with, as may be support equipment 16 . the welding systems and support equipment may be physically and/or analytically grouped as indicated generally by reference numeral 18 . such grouping may allow for enhanced data gathering, data analysis, comparison, and so forth. as described in greater detail below, even where groupings are not physical (i.e., the systems are not physically located near one another), highly flexible groupings may be formed at any time through use of the present techniques. in the illustrated embodiment, the equipment is further grouped in a department or location as indicated by reference numeral 20 . other departments and locations may be similarly associated as indicated by reference numeral 22 . as will be appreciated by those skilled in the art, in sophisticated manufacturing and fabrication entities, different locations, facilities, factories, plants, and so forth may be situated in various parts of the same country, or internationally. the present techniques allow for collection of system data from all such systems regardless of their location. moreover, the groupings into such departments, locations and other equipment sets are highly flexible, regardless of the actual location of the equipment. in general, as represented in fig. 1 , the system includes a monitoring/analysis system 24 that communicates with the monitored welding systems and support equipment, and that can collect information from these when desired. a number of different scenarios may be envisaged for accessing and collecting the information. for example, certain welding systems and support equipment will be provided with sensors, control circuitry, feedback circuits, and so forth that allow for collection of welding parameter data.
some details of such systems are described below. where system parameters such as arc on time are analyzed, for example, data may be collected in each system reflecting when welding arcs are established and times during which welding arcs are maintained. currents and voltages will commonly be sensed and data representative of these will be stored. for support equipment, such as grinders, lights, positioners, fixtures, and so forth, different parameters may be monitored, such as currents, switch closures, and so forth. as noted, many systems will be capable of collecting such data and storing the data within the system itself. in other scenarios, local networks, computer systems, servers, shared memory, and so forth will be provided that can centralize the collected data to at least some extent. such networks and support components are not illustrated in fig. 1 for clarity. the monitoring/analysis system 24 , then, may collect this information directly from the systems or from any support components that themselves collect and store the data. the data will typically be tagged with such identifying information as system designations, system types, time and date, part and weld specification, where applicable, operator and/or shift identifications, and so forth. many such parameters may be monitored on a regular basis and maintained in the system. the monitoring/analysis system 24 may itself store such information, or may make use of external memory. as described more fully below, the system allows for grouping of the information, analysis of the information, and presentation of the information via one or more operator interfaces 26 . in many cases the operator interface may comprise a conventional computer workstation, a handheld device, a tablet computer, or any other suitable interface.
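Arc on time, mentioned above, can be derived from the sensed weld current: the arc is counted as established while current exceeds some threshold. A sketch under that assumption; the record field names illustrating the tagging (system designation, type, time/date, operator/shift) are hypothetical, not the patent's schema:

```python
# Arc on time from periodically sampled weld current; the arc is
# treated as "on" while current exceeds a threshold (an assumption,
# not a method stated in the patent).
def arc_on_seconds(current_samples_a, sample_period_s, threshold_a=10.0):
    return sum(sample_period_s for i in current_samples_a if i > threshold_a)

# A collected data record, tagged as the description suggests;
# all field names are illustrative only.
record = {
    "system_id": "welder-07",
    "system_type": "mig",
    "timestamp": "2013-03-15t08:30:00",
    "operator_shift": "shift-a",
    "arc_on_s": arc_on_seconds([0, 0, 120, 150, 140, 0], 0.5),
}
# three samples above threshold at 0.5 s each -> 1.5 s of arc on time
```

Records like this, accumulated over many welds and time periods, are what the monitoring/analysis system would aggregate for the comparison reports.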
it is presently contemplated that a number of different device platforms may be accommodated, and web pages containing useful interfaces, analysis, reports, and the like will be presented in a general purpose interface, such as a browser. it is contemplated that, although different device platforms may use different data transmission and display standards, the system is generally platform-agnostic, allowing reports and summaries of monitored and analyzed data to be requested and presented on any of a variety of devices, such as desktop workstations, laptop computers, tablet computers, hand-held devices and telephones, and so forth. the system may include verification and authentication features, such as by prompting for user names, passwords, and so forth. the system may be designed for a wide range of welding system types, scenarios, applications, and numbers. while fig. 1 illustrates a scenario that might occur in a large manufacturing or fabrication facility or entity, the system may equally well be applied to much smaller applications, and even to individual welders. as shown in fig. 2 , for example, even welders that operate independently and in mobile settings may be accommodated. the application illustrated in fig. 2 is an engine-driven generator/welder 28 provided in a truck or work vehicle. in these scenarios, it is contemplated that data may be collected by one of several mechanisms. the welder itself may be capable of transmitting the data wirelessly via its own communications circuitry, or may communicate data via a device connected to the welding system, such as communications circuits within the vehicle, a smart phone, a tablet or laptop computer, and so forth. the system could also be tethered to a data collection point when it arrives at a specified location. in the illustration of fig. 
2 a removable memory device 30 , such as a flash drive, may be provided that can collect the information from the system and move the information into a monitoring/analysis system 32 . in smaller applications of this type, the system may be particularly designed for reduced data sets, and analysis that would be more useful to the welding operators and entities involved. it should be apparent to those skilled in the art, then, that the system can be scaled and adapted to any one of a wide range of use cases. fig. 3 illustrates an exemplary implementation, for example, which is cloud-based. this implementation is presently contemplated for many scenarios in which data collection, storage, and analysis are performed remotely, such as on a subscription or paid service basis. here the monitored welding systems and support equipment 34 communicate directly and indirectly with one or more cloud data storage and services entities 36 . the entities may take any desired form, and significant enhancements in such services are occurring and will continue to occur in coming years. it is contemplated, for example, that a third party provider may contract with a fabricating or manufacturing entity to collect information from the systems, store the information off-site, and perform processing on the information that allows for the analysis and reporting described below. the operator interfaces 26 may be similar to those discussed above, but would typically be addressed to (“hit”) a website for the cloud-based service. following authentication, then, web pages may be served that allow for the desired monitoring, analysis and presentation. the cloud-based services would therefore include components such as communications devices, memory devices, servers, data processing and analysis hardware and software, and so forth. as noted above, many different types and configurations of welding systems may be accommodated by the present techniques. 
those skilled in the welding arts will readily appreciate that certain such systems have become standards throughout industry. these include, for example, systems commonly referred to as gas metal arc welding (gmaw), gas tungsten arc welding (gtaw), shielded metal arc welding (smaw), submerged arc welding (saw), laser, and stud welding systems, to mention only a few. all such systems rely on application of energy to workpieces and electrodes to at least partially melt and fuse metals. the systems may be used with or without filler metal, but most systems common in industry do use some form of filler metal which is either machine or hand fed. moreover, certain systems may be used with other materials than metals, and these systems, too, are intended to be serviced where appropriate by the present techniques. by way of example only, fig. 4 illustrates an exemplary welding system 12 , in this case a mig welding system. the system includes a power supply that receives incoming power, such as from a generator or the power grid, and converts the incoming power to weld power. power conversion circuitry 38 allows for such conversion, and will typically include power electronic devices that are controlled to provide alternating current (ac), direct current (dc), pulsed or other waveforms as defined by welding processes and procedures. the power conversion circuitry will typically be controlled by control and processing circuitry 40 . such circuitry will be supported by memory (not separately shown) that stores welding process definitions, operator-set parameters, and so forth. in a typical system, such parameters may be set via an operator interface 42 . the systems will include some type of data or network interface as indicated at reference numeral 44 . in many such systems this circuitry will be included in the power supply, although it could be located in a separate device. 
the system allows for performing welding operations, collecting both control and actual data (e.g., feedback of voltages, currents, wire feed speeds, etc.). where desired, certain of this data may be stored in a removable memory 46 . in many systems, however, the information will be stored in the same memory devices that support the control and processing circuitry 40 . in the case of a mig system, a separate wire feeder 48 may be provided. the components of the wire feeder are illustrated here in dashed lines because some systems may optionally use wire feeders. the illustrated system, again, is intended only to be exemplary. such wire feeders, where utilized, typically include a spool of welding electrode wire 50 and a drive mechanism 52 that contacts and drives the wire under the control of drive control circuitry 54 . the drive control circuitry may be set to provide a desired wire feed speed in a conventional manner. in a typical mig system a gas valve 56 will allow for control of the flow of the shielding gas. settings on the wire feeder may be made via an operator interface 58 . the welding wire, gas, and power are provided by a weld cable as indicated diagrammatically at reference numeral 60 , and a return cable (sometimes referred to as a ground cable) 62 . the return cable is commonly coupled to a workpiece via a clamp, and the power, wire, and gas are supplied via the weld cable to a welding torch 64 . here again, it should be noted that the system of fig. 4 is exemplary only; the present techniques allow for monitoring and analysis of performance of these types of cutting, heating, and welding systems, as well as others. indeed, the same monitoring/analysis system may collect data from different types, makes, sizes, and versions of metal fabrication systems. the data collected and analyzed may relate to different processes and weld procedures on the same or different systems. 
moreover, as discussed above, data may be collected from support equipment used in, around or with the metal fabrication systems. fig. 5 illustrates certain functional components that may typically be found in the monitoring/analysis system. in the notation used in fig. 5 , these components will be located in a cloud-based service entity, although similar components may be included in any one of the implementations of the system. the components may include, for example, data collection components 68 that receive data from systems and entities. the data collection components may “pull” the data by prompting data exchange with the systems, or may work on a “push” basis where data is provided to the data collection components by the systems without prompting (e.g., at the initiation of the welding system, network device, or management system to which the equipment is connected). the data collection may occur at any desired frequency, or at points in time that are not cyclic. for example, data may be collected on an occasional basis as welding operations are performed, or data may be provided periodically, such as on a shift basis, a daily basis, a weekly basis, or simply as desired by a welding operator or facilities management team. the systems will also include memory 70 that stores raw and/or processed data collected from the systems. analysis/reporting components 72 allow for processing of the raw data, and associating the resulting analysis with systems, entities, groups, welding operators, and so forth. examples of the analysis and reporting component operations are provided in greater detail below. finally, communications components 74 allow for populating reports and interface pages with the results of the analysis. a wide range of such pages may be provided as indicated by reference numeral 76 in fig. 5 , some of which are described in detail below. 
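The "push"-style collection and tagging described above might look like the following sketch. All class and field names here are assumptions for illustration; the patent does not prescribe a data layout.

```python
# Illustrative "push"-basis data collection component: monitored systems
# submit tagged records without being prompted by the collector.
import datetime

class DataCollector:
    def __init__(self):
        self.records = []

    def push(self, system_id, parameters, operator=None, shift=None):
        """Accept a record pushed by a welding system or network device."""
        self.records.append({
            "system": system_id,
            "timestamp": datetime.datetime.now().isoformat(),  # time/date tag
            "operator": operator,                              # operator tag
            "shift": shift,                                    # shift tag
            "parameters": parameters,                          # monitored data
        })

    def for_system(self, system_id):
        """Return every stored record tagged with the given system."""
        return [r for r in self.records if r["system"] == system_id]

collector = DataCollector()
collector.push("bottom welder", {"arc_on_pct": 62.5}, operator="op-7", shift="day")
print(len(collector.for_system("bottom welder")))  # 1
```

A "pull" variant would instead have the collector prompt each system for its buffered records; the tagging and storage would be the same.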
the communications components 74 may thus include various servers, modems, internet interfaces, web page definitions, and the like. as noted above, the present techniques allow for a wide range of data to be collected from welding systems and support equipment for setup, configuration, storage, analysis, tracking, monitoring, comparison and so forth. in the presently contemplated embodiments this information is summarized in a series of interface pages that may be configured as web pages that can be provided to and viewed on a general purpose browser. in practice, however, any suitable interface may be used. the use of general purpose browsers and similar interfaces, however, allows for the data to be served to any range of device platforms and different types of devices, including stationary workstations, enterprise systems, but also mobile and handheld devices as mentioned above. figs. 6-13 illustrate exemplary interface pages that may be provided for a range of uses. referring first to fig. 6 , a goal report page 78 is illustrated. this page allows for the display of one or more welding system and support equipment designations as well as performance analysis based upon goals set for the systems. in the page illustrated in fig. 6 , a number of welding systems and support equipment are identified as indicated at reference numeral 80 . these may be associated in groups as indicated by reference numeral 82 . in practice, the data underlying all of the analyses discussed in the present disclosure are associated with individual systems. these may be freely associated with one another, then, by the interface tools. in the illustrated example, a location or department 84 has been created with several groups designated within the location. each of these groups, then, may include one or more welding systems and any other equipment as shown in the figure. 
the present embodiment allows for free association of these systems so that useful analysis of individual systems, groups of systems, locations, and so forth may be performed. the systems and support equipment may be in a single physical proximity, but this need not be the case. groups may be created, for example, based on system type, work schedules, production and products, and so forth. in systems where operators provide personal identification information, this information may be tracked in addition to or instead of system information. in the illustrated embodiment, status indicators are provided for conveying the current operational status of the monitored systems and equipment. these indicators, as designated by reference numeral 86 , may indicate, for example, active systems, idle systems, disconnected systems, errors, notifications, and so forth. where system status can be monitored on a real-time or near real-time basis, such indicators may provide useful feedback to management personnel on the current status of the equipment. the particular information illustrated in fig. 6 is obtained, in the present implementation, by selecting (e.g., clicking on) a goals tab 88 . the information presented may be associated in useful time slots or durations, such as successive weeks of use as indicated by reference numeral 90 . any suitable time period may be utilized, such as hourly, daily, weekly, monthly, shift-based designations, and so forth. the page 78 also presents the results of analysis of each of a range of performance criteria based upon goals set for the system or systems selected. in the illustrated example a welding system has been selected as indicated by the check mark in the equipment tree on the left, and performance on the basis of several criteria is presented in bar chart form. in this example, a number of monitored criteria are indicated, such as arc on time, deposition, arc starts, spatter, and grinding time. 
a goal has been set for the particular system as discussed below, and the performance of the system as compared to this goal is indicated by the bars for each monitored parameter. it should be noted that certain of the parameters may be positive in convention while others may be negative. that is, by way of example, for arc on times, representing the portion of the working time in which a welding arc is established and maintained, a percentage of goal exceeding the set standard may be beneficial or desirable. for other parameters, such as spatter, exceeding a goal may actually be detrimental to work quality. as discussed below, the present implementation allows for designation of whether the analysis and presentation should consider these parameters conventionally positive or conventionally negative. the resulting presentations 94 allow for readily visualizing the actual performance as compared to the pre-established goals. fig. 7 illustrates an exemplary goal editing page 96 . certain fields may be provided that allow for setting of standard or commonly used goals, or specific goals for specific purposes. for example, a name of the goal may be designated in a field 98 . the other information pertaining to this name may be stored for use in analyzing the same or different systems. as indicated by reference numeral 100 , the illustrated page allows for setting a standard for the goal, such as arc on time. other standards and parameters may be specified so long as data may be collected that either directly or indirectly indicates the desired standard (i.e., allows for establishment of a value for comparison and presentation). a convention for the goal may be set as indicated at reference numeral 102 . that is, as discussed above, for certain goals it may be desired or beneficial that the established goal define a maximum targeted value, while other goals may establish a minimum targeted value. 
a target 104 may then be established, such as on a numerical percentage basis, an objective (e.g., unit) basis, relative basis, or any other useful basis. further fields, such as a shift field 106 may be provided. still further, in some implementations it may be useful to begin goal or standard setting with an exemplary weld known to have been performed properly and to possess acceptable characteristics. goals may then be set with this as a standard, or with one or more parameters set based on this weld (e.g., +/−20%). fig. 8 illustrates a goal setting page 108 that may take established goals set by pages such as that illustrated in fig. 7 and apply them to specific equipment. in the page 108 of fig. 8 , a welding system designated “bottom welder” has been selected as indicated by the check mark to the left. the system identification 110 appears in the page. a menu of goals or standards is then displayed as indicated by reference numeral 112 . in this example, selections include placing no goal on the equipment, inheriting certain goals set for a particular location (or other logical grouping), selecting a pre-defined goal (such as a goal established by a page such as that shown in fig. 7 ), and establishing a custom goal for the equipment. the present techniques also allow for storing and analyzing certain performance parameters of systems in tracking or trace views. these views can be extremely informative in terms of specific welds, performance over certain periods of time, performance by particular operators, performance on particular jobs or parts, and so forth. an exemplary weld trace page 114 is illustrated in fig. 9 . as indicated on this page, a range of equipment may be selected as indicated on the left of the page, with one particular system being currently selected as indicated by reference numeral 116 . once selected, in this implementation a range of data relating to this particular system is displayed as indicated by reference numeral 118 . 
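The goal convention discussed above (exceeding a target is desirable for arc on time, undesirable for spatter) can be sketched as follows. The `Goal` class and its field names are illustrative assumptions, not the patent's data model.

```python
# Hypothetical goal model with a min/max convention: "min" means the target
# is a floor (higher actuals are better, as for arc on time); "max" means
# the target is a ceiling (lower actuals are better, as for spatter).
class Goal:
    def __init__(self, name, target, convention):
        self.name = name
        self.target = target
        self.convention = convention  # "min" or "max"

    def percent_of_goal(self, actual):
        """Value behind the bar-chart presentation, as a percentage."""
        return 100.0 * actual / self.target

    def met(self, actual):
        """Apply the convention when judging performance against the goal."""
        if self.convention == "min":
            return actual >= self.target
        return actual <= self.target

arc_on = Goal("arc on time", target=50.0, convention="min")
spatter = Goal("spatter", target=2.0, convention="max")
print(arc_on.met(62.5), spatter.met(3.1))  # True False
```

Note that both examples are at more than 100% of their targets, yet only the "min"-convention goal counts that as success, which is exactly why the convention field is needed.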
this information may be drawn from the system or from archived data for the system, such as within an organization, within a cloud resource, and so forth. certain statistical data may be aggregated and displayed as indicated at reference numeral 120 . the weld trace page also includes a graphical presentation of traces of certain monitored parameters that may be of particular interest. the weld trace section 122 , in this example, shows several parameters 124 graphed as a function of time along a horizontal axis 126 . in this particular example, the parameters include wire feed speed, current, and volts. the weld for which the traces are illustrated in the example had a duration of approximately 8 seconds. during this time the monitored parameters changed, and data reflective of these parameters was sampled and stored. the individual traces 128 for each parameter are then generated and presented to the user. further, in this example by a “mouse over” or other input the system may display the particular value for one or more parameters at a specific point in time as indicated by reference numeral 130 . the trace pages may be populated, as may any of the pages discussed in the present disclosure, in advance or upon demand by a user. this being the case, the trace pages for any number of systems and specific welds may be stored for later analysis and presentation. a history page 132 may thus be compiled, such as illustrated in fig. 10 . in the history page illustrated, a list of welds performed on a selected system 116 (or combination of selected systems) is presented as indicated by reference numeral 134 . these welds may be identified by times, system, duration, weld parameters, and so forth. moreover, such lists may be compiled for specific operators, specific products and articles of manufacture, and so forth. in the illustrated embodiment, a particular weld has been selected by the user as indicated at reference numeral 136 . fig. 
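A minimal sketch of the trace storage and the "mouse over" point-in-time lookup described above follows. The sample layout (parallel time and value lists) and class name are assumptions for illustration; `bisect` simply picks the sample at or just before the requested time.

```python
# Hypothetical weld trace store: one sampled value per timestamp, with a
# lookup of the value at an arbitrary point in time (the "mouse over" case).
import bisect

class WeldTrace:
    def __init__(self, times, values):
        self.times = times      # ascending sample times, in seconds
        self.values = values    # one sampled value per time (e.g. volts)

    def value_at(self, t):
        """Return the most recent sampled value at or before time t."""
        idx = bisect.bisect_right(self.times, t) - 1
        if idx < 0:
            raise ValueError("time precedes first sample")
        return self.values[idx]

# roughly mirrors the ~8 second weld described above (values are invented)
volts = WeldTrace([0.0, 2.0, 4.0, 6.0, 8.0], [0.0, 24.1, 24.3, 24.2, 0.0])
print(volts.value_at(5.0))  # 24.3
```

A real trace page would hold one such series per monitored parameter (wire feed speed, current, volts) and could interpolate between samples rather than holding the last value.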
11 illustrates an historical trace page 138 that may be displayed following selection of the particular weld 136 . in this view, an identification of the system, along with the time and date, are provided as indicated by reference numeral 140 . here again, monitored parameters are identified as indicated by reference numeral 124 , and a time axis 126 is provided along which traces 128 are displayed. as will be appreciated by those skilled in the art, the ability to store and compile such analyses may be significantly useful in evaluating system performance, operator performance, performance on particular parts, performance of departments and facilities, and so forth. still further, the present techniques allow for comparisons between equipment on a wide range of bases. indeed, systems may be compared, and presentations resulting from the comparison may be provided for any suitable parameter that may form the basis for such comparisons. an exemplary comparison selection page 142 is illustrated in fig. 12 . as shown in this page, multiple systems 80 are again grouped into groups 82 for facilities or locations 84 . status indicators 86 may be provided for the individual systems or groups. the status page illustrated in fig. 12 may then serve as the basis for selecting systems for comparison as illustrated in fig. 13 . here, the same systems and groups are available for selection and comparison. the comparison page 144 displays these systems and allows users to click or select individual systems, groups, or any sub-group that is created at will. that is, while an entire group of systems may be selected, the user may select individual systems or individual groups as indicated by reference numeral 146 . a comparison section 148 is provided in which a time base for a comparison may be selected, such as an hourly, daily, weekly, monthly, or any other range. 
once selected, then, desired parameters are compared for the individual systems, with the systems being identified as indicated at reference numeral 152 , and the comparisons being made and in this case graphically displayed as indicated by reference numeral 154 . in the illustrated example, system on time has been selected as a basis for the comparison. data for each individual system reflective of the respective on time of the system has been analyzed and presented on a percentage basis by a horizontal bar. other comparisons may be made directly between the systems, such as to indicate that one system has outperformed another on the basis of the selected parameter. more than one parameter could be selected in certain embodiments, and these may be based on raw, processed or calculated values. while only certain features of the invention have been illustrated and described herein, many modifications and changes will occur to those skilled in the art. it is, therefore, to be understood that the appended claims are intended to cover all such modifications and changes as fall within the true spirit of the invention.
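The on-time comparison just described reduces to a per-system percentage over the selected time base. The sketch below is an assumption about the input layout (per-system on-hours over a window), not the patent's implementation; the system names are illustrative.

```python
# Hypothetical on-time comparison: given each system's on-hours over a
# selected window, compute the percentage shown by each horizontal bar.
def on_time_percentages(window_hours, on_hours_by_system):
    """Return {system: percent of the window the system was on}."""
    return {
        system: round(100.0 * on_hours / window_hours, 1)
        for system, on_hours in on_hours_by_system.items()
    }

# a 40-hour work week as the comparison time base
weekly = on_time_percentages(40.0, {"bottom welder": 31.0, "top welder": 26.0})
print(weekly)  # {'bottom welder': 77.5, 'top welder': 65.0}
```

Ranking the resulting values directly yields the "one system has outperformed another" presentation, and the same shape works for any other selected parameter.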
170-100-770-966-390
FR
[ "DE", "EP", "FR", "US" ]
H01B11/10,H01B13/26
1997-03-27T00:00:00
1997
[ "H01" ]
data transmission cable and its manufacturing process
a cable includes at least one electrical conductor surrounded by a shield protecting against high-frequency electromagnetic interference. the shield includes an inner tape disposed lengthwise having a conductive layer and an outer tape disposed lengthwise having a conductive layer covered by an insulative layer. the conductive layer of the outer tape facing inwards so that the respective conductive layers of the inner and outer tapes are in contact. at least one of the two tapes has overlapping longitudinal edges. the insulative material of the insulative layer of the outer tape is adhesively bonded to the inside wall of a jacket. the protection is effective up to at least 500 mhz.
a cable including at least one electrical conductor (12, 14) surrounded by a shield protecting against high-frequency electromagnetic interference, the shield including an inner tape (16) disposed lengthwise having a conductive layer (22) and an outer tape (18) disposed lengthwise having a conductive layer (32) covered by an insulative layer (30), the conductive layer (32) of the outer tape (18) facing inwards so that the respective conductive layers (22, 32) of the inner (16) and outer (18) tapes are in contact, and at least one of the two tapes having overlapping longitudinal edge regions, characterized in that the insulative material of the insulative layer (30) of the outer tape (18) is adhesively bonded to the inside wall of a jacket (42). the cable according to claim 1 characterized in that the area of the longitudinal edge regions (24 1 , 26 1 ) of the inner tape (16) is covered by a continuous area (44) of the outer tape (18). the cable according to claim 2 characterized in that the area of the longitudinal edge regions (24 1 , 26 1 ) of the inner tape is opposite the area of the longitudinal edge regions (34 1 , 36 1 ) of the outer tape. the cable according to any of the preceding claims characterized in that the conductive layers (22, 32) of the tapes (16, 18) are based on aluminium. the cable according to any of the preceding claims characterized in that the inner tape (16) includes an insulative layer (20), for example a polyester layer, covering said conductive layer (22). the cable according to any of the preceding claims characterized in that the insulative layer (30) of the outer tape (18) is a polyester layer. the cable according to any of the preceding claims characterized in that the inner tape and the outer tape have substantially identical constructions and dimensions. the cable according to any of the preceding claims characterized in that it includes a continuity conductive wire (40) disposed between the two tapes (16, 18). 
a method of manufacturing a cable according to any of the preceding claims characterized in that the jacket (42) is extruded at a temperature such that it bonds to the insulative layer of the outer tape (18). a method of manufacturing a cable according to any of the preceding claims characterized in that the inner (16) and outer (18) tapes are preformed in the same guide.
background of the invention 1. field of the invention the present invention concerns a cable for transmitting data comprising a conductive shield for protecting one or more conductors against external electromagnetic interference. 2. description of the prior art data is usually transmitted by means of insulated electrical conductors surrounded by one or more metallic shields, the shield or shields being enclosed within a jacket. the shield isolates the conductors from external electromagnetic interference. the best protection is obtained when the conductive shield is continuous, without openings in it. however, the usual manufacturing techniques impose the use of one or more tapes to form the shield which necessarily leads to openings in the latter which limit the efficacy of the shield at the highest frequencies. the best shields provide effective protection up to frequencies in the order of 30 mhz to 40 mhz. until now the best results have been obtained with a metal tape disposed lengthwise with the longitudinal edges overlapping. a lengthwise tape gives better results than a helically wound tape because the opening extends a shorter distance. increasing data bit rates in cables are leading to an increase in the limiting frequency below which cables are protected from the external environment. u.s. pat. no. 4,510,346 proposes a cable including at least one electrical conductor surrounded by a shield to protect against high-frequency electromagnetic interference. the shield includes an inner tape disposed lengthwise and an outer tape disposed lengthwise, each tape having a conductive layer covered by an insulative layer. the conductive layer of the inner tape faces outwards and the conductive layer of the outer tape faces inwards so that the conductive layers are pressed together. at least one of the two tapes has overlapping longitudinal edges. 
although the shield of a cable of the above kind provides effective protection against electromagnetic interference up to very high frequencies (at least 100 mhz), it nevertheless gives rise to a major problem, namely that of ease of stripping. the outer tape, disposed lengthwise, with its conductive layer on the inside against the conductive layer of the inner tape prevents easy access to the conductive part of the shield and therefore makes it difficult to connect the cable. an aim of the present invention is to solve this problem by proposing a cable that is effective at very high frequencies, typically above 100 mhz, and easier to strip than prior art cables effective at such frequencies. summary of the invention to this end the present invention proposes a cable including at least one electrical conductor surrounded by a shield protecting against high-frequency electromagnetic interference, said shield including an inner tape disposed lengthwise having a conductive layer and an outer tape disposed lengthwise having a conductive layer covered by an insulative layer, said conductive layer of said outer tape facing inwards so that the respective conductive layers of said inner and outer tapes are in contact, and at least one of the two tapes having overlapping longitudinal edge regions, wherein said insulative material of said insulative layer of said outer tape is adhesively bonded to the inside wall of a jacket. the cable of the invention is protected against interference at frequencies up to 1 ghz. the cable of the invention is particularly simple to connect because, on opening the jacket, the outer tape remains stuck to the latter and only the conductive layer of the inner tape remains visible. connecting the cable of the invention to a connector is therefore facilitated. the area of the longitudinal edges of the inner tape is advantageously covered by a continuous area of the outer tape. this assures good electrical continuity of the shield. 
in this case, the areas of the longitudinal edges of the tapes are preferably opposite each other. the inner tape preferably further comprises a conductive layer covered with an insulative layer, for example a polyester layer. in this way the inner and outer tapes can slide correctly on guides during manufacture. this minimizes the risk of damage by rubbing. the outer tape can be adhesively bonded to the jacket. to this end, in one embodiment, the material of the jacket is extruded at a sufficiently high temperature for the plastics material of the outer tape to soften and bond to the inside surface of the jacket. an electrical continuity wire can be disposed between the two shields. this improves the contact between the electrical continuity wire and the metal of one of the two tapes, preferably the metal of the inner tape. improved protection against interference at low frequencies has also been observed. to manufacture the cable in accordance with the invention the jacket can be extruded at a temperature such that it bonds to the insulative layer of the outer tape. the inner and outer tapes can be preformed in the same guide. in this way the two tapes do not move relative to each other which prevents potentially harmful rubbing between the conductive layer of the inner tape and the conductive layer of the outer tape. brief description of the drawing fig. 1 is a diagrammatic view in section of a cable in accordance with the invention. fig. 2 is a diagram showing the effect of the invention. detailed description of the preferred embodiment the cable 10 shown in fig. 1 is designed to carry data at high bit rates, in particular in data processing applications. in this example the cable includes two quads 12 and 14, each quad being formed by a set of four insulated conductors, for example the conductors 12₁, 12₂, 12₃ and 12₄ in the case of the quad 12. 
in accordance with the invention, a shield is provided to protect the conductors against external interference and is in the form of two tapes, one tape 18 surrounding the other tape 16. each tape has two layers, namely a 25 µm thick aluminum layer and a 12 µm thick polyester layer. the polyester layer 20 of the inner tape 16 faces inwards, i.e. towards the quads 12 and 14, and the conductive layer 22 of the inner tape 16 faces outwards. the inner tape 16 is disposed lengthwise, i.e. its longitudinal edges 24 and 26 run along the length of the cable and the longitudinal edge regions 24₁ and 26₁ of the tape, which terminate in said longitudinal edges 24 and 26, overlap and remain on one side of the cable. the outer tape 18, which has exactly the same construction and dimensions as the inner tape 16, is disposed around the inner tape 16. however, its polyester layer 30 faces outwards and its conductive layer 32 therefore faces inwards. like the inner tape 16, the outer tape 18 is disposed lengthwise with longitudinal edge regions 34₁ and 36₁ overlapping opposite the overlapping edge regions 24₁ and 26₁ of the inner tape 16. a continuity wire or conductor 40 is disposed between the inner and outer tapes. finally, the entire assembly is covered by a jacket 42. in accordance with the invention the outwardly facing polyester layer 30 of the outer tape 18 is adhesively bonded to the inside face of the jacket 42. this bonding is effected during extrusion of the jacket. the extrusion is carried out at a temperature sufficiently high for the polyester 30 of the outer tape 18 to soften and therefore bond to the jacket 42. to connect the cable to a connector the jacket 42 is removed at the corresponding end. this exposes the inner tape 16. its conductive layer 22, facing outwards, enables easy connection. to manufacture the cable the two tapes 16 and 18 are preformed by the same guides (not shown). 
during manufacture, before bonding, the polyester faces of the tapes are in contact with the guides. this minimizes the risk of damage by rubbing. contact of the conductive layer 32 of the outer tape with the conductive layer 22 of the inner tape minimizes the risk of gaps appearing that are vulnerable to external electromagnetic interference. the opposite positions of the overlapping edges are also particularly favorable to minimizing interference. for this reason it is preferable for the area of the overlapping longitudinal edge regions 24₁ and 26₁ of the inner tape 16 to be covered by a continuous area 44 of the outer tape. fig. 2 is a diagram in which the frequency f in mhz is plotted on the abscissa axis and the transfer impedance z of the cable is plotted on the ordinate axis. the lower the impedance z the better the performance of the cable. note that the impedance z represented by the curve 50 has a minimum 52 at around 80 mhz and that its value is satisfactory throughout the measurement range, i.e. up to around 500 mhz.
relevant_id: 170-308-096-843-622
earliest_claim_jurisdiction: JP
jurisdiction: ["JP", "US"]
ipcr_codes_str: H02M3/155, H02M3/158
earliest_claim_date: 2000-11-13
earliest_claim_year: 2000
classifications_ipcr_list_first_three_chars_list: ["H02"]
voltage conversion circuit and semiconductor integrated circuit device provided therewith
a voltage conversion circuit has a pulse generator that generates a pulse signal having a fixed pulse width and a variable pulse period. the output voltage of this voltage conversion circuit is determined according to the ratio of the pulse width to the pulse period of the pulse signal generated by the pulse generator. this circuit configuration makes it possible to produce lower output voltages than previously attainable.
1 . a voltage conversion circuit comprising: a pulse generator for generating a pulse signal having a fixed pulse width and a variable pulse period, wherein an output voltage is determined according to a ratio of the pulse width to the pulse period of the pulse signal generated by the pulse generator. 2 . a voltage conversion circuit as claimed in claim 1 , wherein variation of the pulse period of the pulse signal is reduced by restricting a range within which the output voltage is variable. 3 . a voltage conversion circuit as claimed in claim 2 , wherein an upper limit of the range within which the output voltage is variable is equal to or lower than half a voltage amplitude of the pulse signal. 4 . a voltage conversion circuit as claimed in claim 2 wherein the range within which the output voltage is variable is within 20% of a target value of the output voltage. 5 . a voltage conversion circuit as claimed in claim 1 , wherein the output voltage is selected from among discrete values within the range within which the output voltage is variable. 6 . a voltage conversion circuit as claimed in claim 1 , wherein the pulse generator varies the pulse period of the pulse signal by giving a predetermined delay to a reference pulse signal having a fixed pulse width. 7 . 
a voltage conversion circuit comprising: an output pulse generator for generating an output pulse signal having a fixed pulse width and a variable pulse period; a switch circuit composed of a pmos transistor receiving a first supply voltage at a source thereof and an nmos transistor receiving a second supply voltage at a source thereof, the switch circuit outputting a voltage at a node at which drains of the pmos and nmos transistors are connected together; a filter circuit for smoothing the voltage output from the switch circuit to produce an output voltage; and a switch timing controller for producing, from the output pulse signal generated by the output pulse generator, first and second control signals with which to control on/off states of the pmos and nmos transistors, wherein the output voltage is determined according to a ratio of the pulse width to the pulse period of the output pulse signal. 8 . a voltage conversion circuit as claimed in claim 7 , wherein the output pulse generator comprises: a delay circuit receiving a reference pulse signal having a fixed pulse width and producing therefrom a delayed pulse signal having a predetermined delay relative thereto; and a delay time controller for varying the delay produced by the delay circuit, wherein the delayed pulse signal is fed as the output pulse signal to the switch timing controller. 9 . a voltage conversion circuit as claimed in claim 8 , wherein the delay circuit comprises: a delay circuit portion composed of a plurality of unit time delay elements connected in series, the unit time delay elements each producing a delay of a unit time; and a selector for selecting one among output signals output respectively from the unit time delay elements according to a select signal fed from the delay time controller, wherein the output signal selected by the selector is used as the delayed pulse signal. 10 . 
a voltage conversion circuit as claimed in claim 9 , wherein the delay circuit portion comprises: a fixed delay circuit portion composed of one or more flip-flop circuits connected in series; and a multi-output circuit portion connected to an output end of the fixed delay circuit portion and composed of one or more flip-flop circuits connected in series, wherein the flip-flop circuits constituting the multi-output circuit portion each receive a clock that is 180° out of phase with a clock fed to the flip-flop circuit in a preceding stage. 11 . a voltage conversion circuit as claimed in claim 9 , wherein the selector comprises: a first selector portion for selecting one among the output signals respectively output from the unit time delay elements constituting the delay circuit portion according to a first select signal fed from the delay time controller; an arbitrary time delay element for giving a predetermined delay to the output signal output from the first selector portion; and a second selector portion for choosing between the output signal output from the first selector portion and the output signal output from the arbitrary time delay element according to a second select signal fed from the delay time controller, wherein the output signal selected by the second selector portion is used as the delayed pulse signal. 12 . a voltage conversion circuit as claimed in claim 11 , wherein the unit time delay elements constituting the delay circuit portion and the arbitrary time delay element are each a flip-flop circuit, and the arbitrary time delay element receives a clock that is 180° out of phase with clocks fed to the unit time delay elements. 13 .
a voltage conversion circuit as claimed in claim 7 , wherein the switch timing controller, when controlling the on/off states of the pmos and nmos transistors constituting the switch circuit, controls voltage levels of the first and second control signals in such a way as to turn off one of those transistors first and then, a predetermined time thereafter, turn on the other. 14 . a voltage conversion circuit as claimed in claim 9 , wherein the delay time controller comprises: a replica circuit for detecting operation speed of an internal circuit driven with the output voltage of the voltage conversion circuit in synchronism with a clock signal fed from outside; and a select signal generator for generating the select signal according to the operation speed of the internal circuit as detected by the replica circuit, wherein the delay produced by the delay circuit is varied in such a way that the output voltage is kept to a minimum required at a given time. 15 . a voltage conversion circuit as claimed in claim 14 , wherein the replica circuit comprises: a critical path circuit, composed of a front delay stage and a latter delay stage connected in series, for producing a delay equal to a delay across a path within the internal circuit that produces a longest delay to an input signal fed thereto, wherein the front delay stage produces a delay that lasts for a first operation time and the critical path circuit as a whole produces a delay that lasts for a second operation time, and the first and second operation times are each compared with a first predetermined operation time and a second predetermined operation time longer than the first predetermined operation time so that if the second operation time is shorter than the first predetermined operation time, it is judged that the operation speed of the internal circuit is too fast, and the select signal generator is instructed to increase the delay produced by the delay circuit, if the first operation time is shorter 
than the first predetermined operation time and the second operation time is longer than the first predetermined operation time but shorter than the second predetermined operation time, it is judged that the operation speed of the internal circuit is proper, and the select signal generator is instructed to maintain the delay produced by the delay circuit, and if the first operation time is longer than the first predetermined operation time and the second operation time is shorter than the second predetermined operation time, or if the second operation time is longer than the second predetermined operation time, it is judged that the operation speed of the internal circuit is too slow, and the select signal generator is instructed to decrease the delay produced by the delay circuit. 16 . a voltage conversion circuit as claimed in claim 7 , further comprising: a step-up level shifter for individually stepping up the first and second control signals output from the switch timing controller and then feeding the stepped-up control signals respectively to gates of the pmos and nmos transistors constituting the switch circuit, wherein the output voltage of the filter circuit is fed as a supply voltage to the output pulse generator and the switch timing controller. 17 . a semiconductor integrated circuit device comprising a voltage conversion circuit as claimed in claim 1.
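the three-way speed judgment spelled out in claim 15 can be condensed into a small decision function. a minimal sketch in python (function and variable names are illustrative, not from the patent):

```python
def judge_speed(t_front, t_total, t_ref1, t_ref2):
    """Classify internal-circuit speed per the comparisons in claim 15.

    t_front: delay across the front delay stage (first operation time)
    t_total: delay across the whole critical path (second operation time)
    t_ref1 < t_ref2 are the two predetermined reference times.
    Returns the instruction for the delay produced by the delay circuit.
    """
    if t_total < t_ref1:
        # whole critical path finishes before the first reference time:
        # the internal circuit is too fast, so lengthen the pulse period
        return "increase"
    if t_front < t_ref1 and t_total < t_ref2:
        # front stage fast enough and total within the window: speed is proper
        return "maintain"
    # front stage too slow, or total exceeds the second reference: too slow
    return "decrease"
```

this mirrors the claim's three branches: too fast → increase the delay (lower voltage), proper → maintain, too slow → decrease the delay (raise voltage).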
background of the invention 1. field of the invention the present invention relates to a voltage conversion circuit for supplying a driving voltage to an integrated circuit, and relates also to a semiconductor integrated circuit device provided with such a voltage conversion circuit. 2. description of the prior art in general, an integrated circuit that performs arithmetic or other operation in synchronism with an operation clock is designed with ample margins in its specifications to ensure that it operates normally even when there are variations in its characteristics that are inevitable in its manufacturing process or fluctuations in the supplied voltage or in the ambient temperature. specifically, an integrated circuit is so designed that, even when the delay it produces increases as a result of a variation or fluctuation such as mentioned above or any other factor, an operation of the integrated circuit as a whole is complete within one clock of the operation clock. moreover, a sufficiently high supply voltage is supplied to the integrated circuit so that it operates normally even when all the factors mentioned above are in the worst condition. designing an integrated circuit with ample margins in its specifications and applying a sufficiently high supply voltage to it as described above, however, make it difficult to enhance its speed and to reduce its power consumption. for this reason, efforts have been made to develop a voltage conversion circuit that controls a supply voltage according to the operation status of an integrated circuit so that the integrated circuit is fed with the minimum driving voltage it requires to operate at a given time. fig. 21 is a diagram schematically showing an example of the circuit configuration of a conventional voltage conversion circuit. the voltage conversion circuit shown in fig. 21 is disclosed in japanese patent application laid-open no. 
h10-242831, and is composed of a duty factor control circuit 901 , a buffer circuit 902 , a filter circuit 903 , a critical path circuit 904 , a delay circuit 905 , a true/false evaluation circuit 906 , and an adder 907 . the duty factor control circuit 901 is a circuit that controls the varying of an output voltage in the buffer circuit 902 , and is composed of a counter and a comparator. the counter counts up from 0 to 2ⁿ−1 (for example, when n=6, from 0 to 63) in increments of 1 in synchronism with every period of a clock signal (not shown) fed to it, and feeds its count, in the form of an n-bit signal na, to the comparator. the count that follows 2ⁿ−1 is 0. the comparator is fed with, in addition to the signal na, another n-bit signal nb from the adder 907 . the comparator is a circuit that controls the on/off state of a pmos transistor m 1 and an nmos transistor m 2 that together constitute the buffer circuit 902 . the comparator feeds control signals x 1 and x 2 to the gates of the transistors m 1 and m 2 respectively. when the signal na equals 0, the comparator turns the voltage levels of the control signals x 1 and x 2 to a low level (hereinafter l level); when the signals na and nb are equal, the comparator turns the voltage levels of the control signals x 1 and x 2 to a high level (hereinafter h level). in the buffer circuit 902 , a first supply voltage is applied to the source of the pmos transistor m 1 , and a second supply voltage (here the ground voltage) is applied to the source of the nmos transistor m 2 . the two transistors have their drains connected together, with the node between them serving as the output end of the buffer circuit 902 . accordingly, when the control signals x 1 and x 2 are at l level, the pmos transistor m 1 is on and the nmos transistor m 2 is off. thus, the output voltage of the buffer circuit 902 is nearly equal to the first supply voltage.
by contrast, when the control signals x 1 and x 2 are at h level, the pmos transistor m 1 is off and the nmos transistor m 2 is on. thus, the output voltage of the buffer circuit 902 is nearly equal to the second supply voltage (i.e. the ground voltage). that is, the output voltage of the buffer circuit 902 is a pulsating voltage signal y that rises when the signal na turns to 0 and that falls when the signal na becomes equal to the signal nb. this voltage signal y is smoothed by the filter circuit 903 composed of an inductor l 1 and a capacitor c 1 , and is thereby formed into an output voltage z. the output voltage z is supplied to an internal circuit (not shown) formed on the same circuit board so as to be used as the driving voltage for the internal circuit. the output voltage z is used also as the supply voltage for the critical path circuit 904 . in the buffer circuit 902 , let the period in which the pmos transistor m 1 is on and the nmos transistor m 2 is off (i.e. the period in which the control signals x 1 and x 2 are at l level) be called the on period t 1 , and let the period in which the pmos transistor m 1 is off and the nmos transistor m 2 is on (i.e. the period in which the control signals x 1 and x 2 are at h level) be called the off period t 2 . then, the output voltage z of the filter circuit 903 is generally given by z = [t 1 / (t 1 + t 2 )] × vdd (1). in the formula above, the on period t 1 (the numerator in the right side) represents the pulse width of the voltage signal y, and the sum t 1 + t 2 of the on period t 1 and the off period t 2 (the denominator in the right side) represents the pulse period of the voltage signal y. that is, the output voltage z can be controlled by controlling the ratio of the pulse width to the pulse period of the voltage signal y (hereinafter this ratio will be referred to as the duty factor).
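equation (1) can be checked numerically. a minimal sketch in python (the function name and example values are illustrative, not from the patent):

```python
def output_voltage(t_on, t_off, vdd):
    """Average output of the smoothed buffer signal: Z = T1/(T1+T2) * Vdd."""
    return t_on / (t_on + t_off) * vdd

# a 25% duty factor (on period 1, off period 3) steps a 3 v supply down
# to a quarter of it:
z = output_voltage(t_on=1.0, t_off=3.0, vdd=3.0)
print(z)  # 0.75
```

lengthening the off period (and hence the pulse period) while the on period stays fixed lowers z, which is exactly the handle the variable pulse period method uses later in the document.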
in the voltage conversion circuit configured as described above, the value of the signal nb fed from the adder 907 to the comparator of the duty factor control circuit 901 is varied to vary the on period t 1 (the pulse width) and thereby control the duty factor of the voltage signal y output from the buffer circuit 902 . in this way, it is possible to control the driving voltage (the output voltage z) fed to the internal circuit. (in the following descriptions, this method of controlling the duty factor is called the variable pulse width method.) moreover, as a means for setting the signal nb at the optimum value at a given time, the operation speed of the critical path circuit 904 is detected. the critical path circuit 904 is a duplicate circuit of the path that is considered to produce the longest delay within the internal circuit to which the output voltage z is fed. as described earlier, the output voltage z of the filter circuit 903 is applied to the critical path circuit 904 as the supply voltage for it. that is, the driving voltage for the internal circuit, i.e. the destination of the voltage supply, is monitored by the critical path circuit 904 . here, it is assumed that the range of voltages on which the critical path circuit 904 can operate is equal to that on which the internal circuit can operate. when the critical path circuit 904 can operate on the output voltage z of the filter circuit 903 , the critical path circuit 904 feeds predetermined data to the true/false evaluation circuit 906 . here, the true/false evaluation circuit 906 receives the data not only directly from the critical path circuit 904 , but also with a delay through the delay circuit 905 . if the true/false evaluation circuit 906 does not receive the data directly from the critical path circuit 904 , the true/false evaluation circuit 906 judges that the internal circuit, i.e. 
the destination of the voltage supply, is not operating normally, and therefore judges that the driving voltage for the internal circuit (i.e. the output voltage z of the filter circuit 903 ) is too low. thus, the true/false evaluation circuit 906 feeds the adder 907 with a signal s 1 that instructs it to increment the value of the signal nb by 1 to increase the driving voltage. if the true/false evaluation circuit 906 receives the delayed data through the delay circuit 905 , the true/false evaluation circuit 906 judges that the internal circuit, i.e. the destination of the voltage supply, is operating normally despite the delay given to it, and therefore judges that the driving voltage for the internal circuit is too high. thus, the true/false evaluation circuit 906 feeds the adder 907 with a signal s 2 that instructs it to decrement the value of the signal nb by 1 to decrease the driving voltage. if the true/false evaluation circuit 906 receives the data directly from the critical path circuit 904 but does not receive the delayed data through the delay circuit 905 , the true/false evaluation circuit 906 judges that the internal circuit, i.e. the destination of the voltage supply, is receiving the optimum driving voltage at the time. thus, the true/false evaluation circuit 906 feeds the adder 907 with neither the signal s 1 nor the signal s 2 . when the true/false evaluation circuit 906 outputs the signal s 1 , the adder 907 feeds the duty factor control circuit 901 with a value obtained by adding 1 to the current value of the signal nb. by contrast, when the true/false evaluation circuit 906 outputs the signal s 2 , the adder 907 feeds the duty factor control circuit 901 with a value obtained by subtracting 1 from the current value of the signal nb.
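the feedback on the signal nb described above — increment when the data fails to arrive, decrement when even the delayed data arrives, hold otherwise — can be sketched as a small update function (an illustrative python model; the names and the clamping to the n-bit counter range are assumptions, not from the patent):

```python
def next_nb(nb, got_direct, got_delayed, n_bits=6):
    """One step of the duty-factor feedback loop.

    got_direct: data arrived straight from the critical path circuit.
    got_delayed: data also arrived through the extra delay circuit.
    Returns the new value of the signal NB, clamped to the counter range.
    """
    if not got_direct:
        nb += 1          # voltage too low -> signal S1: widen the on period
    elif got_delayed:
        nb -= 1          # voltage too high -> signal S2: narrow the on period
    # direct data only: optimum voltage, NB unchanged
    return max(0, min(nb, 2**n_bits - 1))
```

run once per evaluation cycle, this converges nb toward the smallest on period on which the critical path (and hence the internal circuit) still operates.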
in this way, in the voltage conversion circuit configured as described above, the critical path circuit 904 , the delay circuit 905 , and the true/false evaluation circuit 906 detect the operation speed of the internal circuit, i.e. the destination of the voltage supply, and control the duty factor of the voltage signal y in such a way as to decrease the driving voltage for the internal circuit (i.e. the output voltage z) if the detected operation speed is too fast and increase the driving voltage for the internal circuit (i.e. the output voltage z) if the detected operation speed is too slow. it is true that the voltage conversion circuit configured as described above contributes to the reduction of the power consumption of the integrated circuit, because it permits the internal circuit constituting the integrated circuit to be fed with the minimum driving voltage on which the internal circuit can operate at a given time according to the operation status of the internal circuit. it is also true that this voltage conversion circuit is useful as a voltage step-down circuit for common integrated circuits, because it permits the output voltage z to be varied in a wide range. incidentally, a very effective way to further reduce the power consumption of the internal circuit is to reduce the supply voltage for the devices themselves that constitute the internal circuit. for example, the power consumption of an internal circuit employing devices that operate on a supply voltage of 0.5 v is 1/36 of the power consumption of an internal circuit employing devices that operate on a supply voltage of 3 v. in this way, by reducing the supply voltage for and the load current through the internal circuit, it is possible to further reduce power consumption. as the power consumption of the internal circuit decreases, the proportion of the power consumption of the voltage conversion circuit to that of the integrated circuit as a whole increases relatively.
therefore, to further reduce the power consumption of the integrated circuit as a whole, it is essential to reduce the power consumption of the voltage conversion circuit itself. here, one possible means of reducing the power consumption of the voltage conversion circuit itself configured as described above is restricting the range in which the output voltage z can be varied, because this helps simplify the control required and reduce the scale of the duty factor control circuit 901 , the adder 907 , and other circuit blocks. for example, in a case where the voltage conversion circuit receives an external source voltage of about 3 v and supplies power to an internal circuit that operates on 0.5 v, the voltage that the voltage conversion circuit outputs to the internal circuit need not be so high as to be close to the voltage that the voltage conversion circuit receives. moreover, the devices constituting the internal circuit have their respective optimum operating voltages, and therefore it is still possible to cope with variations inevitable in their manufacturing process and changes in the operating environment even if the range in which the output voltage z can be varied is restricted, as long as it is restricted in the vicinity of the operating voltages of those devices. in this way, by restricting the range in which the output voltage z can be varied, it is possible to reduce the circuit scale of the voltage conversion circuit and thereby reduce its power consumption. however, in the voltage conversion circuit adopting the variable pulse width method, in which the value of the signal nb fed from the adder 907 to the comparator is varied to vary the on period t 1 (the pulse width) and thereby control the duty factor of the output signal y output from the buffer circuit 902 , even when the range in which the output voltage z can be varied is restricted, it is still necessary to provide a counter circuit that operates at high speed.
for example, in the conventional voltage conversion circuit configured as described above, the counter circuit operates at 2ⁿ times (i.e., when n=6, 64 times) the frequency of the voltage signal y. the counter circuit, operating at such high speed, thus increases the power consumption of the voltage conversion circuit itself, but, to permit the output voltage z to be varied with high accuracy, it is inevitable to keep the operation speed of the counter circuit sufficiently high. for this reason, in the conventional voltage conversion circuit adopting the variable pulse width method, even when the range in which the output voltage z can be varied is restricted for an internal circuit that can operate on a low voltage, the operation speed of the counter circuit needs to be kept sufficiently high. this makes it impossible to achieve satisfactory reduction of the power consumption of the voltage conversion circuit itself. summary of the invention an object of the present invention is to provide a voltage conversion circuit suitable for lower output voltage applications, and to provide a semiconductor integrated circuit device provided with such a voltage conversion circuit. to achieve the above object, according to the present invention, a voltage conversion circuit is provided with a pulse generator for generating a pulse signal having a fixed pulse width and a variable pulse period. here, the output voltage is determined according to the ratio of the pulse width to the pulse period of the pulse signal generated by the pulse generator. brief description of the drawings this and other objects and features of the present invention will become clear from the following description, taken in conjunction with the preferred embodiments with reference to the accompanying drawings in which: fig. 1 is a diagram schematically showing the circuit configuration of the voltage conversion circuit of a first embodiment of the invention; fig.
2 is a diagram schematically showing an example of the circuit configuration of the reference pulse generator 101 and the delay circuits 102 ; fig. 3 is a diagram schematically showing an example of the circuit configuration of the selector 109 ; figs. 4a to 4 d are signal waveform diagrams showing an example of the delay operation performed by the delay circuits 102 ; fig. 5 is a diagram schematically showing an example of the circuit configuration of the reference pulse generator 201 and the delay circuits 202 in a second embodiment of the invention; fig. 6 is a diagram schematically showing the circuit configuration of the voltage conversion circuit of a third embodiment of the invention; fig. 7 is a diagram schematically showing an example of the circuit configuration of the reference pulse generator 301 and the delay circuits 302 ; fig. 8 is a diagram schematically showing an example of the circuit configuration of the selector 309 ; fig. 9 is a diagram schematically showing an example of the circuit configuration of the switch timing controller 104 ; figs. 10a and 10b are timing charts showing the signal waveforms observed in the switch timing controller 104 ; fig. 11 is a diagram schematically showing an example of the circuit configuration of the delay time controller 103 ; fig. 12 is a diagram schematically showing an example of the circuit configuration of the replica circuit 501 ; fig. 13 is a timing chart showing the signal waveforms observed in the pulse generator for status detecting 511 ; fig. 14 is a timing chart showing the signal waveforms observed in the replica circuit 501 ; fig. 15 is a table showing the relationship between the operation status signals la and lb observed in the replica circuit 501 and the operation status of the internal circuit; fig. 16 is a diagram schematically showing an example of the circuit configuration of the select signal generator 502 ; fig. 
17 is a truth table of the logic circuit provided in the voltage control signal generator 601 ; fig. 18 is a diagram schematically showing an example of the circuit configuration of the up/down counter 602 ; fig. 19 is a truth table of the logic circuit provided in the encoder 610 ; fig. 20 is a diagram schematically showing the circuit configuration of the voltage conversion circuit of a fourth embodiment of the invention; and fig. 21 is a diagram schematically showing an example of the circuit configuration of a conventional voltage conversion circuit. description of the preferred embodiments hereinafter, as examples of voltage conversion circuits according to the present invention, voltage conversion circuits (voltage step-down circuits) that supply a driving voltage to the internal circuit constituting a semiconductor integrated circuit device will be described. fig. 1 is a diagram schematically showing the circuit configuration of the voltage conversion circuit of a first embodiment of the invention. the voltage conversion circuit shown in this figure is composed of an output pulse generator 100 , a switch timing controller 104 , a switch circuit 105 , and a filter circuit 106 . the output pulse generator 100 is a circuit that generates an output pulse signal dout having a fixed pulse width and a variable pulse period and that then feeds this output pulse signal dout to the switch timing controller 104 . the circuit configuration and the operation of the output pulse generator 100 will be described in detail later. the switch timing controller 104 is a circuit that produces, from the output pulse signal dout fed to it, a first and a second control signal and that feeds these first and second control signals respectively to the gates of a pmos transistor m 1 and an nmos transistor m 2 that together constitute the switch circuit 105 .
that is, the switch timing controller 104 controls the on/off states of the pmos transistor m 1 and the nmos transistor m 2 . the circuit configuration and the operation of the switch timing controller 104 will also be described in detail later. in the switch circuit 105 , a first supply voltage (an external source voltage vdd) is applied to the source of the pmos transistor m 1 , and a second supply voltage (a ground voltage gnd) is applied to the source of the nmos transistor m 2 . the two transistors have their drains connected together, and the node between them serves as the output end of the switch circuit 105 . thus, as the on/off states of the pmos transistor m 1 and the nmos transistor m 2 are controlled, the switch circuit 105 outputs a pulsating voltage signal at its output end. the filter circuit 106 is a low-pass filter composed of an inductor l 1 and a capacitor c 1 . one end of the inductor l 1 is connected to the output end of the switch circuit 105 , and the other end is connected through the capacitor c 1 to ground. the node between the inductor l 1 and the capacitor c 1 serves as the output end of the filter circuit 106 , and is connected to an internal circuit (not shown) and the like formed on the same circuit board. the pulsating voltage signal output from the switch circuit 105 is smoothed by the filter circuit 106 and is thereby formed into an output voltage vint. this output voltage vint is supplied to the internal circuit (not shown) so as to be used as the driving voltage for the internal circuit. fig. 1 shows an example in which the filter circuit 106 is built as an lc circuit; however, it may be built as an rc circuit, or a circuit of any other configuration. here, the magnitude of the output voltage vint can be controlled by varying the duty factor (the ratio of the pulse width to the pulse period) of the pulsating voltage signal output from the switch circuit 105 , i.e. 
by varying the duty factors of the first and second control signals. in the voltage conversion circuit of this embodiment, the output pulse generator 100 generates an output pulse signal dout having a fixed pulse width and a variable pulse period, and, by varying the pulse period of this output pulse signal dout appropriately, the duty factors of the first and second control signals are controlled. this makes it possible to control the driving voltage (the output voltage vint) supplied to the internal circuit. (in the following descriptions, a method of controlling a duty factor like this will be called the variable pulse period method.) next, the circuit configuration and the operation of the output pulse generator 100 mentioned above will be described in detail. as fig. 1 shows, the output pulse generator 100 is composed of a reference pulse generator 101 , a delay circuit 102 , and a delay time controller 103 . the reference pulse generator 101 is a circuit that generates a reference pulse signal having a fixed pulse width and that feeds it to the delay circuit 102 . the delay circuit 102 is a circuit that produces a delayed pulse signal delayed by a predetermined time relative to the reference pulse signal, and is composed of delay circuits 1 (or fixed delay circuits) 107 , delay circuits 2 (or multi-delay circuits) 108 , and a selector 109 . the delay time controller 103 is a circuit that feeds a select signal to the selector 109 and thereby sets the delay produced by the delay circuit 102 so that the desired output voltage vint is obtained. the circuit configuration and the operation of the delay time controller 103 will be described in detail later. fig. 2 is a diagram schematically showing an example of the circuit configuration of the reference pulse generator 101 and the delay circuit 102 . first, the circuit configuration of the delay circuit 102 will be described.
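the contrast between the conventional variable pulse width method and the variable pulse period method introduced here can be illustrated numerically with the duty-factor relation. a minimal python sketch (constants and function names are illustrative, not from the patent):

```python
VDD = 3.0          # illustrative supply voltage, in volts
PULSE_WIDTH = 1.0  # fixed pulse width, in unit-time ticks

def vint_variable_period(period):
    """Variable pulse period method: width fixed, period varied."""
    return PULSE_WIDTH / period * VDD

def vint_variable_width(width, period=8.0):
    """Conventional variable pulse width method, for comparison."""
    return width / period * VDD

# lengthening the period lowers the output without touching the width:
print(vint_variable_period(4.0))  # 0.75
print(vint_variable_period(6.0))  # drops further toward 0.5 v
```

both methods set vint through the same duty-factor ratio; the difference is only which of the two quantities (width or period) the control loop varies.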
in the delay circuit 102 , the delay circuits 1 107 as a whole act as a circuit that gives a delay of n times a predetermined unit time to the reference pulse signal fed from the reference pulse generator 101 . the delay circuits 2 108 as a whole act as a circuit that gives a delay of m times the predetermined unit time to the final output signal d 0 of the delay circuits 1 107 . fig. 2 shows an example in which the delay circuits 1 107 and the delay circuits 2 108 employ, as unit time delay elements, d flip-flop circuits that are triggered by positive (rising) edges of an internal clock signal iclk, and the following description assumes this circuit configuration. however, flip-flop circuits or delay elements of any type other than d flip-flop circuits may be used as the unit time delay elements. the delay circuits 1 107 are built as a shift register (of which the number of delay stages is n = 5) composed of five d flip-flop circuits connected in series. thus, these flip-flop circuits output, at their respective output terminals, output signals dm 4 to dm 1 and d 0 that are respectively given delays of 1 to 5 times a predetermined unit time relative to the reference pulse signal. here, the number n of delay stages needs to be 1 or more. the delay circuits 2 108 also are built as a shift register (of which the number of delay stages is m = 5) composed of five d flip-flop circuits connected in series. thus, these flip-flop circuits output, at their respective output terminals, output signals d 1 to d 5 that are respectively given delays of 1 to 5 times the predetermined unit time relative to the output signal d 0 . here, the number m of delay stages needs to be 1 or more. the flip-flop circuits constituting the delay circuits 1 107 and the delay circuits 2 108 all receive, at their clock terminals, the same internal clock signal iclk.
as this internal clock signal iclk may be used a clock signal produced by any means, for example an external clock signal fed from outside the integrated circuit, a clock signal obtained through frequency division of such an external clock signal, or a clock signal generated by an oscillation circuit provided within the integrated circuit. in this way, the delay circuit 102 can be formed easily by building the delay circuits 1 107 and the delay circuits 2 108 with flip-flop circuits. the selector 109 is a circuit that selects as the delayed pulse signal one among the final output signal d 0 of the delay circuits 1 107 and the outputs d 1 to d 5 of the delay circuits 2 108 according to the select signal fed from the delay time controller 103 . fig. 3 is a diagram schematically showing the circuit configuration of the selector 109 . as this figure shows, the selector 109 is composed of six and circuits each having two input terminals, and an or circuit having multiple input terminals. the and circuits respectively receive, at one input terminal, the final output signal d 0 of the delay circuits 1 107 and the outputs d 1 to d 5 of the delay circuits 2 108 . the and circuits respectively receive, at the other input terminal, select signals s 0 to s 5 fed from the delay time controller 103 . for example, to select the output signal d 0 as the delayed pulse signal, the select signal s 0 is turned to h level, with the other select signals s 1 to s 5 kept at l level. the select signals s 0 to s 5 are so controlled as not to change in periods in which a pulse signal is flowing through the delay circuits 2 108 . the or circuit receives, at its input terminals, the output signals of the individual and circuits, and outputs the logical sum (or) of those signals as the delayed pulse signal selected by the selector 109 . the delayed pulse signal is fed as the output pulse signal dout to the switch timing controller 104 and also to the reference pulse generator 101 . 
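as a rough behavioral model of the flip-flop chain and the and/or selector described above (all identifiers below are our own; the real circuit is clocked hardware, not software):

```python
# behavioral model, ours: each d flip-flop stage delays its input by one
# clock period, so tap k of the chain is the reference pulse shifted k ticks;
# the selector ands each tap with a one-hot select bit and ors the results.

def shift_register_taps(ref_pulse, n_stages):
    """tap waveforms of an n_stages-deep d flip-flop shift register."""
    taps, state = [], list(ref_pulse)
    for _ in range(n_stages):
        state = [0] + state[:-1]          # one-clock delay per stage
        taps.append(list(state))
    return taps

def one_hot_select(taps, select):
    """and each tap with its select bit, then or them together."""
    return [max(s & t[i] for t, s in zip(taps, select))
            for i in range(len(taps[0]))]

ref = [1, 0, 0, 0, 0, 0, 0, 0]            # one pulse, 1 unit wide
taps = shift_register_taps(ref, 5)
print(one_hot_select(taps, [0, 0, 1, 0, 0]))   # the pulse, delayed 3 ticks
```

keeping the select bits one-hot and stable while a pulse is in flight, as the text requires, guarantees the or output carries exactly one delayed copy of the pulse.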
next, the circuit configuration of the reference pulse generator 101 will be described, with reference back to fig. 2 . the reference pulse generator 101 is composed of a nor circuit having multiple input terminals and an or circuit having two input terminals. the nor circuit receives, at its input terminals, the output signals dm 4 to dm 1 and d 0 to d 5 output individually from the delay circuit 102 . the nor circuit has the function of producing the initial pulse of the reference pulse signal when the voltage conversion circuit has just been started up. the or circuit receives, at one input terminal, the output signal of the nor circuit and, at the other input terminal, the delayed pulse signal selected by the selector 109 . the output signal of the or circuit is fed as the reference pulse signal to the delay circuit 102 . next, the operation of the output pulse generator 100 configured as described above will be described. when the voltage conversion circuit is started up, the flip-flop circuits constituting the delay circuit 102 are first reset by a reset signal (not shown) so that their output signals dm 4 to dm 1 and d 0 to d 5 are all turned to l level. this causes the output signal of the nor circuit, which is the inverted logical sum (nor) of the output signals dm 4 to dm 1 and d 0 to d 5 , to turn to h level. as a result, the output signal of the or circuit, which is the logical sum (or) of the output signal of the nor circuit and the delayed pulse signal output from the selector 109 , also turns to h level. in this way, the initial pulse of the reference pulse signal fed to the delay circuit 102 is produced. on the other hand, when the voltage conversion circuit is operating steadily, one of the output signals dm 4 to dm 1 and d 0 to d 5 fed to the multiple input terminals of the nor circuit is at h level at a time, and therefore the nor circuit always outputs l level.
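the start-up behavior just described reduces to simple combinational logic; a sketch (our own modeling, with boolean 0/1 standing in for l/h levels):

```python
# our own modeling of the reference pulse generator: the nor of all taps is
# high only when no pulse is circulating (i.e. right after reset), and the
# or re-injects either that seed pulse or the fed-back delayed pulse.

def reference_pulse(taps, delayed_pulse):
    """1 = h level, 0 = l level."""
    seed = int(not any(taps))     # nor over every tap output
    return seed | delayed_pulse   # or with the selector's delayed pulse

print(reference_pulse([0, 0, 0, 0, 0, 0], 0))  # just after reset: seed pulse
print(reference_pulse([0, 1, 0, 0, 0, 0], 0))  # pulse in flight: nor stays low
print(reference_pulse([0, 0, 0, 0, 0, 1], 1))  # steady state: pulse recirculates
```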
thus, the or circuit transfers the delayed pulse signal returning from the selector 109 intact as the reference pulse signal to the delay circuit 102 . next, the delay operation of the delay circuit 102 will be described. figs. 4a to 4d are signal waveform diagrams showing an example of the delay operation performed by the delay circuit 102 , specifically an example of the output pulse signal dout output from the delay circuit 102 . here, it is assumed that the pulse width of the output pulse signal dout is equal to 1 unit time, and that the unit delay time produced by the individual flip-flop circuits constituting the delay circuit 102 is adapted to the pulse width so as to be equal to 1 unit time. fig. 4a shows the signal waveform obtained when the output signal d 0 of the delay circuits 1 107 is selected as the delayed pulse signal, i.e. as the output pulse signal dout. in this case, the initial pulse p 0 of the reference pulse signal fed to the delay circuit 102 is given a delay of 5 unit times by the five flip-flop circuits constituting the delay circuits 1 107 . thus, in the output pulse signal dout appears a pulse p 1 given a delay of 5 unit times relative to the initial pulse p 0 . this pulse p 1 is fed back to the reference pulse generator 101 , and is fed again as the reference pulse signal to the delay circuit 102 . thereafter, in a similar manner, the pulse fed to the delay circuit 102 is given a delay of 5 unit times so that pulses p 2 and p 3 are produced sequentially. thus, the pulse period of the output pulse signal dout equals 5 unit times. here, since the pulse width of each pulse in the output pulse signal dout equals 1 unit time, the duty factor of the output pulse signal dout equals 1/5. fig. 4b shows the signal waveform obtained when the output signal d 1 of the delay circuits 2 108 is selected as the output pulse signal dout.
in this case, the initial pulse p 0 of the reference pulse signal fed to the delay circuit 102 is given first a delay of 5 unit times by the five flip-flop circuits constituting the delay circuits 1 107 and then a delay of 1 unit time by the first-stage flip-flop circuit of the delay circuits 2 108 . thus, in the output pulse signal dout appears a pulse p 1 given a delay of (5 + 1) unit times relative to the initial pulse p 0 . this pulse p 1 is fed back to the reference pulse generator 101 , and is fed again as the reference pulse signal to the delay circuit 102 . thereafter, in a similar manner, the pulse fed to the delay circuit 102 is given a delay of (5 + 1) unit times so that pulses p 2 and p 3 are produced sequentially. thus, the pulse period of the output pulse signal dout equals 6 unit times. here, since the pulse width of each pulse in the output pulse signal dout equals 1 unit time, the duty factor of the output pulse signal dout equals 1/6. fig. 4c shows the signal waveform obtained when the output signal d 2 of the delay circuits 2 108 is selected as the output pulse signal dout. in this case, the pulse period of the output pulse signal dout equals 7 unit times, and therefore the duty factor of the output pulse signal dout equals 1/7. in a similar manner, when the output signal d 3 , d 4 , or d 5 of the delay circuits 2 108 is selected as the output pulse signal dout, the duty factor of the output pulse signal dout equals 1/8, 1/9, or 1/10 respectively. as a more generalized example, fig. 4d shows the signal waveform obtained when the delay circuits 1 107 have n delay stages and the output signal of the m-th delay stage of the delay circuits 2 108 is selected as the output pulse signal dout. in this case, the pulse period of the output pulse signal dout equals (n + m) unit times, and therefore the duty factor of the output pulse signal dout equals 1/(n + m).
here, if it is assumed that the first and second control signals 1 and 2 produced by the switch timing controller 104 are pulse signals obtained basically as the logical not (inversion) of the output pulse signal dout, the magnitude of the output voltage vint fed out of the voltage conversion circuit is given by

vint = (1 / (n + m)) × vdd   (2)

according to formula (2) above, if it is assumed that the external source voltage vdd supplied to the voltage conversion circuit of this embodiment equals 3 v, the output voltage vint obtained when the output signal d 0 of the delay circuits 1 107 is selected as the output pulse signal dout is calculated as 0.6 v. in a similar manner, the output voltage vint obtained when one of the output signals d 1 to d 5 of the delay circuits 2 108 is selected is calculated as 0.5 v, 0.43 v, 0.38 v, 0.33 v, or 0.3 v respectively in this order. thus, the range in which the output voltage vint of the voltage conversion circuit of this embodiment can be varied is from 0.3 v to 0.6 v, with the unit variable width being 60 mv on average. the upper limit of the variable range of the output voltage vint can be set by controlling the delay produced by the delay circuits 1 107 (i.e. the minimum delay produced by the delay circuit 102 ). the lower limit of the variable range of the output voltage vint can be set by controlling the delay produced by the last delay stage of the delay circuits 2 108 (i.e. the maximum delay produced by the delay circuit 102 ). the unit variable width of the output voltage vint can be set by controlling the unit delay time of the individual flip-flop circuits constituting the delay circuits 2 108 .
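the voltage values quoted above follow directly from formula (2); a quick numerical check (the function name is ours), assuming vdd = 3 v and n = 5 fixed delay stages:

```python
# numerical check of formula (2): with tap m of the delay circuits 2
# selected, the duty factor is 1/(n + m) and so vint = vdd / (n + m).

def vint_from_tap(vdd, n, m):
    return vdd / (n + m)

for m in range(6):                     # d0 (m = 0) through d5 (m = 5)
    print(m, round(vint_from_tap(3.0, 5, m), 2))
# the values run from 0.6 v down to 0.3 v, matching the range in the text
```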
as described above, in the voltage conversion circuit of this embodiment adopting the variable pulse period method, it is possible to control the output voltage vint without the use of a control circuit, such as a counter circuit, that operates at high speed as in a conventional voltage conversion circuit adopting the variable pulse width method. thus, it is possible to reduce the circuit scale and the operating frequency of the voltage conversion circuit and thereby greatly reduce the power consumption of the voltage conversion circuit itself. this contributes to the reduction of the power consumption of the integrated circuit as a whole. moreover, in the voltage conversion circuit of this embodiment, the output voltage vint is varied in discrete steps within the range in which it can be varied. this circuit configuration helps reduce the number of different control states in the control circuits (i.e., in this embodiment, the delay time controller 103 , the selector 109 , etc.) of the voltage conversion circuit, i.e. the number of selectable output voltage values. this contributes to the reduction of the circuit scale of such control circuits and thus to the reduction of power consumption. the circuit configuration of the voltage conversion circuit of this embodiment described above assumes, as an example, a case in which the output voltage vint for an internal circuit that operates on 0.5 v is produced from the external source voltage vdd of 3 v. as described earlier, the devices constituting the internal circuit have their respective optimum operating voltages (i.e., in this case, 0.5 v), and therefore, even to cope with variations inevitable in their manufacturing process and changes in the operating environment, there is no need to output a voltage so high as to be close to the external source voltage vdd (i.e. close to 3 v) to the internal circuit that operates on 0.5 v.
thus, from the viewpoint of reducing the scale of the control circuits constituting the voltage conversion circuit, it is preferable to make the upper limit of the variable range of the output voltage vint as low as possible. for example, when the upper limit of the variable range of the output voltage vint is set at 1/2 of the external source voltage vdd or lower, the number of different control states in the control circuits (i.e., in this embodiment, the delay time controller 103 , the selector 109 , etc.) of the voltage conversion circuit is reduced to half the number required conventionally or less. in this way, by making the upper limit of the variable range of the output voltage vint as low as possible, it is possible to reduce the circuit scale of the control circuits and thereby reduce power consumption. in the internal circuit that operates on 0.5 v, if the supply voltage fed to it becomes 0.4 v or lower, its operation speed greatly deteriorates; on the other hand, if the supply voltage becomes 0.6 v or higher, its operation speed is saturated. this shows that it is advisable to restrict the variable range of the output voltage vint supplied to the internal circuit to about 20% of the optimum operating voltage (i.e. the center value of the variable range of the output voltage vint) even with consideration given to variations inevitable in the manufacturing process and changes in the operating environment. in the example described above, the variable range of the output voltage vint is 0.2 v, which is less than 7% of the external source voltage vdd. by making the variable range of the output voltage vint as narrow as possible in this way, it is possible to reduce the circuit scale of the control circuits and thereby reduce power consumption.
moreover, making the upper limit of the variable range of the output voltage vint as low as possible, or making the variable range as narrow as possible, not only contributes to the reduction of the power consumption of the voltage conversion circuit itself, but also has the effect of reducing variations (ripples) in the output voltage vint, which is a disadvantage of the variable pulse period method. in general, variations occurring in the output voltage vint are called ripples. here, however, for convenience' sake, the peak-to-peak value of voltage variations occurring in the output voltage vint is called the ripple voltage v. where an lc filter circuit is used as a smoothing means, the ripple voltage v is given by

v = ((1 - d) × t^2 / (8 × l × c)) × vint   (3)

in formula (3) above, d represents the duty factor of the pulsating voltage signal fed to the lc filter circuit, and t represents the pulse period. moreover, l represents the inductance and c represents the capacitance of the lc filter circuit. formula (3) shows that the magnitude of the ripple voltage v is proportional to the square of the pulse period t of the pulsating voltage signal fed to the lc filter circuit. in a voltage conversion circuit adopting the variable pulse width method, the pulse period t is constant, and therefore the ripple voltage v occurring in the output voltage vint depends only on the duty factor d. on the other hand, in a voltage conversion circuit adopting the variable pulse period method, the pulse period t is variable, and therefore the ripple voltage v occurring in the output voltage vint depends on both the duty factor d and the pulse period t. as described above, the ripple voltage v is proportional to the square of the pulse period t, and therefore, as the pulse period t becomes longer, the ripple voltage v tends to increase abruptly.
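the period-squared dependence of the ripple can be illustrated numerically; the component values below (a 10 µH / 10 µF filter and a 100 ns unit time) are our own assumptions, chosen purely for scale:

```python
# numerical illustration of formula (3) under assumed component values:
# the ripple grows with the square of the pulse period t, so the longer
# period needed for a lower vint inflates the ripple.

def ripple_voltage(duty, period, l, c, vint):
    """peak-to-peak ripple of an lc-filtered pulse train, per formula (3)."""
    return (1 - duty) * period**2 / (8 * l * c) * vint

l, c, t_unit = 10e-6, 10e-6, 100e-9
short = ripple_voltage(1 / 5, 5 * t_unit, l, c, 0.6)    # vint = 0.6 v
long_ = ripple_voltage(1 / 10, 10 * t_unit, l, c, 0.3)  # vint = 0.3 v
print(long_ > short)   # lowering vint lengthens t and inflates the ripple
```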
the problem here is that, in the variable pulse period method, it is necessary to make the pulse period t longer to make the output voltage vint lower, and therefore attempting to lower the output voltage vint results in increasing the ripple voltage v. moreover, in a voltage conversion circuit adopting the variable pulse period method, unnecessarily widening the variable range of the output voltage vint results in a great difference between the pulse period obtained with the output voltage vint at the upper limit of its variable range and the pulse period obtained with the output voltage vint at the lower limit of its variable range. this increases variations in the ripple voltage v that occur when the output voltage vint is varied, and thus makes it impossible to control the output voltage vint accurately. by contrast, in the voltage conversion circuit of this embodiment, the upper limit of the variable range of the output voltage vint is made as low as possible to make the variable range as narrow as possible, and in addition the variable pulse period method is adopted. this circuit configuration makes it possible to minimize the difference between the pulse period obtained with the output voltage vint at the upper limit of its variable range and the pulse period obtained with the output voltage vint at the lower limit of its variable range, and thereby reduce variations in the ripple voltage v to practically negligible levels. moreover, this circuit configuration makes it possible to shift the entire variable range of the pulse period t toward the shorter-period side, and thereby minimize the ripple voltage v that appears when an attempt is made to lower the output voltage vint. next, the voltage conversion circuit of a second embodiment of the invention will be described. the voltage conversion circuit of this embodiment has basically the same circuit configuration as the voltage conversion circuit of the first embodiment (see fig. 1 ). 
in this embodiment, however, an improvement is made in the delay circuit 102 provided in the output pulse generator 100 . therefore, in the following description of this embodiment, emphasis is placed on the delay circuit 202 , which characterizes it. fig. 5 is a diagram schematically showing an example of the circuit configuration of the reference pulse generator 201 and the delay circuit 202 in the second embodiment. the reference pulse generator 201 is a circuit that generates a reference pulse signal having a fixed pulse width and that feeds it to the delay circuit 202 . the delay circuit 202 is a circuit that produces a delayed pulse signal delayed by a predetermined time relative to the reference pulse signal, and is composed of delay circuits 1 (or fixed delay circuits) 207 , delay circuits 2 (or multi-delay circuits) 208 , and a selector 209 . the delayed pulse signal is fed as the output pulse signal dout to the switch timing controller (not shown) provided in the following stage and also to the reference pulse generator 201 . the delay time controller 203 is a circuit that feeds a select signal to the selector 209 and thereby sets the delay produced by the delay circuit 202 so that the desired output voltage vint is obtained. in this embodiment, the reference pulse generator 201 and the selector 209 are configured in the same manner and operate in the same manner as the reference pulse generator 101 (see fig. 2 ) and the selector 109 (see fig. 3 ) in the first embodiment, and therefore their explanations will not be repeated. the circuit configuration and the operation of the delay time controller 203 will be described in detail later. in the delay circuit 202 , the delay circuits 1 207 are built as a shift register (of which the number of delay stages is n = 5) composed of five d flip-flop circuits, which are triggered by positive (rising) edges of an internal clock signal iclk, connected in series.
thus, these flip-flop circuits output, at their respective output terminals, output signals dm 4 to dm 1 and d 0 that are respectively given delays of 1 to 5 times a predetermined unit time relative to the reference pulse signal. here, the number n of delay stages needs to be 1 or more. on the other hand, the delay circuits 2 208 are built as a shift register (of which the number of delay stages is m = 5) composed of three dn flip-flop circuits, which are triggered by negative (trailing) edges of the internal clock signal iclk, and two d flip-flop circuits, which are triggered by positive edges of the internal clock signal iclk, connected alternately in series. that is, each flip-flop circuit receives a clock that is 180° out of phase with the clock fed to the flip-flop circuit in the preceding stage. here, the number m of delay stages needs to be 1 or more. thus, these flip-flop circuits output, at their respective output terminals, output signals d 1 to d 5 that are each given a delay corresponding to half a period of the internal clock signal iclk (i.e. 0.5 times the predetermined unit time) relative to the output signal output from the preceding stage. in other words, the output signals d 1 to d 5 are signals that are respectively given delays of 0.5 to 2.5 times the predetermined unit time relative to the output signal d 0 of the delay circuits 1 207 . the flip-flop circuits constituting the delay circuit 202 all receive, at their clock terminals, the same internal clock signal iclk. as this internal clock signal iclk may be used a clock signal produced by any means, for example an external clock signal fed from outside the integrated circuit, a clock signal obtained through frequency division of such an external clock signal, or a clock signal generated by an oscillation circuit provided within the integrated circuit.
instead of the dn flip-flop circuits that are triggered by negative edges of the internal clock signal iclk, d flip-flop circuits that are driven by a clock having an inverted phase may be used to achieve the same effect as described above. with the delay circuit 202 configured as described above, when one of the output signals d 0 to d 5 is selected as the output pulse signal dout, the duty factor equals 1/5, 1/5.5, 1/6, 1/6.5, 1/7, or 1/7.5 respectively. as a more generalized example, in a case where the delay circuits 1 207 have n delay stages and the output signal of the m-th delay stage of the delay circuits 2 208 is selected as the output pulse signal dout, the duty factor equals 1/(n + 0.5m). according to formula (2) noted earlier, if it is assumed that the external source voltage vdd supplied to the voltage conversion circuit of this embodiment equals 3 v, the output voltage vint obtained when the output signal d 0 of the delay circuits 1 207 is selected as the output pulse signal dout is calculated as 0.6 v. in a similar manner, the output voltage vint obtained when one of the output signals d 1 to d 5 of the delay circuits 2 208 is selected is calculated as 0.55 v, 0.5 v, 0.46 v, 0.43 v, or 0.4 v respectively in this order. thus, the range in which the output voltage vint of the voltage conversion circuit of this embodiment can be varied is from 0.4 v to 0.6 v, with the unit variable width being 40 mv on average. as described above, in the voltage conversion circuit of this embodiment, the differences among the delays given respectively to the output signals d 0 to d 5 output from the delay circuits 2 208 are made smaller, and thereby the unit variable width of the output voltage vint is made smaller than in the first embodiment. this makes it possible to enhance the accuracy with which the output voltage vint is varied.
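the half-unit steps translate into voltages as follows; this is a check of the figures quoted above (the function name is ours), assuming vdd = 3 v and n = 5:

```python
# check of the second embodiment's voltage steps: alternating clock edges
# make the period grow in 0.5-unit increments, so duty = 1/(n + 0.5*m)
# and vint = vdd / (n + 0.5*m).

def vint_half_step(vdd, n, m):
    return vdd / (n + 0.5 * m)

print([round(vint_half_step(3.0, 5, m), 2) for m in range(6)])
# runs from 0.6 v down to 0.4 v in roughly 40 mv steps, as in the text
```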
needless to say, adopting the voltage conversion circuit of this embodiment helps reduce circuit scale and power consumption as compared with conventional circuit configurations, and these advantages are comparable to those offered by the voltage conversion circuit of the first embodiment. next, the voltage conversion circuit of a third embodiment of the invention will be described. fig. 6 is a diagram schematically showing the circuit configuration of the voltage conversion circuit of the third embodiment. as this figure shows, the voltage conversion circuit of this embodiment has basically the same circuit configuration as the voltage conversion circuit of the first embodiment (see fig. 1 ). in this embodiment, however, an improvement is made in the output pulse generator 100 . therefore, in the following description of this embodiment, emphasis is placed on the output pulse generator 300 , which characterizes it. the output pulse generator 300 is a circuit that generates an output pulse signal dout having a fixed pulse width and a variable pulse period and that feeds it to the switch timing controller 304 . in this embodiment, the output pulse generator 300 is composed of a reference pulse generator 301 , a delay circuit 302 , and a delay time controller 303 . the reference pulse generator 301 is a circuit that generates a reference pulse signal having a fixed pulse width and that feeds it to the delay circuit 302 . the delay circuit 302 is a circuit that produces a delayed pulse signal delayed by a predetermined time relative to the reference pulse signal, and is composed of delay circuits 1 (or fixed delay circuits) 307 , delay circuits 2 (or multi-delay circuits) 308 , and a selector 309 . in this embodiment, the selector 309 includes a selector 1 310 , a delay element 311 , and a selector 2 312 . 
the delay time controller 303 is a circuit that feeds select signals 1 and 2 to the selector 309 and thereby sets the delay produced by the delay circuit 302 so that the desired output voltage vint is obtained. the circuit configuration and the operation of the delay time controller 303 will be described in detail later. fig. 7 is a diagram schematically showing an example of the circuit configuration of the reference pulse generator 301 and the delay circuit 302 . as this figure shows, the reference pulse generator 301 is composed of a nor circuit having multiple input terminals, and an or circuit having two input terminals. this reference pulse generator 301 is configured in the same manner and operates in the same manner as in the first embodiment described earlier. therefore, in the following description, the explanations of the reference pulse generator 301 will not be repeated, and instead emphasis is placed on the delay circuit 302 . in the delay circuit 302 , the delay circuits 1 307 as a whole act as a circuit that gives a delay of n times a predetermined unit time to the reference pulse signal fed from the reference pulse generator 301 . the delay circuits 2 308 as a whole act as a circuit that gives a delay of m times the predetermined unit time to the final output signal d 0 of the delay circuits 1 307 . fig. 7 shows an example in which the delay circuits 1 307 and the delay circuits 2 308 employ, as unit time delay elements, d flip-flop circuits that are triggered by positive edges of an internal clock signal iclk, and the following description assumes this circuit configuration. however, flip-flop circuits or delay elements of any configuration other than d flip-flop circuits may be used as the unit time delay elements. the delay circuits 1 307 are built as a shift register (of which the number of delay stages is n = 5) composed of five d flip-flop circuits connected in series.
thus, these flip-flop circuits output, at their respective output terminals, output signals dm 4 to dm 1 and d 0 that are respectively given delays of 1 to 5 times a predetermined unit time relative to the reference pulse signal. here, the number n of delay stages needs to be 1 or more. the delay circuits 2 308 are built as a shift register (of which the number of delay stages is m = 2) composed of two d flip-flop circuits connected in series. thus, these flip-flop circuits output, at their respective output terminals, output signals d 2 and d 4 that are respectively given delays of 1 and 2 times the predetermined unit time relative to the output signal d 0 . that is, the output signals d 2 and d 4 in this embodiment are the same pulse signals as the output signals d 2 and d 4 in the second embodiment described earlier. here, the number m of delay stages needs to be 1 or more. in this way, the delay circuit 302 can be formed easily by building the delay circuits 1 307 and the delay circuits 2 308 with flip-flop circuits. next, the selector 309 of this embodiment will be described. as described earlier, the selector 309 includes the selector 1 310 , the delay element 311 , and the selector 2 312 . the selector 1 310 is a circuit that selects as the delayed pulse signal one among the final output signal d 0 of the delay circuits 1 307 and the output signals d 2 and d 4 of the delay circuits 2 308 according to the select signals 1 s 0 , s 2 , and s 4 fed from the delay time controller 303 . the delayed pulse signal selected by the selector 1 310 is fed to the delay element 311 , to the selector 2 312 , and to the reference pulse generator 301 . the delay element 311 is a circuit that gives a further delay of a predetermined time to the delayed pulse signal selected by the selector 1 310 . the delay produced by the delay element 311 may be set by a control signal fed externally, or may be set internally beforehand.
in the voltage conversion circuit of this embodiment, as the delay element 311 is used a dn flip-flop circuit that is triggered by negative edges of the internal clock signal iclk. thus, in response to the output signal d 0 , d 2 , or d 4 selected by the selector 1 310 , the delay element 311 outputs an output signal d 1 , d 3 , or d 5 respectively that is given a further delay corresponding to half a period of the internal clock signal iclk (i.e. 0.5 times the predetermined unit time). that is, the output signals d 1 , d 3 , and d 5 output from the delay element 311 are the same pulse signals as the output signals d 1 , d 3 , and d 5 in the second embodiment. the flip-flop circuits constituting the delay circuit 302 and the delay element 311 all receive, at their clock terminals, the same internal clock signal iclk. as this internal clock signal iclk may be used a clock signal produced by any means, for example an external clock signal fed from outside the integrated circuit, a clock signal obtained through frequency division of such an external clock signal, or a clock signal generated by an oscillation circuit provided within the integrated circuit. as the delay element 311 may be used a flip-flop circuit or a delay element of any other type than a dn flip-flop circuit. the selector 2 312 is a circuit that selects as the output pulse signal dout one of the output signal of the selector 1 310 and the output signal of the delay element 311 according to the select signal 2 sodd fed from the delay time controller 303 and that feeds the output pulse signal dout to the switch timing controller 304 provided in the next stage. fig. 8 is a diagram schematically showing an example of the circuit configuration of the selector 309 . as this figure shows, the selector 1 310 is composed of three and circuits each having two input terminals, and an or circuit having multiple input terminals. 
on the other hand, the selector 2 312 is composed of two and circuits each having two input terminals, and an or circuit having two input terminals. first, the circuit configuration of the selector 1 310 will be described. the and circuits respectively receive, at one input terminal, the final output signal d 0 of the delay circuits 1 307 and the output signals d 2 and d 4 of the delay circuits 2 308 . moreover, the and circuits respectively receive, at the other input terminal, the select signals 1 s 0 , s 2 , and s 4 fed from the delay time controller 303 . the select signals 1 s 0 , s 2 , and s 4 are so controlled as not to change in periods in which a pulse signal is flowing through the delay circuits 2 308 . on the other hand, the or circuit receives, at its input terminals, the output signals of the individual and circuits, and outputs the logical sum (or) of those signals as the delayed pulse signal selected by the selector 1 310 . next, the circuit configuration of the selector 2 312 will be described. the and circuits respectively receive, at one input terminal, the output signal of the selector 1 310 and the output signal of the delay element 311 . moreover, the and circuits both receive, at the other input terminal, the select signal 2 sodd fed from the delay time controller 303 . here, to the other input terminal of the and circuit to which the output signal of the selector 1 310 is fed, the select signal 2 sodd is fed after being inverted. moreover, the select signal 2 sodd is so controlled as not to change in periods in which a pulse signal is flowing through the delay circuits 2 308 . on the other hand, the or circuit receives, at its input terminals, the output signals of the individual and circuits, and outputs the logical sum (or) of those signals as the output pulse signal dout selected by the selector 2 312 . 
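the and-or structure just described can be sketched as combinational logic in python; the signal names follow the text, while the modelling itself is only an illustration:

```python
def selector1(d0, d2, d4, s0, s2, s4):
    """AND each data input with its one-hot select signal, then OR the
    products together to produce the selected delayed pulse signal."""
    return (d0 & s0) | (d2 & s2) | (d4 & s4)

def selector2(direct, delayed, sodd):
    """Select the direct selector-1 output when sodd is 0 (note the
    inverted select on that AND gate) and the delay-element output
    when sodd is 1, yielding the output pulse signal dout."""
    return (direct & (1 - sodd)) | (delayed & sodd)
```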
for example, to select the output signal d 0 as the output pulse signal dout, the selector 1 310 is made to select the output signal d 0 , and the selector 2 312 is made to select the output signal fed directly from the selector 1 310 . to achieve this, the select signal 1 s 0 is turned to h level, the other select signals 1 s 2 and s 4 are kept at l level, and the select signal 2 sodd is turned to l level. to select the output signal d 1 , which is delayed by half a period of the internal clock signal iclk (i.e. 0.5 times the predetermined unit time) relative to the output signal d 0 , as the output pulse signal dout, the selector 1 310 is made to select the output signal d 0 , and the selector 2 312 is made to select the output signal fed from the delay element 311 . to achieve this, the select signal 1 s 0 is turned to h level, the other select signals 1 s 2 and s 4 are kept at l level, and the select signal 2 sodd is turned to h level. as described above, the voltage conversion circuit of this embodiment permits its output voltage vint to be varied with accuracy comparable to or higher than the accuracy achieved by the voltage conversion circuit of the second embodiment. moreover, the voltage conversion circuit of this embodiment permits the number of flip-flop circuits constituting the delay circuits 2 to be reduced below the number required in the second embodiment described earlier. comparing the second and third embodiments described above, the delay circuits 2 308 (see fig. 7 ) have two fewer flip-flop circuits than the delay circuits 2 208 (see fig. 5 ) in the second embodiment. in this way, this embodiment makes it possible to reduce the circuit scale and the power consumption of the delay circuit 302 without degrading the accuracy with which the output voltage vint is varied.
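generalising the two examples above (only the d 0 and d 1 cases are spelled out in the text; the remaining rows follow the same pattern), the select-signal settings for each desired output signal d k can be sketched as follows, with the function name being an illustrative assumption:

```python
def encode_selects(k):
    """Return (s0, s2, s4, sodd) that select output signal d_k, k = 0..5.
    Selector 1 picks the even-numbered tap at or below k; sodd adds the
    extra half-unit delay through the delay element for odd k."""
    even = (k // 2) * 2          # 0, 2, or 4: the selector-1 choice
    s = [0, 0, 0]
    s[even // 2] = 1
    return s[0], s[1], s[2], k % 2
```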
moreover, reducing the number of flip-flop circuits constituting the delay circuits 2 308 results in reducing the number of input terminals of the nor circuit provided in the reference pulse generator 301 . this also contributes to the reduction of circuit scale. needless to say, adopting the voltage conversion circuit of this embodiment helps reduce circuit scale and power consumption as compared with conventional circuit configurations, and these advantages are comparable to those offered by the voltage conversion circuits of the first and second embodiments. next, the circuit configuration and the operation of the switch timing controllers 104 , 204 , and 304 provided in the voltage conversion circuits of the embodiments described thus far will be described. the switch timing controllers 104 , 204 , and 304 have basically the same circuit configuration, and therefore, in the following description, the switch timing controller 104 of the first embodiment is taken up as an example. fig. 9 is a diagram schematically showing an example of the circuit configuration of the switch timing controller 104 . as this figure shows, the switch timing controller 104 includes a first and a second d flip-flop circuit, an inverter circuit, and a nor circuit having two input terminals. the output end of the delay circuit 102 is connected to the data input terminal of the first d flip-flop circuit and to one input terminal of the nor circuit. the output terminal of the first d flip-flop circuit is connected to the data input terminal of the second d flip-flop circuit and to the input terminal of the inverter circuit. the output terminal of the second d flip-flop circuit is connected to the other input terminal of the nor circuit. 
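a behavioural sketch of this wiring, sampling each signal once per period of the double-speed clock iclk 2 (a simplifying assumption made for illustration, not the patent's implementation), is:

```python
def switch_timing(dout):
    """Model the switch timing controller: two D flip-flops in series,
    an inverter on the first stage, and a NOR of dout with the second
    stage. dout is a list of 0/1 samples, one per iclk2 period.
    The pmos transistor is on while ctrl1 == 0; the nmos while ctrl2 == 1."""
    ctrl1, ctrl2 = [], []
    q1 = q2 = 0
    for d in dout:
        ctrl1.append(1 - q1)                  # inverter on first FF output
        ctrl2.append(0 if (d or q2) else 1)   # NOR of dout and second FF
        q1, q2 = d, q1                        # clock the two flip-flops
    return ctrl1, ctrl2
```

running a single pulse through this model shows the nmos turning off one sample before the pmos turns on, and the pmos turning off one sample before the nmos turns back on, so the two transistors are never on simultaneously.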
the output terminal of the inverter circuit is connected to the gate of the pmos transistor m 1 provided in the switch circuit 105 , and the output terminal of the nor circuit is connected to the gate of the nmos transistor m 2 provided in the switch circuit 105 . the first and second d flip-flop circuits both receive, at their clock terminals, an internal clock signal iclk 2 . the internal clock signal iclk 2 is a double-speed version of the internal clock signal iclk mentioned earlier with which the delay circuit 102 is driven, and has twice the frequency of the internal clock signal iclk. in the switch timing controller 104 configured as described above, the output pulse signal dout, which is synchronous with the internal clock signal iclk, is given a delay corresponding to one period of the internal clock signal iclk 2 by the first d flip-flop circuit, of which the output signal is then logically inverted by the inverter circuit to produce a first control signal 1 . the output signal of the first d flip-flop circuit is given a further delay corresponding to one period of the internal clock signal iclk 2 by the second d flip-flop circuit, of which the output signal is then fed, together with the output pulse signal dout fed directly from the delay circuit 102 , to the nor circuit. the nor circuit, by outputting the inverted logical sum (nor) of these signals, produces a second control signal 2 . figs. 10a and 10b are timing charts of relevant signal waveforms observed in the switch timing controller 104 . fig. 10a shows a case in which the output pulse signal dout is synchronous with a positive edge of the internal clock signal iclk. fig. 10b shows a case in which the output pulse signal dout is synchronous with a negative edge of the internal clock signal iclk. as these figures show, in the switch timing controller 104 configured as described above, the timing with which the first control signal 1 is turned to l level (i.e. 
the timing with which the pmos transistor m 1 is turned on) is intentionally delayed relative to the timing with which the second control signal 2 is turned to l level (i.e. the timing with which the nmos transistor m 2 is turned off). moreover, the timing with which the second control signal 2 is turned to h level (i.e. the timing with which the nmos transistor m 2 is turned on) is intentionally delayed relative to the timing with which the first control signal 1 is turned to h level (i.e. the timing with which the pmos transistor m 1 is turned off). more specifically, the pmos transistor m 1 is kept on only during a period s 2 , and is kept off otherwise. on the other hand, the nmos transistor m 2 is kept on only during periods s 0 and s 0 ′, and is kept off otherwise. that is, in periods s 1 and s 1 ′, both the pmos transistor m 1 and the nmos transistor m 2 are off, and thus there is no period in which the pmos transistor m 1 and the nmos transistor m 2 are simultaneously on. in this way, by controlling the on/off states of the pmos transistor m 1 and the nmos transistor m 2 in such a way that first one mos transistor is turned off and then, a predetermined time thereafter, the other mos transistor is turned on, it is possible to eliminate the possibility that the pmos transistor m 1 and the nmos transistor m 2 are on simultaneously even if a slight unintentional delay occurs in one of the first and second control signals 1 and 2 while they are being produced. this makes it possible to prevent a through current from flowing through the switch circuit 105 and thereby save unnecessary power consumption. in a case where the first and second d flip-flop circuits, which give a delay to the output pulse signal dout, are driven by the internal clock signal iclk 2 , which is a double-speed version of the internal clock signal iclk, i.e.
in a case where the output pulse signal dout is synchronous with either a positive or negative edge of the internal clock signal iclk, the delay produced by the first and second d flip-flop circuits is made equal to half a period of the internal clock signal iclk, i.e. equal to the period of the internal clock signal iclk 2 . in the embodiment described above, as an example of elements for giving a delay to the output pulse signal dout, d flip-flop circuits are used. however, as those elements may be used flip-flop circuits or delay elements of any other type other than d flip-flop circuits. next, the circuit configuration and the operation of the delay time controllers 103 , 203 , and 303 provided in the voltage conversion circuits of the embodiments described thus far will be described. the delay time controllers 103 , 203 , and 303 have basically the same circuit configuration, and therefore, in the following description, the delay time controller 103 of the first embodiment is taken up as an example. fig. 11 is a diagram schematically showing an example of the circuit configuration of the delay time controller 103 . as described earlier, the delay time controller 103 is a circuit that feeds a select signal to the selector 109 provided in the delay circuit 102 and thereby sets the delay produced by the delay circuit 102 so that the desired output voltage vint is obtained. as fig. 11 shows, the delay time controller 103 includes a replica circuit 501 and a select signal generator 502 . first, the replica circuit 501 will be described. the replica circuit 501 is a circuit that generates a status signal that indicates the operation status of the internal circuit that operates on the output voltage vint. the replica circuit 501 is composed of a pulse generator 511 for status detecting (hereinafter the status-detecting pulse generator), a critical path circuit 512 , and a first and a second latch 513 a and 513 b (collectively referred to as the latch 513 ). 
the status-detecting pulse generator 511 is a circuit that generates a pulse signal from the operation clock signal eclk of the internal circuit that operates on the output voltage vint. the pulse signal thus generated is fed to the critical path circuit 512 provided in the following stage. the critical path circuit 512 is a circuit that produces a delay equivalent to the delay across the critical path through the internal circuit, i.e. the path that is considered to produce the longest delay to a signal fed thereto. to cope with variations inevitable in the manufacturing process and changes in the operating environment, the critical path circuit 512 is produced by the same manufacturing process as the internal circuit. moreover, the output voltage vint of the filter circuit 106 is applied as the supply voltage to the critical path circuit 512 . that is, the driving voltage of the internal circuit, which is the destination of the voltage supply, is monitored by the critical path circuit 512 . the latch 513 is a circuit that temporarily holds the pulse signal output from the critical path circuit 512 , and its output signal is fed, as the status signal of the replica circuit 501 , to the select signal generator 502 provided in the following stage. next, a practical example of the circuit configuration of the replica circuit 501 and its operation will be described. fig. 12 is a diagram schematically showing an example of the circuit configuration of the replica circuit 501 . first, the circuit configuration and the operation of the status-detecting pulse generator 511 will be described. as fig. 12 shows, the status-detecting pulse generator 511 includes a counter, a first and a second flip-flop circuits, and a first and a second and circuit each having two input terminals. the counter is a circuit that performs frequency division on the operation clock signal eclk of the internal circuit and thereby produces an output signal n 1 . 
the output terminal of the counter is connected to the data input terminals of the first and second flip-flop circuits, and also to one input terminal of each of the first and second and circuits. the first flip-flop circuit is a dn flip-flop circuit that is triggered by negative edges of the operation clock signal eclk, and its output signal n 2 is a signal delayed by half a period of the operation clock signal eclk relative to the output signal n 1 of the counter. the output signal n 2 is logically inverted and is then fed to the other input terminal of the first and circuit. the second flip-flop circuit is a d flip-flop circuit that is triggered by positive edges of the operation clock signal eclk, and its output signal n 3 is a signal delayed by one period of the operation clock signal eclk relative to the output signal n 1 of the counter. the output signal n 3 is logically inverted and is then fed to the other input terminal of the second and circuit. the first and circuit is a circuit that generates a pulse signal ev 1 by taking the logical product (and) of the logical not (inversion) of the output signal n 2 and the output signal n 1 . the second and circuit is a circuit that generates a pulse signal ev 2 by taking the logical product (and) of the logical not (inversion) of the output signal n 3 and the output signal n 1 . the counter and the first and second flip-flop circuits mentioned above all operate when an enable signal enable fed from outside the replica circuit 501 is on (at h level). now, the operation of the status-detecting pulse generator 511 configured as described above will be described. fig. 13 is a timing chart of relevant signal waveforms observed in the status-detecting pulse generator 511 . the following description deals with a case in which the enable signal enable is kept on (at h level) for a period corresponding to 16 periods of the operation clock signal eclk of the internal circuit. as the output signal n 1 shown in fig.
13 indicates, here, the division factor of the counter is set at 1/8. this specific division factor makes it possible to limit the number of pulse signals ev 1 and ev 2 generated while the enable signal enable is on to one each and thereby suppress unnecessary operation of the replica circuit 501 . moreover, as described earlier, the output signal n 2 of the first flip-flop circuit is a signal delayed by half a period of the operation clock signal eclk relative to the output signal n 1 , and the output signal n 3 of the second flip-flop circuit is a signal delayed by one period of the operation clock signal eclk relative to the output signal n 1 . thus, the pulse width of the pulse signal ev 1 generated by the first and circuit corresponds to half a period of the operation clock signal eclk, and the pulse width of the pulse signal ev 2 generated by the second and circuit corresponds to one period of the operation clock signal eclk. next, back in fig. 12 , the circuit configuration of the critical path circuit 512 will be described. as described earlier, the critical path circuit 512 is a circuit driven by the output voltage vint output from the filter circuit 106 , and therefore the h level of the signals input to and output from it equals the output voltage vint. thus, to adapt the voltage levels used in the critical path circuit 512 to those used in the status-detecting pulse generator 511 and the first and second latches 513 a and 513 b and vice versa, the critical path circuit 512 is provided with a step-down level shifter 514 a in its input stage and step-up level shifters 515 a and 515 b in its output stage. the replica circuit 501 shown in fig.
12 operates by monitoring whether the critical path circuit 512 provided within itself can output a pulse signal within a predetermined period (within one period of the operation clock signal eclk by which the internal circuit is driven) or not and recognizing, on the basis of the result of that monitoring, the operation status of the internal circuit as one of the following states: an overspeed state (hereinafter operation state fast), an operative state (hereinafter operation state ok), an unsafe state (hereinafter operation state warn), and an inoperative state (hereinafter operation state ng). to discriminate the four operation states mentioned above, the critical path circuit 512 is divided into a front critical path circuit 516 and a latter critical path circuit 517 . here, if it is assumed that the critical path circuit 512 as a whole produces a delay of 1, the front critical path circuit 516 and the latter critical path circuit 517 produce delays of 0.5 and 0.5 respectively. that is, the critical path circuit 512 is divided so that the front and latter critical path circuits each account for half of the total delay. a suitable example of the circuit configuration of the critical path circuit 512 is an inverter chain composed of a plurality of inverter circuits connected in series. however, instead of inverter circuits, nand circuits or nor circuits may be used. the pulse signal ev 1 output from the status-detecting pulse generator 511 is fed through the step-down level shifter 514 a to the front critical path circuit 516 . the output signal of the front critical path circuit 516 is fed to the latter critical path circuit 517 , and is fed also to the step-up level shifter 515 a so as to be formed into an output signal ra, which is fed to the first latch 513 a. the output signal of the latter critical path circuit 517 is fed to the step-up level shifter 515 b so as to be formed into an output signal rb, which is fed to the second latch 513 b.
the first latch 513 a is a dn flip-flop circuit that is triggered by a negative edge of the pulse signal ev 1 output from the status-detecting pulse generator 511 , and receives, at its data input terminal, the output signal ra from the step-up level shifter 515 a. the second latch 513 b is a dn flip-flop circuit that is triggered by a negative edge of the pulse signal ev 2 , and receives, at its data input terminal, the output signal rb from the step-up level shifter 515 b. thus, the signal la obtained by latching the output signal ra in the first latch 513 a on a negative edge of the pulse signal ev 1 and the signal lb obtained by latching the output signal rb in the second latch 513 b on a negative edge of the pulse signal ev 2 are used as status signals la and lb that are eventually fed from the replica circuit 501 to the select signal generator 502 provided in the following stage. the replica circuit 501 simply needs to detect the operation status immediately before the output pulse signal dout is selected in the delay circuit 102 , and therefore the first and second latches 513 a and 513 b both need to be operated only while the enable signal enable fed from outside the replica circuit 501 is on. now, the operation of the replica circuit 501 configured as described above will be described. fig. 14 is a timing chart of relevant signal waveforms observed in the replica circuit 501 . in the following description, the pulse width of the pulse signal ev 1 (half a period of the operation clock signal eclk) is referred to as the first predetermined operation time, the pulse width of the pulse signal ev 2 (one period of the operation clock signal eclk) is referred to as the second predetermined operation time, the delay produced by the front critical path circuit 516 is referred to as the first operation time, and the delay produced by the critical path circuit 512 as a whole is referred to as the second operation time. in fig.
14 , pattern a shows a case in which the output signal ra is latched at h level in the first latch 513 a and the output signal rb is latched at l level in the second latch 513 b, i.e. a case in which the second operation time is shorter than the first predetermined operation time. in this case, the critical path circuit 512 as a whole is operating with a delay shorter than half a period of the operation clock signal eclk. thus, the internal circuit driven by the output voltage vint is considered to be operating at sufficiently high speed. accordingly, when the status signals la and lb of the replica circuit 501 are at h and l levels respectively, the operation status is recognized as operation state fast. in fig. 14 , pattern b shows a case in which the output signal ra is latched at h level in the first latch 513 a and the output signal rb is latched at h level in the second latch 513 b, i.e. a case in which the first operation time is shorter than the first predetermined operation time and the second operation time is longer than the first predetermined operation time but shorter than the second predetermined operation time. in this case, the front critical path circuit 516 is operating with a delay shorter than half a period of the operation clock signal eclk and the critical path circuit 512 as a whole is operating with a delay longer than half a period of the operation clock signal eclk but shorter than one period thereof. thus, the internal circuit driven by the output voltage vint is considered to be operating properly. accordingly, when the status signals la and lb of the replica circuit 501 are both at h level, the operation status is recognized as operation state ok. in fig. 14 , pattern c shows a case in which the output signal ra is latched at l level in the first latch 513 a and the output signal rb is latched at h level in the second latch 513 b, i.e.
a case in which the first operation time is longer than the first predetermined operation time and the second operation time is shorter than the second predetermined operation time. in this case, the front critical path circuit 516 is operating with a delay longer than half a period of the operation clock signal eclk and the critical path circuit 512 as a whole is operating with a delay shorter than one period thereof. thus, the internal circuit driven by the output voltage vint is considered to be operating with insufficient operation margins and thus very likely to become inoperative upon a slight change in the operating environment or the like. accordingly, when the status signals la and lb of the replica circuit 501 are at l and h levels respectively, the operation status is recognized as operation state warn. in fig. 14 , pattern d shows a case in which the output signal ra is latched at l level in the first latch 513 a and the output signal rb is latched at l level in the second latch 513 b, i.e. a case in which the second operation time is longer than the second predetermined operation time. in this case, the critical path circuit 512 as a whole is operating with a delay longer than one period of the operation clock signal eclk. thus, the internal circuit driven by the output voltage vint is considered to be very likely to be inoperative. accordingly, when the status signals la and lb of the replica circuit 501 are both at l level, the operation status is recognized as operation state ng. as described above, the four operation states of the replica circuit 501 are discriminated by the combination of the status signals la and lb. fig. 15 is a table showing the relationship between the status signals la and lb and the operation status of the internal circuit. 
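reading the four patterns in continuous time (delays and pulse widths measured in units of one eclk period) gives a compact model; this is an illustrative abstraction of figs. 14 and 15 , not the circuit itself:

```python
def classify(front_delay, total_delay, period=1.0):
    """Return the latched status signals (la, lb) and the recognized
    operation state for given front-path and whole-path delays.
    la is high when the front path beats the half-period window;
    lb is high when the whole-path delay falls between half a period
    and one period, so the delayed pulse is still high at the ev2 latch."""
    la = 1 if front_delay < period / 2 else 0
    lb = 1 if period / 2 < total_delay < period else 0
    state = {(1, 0): "fast", (1, 1): "ok",
             (0, 1): "warn", (0, 0): "ng"}[(la, lb)]
    return la, lb, state
```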
in this way, by classifying the operation status of the critical path circuit 512 into four states (fast, ok, warn, and ng), it is possible to recognize with sufficient reliability the operation status of the internal circuit that is driven by the output voltage vint. this makes it possible to cope with any variations inevitable in the manufacturing process and changes in the operating environment, and thus to supply the optimum output voltage vint at a given time. this contributes to the reduction of the power consumption of the integrated circuit as a whole. next, the circuit configuration and the operation of the select signal generator 502 will be described. the select signal generator 502 is a circuit that generates a select signal with which to select the output pulse signal dout output from the delay circuit 102 according to the status signals la and lb fed from the replica circuit 501 . for example, when the status signals la and lb indicate operation state fast, the select signal generator 502 makes the output voltage vint one step lower from its current level. that is, the select signal is so generated that the delay produced by the delay circuit 102 is made one step longer from its current value. when the status signals la and lb indicate operation state ok, the select signal generator 502 keeps the output voltage vint at its current level. that is, the select signal is so generated that the aforementioned delay is kept at its current value. when the status signals la and lb indicate operation state warn or operation state ng, the select signal generator 502 makes the output voltage vint one step higher from its current level. that is, the select signal is so generated that the aforementioned delay is made one step shorter from its current value. 
in all the embodiments described heretofore, the output voltage vint is varied by increasing and decreasing the delay produced by the delay circuit 102 , but the alternatives among which the delay circuit 102 can select the output pulse signal dout are limited to the output signals d 0 to d 5 . thus, in generating the select signal, exceptions need to be handled appropriately so that, if the output pulse signal dout selected previously is the output signal d 0 and then the replica circuit 501 requests that the aforementioned delay be made another step shorter, or if the output pulse signal dout selected previously is the output signal d 5 and then the replica circuit 501 requests that the aforementioned delay be made another step longer, the output voltage vint is kept at its current level, i.e. the aforementioned delay is kept at its current value. fig. 16 shows a practical example of the circuit configuration of the select signal generator 502 , devised with the above considerations in mind. fig. 16 is a diagram schematically showing an example of the circuit configuration of the select signal generator 502 . as this figure shows, the select signal generator 502 includes a voltage control signal generator 601 , an up/down counter 602 , a register 603 , and a decoder 604 . the voltage control signal generator 601 is a circuit that produces voltage control signals up, stay, and down according to the status signals la and lb fed from the replica circuit 501 and the select signals s 0 and s 5 fed from the decoder 604 . fig. 17 is a truth table of the logic circuit provided in the voltage control signal generator 601 . the voltage control signal up is a signal that requests that the delay produced by the delay circuit 102 be made one step shorter from its current value. the voltage control signal stay is a signal that requests that the aforementioned delay be kept at its current value. 
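the policy above, together with the d 0 /d 5 boundary exceptions, amounts to a clamped step on the index of the selected output signal (d 0 gives the shortest delay and the highest vint, d 5 the longest delay and the lowest vint). a minimal sketch, with illustrative names:

```python
def next_index(k, state):
    """Next delayed-signal index (0..5) given the current index k and the
    operation state: fast lowers vint (longer delay, higher index),
    warn/ng raise vint (shorter delay, lower index), ok holds.
    The min/max clamp implements the d0/d5 exception handling."""
    step = {"fast": +1, "ok": 0, "warn": -1, "ng": -1}[state]
    return min(5, max(0, k + step))
```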
the voltage control signal down is a signal that requests that the aforementioned delay be made one step longer from its current value. the up/down counter 602 is a circuit that calculates the value indicating the new select signal according to the voltage control signals up, stay, and down generated by the voltage control signal generator 601 and the output signals cnt 0 to cnt 2 of the register 603 , in which the value indicating the previous select signal is stored. the circuit configuration and the operation of the up/down counter 602 will be described in detail later. the register 603 is a circuit that temporarily holds the output signals cnt 0 to cnt 2 of the up/down counter 602 , and is composed of three d flip-flop circuits that are triggered by a driving clock esclk. the driving clock esclk for the register 603 is a pulse signal that rises before the delay circuit 102 starts selecting the output pulse signal dout. when the voltage conversion circuit is started up, the d flip-flop circuits constituting the register 603 are first reset to l level by a reset signal (not shown). this causes the select signal s 0 output from the decoder 604 to turn to h level and all the other select signals s 1 to s 5 to turn to l level. that is, as the output pulse signal dout at the time of the start-up of the voltage conversion circuit, the output signal d 0 , which makes the delay produced by the delay circuit 102 shortest, is selected. as a result, the output voltage vint is set at the upper limit of its variable range, so that the internal circuit to which the output voltage vint is fed operates reliably even at the time of the start-up of the voltage conversion circuit. the decoder 604 is a circuit that generates select signals s 0 to s 5 by decoding the output signals cnt 0 to cnt 2 of the register 603 .
specifically, the decoder 604 converts the 3-bit signals (from 000 to 101), representing from 0 to 5 in the decimal system, held in the register 603 into 6-bit signals (from 100000 to 000001) corresponding respectively to the select signals s 0 to s 5 . next, the circuit configuration and the operation of the up/down counter 602 will be described. fig. 18 is a diagram schematically showing an example of the circuit configuration of the up/down counter 602 . as this figure shows, the up/down counter 602 includes an encoder 610 and a 3-bit adder 611 . the 3-bit adder 611 is composed of two full adders and one half adder. the encoder 610 is a circuit that generates output signals cf 0 to cf 2 by encoding the voltage control signals up, stay, and down fed from the voltage control signal generator 601 . specifically, the encoder 610 converts the voltage control signals up, stay, and down into three-bit signals (111 to 001) that represent −1 to 1 in the decimal system. fig. 19 is a truth table of the logic circuit provided in the encoder 610 . the 3-bit adder 611 is a circuit that adds together the output signals cf 0 to cf 2 of the encoder 610 and the output signals cnt 0 to cnt 2 of the register 603 . it is to be understood that, although the delay time controller 103 provided in the voltage conversion circuit of the first embodiment is taken up as an example in the above description, the delay time controller 103 configured as described above can be used intact as the delay time controller 203 provided in the voltage conversion circuit of the second embodiment.
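working the numbers above through in python (the function names are illustrative; the stay encoding of 000 and the two's-complement reading of 111 as −1 are inferred from the truth-table description):

```python
def encode_step(up, stay, down):
    """Encoder 610: map up/stay/down to a 3-bit two's-complement step,
    111 (-1) for up (shorter delay), 000 for stay, 001 (+1) for down."""
    return 0b111 if up else (0b001 if down else 0b000)

def add3(cnt, step):
    """3-bit adder 611: add the step to the stored count, discarding the
    carry out of bit 2 (i.e. modulo-8 arithmetic)."""
    return (cnt + step) & 0b111

def decode_onehot(cnt):
    """Decoder 604: expand a 3-bit count (0..5) into the one-hot
    select signals [s0, s1, s2, s3, s4, s5]."""
    return [1 if i == cnt else 0 for i in range(6)]
```

adding 111 (−1) modulo 8 is how the counter steps down: for example, a stored count of 3 plus 111 gives 010, i.e. 2.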
in a case where the delay time controller 103 configured as described above is used as the delay time controller 303 provided in the voltage conversion circuit of the third embodiment, the decoder 604 is so configured as to generate the first select signals s 0 , s 2 , and s 4 by decoding the highest two bits of the output signals cnt 0 to cnt 2 of the register 603 , and the lowest bit of the output signals cnt 0 to cnt 2 is used as the second select signal sodd. next, the voltage conversion circuit of a fourth embodiment of the invention will be described. fig. 20 is a diagram schematically showing the circuit configuration of the voltage conversion circuit of the fourth embodiment. as this figure shows, the voltage conversion circuit of this embodiment has basically the same circuit configuration as the voltage conversion circuits of the first to third embodiments, but differs from them in that the output voltage vint is supplied, as a supply voltage, to the output pulse generator (the reference pulse generator, the delay circuit, and the delay time controller) and to the switch timing controller. as fig. 20 shows, the voltage conversion circuit of this embodiment includes, in addition to a reference pulse generator 701 , a delay circuit 702 , a delay time controller 703 , a switch timing controller 704 , a switch circuit 705 , and a filter circuit 706 , step-up level shifters 710 a and 710 b. the reference pulse generator 701 , the delay circuit 702 , and the delay time controller 703 may be configured in the same manner as in any of the first to third embodiments described earlier. here, to the reference pulse generator 701 , the delay circuit 702 , the delay time controller 703 , and the switch timing controller 704 is supplied, as their supply voltage, not the external source voltage vdd but the output voltage vint of the filter circuit 706 .
however, when the switch timing controller 704 is driven by the output voltage vint output from the filter circuit 706 , the h level of the first and second control signals 1 and 2 equals the output voltage vint. this may hinder proper control of the on/off states of the pmos transistor m 1 and the nmos transistor m 2 constituting the switch circuit 705 . therefore, to step up the voltage level of the first and second control signals 1 and 2 to the necessary level, the switch timing controller 704 has the step-up level shifters 710 a and 710 b provided in its output stage. in this way, by driving all the circuit blocks other than the switch circuit 705 and the filter circuit 706 with the output voltage vint, which is lower than the external source voltage vdd, it is possible to greatly reduce the power consumption of the voltage conversion circuit itself, and thereby reduce the power consumption of the integrated circuit as a whole.
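The need for the step-up level shifters can be illustrated numerically: if the gate of the high-side pmos transistor m 1 (whose source sits at vdd) is driven only up to vint, the source-gate voltage never drops below the threshold and the transistor cannot fully turn off. The voltage values and threshold below are assumed for the sketch, not taken from the disclosure.

```python
def pmos_is_off(gate_v, source_v, vth=0.7):
    """A PMOS transistor conducts while its source-gate voltage exceeds
    the threshold; it is off only when source_v - gate_v < vth.
    vth = 0.7 V is an assumed value."""
    return (source_v - gate_v) < vth

VDD, VINT = 3.3, 1.8  # hypothetical external and stepped-down voltages

# driving the gate only to the h level vint leaves m1 conducting
assert not pmos_is_off(VINT, VDD)
# a step-up level shifter raising the control signal to vdd turns m1 off
assert pmos_is_off(VDD, VDD)
```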
170-802-183-454-784
US
[ "US" ]
G09B9/04,B60W50/08,B60W50/14,G09B19/16
2014-10-22T00:00:00
2014
[ "G09", "B60" ]
saliency based awareness modeling
in one or more embodiments, driver awareness may be calculated, inferred, or estimated utilizing a saliency model, a predictive model, or an operating environment model. an awareness model including one or more awareness scores for one or more objects may be constructed based on the saliency model or one or more saliency parameters associated therewith. a variety of sensors or components may detect one or more object attributes, saliency, operator attributes, operator behavior, operator responses, etc. and construct one or more models accordingly. examples of object attributes associated with saliency or saliency parameters may include visual characteristics, visual stimuli, optical flow, velocity, movement, color, color differences, contrast, contrast differences, color saturation, brightness, edge strength, luminance, a quick transient (e.g., a flashing light, an abrupt onset of a change in intensity, brightness, etc.).
1. a system for saliency based awareness modeling, comprising: a sensor component detecting one or more objects within an operating environment and one or more object attributes for one or more of the objects, wherein one or more of the object attributes are associated with saliency of one or more of the objects; a monitoring component detecting one or more operator attributes of an operator of a vehicle; a modeling component constructing: a saliency model for one or more of the objects based on one or more of the attributes associated with saliency of one or more of the objects; and an awareness model for one or more of the objects based on the saliency model and one or more of the operator attributes; and a scoring component assigning one or more awareness scores to one or more objects of the awareness model based on the saliency model and one or more of the operator attributes, wherein the sensor component, the monitoring component, the modeling component, or the scoring component is implemented via a processing unit. 2. the system of claim 1 , comprising an electronic control unit (ecu) receiving one or more operator responses or operator behavior associated with the operator of the vehicle. 3. the system of claim 2 , wherein the modeling component constructs the awareness model based on one or more of the operator responses. 4. the system of claim 2 , comprising a database component housing baseline operator response information, wherein the modeling component constructs the awareness model based on a comparison between the baseline operator response information and one or more of the operator responses. 5. the system of claim 1 , comprising a notification component generating one or more notifications based on one or more awareness scores for one or more of the objects. 6. the system of claim 5 , comprising a management component controlling a timing, a color, or a size of one or more of the notifications. 7. 
the system of claim 1 , wherein the sensor component comprises an image capture device, a radar sensor, a light detection and ranging (lidar) sensor, a laser sensor, a video sensor, or a movement sensor. 8. the system of claim 1 , wherein the monitoring component comprises an image capture sensor, a motion sensor, an eye tracking unit, an infrared sensor, an infrared illuminator, or a depth sensor. 9. the system of claim 1 , wherein one or more of the object attributes comprises velocity, color, contrast, color saturation, brightness, or a detected transient for one or more of the objects. 10. the system of claim 1 , wherein one or more of the operator attributes comprises eye movement, head movement, focus, facial trajectory, eye gaze trajectory, or gaze distribution. 11. a method for saliency based awareness modeling, comprising: detecting, using a sensor component, one or more objects within an operating environment; detecting one or more object attributes for one or more of the objects, wherein one or more of the object attributes are associated with saliency of one or more of the objects; detecting one or more operator attributes of an operator of the vehicle; receiving one or more operator responses provided by the operator of the vehicle; constructing a saliency model for one or more of the objects based on one or more of the attributes associated with saliency of one or more of the objects; constructing an awareness model for one or more of the objects based on the saliency model, one or more of the operator responses, and one or more of the operator attributes; and assigning one or more awareness scores to one or more objects of the awareness model based on the saliency model, one or more of the operator responses, and one or more of the operator attributes, wherein the detecting, the receiving, the constructing, or the assigning is implemented via a processing unit. 12. 
the method of claim 11 , comprising constructing the awareness model based on a comparison between baseline operator response information and one or more of the operator responses. 13. the method of claim 11 , comprising constructing the awareness model based on a comparison between baseline object attribute information and one or more of the object attributes. 14. the method of claim 11 , comprising constructing the awareness model based on a comparison between baseline operator attribute information and one or more of the operator attributes. 15. the method of claim 11 , comprising rendering one or more notifications based on one or more awareness scores for one or more of the objects during navigation from an origin location to a destination location. 16. the method of claim 15 , comprising managing one or more aspects of one or more of the notifications. 17. a system for saliency based awareness modeling, comprising: a sensor component detecting one or more objects within an operating environment and one or more object attributes for one or more of the objects, wherein one or more of the object attributes are associated with saliency of one or more of the objects; a monitoring component detecting one or more operator attributes of an operator of a vehicle; a modeling component constructing: a saliency model for one or more of the objects based on one or more of the attributes associated with saliency of one or more of the objects; and an awareness model for one or more of the objects based on the saliency model and one or more of the operator attributes; a scoring component assigning one or more awareness scores to one or more objects of the awareness model based on the saliency model and one or more of the operator attributes; and a notification component generating one or more notifications based on one or more awareness scores for one or more of the objects, wherein the sensor component, the monitoring component, the modeling component, the scoring 
component, or the notification component is implemented via a processing unit. 18. the system of claim 17 , wherein the sensor component is a gaze detection device tracking eye movement or gaze distribution. 19. the system of claim 17 , comprising an electronic control unit (ecu) determining a number of attention demanding objects based on user interaction with one or more subunits of the ecu. 20. the system of claim 19 , wherein the modeling component constructs the awareness model based on the number of attention demanding objects.
background often, accidents, collisions, crashes, etc. may be caused by a variety of factors. for example, crashes may be caused by operator error, recognition error, decision errors, faulty equipment, performance errors, non-performance errors, or other errors. examples of recognition error may include inadequate surveillance, internal distractions, external distractions, inattention, daydreaming, or other recognition errors. examples of decision errors may include operating a vehicle at a velocity too fast for corresponding driving conditions, such as road segment topology, road surface conditions, temperature, visibility, etc. other examples of decision errors may include false assumptions by an operator of a vehicle (e.g., assuming another vehicle or another operator of another vehicle was turning in a different direction), illegal maneuvers, misjudgment of following distance, misjudgment of speed of vehicle, misjudgment of speed of another vehicle, following too closely, aggressive driving behavior, or other decision errors. performance errors may include overcompensation, poor directional control, panic, or behaving with a freeze response. non-performance errors may include falling asleep at the wheel, experiencing a medical condition or physical impairment, such as a heart attack, or other condition. regardless, a great deal of accidents, collisions, or crashes often result from a lack or gap in awareness of an operator of a vehicle, such as distractions, inattention, false assumptions, or misjudgments, for example. accordingly, it may be desirable to mitigate distractions for operators or drivers of vehicles. brief description this brief description is provided to introduce a selection of concepts in a simplified form that are described below in the detailed description. 
this brief description is not intended to be an extensive overview of the claimed subject matter, identify key factors or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter. in one or more embodiments, saliency based awareness modeling may be provided. for example, operator awareness of one or more objects in an operating environment may be modeled based on eye tracking or saliency, such as visual saliency. an awareness model may be constructed based on saliency or a saliency model for one or more objects in the operating environment. to construct such an awareness model, objects within the operating environment may be modeled (e.g., via an operating environment model) and saliency of one or more of the objects may be observed (e.g., via a saliency model, a predictive model, etc.). as an example, the operating environment may be modeled by detecting one or more objects surrounding a vehicle, within an operating environment, or by detecting the surroundings of the vehicle. here, in this example, as a vehicle is traveling through the operating environment, one or more objects may be tracked, monitored, sensed, or detected. as discussed, an object may be a potential hazard, a hazard, a potential obstacle, an obstacle, a physical object, a task, a line of communication, attention demanding or non-attention demanding, etc. however, because it may not be desirable to present an operator of a vehicle with every possible notification regarding one or more of the respective objects, the system for saliency based awareness modeling may filter or select one or more objects for notification. in other words, the system for saliency based awareness modeling may selectively present or render one or more alerts or one or more notifications associated with one or more selected objects. these notifications may be presented to an operator of a vehicle in a context appropriate manner. 
as an example, context appropriateness may be determined based on one or more factors, such as saliency, visual saliency, operator awareness, one or more operator responses, one or more operator attributes, or operator behavior. in this way, the system for saliency based awareness modeling may determine how alert or aware a driver or operator of a vehicle is with respect to one or more objects and notify the operator in a context appropriate manner (e.g., according to the given context or scenario). an awareness model may correspond to an operating environment model or a saliency model. for example, one or more objects within an operating environment model may be assigned one or more awareness scores based on one or more factors discussed herein, such as the saliency model. awareness scores for respective objects may be generated based on one or more object attributes (e.g., saliency, proximity of an object with respect to the vehicle), predictive modeling associated with the object (e.g., a likelihood that the object will move or become an obstacle, etc.), operator behavior, one or more operator attributes, or one or more operator responses (e.g., how a driver reacts versus or compared with expected responses, such as how a driver should react or would be expected to react given awareness of an object). in one or more embodiments, the selection of an object (e.g., for notification) or determination of an awareness score for that object within an awareness model may be based on a variety of factors, such as saliency of one or more objects with respect to an operating environment. in other words, visual cues associated with an object may be utilized to determine a likelihood that the object is visible (e.g., without necessarily requiring confirmation that an operator focused on that object via an eye tracking device or similar sensor). 
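As a concrete illustration of how such factors might be combined into an awareness score, the sketch below weights visual saliency, eye-tracking dwell time, and an observed operator response. The weights, factor names, and normalization are assumptions made for illustration only; the disclosure does not fix a formula.

```python
def awareness_score(saliency, gaze_dwell_s, operator_responded):
    """Illustrative awareness score in [0, 1] for a single object.
    All weights here are hypothetical."""
    score = 0.4 * saliency                       # visual saliency in [0, 1]
    score += 0.3 * min(gaze_dwell_s / 2.0, 1.0)  # eye-tracking dwell time (s)
    if operator_responded:                       # e.g. braking, lane change
        score += 0.3
    return min(score, 1.0)
```

Under this sketch, a highly salient object that the operator also reacted to saturates near 1.0, while a visually dull object that drew no glance and no response scores near 0.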
other factors may be utilized to determine or infer awareness or an awareness score, such as predictive modeling (e.g., predictive actions) associated with one or more objects, one or more operator attributes, operator behavior, one or more operator responses (e.g., one or more operator reactions, one or more maneuvers, one or more operations, etc.), presence of one or more occupants in a vehicle, one or more communications, one or more applications, one or more attention demanding objects, a number of attention demanding objects, multi-tasking, feedback, one or more operator preferences, one or more baselines associated therewith, or any combination thereof. as an example, if a vehicle equipped with a system for saliency based awareness modeling senses an object, such as a patrol vehicle or law enforcement vehicle on the side of a roadway, the presence or presence information of the law enforcement vehicle may be noted or associated with an operating environment model and tracked as an object within the operating environment model. one or more aspects associated with the object may be detected and utilized to build or construct a saliency model which corresponds to the operating environment model or one or more objects within the operating environment model. here, in this example, one or more saliency parameters associated with the law enforcement vehicle may be indicative of a state, a quality, or visibility by which the law enforcement vehicle stands out relative to the operating environment in which the law enforcement vehicle exists. in this regard, if the law enforcement vehicle has its emergency lights engaged, activated, or lit up, one or more saliency parameters of the saliency model associated with the law enforcement vehicle may indicate that the brightness or change in brightness associated with the flashing lights of the vehicle may cause the law enforcement vehicle to be more easily identified by the driver or operator of the vehicle. 
accordingly, a notification may (e.g., or may not) be provided to a driver or operator of a vehicle based on whether the lights of the law enforcement vehicle are engaged, whether the driver has changed lanes, provided appropriate clearance, etc. further, a saliency model or awareness model for an object may be adjusted based on a state of an operator, a length of a trip, a time of day, a level of traffic, proximity of an object, size of an object, etc. in one or more embodiments, a system for saliency based awareness modeling may forego providing an operator of a vehicle with a notification for the law enforcement vehicle if the emergency lights of the law enforcement vehicle are engaged (e.g., due to the visibility or saliency of the emergency lighting system of the law enforcement vehicle). if the lights of the law enforcement vehicle are turned off at a later time, the system for saliency based awareness modeling may track the law enforcement vehicle and mitigate or prevent notifications from being provided based on a likelihood that an operator has already seen the law enforcement vehicle prior to the emergency lighting system being deactivated. in this way, the system for saliency based awareness modeling may utilize predictive modeling to ‘remember’ that an operator is likely to be aware of an object after a state of the object changes or attributes (e.g., saliency) associated with the object change. as discussed, other factors may be utilized to facilitate construction of awareness scores or a corresponding awareness model. for example, operator behavior (e.g., eye tracking), one or more operator attributes, or one or more operator responses (e.g., accelerating, steering, turning, braking, signaling, etc.) may be detected or monitored. if eye tracking indicates that an operator of a vehicle has focused his or her eyes on an object for a threshold period of time, the awareness score for the corresponding object may be increased within the awareness model. 
similarly, if the operator of the vehicle steers around an object in advance or directs the vehicle on a trajectory away from the object, the awareness score may be increased for the same reasons. here, in this example, the system for saliency based awareness modeling may withhold notifications if an operator of a vehicle has shifted lanes (e.g., operator response) to provide a safe clearance for the law enforcement officer or law enforcement vehicle. because a driver or an operator of a vehicle generally has a limited amount of cognition or awareness as a resource, it may be advantageous to selectively provide notifications based on saliency, object attributes, operator response, operator attributes, or operator behavior. for example, a driver of a vehicle may only be able to effectively pay attention to up to seven objects in a concurrent fashion. accordingly, the system for saliency based awareness modeling may mitigate, manage, select, or target notifications or alerts presented to an operator of a vehicle based on one or more object attributes, saliency, operator behavior, operator attributes, operator response, driving conditions, etc. in other words, because an operator or driver of a vehicle may only pay attention to a limited number of objects, modeling driver awareness based on saliency of objects may mitigate generation of excess notifications, thereby reducing the amount of operator cognition consumed. the following description and annexed drawings set forth certain illustrative aspects and implementations. these are indicative of but a few of the various ways in which one or more aspects may be employed. other aspects, advantages, or novel features of the disclosure will become apparent from the following detailed description when considered in conjunction with the annexed drawings. brief description of the drawings aspects of the disclosure are understood from the following detailed description when read with the accompanying drawings. 
elements, structures, etc. of the drawings may not necessarily be drawn to scale. accordingly, the dimensions of the same may be arbitrarily increased or reduced for clarity of discussion, for example. fig. 1 is an illustration of an example component diagram of a system for saliency based awareness modeling, according to one or more embodiments. fig. 2 is an illustration of an example flow diagram of a method for saliency based awareness modeling, according to one or more embodiments. fig. 3 is an illustration of the generation of an example saliency based awareness model, according to one or more embodiments. fig. 4 is an illustration of an example operating environment, according to one or more embodiments. fig. 5 is an illustration of an example graphical representation of an awareness model or a saliency model, according to one or more embodiments. fig. 6 is an illustration of an example computer-readable medium or computer-readable device including processor-executable instructions configured to embody one or more of the provisions set forth herein. fig. 7 is an illustration of an example computing environment where one or more of the provisions set forth herein are implemented, according to one or more embodiments. detailed description embodiments or examples, illustrated in the drawings are disclosed below using specific language. it will nevertheless be understood that the embodiments or examples are not intended to be limiting. any alterations and modifications in the disclosed embodiments, and any further applications of the principles disclosed in this document are contemplated as would normally occur to one of ordinary skill in the pertinent art. for one or more of the figures herein, one or more boundaries, such as boundary 714 of fig. 7 , for example, may be drawn with different heights, widths, perimeters, aspect ratios, shapes, etc. relative to one another merely for illustrative purposes, and are not necessarily drawn to scale. 
for example, because dashed or dotted lines may be used to represent different boundaries, if the dashed and dotted lines were drawn on top of one another they would not be distinguishable in the figures, and thus may be drawn with different dimensions or slightly apart from one another, in one or more of the figures so that they are distinguishable from one another. as another example, where a boundary is associated with an irregular shape, the boundary, such as a box drawn with a dashed line, dotted line, etc., does not necessarily encompass an entire component in one or more instances. conversely, a drawn box does not necessarily encompass merely an associated component, in one or more instances, but may encompass a portion of one or more other components as well. the following terms are used throughout the disclosure, the definitions of which are provided herein to assist in understanding one or more aspects of the disclosure. as used herein, an occupant of a vehicle may include a driver of a vehicle, an operator of a vehicle, an individual, an entity, a person, a passenger, etc. as used herein, an operator of a vehicle may be a driver of a vehicle or an occupant who provides one or more vehicle operations or commands to the vehicle, such as steering commands, for example. as used herein, an operating environment may be a driving environment or a real world environment through which a vehicle travels, traverses, operates, or moves. an operating environment may include one or more roadways, other vehicles, objects, hazards, etc. as used herein, an object may include an obstacle, a potential obstacle, a hazard, a potential hazard, other vehicles, a person, a pedestrian, an animal, a pothole, road kill, physical objects, etc. additionally, an object may include non-tangible objects or items which may demand a portion of attention from an operator of a vehicle, such as a line of communication, a task, a notification, an alert, etc. 
as used herein, an attention demanding object may be an object which requires, utilizes, or demands a portion of focus or some attention from an operator of a vehicle. examples of attention demanding objects are a telephonic conversation (e.g., due to the nature of communication, conversation, or multi-tasking between the conversation and operating the vehicle) or an application with which an operator is interacting, such as by adjusting volume of a radio station or selecting a track on a music application. after an operator has adjusted the volume of the radio station, the corresponding radio or music application may require less focus from the driver, and thus become a non-attention demanding object. in other words, attention demanding objects may become merely ‘objects’ or non-attention demanding objects when an operator of a vehicle shifts his or her focus to other objects or other tasks, such as concentrating on driving, for example. as used herein, awareness may include attention, focus, concentration, cognition, etc. as used herein, a notification may include an alert which may be presented or rendered in a variety of formats, such as an audio alert, a graphic element, a video, an animation, a tactile response, a vibratory alert, modification of one or more vehicle systems or vehicle components, etc. in other words, a notification may include one or more adjustments, compensation, responses, or reactions to one or more objects. for example, visual devices, audio devices, tactile devices, antilock brake systems, brake assist systems, cruise control systems, stability control systems, collision warning systems, lane keep assist systems, blind spot indicator systems, pretensioning systems, climate control systems, etc. may be adjusted or controlled to implement a notification. regardless, a notification may provide a stimulus for one or more senses of an occupant of a vehicle. 
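Given awareness scores for tracked objects, the selective presentation of such notifications might be sketched as below: alert only on objects the operator is likely unaware of, within a limited attention budget (the cap of seven follows the "up to seven objects" figure cited earlier in this section). The threshold and budget values are illustrative assumptions.

```python
def select_notifications(awareness_scores, threshold=0.5, budget=7):
    """Pick the objects with the lowest awareness scores, i.e. those the
    operator is least likely to have noticed, up to the attention budget.
    awareness_scores maps object name -> score in [0, 1]."""
    candidates = [(name, s) for name, s in awareness_scores.items()
                  if s < threshold]
    candidates.sort(key=lambda pair: pair[1])  # least-aware first
    return [name for name, _ in candidates[:budget]]
```

In the earlier example, a law enforcement vehicle with a high awareness score (e.g., its emergency lights made it salient, or the operator already changed lanes) would generate no alert, while low-awareness objects still would.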
as used herein, the term “infer” or “inference” generally refer to the process of reasoning about or inferring states of a system, a component, an environment, a user from one or more observations captured via events or data, etc. inference may be employed to identify a context or an action or may be employed to generate a probability distribution over states, for example. an inference may be probabilistic. for example, computation of a probability distribution over states of interest based on a consideration of data or events. inference may also refer to techniques employed for composing higher-level events from a set of events or data. such inference may result in the construction of new events or new actions from a set of observed events or stored event data, whether or not the events are correlated in close temporal proximity, and whether the events and data come from one or several event and data sources. in one or more embodiments, driver awareness may be calculated, inferred, or estimated utilizing a saliency model, a predictive model, or an operating environment model. an awareness model including one or more awareness scores for one or more objects may be constructed based on the saliency model or one or more saliency parameters associated therewith. fig. 1 is an illustration of an example component diagram of a system 100 for saliency based awareness modeling, according to one or more embodiments. in one or more embodiments, the system 100 for saliency based awareness modeling may include a sensor component 110 , a modeling component 120 , a monitoring component 130 , an electronic control unit 140 , a database component 150 , a scoring component 160 , a notification component 170 , and a management component 180 . the database component may include a learning component 152 or an interface component 154 . 
the interface component 154 may be implemented as a standalone component of the system 100 for saliency based awareness modeling in one or more other embodiments. the sensor component 110 may detect or analyze the surroundings of a vehicle, a surrounding environment of a vehicle, or one or more objects within an operating environment, such as extra-vehicular objects (e.g., objects outside of the vehicle). for example, the sensor component 110 may track, monitor, detect, sense, or capture one or more of the objects, which may be potential hazards, potential obstacles, etc. and report respective objects to the modeling component 120 to facilitate construction of an operating environment model. explained another way, the sensor component 110 may identify one or more objects, obstacles, hazards, or potential hazards within an operating environment. the sensor component 110 may include an image capture device, an image acquisition device, a radar sensor, a light detection and ranging (lidar) sensor, a laser sensor, a video sensor, a movement sensor, etc. the sensor component 110 may detect one or more objects, presence information associated with one or more objects, or one or more attributes associated with one or more of the objects (e.g., object attributes), such as attributes associated with saliency of an object. in one or more embodiments, the sensor component 110 may generate one or more saliency parameters based on one or more of the object attributes. these saliency parameters may be indicative of a characteristic or attribute by which an object stands out relative to an environment, such as the operating environment in which the object is within, for example. explained another way, a saliency parameter may be indicative of a characteristic which distinguishes an object from neighbors of that object. thus, the sensor component 110 may detect saliency associated with one or more objects and generate one or more saliency parameters in this way. 
examples of object attributes associated with saliency or saliency parameters may include visual characteristics, visual stimuli, optical flow, velocity, movement, color, color differences, contrast, contrast differences, color saturation, brightness, edge strength, luminance, a quick transient (e.g., a flashing light, an abrupt onset of a change in intensity, brightness, etc.). in one or more embodiments, the sensor component 110 may detect one or more object attributes (e.g., which are not necessarily associated with saliency). examples of such object attributes may include proximity of an object to a vehicle, the type of object or class of object (e.g., signage, vehicle, pedestrian), etc. examples of other object attributes may include proximity of an object from a vehicle, angle of an object from a trajectory of a vehicle or roadway. respective saliency parameters or object attributes may be utilized to generate a saliency model for one or more objects. a saliency model for an object may be indicative of a likelihood that the object or portions of the object may be seen by an occupant of a vehicle based on characteristics of the object which make the object appear to stand out from its neighbors. in other words, the saliency model may be utilized to determine how likely an object is to get the attention of a driver or operator of a vehicle at a glance (e.g., without applying eye tracking or taking operator behavior into account). the modeling component 120 may construct or build an operating environment model based on presence information associated with one or more objects. the operating environment model may track one or more coordinates or a position associated with one or more of the objects. additionally, the modeling component 120 may tag one or more of the objects with metadata which may be indicative of one or more object attributes of one or more of the objects, such as whether or not an object has moved, for example. 
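The saliency parameters enumerated above (contrast, brightness, motion, quick transients such as flashing lights) might be collapsed into a per-object saliency value as sketched below. The field names, weights, and normalizing constants are assumptions of the sketch, not values from the disclosure.

```python
def saliency_parameters(obj, background):
    """Derive illustrative saliency parameters from raw object attributes.
    obj and background are dicts with assumed keys."""
    return {
        "contrast": abs(obj["brightness"] - background["brightness"]),
        "motion": abs(obj["velocity_mps"]),
        "transient": 1.0 if obj.get("flashing") else 0.0,
    }

def saliency_score(params):
    """Weighted combination into a single [0, 1] value (weights assumed)."""
    s = (0.5 * min(params["contrast"], 1.0)
         + 0.3 * min(params["motion"] / 30.0, 1.0)
         + 0.2 * params["transient"])
    return min(s, 1.0)
```

Under this sketch, a parked law enforcement vehicle with its emergency lights engaged scores higher than the same vehicle with its lights off, matching the earlier example.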
regardless, the operating environment model may be associated with one or more objects and one or more corresponding object attributes for one or more of the respective objects. the modeling component 120 may construct or build a saliency model for one or more objects from an operating environment model based on one or more saliency parameters associated with one or more of the objects. in other words, if an object within an operating environment appears to stand out from neighbors within the operating environment or from the operating environment itself in a visually salient manner, the modeling component 120 may construct the saliency model such that the saliency model is indicative of or quantifies the visibility of that object (e.g., with respect to the surrounding environment or operating environment). the saliency model may be utilized to update or influence an awareness model associated with the same object. in other words, if the saliency model indicates that an object stands out relative to its neighbors or the operating environment, the awareness model may assign that object a higher awareness score which indicates that there is a higher likelihood that an operator of a vehicle may become aware of the object (e.g., even when eye tracking indicates that the operator of the vehicle hasn't necessarily focused his or her eyes directly on that object). explained yet another way, the awareness model may be constructed or built based on the saliency model. in this way, driver awareness or operator awareness may be modeled according to saliency or visual saliency. in one or more embodiments, the modeling component 120 may build or construct a predictive model for one or more objects of the operating environment model. the predictive model may be indicative of one or more inferences or predictive actions associated with one or more of the objects within the operating environment. 
for example, the predictive model may include inferences for whether an object is likely to move, whether an object is likely to become an obstacle or a hazard, a likelihood that an object is alive, an estimated risk score associated with an object, etc. in other words, the modeling component 120 may build a predictive model for one or more objects having one or more estimated risk scores. an estimated risk score may be indicative of a likelihood of a risk associated with the object, such as a risk of collision with the object, for example. however, not necessarily all risks may be associated with collisions. for example, a missed stop sign may result in a traffic violation. in this way, the modeling component 120 may build or construct a predictive model based on one or more object attributes observed by the sensor component 110 , the operating environment model, or the saliency model for one or more respective objects. as an example, if the sensor component 110 detects a first object, such as a deer, the sensor component 110 may notate or store one or more object attributes associated with the deer within the operating environment model. the operating environment may include object attributes such as whether or not the deer moved, a velocity at which the deer moved (if at all), the proximity of the deer from the vehicle, etc. here, in this example, the modeling component 120 may build a predictive model associated with the first object or the deer. if the deer is near a wooded area or another area where one or more other objects may obstruct the view of an operator or driver of the vehicle, the predictive model may infer a possibility that other deer may be around the area. 
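The estimated risk score described above could be sketched as a function of a few object attributes: proximity, movement, and detected inattention (the texting-pedestrian example appears later in the text). The weights, falloff distance, and feature set below are assumptions for illustration, not the patent's scoring method.

```python
# Hypothetical risk scoring sketch; all constants are illustrative.
def estimated_risk(proximity_m, moving=False, inattentive=False):
    """Return a risk score in [0, 1]; closer and less attentive => riskier."""
    # Proximity term: 1.0 at 0 m, falling toward 0 by 100 m.
    risk = max(0.0, 1.0 - proximity_m / 100.0) * 0.6
    if moving:
        risk += 0.2   # moving objects are more likely to become obstacles
    if inattentive:
        risk += 0.2   # e.g., a pedestrian texting while walking
    return min(risk, 1.0)

# The inattentive pedestrian receives the higher estimated risk score.
attentive_ped = estimated_risk(20.0, moving=True, inattentive=False)
texting_ped = estimated_risk(20.0, moving=True, inattentive=True)
```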
accordingly, it may be seen that the modeling component 120 may build a predictive model based on a layout of an operating environment (e.g., objects which may cause an obstructed view), one or more object attributes (e.g., movement of the deer or the deer crossing the road), or one or more objects (e.g., the deer). in one or more embodiments, the predictive model may focus on obstacles or objects on the same side of the roadway as the deer (e.g., because other objects or packs of deer may be nearby). additionally, the predictive model may utilize eye gaze information to supplement predictive modeling of one or more objects. for example, if the monitoring component 130 detects that an operator of a vehicle is focused on a pack of deer to the right of the vehicle, then it may be more likely that saliency or other cues may be missed on objects to the left of the vehicle. in this way, the modeling component 120 may build a predictive model which may compensate for limited operator cognition. in other embodiments, the predictive model may be built, assembled, or constructed based on one or more object attributes. for example, a first pedestrian who is paying attention to the roadway may be assigned a lower estimated risk score, while a second pedestrian who is texting and walking may be assigned a higher estimated risk score indicative of the inattention of the second pedestrian. here, because the second pedestrian is a higher risk object, the predictive model for the second pedestrian may be indicative of such higher risk. further, in one or more embodiments, the modeling component 120 may build or construct a predictive model based on one or more navigation instructions, an estimated navigation route, or use of navigation or telematics (e.g., via the electronic control unit 140 or ecu). 
for example, if a driver or operator of a vehicle is driving a vehicle in a right lane of a roadway with three lanes: a right lane, a center lane, and a left lane, and navigation is slated to direct the operator to change lanes from the right lane to the left lane, upcoming hazards or objects detected may be prioritized based on the navigation or navigation instructions. in other words, the modeling component 120 may build or construct the predictive model with a focus on objects on the left side of the road based on anticipated navigation instructions which may direct an operator to change lanes from the right lane to the left lane of the roadway. here, in this example, objects on the right side of the roadway may be assigned lower estimated risk scores than objects on the left side of the roadway based on one or more navigation instructions or anticipated navigation. the management component 180 may present or render fewer notifications associated with objects on the right side of the roadway or prioritize objects on the left side of the roadway accordingly. in one or more embodiments, the monitoring component 130 may monitor an operator of a vehicle and capture one or more attributes associated with the operator (e.g., operator attributes) of the vehicle. in other words, the monitoring component 130 may track, monitor, detect, sense, or capture one or more operator attributes or operator behavior of the operator of the vehicle, such as eye movement, head movement, focus, body movement or shifting, etc. the monitoring component 130 may include one or more in-vehicle image capture devices, image capture sensors, motion sensors (e.g., to monitor head movement), eye tracking unit, infrared sensors, infrared illuminators, depth sensors (e.g., to monitor driver inattention or a focal point of the driver's eyes), etc. 
explained another way, the monitoring component 130 may employ gaze detection to detect inattention, distractions, motion trajectory, face direction trajectories, skeletal information, eye gaze trajectory, gaze distribution, etc. of a driver or operator of a vehicle. regardless, the monitoring component 130 may track eye movements of an operator of the vehicle, such as eye-gaze direction, eye-gaze movement, eye diversion, eye-closure, center gaze point, blinking movements, head movements, head positioning, head orientation, one or more facial features (e.g., such as areas surrounding the eyes, the pupils, eye corners, the nose, the mouth, etc. of an occupant), a head pose, a facial pose, facial temperature, or associated positioning, orientation, movements, etc. in this way, one or more operator attributes may be monitored. these operator attributes may be utilized to determine a state of an operator (e.g., whether the operator is sleepy, drowsy, alert, jumpy, inattentive, distracted, etc.). further, one or more of the operator attributes may be indicative of the positioning of the driver or a pose of one or more portions of the driver's body, such as eyes, head, torso, body, etc. in one or more embodiments, the modeling component 120 may build or construct an awareness model for an operator of a vehicle based on one or more operator attributes, such as operator attributes detected by the monitoring component 130 . here, these operator attributes may be utilized to determine driver awareness with respect to one or more objects within the operating environment. for example, the monitoring component 130 may determine a gaze time or a time of focus (e.g., utilizing depth sensors) for one or more objects in the operating environment. the time of focus may be a peak amount of time (e.g., a maximum) a driver or operator of a vehicle spends with his or her eyes focused on an object within the operating environment. 
further, the monitoring component 130 may track, monitor, or tag an object with a duration or time at which the operator of the vehicle last looked at that object. effectively, the monitoring component 130 may track whether one or more objects are new to a driver or operator of a vehicle or whether one or more of the objects are stale or perhaps forgotten. explained yet another way, the monitoring component 130 may tag one or more objects with timestamps indicative of a time at which the driver or operator of the vehicle last focused his or her eyes on that object. in one or more embodiments, the monitoring component 130 may classify whether an operator of a vehicle is aware of an object based on a length of time the operator is focused on the object. additionally, the monitoring component 130 may infer a likelihood of whether an operator of a vehicle is aware of a second object based on a length of time the operator is focused on a first object, a perceived distance between the first object and the second object, one or more saliency parameters associated with the first object, one or more saliency parameters associated with the second object, etc. explained another way, the monitoring component 130 may distinguish between ‘looking’ and ‘seeing’ an object based on eye gaze trajectory, gaze point, gaze time, time of focus, depth, output of an eye tracking sensor, etc. similarly, the monitoring component 130 may monitor or track when one or more objects appeared in view of the driver (e.g., when an object is ‘new’ in an operating environment) or times when an object was within peripheral vision of the driver. 
as an example, if the monitoring component 130 determines that a first object is within the peripheral vision of a driver (e.g., the driver or operator has his or her eyes focused on a second object less than a threshold peripheral vision distance away) for a threshold peripheral vision time, the modeling component 120 may build an awareness model indicative of the first object being within peripheral vision of the driver. in one or more embodiments, this awareness model may be built or constructed based on a saliency model for one or more of the objects. for example, if the first object is brightly colored or otherwise stands out from the operating environment, the modeling component 120 may utilize a saliency model associated with the first object to adjust one or more aspects of the awareness model. here, in this example, because the first object is brightly colored (e.g., as indicated by one or more object attributes or saliency parameters), an awareness score assigned to the first object may be higher than an awareness score assigned to a third object exhibiting less saliency, where the third object is the same distance away from the first object as the second object. in other words, the saliency model may influence awareness scoring or how an awareness model may be built. as another example, when an object exhibits a high degree of saliency, contrast, etc., the distance (e.g., threshold peripheral vision distance) or radius utilized to define peripheral vision may be increased. if an operator is focused on a first object and a second object exhibiting little or no contrast with the operating environment is greater than a threshold peripheral vision distance away, the modeling component 120 may infer that the operator of the vehicle did not see that second object. 
however, if an operator is focused on a first object and a third object exhibiting high contrast with the operating environment is the same distance away from the first object as the second object, the modeling component 120 may infer that the operator of the vehicle did see the third object due to the saliency modeling or by increasing the threshold peripheral vision distance based on saliency parameters of the third object or a saliency model associated with the third object. in this way, a saliency model for one or more objects may be utilized to adjust or construct an awareness model (e.g., by changing threshold peripheral vision distances or threshold peripheral vision time, etc.). accordingly, this may allow for the modeling component 120 to build an awareness model which infers that a driver has spotted an object associated with a high degree of saliency by merely glancing at the object or near (e.g., within the threshold peripheral vision distance) the object. the monitoring component 130 may detect or sense other types of operator attributes or operator behavior. for example, the monitoring component 130 may include a microphone which detects verbal cues associated with one or more objects within the operating environment. here, in this example, the monitoring component 130 or microphone may detect operator behavior, such as groaning when a light turns red or muttering, “aw c'mon” when a vehicle or pedestrian acts out of turn (e.g., gets to a stop sign after the vehicle but goes before the vehicle, cuts the driver off, etc.). accordingly, the modeling component 120 may build an awareness model based on operator behavior or one or more of these operator attributes. if an operator of a vehicle groans as a light ahead turns red, the modeling component 120 may construct the awareness model with an inference that the operator has seen the light. 
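The peripheral-vision inference above can be sketched as a simple threshold test in which the radius defining "peripheral vision" widens with the object's saliency. The base radius, the linear widening rule, and the use of angular distance are all illustrative assumptions; the text only states that the threshold peripheral vision distance may be increased for salient objects.

```python
# Sketch of saliency-adjusted peripheral vision; constants are hypothetical.
def within_peripheral_vision(gaze_distance_deg, saliency, base_radius_deg=10.0):
    """True if an object is close enough to the gaze point to be noticed.

    gaze_distance_deg: angular distance between gaze point and object.
    saliency: 0..1 saliency score; high saliency widens the radius.
    """
    threshold = base_radius_deg * (1.0 + saliency)  # up to 2x the base radius
    return gaze_distance_deg <= threshold

# A low-contrast object 15 degrees from the gaze point is inferred unseen...
unseen = within_peripheral_vision(15.0, saliency=0.1)
# ...while a high-saliency object at the same distance is inferred seen.
seen = within_peripheral_vision(15.0, saliency=0.9)
```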
in one or more embodiments, the monitoring component 130 may detect one or more objects, such as intra-vehicular objects (e.g., passengers, occupants, conversations, tasks, such as peeling a banana, eating a taco, etc.) or objects within the vehicle. additionally, the monitoring component 130 may determine a number of attention demanding objects based on one or more operator attributes (e.g., shifting of eyes, gaze distribution, detected speech or conversations, etc.). for example, if the monitoring component 130 tracks eye movement or gaze distribution utilizing a gaze detection device, a number of objects which the driver has looked at may be determined. in this example, an object may be considered an attention demanding object if the operator of the vehicle has focused on that object for a threshold amount of time (e.g., two hundred milliseconds) within a rolling time window (e.g., within the last minute). as discussed, the modeling component 120 may adjust these thresholds or time windows based on the saliency or saliency models for respective objects. the modeling component 120 may receive a count of a number of objects or a number of attention demanding objects and generate or construct an awareness model accordingly. in one or more embodiments, the electronic control unit 140 (ecu) may receive one or more operator responses, one or more operator reactions, one or more operations, such as vehicle operations (e.g., steering, horn, turn signal, etc.) or maneuvers made by an operator of a vehicle. in other words, the electronic control unit 140 (e.g., or one or more subunits thereof) may receive information or data, such as data related to operator behavior, maneuvers or operator responses provided by the operator of the vehicle (e.g., braking, accelerating, steering, honking, shifting, activation of a turn signal, adjusting vehicle trajectory, etc.). 
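The counting rule above — an object is attention demanding if the operator focused on it for a threshold time (two hundred milliseconds in the example) within a rolling window (the last minute) — can be sketched directly. The gaze-sample format is an assumption for illustration.

```python
from collections import defaultdict

# Sketch: count attention demanding objects from dwell-time gaze samples.
def attention_demanding_objects(gaze_samples, now, window_s=60.0,
                                threshold_s=0.2):
    """gaze_samples: list of (timestamp_s, object_id, dwell_s) tuples."""
    dwell = defaultdict(float)
    for t, obj, d in gaze_samples:
        if now - t <= window_s:          # keep only samples inside the window
            dwell[obj] += d
    return {obj for obj, total in dwell.items() if total >= threshold_s}

samples = [
    (100.0, "radio", 0.15), (130.0, "radio", 0.10),  # 0.25 s total: qualifies
    (140.0, "mirror", 0.05),                          # below 200 ms threshold
    (10.0, "billboard", 5.0),                         # outside rolling window
]
demanding = attention_demanding_objects(samples, now=150.0)
```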
because an operator response may be indicative of how a driver or operator reacts upon seeing or becoming aware of an object, this information (e.g., operator response information) may be utilized to build or construct an awareness model. explained another way, if an operator of a vehicle sees a stop sign, it is likely that the operator will apply the brakes of the vehicle. in this regard, when the brakes are applied (e.g., or other operator responses are detected), inferences may be drawn as to whether or not an operator of a vehicle is aware of a corresponding object. here, in this example, it may be inferred by the modeling component 120 that an operator is aware of an object when the brakes of the vehicle are applied within a threshold radius or distance from the object. the electronic control unit 140 may associate or correlate one or more operator responses with one or more objects. for example, the electronic control unit 140 may receive data, such as one or more operator responses, maneuvers, operations (e.g., honking the horn). in this example, the modeling component 120 may associate an object, such as a vehicle, with the operator response of honking the horn based on movement of the object or gaze tracking. regardless, the modeling component 120 may generate an awareness model indicative of a level of driver awareness (e.g., awareness score) with regard to an object based on one or more operator responses. as another example, because an operator of a vehicle may be expected to respond to an object or obstacle, such as a bicycler, by changing the trajectory of the vehicle such that the vehicle drifts away or farther in distance from the bicycler (e.g., by occupying a left portion of a lane while the bicycler occupies a right portion of the lane), the modeling component 120 may generate or construct the awareness model based on the trajectory of the vehicle (e.g., an operator response). 
in one or more embodiments, the electronic control unit 140 may receive information corresponding to the drifting of the vehicle from a steering unit or from a telematics unit. the electronic control unit 140 may include a powertrain control module (pcm), a transmission control module (tcm), a brake control module (bcm or ebcm), a central control module (ccm), a central timing module (ctm), a general electronic module (gem), a body control module (bcm), a suspension control module (scm), a telematics module, etc. the telematics module of the electronic control unit 140 may provide one or more navigation instructions or anticipated navigation to the modeling component 120 to facilitate predictive modeling or other modeling, according to one or more aspects. the electronic control unit 140 may detect one or more objects based on operator interaction with the electronic control unit 140 or one or more subunits of the electronic control unit 140 . for example, if an operator of a vehicle adjusts the volume of a sound system or radio of the vehicle, the electronic control unit 140 may classify the consumption of media as an object. here, in this example, the modeling component 120 may determine that the media is an attention demanding object based on operator interaction with the volume control. after a threshold period of time (e.g., five minutes without operator interaction), the media may be classified as a non-attention demanding object. in this way, the electronic control unit 140 may monitor one or more objects or determine a number of attention demanding objects based on operator interaction with the vehicle, the electronic control unit 140 , or one or more subunits of the electronic control unit 140 . 
examples of objects which may be detected by the electronic control unit 140 include one or more lines of communication (e.g., personal conversations, telephone calls, text conversations, texting, dialing, etc.), execution of one or more applications (e.g., changing a radio station, adjusting the volume, running apps on the vehicle or a mobile device connected to the vehicle). the database component 150 may include or store one or more baseline attributes, baseline operations, baseline responses (e.g., which may be associated with an operator of a vehicle). examples of baseline responses may include typical reaction times in response to an operator seeing an object, average clearance given to objects, obstacles, or obstructions, average number of objects an operator multi-tasks between, etc. in other words, the sensor component 110 may detect or identify one or more objects, the modeling component 120 may construct an operating environment model indicative of one or more of the objects, and the database component 150 may house or store one or more expected response attributes for one or more corresponding objects for comparison to facilitate abnormal behavior detection or anomalous behavior detection. as an example, if a pedestrian is detected by the sensor component 110 , the modeling component 120 may construct an operating environment model indicative of that pedestrian as an object or potential obstacle within the operating environment. the database component 150 may house expected response information or expected response attributes for a pedestrian or similar object. the modeling component 120 may compare current operator response information with the expected response information from the database component 150 to facilitate formation of an awareness model or an awareness score for the pedestrian. 
examples of expected response information or expected response attributes may include a distance at which an operator of a vehicle generally begins steering away from an object or obstacle, whether or not an operator decreases velocity of the vehicle, a rate of change in steering angle over time, etc. in one or more embodiments, a learning component 152 may receive one or more operator responses and object attributes and update expected response attributes or expected response information accordingly (e.g., utilizing a rolling data store). additionally, feedback may be received (e.g., via an interface component 154 ) from an operator of a vehicle to supplement or adjust one or more of the expected response attributes. for example, a saliency model associated with an object may be adjusted if an operator of a vehicle has difficulty perceiving differences in color. here, in this example, the learning component 152 may adjust baseline information associated with one or more saliency parameters if an operator systematically fails to identify or be aware of objects based on color differences. in this way, the learning component 152 may update expected response attributes or other baseline information in the database component 150 , thereby enabling a system 100 for saliency based awareness modeling to be trained by an operator of a vehicle during usage. similarly, other types of baseline information may be included or stored by the database component 150 , such as expected response attributes indicative of one or more operator attributes (e.g., typical or baseline gaze distribution of an operator), one or more operator responses (e.g., baseline reaction time, baseline clearance distance, etc.), one or more object attributes (e.g., shades of color recognized, threshold amount of saliency for object awareness to define high degree of saliency), etc. 
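The learning component's rolling update of expected response attributes could be implemented as an exponentially weighted moving average, so that recent operator behavior gradually shifts the stored baseline. The EWMA choice and smoothing factor are illustrative assumptions; the text says only that baselines are updated via a rolling data store.

```python
# Sketch of a baseline update (EWMA); alpha is a hypothetical choice.
def update_baseline(baseline, observation, alpha=0.1):
    """Blend a new observation into the stored baseline attribute."""
    return (1.0 - alpha) * baseline + alpha * observation

# A driver who keeps reacting faster than the stored 1.2 s reaction-time
# baseline slowly pulls the baseline down toward the observed times.
baseline = 1.2
for reaction_time in [0.9, 0.95, 0.9, 1.0]:
    baseline = update_baseline(baseline, reaction_time)
```

A small alpha makes the baseline robust to one-off outliers while still allowing the system to be trained by the operator over time, as the text describes.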
the modeling component 120 may build, construct, or generate an awareness model based on an operating environment model representing one or more objects, a saliency model for one or more of the objects, a predictive model for one or more of the objects, baseline information associated with one or more objects, operator behavior, operator responses, or operator attributes. the awareness model may include one or more awareness scores corresponding to one or more objects of the operating environment model. respective awareness scores may be indicative of a probability that an operator of a vehicle is aware of the object, given detected information (e.g., from the sensor component 110 , the monitoring component 130 , the electronic control unit 140 , the database component 150 , etc.). in one or more embodiments, the modeling component 120 may construct the awareness model based on a saliency model for one or more objects within the operating environment. the modeling component 120 may adjust a saliency model for one or more objects based on a time of day, day of week, length of a trip, duration of a trip, driving conditions, a state of an operator (e.g., drowsiness), level of traffic, proximity of an object, size of an object, etc. for example, if a driver has been driving for four hours, his or her perception may be affected by fatigue, and as a result, the modeling component 120 may adjust the saliency model for one or more corresponding objects to reflect a lower likelihood of awareness than usual. here, in this example, if an object is bright pink (e.g., exhibits a high degree of visual saliency), the modeling component 120 may generate an awareness model indicative of a 99% chance that an operator is aware of the bright pink object during the first hour of a trip. 
however, as the duration of the trip increases (e.g., 8 hours into a trip), the modeling component 120 may update the awareness model to indicate a lesser chance (e.g., an 85% chance) that the operator would be aware of the same or similar object. further, the modeling component 120 may generate the awareness model in a situational or context dependent manner. for example, during a scenario where traffic is low or light, awareness scores for objects may be assigned differently than when traffic is heavy. the modeling component 120 may generate the awareness model based on one or more aspects of the operating environment model. for example, awareness may be based on whether a vehicle is traveling on a straightaway, a highway, a curved roadway (e.g., when the roadway is curved, an operator may be less aware of objects or obstacles due to focus on steering), etc. in this way, the operating environment model may be utilized to generate or influence the awareness model. the awareness model may be updated on a continual basis based on updated information received by the sensor component 110 , the monitoring component 130 , the electronic control unit 140 , the database component 150 , etc. in one or more embodiments, the modeling component 120 may group or aggregate one or more objects when respective objects share one or more attributes, such as proximity between objects (e.g., a plurality of pedestrians crossing a crosswalk), direction of movement, origin location, etc. for example, if an emergency vehicle is equipped with an emergency lighting system which is activated, an operator of a vehicle is likely to have seen the emergency vehicle due to the flashing lights of the emergency lighting system. when the emergency lighting system is deactivated, the modeling component 120 may mark or tag the emergency vehicle as ‘previously salient’ due to the emergency lighting system. 
in this regard, the modeling component 120 may tag one or more objects within an awareness model as ‘previously salient’ at a prior time. accordingly, notifications may be provided or omitted based on one or more ‘previously salient’ tags and an elapsed time associated therewith. in other words, the modeling component 120 may generate an awareness model which provides data (e.g., the ‘previously salient’ tag) indicative that a corresponding object was likely to have been visible at a prior time. accordingly, in this example, an awareness model may be generated which indicates a high level of awareness or a high awareness score for the emergency vehicle. the management component 180 may then omit generation of an alert or notification for the emergency vehicle, thereby mitigating distractions for the operator of the vehicle. in one or more embodiments, the modeling component 120 may construct a saliency model for an object based on a peak saliency (e.g., observed by the sensor component 110 ) or a maximum saliency detected for that object. in this way, changes to the saliency of the object may be ‘remembered’. in other words, the modeling component 120 may build an awareness model or a saliency model which accounts for objects a driver or operator has already seen (e.g., even if an object changes in state or in saliency). the modeling component 120 may construct a predictive model which may be indicative of a decay (e.g., after a threshold period of time, incrementally, etc.) or decrease in awareness after a change in state or a change in saliency for an object is detected. the decay or decrease in awareness may be modeled based on a step function, a linear function, a power function, an original color of the object, one or more object attributes, etc. as an example, if an object is bright pink, the rate of decay or the decrease in awareness associated with the predictive model may be minor. 
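The decay of awareness after a change in state or saliency is said above to follow a step, linear, or power function. A minimal sketch of those three modes follows; the specific rates and thresholds are illustrative assumptions.

```python
# Sketch of awareness decay modes; constants are illustrative.
def decayed_awareness(initial, elapsed_s, mode="linear"):
    if mode == "step":
        # Full awareness until a threshold time, then a fixed drop.
        return initial if elapsed_s < 10.0 else initial * 0.5
    if mode == "linear":
        # Awareness falls to zero over 60 seconds.
        return max(0.0, initial * (1.0 - elapsed_s / 60.0))
    if mode == "power":
        # Heavy-tailed decay, e.g. for a memorable (bright pink) object.
        return initial * (1.0 + elapsed_s) ** -0.3
    raise ValueError(mode)

a_linear = decayed_awareness(0.9, 30.0, "linear")       # half gone at 30 s
a_linear_late = decayed_awareness(0.9, 60.0, "linear")  # fully decayed
a_power_late = decayed_awareness(0.9, 60.0, "power")    # still above zero
```

The power mode models the "minor" decay the text associates with highly salient objects: awareness never fully reaches zero, matching the idea that flashy objects remain memorable.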
in this way, the modeling component 120 may generate one or more models indicative of whether objects are fresh, stale, new, flashy, memorable, etc. further, notifications for one or more objects may be generated or redundant notifications may be mitigated according to one or more of the models (e.g., awareness, saliency, predictive, etc.). conversely, if an object exhibits a high degree of saliency, but a change in state or saliency causes that object to exhibit a low degree of saliency below a threshold level, the modeling component 120 may generate a saliency model for the object as if the object was never highly visible. in other words, a saliency model may be constructed which accounts for ‘disappearance’ of an object which was at one time highly visible. the modeling component 120 may receive a count of a number of objects detected (e.g., among the sensor component 110 , the monitoring component 130 , the electronic control unit 140 , etc.). further, the modeling component 120 may determine a number of attention demanding objects from one or more components of the system 100 for saliency based awareness modeling. for example, objects associated with the electronic control unit 140 may generally be attention demanding when an operator of a vehicle provides a user input, such as by changing the volume on a sound system or by selecting a different channel, etc. the monitoring component 130 may capture one or more images over a rolling period of time to determine a number of objects the driver or operator has focused on for a threshold period of time or predetermined amount of time (e.g., for objects which a gaze is directed for three or more seconds). here, the monitoring component 130 may update the database component 150 with an average number of objects an operator multi-tasks between as a baseline. 
further, the modeling component 120 may update the awareness model based on a comparison between a current number of objects a driver of a vehicle is multi-tasking between and the baseline or average number of objects the driver typically switches between. in other words, the modeling component 120 may adjust the awareness model based on a comparison of current operator attributes against baseline operator attributes, historical operator attributes, historical operator behavior, or baseline operator behavior. in this way, the modeling component 120 may provide for anomalous behavior detection. further, the modeling component 120 may build or construct one or more models based on operator preferences, user preferences, one or more conditions, one or more rules, feedback (e.g., customization of alerts or notifications), etc. in one or more embodiments, the modeling component 120 may generate a graphical representation of a saliency model or a graphical representation of an awareness model. an operator of a vehicle may utilize these graphical representations to identify ‘blind spots’ in their driving accordingly. the scoring component 160 may determine or calculate one or more awareness scores for one or more objects. an awareness score may be indicative of how aware an operator is, or how likely an operator is to be aware, of an object, or of the operator's awareness with regard to the surroundings of the operator, such as the operating environment. in other words, the scoring component 160 may calculate or determine a probability or a likelihood that an operator is aware of an object within an operating environment through which a vehicle is traveling. explained yet another way, the scoring component 160 may calculate a likelihood that a driver or operator of a vehicle sees an object such that the operator will react, behave, or respond in a safe manner, such as by steering the vehicle around the object, engaging the brakes of the vehicle, honking the horn, etc. 
in one or more embodiments, an awareness score may be expressed as a percentage (e.g., 75% or 0.75). in one or more embodiments, the scoring component 160 may assign one or more awareness scores to one or more objects associated with an awareness model based on a saliency model for one or more of the objects, a predictive model for one or more of the objects, etc. as an example, an awareness score may be indicative of a likelihood or probability that an operator is aware of an object given operator behavior, operator attributes, or object attributes. in one or more embodiments, this probability may be expressed as pr (awareness|operator behavior, operator attributes, object attributes). this probability may be calculated based on pr (awareness|object saliency), pr (awareness|operator behavior), pr (awareness|other object attributes), etc. the notification component 170 may provide one or more notifications regarding one or more objects within an operating environment. the notification component 170 may include an audio device (e.g., speakers), a display device (e.g., touchscreen, heads-up-display or hud, three-dimensional displays or 3-d displays), a tactile feedback device (e.g., provides vibration or tactile feedback), a communication device (e.g., provides email notifications, text notifications, etc.). in one or more embodiments, feedback or notifications may be provided to passengers or other occupants of a vehicle in addition to an operator of a vehicle. additionally, higher scoring (e.g., awareness scores) objects may be brought to the attention of passengers. for example, if an operator of a vehicle is highly likely to be aware of an object, a passenger may be presented with a notification for that object, thereby utilizing passenger cognition rather than operator cognition. in one or more embodiments, the notification component 170 may be integrated with a navigation component. 
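The awareness score above is expressed as a conditional probability built from several evidence terms — pr(awareness|object saliency), pr(awareness|operator behavior), and so on. The text does not specify a combination rule, so the sketch below uses a noisy-OR purely as one common illustrative choice: any single strong cue is enough to push the combined awareness high.

```python
# Illustrative sketch: noisy-OR combination of per-cue awareness evidence.
# The patent does not specify this rule; it is an assumption.
def awareness_score(evidence_probs):
    """evidence_probs: per-cue probabilities, e.g.
    {"saliency": pr(aware|saliency), "behavior": pr(aware|behavior)}.
    """
    p_unaware = 1.0
    for p in evidence_probs.values():
        p_unaware *= (1.0 - p)  # unaware only if every cue is missed
    return 1.0 - p_unaware

score = awareness_score({"saliency": 0.6, "behavior": 0.5, "attributes": 0.2})
```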
in other words, one or more notifications may be provided in conjunction with navigation from an origin location to a destination location. the management component 180 may control how notifications may be shown, presented, or rendered. in other words, the management component 180 may determine whether or not to notify or alert an operator or driver with regard to one or more objects based on one or more awareness scores or an awareness model associated with one or more of the objects. in one or more embodiments, the management component 180 may sort one or more of the objects by awareness score and render notifications for one or more of the objects above or below a threshold awareness score level. in other words, the management component 180 may select a number of objects which a driver is estimated or least likely to be aware of, and have the notification component 170 render notifications for respective objects (e.g., for presentation to the operator of the vehicle). in this way, the management component 180 may filter or select objects and selectively present one or more notifications for one or more of the selected objects to an operator of a vehicle in a context appropriate manner. in one or more embodiments, the management component 180 may rank or select one or more objects which have an awareness score greater than or less than a threshold awareness score for notification or alert. in this way, the management component 180 may selectively provide notifications based on saliency or driver behavior. for example, a system 100 for saliency based awareness modeling may score a law enforcement vehicle object such that notifications associated with that law enforcement vehicle object are omitted when lights of the law enforcement vehicle are engaged in a manner which enhances visibility or saliency of the law enforcement vehicle object. 
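the sorting and threshold filtering performed by the management component 180 may be sketched as follows; the function name, the pair representation, and the default threshold are assumptions made for illustration:

```python
def select_for_notification(objects, threshold=0.5, max_notifications=3):
    """objects: list of (object_id, awareness_score) pairs.

    Returns the objects the operator is least likely to be aware of,
    lowest awareness score first, capped at max_notifications.
    """
    candidates = sorted(objects, key=lambda o: o[1])  # least aware first
    below = [o for o in candidates if o[1] < threshold]
    return below[:max_notifications]

objs = [("pedestrian", 0.20), ("police_car", 0.95), ("cyclist", 0.40)]
print(select_for_notification(objs))
# → [('pedestrian', 0.2), ('cyclist', 0.4)]
```

note that the high-scoring law enforcement vehicle is filtered out, matching the example in the text where notifications for a highly salient object are omitted.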
additionally, when the lights for the law enforcement vehicle object are turned off, the system 100 for saliency based awareness modeling may track the law enforcement vehicle and prevent notifications from being generated based on a likelihood that an operator has already seen the law enforcement vehicle object (e.g., when the lights of the law enforcement vehicle object were activated). the management component 180 may manage the timing of notifications, a number of notifications to be rendered, volume of notifications, extent, size, color, etc. as discussed, the modeling component 120 may receive a count for a number of attention demanding objects. if a driver or operator of a vehicle is multi-tasking between a large number (e.g., greater than a baseline number stored in the database component 150 or another threshold number) of attention demanding objects, the management component 180 may cause the notification component 170 to render notifications for high priority objects (e.g., objects associated with a low awareness score below a threshold awareness level and a high estimated risk score above a desired risk level). further, the management component 180 may adjust the order of notifications based on the awareness model, the saliency model, or the predictive model, etc. conversely, the management component 180 may limit or reduce a number of notifications for an operator of a vehicle if there are excess distractions (e.g., above a threshold number of attention demanding objects) to mitigate multi-tasking. here, in this example, if a driver is busy multi-tasking, but the roadway is relatively clear aside from a low risk object, the management component 180 may negate the associated notification for that low risk object. in other words, the management component 180 may disable notifications when appropriate (e.g., when a driver is aware of a corresponding object), thereby mitigating excessive notifications based on context or situational awareness. 
because an operator or a driver of a vehicle may only pay attention to a limited number of objects, obstacles, tasks, notifications, alerts, etc., redundant presentation of alerts or notifications may be mitigated or managed according to the context of an operating scenario, saliency, driving conditions, operator behavior, or operator responses, thereby reducing the amount of operator cognition consumed. fig. 2 is an illustration of an example flow diagram of a method 200 for saliency based awareness modeling, according to one or more embodiments. at 202 , one or more objects (e.g., from an operating environment) or corresponding object attributes may be detected. at 204 , operator attributes, operator responses, or operator behavior may be detected. at 206 , one or more models may be built, assembled, or constructed based on information from 202 or 204 . for example, an operating environment model may be constructed based on detected objects or object attributes. similarly, a saliency model may be constructed based on object attributes associated with saliency. predictive models may be constructed for one or more objects based on operator responses or operator behavior. an awareness model may be assembled based on one or more of the other models or information from 202 or 204 . awareness scores may be assigned at 208 and notifications may be provided based on respective scores at 210 . fig. 3 is an illustration of the generation 300 of an example saliency based awareness model, according to one or more embodiments. fig. 3 and fig. 4 are described with respect to one or more components of the system 100 for saliency based awareness modeling of fig. 1 . here, information, data, etc. may be collected or aggregated from the sensor component 110 , the monitoring component 130 , the database component 150 , the electronic control unit 140 , or other components of the system 100 and fed to the modeling component 120 . 
the modeling component may utilize this data to build or construct an operating environment model 310 , a saliency model 312 , or a predictive model 340 for one or more objects. as an example, the sensor component 110 may detect one or more object attributes, such as saliency or movement and the modeling component may build the operating environment model 310 based on input from the sensor component 110 . similarly, the monitoring component 130 may detect one or more operator attributes, such as eye movement or gaze distribution of an operator of a vehicle and the modeling component 120 may build a predictive model 340 or awareness model 350 based thereon. the database component 150 may provide baseline information to the modeling component 120 for comparison against information or data received from the other sensors or components, such as the sensor component 110 , the monitoring component 130 , or the electronic control unit 140 . the modeling component 120 may build the predictive model 340 or the awareness model 350 based on such comparisons. additionally, the electronic control unit 140 may detect one or more operator responses or operator behavior to facilitate construction of the predictive model 340 or the awareness model 350 . fig. 4 is an illustration of an example operating environment 400 , according to one or more embodiments. in this example, a vehicle 450 may be equipped with a system 100 for saliency based awareness modeling. 410 may be a law enforcement vehicle equipped with a system for emergency lighting 410 a. when the sensor component 110 of the system 100 detects that the emergency lighting 410 a is activated, the modeling component 120 may construct an awareness model which assigns a high awareness score (e.g., 95%) to the object 410 or law enforcement vehicle. 
notifications associated with the object 410 may be mitigated by a management component 180 , thereby causing an operator of a vehicle to utilize or expend less attention on excess notifications. fig. 5 is an illustration of an example graphical representation 500 of an awareness model or a saliency model, according to one or more embodiments. a vehicle 550 may have a driver and/or one or more passengers or other occupants. the graphical representation of an awareness model or saliency model may have one or more regions, such as regions 510 , 520 , and 530 . respective regions may be indicative of likelihood that a driver is aware of objects within those regions. in one or more embodiments, the graphical representation 500 may be color coded. for example, region 530 may be red, region 520 may be yellow, and region 510 may be grey. the red and yellow may be indicative of a higher likelihood of awareness of objects within those regions, while grey may be indicative of a lower likelihood of awareness of objects within those regions. because saliency based awareness modeling may provide a probability distribution indicative of a likelihood of whether a driver or operator of a vehicle is aware of one or more objects, a system or method for saliency based awareness modeling is not merely an application of an abstract idea to a technological environment. for example, saliency based awareness modeling may improve the functioning of a computer by selecting one or more objects or target objects for notification, thereby reducing a processing load for a processing unit (e.g., because the processing unit will not be required to render a notification for all detected objects). 
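the color coding of the graphical representation 500 may be sketched as a simple mapping from a region's awareness likelihood to a color; the cut-off values below are assumptions chosen only to illustrate the red/yellow/grey scheme:

```python
def region_color(awareness_likelihood):
    """Map a region's awareness likelihood to the example color coding."""
    if awareness_likelihood >= 0.7:
        return "red"     # high likelihood the operator is aware of objects here
    if awareness_likelihood >= 0.4:
        return "yellow"  # moderately likely aware
    return "grey"        # low likelihood: a potential blind spot

print([region_color(p) for p in (0.9, 0.5, 0.1)])
# → ['red', 'yellow', 'grey']
```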
further, a system or method for saliency based awareness modeling may effect an improvement in the technological field of vehicular navigation, vehicular notifications, vehicle safety, or in-vehicle infotainment by mitigating unnecessary distractions, alerts, notifications, or other attention demanding objects. additionally, the system or method for saliency based awareness modeling may further effect improvements in respective technological fields by drawing the attention of the operator of a vehicle to fewer notifications, thereby helping a driver or operator focus on featured notifications while compensating, adjusting, or taking into account saliency of objects with respect to variables which may affect saliency. one or more embodiments may employ various artificial intelligence (ai) based schemes for carrying out various aspects thereof. one or more aspects may be facilitated via an automatic classifier system or process. a classifier is a function that maps an input attribute vector, x=(x1, x2, x3, x4, . . . , xn), to a confidence that the input belongs to a class. in other words, f(x)=confidence (class). such classification may employ a probabilistic or statistical-based analysis (e.g., factoring into the analysis utilities and costs) to prognose or infer an action that a user desires to be automatically performed. a support vector machine (svm) is an example of a classifier that may be employed. the svm operates by finding a hypersurface in the space of possible inputs, where the hypersurface attempts to split the triggering criteria from the non-triggering events. intuitively, this makes the classification correct for testing data that may be similar, but not necessarily identical to training data. other directed and undirected model classification approaches (e.g., naïve bayes, bayesian networks, decision trees, neural networks, fuzzy logic models, and probabilistic classification models) providing different patterns of independence may be employed.
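the classifier idea f(x)=confidence(class) may be made concrete with a minimal linear model over the attribute vector, with a logistic squashing to produce a 0..1 confidence. the weights here are hand-picked for illustration; an actual system would learn them (e.g., via an svm) from training data:

```python
import math

def confidence(x, weights, bias=0.0):
    """f(x) = confidence(class): a linear score squashed to (0, 1)."""
    score = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1.0 / (1.0 + math.exp(-score))  # logistic squashing

# e.g., a two-attribute input vector (object saliency, gaze overlap):
c = confidence((0.8, 0.6), weights=(2.0, 1.5))
```

an svm differs in how the separating hypersurface is found (maximizing the margin between classes), but the mapping from an attribute vector to a class confidence has the same shape as this sketch.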
classification as used herein, may be inclusive of statistical regression utilized to develop models of priority. one or more embodiments may employ classifiers that are explicitly trained (e.g., via a generic training data) as well as classifiers which are implicitly trained (e.g., via observing user behavior, receiving extrinsic information). for example, svms may be configured via a learning or training phase within a classifier constructor and feature selection module. thus, a classifier may be used to automatically learn and perform a number of functions, including but not limited to determining according to a predetermined criteria. still another embodiment involves a computer-readable medium including processor-executable instructions configured to implement one or more embodiments of the techniques presented herein. an embodiment of a computer-readable medium or a computer-readable device devised in these ways is illustrated in fig. 6 , wherein an implementation 600 includes a computer-readable medium 608 , such as a cd-r, dvd-r, flash drive, a platter of a hard disk drive, etc., on which is encoded computer-readable data 606 . this computer-readable data 606 , such as binary data including a plurality of zero's and one's as shown in 606 , in turn includes a set of computer instructions 604 configured to operate according to one or more of the principles set forth herein. in one such embodiment 600 , the processor-executable computer instructions 604 may be configured to perform a method 602 , such as the method 200 of fig. 2 . in another embodiment, the processor-executable instructions 604 may be configured to implement a system, such as the system 100 of fig. 1 . many such computer-readable media may be devised by those of ordinary skill in the art that are configured to operate in accordance with the techniques presented herein. 
as used in this application, the terms “component”, “module,” “system”, “interface”, and the like are generally intended to refer to a computer-related entity, either hardware, a combination of hardware and software, software, or software in execution. for example, a component may be, but is not limited to being, a process running on a processor, a processor, an object, an executable, a thread of execution, a program, or a computer. by way of illustration, both an application running on a controller and the controller may be a component. one or more components residing within a process or thread of execution and a component may be localized on one computer or distributed between two or more computers. further, the claimed subject matter is implemented as a method, apparatus, or article of manufacture using standard programming or engineering techniques to produce software, firmware, hardware, or any combination thereof to control a computer to implement the disclosed subject matter. the term “article of manufacture” as used herein is intended to encompass a computer program accessible from any computer-readable device, carrier, or media. of course, many modifications may be made to this configuration without departing from the scope or spirit of the claimed subject matter. fig. 7 and the following discussion provide a description of a suitable computing environment to implement embodiments of one or more of the provisions set forth herein. the operating environment of fig. 7 is merely one example of a suitable operating environment and is not intended to suggest any limitation as to the scope of use or functionality of the operating environment. 
example computing devices include, but are not limited to, personal computers, server computers, hand-held or laptop devices, mobile devices, such as mobile phones, personal digital assistants (pdas), media players, and the like, multiprocessor systems, consumer electronics, mini computers, mainframe computers, distributed computing environments that include any of the above systems or devices, etc. generally, embodiments are described in the general context of “computer readable instructions” being executed by one or more computing devices. computer readable instructions may be distributed via computer readable media as will be discussed below. computer readable instructions may be implemented as program modules, such as functions, objects, application programming interfaces (apis), data structures, and the like, that perform one or more tasks or implement one or more abstract data types. typically, the functionality of the computer readable instructions is combined or distributed as desired in various environments. fig. 7 illustrates a system 700 including a computing device 712 configured to implement one or more embodiments provided herein. in one configuration, computing device 712 includes at least one processing unit 716 and memory 718 . depending on the exact configuration and type of computing device, memory 718 may be volatile, such as ram, non-volatile, such as rom, flash memory, etc., or a combination of the two. this configuration is illustrated in fig. 7 by dashed line 714 . in other embodiments, device 712 includes additional features or functionality. for example, device 712 may include additional storage such as removable storage or non-removable storage, including, but not limited to, magnetic storage, optical storage, etc. such additional storage is illustrated in fig. 7 by storage 720 . in one or more embodiments, computer readable instructions to implement one or more embodiments provided herein are in storage 720 .
storage 720 may store other computer readable instructions to implement an operating system, an application program, etc. computer readable instructions may be loaded in memory 718 for execution by processing unit 716 , for example. the term “computer readable media” as used herein includes computer storage media. computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions or other data. memory 718 and storage 720 are examples of computer storage media. computer storage media includes, but is not limited to, ram, rom, eeprom, flash memory or other memory technology, cd-rom, digital versatile disks (dvds) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which may be used to store the desired information and which may be accessed by device 712 . any such computer storage media is part of device 712 . the term “computer readable media” includes communication media. communication media typically embodies computer readable instructions or other data in a “modulated data signal” such as a carrier wave or other transport mechanism and includes any information delivery media. the term “modulated data signal” includes a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. device 712 includes input device(s) 724 such as keyboard, mouse, pen, voice input device, touch input device, infrared cameras, video input devices, or any other input device. output device(s) 722 such as one or more displays, speakers, printers, or any other output device may be included with device 712 . input device(s) 724 and output device(s) 722 may be connected to device 712 via a wired connection, wireless connection, or any combination thereof. 
in one or more embodiments, an input device or an output device from another computing device may be used as input device(s) 724 or output device(s) 722 for computing device 712 . device 712 may include communication connection(s) 726 to facilitate communications with one or more other devices. according to one or more aspects, a system for saliency based awareness modeling is provided, including a sensor component, a monitoring component, a modeling component, and a scoring component. the system may include an electronic control unit (ecu), a database component, a notification component, or a management component. the sensor component may detect one or more objects within an operating environment and one or more object attributes for one or more of the objects. one or more of the object attributes are associated with saliency of one or more of the objects. the monitoring component may detect one or more operator attributes of an operator of a vehicle. the modeling component may construct a saliency model for one or more of the objects based on one or more of the attributes associated with saliency of one or more of the objects. the modeling component may construct an awareness model for one or more of the objects based on the saliency model and one or more of the operator attributes. the scoring component may assign one or more awareness scores to one or more objects of the awareness model based on the saliency model and one or more of the operator attributes. the system may include an electronic control unit (ecu) receiving one or more operator responses or operator behavior associated with the operator of the vehicle. the modeling component may construct the awareness model based on one or more of the operator responses. the system may include a database component housing baseline operator response information.
the modeling component may construct the awareness model based on a comparison between the baseline operator response information and one or more of the operator responses. the system may include a notification component generating one or more notifications based on one or more awareness scores for one or more of the objects. the system may include a management component controlling a timing, a color, or a size of one or more of the notifications. in one or more embodiments, the sensor component may include an image capture device, a radar sensor, a light detection and ranging (lidar) sensor, a laser sensor, a video sensor, or a movement sensor. the monitoring component may include an image capture sensor, a motion sensor, an eye tracking unit, an infrared sensor, an infrared illuminator, or a depth sensor. one or more of the object attributes may include velocity, color, contrast, color saturation, brightness, or a detected transient for one or more of the objects. one or more of the operator attributes may include eye movement, head movement, focus, facial trajectory, eye gaze trajectory, or gaze distribution.
according to one or more aspects, a method for saliency based awareness modeling is provided, including detecting one or more objects within an operating environment, detecting one or more object attributes for one or more of the objects, wherein one or more of the object attributes are associated with saliency of one or more of the objects, detecting one or more operator attributes of an operator of a vehicle, receiving one or more operator responses provided by the operator of the vehicle, constructing a saliency model for one or more of the objects based on one or more of the attributes associated with saliency of one or more of the objects, constructing an awareness model for one or more of the objects based on the saliency model, one or more of the operator responses, and one or more of the operator attributes, and assigning one or more awareness scores to one or more objects of the awareness model based on the saliency model, one or more of the operator responses, and one or more of the operator attributes. the method may include constructing an awareness model based on a comparison between baseline operator response information and one or more of the operator responses. in one or more embodiments, the awareness model may be constructed based on other factors. for example, the method may include constructing the awareness model based on a comparison between baseline object attribute information and one or more of the object attributes. as another example, the method may include constructing the awareness model based on a comparison between baseline operator attribute information and one or more of the operator attributes. other combinations may be possible. the method may include rendering one or more notifications based on one or more awareness scores for one or more of the objects during navigation from an origin location to a destination location or managing one or more aspects of one or more of the notifications. 
according to one or more aspects, a system for saliency based awareness modeling is provided, including a sensor component, a monitoring component, a modeling component, a scoring component, and a notification component. the sensor component may detect one or more objects within an operating environment and one or more object attributes for one or more of the objects. the sensor component may detect one or more of the object attributes associated with saliency of one or more of the objects. the monitoring component may detect one or more operator attributes of an operator of a vehicle. the modeling component may construct a saliency model for one or more of the objects based on one or more of the attributes associated with saliency of one or more of the objects. the modeling component may construct an awareness model for one or more of the objects based on the saliency model and one or more of the operator attributes. the scoring component may assign one or more awareness scores to one or more objects of the awareness model based on the saliency model and one or more of the operator attributes. the notification component may generate one or more notifications based on one or more awareness scores for one or more of the objects. the sensor component may be a gaze detection device tracking eye movement or gaze distribution. the system may include an electronic control unit (ecu) determining a number of attention demanding objects based on user interaction with one or more subunits of the ecu. the modeling component may construct the awareness model based on the number of attention demanding objects. although the subject matter has been described in language specific to structural features or methodological acts, it is to be understood that the subject matter of the appended claims is not necessarily limited to the specific features or acts described above. rather, the specific features and acts described above are disclosed as example embodiments. 
various operations of embodiments are provided herein. the order in which one or more or all of the operations are described should not be construed as to imply that these operations are necessarily order dependent. alternative ordering will be appreciated based on this description. further, not all operations may necessarily be present in each embodiment provided herein. as used in this application, “or” is intended to mean an inclusive “or” rather than an exclusive “or”. further, an inclusive “or” may include any combination thereof (e.g., a, b, or any combination thereof). in addition, “a” and “an” as used in this application are generally construed to mean “one or more” unless specified otherwise or clear from context to be directed to a singular form. additionally, at least one of a and b and/or the like generally means a or b or both a and b. further, to the extent that “includes”, “having”, “has”, “with”, or variants thereof are used in either the detailed description or the claims, such terms are intended to be inclusive in a manner similar to the term “comprising”. further, unless specified otherwise, “first”, “second”, or the like are not intended to imply a temporal aspect, a spatial aspect, an ordering, etc. rather, such terms are merely used as identifiers, names, etc. for features, elements, items, etc. for example, a first channel and a second channel generally correspond to channel a and channel b or two different or two identical channels or the same channel. additionally, “comprising”, “comprises”, “including”, “includes”, or the like generally means comprising or including, but not limited to. although the disclosure has been shown and described with respect to one or more implementations, equivalent alterations and modifications will occur based on a reading and understanding of this specification and the annexed drawings. the disclosure includes all such modifications and alterations and is limited only by the scope of the following claims.
holographic waveguides incorporating birefringence control and methods for their fabrication
many embodiments in accordance with the invention are directed towards waveguides implementing birefringence control. in some embodiments, the waveguide includes a birefringent grating layer and a birefringence control layer. in further embodiments, the birefringence control layer is compact and efficient. such structures can be utilized for various applications, including but not limited to: compensating for polarization related losses in holographic waveguides; providing three-dimensional lc director alignment in waveguides based on bragg gratings; and spatially varying angular/spectral bandwidth for homogenizing the output from a waveguide. in some embodiments, a polarization-maintaining, wide-angle, and high-reflection waveguide cladding with polarization compensation is implemented for grating birefringence. in several embodiments, a thin polarization control layer is implemented for providing either quarter wave or half wave retardation.
1. a waveguide comprising: at least one waveguide substrate; at least one grating; at least one polarization modifying layer, wherein the at least one polarization modifying layer is a liquid crystal and polymer system aligned using directional ultraviolet radiation; a light source for outputting light; an input coupler for directing the light into total internal reflection paths within the waveguide; and an output coupler for extracting light from the waveguide, wherein the interaction of the light with the at least one polarization modifying layer and the at least one grating provides a predefined characteristic of light extracted from the waveguide. 2. a waveguide comprising: at least one waveguide substrate; at least one grating; at least one polarization modifying layer; a light source for outputting light; an input coupler for directing the light into total internal reflection paths within the waveguide; and an output coupler for extracting light from the waveguide, wherein the interaction of the light with the at least one polarization modifying layer and the at least one grating provides a predefined characteristic of light extracted from the waveguide, wherein the at least one grating includes a birefringent grating formed in a liquid crystal and polymer system, and wherein the at least one polarization modifying layer influences the alignment of lc directors in the birefringent grating. 3. 
a waveguide comprising: at least one waveguide substrate; at least one grating; at least one polarization modifying layer, wherein the at least one polarization modifying layer comprises at least one stack of refractive index layers disposed on at least one optical surface of the waveguide, and wherein at least one layer in the stack of refractive index layers has an isotropic refractive index and at least one layer in the stack of refractive index layers has an anisotropic refractive index; a light source for outputting light; an input coupler for directing the light into total internal reflection paths within the waveguide; and an output coupler for extracting light from the waveguide, wherein the interaction of the light with the at least one polarization modifying layer and the at least one grating provides a predefined characteristic of light extracted from the waveguide. 4. a waveguide comprising: at least one waveguide substrate; at least one grating; at least one polarization modifying layer, wherein the at least one polarization modifying layer provides optical power; a light source for outputting light; an input coupler for directing the light into total internal reflection paths within the waveguide; and an output coupler for extracting light from the waveguide, wherein the interaction of the light with the at least one polarization modifying layer and the at least one grating provides a predefined characteristic of light extracted from the waveguide. 5. the waveguide of claim 1 , wherein the predefined characteristic comprises at least one of: uniform illumination and uniform polarization over the angular range of the light. 6. the waveguide of claim 1 , wherein the at least one polarization modifying layer provides compensation for polarization rotation introduced by the at least one grating along at least one direction of light propagation within the waveguide. 7. 
the waveguide of claim 1 , wherein the at least one polarization modifying layer is a liquid crystal and polymer material system. 8. the waveguide of claim 1 , wherein the interaction of light with the at least one polarization modifying layer provides at least one of: an angular or spectral bandwidth variation; a polarization rotation; a birefringence variation; an angular or spectral dependence of at least one of beam transmission or polarization rotation; or a light transmission variation in at least one direction in the plane of the waveguide substrate. 9. the waveguide of claim 1 , wherein the at least one polarization modifying layer is aligned by at least one of: electromagnetic radiation; electrical or magnetic fields; mechanical forces; chemical reaction; or thermal exposure. 10. the waveguide of claim 1 , wherein the predefined characteristic varies across the waveguide. 11. the waveguide of claim 1 , wherein the at least one polarization modifying layer has an anisotropic refractive index. 12. the waveguide of claim 1 , wherein the at least one polarization modifying layer is formed on at least one internal or external optical surface of the waveguide. 13. the waveguide of claim 1 , wherein the predefined characteristic results from the cumulative effect of the interaction of the light with the at least one polarization modifying layer and the at least one grating along at least one direction of light propagation within the waveguide. 14. the waveguide of claim 1 , wherein the at least one polarization modifying layer is reflective. 15. the waveguide of claim 1 , wherein the at least one grating comprises two or more gratings configured as a stack. 16. the waveguide of claim 1 , wherein the at least one polarization modifying layer provides an environmental isolation layer for the waveguide. 17. the waveguide of claim 1 , wherein the at least one polarization modifying layer has a gradient index structure. 18. 
the waveguide of claim 1 , wherein the at least one polarization modifying layer is formed by stretching a layer of an optical material to spatially vary its refractive index in the plane of the waveguide substrate. 19. the waveguide of claim 1 , wherein the light source provides collimated light in angular space. 20. the waveguide of claim 1 , wherein at least one of the input coupler and output coupler comprises a birefringent grating. 21. the waveguide of claim 1 , wherein the at least one grating is formed in a birefringent material. 22. the waveguide of claim 1 , wherein the at least one grating is a surface relief grating. 23. the waveguide of claim 1 , wherein the at least one grating is a fold grating. 24. the waveguide of claim 1 , wherein the at least one grating comprises two or more gratings multiplexed in a layer.
cross-reference to related applications the current application is a continuation of u.s. patent application ser. no. 16/906,872 entitled “holographic waveguides incorporating birefringence control and methods for their fabrication,” filed jun. 19, 2020, which is a continuation of u.s. patent application ser. no. 16/357,233 entitled “holographic waveguides incorporating birefringence control and methods for their fabrication,” filed mar. 18, 2019, which claims the benefit of and priority under 35 u.s.c. § 119(e) to u.s. provisional patent application no. 62/643,977 entitled “holographic waveguides incorporating birefringence control and methods for their fabrication,” filed mar. 16, 2018, the disclosures of which are hereby incorporated by reference in their entireties for all purposes. field of the invention the present disclosure relates to optical waveguides and more particularly to waveguide displays using birefringent gratings. background of the invention waveguides can be referred to as structures with the capability of confining and guiding waves (i.e., restricting the spatial region in which waves can propagate). one subclass includes optical waveguides, which are structures that can guide electromagnetic waves, typically those in the visible spectrum. waveguide structures can be designed to control the propagation path of waves using a number of different mechanisms. for example, planar waveguides can be designed to utilize diffraction gratings to diffract and couple incident light into the waveguide structure such that the in-coupled light can proceed to travel within the planar structure via total internal reflection (“tir”). fabrication of waveguides can include the use of material systems that allow for the recording of holographic optical elements within the waveguides. one class of such material includes polymer dispersed liquid crystal (“pdlc”) mixtures, which are mixtures containing photopolymerizable monomers and liquid crystals. 
a further subclass of such mixtures includes holographic polymer dispersed liquid crystal (“hpdlc”) mixtures. holographic optical elements, such as volume phase gratings, can be recorded in such a liquid mixture by illuminating the material with two mutually coherent laser beams. during the recording process, the monomers polymerize and the mixture undergoes a photopolymerization-induced phase separation, creating regions densely populated by liquid crystal micro-droplets, interspersed with regions of clear polymer. the alternating liquid crystal-rich and liquid crystal-depleted regions form the fringe planes of the grating. waveguide optics, such as those described above, can be considered for a range of display and sensor applications. in many applications, waveguides containing one or more grating layers encoding multiple optical functions can be realized using various waveguide architectures and material systems, enabling new innovations in near-eye displays for augmented reality (“ar”) and virtual reality (“vr”), compact heads-up displays (“huds”) for aviation and road transport, and sensors for biometric and laser radar (“lidar”) applications. summary of the invention following below are more detailed descriptions of various concepts related to, and embodiments of, an inventive optical display and methods for displaying information. it should be appreciated that various concepts introduced above and discussed in greater detail below may be implemented in any of numerous ways, as the disclosed concepts are not limited to any particular manner of implementation. examples of specific implementations and applications are provided primarily for illustrative purposes. a more complete understanding of the invention can be obtained by considering the following detailed description in conjunction with the accompanying drawings, wherein like index numerals indicate like parts. 
for purposes of clarity, details relating to technical material that is known in the technical fields related to the invention have not been described in detail. one embodiment includes a waveguide including at least one waveguide substrate, at least one birefringent grating; at least one birefringence control layer, a light source for outputting light, an input coupler for directing the light into total internal reflection paths within the waveguide, and an output coupler for extracting light from the waveguide, wherein the interaction of the light with the birefringence control layer and the birefringent grating provides a predefined characteristic of light extracted from the waveguide. in another embodiment, the interaction of light with the birefringence control layer provides at least one of: an angular or spectral bandwidth variation, a polarization rotation, a birefringence variation, an angular or spectral dependence of at least one of beam transmission or polarization rotation, and a light transmission variation in at least one direction in the plane of the waveguide substrate. in a further embodiment, the predefined characteristic varies across the waveguide. in still another embodiment, the predefined characteristic results from the cumulative effect of the interaction of the light with the birefringence control layer and the birefringent grating along at least one direction of light propagation within the waveguide. in a still further embodiment, the predefined characteristic includes at least one of: uniform illumination and uniform polarization over the angular range of the light. in yet another embodiment, the birefringence control layer provides compensation for polarization rotation introduced by the birefringent grating along at least one direction of light propagation within the waveguide. in a yet further embodiment, the birefringence control layer is a liquid crystal and polymer material system. 
in another additional embodiment, the birefringence control layer is a liquid crystal and polymer system aligned using directional ultraviolet radiation. in a further additional embodiment, the birefringence control layer is aligned by at least one of: electromagnetic radiation, electrical or magnetic fields, mechanical forces, chemical reaction, and thermal exposure. in another embodiment again, the birefringence control layer influences the alignment of lc directors in a birefringent grating formed in a liquid crystal and polymer system. in a further embodiment again, the birefringence control layer has an anisotropic refractive index. in still yet another embodiment, the birefringence control layer is formed on at least one internal or external optical surface of the waveguide. in a still yet further embodiment, the birefringence control layer includes at least one stack of refractive index layers disposed on at least one optical surface of the waveguide, wherein at least one layer in the stack of refractive index layers has an isotropic refractive index and at least one layer in the stack of refractive index layers has an anisotropic refractive index. in still another additional embodiment, the birefringence control layer provides a high reflection layer. in a still further additional embodiment, the birefringence control layer provides optical power. in still another embodiment again, the birefringence control layer provides an environmental isolation layer for the waveguide. in a still further embodiment again, the birefringence control layer has a gradient index structure. in yet another additional embodiment, the birefringence control layer is formed by stretching a layer of an optical material to spatially vary its refractive index in the plane of the waveguide substrate. in a yet further additional embodiment, the light source provides collimated light in angular space. 
in yet another embodiment again, at least one of the input coupler and output coupler includes a birefringent grating. in a yet further embodiment again, the birefringent grating is recorded in a material system including at least one polymer and at least one liquid crystal. in another additional embodiment again, the at least one birefringent grating includes at least one birefringent grating for providing at least one of the functions of: beam expansion in a first direction, beam expansion in a second direction and light extraction from the waveguide, and coupling light from the source into a total internal reflection path in the waveguide. in a further additional embodiment again, the light source includes a laser, and the alignment of lc directors in the birefringent grating spatially varies to compensate for illumination banding. a still yet another additional embodiment includes a method of fabricating a waveguide, the method including providing a first transparent substrate, depositing a layer of grating recording material, exposing the layer of grating recording material to form a grating layer, forming a birefringence control layer, and applying a second transparent substrate. in a still yet further additional embodiment, the layer of grating recording material is deposited onto the substrate, the birefringence control layer is formed on the grating layer, and the second transparent substrate is applied over the birefringence control layer. in yet another additional embodiment again, the layer of grating recording material is deposited onto the substrate, the second transparent substrate is applied over the grating layer, and the birefringence control layer is formed on the second transparent substrate. 
in a yet further additional embodiment again, the birefringence control layer is formed on the first transparent substrate, the layer of grating recording material is deposited onto the birefringence control layer, and the second transparent substrate is applied over the grating layer. in still yet another embodiment again, the method further includes depositing a layer of liquid crystal polymer material and aligning the liquid crystal polymer material using directional uv light, wherein the layer of grating recording material is deposited onto the substrate and the second transparent substrate is applied over the aligned liquid crystal polymer layer. in a still yet further embodiment again, the layer of liquid crystal polymer material is deposited onto one of either the grating layer or the second transparent substrate. in still yet another additional embodiment again, the layer of liquid crystal polymer material is deposited onto the first transparent substrate, the layer of grating recording material is deposited onto the aligned liquid crystal polymer material, and the second transparent substrate is applied over the grating layer. additional embodiments and features are set forth in part in the description that follows, and in part will become apparent to those skilled in the art upon examination of the specification or may be learned by the practice of the invention. a further understanding of the nature and advantages of the present invention may be realized by reference to the remaining portions of the specification and the drawings, which form a part of this disclosure. brief description of the drawings these and other features and advantages of the present invention will be better understood by reference to the following detailed description when considered in conjunction with the accompanying data and figures, wherein: fig. 
1 conceptually illustrates a schematic cross section view of a waveguide incorporating a birefringent grating and birefringence control layer in accordance with an embodiment of the invention. fig. 2 conceptually illustrates a schematic cross section view of a waveguide incorporating a birefringent grating and birefringence control layer for compensating grating birefringence in accordance with some embodiments of the invention. fig. 3 conceptually illustrates a schematic cross section view of a waveguide incorporating a birefringent grating and birefringence control layer for providing uniform output illumination from the waveguide in accordance with an embodiment of the invention. fig. 4 conceptually illustrates a schematic cross section view of a birefringence control layer formed by a multilayer structure combining isotropic and anisotropic index layers in accordance with an embodiment of the invention. fig. 5 conceptually illustrates a schematic cross section view of a birefringence control layer formed by a multilayer structure combining isotropic and anisotropic index layers integrated with a birefringent grating layer in accordance with an embodiment of the invention. fig. 6 conceptually illustrates a plan view of a dual expansion waveguide with birefringent control layers in accordance with an embodiment of the invention. fig. 7 conceptually illustrates a schematic cross section view of a waveguide incorporating a birefringent grating and birefringence control layer for correcting birefringence introduced by an optical element in the output light path from the waveguide in accordance with an embodiment of the invention. fig. 8 conceptually illustrates a schematic plan view of an apparatus for aligning a birefringence control layer by applying forces to the edges of the layer in accordance with an embodiment of the invention. figs. 
9a-9f conceptually illustrate the process steps and apparatus for fabricating a waveguide containing a birefringent grating and a birefringence control layer in accordance with various embodiments of the invention. figs. 10a-10f conceptually illustrate the process steps and apparatus for fabricating a waveguide containing a birefringent grating with a birefringence control layer applied to an outer surface of the waveguide in accordance with various embodiments of the invention. figs. 11a-11f conceptually illustrate the process steps and apparatus for fabricating a waveguide containing a birefringent grating and a birefringence control layer in accordance with various embodiments of the invention. fig. 12 conceptually illustrates a flow chart showing a method of fabricating a waveguide containing a birefringent grating and a birefringence control layer in accordance with an embodiment of the invention. fig. 13 conceptually illustrates a flow chart showing a method of fabricating a waveguide containing a birefringent grating and a birefringence control layer applied to an outer surface of the waveguide in accordance with an embodiment of the invention. fig. 14 conceptually illustrates a flow chart showing a method of fabricating a waveguide containing a birefringent grating and a birefringence control layer where forming the birefringence control layer is carried out before the recording of the grating layer in accordance with an embodiment of the invention. fig. 15 conceptually illustrates a schematic side view of a waveguide with a birefringence control layer applied at the waveguide to air interface in accordance with an embodiment of the invention. fig. 16 conceptually illustrates a schematic side view of a waveguide with a birefringence control layer that isolates the waveguide from its environment applied to the waveguide to air interface in accordance with an embodiment of the invention. fig. 
17 conceptually illustrates a schematic side view of an apparatus for fabricating a structure containing a birefringent grating layer overlaying a birefringence control layer where the grating recording beams propagate through the birefringence control layer in accordance with an embodiment of the invention. fig. 18 conceptually illustrates a schematic side view of an apparatus for fabricating a structure containing a birefringence control layer overlaying a birefringent grating layer where the birefringence control layer is aligned by uv radiation propagating through the grating in accordance with an embodiment of the invention. fig. 19 conceptually illustrates a cross section of a waveguide containing substrates sandwiching a grating layer. fig. 20 conceptually illustrates a waveguide with a quarter wave polarization layer inserted in accordance with an embodiment of the invention. fig. 21 conceptually illustrates a schematic cross section view showing a portion of a waveguide illustrating the use of a quarter wave polarization layer with an rkv grating in accordance with an embodiment of the invention. fig. 22 conceptually illustrates a polarization layer architecture containing an lcp quarter wave cell and a reactive monomer liquid crystal mixture (rmlcm) cell separated by an index matching oil layer in accordance with an embodiment of the invention. fig. 23 conceptually illustrates an example of a polarization architecture based on a grating cell with the rmlcm grating material layer in direct contact with a bare lcp film in accordance with an embodiment of the invention. fig. 24 conceptually illustrates a cross section view schematically showing an example of polarization layer architecture in which a bare lcp layer is bonded to a bare rmlcm layer in accordance with an embodiment of the invention. fig. 
25 conceptually illustrates a cross section view schematically showing an example of a polarization layer architecture using a rmlcm layer as a polarization layer in accordance with an embodiment of the invention. fig. 26 conceptually illustrates an example of a polarization layer architecture that includes a feature for compensating for polarization rotation introduced by birefringent gratings in accordance with an embodiment of the invention. fig. 27 conceptually illustrates a plan view schematically showing a waveguide display incorporating the features of the embodiment of fig. 26 in accordance with an embodiment of the invention. figs. 28 and 29 conceptually illustrate cross section views schematically showing examples of polarization layer architectures containing an upper substrate, an lcp layer with hard encapsulation layer, a rmlcm layer, and a lower substrate in accordance with various embodiments of the invention. fig. 30 conceptually illustrates a plan view schematically showing a first example of a two-region polymer film in accordance with an embodiment of the invention. fig. 31 conceptually illustrates a plan view schematically showing a second example of a two-region polymer film in accordance with an embodiment of the invention. fig. 32 conceptually illustrates a plan view schematically showing a third example of a two-region polymer film in accordance with an embodiment of the invention. fig. 33 conceptually illustrates a drawing showing a clear aperture layout in accordance with an embodiment of the invention. fig. 34 conceptually illustrates a plan view schematically showing a waveguide containing input, fold, and output gratings including the k-vectors and alignment layer fast axis directions for each grating in accordance with an embodiment of the invention. 
detailed description of the invention for the purposes of describing embodiments, some well-known features of optical technology known to those skilled in the art of optical design and visual displays have been omitted or simplified in order to not obscure the basic principles of the invention. unless otherwise stated, the term “on-axis” in relation to a ray or a beam direction refers to propagation parallel to an axis normal to the surfaces of the optical components described in relation to the invention. in the following description the terms light, ray, beam, and direction may be used interchangeably and in association with each other to indicate the direction of propagation of electromagnetic radiation along rectilinear trajectories. the terms light and illumination may be used in relation to the visible and infrared bands of the electromagnetic spectrum. parts of the following description will be presented using terminology commonly employed by those skilled in the art of optical design. in the following description, the term grating may be used to refer to any kind of diffractive structure used in a waveguide, including holograms and bragg or volume holograms. the term grating may also encompass a grating that includes a set of gratings. for example, in some embodiments the input grating and output grating each include two or more gratings multiplexed into a single layer. for illustrative purposes, it is to be understood that the drawings are not drawn to scale unless stated otherwise. referring generally to the drawings, systems and methods relating to waveguide applications incorporating birefringence control in accordance with various embodiments of the invention are illustrated. birefringence is the optical property of a material having a refractive index that depends on the polarization and propagation direction of light. a birefringent grating can be referred to as a grating having such properties. 
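the direction-dependent refractive index just described can be illustrated with a short numerical sketch (python is used here purely for illustration; the function name and the lc-like index values are assumptions, not values from the disclosure). for a uniaxial medium, the extraordinary ray sees an effective index given by the index-ellipsoid relation.

```python
import math

def n_effective(n_o, n_e, theta_deg):
    """Effective index seen by the extraordinary ray at angle theta
    (degrees) to the optic axis of a uniaxial birefringent medium:
    n(theta) = n_o * n_e / sqrt(n_e^2 cos^2(theta) + n_o^2 sin^2(theta))."""
    t = math.radians(theta_deg)
    return (n_o * n_e) / math.sqrt(
        (n_e * math.cos(t)) ** 2 + (n_o * math.sin(t)) ** 2)

# Assumed lc-like ordinary and extraordinary indices (illustrative only).
n_o, n_e = 1.52, 1.72
n_along_axis = n_effective(n_o, n_e, 0.0)    # reduces to n_o
n_across_axis = n_effective(n_o, n_e, 90.0)  # reduces to n_e
```

the effective index sweeps continuously between n_o and n_e with propagation direction, which is the property that makes the polarization response of such gratings direction dependent.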
in many cases, the birefringent grating is formed in a liquid crystal polymer material system such as but not limited to hpdlc mixtures. the polarization properties of such a grating can depend on average relative permittivity and relative permittivity modulation tensors. many embodiments in accordance with the invention are directed towards waveguides implementing birefringence control. in some embodiments, the waveguide includes a birefringent grating layer and a birefringence control layer. in further embodiments, the birefringence control layer is compact and efficient. such structures can be utilized for various applications, including but not limited to: compensating for polarization related losses in holographic waveguides; providing three-dimensional lc director alignment in waveguides based on bragg gratings; and spatially varying angular/spectral bandwidth for homogenizing the output from a waveguide. in some embodiments, a polarization-maintaining, wide-angle, and high-reflection waveguide cladding with polarization compensation is implemented for grating birefringence. in several embodiments, a thin polarization control layer is implemented for providing either quarter wave or half wave retardation. in a number of embodiments, a polarization-maintaining, wide-angle birefringence control layer is implemented for modifying the polarization output of a waveguide to balance the birefringence of an external optical element used with the waveguide. in many embodiments, the waveguide includes at least one input grating and at least one output grating. in further embodiments, the waveguide can include additional gratings for various purposes, such as but not limited to fold gratings for beam expansion. the input grating and output grating may each include multiplexed gratings. 
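the quarter wave and half wave retardation mentioned above can be sketched with jones calculus; this is a minimal illustrative model, and the function name, angles, and lossless assumption are the author's here, not anything specified in the disclosure.

```python
import numpy as np

def retarder(delta, theta):
    """Jones matrix of a lossless linear retarder with retardation
    delta (radians) and fast axis at angle theta (radians) to x."""
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, -s], [s, c]])
    core = np.diag([np.exp(-1j * delta / 2), np.exp(1j * delta / 2)])
    return rot @ core @ rot.T

x_pol = np.array([1.0, 0.0])  # linearly polarized input along x

# Quarter-wave retardation with fast axis at 45 degrees turns linear
# polarization into circular polarization.
circ = retarder(np.pi / 2, np.pi / 4) @ x_pol

# Half-wave retardation with fast axis at 22.5 degrees rotates the
# linear polarization by 45 degrees (twice the fast-axis angle).
rotated = retarder(np.pi, np.pi / 8) @ x_pol
```

a quarter wave layer at 45 degrees yields equal-amplitude components in quadrature (circular light), while a half wave layer rotates linear polarization by twice its fast-axis angle, which is the basic mechanism a thin polarization control layer can use.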
in some embodiments, the input grating and output grating may each include two overlapping grating layers that are in contact or vertically separated by one or more thin optical substrates. in some embodiments, the grating layers are sandwiched between glass or plastic substrates. in some embodiments two or more such grating layers may form a stack within which total internal reflection occurs at the outer substrate and air interfaces. in some embodiments, the waveguide may include just one grating layer. in some embodiments, electrodes may be applied to faces of the substrates to switch gratings between diffracting and clear states. the stack may further include additional layers such as beam splitting coatings and environmental protection layers. the input and output gratings shown in the drawings may be provided by any of the above described grating configurations. advantageously, the input and output gratings can be designed to have a common surface grating pitch. in cases where the waveguide contains grating(s) in addition to the input and output gratings, the gratings can be designed to have grating pitches such that the vector sum of the grating vectors is substantially zero. the input grating can combine gratings orientated such that each grating diffracts a polarization of the incident unpolarized light into a waveguide path. the output gratings can be configured in a similar fashion such that the light from the waveguide paths is combined and coupled out of the waveguide as unpolarized light. each grating is characterized by at least one grating vector (or k-vector) in 3d space, which in the case of a bragg grating is defined as the vector normal to the bragg fringes. the grating vector can determine the optical efficiency for a given range of input and diffracted angles. in some embodiments, the waveguide includes at least one surface relief grating. 
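the condition that the grating vectors sum to substantially zero can be checked numerically, as sketched below; the pitches and orientations are hypothetical values chosen only to illustrate a closed input/fold/output geometry, not a design from the disclosure.

```python
import numpy as np

def grating_vector(pitch_um, angle_deg):
    """In-plane grating vector of magnitude 2*pi/pitch (pitch in
    micrometers) at the given orientation in the waveguide plane."""
    k = 2 * np.pi / pitch_um
    a = np.radians(angle_deg)
    return k * np.array([np.cos(a), np.sin(a)])

# Hypothetical dual-axis expansion layout: input grating at 0 degrees,
# fold grating at 225 degrees with pitch reduced by sqrt(2), output
# grating at 90 degrees.
k_input = grating_vector(0.45, 0.0)
k_fold = grating_vector(0.45 / np.sqrt(2), 225.0)
k_output = grating_vector(0.45, 90.0)

closure = k_input + k_fold + k_output  # should be ~zero for this layout
```

when the in-plane grating vectors close in this way, light that is coupled in, folded, and extracted leaves the waveguide parallel to its input direction, avoiding angular distortion across the pupil.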
waveguide grating structures, materials systems, and birefringence control are discussed below in further detail. switchable bragg gratings optical structures recorded in waveguides can include many different types of optical elements, such as but not limited to diffraction gratings. in many embodiments, the grating implemented is a bragg grating (also referred to as a volume grating). bragg gratings can have high efficiency with little light being diffracted into higher orders. the relative amount of light in the diffracted and zero order can be varied by controlling the refractive index modulation of the grating, a property that can be used to make lossy waveguide gratings for extracting light over a large pupil. one class of gratings used in holographic waveguide devices is the switchable bragg grating (“sbg”). sbgs can be fabricated by first placing a thin film of a mixture of photopolymerizable monomers and liquid crystal material between glass plates or substrates. in many cases, the glass plates are in a parallel configuration. one or both glass plates can support electrodes, typically transparent tin oxide films, for applying an electric field across the film. the grating structure in an sbg can be recorded in the liquid material (often referred to as the syrup) through photopolymerization-induced phase separation using interferential exposure with a spatially periodic intensity modulation. factors such as but not limited to control of the irradiation intensity, component volume fractions of the materials in the mixture, and exposure temperature can determine the resulting grating morphology and performance. as can readily be appreciated, a wide variety of materials and mixtures can be used depending on the specific requirements of a given application. in many embodiments, hpdlc material is used. during the recording process, the monomers polymerize and the mixture undergoes a phase separation. 
the lc molecules aggregate to form discrete or coalesced droplets that are periodically distributed in polymer networks on the scale of optical wavelengths. the alternating liquid crystal-rich and liquid crystal-depleted regions form the fringe planes of the grating, which can produce bragg diffraction with a strong optical polarization resulting from the orientation ordering of the lc molecules in the droplets. the resulting volume phase grating can exhibit very high diffraction efficiency, which can be controlled by the magnitude of the electric field applied across the film. when an electric field is applied to the grating via transparent electrodes, the natural orientation of the lc droplets can change, causing the refractive index modulation of the fringes to lower and the hologram diffraction efficiency to drop to very low levels. typically, the electrodes are configured such that the applied electric field will be perpendicular to the substrates. in a number of embodiments, the electrodes are fabricated from indium tin oxide (“ito”). in the off state with no electric field applied, the extraordinary axis of the liquid crystals generally aligns normal to the fringes. the grating thus exhibits high refractive index modulation and high diffraction efficiency for p-polarized light. when an electric field is applied to the hpdlc, the grating switches to the on state wherein the extraordinary axes of the liquid crystal molecules align parallel to the applied field and hence perpendicular to the substrate. in the on state, the grating exhibits lower refractive index modulation and lower diffraction efficiency for both s- and p-polarized light. thus, the grating region no longer diffracts light. each grating region can be divided into a multiplicity of grating elements such as for example a pixel matrix according to the function of the hpdlc device. 
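the dependence of diffraction efficiency on refractive index modulation described above follows, for a lossless transmission volume grating at the bragg condition, kogelnik's coupled-wave result; the thickness, wavelength, and modulation values below are illustrative assumptions only, not parameters from the disclosure.

```python
import math

def kogelnik_efficiency(delta_n, thickness_um, wavelength_um, cos_theta=1.0):
    """On-Bragg diffraction efficiency of a lossless transmission
    volume grating (Kogelnik coupled-wave theory):
    eta = sin^2(pi * delta_n * d / (lambda * cos(theta)))."""
    nu = math.pi * delta_n * thickness_um / (wavelength_um * cos_theta)
    return math.sin(nu) ** 2

# Illustrative numbers only: a 3 um thick grating at 0.532 um.
eta_off = kogelnik_efficiency(0.08, 3.0, 0.532)    # field off: high modulation
eta_on = kogelnik_efficiency(0.005, 3.0, 0.532)    # field on: modulation reduced
```

driving the index modulation down with the applied field pushes the efficiency toward zero, which is the switching behavior between the diffracting and clear states described above.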
typically, the electrode on one substrate surface is uniform and continuous, while electrodes on the opposing substrate surface are patterned in accordance with the multiplicity of selectively switchable grating elements. one of the known attributes of transmission sbgs is that the lc molecules tend to align with an average direction normal to the grating fringe planes (i.e., parallel to the grating vector or k-vector). the effect of the lc molecule alignment is that transmission sbgs efficiently diffract p polarized light (i.e., light with a polarization vector in the plane of incidence), but have nearly zero diffraction efficiency for s polarized light (i.e., light with the polarization vector normal to the plane of incidence). as a result, transmission sbgs typically cannot be used at near-grazing incidence as the diffraction efficiency of any grating for p polarization falls to zero when the included angle between the incident and reflected light is small. in addition, illumination light with non-matched polarization is not captured efficiently in holographic displays sensitive to one polarization only. hpdlc material systems hpdlc mixtures in accordance with various embodiments of the invention generally include lc, monomers, photoinitiator dyes, and coinitiators. the mixture (often referred to as syrup) frequently also includes a surfactant. for the purposes of describing the invention, a surfactant is defined as any chemical agent that lowers the surface tension of the total liquid mixture. the use of surfactants in hpdlc mixtures is known and dates back to the earliest investigations of hpdlcs. for example, a paper by r. l. sutherland et al., spie vol. 2689, 158-169, 1996, the disclosure of which is incorporated herein by reference, describes a pdlc mixture including a monomer, photoinitiator, coinitiator, chain extender, and lcs to which a surfactant can be added. 
surfactants are also mentioned in a paper by natarajan et al., journal of nonlinear optical physics and materials, vol. 5, no. 1, 89-98, 1996, the disclosure of which is incorporated herein by reference. furthermore, u.s. pat. no. 7,018,563 by sutherland et al. discusses polymer-dispersed liquid crystal material for forming a polymer-dispersed liquid crystal optical element including: at least one acrylic acid monomer; at least one type of liquid crystal material; a photoinitiator dye; a coinitiator; and a surfactant. the disclosure of u.s. pat. no. 7,018,563 is hereby incorporated by reference in its entirety. the patent and scientific literature contains many examples of material systems and processes that can be used to fabricate sbgs, including investigations into formulating such material systems for achieving high diffraction efficiency, fast response time, low drive voltage, and so forth. u.s. pat. no. 5,942,157 by sutherland and u.s. pat. no. 5,751,452 by tanaka et al. both describe monomer and liquid crystal material combinations suitable for fabricating sbg devices. examples of recipes can also be found in papers dating back to the early 1990s. many of these materials use acrylate monomers, including: r. l. sutherland et al., chem. mater. 5, 1533 (1993), the disclosure of which is incorporated herein by reference, which describes the use of acrylate polymers and surfactants. specifically, the recipe includes a crosslinking multifunctional acrylate monomer, a chain extender n-vinyl pyrrolidinone, lc e7, photo-initiator rose bengal, and coinitiator n-phenyl glycine. surfactant octanoic acid was added in certain variants. fontecchio et al., sid 00 digest 774-776, 2000, the disclosure of which is incorporated herein by reference, describes a uv curable hpdlc for reflective display applications including a multi-functional acrylate monomer, lc, a photoinitiator, a coinitiator, and a chain terminator. y. h.
cho, et al., polymer international, 48, 1085-1090, 1999, the disclosure of which is incorporated herein by reference, discloses hpdlc recipes including acrylates. karasawa et al., japanese journal of applied physics, vol. 36, 6388-6392, 1997, the disclosure of which is incorporated herein by reference, describes acrylates of various functional orders. t. j. bunning et al., polymer science: part b: polymer physics, vol. 35, 2825-2833, 1997, the disclosure of which is incorporated herein by reference, also describes multifunctional acrylate monomers. g. s. iannacchione et al., europhysics letters vol. 36 (6), 425-430, 1996, the disclosure of which is incorporated herein by reference, describes a pdlc mixture including a penta-acrylate monomer, lc, chain extender, coinitiators, and photoinitiator. acrylates offer the benefits of fast kinetics, good mixing with other materials, and compatibility with film forming processes. since acrylates are cross-linked, they tend to be mechanically robust and flexible. for example, urethane acrylates of functionality 2 (di) and 3 (tri) have been used extensively for hpdlc technology. higher functionality materials such as penta- and hexa-functional systems have also been used.

overview of birefringence

holographic waveguides based on hpdlc offer the benefits of switching capability and high index modulation, but can suffer from the inherent birefringence resulting from the alignment of liquid crystal directors along grating vectors during the lc-polymer phase separation. while this can lead to a large degree of polarization selectivity, which can be advantageous in many applications, adverse effects such as polarization rotation can occur in gratings designed to fold and expand the waveguided beam in the plane of the waveguide (known as fold gratings). this polarization rotation can lead to efficiency losses and output light nonuniformity.
two common approaches for modifying the alignment of lc directors include rubbing and the application of an alignment layer. typically, by such means, lc directors in a plane parallel to the alignment layer can be realigned within the plane. in hpdlc bragg gratings, the problem is more challenging owing to the natural alignment of lc directors along grating k-vectors, making director alignment in all but the simplest gratings a complex three-dimensional problem and rendering conventional techniques using rubbing or polyimide alignment layers impractical. other approaches can include applying electric fields, magnetic fields, and mechanical pressure during curing. these approaches have been shown to have limited success when applied to reflection gratings. however, such techniques typically do not easily translate to transmission bragg grating waveguides. a major design challenge in waveguides is the coupling of image content from an external projector into the waveguide efficiently and in such a way that the waveguide image is free from chromatic dispersion and brightness non-uniformity. to overcome chromatic dispersion and to achieve respectable collimation, lasers can be used. however, lasers can suffer from the problem of pupil banding artifacts, which manifest themselves as output illumination non-uniformity. banding artifacts can form when the collimated pupil is replicated (expanded) in a tir waveguide. in basic terms, the light beams diffracted out of the waveguide each time the beam interacts with the grating can have gaps or overlaps, leading to an illumination ripple. in many cases, the degree of ripple is a function of field angle, waveguide thickness, and aperture thickness. the effect of banding can be smoothed by the dispersion typically exhibited by broadband sources such as leds.
however, led illumination is not entirely free from the banding problem and, moreover, tends to result in bulky input optics and an increase in the thickness of the waveguide. banding can be minimized using a pupil shifting technique for configuring the light coupled into the waveguide such that the input grating has an effective input aperture that is a function of the tir angle. techniques for performing pupil-shifting are disclosed in international application no. pct/us2018/015553 entitled “waveguide device with uniform output illumination,” the disclosure of which is hereby incorporated by reference in its entirety. in some cases, the polarization rotation that takes place in fold gratings (described above) can compensate for illumination banding in waveguides that use laser illumination. the mechanism for this is that the large number of grating interactions in a fold grating combined with the small polarization rotation at each interaction can average out the banding (arising from imperfect matching of tir beams and other coherent optical effects such as but not limited to those arising from parasitic gratings left over from the recording process, stray light interactions with the grating and waveguide surfaces, etc.). the process of compensating for the birefringence can be aided by fine tuning the spatial variation of the birefringence (alignment of the lc directors) in the fold grating. a further issue that arises in waveguide displays is that contact with moisture or surface contamination can inhibit waveguide total internal reflection (tir), leading to image gaps. in such cases, the scope for using protective outer layers can be limited by the need for low index materials that will provide tir over the waveguide angular bandwidth. a further design challenge in waveguides is maintaining high efficiency over the angular bandwidth of the waveguide. one exemplary solution would be a polarization-maintaining, wide-angle, and high-reflection waveguide cladding.
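The geometry behind the banding artifacts discussed above can be sketched numerically: the stride of a TIR beam between successive grating interactions depends on waveguide thickness and TIR angle, so gaps or overlaps with the input aperture vary with field angle. The thickness and angles below are assumed values for illustration:

```python
import math

def bounce_spacing_mm(thickness_mm, tir_angle_deg):
    """Lateral distance between successive grating interactions of a TIR beam:
    s = 2 * t * tan(theta). When s differs from the input aperture width, the
    replicated pupils leave gaps or overlaps, producing illumination ripple."""
    return 2.0 * thickness_mm * math.tan(math.radians(tir_angle_deg))

# The mismatch is a function of field angle: steeper TIR angles stride further,
# which is why the ripple varies across the field of view.
s_50 = bounce_spacing_mm(1.0, 50.0)
s_70 = bounce_spacing_mm(1.0, 70.0)
```

This is why the pupil-shifting technique makes the effective input aperture a function of the TIR angle: matching the aperture to the stride at each angle removes the gaps and overlaps.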
in some applications, polarization balancing within a waveguide can be accomplished using either a quarter wave retarding layer or a half wave retarder layer applied to one or both of the principal reflecting surfaces of the waveguide. however, in some cases, practical retarder films can add unacceptable thickness to the waveguide. thin film coatings of the required prescription will normally entail an expensive and time-consuming vacuum coating step. one exemplary method of implementing a coating includes but is not limited to the use of an inkjet printing or industry-standard spin-coating procedure. in many embodiments, the coating could be applied directly to a printed grating layer. alternatively, the coating could be applied to an external optical surface of the assembled waveguide. in some applications, waveguides are combined with conventional optics for correcting aberrations. such aberrations may arise when waveguides are used in applications such as but not limited to a car hud, which projects an image onto a car windscreen for reflection into the viewer's eyebox. the curvatures of the windscreen can introduce significant geometric aberration. since many waveguides operate with collimated beams, it can be difficult to pre-compensate for the distortion within the waveguide itself. one solution includes mounting a pre-compensating optical element near the output surface of the waveguide. in many cases, the optical element is molded in plastic and can introduce severe birefringence, which should be balanced by the waveguide. in view of the above, many embodiments of the invention are directed towards birefringence control layers designed to address one or more of the issues posed above.
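The thickness argument for quarter and half wave layers above follows directly from the retardation relation d = f·λ/Δn. As a rough sketch (the birefringence value Δn = 0.15 is an assumed, LCP-like figure, not from the disclosure):

```python
def retarder_thickness_um(wavelength_um, delta_n, waves=0.25):
    """Physical thickness giving the requested retardation
    (0.25 = quarter wave, 0.5 = half wave): d = waves * wavelength / delta_n."""
    return waves * wavelength_um / delta_n

# An assumed birefringence of 0.15 at 532 nm gives sub-micron layers, which
# is why a printed or spin-coated layer can replace a laminated retarder film.
quarter = retarder_thickness_um(0.532, 0.15, 0.25)
half = retarder_thickness_um(0.532, 0.15, 0.5)
```

A higher-birefringence material reduces the required thickness proportionally, which favors thin coated layers over laminated films for waveguide integration.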
for example, in many embodiments, a compact and efficient birefringence control layer is implemented for compensating for polarization related losses in holographic waveguides, for providing three-dimensional lc director alignment in waveguides based on bragg gratings, for spatially varying angular/spectral bandwidth for homogenizing the output from a waveguide, and/or for isolating a waveguide from its environment while ensuring confinement of wave-guided beams. in some embodiments, a polarization-maintaining, wide-angle, and high-reflection waveguide cladding with polarization compensation is implemented for grating birefringence. in several embodiments, a thin polarization control layer is implemented for providing either quarter wave or half wave retardation. a polarization control layer can be implemented as a thin layer directly on top of the grating layer or applied to one or both of the waveguide substrates using a standard spin coating or inkjet printing process. in a number of embodiments, a polarization-maintaining, wide-angle birefringence control layer is implemented for modifying the polarization output of a waveguide to balance the birefringence of an external optical element used with the waveguide. other implementations and specific configurations are discussed below in further detail.

waveguide applications incorporating birefringence control

waveguides and waveguide displays implementing birefringence control techniques in accordance with various embodiments of the invention can be achieved using many different techniques. in some embodiments, the waveguide includes a birefringent grating layer and a birefringence control layer. in further embodiments, a compact and efficient birefringence control layer is implemented.
a birefringence control layer can be implemented for various functions such as but not limited to: compensating for polarization related losses in holographic waveguides; providing three-dimensional lc director alignment in waveguides based on bragg gratings; and efficient and cost-effective integration within a waveguide for spatially varying angular/spectral bandwidth for homogenizing the output from the waveguide. in any of the embodiments to be described, the birefringence control layer may be formed on any optical surface of the waveguide. for the purposes of understanding the invention, an optical surface of the waveguide may be one of the tir surfaces, a surface of the grating layer, a surface of the waveguide substrates sandwiching the grating layer, or a surface of any other optical substrate implemented within the waveguide (for example, a beam-splitter layer for improving uniformity). fig. 1 conceptually illustrates a waveguide implementing a birefringence control layer in accordance with an embodiment of the invention. in the illustrative embodiment, the waveguide apparatus 100 includes an optical substrate 101 containing a birefringent grating layer 102 and a birefringence control layer 103 . as shown, light 104 propagating under tir within the waveguide interacts with both layers. for example, the light ray 104 a with an initial polarization state represented by the symbol 104 b has its polarization rotated to the state 104 c after propagation through the grating region around the point 102 a. the birefringence control layer 103 rotates the polarization vector into the state 104 d, which is the polarization state for achieving some predefined diffraction efficiency of the ray 104 e when it interacts with the grating around the point 102 b and is diffracted into the direction 104 f with a polarization state 104 g, which is similar to the state 104 d. 
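The compensation illustrated in fig. 1 — a small polarization rotation acquired in the grating region, undone by the birefringence control layer so the ray meets the next grating interaction in the intended state — can be sketched with Jones calculus. The retardation and axis values below are purely illustrative assumptions:

```python
import cmath
import math

def retarder(phase_rad, axis_deg):
    """2x2 Jones matrix of a linear retarder: retardation phase_rad about a
    fast axis oriented at axis_deg (R(theta) @ diag(e, 1/e) @ R(-theta))."""
    t = math.radians(axis_deg)
    c, s = math.cos(t), math.sin(t)
    e = cmath.exp(1j * phase_rad / 2)
    return [[c * c * e + s * s / e, c * s * (e - 1 / e)],
            [c * s * (e - 1 / e), s * s * e + c * c / e]]

def apply(m, v):
    """Apply a 2x2 Jones matrix to a Jones vector."""
    return [m[0][0] * v[0] + m[0][1] * v[1],
            m[1][0] * v[0] + m[1][1] * v[1]]

# The grating region acts like a weak retarder that rotates the polarization;
# a control layer with equal and opposite retardation restores the original
# state (state 104d matching 104b in the figure's terms).
p_in = [1.0, 0.0]  # p polarization
after_grating = apply(retarder(0.4, 30.0), p_in)
restored = apply(retarder(-0.4, 30.0), after_grating)
```

In practice the grating birefringence varies with angle and position, so the control layer's retardation would be spatially varied rather than a single fixed value as in this sketch.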
as will be shown in the following description, many different configurations of a birefringence control layer and birefringent grating can be implemented in accordance with various embodiments of the invention. fig. 2 conceptually illustrates a waveguide apparatus 200 that includes at least one optical substrate 201 and a coupler 202 for deflecting light 203 a, 203 b (covering a range of incident angles) from an external source 204 into tir paths 205 a, 205 b in the waveguide substrate. light in the tir path can interact with the output grating, which can be configured to extract a portion of the light each time the tir light satisfies the condition for diffraction by the grating. in the case of a bragg grating, extraction can occur when the bragg condition is met. more precisely, efficient extraction can occur when a ray incident on the grating lies within an angular bandwidth and spectral bandwidth around the bragg condition, with the bandwidths defined according to some measure of acceptable diffraction efficiency (such as but not limited to 50% of peak de). for example, light in the tir ray paths 205 a, 205 b is diffracted by the output grating into output directions 206 a, 206 b, 207 a, and 207 b at different points along the output grating. it should be apparent from basic geometrical optics that a unique tir angle can be defined by each light incidence angle at the input grating. many different types of optical elements can be used as the coupler. for example, in some embodiments, the coupler is a grating. in several embodiments, the coupler is a birefringent grating. in many embodiments, the coupler is a prism. the apparatus further includes at least one birefringent grating 208 for providing beam expansion in a first direction and light extraction from the waveguide and at least one birefringence control layer 209 with anisotropic refractive index properties.
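The statement that each incidence angle at the input grating defines a unique TIR angle follows from the first-order grating equation. A minimal sketch, with an assumed grating period, wavelength, and substrate index (not taken from the disclosure):

```python
import math

def internal_angle_deg(incidence_deg, wavelength_um, period_um, n_sub):
    """First-order grating equation at the input coupler:
    n_sub * sin(theta_internal) = sin(theta_incident) + wavelength / period."""
    s = (math.sin(math.radians(incidence_deg)) + wavelength_um / period_um) / n_sub
    return math.degrees(math.asin(s))

def is_guided(internal_deg, n_sub):
    """TIR holds when the internal angle exceeds the critical angle."""
    return internal_deg > math.degrees(math.asin(1.0 / n_sub))

# Assumed values: 532 nm light, 450 nm grating period, n = 1.52 substrate.
a0 = internal_angle_deg(0.0, 0.532, 0.45, 1.52)
a5 = internal_angle_deg(5.0, 0.532, 0.45, 1.52)
```

Because the mapping from incidence angle to internal angle is monotonic over the field of view, each display pixel direction propagates at its own TIR angle, which is what allows angle-dependent layers to act selectively on different field angles.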
in the embodiments to be discussed, the source 204 can be an input image generator that includes a light source, a microdisplay panel, and optics for collimating the light. as can readily be appreciated, various input image generators can be used, including those that output non-collimated light. in many embodiments, the input image generator projects the image displayed on the microdisplay panel such that each display pixel is converted into a unique angular direction within the substrate waveguide. the collimation optics may include lenses and mirrors, which can be diffractive lenses and mirrors. in some embodiments, the source may be configured to provide illumination that is not modulated with image information. in several embodiments, the light source can be a laser or led and can include one or more lenses for modifying the illumination beam angular characteristics. in a number of embodiments, the image source can be a micro-display or an image scanner. the interaction of the light with the birefringence control layer 209 and the birefringent grating 208 integrated along the total internal reflection path for any direction of the light can provide a predefined characteristic of the light extracted from the waveguide. in some embodiments, the predefined characteristic includes at least one of a uniform polarization or a uniform illumination over the angular range of the light. fig. 2 also illustrates how the birefringence control layer 209 and grating 208 provide uniform polarization. in many embodiments, the input state will correspond to p polarization, a state which may be used for gratings recorded in hpdlc. for the purposes of explaining the invention, an initial polarization state represented by 210 is assumed.
the interaction of the light with the birefringence control layer near a grating interaction region along the tir path 205 a is represented by the polarization states 211 , 212 , which show the rotation of the polarization vector before and after propagation through the thickness ab of the birefringence control layer 209 . this polarization rotation can be designed to balance the polarization rotation through the thickness cd of the adjacent grating region the ray encounters along the tir path 205 a. thus, the polarization of the light extracted by the grating can be aligned parallel to the input polarization vector as indicated by the polarization state 213 . in some embodiments, the output polarization state may differ from the input polarization state. in a number of embodiments, such as the one shown in fig. 2 , there is at least partial overlap of the birefringent grating and the birefringence control layer. in several embodiments, the two are separated by a portion of the waveguide path. fig. 3 conceptually illustrates a waveguide apparatus 300 in which the birefringence control layer and grating provide uniform output illumination in accordance with an embodiment of the invention. in the illustrative embodiment, the waveguide apparatus 300 includes at least one optical substrate 301 and a coupler 302 for deflecting light 303 from an external source 304 into tir path 305 in the waveguide substrate. the apparatus 300 further includes at least one birefringent grating 306 for providing beam expansion in a first direction and light extraction from the waveguide and at least one birefringence control layer 307 with anisotropic index properties. as shown, light in the tir ray paths 305 can be diffracted by the output grating into output direction 308 , 309 . for the purposes of explaining the invention, an initial beam illumination (i) versus angle (u) profile represented by 310 is assumed. 
the interaction of the light with the birefringence control layer 307 near a grating interaction region along the tir path 305 is characterized by the illumination profiles before ( 311 ) and after ( 312 ) propagation through the thickness ab of the birefringence control layer. in some applications, such as but not limited to display applications, the waveguide apparatus 300 can be designed to have uniform illumination versus angle across the exit pupil of the waveguide. this may be achieved by matching the birefringence versus angle characteristics of the birefringence control layer to the angular bandwidth of the grating (along nearby grating paths cd in proximity to the path ab) such that the light extracted by the grating (indicated by 308 , 309 ) integrated across the waveguide exit pupil provides a uniform illumination versus angle distribution 313 . in some embodiments, the characteristics of the grating and birefringence control layer vary over the aperture of the waveguide.

implementing birefringence control layers

various materials and fabrication processes can be used to provide a birefringence control layer. in many embodiments, the birefringence control layer has anisotropic index properties that can be controlled during fabrication to provide a spatial distribution of birefringence such that the interaction of the light with the birefringence control layer and the birefringent grating integrated along the total internal reflection path for any direction of the light provides a predefined characteristic of the light extracted from the waveguide. in some embodiments, the layer may be implemented as a thin stack that includes more than one layer. alignment of hpdlc gratings can present significant challenges depending on the grating configuration. in the simplest case of a plane grating, polarization control can be confined to a single plane orthogonal to the grating plane. rolled k-vector gratings can require the alignment to vary across the grating plane.
fold gratings, particularly ones with slanted bragg fringes, can have much more complicated birefringence, requiring 3d alignment and, in some cases, more highly spatially resolved alignment. the following examples of birefringence control layers for use with the invention are illustrative only. in each case, it is assumed that the layer is processed such that the properties vary across the surface of the layer. it is also assumed that the birefringence control layer is configured within the waveguide or on an optical surface of the waveguide containing the grating. in some embodiments, the birefringence control layer is in contact with the grating layer. in several cases, the birefringence control layer splits into separate sections that are disposed on different surfaces of the waveguide. in a number of embodiments, a birefringence layer may include multiple layers. in some embodiments, the invention provides a thin polarization control layer that can provide either quarter wave or half wave retardation. the polarization control layer can be implemented as a thin layer directly on top of the grating layer or applied to one or both of the waveguide substrates using a standard spin coating or inkjet printing process. in one group of embodiments, the birefringence control layer is formed using liquid crystal and polymer network materials that can be aligned in 3d using directional uv light. in some embodiments, the birefringence control layer is formed at least in part from a liquid crystal polymer (lcp) network. lcps, which have also been referred to in the literature as reactive mesogens, are polymerizable liquid crystals containing liquid crystalline monomers that include, for example, reactive acrylate end groups, which polymerize with one another in the presence of photo-initiators and directional uv light to form a rigid network. the mutual polymerization of the ends of the liquid crystal molecules can freeze their orientation into a three-dimensional pattern.
the process typically includes coating a material system containing liquid crystal polymer onto a substrate and selectively aligning the lc directors using a directionally/spatially controllable uv source prior to annealing. in some embodiments, the birefringence control layer is formed at least in part from a photo-alignment layer, also referred to in the literature as a linearly polymerized photopolymer (lpp). an lpp can be configured to align lc directors parallel or perpendicular to incident linearly polarized uv light. lpp can be formed in very thin layers (typically 50 nm), minimizing the risks of scatter or other spurious optical effects. in some embodiments, the birefringence control layer is formed from lcp, lpp, and at least one dopant. birefringence control layers based on lcps and lpps can be used to align lc directors in the complex three-dimensional geometries characteristic of fold gratings and rolled k-vector gratings formed in thin films (2-4 microns). in some embodiments, a birefringence control layer based on lcps or lpps further includes dichroic dyes, chiral dopants to achieve narrow or broadband cholesteric filters, twisted retarders, or negative c-plate retarders. in many embodiments, birefringence control layers based on lcps or lpps provide quarter or half-wave retardation layers. in some embodiments, the birefringence control layer is formed by a multilayer structure combining isotropic and anisotropic index layers (as shown in fig. 4 ). in fig. 4 , the multilayer structure 400 includes isotropic layers 401 , 402 and anisotropic index layers 403 , 404 . in some embodiments, a multilayer stack may include a high number of layers, such as but not limited to several tens or several hundreds of layers. fig. 5 conceptually illustrates a multilayer structure 500 that includes isotropic layers 501 , 502 and anisotropic index layers 503 , 504 combined with a birefringent grating layer 505 .
when birefringence is on the order of the change of the in-plane refractive index between adjacent material layers of the stack, it is possible to achieve improved control of the reflectivity of p-polarized light. normally, in isotropic materials, brewster's law dictates that for any interface, there is an angle of incidence (brewster's angle) for which the p-polarization reflectivity vanishes. however, the reflectivity can increase dramatically at other angles. the limitations imposed by the brewster angle can be overcome by applying the basic principles discussed in weber et al., “giant birefringent optics in multilayer polymer mirrors,” published in science, vol. 287, 31 mar. 2000, pages 2451-2456. because the optical characteristics of systems of isotropic/anisotropic index layers are based on the fundamental physics of interfacial reflection and phase thickness and not on a particular multilayer interference stack design, new design freedoms are possible. designs for wide-angle, broadband applications are simplified if the brewster angle restriction is eliminated, particularly for birefringence control layers immersed in a high-index medium such as a waveguide substrate. a further advantage in relation to waveguide displays is that color fidelity can be maintained for all incidence angles and polarizations. a birefringent grating will typically have polarization rotation properties that are functions of angle and wavelength. the birefringence control layer can be used to modify the angular, spectral, or polarization characteristics of the waveguide. in some embodiments, the interaction of light with the birefringence control layer can provide an effective angular bandwidth variation along the waveguide. in many embodiments, the interaction of light with the birefringence control layer can provide an effective spectral bandwidth variation along the waveguide.
in several embodiments, the interaction of light with the birefringence control layer can provide a polarization rotation along the waveguide. in a number of embodiments, the grating birefringence can be made to vary across the waveguide by spatially varying the composition of the liquid crystal polymer mixture during grating fabrication. in some embodiments, the birefringence control layer can provide a birefringence variation in at least one direction in the plane of the waveguide substrate. the birefringence control layer can also provide a means for optimizing optical transmission (for different polarizations) within the waveguide. in many embodiments, the birefringence control layer can provide a transmission variation in at least one direction in the plane of the waveguide substrate. in several embodiments, the birefringence control layer can provide an angular dependence of at least one of beam transmission or polarization rotation in at least one direction in the plane of the waveguide substrate. in a number of embodiments, the birefringence control layer can provide a spectral dependence of at least one of beam transmission or polarization rotation in at least one direction in the plane of the waveguide substrate. in many embodiments, birefringent gratings may provide input couplers, fold gratings, and output gratings in a wide range of waveguide architectures. fig. 6 conceptually illustrates a plan view of a dual expansion waveguide with birefringent control layers in accordance with an embodiment of the invention. in the illustrative embodiment, the waveguide 600 includes an optical substrate 601 that contains an input grating 602 , a fold grating 603 , and an output grating 604 that are overlaid by polarization control layers 605 , 606 , 607 , respectively. 
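Returning briefly to the brewster-angle behavior noted in the discussion of isotropic/anisotropic multilayer stacks above: for an isotropic interface, the Fresnel p-polarization reflection coefficient vanishes at atan(n2/n1), which is the restriction that birefringent layer pairs can lift. A minimal sketch with assumed indices:

```python
import math

def r_p(n1, n2, incidence_deg):
    """Fresnel amplitude reflection coefficient for p polarization at an
    isotropic interface; Snell's law gives the transmitted angle."""
    ti = math.radians(incidence_deg)
    tt = math.asin(n1 * math.sin(ti) / n2)
    return ((n2 * math.cos(ti) - n1 * math.cos(tt)) /
            (n2 * math.cos(ti) + n1 * math.cos(tt)))

# For isotropic layers the p reflectivity vanishes at brewster's angle,
# atan(n2 / n1), so an isotropic stack cannot reflect p light there.
brewster_deg = math.degrees(math.atan(1.7 / 1.5))
r_at_brewster = r_p(1.5, 1.7, brewster_deg)
r_elsewhere = r_p(1.5, 1.7, 75.0)
```

When the layer birefringence is comparable to the in-plane index step, the effective index seen by p light can be tuned so that no such zero occurs, which is the design freedom exploited in the giant-birefringent-optics approach cited above.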
in some embodiments, the invention provides a polarization-maintaining, wide angle birefringence control layer for modifying the polarization output of a waveguide to balance the birefringence of an external optical element used with the waveguide. fig. 7 conceptually illustrates an embodiment of the invention directed at automobile huds, which reflect collimated imagery off the windscreen into an eyebox. any windscreen curvature will typically result in aberrations and other geometrical distortion, which cannot be corrected in certain waveguide implementations with the requirement for the beam to remain substantially collimated. one solution to this problem is to mount a correction element, which may be a conventional refractive element or a diffractive element, near the output surface of the waveguide. in such implementations, the birefringence correction component can avoid disturbing ray paths from the waveguide and can be achromatic. the compensator technology used can provide spatially-varying configuration, low haze, and high transmission. in the illustrative embodiment of fig. 7 , the waveguide 700 includes an optical substrate 701 containing a grating coupler 702 for deflecting light 703 from an external source of image modulated light (not shown) into the tir path 704 in the waveguide, a birefringent grating 705 for providing beam expansion in a first direction and extracting light from the waveguide, and a birefringence control layer 706 . the apparatus 700 further includes an optical element 707 disposed in proximity to the waveguide for correcting geometrical distortions and other aberrations introduced by reflection at the windscreen. in some embodiments, the optical element 707 is a refractive lens. in other embodiments, the optical element 707 can be a diffractive lens. for wide field of view huds providing a generous eye box, the corrector will typically have a large footprint with a horizontal dimension (along the dashboard) as large as 400 mm. 
however, if the corrector is molded in plastic, it will tend to suffer from birefringence. hence, in the embodiment of fig. 7 , the birefringence control element 706 can be designed to compensate for both the grating polarization and the polarization rotation introduced by the optical element 707 . referring again to fig. 7 , an initial polarization state corresponding to p polarization is assumed. the polarization state after propagation through the birefringence grating, birefringence control layer, and the correction elements is represented by the symbols 708 - 711 . the interaction of the light with the birefringence control layer near a grating interaction region along the tir path is represented by the polarization states. in the embodiment of fig. 7 , the polarization of the light 712 , 713 extracted by the grating is aligned parallel to the input polarization vector. in some embodiments, the birefringence control layer 706 may be configured to rotate the output light polarization vector through ninety degrees. in some embodiments, the birefringence control layer can be provided by various techniques using mechanical, thermal, or electro-magnetic processing of substrates. for example, in some embodiments, the birefringence control layer is formed by applying spatially varying mechanical stress across the surface of an optical substrate. fig. 8 conceptually illustrates an apparatus 800 for aligning a birefringence control layer 801 in which forces are applied in the directions indicated by 802 - 805 , resulting in the iso-birefringence contours 806 . in many embodiments, the forces illustrated do not necessarily all need to be applied to the layer. in some embodiments, the birefringence control layer 801 is formed by inducing thermal gradients into an optical substrate. in a number of embodiments, the birefringence control layer 801 is provided by an hpdlc grating in which lc directors are aligned using electric or magnetic fields during curing.
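The stress-based approach of fig. 8 relies on the stress-optic (photoelastic) law: the induced birefringence is proportional to the applied stress, so spatially varying the stress produces the iso-birefringence contours shown. A rough sketch, where the stress-optic coefficient is an assumed, polymer-like value (not from the disclosure):

```python
def stress_birefringence(stress_mpa, c_per_mpa=4e-6):
    """Stress-optic law: induced birefringence dn = C * sigma. The
    coefficient here is an assumed, polymer-like value for illustration."""
    return c_per_mpa * stress_mpa

def retardation_nm(stress_mpa, thickness_mm, c_per_mpa=4e-6):
    """Optical path difference accumulated through the stressed substrate
    (thickness converted from mm to nm)."""
    return stress_birefringence(stress_mpa, c_per_mpa) * thickness_mm * 1e6

# A modest 10 MPa stress over a 0.5 mm substrate yields tens of nm of
# retardation, enough to shape useful iso-birefringence contours.
opd = retardation_nm(10.0, 0.5)
```

Because the induced retardation scales linearly with both stress and path length, the force pattern in fig. 8 effectively writes a 2d retardation map into the substrate.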
in several embodiments, two or more of the above techniques may be combined.

fabrication of waveguides implementing birefringence control layers

the present invention also provides methods and apparatus for fabricating a waveguide containing a birefringent grating and a birefringence control layer. the construction and arrangement of the apparatus and methods as shown in the various exemplary embodiments are illustrative only. although only a few embodiments have been described in detail in this disclosure, many modifications are possible (for example, additional steps for improving the efficiency of the process and quality of the finished waveguide, minimizing process variances, monitoring the process, and others). any process step referring to the formation of a layer should be understood to cover multiple such layers. for example, where a process step of recording a grating layer is described, this step can extend to recording a stack containing two or more grating layers. accordingly, all such modifications are intended to be included within the scope of the present disclosure. the order or sequence of any process or method steps may be varied or re-sequenced according to alternative embodiments. other substitutions, modifications, changes, and omissions may be made in the design of the process apparatus, operating conditions and arrangement of the exemplary embodiments without departing from the scope of the present disclosure. for the purposes of explaining the invention, the description of the processes will refer to birefringence control layers based on liquid crystal polymer material systems as described above. however, it should be clear from the description that the processes may be based on any of the implementations of a birefringence control layer described herein. figs. 
9a-9f conceptually illustrate the process steps and apparatus for fabricating a waveguide containing a birefringent grating and a birefringence control layer in accordance with various embodiments of the invention. fig. 9a shows the first step 900 of providing a first transparent substrate 901 . fig. 9b illustrates an apparatus 910 for applying holographic recording material to the substrate 901 . in the illustrative embodiment, the apparatus 910 includes a coating apparatus 911 that provides a spray pattern 912 that forms a layer 913 of grating recording material onto the substrate 901 . in some embodiments, the spray pattern may include a narrow jet or blade swept or stepped across the surface to be coated. in several embodiments, the spray pattern may include a divergent jet for covering large areas of a surface simultaneously. in a number of embodiments, the coating apparatus may be used in conjunction with one or more masks for providing selective coating of regions of the surface. in many embodiments, the coating apparatus is based on industry-standard spin-coating or ink-jet printing processes. fig. 9c conceptually illustrates an apparatus 920 for exposing a layer of grating recording material to form a grating layer in accordance with an embodiment of the invention. in the illustrative embodiment, the apparatus 920 contains a master grating 921 for contact copying the grating in the recording material and a laser 922 . as shown, the master 921 diffracts incident light 923 to provide zero order 924 and diffracted light 925 , which interfere within the grating material layer to form a grating layer 926 . the apparatus may have further features, such as but not limited to light stops and masks for overcoming stray light from higher diffraction orders or other sources. in some embodiments, several gratings may be recorded into a single layer using the principles of multiplexed holograms. fig. 
9d conceptually illustrates an apparatus 930 for coating a layer of liquid crystal polymer material onto the grating layer in accordance with an embodiment of the invention. in the illustrative embodiment, the apparatus 930 contains a coating apparatus 931 configured to deliver a spray pattern 932 forming a layer of material 933 . the coating apparatus 931 may have similar features to the coating apparatus used to apply the grating recording material. fig. 9e conceptually illustrates an apparatus 940 for providing an aligned liquid crystal polymer layer of material in accordance with an embodiment of the invention. in the illustrative embodiment, the apparatus 940 contains a uv source (which can include collimation, beam steering, and beam shaping optics, depending on the specific requirements of a given application) 941 providing directional uv light 942 for forming an aligned lc polymer layer 943 . fig. 9f conceptually illustrates the completed waveguide 950 after the step of applying a second substrate 951 over the aligned liquid crystal polymer layer 943 . in some embodiments, exposure of the grating recording material may use conventional cross beam recording procedures instead of the mastering process described above. in many embodiments, further processing of the grating layer may include annealing, thermal processing, and/or other processes for stabilizing the optical properties of the grating layer. in some embodiments, electrode coatings may be applied to the substrates. in many embodiments, a protective transparent layer may be applied over the grating layer after exposure. in a number of embodiments, the liquid crystal polymer material is based on the lcp, lpp material systems discussed above. in several embodiments, the alignment of the liquid crystal polymer can result in an alignment of the liquid crystal directors parallel to the uv beam direction. in other embodiments, the alignment is at ninety degrees to the uv beam direction. 
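in the contact-copying exposure described above, the grating forms where the zero order and the diffracted beam interfere. the fringe period follows from the standard two-beam interference relation; the wavelength, crossing angle, and index below are illustrative assumptions, not values from the disclosure.

```python
import math

def fringe_period(wavelength_nm, half_angle_deg, n=1.0):
    """period of the fringe pattern formed by two plane waves crossing at
    +/- half_angle in a medium of index n:
    period = wavelength / (2 * n * sin(half_angle))."""
    return wavelength_nm / (2.0 * n * math.sin(math.radians(half_angle_deg)))

# assumed recording geometry: 532 nm laser, beams crossing at +/-9.6 deg
# in air, which yields a period on the order of 1.6 um
period = fringe_period(532.0, 9.6)
print(f"{period / 1000.0:.2f} um")
```

the same relation, with the half angle increased toward grazing incidence, gives the subwavelength periods mentioned later in the text for form-birefringent gratings.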
in some embodiments, the second transparent substrate may be replaced by a protective layer applied using a coating apparatus. figs. 10a-10f conceptually illustrate the process steps and apparatus for fabricating a waveguide containing a birefringent grating with a birefringence control layer applied to an outer surface of the waveguide in accordance with various embodiments of the invention. fig. 10a conceptually illustrates the first step 1000 of providing a first transparent substrate 1001 in accordance with an embodiment of the invention. fig. 10b conceptually illustrates an apparatus 1010 for applying holographic recording material to the substrate in accordance with an embodiment of the invention. in the illustrative embodiment, the apparatus 1010 includes a coating apparatus 1011 providing a spray pattern 1012 that forms the layer 1013 of grating recording material onto the substrate 1001 . fig. 10c conceptually illustrates an apparatus 1020 for exposing a layer of grating recording material to form a grating layer in accordance with an embodiment of the invention. in the illustrative embodiment, the apparatus 1020 includes a master grating 1021 for contact copying the grating in the recording material and a laser 1022 . as shown, the master 1021 converts light 1023 from the laser 1022 into zero order 1024 and diffracted light 1025 , which interfere within the grating material layer 1013 to form a grating layer 1026 . fig. 10d conceptually illustrates the partially completed waveguide 1030 after the step of applying a second substrate 1031 over the exposed grating layer in accordance with an embodiment of the invention. fig. 10e conceptually illustrates an apparatus 1040 for coating a layer of liquid crystal polymer material onto the second substrate in accordance with an embodiment of the invention. in the illustrative embodiment, the apparatus 1040 includes a spray coater 1041 for delivering a spray pattern 1042 to form a layer of material 1043 . fig. 
10f conceptually illustrates an apparatus 1050 for aligning the liquid crystal polymer material in accordance with an embodiment of the invention. in the illustrative embodiment, the apparatus 1050 includes a uv source 1051 providing the directional uv light 1052 for forming an aligned liquid crystal polymer layer 1053 , which can be configured to realign the lc directors of the grating layer 1026 . figs. 11a-11f conceptually illustrate the process steps and apparatus for fabricating a waveguide containing a birefringent grating and a birefringence control layer in accordance with various embodiments of the invention. unlike the above described embodiments, the step of forming the birefringence control layer can be carried out before the recording of the grating layer, which is formed above the birefringence control layer. fig. 11a conceptually illustrates the first step 1100 of providing a first transparent substrate 1101 . fig. 11b conceptually illustrates an apparatus 1110 for coating a layer of liquid crystal polymer material onto the first substrate in accordance with an embodiment of the invention. in the illustrative embodiment, the apparatus 1110 includes a coating apparatus 1111 configured to deliver a spray pattern 1112 to form a layer of material 1113 . fig. 11c conceptually illustrates an apparatus 1120 for aligning the liquid crystal polymer material in accordance with an embodiment of the invention. in the illustrative embodiment, the apparatus 1120 includes a uv source 1121 providing the directional uv light 1122 for forming an aligned liquid crystal polymer layer 1123 . fig. 11d conceptually illustrates an apparatus 1130 for applying holographic recording material to the substrate in accordance with an embodiment of the invention. in the illustrative embodiment, the apparatus 1130 includes a coating apparatus 1131 for providing a spray pattern 1132 to form a layer of grating recording material 1133 on top of the liquid crystal polymer layer 1123 . 
fig. 11e conceptually illustrates an apparatus 1140 for exposing a layer of grating recording material to form a grating layer in accordance with an embodiment of the invention. in the illustrative embodiment, the apparatus 1140 includes a master grating 1141 for contact copying the grating in the recording material and a laser 1142 . as shown, the master 1141 converts light 1142 from the laser into zero order 1143 and diffracted light 1144 , which interfere in the grating material layer 1133 to form a grating layer 1145 , which is aligned by the liquid crystal polymer material layer 1123 . fig. 11f conceptually illustrates the completed waveguide 1150 after the step of applying a second substrate 1151 over the exposed grating layer in accordance with an embodiment of the invention. fig. 12 conceptually illustrates a flow chart illustrating a method of fabricating a waveguide containing a birefringent grating and a birefringence control layer in accordance with an embodiment of the invention. referring to fig. 12 , the method 1200 includes providing ( 1201 ) a first transparent substrate. a layer of grating recording material can be deposited ( 1202 ) onto the substrate. the layer of grating recording material can be exposed ( 1203 ) to form a grating layer. a layer of liquid crystal polymer material can be deposited ( 1204 ) onto the grating layer. the liquid crystal polymer material can be aligned ( 1205 ) using directional uv light. a second transparent substrate can be applied ( 1206 ) over the alignment layer. fig. 13 conceptually illustrates a flow chart illustrating a method of fabricating a waveguide containing a birefringent grating and a birefringence control layer applied to an outer surface of the waveguide in accordance with an embodiment of the invention. referring to fig. 13 , the method 1300 includes providing ( 1301 ) a first transparent substrate. a layer of grating recording material can be deposited ( 1302 ) onto the substrate. 
the layer of grating recording material can be exposed ( 1303 ) to form a grating layer. a second transparent substrate can be applied ( 1304 ) over the exposed grating layer. a layer of liquid crystal polymer material can be deposited ( 1305 ) onto the second transparent substrate. the liquid crystal polymer material can be aligned ( 1306 ) using directional uv light. fig. 14 conceptually illustrates a flow chart illustrating a method of fabricating a waveguide containing a birefringent grating and a birefringence control layer where forming the birefringence control layer is carried out before the recording of the grating layer in accordance with an embodiment of the invention. referring to fig. 14 , the method 1400 includes providing ( 1401 ) a first transparent substrate. a layer of liquid crystal polymer material can be deposited ( 1402 ) onto the substrate. the liquid crystal polymer material can be aligned ( 1403 ) using directional uv light. a layer of grating recording material can be deposited ( 1404 ) onto the aligned liquid crystal polymer material. the layer of grating recording material can be exposed ( 1405 ) to form a grating layer. a second transparent substrate can be applied ( 1406 ) over the grating layer. although figs. 12-14 illustrate specific processes for fabricating waveguides containing a birefringent grating and a birefringence control layer, many other fabrication processes and apparatus can be implemented to form such waveguides in accordance with various embodiments of the invention. for example, the order or sequence of any process or method steps may be varied or re-sequenced according to alternative embodiments. other substitutions, modifications, changes, and omissions may be made in the design of the process apparatus, operating conditions and arrangement of the exemplary embodiments without departing from the scope of the present disclosure. 
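the three fabrication sequences of figs. 12-14 differ only in where the liquid crystal polymer steps sit relative to the grating steps. a small sketch makes the re-sequencing explicit; the step names are our own shorthand for the steps recited above, not terms from the disclosure.

```python
# ordered step lists for the methods of figs. 12, 13, and 14
method_1200 = ["first_substrate", "deposit_grating_material", "expose_grating",
               "deposit_lc_polymer", "uv_align_lc_polymer", "second_substrate"]
method_1300 = ["first_substrate", "deposit_grating_material", "expose_grating",
               "second_substrate", "deposit_lc_polymer", "uv_align_lc_polymer"]
method_1400 = ["first_substrate", "deposit_lc_polymer", "uv_align_lc_polymer",
               "deposit_grating_material", "expose_grating", "second_substrate"]

# same six steps in every method; only the ordering differs
for m in (method_1300, method_1400):
    assert sorted(m) == sorted(method_1200)

# in method 1400 the birefringence control layer is formed before the grating
print("control layer before grating in method 1400:",
      method_1400.index("uv_align_lc_polymer") < method_1400.index("expose_grating"))
```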
additional embodiments and applications

in some embodiments, a polarization-maintaining, wide angle, high reflection waveguide cladding with polarization compensation for grating birefringence can be implemented. fig. 15 shows one such embodiment. in the illustrative embodiment, the waveguide 1500 includes a waveguiding substrate 1501 containing a birefringent grating 1502 and a birefringence control layer 1503 overlaying the waveguiding substrate 1501 . as shown, guided light 1504 interacting with the birefringence control layer 1503 at its interface with the waveguiding substrate 1501 has its polarization rotated from the state indicated by symbol 1505 (resulting from the previous interaction with the grating) to the state indicated by 1506 (which has a desired orientation for the next interaction with the grating, for example, having an orientation for providing a predefined diffraction efficiency at some pre-defined point along the grating). in many embodiments, a compact and efficient birefringence control layer for isolating a waveguide from its environment while ensuring efficient confinement of wave-guided beams can be implemented. fig. 16 illustrates one such embodiment. in the illustrative embodiment, the environmentally isolated waveguide 1600 includes a waveguiding substrate 1601 containing a birefringent grating 1602 and a birefringence control layer 1603 overlaying the waveguiding substrate 1601 . as shown, guided light 1604 interacting with the birefringence control layer 1603 at its interface with the waveguiding substrate 1601 has its polarization rotated from the state indicated by the symbol 1605 to the state indicated by 1606 . environmental isolation of the waveguide can be provided by designing the birefringence control layer 1603 such that total internal reflection occurs at the interface 1607 between the birefringence control layer 1603 and the waveguiding substrate 1601 . 
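the total internal reflection condition at the interface 1607 can be checked with snell's law. the indices below are illustrative assumptions (the text quotes a refractive index of approximately 1.51 for glass elsewhere; the low-index control layer value is hypothetical):

```python
import math

def critical_angle_deg(n_core, n_clad):
    """critical angle for tir going from a medium of index n_core
    into one of lower index n_clad."""
    if n_clad >= n_core:
        raise ValueError("no tir possible: cladding index must be lower")
    return math.degrees(math.asin(n_clad / n_core))

# assumed indices: waveguiding substrate ~1.51, hypothetical low-index
# birefringence control layer 1.20
theta_c = critical_angle_deg(1.51, 1.20)
print(f"critical angle: {theta_c:.1f} deg")     # -> critical angle: 52.6 deg

# a guided ray at a nominal 55-degree in-glass angle exceeds theta_c,
# so it is confined by tir at the control-layer interface
print("55 deg ray confined:", 55.0 > theta_c)   # -> True
```

the lower the control-layer index, the smaller the critical angle, and hence the wider the range of guided angles the layer can confine.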
in some embodiments, environmental isolation is provided by designing the birefringence control layer to have gradient index characteristics such that only a small portion of the guided light is reflected at the air interface of the birefringence control layer. in several embodiments, the birefringence control layer may incorporate a separate grin layer. in a number of embodiments, a grin layer may be based on embodiments disclosed in u.s. provisional patent application no. 62/123,282 entitled near eye display using gradient index optics and u.s. provisional patent application no. 62/124,550 entitled waveguide display using gradient index optics. fig. 17 conceptually illustrates an apparatus 1700 , which may be used in conjunction with some of the methods described above, for fabricating a structure containing a birefringent grating layer 1701 overlaying a birefringence control layer 1702 in accordance with an embodiment of the invention. in fig. 17 , the substrate supporting the birefringence control layer is not shown. the construction beams, indicated by rays 1703 , 1704 , may be provided by a master grating or a crossed beam holographic recording setup. as shown, the construction beams propagate through the birefringence control layer 1702 . in many embodiments, the construction beams are in the visible band. in some embodiments, the construction beams are in the uv band. fig. 18 conceptually illustrates an apparatus 1800 , which may be used in conjunction with some of the methods described above, for fabricating a structure containing a birefringence control layer 1801 overlaying a birefringent grating layer 1802 in accordance with an embodiment of the invention. in fig. 18 , the substrate supporting the grating layer is not shown. the direction of a recording beam is indicated by 1803 . in many embodiments the birefringence control layer is a liquid crystal polymer material system which uses a directional uv beam for alignment. 
in some embodiments, in which the grating is recorded in a polymer and liquid crystal material system, an exposed grating may be erased during the process of aligning the birefringence control layer by applying an external stimulus, such as heat, electric or magnetic fields, or light, effective to create an isotropic phase of the liquid crystal. fig. 19 conceptually illustrates a cross section of a waveguide 1900 containing substrates 1901 , 1902 sandwiching a grating layer 1903 . as shown, a source 1904 emits collimated light 1905 a, which is coupled by the grating layer into the total internal reflection (tir) path indicated by rays 1905 b, 1905 c and extracted by the grating layer 1903 into the output ray path 1905 d. in the illustrative embodiments, the source 1904 can be a variety of light sources, including but not limited to a laser or an led. fig. 20 conceptually illustrates a waveguide similar to the one of fig. 19 with a quarter wave polarization layer inserted by replacing the substrate 1902 by a quarter wave film 2001 sandwiched by substrates 2002 , 2003 in accordance with an embodiment of the invention. a quarter wave polarization layer can be beneficial to the holographic waveguide design in two ways. firstly, it can reduce reinteraction (outcoupling) of a rolled k-vector (rkv) input grating to increase the overall coupling efficiency of the input grating. secondly, it can continuously mix up the polarization of the light going into the fold and output gratings to provide better extraction. the quarter wave layer can be located on a waveguide surface along the optical path from the input grating. typically, a waveguide surface can include one of the tir surfaces of the waveguide or some intermediate surface formed inside the waveguide. the optical characteristics of the quarter wave layer can be optimized for “waveguide angles”—i.e., angles in glass beyond the tir angle. in some embodiments, the center field is designed at approximately 55 deg. 
in glass (corresponding to a refractive index of approximately 1.51 at wavelength 532 nm). in many embodiments, optimization for red, green, and blue can be used for optimum performance of red, green, and blue transmitting waveguides. as will be shown in the embodiments to be described, there are several different ways of incorporating the quarter wave film within a waveguide. in the following embodiments, we refer generally to a quarter wave polarization layer provided by a liquid crystal polymer (lcp) material. however, it should be understood that other materials may be used in applications of the invention. fig. 21 conceptually illustrates a schematic cross section view 2100 showing a portion of a waveguide illustrating how the use of a quarter wave polarization layer with a rkv grating can overcome the problem of unwanted extraction of light along the propagation path in the input grating portion of the waveguide in accordance with an embodiment of the invention. one ray path is illustrated in which input light including the p-polarized light 2101 a is coupled by the grating layer into a tir path indicated by the rays 2101 b- 2101 l in the waveguide. the waveguide grating has rolled k-vectors of which examples occurring at three points along the length of the waveguide are illustrated schematically by the vectors 2102 a- 2102 c. in the illustrative embodiment, the light 2101 a diffractively coupled into tir by the input grating is p-polarized with respect to the grating. in many embodiments, the tir angle can be nominally 55 degrees in glass. on transmission through the quarter wave layer, the polarization of the light changes from p to circularly polarized ( 2101 c). after tir at the lower surface of the waveguide, the polarization changes to circularly polarized light ( 2101 d) of an opposing sense such that after traversing the quarter wave layer on its upward path it becomes s-polarized ( 2101 e) with respect to the grating. 
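the p → circular → s cycle described above can be reproduced in jones calculus. this is a simplified model, not the disclosure's prescription: the quarter wave layer is taken as an ideal quarter-wave retarder with fast axis at 45 degrees, normal incidence is assumed rather than the 55-degree in-glass angle, and the tir bounce is modeled as an ideal reflection that preserves the transverse field in the local frame.

```python
import numpy as np

def retarder(delta, theta):
    """jones matrix: linear retarder, phase delay delta, fast axis at theta."""
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, -s], [s, c]])
    return rot @ np.diag([1.0, np.exp(1j * delta)]) @ rot.T

qwp = retarder(np.pi / 2, np.pi / 4)   # quarter-wave layer, fast axis 45 deg
p = np.array([1.0, 0.0])               # p-polarized light from the grating

after_first_pass = qwp @ p             # circular: equal x/y amplitudes
# tir bounce modeled as an ideal reflection in the local frame,
# followed by the second pass through the quarter-wave layer
after_round_trip = qwp @ after_first_pass

print(np.round(np.abs(after_first_pass), 3))  # -> [0.707 0.707]
print(np.round(np.abs(after_round_trip), 3))  # -> [0. 1.]  (s-polarized)
```

the double pass through the quarter-wave layer acts as a half-wave plate at 45 degrees, which is why the round trip maps p to s exactly in this idealized model.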
the s-polarized light passes through the grating without deviation ( 2101 f) or substantial loss since it is both off-bragg and “off polarization” (since the grating has zero or low diffraction efficiency for s). this light then undergoes tir ( 2101 g) a second time, retaining its s-polarization. hence the light 2101 g is now on-bragg but is still off polarization with respect to the p-polarization sensitive grating. the light therefore passes through the grating without diffraction ( 2101 h). at this location the rkv ( 2102 b) has rolled slightly from the one ( 2102 a) near the light entry point on the input grating. if the light was “on polarization,” the ‘roll’ effect of the rkv would be small, and so the light would be strongly out-coupled. the s-polarized light passing through the grating goes through another full cycle ( 2101 h- 2101 m) in a similar fashion to the cycle illustrated by rays 2101 b- 2101 g, and then returns to a p-polarized state for the next ( 2101 m) on-bragg interaction at the grating region with k-vector 2102 c. at this point, the light has performed two complete tir bounce cycles down the waveguide, increasing the angular separation of the incidence angle at the grating and k-vector, which strongly reduces the on-bragg interaction. to clarify the embodiment of fig. 21 further, light at a 55-degree tir angle in a 1 mm thick waveguide is considered, with a 20 mm projector relief (distance of the projector from the input grating) and a nominal 4.3 mm diameter projector exit pupil: the first interaction with the grating takes place approximately 2.85 mm down the waveguide. this equates to an 8.1-degree angle at 20 mm projector relief. for comparison, the fwhm angular bandwidth of a typical 1.6 um grating is about 12 degrees in air (prescription dependent), i.e., the angle subtended by the pupil is not much larger than the semi-width of the lens. this leads to strong outcoupling if the polarization is not changed to s-polarized as described above. 
in effect, the use of the quarter wave layer doubles the tir length to approximately 5.7 mm. this offset equates to about 15.9 deg, which is larger than the angular bandwidth of most waveguide gratings, thereby reducing outcoupling reinteraction losses from the waveguide. fig. 22 conceptually illustrates a polarization layer architecture 2200 containing an lcp quarter wave cell and a reactive monomer liquid crystal mixture (rmlcm) cell separated by an index matching oil layer ( 2201 ) in accordance with an embodiment of the invention. the lcp cell includes a substrate 2202 and the lcp film 2203 . the rmlcm cell includes substrates 2204 , 2205 sandwiching the rmlcm layer 2206 . this configuration has the advantage that the index matching oil bond can provide a non-permanent bond, which allows for installation and removal of the polarization cell for testing purposes. adhesive can also be applied at the edges (tacked) for a semi-permanent bond. in some embodiments, the oil layer can be provided using a cell filled with oil. fig. 23 conceptually illustrates an example of a polarization architecture 2300 based on a grating cell with the rmlcm grating material layer 2301 in direct contact with a bare lcp film 2302 in accordance with an embodiment of the invention. the two films are sandwiched by the substrates 2303 , 2304 . this is a simple and cost-effective solution for implementing an lcp layer. maintaining thickness control of the rmlcm layer using spacer beads can be difficult if the beads are embedded directly onto the lcp layer. the embodiment of fig. 23 can require careful matching of the material properties of the rmlcm and lcp to avoid detrimental interactions between the rmlcm and the lcp layers. in many embodiments, holographic exposure of the rmlcm layer can be applied directly into the rmlcm and does not need to be through the lcp layer. 
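the bounce geometry quoted above can be checked numerically using the same figures as the text: a 55-degree tir angle, a 1 mm thick waveguide, and a 20 mm projector relief.

```python
import math

thickness_mm = 1.0    # waveguide thickness, as in the text
tir_deg = 55.0        # in-glass tir angle, as in the text
relief_mm = 20.0      # projector relief, as in the text

# one full bounce advances the ray by 2 * t * tan(theta)
bounce_mm = 2.0 * thickness_mm * math.tan(math.radians(tir_deg))
angle_deg = math.degrees(math.atan(bounce_mm / relief_mm))
# -> 2.86 mm, 8.1 deg (the text quotes approximately 2.85 mm)
print(round(bounce_mm, 2), "mm,", round(angle_deg, 1), "deg")

# the quarter wave layer suppresses every other on-bragg interaction,
# doubling the effective tir length
doubled_mm = 2.0 * bounce_mm
doubled_deg = math.degrees(math.atan(doubled_mm / relief_mm))
# -> 5.71 mm, 15.9 deg
print(round(doubled_mm, 2), "mm,", round(doubled_deg, 1), "deg")
```

the 15.9-degree offset exceeds the ~12-degree fwhm angular bandwidth quoted for a typical 1.6 um grating, which is the basis of the reduced reinteraction loss.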
if exposure of the construction beams through the lcp layer is unavoidable, pre-compensation of the polarization rotation of the lcp layer can be made in some embodiments. fig. 24 conceptually illustrates a cross section view schematically showing an example of a polarization layer architecture 2400 in which a bare lcp layer is bonded to a bare rmlcm layer in accordance with an embodiment of the invention. the apparatus includes an upper substrate 2401 , a bare lcp film 2402 , an adhesive layer 2403 , an exposed rmlcm layer 2404 , and a lower substrate 2405 . in many embodiments, the adhesive layer can be norland noa65 adhesive or a similar adhesive. fig. 25 conceptually illustrates a cross section view schematically showing an example of a polarization layer architecture 2500 using an rmlcm layer as a polarization layer in accordance with an embodiment of the invention. the apparatus includes an upper substrate 2501 , an upper rmlcm layer 2502 , a transparent spacer 2503 , a lower rmlcm layer 2504 , and a lower substrate 2505 . one of the rmlcm layers can be used not only as the grating material, but also as a polarization rotation layer, using the inherent birefringent properties of rmlcm materials. the ‘polarization rotation grating’ should have a period and/or k-vector direction such that its diffraction is minimal. in some embodiments, the rmlcm layer can be configured as a subwavelength grating. in some embodiments, the rmlcm layer can be provided sandwiched between two release layers such that after curing the layer can be removed and re-applied elsewhere. fig. 26 conceptually illustrates an example of a polarization layer architecture 2600 that includes a feature for compensating for polarization rotation introduced by birefringent gratings in accordance with an embodiment of the invention. the apparatus includes an upper substrate 2601 , a polarization control layer 2602 , a transparent substrate 2603 , a grating layer 2604 , and a lower substrate 2605 . 
the grating layer contains a first grating 2606 a and a second grating 2606 b separated by a clear region 2607 . in some embodiments, the clear region can be a polymer with a refractive index similar to that of the substrates. in many embodiments, other low refractive index materials may be used to provide the clear region. the polarization control layer includes quarter wave retarding regions 2608 a, 2608 b and a polarization compensation region, which balances the polarization rotation introduced by the birefringent grating 2606 a (in the case where the guided light propagates from grating 2606 a to grating 2606 b). fig. 27 conceptually illustrates a plan view schematically showing a waveguide display 2700 incorporating the features of the embodiment of fig. 26 in accordance with an embodiment of the invention. the waveguide display 2700 includes a waveguide substrate 2701 , an input grating 2702 , a fold grating 2703 , and an output grating 2704 . polarization control regions 2705 , 2706 apply compensation for grating depolarization according to the principle of the embodiment of fig. 26 . fig. 28 conceptually illustrates a cross section view schematically showing an example of a polarization layer architecture 2800 containing an upper substrate 2801 , an lcp layer 2802 with a hard encapsulation layer 2803 , a rmlcm layer 2804 , and a lower substrate 2805 in accordance with an embodiment of the invention. in many embodiments, the hard encapsulation layer or film can be designed to protect the delicate lcp film from mechanical contact, such that standard cleaning procedures will not destroy the film. advantageously, the hard encapsulation layer can employ a material resistant to spacer beads being pushed into it through the lamination process, as well as being chemically resistant to index matching oil and adhesives. fig. 
29 conceptually illustrates a cross section view schematically showing an example of a polarization layer architecture 2900 containing an upper substrate 2901 , an lcp layer 2902 with a soft encapsulation layer 2903 , a rmlcm layer 2904 , and a lower substrate 2905 in accordance with an embodiment of the invention. the polarization alignment film can be encapsulated with a soft encapsulation layer or film designed to protect the delicate lcp film from mechanical contact, such that standard cleaning procedures, such as drag wiping with iso-propyl alcohol, will not destroy the film. in some embodiments, the soft encapsulation can provide some resistance to spacer beads during the lamination process. fig. 30 conceptually illustrates a plan view schematically showing a first example 3000 of a two-region polymer film in accordance with an embodiment of the invention. this example uses a non-encapsulated lcp film 3001 supported by a 0.5 mm thickness eagle xg substrate of dimensions 77.2 mm×47.2 mm. region 1 is characterized by a fast axis 75° from horizontal and by quarter-wave retardance at 55° in-glass angle, 45° ellipticity ±5°, for wavelength 524 nm. region 2 is characterized by a fast axis 105° from horizontal and by quarter-wave retardance at 55° in-glass angle, 45° ellipticity ±5°, for wavelength 524 nm. typically, region 1 and region 2 extend to the halfway point horizontally, ±2 mm. fig. 31 conceptually illustrates a plan view schematically showing a second example 3100 of a two-region polymer film in accordance with an embodiment of the invention. this example uses encapsulation of the lcp layer 3101 by a protective film 3102 , said layers supported by a 0.5 mm thickness eagle xg substrate of dimensions 77.2 mm×47.2 mm. region 1 is characterized by a fast axis 75° from horizontal and by quarter-wave retardance at 55° in-glass angle, 45° ellipticity ±5°, for wavelength 524 nm. 
region 2 is characterized by a fast axis 105° from horizontal and by quarter-wave retardance at 55° in-glass angle, 45° ellipticity ±5°, for wavelength 524 nm. typically, region 1 and region 2 extend to the halfway point horizontally, ±2 mm. the encapsulation layer can seal the polarization layer such that performance is unaffected when covered by a layer of oil such as cargille series a with refractive index 1.516. the encapsulation layer can seal the polarization layer such that performance is unaffected when covered by an additional layer of liquid crystal-based photopolymer. fig. 32 conceptually illustrates a plan view schematically showing a third example 3200 of a two-region polymer film in accordance with an embodiment of the invention. this example uses glass encapsulation of the lcp. a 0.5 mm thickness eagle xg substrate of dimensions 77.2 mm×47.2 mm supports a lcp layer 3201 , an adhesive layer 3202 , and a 0.2 mm thickness willow glass cover 3203 . region 1 is characterized by a fast axis 75° from horizontal and by quarter-wave retardance at 55° in-glass angle, 45° ellipticity ±5°, for wavelength 524 nm. region 2 is characterized by a fast axis 105° from horizontal and by quarter-wave retardance at 55° in-glass angle, 45° ellipticity ±5°, for wavelength 524 nm. advantageously, the glass for encapsulation of the lcp is 0.5 mm eagle xg or 0.2 mm willow glass. typically, region 1 and region 2 extend to the halfway point horizontally, ±2 mm. fig. 33 conceptually illustrates a drawing showing the clear aperture layout 3300 for the embodiments illustrated in figs. 30-32 in accordance with an embodiment of the invention. the clear aperture is highlighted in the dashed line. all dimensions are in mm. fig. 34 conceptually illustrates a plan view 3400 schematically showing the waveguide 3401 containing input 3402 , fold 3403 , and output 3404 gratings based on the embodiments of figs. 
30 - 33 , including the k-vectors and alignment layer fast axis directions for each grating in accordance with an embodiment of the invention. as shown in fig. 34 , the k-vector and fast axis directions are for the input grating k-vector: 30 degrees; for the fold grating k-vector: 270 degrees; and for the output grating k-vector: 150 degrees. the above description covers only some of the possible embodiments in which an lcp layer (or equivalent retarding layer) can be combined with an rmlcm layer in a waveguide structure. in many of the above described embodiments, the substrates can be fabricated from 0.5 mm thickness corning eagle xg glass. in some embodiments, thinner or thicker substrates can be used. in several embodiments, the substrates can be fabricated from plastic. in a number of embodiments, the substrates and optical layers encapsulated by the said substrates can be curved. any of the embodiments can incorporate additional layers for protection from chemical contamination or damage incurred during processing and handling. in some embodiments, additional substrate layers may be provided to achieve a required waveguide thickness. in some embodiments, additional layers may be provided to perform at least one of the functions of illumination homogenization, spectral filtering, angle selective filtering, stray light control, and debanding. in many embodiments, the bare lcp layer can be bonded directly to a bare exposed rmlcm layer. in several embodiments, an intermediate substrate can be disposed between the lcp layer and the rmlcm layer. in a number of embodiments, the lcp layer can be combined with an unexposed layer of rmlcm material. in many embodiments, layers of lcp, with or without encapsulation, can have haze characteristics <0.25%, and preferably 0.1% or less. it should be noted that the quoted haze characteristics are based on bulk material scatter and are independent of surface scatter losses, which are largely lost upon immersion.
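a side note on the grating directions quoted above (input k-vector 30°, fold 270°, output 150°): in a dual-axis pupil-expanding waveguide the three in-plane grating vectors are normally chosen so that equal-magnitude vectors at these angles sum to zero, which keeps the output beam parallel to the input beam. this closure condition is standard waveguide-design practice rather than something stated in the text; a minimal check in python:

```python
import math

# In-plane k-vector directions quoted in the text for the three-grating layout.
angles_deg = {"input": 30.0, "fold": 270.0, "output": 150.0}

def kvector_sum(angles):
    """Sum of unit grating vectors at the given in-plane angles (degrees)."""
    kx = sum(math.cos(math.radians(a)) for a in angles.values())
    ky = sum(math.sin(math.radians(a)) for a in angles.values())
    return kx, ky

kx, ky = kvector_sum(angles_deg)
print(f"closure residual: ({kx:.3e}, {ky:.3e})")  # both ~0: the triad closes
```

the residual is zero to floating-point precision, confirming that the quoted 30°/270°/150° triad satisfies the closure condition.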
the lcp and encapsulation layers can survive 100 c exposure (>80 c for thermal um exposures). in many embodiments, the lcp encapsulation layer can be drag wipe resistant to permit layer cleaning. in the embodiments described above, there can be constant retardance and no bubbles or voids within the film clear aperture. the lcp and adhesive layers can match the optical flatness criteria met by the waveguide substrates. a color waveguide according to the principles of the invention would typically include a stack of monochrome waveguides. the design may use red, green, and blue waveguide layers or, alternatively, red and blue/green layers. in some embodiments, the gratings are all passive, that is, non-switching. in some embodiments, at least one of the gratings is switching. in some embodiments, the input gratings in each layer are switchable to avoid color crosstalk between the waveguide layers. in some embodiments, color crosstalk is avoided by disposing dichroic filters between the input grating regions of the red and blue and the blue and green waveguides. in some embodiments, the thickness of the birefringence control layer is optimized for the wavelengths of light propagating within the waveguide to provide uniform birefringence compensation across the spectral bandwidth of the waveguide display. wavelengths and spectral bandwidths for the red, green, and blue bands typically used in waveguide displays are red: 626 nm±9 nm, green: 522 nm±18 nm and blue: 452 nm±11 nm. in some embodiments, the thickness of the birefringence control layer is optimized for trichromatic light. in many embodiments, the birefringence control layer is provided by a subwavelength grating recorded in hpdlc. such gratings are known to exhibit the phenomenon of form birefringence and can be configured to provide a range of polarization functions including quarter wave and half wave retardation.
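several of the layers above are specified by quarter-wave retardance at a given wavelength. as a hedged illustration (the birefringence value Δn = 0.15 is an assumed, typical lcp figure, not taken from the text), the usual thin-retarder thickness relation d = λ/(4Δn) evaluated at the quoted band centers gives:

```python
def quarter_wave_thickness_nm(wavelength_nm, delta_n):
    """Quarter-wave retarder physical thickness: d = lambda / (4 * delta_n)."""
    return wavelength_nm / (4.0 * delta_n)

DELTA_N = 0.15  # assumed LCP birefringence; illustrative only, not from the text
# band-center wavelengths quoted in the text for waveguide displays
for name, wl_nm in (("red", 626.0), ("green", 522.0), ("blue", 452.0)):
    print(f"{name}: ~{quarter_wave_thickness_nm(wl_nm, DELTA_N):.0f} nm")
```

for green (522 nm) this gives roughly 870 nm of lcp; a real design would also account for the 55° in-glass angle quoted above, which changes the effective path length through the film.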
in some embodiments, the birefringence control layer is provided by a liquid crystal medium in which the lc directors are aligned by illuminating an azo-dye doped alignment layer with polarized or unpolarized light. in a number of embodiments, a birefringence control layer is patterned to provide lc director orientation patterns with submicron resolution steps. in some embodiments, the birefringence control layer is processed to provide continuous variation of the lc director orientations. in several embodiments, a birefringence control layer provided by combining one or more of the techniques described above is combined with a rubbing process or a polyimide alignment layer. in some embodiments, the birefringence control layer provides optical power. in a number of embodiments, the birefringence control layer provides a gradient index structure. in several embodiments, the birefringence control layer is provided by a stack containing at least one hpdlc grating and at least one alignment layer. in many embodiments, the birefringent grating may have rolled k-vectors. the k-vector is a vector aligned normal to the grating planes (or fringes) which determines the optical efficiency for a given range of input and diffracted angles. rolling the k-vectors allows the angular bandwidth of the grating to be expanded without the need to increase the waveguide thickness. in many embodiments, the birefringent grating is a fold grating for providing exit pupil expansion. the fold grating may be based on any of the embodiments disclosed in pct application no.: pct/gb2016000181 entitled waveguide display and embodiments discussed in the other references given above. in some embodiments, the apparatus is used in a waveguide design to overcome the problem of laser banding.
a waveguide according to the principles of the invention can provide a pupil shifting means for configuring the light coupled into the waveguide such that the input grating has an effective input aperture which is a function of the tir angle. several embodiments of the pupil shifting means will be described. the effect of the pupil shifting means is that successive light extractions from the waveguide by the output grating integrate to provide a substantially flat illumination profile for any light incidence angle at the input grating. the pupil shifting means can be implemented using the birefringence control layers to vary at least one of amplitude, polarization, phase, and wavefront displacement in 3d space as a function of incidence light angle. in each case, the effect is to provide an effective aperture that gives uniform extraction across the output grating for any light incidence angle at the input grating. in some embodiments, the pupil shifting means is provided at least in part by designing the optics of the input image generator to have a numerical aperture (na) variation ranging smoothly from high na on one side of the microdisplay panel to low na at the other side according to various embodiments, such as those similar to ones disclosed in pct application no.: pct/gb2016000181 entitled waveguide display, the disclosure of which is hereby incorporated by reference in its entirety. typically, the microdisplay is a reflective device. in some embodiments, the grating layer may be broken up into separate layers. these layers may then be laminated together into a single waveguide substrate. in many embodiments, the grating layer contains several pieces, including the input coupler, the fold grating, and the output grating (or portions thereof) that are laminated together to form a single substrate waveguide. the pieces may be separated by optical glue or other transparent material of refractive index matching that of the pieces.
in several embodiments, the grating layer may be formed via a cell making process by creating cells of the desired grating thickness and vacuum filling each cell with sbg material for each of the input coupler, the fold grating and the output grating. in one embodiment, the cell is formed by positioning multiple plates of glass with gaps between the plates of glass that define the desired grating thickness for the input coupler, the fold grating and the output grating. in one embodiment, one cell may be made with multiple apertures such that the separate apertures are filled with different pockets of sbg material. any intervening spaces may then be separated by a separating material (e.g., glue, oil, etc.) to define separate areas. in one embodiment, the sbg material may be spin-coated onto a substrate and then covered by a second substrate after curing of the material. by using a fold grating, the waveguide display advantageously requires fewer layers than previous systems and methods of displaying information according to some embodiments. in addition, by using a fold grating, light can travel by total internal reflection within the waveguide in a single rectangular prism defined by the waveguide outer surfaces while achieving dual pupil expansion. in another embodiment, the gratings can be created by interfering two waves of light at an angle within the substrate to create a holographic wave front, thereby creating light and dark fringes that are set in the waveguide substrate at a desired angle. in some embodiments, the grating in a given layer is recorded in stepwise fashion by scanning or stepping the recording laser beams across the grating area. in some embodiments, the gratings are recorded using mastering and contact copying processes currently used in the holographic printing industry.
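the paragraph above relies on light traveling by total internal reflection (tir) inside the waveguide substrate. as a hedged aside with assumed refractive indices (n ≈ 1.52 for typical display glass against air; neither value is given in the text), the critical angle bounding tir propagation follows from snell's law:

```python
import math

def critical_angle_deg(n_core, n_clad):
    """Critical angle for total internal reflection at a core/cladding
    interface: theta_c = asin(n_clad / n_core), measured from the normal."""
    return math.degrees(math.asin(n_clad / n_core))

# assumed indices: typical display glass (~1.52) against air (1.00)
print(round(critical_angle_deg(1.52, 1.00), 1))  # ~41.1 degrees
```

rays striking the outer surfaces at angles beyond this value (from the surface normal) remain trapped in the guide, which is why the in-glass propagation angles quoted earlier (e.g. 55°) sit comfortably above it.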
in many embodiments, the gratings are bragg gratings recorded in holographic polymer dispersed liquid crystal (hpdlc) as already discussed, although sbgs may also be recorded in other materials. in one embodiment, sbgs are recorded in a uniform modulation material, such as policryps or poliphem having a matrix of solid liquid crystals dispersed in a liquid polymer. the sbgs can be switching or non-switching in nature. in its non-switching form a sbg has the advantage over conventional holographic photopolymer materials of being capable of providing high refractive index modulation due to its liquid crystal component. exemplary uniform modulation liquid crystal-polymer material systems are disclosed in united states patent application publication no.: us2007/0019152 by caputo et al. and pct application no.: pct/ep2005/006950 by stumpe et al., both of which are incorporated herein by reference in their entireties. uniform modulation gratings are characterized by high refractive index modulation (and hence high diffraction efficiency) and low scatter. in some embodiments, at least one of the gratings is a surface relief grating. in some embodiments, at least one of the gratings is a thin (or raman-nath) hologram. in some embodiments, the gratings are recorded in a reverse mode hpdlc material. reverse mode hpdlc differs from conventional hpdlc in that the grating is passive when no electric field is applied and becomes diffractive in the presence of an electric field. the reverse mode hpdlc may be based on any of the recipes and processes disclosed in pct application no.: pct/gb2012/000680, entitled improvements to holographic polymer dispersed liquid crystal materials and devices. the grating may be recorded in any of the above material systems but used in a passive (non-switching) mode. the fabrication process can be identical to that used for switched gratings but with the electrode coating stage omitted.
lc polymer material systems may be used for their high index modulation. in some embodiments, the gratings are recorded in hpdlc but are not switched. in many embodiments, a waveguide display according to the principles of the invention may be integrated within a window, for example, a windscreen-integrated hud for road vehicle applications. in some embodiments, a window-integrated display may be based on the embodiments and teachings disclosed in u.s. provisional patent application no. 62/125,064 entitled optical waveguide displays for integration in windows and u.s. patent application ser. no. 15/543,016 entitled environmentally isolated waveguide display. in some embodiments, a waveguide display according to the principles of the invention may incorporate a light pipe for providing beam expansion in one direction based on the embodiments disclosed in u.s. patent application ser. no. 15/558,409 entitled waveguide device incorporating a light pipe. in some embodiments, the input image generator may be based on a laser scanner as disclosed in u.s. pat. no. 9,075,184 entitled compact edge illuminated diffractive display. the embodiments of the invention may be used in a wide range of displays including hmds for ar and vr, helmet mounted displays, projection displays, heads up displays (huds), heads down displays (hdds), autostereoscopic displays and other 3d displays. some of the embodiments and teachings of this disclosure may be applied in waveguide sensors such as, for example, eye trackers, fingerprint scanners and lidar systems and in illuminators and backlights. it should be emphasized that the drawings are exemplary and that the dimensions have been exaggerated. for example, thicknesses of the sbg layers have been greatly exaggerated.
optical devices based on any of the above-described embodiments may be implemented using plastic substrates using the materials and processes disclosed in pct application no.: pct/gb2012/000680, entitled improvements to holographic polymer dispersed liquid crystal materials and devices. in some embodiments, the dual expansion waveguide display may be curved. although the description has provided specific embodiments of the invention, additional information concerning the technology may be found in the following patent applications, which are incorporated by reference herein in their entireties: u.s. pat. no. 9,075,184 entitled compact edge illuminated diffractive display, u.s. pat. no. 8,233,204 entitled optical displays, pct application no.: us2006/043938, entitled method and apparatus for providing a transparent display, pct application no.: gb2012/000677 entitled wearable data display, u.s. patent application ser. no. 13/317,468 entitled compact edge illuminated eyeglass display, u.s. patent application ser. no. 13/869,866 entitled holographic wide angle display, and u.s. patent application ser. no. 13/844,456 entitled transparent waveguide display, u.s. patent application ser. no. 14/620,969 entitled waveguide grating device, u.s. patent application ser. no. 15/553,120 entitled electrically focus tunable lens, u.s. patent application ser. no. 15/558,409 entitled waveguide device incorporating a light pipe, u.s. patent application ser. no. 15/512,500 entitled method and apparatus for generating input images for holographic waveguide displays, u.s. provisional patent application no. 62/123,282 entitled near eye display using gradient index optics, u.s. provisional patent application no. 62/124,550 entitled waveguide display using gradient index optics, u.s. provisional patent application no. 62/125,064 entitled optical waveguide displays for integration in windows, u.s. patent application ser. no. 
15/543,016 entitled environmentally isolated waveguide display, u.s. provisional patent application no. 62/125,089 entitled holographic waveguide light field displays, u.s. pat. no. 8,224,133 entitled laser illumination device, u.s. pat. no. 8,565,560 entitled laser illumination device, u.s. pat. no. 6,115,152 entitled holographic illumination system, pct application no.: pct/gb2013/000005 entitled contact image sensor using switchable bragg gratings, pct application no.: pct/gb2012/000680, entitled improvements to holographic polymer dispersed liquid crystal materials and devices, pct application no.: pct/gb2014/000197 entitled holographic waveguide eye tracker, pct application no.: pct/gb2013/000210 entitled apparatus for eye tracking, pct application no.: pct/gb2015/000274 entitled holographic waveguide optical tracker, u.s. pat. no. 8,903,207 entitled system and method of extending vertical field of view in head up display using a waveguide combiner, u.s. pat. no. 8,639,072 entitled compact wearable display, u.s. pat. no. 8,885,112 entitled compact holographic edge illuminated eyeglass display, u.s. patent application ser. no. 16/086,578 entitled method and apparatus for providing a polarization selective holographic waveguide device, u.s. provisional patent application no. 62/493,578 entitled waveguide display apparatus, pct application no.: pct/gb2016000181 entitled waveguide display, u.s. patent application no. 62/497,781 entitled apparatus for homogenizing the output from a waveguide device, u.s. patent application no. 62/499,423 entitled waveguide device with uniform output illumination. doctrine of equivalents the construction and arrangement of the systems and methods as shown in the various exemplary embodiments are illustrative only.
although only a few embodiments have been described in detail in this disclosure, many modifications are possible (for example, variations in sizes, dimensions, structures, shapes and proportions of the various elements, values of parameters, mounting arrangements, use of materials, colors, orientations, etc.). for example, the position of elements may be reversed or otherwise varied, and the nature or number of discrete elements or positions may be altered or varied. accordingly, all such modifications are intended to be included within the scope of the present disclosure. the order or sequence of any process or method steps may be varied or re-sequenced according to alternative embodiments. other substitutions, modifications, changes, and omissions may be made in the design, operating conditions and arrangement of the exemplary embodiments without departing from the scope of the present disclosure.
172-700-881-060-868
US
[ "JP", "CA", "AU", "BR", "US", "RU", "SG", "CN", "EP", "MY", "WO" ]
F02C7/36,F01K23/10,F02C3/06,F02C3/08,F02C3/107,F01K23/14,F02C3/30,F02C3/34,F02C6/10,F02C6/18,F02C7/143,F02C7/32,F02C9/28,F23R3/00,F02C1/00,B01D53/26,B01D53/94,F01D15/10,F01D15/12,F01K7/16,F02C3/04,F02C9/50,F02M26/28,F02M25/07
2012-11-02T00:00:00
2012
[ "F02", "F01", "F23", "B01" ]
system and method for oxidant compression in a stoichiometric exhaust gas recirculation gas turbine system
a system includes a gas turbine system having a turbine combustor, a turbine driven by combustion products from the turbine combustor, and an exhaust gas compressor driven by the turbine. the exhaust gas compressor is configured to compress and supply an exhaust gas to the turbine combustor. the gas turbine system also has an exhaust gas recirculation (egr) system. the egr system is configured to recirculate the exhaust gas along an exhaust recirculation path from the turbine to the exhaust gas compressor. the system further includes a main oxidant compression system having one or more oxidant compressors. the one or more oxidant compressors are separate from the exhaust gas compressor, and the one or more oxidant compressors are configured to supply all compressed oxidant utilized by the turbine combustor in generating the combustion products.
1. a system, comprising: a gas turbine system, comprising: a turbine combustor; a turbine driven by combustion products from the turbine combustor; and an exhaust gas compressor driven by the turbine, wherein the exhaust gas compressor is configured to compress and supply an exhaust gas to the turbine combustor; an exhaust gas recirculation (egr) system, wherein the egr system is configured to recirculate the exhaust gas along an exhaust recirculation path from the turbine to the exhaust gas compressor; a main oxidant compression system configured to supply compressed oxidant to the gas turbine system, and the main oxidant compression system comprises: a first oxidant compressor coupled to a shaft of the gas turbine system, such that the first oxidant compressor is at least partially driven by the gas turbine system; and a second oxidant compressor; an electrical generator coupled to the shaft of the gas turbine system; a drive coupled to the second oxidant compressor, wherein the drive comprises a steam turbine or an electric motor; and a gearbox coupling the drive to the second oxidant compressor, wherein the gearbox is configured to enable the second oxidant compressor to operate at a speed different from an operating speed of the drive. 2. the system of claim 1 , wherein the first oxidant compressor and the second oxidant compressor are configured to operate in a series configuration of compression. 3. the system of claim 2 , wherein the first oxidant compressor is a low pressure oxidant compressor and the second oxidant compressor is a high pressure oxidant compressor. 4. the system of claim 1 , wherein the drive coupled to the second oxidant compressor comprises the steam turbine. 5. the system of claim 4 , wherein the egr system comprises a heat recovery steam generator configured to receive a stream of water to generate steam via a heat exchange relationship with the exhaust gas. 6. 
the system of claim 5 , wherein the heat recovery generator is configured to supply the steam to the steam turbine, and wherein the steam turbine is configured to drive the second oxidant compressor via electric power generated from the steam. 7. the system of claim 1 , wherein the drive is the electric motor, and wherein the electric motor receives electric power from the generator to drive the second oxidant compressor. 8. the system of claim 1 , wherein the gearbox comprises a parallel shaft gearbox having input and output shafts that are generally parallel with one another, or wherein the gearbox comprises an epicyclic gearbox having input and output shafts in line with one another. 9. the system of claim 1 , comprising a stoichiometric combustion system having the turbine combustor configured to combust a fuel/oxidant mixture in a combustion equivalence ratio of between 0.95 and 1.05 fuel to oxygen in the oxidant. 10. a system, comprising: a gas turbine system, comprising: a turbine combustor; a turbine driven by combustion products from the turbine combustor; and an exhaust gas compressor driven by the turbine, wherein the exhaust gas compressor is configured to compress and supply an exhaust gas to the turbine combustor; an exhaust gas recirculation (egr) system, wherein the egr system is configured to recirculate the exhaust gas along an exhaust recirculation path from the turbine to the exhaust gas compressor; and a main oxidant compression system configured to supply compressed oxidant to the gas turbine system, and the main oxidant compression system comprises: a first oxidant compressor coupled to a shaft of the turbine of the gas turbine system in series, such that the first oxidant compressor is at least partially driven by the gas turbine system; a second oxidant compressor coupled to the shaft of the turbine of the gas turbine system in series between the first oxidant compressor and the gas turbine, such that the second oxidant compressor is at 
least partially driven by the gas turbine system; and a gearbox coupled to the shaft of the turbine of the gas turbine system between the first oxidant compressor and the second oxidant compressor, wherein the gearbox is configured to enable the first oxidant compressor to operate at a speed different from the second oxidant compressor. 11. the system of claim 10 , wherein the first oxidant compressor receives compressed oxidant from the second oxidant compressor. 12. the system of claim 11 , wherein the first oxidant compressor is a high pressure oxidant compressor and the second oxidant compressor is a low pressure oxidant compressor. 13. the system of claim 11 , wherein the first oxidant compressor is a centrifugal compressor and the second oxidant compressor is an axial flow compressor. 14. the system of claim 10 , comprising a stoichiometric combustion system having the turbine combustor configured to combust a fuel/oxidant mixture in a combustion equivalence ratio of between 0.95 and 1.05 fuel to oxygen in the oxidant. 15. the system of claim 10 , wherein the gearbox comprises a speed-increasing gearbox. 16. 
a system, comprising: a gas turbine system, comprising: a turbine combustor; a turbine driven by combustion products from the turbine combustor; and an exhaust gas compressor driven by the turbine, wherein the exhaust gas compressor is configured to compress and supply an exhaust gas to the turbine combustor; an exhaust gas recirculation (egr) system, wherein the egr system is configured to recirculate the exhaust gas along an exhaust recirculation path from the turbine to the exhaust gas compressor; a main oxidant compression system configured to supply compressed oxidant to the gas turbine system, and the main oxidant compression system comprises: a first oxidant compressor coupled to a shaft of the gas turbine system, such that the first oxidant compressor is at least partially driven by the gas turbine system; and a second oxidant compressor configured to receive partially compressed oxidant from the first oxidant compressor to supply compressed oxidant to the gas turbine system; and a steam turbine coupled to the second oxidant compressor and configured to at least partially drive the second oxidant compressor, wherein the steam turbine is coupled to the second oxidant compressor via a gearbox, and wherein the gearbox is configured to enable the second oxidant compressor to operate at a speed different from an operating speed of the steam turbine. 17. the system of claim 16 , wherein the first oxidant compressor is an axial flow compressor and the second oxidant compressor is an axial flow compressor. 18. the system of claim 16 , wherein the gas turbine system comprises a heat recovery steam generator configured to receive a stream of water to generate steam via a heat exchange relationship with the exhaust gas. 19. 
the system of claim 18 , wherein the heat recovery steam generator is configured to supply the steam to the steam turbine, and wherein the steam turbine is configured to at least partially drive the second oxidant compressor via electric power generated from the steam. 20. the system of claim 16 , wherein the first oxidant compressor is a low pressure oxidant compressor and the second oxidant compressor is a high pressure oxidant compressor.
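claims 9 and 14 above recite a combustion equivalence ratio between 0.95 and 1.05 fuel to oxygen. as a hedged illustration (methane fuel and the simple global reaction CH4 + 2 O2 → CO2 + 2 H2O are assumptions chosen for the example, not claim limitations), the equivalence ratio is the actual fuel/oxygen ratio normalized by the stoichiometric one:

```python
def equivalence_ratio(fuel_mol, o2_mol, stoich_fuel_per_o2=0.5):
    """phi = (fuel/O2)_actual / (fuel/O2)_stoichiometric.
    The default of 0.5 mol fuel per mol O2 corresponds to methane,
    CH4 + 2 O2 -> CO2 + 2 H2O (an assumed example fuel)."""
    return (fuel_mol / o2_mol) / stoich_fuel_per_o2

print(equivalence_ratio(1.0, 2.0))                   # exactly stoichiometric, phi = 1.0
print(0.95 <= equivalence_ratio(1.0, 2.1) <= 1.05)   # slightly lean, still in the claimed band
```

values of phi near 1.0 correspond to the stoichiometric exhaust gas recirculation (segr) operation described throughout, where nearly all oxygen is consumed in combustion.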
cross-reference to related applications this application is a continuation of u.s. patent application ser. no. 14/066,579, which claims priority to and benefit of u.s. provisional patent application no. 61/747,192, entitled “system and method for oxidant compression in a stoichiometric exhaust gas recirculation gas turbine system,” filed on dec. 28, 2012, u.s. provisional patent application no. 61/722,118, entitled “system and method for diffusion combustion in a stoichiometric exhaust gas recirculation gas turbine system,” filed on nov. 2, 2012, u.s. provisional patent application no. 61/722,115, entitled “system and method for diffusion combustion with fuel-diluent mixing in a stoichiometric exhaust gas recirculation gas turbine system,” filed on nov. 2, 2012, u.s. provisional patent application no. 61/722,114, entitled “system and method for diffusion combustion with oxidant-diluent mixing in a stoichiometric exhaust gas recirculation gas turbine system,” filed on nov. 2, 2012, and u.s. provisional patent application no. 61/722,111, entitled “system and method for load control with diffusion combustion in a stoichiometric exhaust gas recirculation gas turbine system,” filed on nov. 2, 2012, all of which are herein incorporated by reference in their entirety for all purposes. background the subject matter disclosed herein relates to gas turbine engines. gas turbine engines are used in a wide variety of applications, such as power generation, aircraft, and various machinery. gas turbine engines generally combust a fuel with an oxidant (e.g., air) in a combustor section to generate hot combustion products, which then drive one or more turbine stages of a turbine section. in turn, the turbine section drives one or more compressor stages of a compressor section, thereby compressing oxidant for intake into the combustor section along with the fuel. again, the fuel and oxidant mix in the combustor section, and then combust to produce the hot combustion products.
gas turbine engines generally include a compressor that compresses the oxidant, along with one or more diluent gases. unfortunately, controlling the flux of oxidant and diluent gas into the combustor section in this manner can impact various exhaust emission and power requirements. furthermore, gas turbine engines typically consume a vast amount of air as the oxidant, and output a considerable amount of exhaust gas into the atmosphere. in other words, the exhaust gas is typically wasted as a byproduct of the gas turbine operation. brief description certain embodiments commensurate in scope with the originally claimed invention are summarized below. these embodiments are not intended to limit the scope of the claimed invention, but rather these embodiments are intended only to provide a brief summary of possible forms of the invention. indeed, the invention may encompass a variety of forms that may be similar to or different from the embodiments set forth below. in a first embodiment, a system includes a gas turbine system, which includes a turbine combustor; a turbine driven by combustion products from the turbine combustor; and an exhaust gas compressor driven by the turbine, wherein the exhaust gas compressor is configured to compress and supply an exhaust gas to the turbine combustor; and an exhaust gas recirculation (egr) system, wherein the egr system is configured to recirculate the exhaust gas along an exhaust recirculation path from the turbine to the exhaust gas compressor. the system also includes a main oxidant compression system configured to supply compressed oxidant to the gas turbine system, and the main oxidant compression system includes: a first oxidant compressor; and a first gearbox configured to enable the first oxidant compressor to operate at a first speed different from a first operating speed of the gas turbine system. 
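the first embodiment above pairs an oxidant compressor with a gearbox so the compressor can run at a speed different from the gas turbine shaft, and later embodiments split compression into series low pressure and high pressure stages. a hedged sketch of the two simple relations involved (the numeric values are illustrative assumptions, not figures from the text):

```python
import math

def gearbox_output_rpm(input_rpm, speed_ratio):
    """Output shaft speed of a gearbox with the given speed ratio."""
    return input_rpm * speed_ratio

def per_stage_pressure_ratio(overall_pr, n_stages=2):
    """Per-stage pressure ratio when an overall ratio is split evenly
    across identical series compressor stages: pr = overall ** (1/n)."""
    return overall_pr ** (1.0 / n_stages)

# illustrative: a 3600 rpm generator-line shaft with a 2.5:1 speed-increasing box
print(gearbox_output_rpm(3600.0, 2.5))
# illustrative: an overall pressure ratio of 20 split evenly across LP and HP stages
print(round(per_stage_pressure_ratio(20.0), 3))
```

the even square-root split is only a first approximation; an actual lp/hp split would be set by surge margins, intercooling, and the compressor maps of the machines chosen.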
in a second embodiment, a system includes a gas turbine system, having: a turbine combustor; a turbine driven by combustion products from the turbine combustor; and an exhaust gas compressor driven by the turbine, wherein the exhaust gas compressor is configured to compress and supply an exhaust gas to the turbine combustor. the gas turbine system also includes an exhaust gas recirculation (egr) system, wherein the egr system is configured to recirculate the exhaust gas along an exhaust recirculation path from the turbine to the exhaust gas compressor. the system also includes a main oxidant compression system configured to supply compressed oxidant to the gas turbine system, and the main oxidant compression system has a first oxidant compressor; and a second oxidant compressor, wherein the first and second oxidant compressors are driven by the gas turbine system. in a third embodiment, a system includes a gas turbine system, having: a turbine combustor; a turbine driven by combustion products from the turbine combustor; and an exhaust gas compressor driven by the turbine, wherein the exhaust gas compressor is configured to compress and supply an exhaust gas to the turbine combustor; and an exhaust gas recirculation (egr) system, wherein the egr system is configured to recirculate the exhaust gas along an exhaust recirculation path from the turbine to the exhaust gas compressor.
the system also includes a main oxidant compression system configured to supply compressed oxidant to the gas turbine system, and the main oxidant compression system comprises one or more oxidant compressors; a heat recovery steam generator (hrsg) coupled to the gas turbine system, wherein the hrsg is configured to generate steam by transferring heat from the exhaust gas to a feedwater, and the exhaust recirculation path of the egr system extends through the hrsg; and a steam turbine disposed along a shaft line of the gas turbine system and at least partially driven by the steam from the hrsg, wherein the steam turbine is configured to return condensate as at least a portion of the feedwater to the hrsg.

in a fourth embodiment, a system includes: a gas turbine system, having: a turbine combustor; a turbine driven by combustion products from the turbine combustor; and an exhaust gas compressor driven by the turbine, wherein the exhaust gas compressor is configured to compress and supply an exhaust gas to the turbine combustor; and an exhaust gas recirculation (egr) system, wherein the egr system is configured to recirculate the exhaust gas along an exhaust recirculation path from the turbine to the exhaust gas compressor. the system also includes a main oxidant compression system comprising one or more oxidant compressors, wherein the one or more oxidant compressors are separate from the exhaust gas compressor, and the one or more oxidant compressors are configured to supply all compressed oxidant utilized by the turbine combustor in generating the combustion products.

brief description of the drawings

these and other features, aspects, and advantages of the present invention will become better understood when the following detailed description is read with reference to the accompanying drawings in which like characters represent like parts throughout the drawings, wherein:

fig. 1 is a diagram of an embodiment of a system having a turbine-based service system coupled to a hydrocarbon production system;

fig. 2 is a diagram of an embodiment of the system of fig. 1 , further illustrating a control system and a combined cycle system;

fig. 3 is a diagram of an embodiment of the system of figs. 1 and 2 , further illustrating details of a gas turbine engine, exhaust gas supply system, and exhaust gas processing system;

fig. 4 is a flow chart of an embodiment of a process for operating the system of figs. 1-3 ;

fig. 5 is a diagram of an embodiment of the oxidant compression system of fig. 3 having a main oxidant compressor indirectly driven by the segr gt system via an electrical generator;

fig. 6 is a diagram of an embodiment of the oxidant compression system of fig. 3 having a main oxidant compressor directly driven by the segr gt system, and the main oxidant compressor drives an electrical generator;

fig. 7 is a diagram of an embodiment of the oxidant compression system of fig. 3 having a main oxidant compressor indirectly driven by the segr gt system via an electrical generator and a gearbox;

fig. 8 is a diagram of an embodiment of the oxidant compression system of fig. 3 having oxidant compression separated into low pressure and high pressure compressors driven by the segr gt system via an electrical generator;

fig. 9 is a diagram of an embodiment of the oxidant compression system of fig. 3 having oxidant compression separated into low pressure and high pressure compressors driven by the segr gt system via an electrical generator, the low pressure compressor being an axial flow compressor and the high pressure compressor being a centrifugal compressor;

fig. 10 is a diagram of an embodiment of the oxidant compression system of fig. 3 having oxidant compression separated into low pressure and high pressure compressors driven by the segr gt system, the low pressure compressor being directly driven by the segr gt system and the high pressure compressor being driven via the low pressure compressor, a generator, and a gearbox;

fig. 11 is a diagram of an embodiment of the oxidant compression system of fig. 3 having oxidant compression separated into low pressure and high pressure compressors driven by the segr gt system, the low pressure compressor being driven by the segr gt system via an electrical generator and the high pressure compressor being driven via the low pressure compressor and a gearbox;

fig. 12 is a diagram of an embodiment of the oxidant compression system of fig. 3 similar to the embodiment of fig. 11 , the high pressure compressor being a centrifugal compressor;

fig. 13 is a diagram of an embodiment of the oxidant compression system of fig. 3 having oxidant compression being performed by main oxidant compressors operating in parallel and driven in series by the segr gt system via an electrical generator and a gearbox;

fig. 14 is a diagram of an embodiment of the oxidant compression system of fig. 3 having oxidant compression being performed by main oxidant compressors operating in parallel, with one compressor being driven by the segr gt system via an electrical generator and a gearbox, and the other oxidant compressor being driven by an additional drive and an additional gearbox;

fig. 15 is a diagram of an embodiment of the oxidant compression system of fig. 3 having oxidant compression being performed by a low and a high pressure compressor operating in a series configuration of compression, and the low pressure compressor is driven by the segr gt system via an electrical generator, and the high pressure compressor is driven by an additional drive via a gearbox;

fig. 16 is a diagram of an embodiment of the oxidant compression system of fig. 3 similar to the embodiment of fig. 15 , with the high pressure compressor being a centrifugal compressor;

fig. 17 is a diagram of an embodiment of the oxidant compression system of fig. 3 having oxidant compression being performed by a low and a high pressure compressor operating in a series configuration of compression, and the high pressure compressor is driven by the segr gt system via an electrical generator and a gearbox, and the low pressure compressor is driven by an additional drive via an additional gearbox;

fig. 18 is a diagram of an embodiment of the oxidant compression system of fig. 3 having oxidant compression separated into low pressure and high pressure compressors driven by the segr gt system, the low pressure compressor being driven by the segr gt system via an electrical generator and the high pressure compressor being driven via the low pressure compressor and a gearbox, and a spray intercooler is positioned along a low pressure compressed oxidant flow path between the low and high pressure compressors;

fig. 19 is a diagram of an embodiment of the oxidant compression system of fig. 3 having oxidant compression separated into low pressure and high pressure compressors driven by the segr gt system, the low pressure compressor being driven by the segr gt system via an electrical generator and the high pressure compressor being driven via the low pressure compressor and a gearbox, and a cooler is positioned along a low pressure compressed oxidant flow path between the low and high pressure compressors;

fig. 20 is a diagram of an embodiment of the oxidant compression system of fig. 3 having oxidant compression separated into low pressure and high pressure compressors driven by the segr gt system, the low pressure compressor being driven by the segr gt system via an electrical generator and the high pressure compressor being driven via the low pressure compressor and a gearbox, and a steam generator and feedwater heater are positioned along a low pressure compressed oxidant flow path between the low and high pressure compressors;

fig. 21 is a diagram of an embodiment of the oxidant compression system of fig. 3 having a main oxidant compressor driven by the segr gt system via a steam turbine and an electrical generator;

fig. 22 is a diagram of an embodiment of the oxidant compression system of fig. 3 having a main oxidant compressor driven by the segr gt system via an electrical generator and a steam turbine;

fig. 23 is a diagram of an embodiment of the oxidant compression system of fig. 3 having a main oxidant compressor partially driven by the segr gt system via an electrical generator, and the main oxidant compressor is also partially driven by a steam turbine;

fig. 24 is a diagram of an embodiment of the oxidant compression system of fig. 3 having a main oxidant compressor partially driven by the segr gt system via an electrical generator, and the main oxidant compressor is also partially driven by a steam turbine via a clutch.

detailed description

one or more specific embodiments of the present invention will be described below. in an effort to provide a concise description of these embodiments, all features of an actual implementation may not be described in the specification. it should be appreciated that in the development of any such actual implementation, as in any engineering or design project, numerous implementation-specific decisions must be made to achieve the developers' specific goals, such as compliance with system-related and business-related constraints, which may vary from one implementation to another.
moreover, it should be appreciated that such a development effort might be complex and time consuming, but would nevertheless be a routine undertaking of design, fabrication, and manufacture for those of ordinary skill having the benefit of this disclosure. when introducing elements of various embodiments of the present invention, the articles “a,” “an,” “the,” and “said” are intended to mean that there are one or more of the elements. the terms “comprising,” “including,” and “having” are intended to be inclusive and mean that there may be additional elements other than the listed elements. as discussed in detail below, the disclosed embodiments relate generally to gas turbine systems with exhaust gas recirculation (egr), and particularly stoichiometric operation of the gas turbine systems using egr. for example, the gas turbine systems may be configured to recirculate the exhaust gas along an exhaust recirculation path, stoichiometrically combust fuel and oxidant along with at least some of the recirculated exhaust gas, and capture the exhaust gas for use in various target systems. the recirculation of the exhaust gas along with stoichiometric combustion may help to increase the concentration level of carbon dioxide (co 2 ) in the exhaust gas, which can then be post treated to separate and purify the co 2 and nitrogen (n 2 ) for use in various target systems. the gas turbine systems also may employ various exhaust gas processing (e.g., heat recovery, catalyst reactions, etc.) along the exhaust recirculation path, thereby increasing the concentration level of co 2 , reducing concentration levels of other emissions (e.g., carbon monoxide, nitrogen oxides, and unburnt hydrocarbons), and increasing energy recovery (e.g., with heat recovery units). 
furthermore, the gas turbine engines may be configured to utilize a separate main oxidant compression system for oxidant compression, rather than or in addition to utilizing the compressor of the gas turbine for such compression. a separate main oxidant compression system can controllably and reliably produce oxidant at desired flow rates, temperatures, pressures, and the like, which in turn helps to enhance the efficiency of combustion and the operation of various components of a turbine-based system. the turbine-based systems may, in turn, reliably and controllably produce exhaust gas having various desired parameters (e.g., composition, flow rate, pressure, temperature) for further use in a downstream process. possible target systems include pipelines, storage tanks, carbon sequestration systems, and hydrocarbon production systems, such as enhanced oil recovery (eor) systems. fig. 1 is a diagram of an embodiment of a system 10 having a hydrocarbon production system 12 associated with a turbine-based service system 14 . as discussed in further detail below, various embodiments of the turbine-based service system 14 are configured to provide various services, such as electrical power, mechanical power, and fluids (e.g., exhaust gas), to the hydrocarbon production system 12 to facilitate the production or retrieval of oil and/or gas. in the illustrated embodiment, the hydrocarbon production system 12 includes an oil/gas extraction system 16 and an enhanced oil recovery (eor) system 18 , which are coupled to a subterranean reservoir 20 (e.g., an oil, gas, or hydrocarbon reservoir). the oil/gas extraction system 16 includes a variety of surface equipment 22 , such as a christmas tree or production tree 24 , coupled to an oil/gas well 26 . furthermore, the well 26 may include one or more tubulars 28 extending through a drilled bore 30 in the earth 32 to the subterranean reservoir 20 .
the tree 24 includes one or more valves, chokes, isolation sleeves, blowout preventers, and various flow control devices, which regulate pressures and control flows to and from the subterranean reservoir 20 . while the tree 24 is generally used to control the flow of the production fluid (e.g., oil or gas) out of the subterranean reservoir 20 , the eor system 18 may increase the production of oil or gas by injecting one or more fluids into the subterranean reservoir 20 . accordingly, the eor system 18 may include a fluid injection system 34 , which has one or more tubulars 36 extending through a bore 38 in the earth 32 to the subterranean reservoir 20 . for example, the eor system 18 may route one or more fluids 40 , such as gas, steam, water, chemicals, or any combination thereof, into the fluid injection system 34 . for example, as discussed in further detail below, the eor system 18 may be coupled to the turbine-based service system 14 , such that the system 14 routes an exhaust gas 42 (e.g., substantially or entirely free of oxygen) to the eor system 18 for use as the injection fluid 40 . the fluid injection system 34 routes the fluid 40 (e.g., the exhaust gas 42 ) through the one or more tubulars 36 into the subterranean reservoir 20 , as indicated by arrows 44 . the injection fluid 40 enters the subterranean reservoir 20 through the tubular 36 at an offset distance 46 away from the tubular 28 of the oil/gas well 26 . accordingly, the injection fluid 40 displaces the oil/gas 48 disposed in the subterranean reservoir 20 , and drives the oil/gas 48 up through the one or more tubulars 28 of the hydrocarbon production system 12 , as indicated by arrows 50 . as discussed in further detail below, the injection fluid 40 may include the exhaust gas 42 originating from the turbine-based service system 14 , which is able to generate the exhaust gas 42 on-site as needed by the hydrocarbon production system 12 . 
in other words, the turbine-based system 14 may simultaneously generate one or more services (e.g., electrical power, mechanical power, steam, water (e.g., desalinated water), and exhaust gas (e.g., substantially free of oxygen)) for use by the hydrocarbon production system 12 , thereby reducing or eliminating the reliance on external sources of such services. in the illustrated embodiment, the turbine-based service system 14 includes a stoichiometric exhaust gas recirculation (segr) gas turbine system 52 and an exhaust gas (eg) processing system 54 . the gas turbine system 52 may be configured to operate in a stoichiometric combustion mode of operation (e.g., a stoichiometric control mode) and a non-stoichiometric combustion mode of operation (e.g., a non-stoichiometric control mode), such as a fuel-lean control mode or a fuel-rich control mode. in the stoichiometric control mode, the combustion generally occurs in a substantially stoichiometric ratio of a fuel and oxidant, thereby resulting in substantially stoichiometric combustion. in particular, stoichiometric combustion generally involves consuming substantially all of the fuel and oxidant in the combustion reaction, such that the products of combustion are substantially or entirely free of unburnt fuel and oxidant. one measure of stoichiometric combustion is the equivalence ratio, or phi (φ), which is the ratio of the actual fuel/oxidant ratio relative to the stoichiometric fuel/oxidant ratio. an equivalence ratio of greater than 1.0 results in a fuel-rich combustion of the fuel and oxidant, whereas an equivalence ratio of less than 1.0 results in a fuel-lean combustion of the fuel and oxidant. in contrast, an equivalence ratio of 1.0 results in combustion that is neither fuel-rich nor fuel-lean, thereby substantially consuming all of the fuel and oxidant in the combustion reaction. 
in the context of the disclosed embodiments, the term stoichiometric or substantially stoichiometric may refer to an equivalence ratio of approximately 0.95 to approximately 1.05. however, the disclosed embodiments may also include an equivalence ratio of 1.0 plus or minus 0.01, 0.02, 0.03, 0.04, 0.05, or more. again, the stoichiometric combustion of fuel and oxidant in the turbine-based service system 14 may result in products of combustion or exhaust gas (e.g., 42 ) with substantially no unburnt fuel or oxidant remaining. for example, the exhaust gas 42 may have less than 1, 2, 3, 4, or 5 percent by volume of oxidant (e.g., oxygen), unburnt fuel or hydrocarbons (e.g., hcs), nitrogen oxides (e.g., no x ), carbon monoxide (co), sulfur oxides (e.g., so x ), hydrogen, and other products of incomplete combustion. by further example, the exhaust gas 42 may have less than approximately 10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 200, 300, 400, 500, 1000, 2000, 3000, 4000, or 5000 parts per million by volume (ppmv) of oxidant (e.g., oxygen), unburnt fuel or hydrocarbons (e.g., hcs), nitrogen oxides (e.g., no x ), carbon monoxide (co), sulfur oxides (e.g., so x ), hydrogen, and other products of incomplete combustion. however, the disclosed embodiments also may produce other ranges of residual fuel, oxidant, and other emissions levels in the exhaust gas 42 . as used herein, the terms emissions, emissions levels, and emissions targets may refer to concentration levels of certain products of combustion (e.g., no x , co, so x , o 2 , n 2 , h 2 , hcs, etc.), which may be present in recirculated gas streams, vented gas streams (e.g., exhausted into the atmosphere), and gas streams used in various target systems (e.g., the hydrocarbon production system 12 ).
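the equivalence ratio and the "substantially stoichiometric" band described above can be expressed as a short calculation. the function names and the methane/air mass ratio in the sketch below are illustrative assumptions, not part of the disclosure:

```python
def equivalence_ratio(actual_fuel_oxidant_ratio, stoich_fuel_oxidant_ratio):
    """phi: the actual fuel/oxidant ratio divided by the stoichiometric ratio."""
    return actual_fuel_oxidant_ratio / stoich_fuel_oxidant_ratio

def combustion_mode(phi, tolerance=0.05):
    """classify using the approximately 0.95 to 1.05 band quoted in the text."""
    if abs(phi - 1.0) <= tolerance:
        return "substantially stoichiometric"
    return "fuel-rich" if phi > 1.0 else "fuel-lean"

# illustrative methane/air case: stoichiometric fuel/air mass ratio ~= 0.058
phi = equivalence_ratio(0.060, 0.058)
print(round(phi, 3), combustion_mode(phi))
```

as in the text, phi greater than 1.0 reads as fuel-rich, less than 1.0 as fuel-lean, and values inside the tolerance band as substantially stoichiometric.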
although the segr gas turbine system 52 and the eg processing system 54 may include a variety of components in different embodiments, the illustrated eg processing system 54 includes a heat recovery steam generator (hrsg) 56 and an exhaust gas recirculation (egr) system 58 , which receive and process an exhaust gas 60 originating from the segr gas turbine system 52 . the hrsg 56 may include one or more heat exchangers, condensers, and various heat recovery equipment, which collectively function to transfer heat from the exhaust gas 60 to a stream of water, thereby generating steam 62 . the steam 62 may be used in one or more steam turbines, the eor system 18 , or any other portion of the hydrocarbon production system 12 . for example, the hrsg 56 may generate low pressure, medium pressure, and/or high pressure steam 62 , which may be selectively applied to low, medium, and high pressure steam turbine stages, or different applications of the eor system 18 . in addition to the steam 62 , a treated water 64 , such as a desalinated water, may be generated by the hrsg 56 , the egr system 58 , and/or another portion of the eg processing system 54 or the segr gas turbine system 52 . the treated water 64 (e.g., desalinated water) may be particularly useful in areas with water shortages, such as inland or desert regions. the treated water 64 may be generated, at least in part, due to the large volume of air driving combustion of fuel within the segr gas turbine system 52 . while the on-site generation of steam 62 and water 64 may be beneficial in many applications (including the hydrocarbon production system 12 ), the on-site generation of exhaust gas 42 , 60 may be particularly beneficial for the eor system 18 , due to its low oxygen content, high pressure, and heat derived from the segr gas turbine system 52 . 
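as a rough illustration of the heat-recovery role of the hrsg 56 , a simplified steady-state energy balance relates the duty recovered from the exhaust gas 60 to the steam 62 produced. the numeric values and the constant specific heat below are illustrative assumptions only; a real hrsg design would use full steam properties and pinch analysis:

```python
def hrsg_steam_rate(m_exhaust, cp_exhaust, t_in, t_out, h_steam, h_feedwater):
    """steam mass flow (kg/s) from a steady-state heat balance: the duty
    recovered from the exhaust gas equals the enthalpy gained by the water."""
    duty_kw = m_exhaust * cp_exhaust * (t_in - t_out)   # kg/s * kJ/(kg*K) * K
    return duty_kw / (h_steam - h_feedwater)            # kW / (kJ/kg) -> kg/s

# illustrative numbers: 500 kg/s of exhaust cooled from 600 c to 150 c,
# raising steam at ~3000 kJ/kg from feedwater at ~420 kJ/kg
print(round(hrsg_steam_rate(500.0, 1.1, 600.0, 150.0, 3000.0, 420.0), 1))
```

the same balance, split across pressure levels, would apply to the low, medium, and high pressure steam 62 mentioned above.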
accordingly, the hrsg 56 , the egr system 58 , and/or another portion of the eg processing system 54 may output or recirculate an exhaust gas 66 into the segr gas turbine system 52 , while also routing the exhaust gas 42 to the eor system 18 for use with the hydrocarbon production system 12 . likewise, the exhaust gas 42 may be extracted directly from the segr gas turbine system 52 (i.e., without passing through the eg processing system 54 ) for use in the eor system 18 of the hydrocarbon production system 12 . the exhaust gas recirculation is handled by the egr system 58 of the eg processing system 54 . for example, the egr system 58 includes one or more conduits, valves, blowers, exhaust gas treatment systems (e.g., filters, particulate removal units, gas separation units, gas purification units, heat exchangers, heat recovery units, moisture removal units, catalyst units, chemical injection units, or any combination thereof), and controls to recirculate the exhaust gas along an exhaust gas circulation path from an output (e.g., discharged exhaust gas 60 ) to an input (e.g., intake exhaust gas 66 ) of the segr gas turbine system 52 . in the illustrated embodiment, the segr gas turbine system 52 intakes the exhaust gas 66 into a compressor section having one or more compressors, thereby compressing the exhaust gas 66 for use in a combustor section along with an intake of an oxidant 68 and one or more fuels 70 . the oxidant 68 may include ambient air, pure oxygen, oxygen-enriched air, oxygen-reduced air, oxygen-nitrogen mixtures, or any suitable oxidant that facilitates combustion of the fuel 70 . the fuel 70 may include one or more gas fuels, liquid fuels, or any combination thereof. for example, the fuel 70 may include natural gas, liquefied natural gas (lng), syngas, methane, ethane, propane, butane, naphtha, kerosene, diesel fuel, ethanol, methanol, biofuel, or any combination thereof. 
the segr gas turbine system 52 mixes and combusts the exhaust gas 66 , the oxidant 68 , and the fuel 70 in the combustor section, thereby generating hot combustion gases or exhaust gas 60 to drive one or more turbine stages in a turbine section. in certain embodiments, each combustor in the combustor section includes one or more premix fuel nozzles, one or more diffusion fuel nozzles, or any combination thereof. for example, each premix fuel nozzle may be configured to mix the oxidant 68 and the fuel 70 internally within the fuel nozzle and/or partially upstream of the fuel nozzle, thereby injecting an oxidant-fuel mixture from the fuel nozzle into the combustion zone for a premixed combustion (e.g., a premixed flame). by further example, each diffusion fuel nozzle may be configured to isolate the flows of oxidant 68 and fuel 70 within the fuel nozzle, thereby separately injecting the oxidant 68 and the fuel 70 from the fuel nozzle into the combustion zone for diffusion combustion (e.g., a diffusion flame). in particular, the diffusion combustion provided by the diffusion fuel nozzles delays mixing of the oxidant 68 and the fuel 70 until the point of initial combustion, i.e., the flame region. in embodiments employing the diffusion fuel nozzles, the diffusion flame may provide increased flame stability, because the diffusion flame generally forms at the point of stoichiometry between the separate streams of oxidant 68 and fuel 70 (i.e., as the oxidant 68 and fuel 70 are mixing). in certain embodiments, one or more diluents (e.g., the exhaust gas 60 , steam, nitrogen, or another inert gas) may be pre-mixed with the oxidant 68 , the fuel 70 , or both, in either the diffusion fuel nozzle or the premix fuel nozzle. in addition, one or more diluents (e.g., the exhaust gas 60 , steam, nitrogen, or another inert gas) may be injected into the combustor at or downstream from the point of combustion within each combustor. 
the use of these diluents may help temper the flame (e.g., premix flame or diffusion flame), thereby helping to reduce no x emissions, such as nitrogen monoxide (no) and nitrogen dioxide (no 2 ). regardless of the type of flame, the combustion produces hot combustion gases or exhaust gas 60 to drive one or more turbine stages. as each turbine stage is driven by the exhaust gas 60 , the segr gas turbine system 52 generates a mechanical power 72 and/or an electrical power 74 (e.g., via an electrical generator). the system 52 also outputs the exhaust gas 60 , and may further output water 64 . again, the water 64 may be a treated water, such as a desalinated water, which may be useful in a variety of applications on-site or off-site. exhaust extraction is also provided by the segr gas turbine system 52 using one or more extraction points 76 . for example, the illustrated embodiment includes an exhaust gas (eg) supply system 78 having an exhaust gas (eg) extraction system 80 and an exhaust gas (eg) treatment system 82 , which receive exhaust gas 42 from the extraction points 76 , treat the exhaust gas 42 , and then supply or distribute the exhaust gas 42 to various target systems. the target systems may include the eor system 18 and/or other systems, such as a pipeline 86 , a storage tank 88 , or a carbon sequestration system 90 . the eg extraction system 80 may include one or more conduits, valves, controls, and flow separations, which facilitate isolation of the exhaust gas 42 from the oxidant 68 , the fuel 70 , and other contaminants, while also controlling the temperature, pressure, and flow rate of the extracted exhaust gas 42 . 
the eg treatment system 82 may include one or more heat exchangers (e.g., heat recovery units such as heat recovery steam generators, condensers, coolers, or heaters), catalyst systems (e.g., oxidation catalyst systems), particulate and/or water removal systems (e.g., gas dehydration units, inertial separators, coalescing filters, water impermeable filters, and other filters), chemical injection systems, solvent based treatment systems (e.g., absorbers, flash tanks, etc.), carbon capture systems, gas separation systems, gas purification systems, exhaust gas compressors, or any combination thereof. these subsystems of the eg treatment system 82 enable control of the temperature, pressure, flow rate, moisture content (e.g., amount of water removal), particulate content (e.g., amount of particulate removal), and gas composition (e.g., percentage of co 2 , n 2 , etc.). the extracted exhaust gas 42 is treated by one or more subsystems of the eg treatment system 82 , depending on the target system. for example, the eg treatment system 82 may direct all or part of the exhaust gas 42 through a carbon capture system, a gas separation system, a gas purification system, and/or a solvent based treatment system, which is controlled to separate and purify a carbonaceous gas (e.g., carbon dioxide) 92 and/or nitrogen (n 2 ) 94 for use in the various target systems. for example, embodiments of the eg treatment system 82 may perform gas separation and purification to produce a plurality of different streams 95 of exhaust gas 42 , such as a first stream 96 , a second stream 97 , and a third stream 98 . the first stream 96 may have a first composition that is rich in carbon dioxide and/or lean in nitrogen (e.g., a co 2 rich, n 2 lean stream). the second stream 97 may have a second composition that has intermediate concentration levels of carbon dioxide and/or nitrogen (e.g., intermediate concentration co 2 , n 2 stream).
the third stream 98 may have a third composition that is lean in carbon dioxide and/or rich in nitrogen (e.g., a co 2 lean, n 2 rich stream). each stream 95 (e.g., 96 , 97 , and 98 ) may include a gas dehydration unit, a filter, a gas compressor, or any combination thereof, to facilitate delivery of the stream 95 to a target system. in certain embodiments, the co 2 rich, n 2 lean stream 96 may have a co 2 purity or concentration level of greater than approximately 70, 75, 80, 85, 90, 95, 96, 97, 98, or 99 percent by volume, and a n 2 purity or concentration level of less than approximately 1, 2, 3, 4, 5, 10, 15, 20, 25, or 30 percent by volume. in contrast, the co 2 lean, n 2 rich stream 98 may have a co 2 purity or concentration level of less than approximately 1, 2, 3, 4, 5, 10, 15, 20, 25, or 30 percent by volume, and a n 2 purity or concentration level of greater than approximately 70, 75, 80, 85, 90, 95, 96, 97, 98, or 99 percent by volume. the intermediate concentration co 2 , n 2 stream 97 may have a co 2 purity or concentration level and/or a n 2 purity or concentration level of between approximately 30 to 70, 35 to 65, 40 to 60, or 45 to 55 percent by volume. although the foregoing ranges are merely non-limiting examples, the co 2 rich, n 2 lean stream 96 and the co 2 lean, n 2 rich stream 98 may be particularly well suited for use with the eor system 18 and the other systems 84 . however, any of these rich, lean, or intermediate concentration co 2 streams 95 may be used, alone or in various combinations, with the eor system 18 and the other systems 84 . for example, the eor system 18 and the other systems 84 (e.g., the pipeline 86 , storage tank 88 , and the carbon sequestration system 90 ) each may receive one or more co 2 rich, n 2 lean streams 96 , one or more co 2 lean, n 2 rich streams 98 , one or more intermediate concentration co 2 , n 2 streams 97 , and one or more untreated exhaust gas 42 streams (i.e., bypassing the eg treatment system 82 ). 
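the three stream compositions can be summarized with a small classifier built on the representative thresholds quoted above. the helper below is hypothetical and uses only the example 70 and 30 percent-by-volume boundaries, which the text presents as non-limiting:

```python
def classify_stream(co2_vol_pct):
    """bucket a separated exhaust stream using the representative
    co2 percent-by-volume thresholds quoted in the text (example
    ranges, not limits)."""
    if co2_vol_pct > 70.0:
        return "co2 rich, n2 lean"     # e.g., stream 96
    if co2_vol_pct < 30.0:
        return "co2 lean, n2 rich"     # e.g., stream 98
    return "intermediate co2, n2"      # e.g., stream 97

print(classify_stream(95.0), classify_stream(50.0), classify_stream(5.0))
```

a delivery system could use such a classification to route each stream 95 toward the target best suited to its composition (e.g., co2 rich streams toward the eor system 18 or carbon sequestration system 90 ).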
the eg extraction system 80 extracts the exhaust gas 42 at one or more extraction points 76 along the compressor section, the combustor section, and/or the turbine section, such that the exhaust gas 42 may be used in the eor system 18 and other systems 84 at suitable temperatures and pressures. the eg extraction system 80 and/or the eg treatment system 82 also may circulate fluid flows (e.g., exhaust gas 42 ) to and from the eg processing system 54 . for example, a portion of the exhaust gas 42 passing through the eg processing system 54 may be extracted by the eg extraction system 80 for use in the eor system 18 and the other systems 84 . in certain embodiments, the eg supply system 78 and the eg processing system 54 may be independent or integral with one another, and thus may use independent or common subsystems. for example, the eg treatment system 82 may be used by both the eg supply system 78 and the eg processing system 54 . exhaust gas 42 extracted from the eg processing system 54 may undergo multiple stages of gas treatment, such as one or more stages of gas treatment in the eg processing system 54 followed by one or more additional stages of gas treatment in the eg treatment system 82 . at each extraction point 76 , the extracted exhaust gas 42 may be substantially free of oxidant 68 and fuel 70 (e.g., unburnt fuel or hydrocarbons) due to substantially stoichiometric combustion and/or gas treatment in the eg processing system 54 . furthermore, depending on the target system, the extracted exhaust gas 42 may undergo further treatment in the eg treatment system 82 of the eg supply system 78 , thereby further reducing any residual oxidant 68 , fuel 70 , or other undesirable products of combustion. 
for example, either before or after treatment in the eg treatment system 82 , the extracted exhaust gas 42 may have less than 1, 2, 3, 4, or 5 percent by volume of oxidant (e.g., oxygen), unburnt fuel or hydrocarbons (e.g., hcs), nitrogen oxides (e.g., no x ), carbon monoxide (co), sulfur oxides (e.g., so x ), hydrogen, and other products of incomplete combustion. by further example, either before or after treatment in the eg treatment system 82 , the extracted exhaust gas 42 may have less than approximately 10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 200, 300, 400, 500, 1000, 2000, 3000, 4000, or 5000 parts per million by volume (ppmv) of oxidant (e.g., oxygen), unburnt fuel or hydrocarbons (e.g., hcs), nitrogen oxides (e.g., no x ), carbon monoxide (co), sulfur oxides (e.g., so x ), hydrogen, and other products of incomplete combustion. thus, the exhaust gas 42 is particularly well suited for use with the eor system 18 . the egr operation of the turbine system 52 specifically enables the exhaust extraction at a multitude of locations 76 . for example, the compressor section of the system 52 may be used to compress the exhaust gas 66 without any oxidant 68 (i.e., only compression of the exhaust gas 66 ), such that a substantially oxygen-free exhaust gas 42 may be extracted from the compressor section and/or the combustor section prior to entry of the oxidant 68 and the fuel 70 . the extraction points 76 may be located at interstage ports between adjacent compressor stages, at ports along the compressor discharge casing, at ports along each combustor in the combustor section, or any combination thereof. in certain embodiments, the exhaust gas 66 may not mix with the oxidant 68 and fuel 70 until it reaches the head end portion and/or fuel nozzles of each combustor in the combustor section. furthermore, one or more flow separators (e.g., walls, dividers, baffles, or the like) may be used to isolate the oxidant 68 and the fuel 70 from the extraction points 76 . 
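the percent-by-volume and ppmv limits quoted above are related by a fixed conversion, since 1 percent by volume equals 10,000 parts per million by volume. a minimal sketch with an illustrative residual-oxygen check:

```python
def pct_to_ppmv(volume_percent):
    """1 percent by volume equals 10,000 parts per million by volume."""
    return volume_percent * 10_000

# a stream with 0.02 % residual oxygen is 200 ppmv, inside a 500 ppmv target
o2_ppmv = pct_to_ppmv(0.02)
print(o2_ppmv, o2_ppmv <= 500)
```

the 500 ppmv figure here is just one of the example thresholds listed above, not a required specification.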
with these flow separators, the extraction points 76 may be disposed directly along a wall of each combustor in the combustor section. once the exhaust gas 66 , oxidant 68 , and fuel 70 flow through the head end portion (e.g., through fuel nozzles) into the combustion portion (e.g., combustion chamber) of each combustor, the segr gas turbine system 52 is controlled to provide a substantially stoichiometric combustion of the exhaust gas 66 , oxidant 68 , and fuel 70 . for example, the system 52 may maintain an equivalence ratio of approximately 0.95 to approximately 1.05. as a result, the products of combustion of the mixture of exhaust gas 66 , oxidant 68 , and fuel 70 in each combustor are substantially free of oxygen and unburnt fuel. thus, the products of combustion (or exhaust gas) may be extracted from the turbine section of the segr gas turbine system 52 for use as the exhaust gas 42 routed to the eor system 18 . along the turbine section, the extraction points 76 may be located at any turbine stage, such as interstage ports between adjacent turbine stages. thus, using any of the foregoing extraction points 76 , the turbine-based service system 14 may generate, extract, and deliver the exhaust gas 42 to the hydrocarbon production system 12 (e.g., the eor system 18 ) for use in the production of oil/gas 48 from the subterranean reservoir 20 . fig. 2 is a diagram of an embodiment of the system 10 of fig. 1 , illustrating a control system 100 coupled to the turbine-based service system 14 and the hydrocarbon production system 12 . in the illustrated embodiment, the turbine-based service system 14 includes a combined cycle system 102 , which includes the segr gas turbine system 52 as a topping cycle, a steam turbine 104 as a bottoming cycle, and the hrsg 56 to recover heat from the exhaust gas 60 to generate the steam 62 for driving the steam turbine 104 .
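the stoichiometric control described above, holding the equivalence ratio near 1.0, can be sketched as a simple feedback trim on fuel flow. this is a hypothetical proportional control law for illustration only; the disclosure does not specify how the system 52 maintains the target ratio:

```python
def trim_fuel(fuel_flow, oxidant_flow, stoich_ratio, gain=0.5):
    """one proportional step nudging the fuel flow toward phi = 1.0."""
    phi = (fuel_flow / oxidant_flow) / stoich_ratio
    return fuel_flow * (1.0 + gain * (1.0 - phi))

# a fuel-lean start (phi ~= 0.86) converges toward stoichiometric flow;
# all flow values and the 0.058 stoichiometric ratio are illustrative
fuel, oxidant, stoich = 0.050, 1.0, 0.058
for _ in range(20):
    fuel = trim_fuel(fuel, oxidant, stoich)
phi = (fuel / oxidant) / stoich
print(round(phi, 3))
```

in practice such a trim would sit inside the broader control system, which also manages diluent injection and extraction flows.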
again, the segr gas turbine system 52 receives, mixes, and stoichiometrically combusts the exhaust gas 66 , the oxidant 68 , and the fuel 70 (e.g., premix and/or diffusion flames), thereby producing the exhaust gas 60 , the mechanical power 72 , the electrical power 74 , and/or the water 64 . for example, the segr gas turbine system 52 may drive one or more loads or machinery 106 , such as an electrical generator, an oxidant compressor (e.g., a main air compressor), a gear box, a pump, equipment of the hydrocarbon production system 12 , or any combination thereof. in some embodiments, the machinery 106 may include other drives, such as electrical motors or steam turbines (e.g., the steam turbine 104 ), in tandem with the segr gas turbine system 52 . accordingly, an output of the machinery 106 driven by the segr gas turbine system 52 (and any additional drives) may include the mechanical power 72 and the electrical power 74 . the mechanical power 72 and/or the electrical power 74 may be used on-site for powering the hydrocarbon production system 12 , the electrical power 74 may be distributed to the power grid, or any combination thereof. the output of the machinery 106 also may include a compressed fluid, such as a compressed oxidant 68 (e.g., air or oxygen), for intake into the combustion section of the segr gas turbine system 52 . each of these outputs (e.g., the exhaust gas 60 , the mechanical power 72 , the electrical power 74 , and/or the water 64 ) may be considered a service of the turbine-based service system 14 . the segr gas turbine system 52 produces the exhaust gas 42 , 60 , which may be substantially free of oxygen, and routes this exhaust gas 42 , 60 to the eg processing system 54 and/or the eg supply system 78 . the eg supply system 78 may treat and deliver the exhaust gas 42 (e.g., streams 95 ) to the hydrocarbon production system 12 and/or the other systems 84 .
as discussed above, the eg processing system 54 may include the hrsg 56 and the egr system 58 . the hrsg 56 may include one or more heat exchangers, condensers, and various heat recovery equipment, which may be used to recover or transfer heat from the exhaust gas 60 to water 108 to generate the steam 62 for driving the steam turbine 104 . similar to the segr gas turbine system 52 , the steam turbine 104 may drive one or more loads or machinery 106 , thereby generating the mechanical power 72 and the electrical power 74 . in the illustrated embodiment, the segr gas turbine system 52 and the steam turbine 104 are arranged in tandem to drive the same machinery 106 . however, in other embodiments, the segr gas turbine system 52 and the steam turbine 104 may separately drive different machinery 106 to independently generate mechanical power 72 and/or electrical power 74 . as the steam turbine 104 is driven by the steam 62 from the hrsg 56 , the steam 62 gradually decreases in temperature and pressure. accordingly, the steam turbine 104 recirculates the used steam 62 and/or water 108 back into the hrsg 56 for additional steam generation via heat recovery from the exhaust gas 60 . in addition to steam generation, the hrsg 56 , the egr system 58 , and/or another portion of the eg processing system 54 may produce the water 64 , the exhaust gas 42 for use with the hydrocarbon production system 12 , and the exhaust gas 66 for use as an input into the segr gas turbine system 52 . for example, the water 64 may be a treated water 64 , such as a desalinated water for use in other applications. the desalinated water may be particularly useful in regions of low water availability. regarding the exhaust gas 60 , embodiments of the eg processing system 54 may be configured to recirculate the exhaust gas 60 through the egr system 58 with or without passing the exhaust gas 60 through the hrsg 56 . 
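the heat recovery performed by the hrsg 56 follows a simple first-law balance: heat given up by the exhaust gas 60 raises steam 62 from the water 108 . the flow rates, specific heat, and enthalpies in the sketch below are hypothetical, chosen only to show the arithmetic.

```python
# first-law sketch of hrsg heat recovery; all numerical values are
# hypothetical, not taken from the disclosure.

def exhaust_heat_duty(m_gas_kg_s, cp_gas_j_kg_k, t_in_k, t_out_k):
    """heat released by the exhaust gas (w): m_dot * cp * (t_in - t_out)."""
    return m_gas_kg_s * cp_gas_j_kg_k * (t_in_k - t_out_k)

def steam_raised_kg_s(duty_w, h_steam_j_kg, h_water_j_kg):
    """steam mass flow raised by a given duty, from the enthalpy rise."""
    return duty_w / (h_steam_j_kg - h_water_j_kg)

duty = exhaust_heat_duty(m_gas_kg_s=500.0, cp_gas_j_kg_k=1100.0,
                         t_in_k=850.0, t_out_k=420.0)   # 236.5 mw
m_steam = steam_raised_kg_s(duty, h_steam_j_kg=3.2e6, h_water_j_kg=4.2e5)
```

as the steam 62 expands through the steam turbine 104 and is recirculated, the same balance runs in reverse: the enthalpy given up by the steam bounds the mechanical power 72 available from the bottoming cycle.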
in the illustrated embodiment, the segr gas turbine system 52 has an exhaust recirculation path 110 , which extends from an exhaust outlet to an exhaust inlet of the system 52 . along the path 110 , the exhaust gas 60 passes through the eg processing system 54 , which includes the hrsg 56 and the egr system 58 in the illustrated embodiment. the egr system 58 may include one or more conduits, valves, blowers, gas treatment systems (e.g., filters, particulate removal units, gas separation units, gas purification units, heat exchangers, heat recovery units such as heat recovery steam generators, moisture removal units, catalyst units, chemical injection units, or any combination thereof) in series and/or parallel arrangements along the path 110 . in other words, the egr system 58 may include any flow control components, pressure control components, temperature control components, moisture control components, and gas composition control components along the exhaust recirculation path 110 between the exhaust outlet and the exhaust inlet of the system 52 . accordingly, in embodiments with the hrsg 56 along the path 110 , the hrsg 56 may be considered a component of the egr system 58 . however, in certain embodiments, the hrsg 56 may be disposed along an exhaust path independent from the exhaust recirculation path 110 . regardless of whether the hrsg 56 is along a separate path or a common path with the egr system 58 , the hrsg 56 and the egr system 58 intake the exhaust gas 60 and output either the recirculated exhaust gas 66 , the exhaust gas 42 for use with the eg supply system 78 (e.g., for the hydrocarbon production system 12 and/or other systems 84 ), or another output of exhaust gas. 
again, the segr gas turbine system 52 intakes, mixes, and stoichiometrically combusts the exhaust gas 66 , the oxidant 68 , and the fuel 70 (e.g., premixed and/or diffusion flames) to produce a substantially oxygen-free and fuel-free exhaust gas 60 for distribution to the eg processing system 54 , the hydrocarbon production system 12 , or other systems 84 . as noted above with reference to fig. 1 , the hydrocarbon production system 12 may include a variety of equipment to facilitate the recovery or production of oil/gas 48 from a subterranean reservoir 20 through an oil/gas well 26 . for example, the hydrocarbon production system 12 may include the eor system 18 having the fluid injection system 34 . in the illustrated embodiment, the fluid injection system 34 includes an exhaust gas injection eor system 112 and a steam injection eor system 114 . although the fluid injection system 34 may receive fluids from a variety of sources, the illustrated embodiment may receive the exhaust gas 42 and the steam 62 from the turbine-based service system 14 . the exhaust gas 42 and/or the steam 62 produced by the turbine-based service system 14 also may be routed to the hydrocarbon production system 12 for use in other oil/gas systems 116 . the quantity, quality, and flow of the exhaust gas 42 and/or the steam 62 may be controlled by the control system 100 . the control system 100 may be dedicated entirely to the turbine-based service system 14 , or the control system 100 may optionally also provide control (or at least some data to facilitate control) for the hydrocarbon production system 12 and/or other systems 84 . in the illustrated embodiment, the control system 100 includes a controller 118 having a processor 120 , a memory 122 , a steam turbine control 124 , a segr gas turbine system control 126 , and a machinery control 128 . 
the processor 120 may include a single processor or two or more redundant processors, such as triple redundant processors for control of the turbine-based service system 14 . the memory 122 may include volatile and/or non-volatile memory. for example, the memory 122 may include one or more hard drives, flash memory, read-only memory, random access memory, or any combination thereof. the controls 124 , 126 , and 128 may include software and/or hardware controls. for example, the controls 124 , 126 , and 128 may include various instructions or code stored on the memory 122 and executable by the processor 120 . the control 124 is configured to control operation of the steam turbine 104 , the segr gas turbine system control 126 is configured to control the system 52 , and the machinery control 128 is configured to control the machinery 106 . thus, the controller 118 (e.g., controls 124 , 126 , and 128 ) may be configured to coordinate various sub-systems of the turbine-based service system 14 to provide a suitable stream of the exhaust gas 42 to the hydrocarbon production system 12 . in certain embodiments of the control system 100 , each element (e.g., system, subsystem, and component) illustrated in the drawings or described herein includes (e.g., directly within, upstream, or downstream of such element) one or more industrial control features, such as sensors and control devices, which are communicatively coupled with one another over an industrial control network along with the controller 118 . for example, the control devices associated with each element may include a dedicated device controller (e.g., including a processor, memory, and control instructions), one or more actuators, valves, switches, and industrial control equipment, which enable control based on sensor feedback 130 , control signals from the controller 118 , control signals from a user, or any combination thereof. 
thus, any of the control functionality described herein may be implemented with control instructions stored and/or executable by the controller 118 , dedicated device controllers associated with each element, or a combination thereof. in order to facilitate such control functionality, the control system 100 includes one or more sensors distributed throughout the system 10 to obtain the sensor feedback 130 for use in execution of the various controls, e.g., the controls 124 , 126 , and 128 . for example, the sensor feedback 130 may be obtained from sensors distributed throughout the segr gas turbine system 52 , the machinery 106 , the eg processing system 54 , the steam turbine 104 , the hydrocarbon production system 12 , or any other components throughout the turbine-based service system 14 or the hydrocarbon production system 12 . for example, the sensor feedback 130 may include temperature feedback, pressure feedback, flow rate feedback, flame temperature feedback, combustion dynamics feedback, intake oxidant composition feedback, intake fuel composition feedback, exhaust composition feedback, the output level of mechanical power 72 , the output level of electrical power 74 , the output quantity of the exhaust gas 42 , 60 , the output quantity or quality of the water 64 , or any combination thereof. for example, the sensor feedback 130 may include a composition of the exhaust gas 42 , 60 to facilitate stoichiometric combustion in the segr gas turbine system 52 . for example, the sensor feedback 130 may include feedback from one or more intake oxidant sensors along an oxidant supply path of the oxidant 68 , one or more intake fuel sensors along a fuel supply path of the fuel 70 , and one or more exhaust emissions sensors disposed along the exhaust recirculation path 110 and/or within the segr gas turbine system 52 . 
the intake oxidant sensors, intake fuel sensors, and exhaust emissions sensors may include temperature sensors, pressure sensors, flow rate sensors, and composition sensors. the emissions sensors may include sensors for nitrogen oxides (e.g., no x sensors), carbon oxides (e.g., co sensors and co 2 sensors), sulfur oxides (e.g., so x sensors), hydrogen (e.g., h 2 sensors), oxygen (e.g., o 2 sensors), unburnt hydrocarbons (e.g., hc sensors), or other products of incomplete combustion, or any combination thereof. using this feedback 130 , the control system 100 may adjust (e.g., increase, decrease, or maintain) the intake flow of exhaust gas 66 , oxidant 68 , and/or fuel 70 into the segr gas turbine system 52 (among other operational parameters) to maintain the equivalence ratio within a suitable range, e.g., between approximately 0.95 to approximately 1.05, between approximately 0.95 to approximately 1.0, between approximately 1.0 to approximately 1.05, or substantially at 1.0. for example, the control system 100 may analyze the feedback 130 to monitor the exhaust emissions (e.g., concentration levels of nitrogen oxides, carbon oxides such as co and co 2 , sulfur oxides, hydrogen, oxygen, unburnt hydrocarbons, and other products of incomplete combustion) and/or determine the equivalence ratio, and then control one or more components to adjust the exhaust emissions (e.g., concentration levels in the exhaust gas 42 ) and/or the equivalence ratio. the controlled components may include any of the components illustrated and described with reference to the drawings, including but not limited to, valves along the supply paths for the oxidant 68 , the fuel 70 , and the exhaust gas 66 ; an oxidant compressor, a fuel pump, or any components in the eg processing system 54 ; any components of the segr gas turbine system 52 , or any combination thereof.
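one step of the feedback loop described above might look like the following sketch. the proportional correction, the gain, and the use of the o2/co imbalance as a lean/rich indicator are hypothetical simplifications, not the disclosed control law.

```python
# hypothetical one-step fuel trim from the exhaust emissions feedback 130:
# excess o2 suggests fuel-lean operation (trim fuel up); excess co
# suggests fuel-rich operation (trim fuel down). the gain is illustrative.

def trim_fuel_flow(fuel_flow, o2_ppmv, co_ppmv, gain=1e-6):
    correction = gain * (o2_ppmv - co_ppmv)  # > 0 when lean, < 0 when rich
    return fuel_flow * (1.0 + correction)

trim_fuel_flow(1.0, o2_ppmv=800.0, co_ppmv=50.0)   # lean: flow increases
trim_fuel_flow(1.0, o2_ppmv=20.0, co_ppmv=2000.0)  # rich: flow decreases
```

a real controller would filter the sensor feedback and combine it with the measured equivalence ratio and flow-rate feedback, but the sign convention is the essential point.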
the controlled components may adjust (e.g., increase, decrease, or maintain) the flow rates, temperatures, pressures, or percentages (e.g., equivalence ratio) of the oxidant 68 , the fuel 70 , and the exhaust gas 66 that combust within the segr gas turbine system 52 . the controlled components also may include one or more gas treatment systems, such as catalyst units (e.g., oxidation catalyst units), supplies for the catalyst units (e.g., oxidation fuel, heat, electricity, etc.), gas purification and/or separation units (e.g., solvent based separators, absorbers, flash tanks, etc.), and filtration units. the gas treatment systems may help reduce various exhaust emissions along the exhaust recirculation path 110 , a vent path (e.g., exhausted into the atmosphere), or an extraction path to the eg supply system 78 . in certain embodiments, the control system 100 may analyze the feedback 130 and control one or more components to maintain or reduce emissions levels (e.g., concentration levels in the exhaust gas 42 , 60 , 95 ) to a target range, such as less than approximately 10, 20, 30, 40, 50, 100, 200, 300, 400, 500, 1000, 2000, 3000, 4000, 5000, or 10000 parts per million by volume (ppmv). these target ranges may be the same or different for each of the exhaust emissions, e.g., concentration levels of nitrogen oxides, carbon monoxide, sulfur oxides, hydrogen, oxygen, unburnt hydrocarbons, and other products of incomplete combustion. for example, depending on the equivalence ratio, the control system 100 may selectively control exhaust emissions (e.g., concentration levels) of oxidant (e.g., oxygen) within a target range of less than approximately 10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 250, 500, 750, or 1000 ppmv; carbon monoxide (co) within a target range of less than approximately 20, 50, 100, 200, 500, 1000, 2500, or 5000 ppmv; and nitrogen oxides (no x ) within a target range of less than approximately 50, 100, 200, 300, 400, or 500 ppmv. 
in certain embodiments operating with a substantially stoichiometric equivalence ratio, the control system 100 may selectively control exhaust emissions (e.g., concentration levels) of oxidant (e.g., oxygen) within a target range of less than approximately 10, 20, 30, 40, 50, 60, 70, 80, 90, or 100 ppmv; and carbon monoxide (co) within a target range of less than approximately 500, 1000, 2000, 3000, 4000, or 5000 ppmv. in certain embodiments operating with a fuel-lean equivalence ratio (e.g., between approximately 0.95 to 1.0), the control system 100 may selectively control exhaust emissions (e.g., concentration levels) of oxidant (e.g., oxygen) within a target range of less than approximately 500, 600, 700, 800, 900, 1000, 1100, 1200, 1300, 1400, or 1500 ppmv; carbon monoxide (co) within a target range of less than approximately 10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 150, or 200 ppmv; and nitrogen oxides (e.g., no x ) within a target range of less than approximately 50, 100, 150, 200, 250, 300, 350, or 400 ppmv. the foregoing target ranges are merely examples, and are not intended to limit the scope of the disclosed embodiments. the control system 100 also may be coupled to a local interface 132 and a remote interface 134 . for example, the local interface 132 may include a computer workstation disposed on-site at the turbine-based service system 14 and/or the hydrocarbon production system 12 . in contrast, the remote interface 134 may include a computer workstation disposed off-site from the turbine-based service system 14 and the hydrocarbon production system 12 , such as through an internet connection. these interfaces 132 and 134 facilitate monitoring and control of the turbine-based service system 14 , such as through one or more graphical displays of sensor feedback 130 , operational parameters, and so forth. 
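the mode-dependent target ranges quoted above can be encoded as a lookup plus a compliance check. the single limit chosen per species below is one value picked from the listed ranges, for illustration only.

```python
# compliance check against mode-dependent emissions targets (ppmv).
# each limit is a single example value taken from the ranges in the text.

TARGETS_PPMV = {
    "stoichiometric": {"o2": 100, "co": 5000},
    "fuel_lean": {"o2": 1500, "co": 200, "nox": 400},
}

def within_targets(mode, measured_ppmv):
    """true when every limited species is at or below its target."""
    limits = TARGETS_PPMV[mode]
    return all(measured_ppmv.get(species, 0.0) <= limit
               for species, limit in limits.items())

within_targets("stoichiometric", {"o2": 40, "co": 1200})         # true
within_targets("fuel_lean", {"o2": 1600, "co": 50, "nox": 100})  # false: o2 high
```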
again, as noted above, the controller 118 includes a variety of controls 124 , 126 , and 128 to facilitate control of the turbine-based service system 14 . the steam turbine control 124 may receive the sensor feedback 130 and output control commands to facilitate operation of the steam turbine 104 . for example, the steam turbine control 124 may receive the sensor feedback 130 from the hrsg 56 , the machinery 106 , temperature and pressure sensors along a path of the steam 62 , temperature and pressure sensors along a path of the water 108 , and various sensors indicative of the mechanical power 72 and the electrical power 74 . likewise, the segr gas turbine system control 126 may receive sensor feedback 130 from one or more sensors disposed along the segr gas turbine system 52 , the machinery 106 , the eg processing system 54 , or any combination thereof. for example, the sensor feedback 130 may be obtained from temperature sensors, pressure sensors, clearance sensors, vibration sensors, flame sensors, fuel composition sensors, exhaust gas composition sensors, or any combination thereof, disposed within or external to the segr gas turbine system 52 . finally, the machinery control 128 may receive sensor feedback 130 from various sensors associated with the mechanical power 72 and the electrical power 74 , as well as sensors disposed within the machinery 106 . each of these controls 124 , 126 , and 128 uses the sensor feedback 130 to improve operation of the turbine-based service system 14 . in the illustrated embodiment, the segr gas turbine system control 126 may execute instructions to control the quantity and quality of the exhaust gas 42 , 60 , 95 in the eg processing system 54 , the eg supply system 78 , the hydrocarbon production system 12 , and/or the other systems 84 . 
for example, the segr gas turbine system control 126 may maintain a level of oxidant (e.g., oxygen) and/or unburnt fuel in the exhaust gas 60 below a threshold suitable for use with the exhaust gas injection eor system 112 . in certain embodiments, the threshold levels may be less than 1, 2, 3, 4, or 5 percent of oxidant (e.g., oxygen) and/or unburnt fuel by volume of the exhaust gas 42 , 60 ; or the threshold levels of oxidant (e.g., oxygen) and/or unburnt fuel (and other exhaust emissions) may be less than approximately 10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 200, 300, 400, 500, 1000, 2000, 3000, 4000, or 5000 parts per million by volume (ppmv) in the exhaust gas 42 , 60 . by further example, in order to achieve these low levels of oxidant (e.g., oxygen) and/or unburnt fuel, the segr gas turbine system control 126 may maintain an equivalence ratio for combustion in the segr gas turbine system 52 between approximately 0.95 and approximately 1.05. the segr gas turbine system control 126 also may control the eg extraction system 80 and the eg treatment system 82 to maintain the temperature, pressure, flow rate, and gas composition of the exhaust gas 42 , 60 , 95 within suitable ranges for the exhaust gas injection eor system 112 , the pipeline 86 , the storage tank 88 , and the carbon sequestration system 90 . as discussed above, the eg treatment system 82 may be controlled to purify and/or separate the exhaust gas 42 into one or more gas streams 95 , such as the co 2 rich, n 2 lean stream 96 , the intermediate concentration co 2 , n 2 stream 97 , and the co 2 lean, n 2 rich stream 98 . in addition to controls for the exhaust gas 42 , 60 , and 95 , the controls 124 , 126 , and 128 may execute one or more instructions to maintain the mechanical power 72 within a suitable power range, or maintain the electrical power 74 within a suitable frequency and power range. fig. 
3 is a diagram of an embodiment of the system 10 , further illustrating details of the segr gas turbine system 52 for use with the hydrocarbon production system 12 and/or other systems 84 . in the illustrated embodiment, the segr gas turbine system 52 includes a gas turbine engine 150 coupled to the eg processing system 54 . the illustrated gas turbine engine 150 includes a compressor section 152 , a combustor section 154 , and an expander section or turbine section 156 . the compressor section 152 includes one or more exhaust gas compressors or compressor stages 158 , such as 1 to 20 stages of rotary compressor blades disposed in a series arrangement. likewise, the combustor section 154 includes one or more combustors 160 , such as 1 to 20 combustors 160 distributed circumferentially about a rotational axis 162 of the segr gas turbine system 52 . furthermore, each combustor 160 may include one or more fuel nozzles 164 configured to inject the exhaust gas 66 , the oxidant 68 , and/or the fuel 70 . for example, a head end portion 166 of each combustor 160 may house 1, 2, 3, 4, 5, 6, or more fuel nozzles 164 , which may inject streams or mixtures of the exhaust gas 66 , the oxidant 68 , and/or the fuel 70 into a combustion portion 168 (e.g., combustion chamber) of the combustor 160 . the fuel nozzles 164 may include any combination of premix fuel nozzles 164 (e.g., configured to premix the oxidant 68 and fuel 70 for generation of an oxidant/fuel premix flame) and/or diffusion fuel nozzles 164 (e.g., configured to inject separate flows of the oxidant 68 and fuel 70 for generation of an oxidant/fuel diffusion flame). embodiments of the premix fuel nozzles 164 may include swirl vanes, mixing chambers, or other features to internally mix the oxidant 68 and fuel 70 within the nozzles 164 , prior to injection and combustion in the combustion chamber 168 . the premix fuel nozzles 164 also may receive at least some partially mixed oxidant 68 and fuel 70 .
in certain embodiments, each diffusion fuel nozzle 164 may isolate flows of the oxidant 68 and the fuel 70 until the point of injection, while also isolating flows of one or more diluents (e.g., the exhaust gas 66 , steam, nitrogen, or another inert gas) until the point of injection. in other embodiments, each diffusion fuel nozzle 164 may isolate flows of the oxidant 68 and the fuel 70 until the point of injection, while partially mixing one or more diluents (e.g., the exhaust gas 66 , steam, nitrogen, or another inert gas) with the oxidant 68 and/or the fuel 70 prior to the point of injection. in addition, one or more diluents (e.g., the exhaust gas 66 , steam, nitrogen, or another inert gas) may be injected into the combustor (e.g., into the hot products of combustion) either at or downstream from the combustion zone, thereby helping to reduce the temperature of the hot products of combustion and reduce emissions of no x (e.g., no and no 2 ). regardless of the type of fuel nozzle 164 , the segr gas turbine system 52 may be controlled to provide substantially stoichiometric combustion of the oxidant 68 and fuel 70 . in diffusion combustion embodiments using the diffusion fuel nozzles 164 , the fuel 70 and oxidant 68 generally do not mix upstream from the diffusion flame, but rather the fuel 70 and oxidant 68 mix and react directly at the flame surface and/or the flame surface exists at the location of mixing between the fuel 70 and oxidant 68 . in particular, the fuel 70 and oxidant 68 separately approach the flame surface (or diffusion boundary/interface), and then diffuse (e.g., via molecular and viscous diffusion) along the flame surface (or diffusion boundary/interface) to generate the diffusion flame. 
it is noteworthy that the fuel 70 and oxidant 68 may be at a substantially stoichiometric ratio along this flame surface (or diffusion boundary/interface), which may result in a greater flame temperature (e.g., a peak flame temperature) along this flame surface. the stoichiometric fuel/oxidant ratio generally results in a greater flame temperature (e.g., a peak flame temperature), as compared with a fuel-lean or fuel-rich fuel/oxidant ratio. as a result, the diffusion flame may be substantially more stable than a premix flame, because the diffusion of fuel 70 and oxidant 68 helps to maintain a stoichiometric ratio (and greater temperature) along the flame surface. although greater flame temperatures can also lead to greater exhaust emissions, such as no x emissions, the disclosed embodiments use one or more diluents to help control the temperature and emissions while still avoiding any premixing of the fuel 70 and oxidant 68 . for example, the disclosed embodiments may introduce one or more diluents separate from the fuel 70 and oxidant 68 (e.g., after the point of combustion and/or downstream from the diffusion flame), thereby helping to reduce the temperature and reduce the emissions (e.g., no x emissions) produced by the diffusion flame. in operation, as illustrated, the compressor section 152 receives and compresses the exhaust gas 66 from the eg processing system 54 , and outputs a compressed exhaust gas 170 to each of the combustors 160 in the combustor section 154 . upon combustion of the fuel 70 , oxidant 68 , and exhaust gas 170 within each combustor 160 , additional exhaust gas or products of combustion 172 (i.e., combustion gas) is routed into the turbine section 156 . similar to the compressor section 152 , the turbine section 156 includes one or more turbines or turbine stages 174 , which may include a series of rotary turbine blades.
these turbine blades are then driven by the products of combustion 172 generated in the combustor section 154 , thereby driving rotation of a shaft 176 coupled to the machinery 106 . again, the machinery 106 may include a variety of equipment coupled to either end of the segr gas turbine system 52 , such as machinery 106 , 178 coupled to the turbine section 156 and/or machinery 106 , 180 coupled to the compressor section 152 . in certain embodiments, the machinery 106 , 178 , 180 may include one or more electrical generators, oxidant compressors for the oxidant 68 , fuel pumps for the fuel 70 , gear boxes, or additional drives (e.g. steam turbine 104 , electrical motor, etc.) coupled to the segr gas turbine system 52 . non-limiting examples are discussed in further detail below with reference to table 1. as illustrated, the turbine section 156 outputs the exhaust gas 60 to recirculate along the exhaust recirculation path 110 from an exhaust outlet 182 of the turbine section 156 to an exhaust inlet 184 into the compressor section 152 . along the exhaust recirculation path 110 , the exhaust gas 60 passes through the eg processing system 54 (e.g., the hrsg 56 and/or the egr system 58 ) as discussed in detail above. again, each combustor 160 in the combustor section 154 receives, mixes, and stoichiometrically combusts the compressed exhaust gas 170 , the oxidant 68 , and the fuel 70 to produce the additional exhaust gas or products of combustion 172 to drive the turbine section 156 . in certain embodiments, the oxidant 68 is compressed by an oxidant compression system 186 , such as a main oxidant compression (moc) system (e.g., a main air compression (mac) system) having one or more oxidant compressors (mocs). the oxidant compression system 186 includes an oxidant compressor 188 coupled to a drive 190 . for example, the drive 190 may include an electric motor, a combustion engine, or any combination thereof. 
in certain embodiments, the drive 190 may be a turbine engine, such as the gas turbine engine 150 . accordingly, the oxidant compression system 186 may be an integral part of the machinery 106 . in other words, the compressor 188 may be directly or indirectly driven by the mechanical power 72 supplied by the shaft 176 of the gas turbine engine 150 . in such an embodiment, the drive 190 may be excluded, because the compressor 188 relies on the power output from the turbine engine 150 . however, in certain embodiments in which more than one oxidant compressor is employed, a first oxidant compressor (e.g., a low pressure (lp) oxidant compressor) may be driven by the drive 190 while the shaft 176 drives a second oxidant compressor (e.g., a high pressure (hp) oxidant compressor), or vice versa. for example, in another embodiment, the hp moc is driven by the drive 190 and the lp oxidant compressor is driven by the shaft 176 . in the illustrated embodiment, the oxidant compression system 186 is separate from the machinery 106 . in each of these embodiments, the compression system 186 compresses and supplies the oxidant 68 to the fuel nozzles 164 and the combustors 160 . accordingly, some or all of the machinery 106 , 178 , 180 may be configured to increase the operational efficiency of the compression system 186 (e.g., the compressor 188 and/or additional compressors). the variety of components of the machinery 106 , indicated by element numbers 106 a, 106 b, 106 c, 106 d, 106 e, and 106 f, may be disposed along the line of the shaft 176 and/or parallel to the line of the shaft 176 in one or more series arrangements, parallel arrangements, or any combination of series and parallel arrangements.
for example, the machinery 106 , 178 , 180 (e.g., 106 a through 106 f) may include any series and/or parallel arrangement, in any order, of: one or more gearboxes (e.g., parallel shaft, epicyclic gearboxes), one or more compressors (e.g., oxidant compressors, booster compressors such as eg booster compressors), one or more power generation units (e.g., electrical generators), one or more drives (e.g., steam turbine engines, electrical motors), heat exchange units (e.g., direct or indirect heat exchangers), clutches, or any combination thereof. the compressors may include axial compressors, radial or centrifugal compressors, or any combination thereof, each having one or more compression stages. regarding the heat exchangers, direct heat exchangers may include spray coolers (e.g., spray intercoolers), which inject a liquid spray into a gas flow (e.g., oxidant flow) for direct cooling of the gas flow. indirect heat exchangers may include at least one wall (e.g., a shell and tube heat exchanger) separating first and second flows, such as a fluid flow (e.g., oxidant flow) separated from a coolant flow (e.g., water, air, refrigerant, or any other liquid or gas coolant), wherein the coolant flow transfers heat from the fluid flow without any direct contact. examples of indirect heat exchangers include intercooler heat exchangers and heat recovery units, such as heat recovery steam generators. the heat exchangers also may include heaters. as discussed in further detail below, each of these machinery components may be used in various combinations as indicated by the non-limiting examples set forth in table 1. generally, the machinery 106 , 178 , 180 may be configured to increase the efficiency of the compression system 186 by, for example, adjusting operational speeds of one or more oxidant compressors in the system 186 , facilitating compression of the oxidant 68 through cooling, and/or extraction of surplus power. 
the disclosed embodiments are intended to include any and all permutations of the foregoing components in the machinery 106 , 178 , 180 in series and parallel arrangements, wherein one, more than one, all, or none of the components derive power from the shaft 176 . as illustrated below, table 1 depicts some non-limiting examples of arrangements of the machinery 106 , 178 , 180 disposed proximate and/or coupled to the compressor and turbine sections 152 , 156 .

table 1

  106a     106b            106c     106d     106e   106f
  moc      gen
  moc      gbx             gen
  lp moc   hp moc          gen
  hp moc   gbx             lp moc   gen
  moc      gbx             gen
  moc      hp moc          gbx      gen      lp moc
  moc      gbx             gen
  moc      gbx             drv
  drv      gbx             lp moc   hp moc   gbx    gen
  drv      gbx             hp moc   lp moc   gen
  hp moc   gbx, clr        lp moc   gen
  hp moc   gbx, clr        lp moc   gbx      gen
  hp moc   gbx, htr, stgn  lp moc   gen
  moc      gen             drv
  moc      drv             gen
  drv      moc             gen
  drv      clu             moc      gen
  drv      clu             moc      gbx      gen

as illustrated above in table 1, a cooling unit is represented as clr, a clutch is represented as clu, a drive is represented by drv, a gearbox is represented as gbx, a generator is represented by gen, a heating unit is represented by htr, a main oxidant compressor unit is represented by moc, with low pressure and high pressure variants being represented as lp moc and hp moc, respectively, and a steam generator unit is represented as stgn. although table 1 illustrates the machinery 106 , 178 , 180 in sequence toward the compressor section 152 or the turbine section 156 , table 1 is also intended to cover the reverse sequence of the machinery 106 , 178 , 180 . in table 1, any cell including two or more components is intended to cover a parallel arrangement of the components. table 1 is not intended to exclude any non-illustrated permutations of the machinery 106 , 178 , 180 . these components of the machinery 106 , 178 , 180 may enable feedback control of temperature, pressure, and flow rate of the oxidant 68 sent to the gas turbine engine 150 .
as discussed in further detail below, the oxidant 68 and the fuel 70 may be supplied to the gas turbine engine 150 at locations specifically selected to facilitate isolation and extraction of the compressed exhaust gas 170 without any oxidant 68 or fuel 70 degrading the quality of the exhaust gas 170 . the eg supply system 78 , as illustrated in fig. 3 , is disposed between the gas turbine engine 150 and the target systems (e.g., the hydrocarbon production system 12 and the other systems 84 ). in particular, the eg supply system 78 (e.g., the eg extraction system (eges) 80 ) may be coupled to the gas turbine engine 150 at one or more extraction points 76 along the compressor section 152 , the combustor section 154 , and/or the turbine section 156 . for example, the extraction points 76 may be located between adjacent compressor stages, such as 2, 3, 4, 5, 6, 7, 8, 9, or 10 interstage extraction points 76 between compressor stages. each of these interstage extraction points 76 provides a different temperature and pressure of the extracted exhaust gas 42 . similarly, the extraction points 76 may be located between adjacent turbine stages, such as 2, 3, 4, 5, 6, 7, 8, 9, or 10 interstage extraction points 76 between turbine stages. each of these interstage extraction points 76 provides a different temperature and pressure of the extracted exhaust gas 42 . by further example, the extraction points 76 may be located at a multitude of locations throughout the combustor section 154 , which may provide different temperatures, pressures, flow rates, and gas compositions. each of these extraction points 76 may include an eg extraction conduit, one or more valves, sensors, and controls, which may be used to selectively control the flow of the extracted exhaust gas 42 to the eg supply system 78 .
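because each extraction point 76 supplies exhaust gas at its own temperature and pressure, selecting a point for a given target condition is essentially a lookup over the available points. the following sketch is illustrative only: the point names, temperatures, and pressures are invented for the example, and the scoring rule is a simple normalized distance, not anything specified in the disclosure:

```python
# Illustrative sketch: choose the extraction point 76 whose temperature and
# pressure best match a target condition. All values are made up.

POINTS = [
    # (name, temperature_K, pressure_kPa)
    ("compressor interstage 2-3", 450.0, 400.0),
    ("compressor interstage 5-6", 600.0, 900.0),
    ("combustor section", 900.0, 1800.0),
    ("turbine interstage 1-2", 800.0, 1200.0),
]

def best_extraction_point(target_temp, target_pressure):
    """Return the name of the point minimizing a normalized distance."""
    def score(point):
        _, temp, pres = point
        return (abs(temp - target_temp) / target_temp
                + abs(pres - target_pressure) / target_pressure)
    return min(POINTS, key=score)[0]

print(best_extraction_point(820.0, 1100.0))
```

in practice the valves, sensors, and controls at each point would feed such a selection continuously rather than as a one-shot lookup.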
the extracted exhaust gas 42 , which is distributed by the eg supply system 78 , has a controlled composition suitable for the target systems (e.g., the hydrocarbon production system 12 and the other systems 84 ). for example, at each of these extraction points 76 , the exhaust gas 170 may be substantially isolated from injection points (or flows) of the oxidant 68 and the fuel 70 . in other words, the eg supply system 78 may be specifically designed to extract the exhaust gas 170 from the gas turbine engine 150 without any added oxidant 68 or fuel 70 . furthermore, in view of the stoichiometric combustion in each of the combustors 160 , the extracted exhaust gas 42 may be substantially free of oxygen and fuel. the eg supply system 78 may route the extracted exhaust gas 42 directly or indirectly to the hydrocarbon production system 12 and/or other systems 84 for use in various processes, such as enhanced oil recovery, carbon sequestration, storage, or transport to an offsite location. however, in certain embodiments, the eg supply system 78 includes the eg treatment system (egts) 82 for further treatment of the exhaust gas 42 , prior to use with the target systems. for example, the eg treatment system 82 may purify and/or separate the exhaust gas 42 into one or more streams 95 , such as the co 2 rich, n 2 lean stream 96 , the intermediate concentration co 2 , n 2 stream 97 , and the co 2 lean, n 2 rich stream 98 . these treated exhaust gas streams 95 may be used individually, or in any combination, with the hydrocarbon production system 12 and the other systems 84 (e.g., the pipeline 86 , the storage tank 88 , and the carbon sequestration system 90 ). similar to the exhaust gas treatments performed in the eg supply system 78 , the eg processing system 54 may include a plurality of exhaust gas (eg) treatment components 192 , such as indicated by element numbers 194 , 196 , 198 , 200 , 202 , 204 , 206 , 208 , and 210 . 
these eg treatment components 192 (e.g., 194 through 210 ) may be disposed along the exhaust recirculation path 110 in one or more series arrangements, parallel arrangements, or any combination of series and parallel arrangements. for example, the eg treatment components 192 (e.g., 194 through 210 ) may include any series and/or parallel arrangement, in any order, of: one or more heat exchangers (e.g., heat recovery units such as heat recovery steam generators, condensers, coolers, or heaters), catalyst systems (e.g., oxidation catalyst systems), particulate and/or water removal systems (e.g., inertial separators, coalescing filters, water impermeable filters, and other filters), chemical injection systems, solvent based treatment systems (e.g., absorbers, flash tanks, etc.), carbon capture systems, gas separation systems, and/or gas purification systems, or any combination thereof. in certain embodiments, the catalyst systems may include an oxidation catalyst, a carbon monoxide reduction catalyst, a nitrogen oxides reduction catalyst, an aluminum oxide, a zirconium oxide, a silicon oxide, a titanium oxide, a platinum oxide, a palladium oxide, a cobalt oxide, or a mixed metal oxide, or a combination thereof. the disclosed embodiments are intended to include any and all permutations of the foregoing components 192 in series and parallel arrangements. as illustrated below, table 2 depicts some non-limiting examples of arrangements of the components 192 along the exhaust recirculation path 110 .
[table 2 is flattened in this copy of the source. its columns are the eg treatment components 194 , 196 , 198 , 200 , 202 , 204 , 206 , 208 , and 210 , and each row lists one example arrangement along the exhaust recirculation path 110 using the abbreviations defined below; the first row, for example, is cu-hru-bb-mru-pru.]

as illustrated above in table 2, a catalyst unit is represented by cu, an oxidation catalyst unit is represented by ocu, a booster blower is represented by bb, a heat exchanger is represented by he, a heat recovery unit is represented by hru, a heat recovery steam generator is represented by hrsg, a condenser is represented by cond, a steam turbine is represented by st, a particulate removal unit is represented by pru, a moisture removal unit is represented by mru, a water removal unit is represented by wru, a filter is represented by fil, a coalescing filter is represented by cfil, a water impermeable filter is represented by wfil, an inertial separator is represented by iner, and a diluent supply system (e.g., steam, nitrogen, or other inert gas) is represented by dil. although table 2 illustrates the components 192 in sequence from the exhaust outlet 182 of the turbine section 156 toward the exhaust inlet 184 of the compressor section 152 , table 2 is also intended to cover the reverse sequence of the illustrated components 192 . in table 2, any cell including two or more components is intended to cover an integrated unit with the components, a parallel arrangement of the components, or any combination thereof. furthermore, in context of table 2, the hru, the hrsg, and the cond are examples of the he; the hrsg is an example of the hru; the cond, wfil, and cfil are examples of the wru; the iner, fil, wfil, and cfil are examples of the pru; and the wfil and cfil are examples of the fil. again, table 2 is not intended to exclude any non-illustrated permutations of the components 192 .
in certain embodiments, the illustrated components 192 (e.g., 194 through 210 ) may be partially or completely integrated within the hrsg 56 , the egr system 58 , or any combination thereof. these eg treatment components 192 may enable feedback control of temperature, pressure, flow rate, and gas composition, while also removing moisture and particulates from the exhaust gas 60 . furthermore, the treated exhaust gas 60 may be extracted at one or more extraction points 76 for use in the eg supply system 78 and/or recirculated to the exhaust inlet 184 of the compressor section 152 . as the treated, recirculated exhaust gas 66 passes through the compressor section 152 , the segr gas turbine system 52 may bleed off a portion of the compressed exhaust gas along one or more lines 212 (e.g., bleed conduits or bypass conduits). each line 212 may route the exhaust gas into one or more heat exchangers 214 (e.g., cooling units), thereby cooling the exhaust gas for recirculation back into the segr gas turbine system 52 . for example, after passing through the heat exchanger 214 , a portion of the cooled exhaust gas may be routed to the turbine section 156 along line 212 for cooling and/or sealing of the turbine casing, turbine shrouds, bearings, and other components. in such an embodiment, the segr gas turbine system 52 does not route any oxidant 68 (or other potential contaminants) through the turbine section 156 for cooling and/or sealing purposes, and thus any leakage of the cooled exhaust gas will not contaminate the hot products of combustion (e.g., working exhaust gas) flowing through and driving the turbine stages of the turbine section 156 . by further example, after passing through the heat exchanger 214 , a portion of the cooled exhaust gas may be routed along line 216 (e.g., return conduit) to an upstream compressor stage of the compressor section 152 , thereby improving the efficiency of compression by the compressor section 152 .
in such an embodiment, the heat exchanger 214 may be configured as an interstage cooling unit for the compressor section 152 . in this manner, the cooled exhaust gas helps to increase the operational efficiency of the segr gas turbine system 52 , while simultaneously helping to maintain the purity of the exhaust gas (e.g., substantially free of oxidant and fuel). fig. 4 is a flow chart of an embodiment of an operational process 220 of the system 10 illustrated in figs. 1-3 . in certain embodiments, the process 220 may be a computer implemented process, which accesses one or more instructions stored on the memory 122 and executes the instructions on the processor 120 of the controller 118 shown in fig. 2 . for example, each step in the process 220 may include instructions executable by the controller 118 of the control system 100 described with reference to fig. 2 . the process 220 may begin by initiating a startup mode of the segr gas turbine system 52 of figs. 1-3 , as indicated by block 222 . for example, the startup mode may involve a gradual ramp up of the segr gas turbine system 52 to maintain thermal gradients, vibration, and clearance (e.g., between rotating and stationary parts) within acceptable thresholds. for example, during the startup mode 222 , the process 220 may begin to supply a compressed oxidant 68 to the combustors 160 and the fuel nozzles 164 of the combustor section 154 , as indicated by block 224 . in certain embodiments, the compressed oxidant may include a compressed air, oxygen, oxygen-enriched air, oxygen-reduced air, oxygen-nitrogen mixtures, or any combination thereof. for example, the oxidant 68 may be compressed by the oxidant compression system 186 illustrated in fig. 3 . the process 220 also may begin to supply fuel to the combustors 160 and the fuel nozzles 164 during the startup mode 222 , as indicated by block 226 . 
during the startup mode 222 , the process 220 also may begin to supply exhaust gas (as available) to the combustors 160 and the fuel nozzles 164 , as indicated by block 228 . for example, the fuel nozzles 164 may produce one or more diffusion flames, premix flames, or a combination of diffusion and premix flames. during the startup mode 222 , the exhaust gas 60 being generated by the gas turbine engine 150 may be insufficient or unstable in quantity and/or quality. accordingly, during the startup mode, the process 220 may supply the exhaust gas 66 from one or more storage units (e.g., storage tank 88 ), the pipeline 86 , other segr gas turbine systems 52 , or other exhaust gas sources. the process 220 may then combust a mixture of the compressed oxidant, fuel, and exhaust gas in the combustors 160 to produce hot combustion gas 172 , as indicated by block 230 . in particular, the process 220 may be controlled by the control system 100 of fig. 2 to facilitate stoichiometric combustion (e.g., stoichiometric diffusion combustion, premix combustion, or both) of the mixture in the combustors 160 of the combustor section 154 . however, during the startup mode 222 , it may be particularly difficult to maintain stoichiometric combustion of the mixture (and thus low levels of oxidant and unburnt fuel may be present in the hot combustion gas 172 ). as a result, in the startup mode 222 , the hot combustion gas 172 may have greater amounts of residual oxidant 68 and/or fuel 70 than during a steady state mode as discussed in further detail below. for this reason, the process 220 may execute one or more control instructions to reduce or eliminate the residual oxidant 68 and/or fuel 70 in the hot combustion gas 172 during the startup mode. the process 220 then drives the turbine section 156 with the hot combustion gas 172 , as indicated by block 232 . for example, the hot combustion gas 172 may drive one or more turbine stages 174 disposed within the turbine section 156 .
downstream of the turbine section 156 , the process 220 may treat the exhaust gas 60 from the final turbine stage 174 , as indicated by block 234 . for example, the exhaust gas treatment 234 may include filtration, catalytic reaction of any residual oxidant 68 and/or fuel 70 , chemical treatment, heat recovery with the hrsg 56 , and so forth. the process 220 may also recirculate at least some of the exhaust gas 60 back to the compressor section 152 of the segr gas turbine system 52 , as indicated by block 236 . for example, the exhaust gas recirculation 236 may involve passage through the exhaust recirculation path 110 having the eg processing system 54 as illustrated in figs. 1-3 . in turn, the recirculated exhaust gas 66 may be compressed in the compressor section 152 , as indicated by block 238 . for example, the segr gas turbine system 52 may sequentially compress the recirculated exhaust gas 66 in one or more compressor stages 158 of the compressor section 152 . subsequently, the compressed exhaust gas 170 may be supplied to the combustors 160 and fuel nozzles 164 , as indicated by block 228 . steps 230 , 232 , 234 , 236 , and 238 may then repeat, until the process 220 eventually transitions to a steady state mode, as indicated by block 240 . upon the transition 240 , the process 220 may continue to perform the steps 224 through 238 , but may also begin to extract the exhaust gas 42 via the eg supply system 78 , as indicated by block 242 . for example, the exhaust gas 42 may be extracted from one or more extraction points 76 along the compressor section 152 , the combustor section 154 , and the turbine section 156 as indicated in fig. 3 . in turn, the process 220 may supply the extracted exhaust gas 42 from the eg supply system 78 to the hydrocarbon production system 12 , as indicated by block 244 . the hydrocarbon production system 12 may then inject the exhaust gas 42 into the earth 32 for enhanced oil recovery, as indicated by block 246 . 
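the sequence of blocks 222 through 242 above amounts to a small control loop: start up, supply oxidant, fuel, and exhaust gas, combust, drive the turbine, treat, recirculate, and compress, repeating until the steady state transition. the sketch below is a purely illustrative restatement of that flow chart in code; the iteration-count cutoff is a stand-in for the real steady-state criterion, which would come from sensor feedback via the controller 118 :

```python
# Illustrative state-machine sketch of process 220 (fig. 4). Block numbers
# mirror the flow chart; the steady-state check here is a simple iteration
# count standing in for real sensor feedback.

def run_process_220(max_cycles=3):
    log = []
    log.append("222: initiate startup mode")
    log.append("224: supply compressed oxidant")
    log.append("226: supply fuel")
    cycles = 0
    while True:
        log.append("228: supply exhaust gas to combustors")
        log.append("230: combust mixture -> hot combustion gas 172")
        log.append("232: drive turbine section 156")
        log.append("234: treat exhaust gas 60")
        log.append("236: recirculate exhaust gas")
        log.append("238: compress recirculated exhaust gas 66")
        cycles += 1
        if cycles >= max_cycles:  # stand-in for the steady-state criterion
            log.append("240: transition to steady state mode")
            log.append("242: extract exhaust gas 42 via eg supply system 78")
            break
    return log

steps = run_process_220()
print(steps[-1])
```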
for example, the extracted exhaust gas 42 may be used by the exhaust gas injection eor system 112 of the eor system 18 illustrated in figs. 1-3 . as discussed in detail above with respect to figs. 1-4 , the segr gas turbine system 52 utilizes a combination of the fuel 70 and compressed oxidant 68 for combustion to generate exhaust gas 42 . again, the exhaust gas 42 generated by the segr gas turbine system 52 is provided to either or both of the eg processing system 54 and the eg supply system 78 for recirculation back to the segr gas turbine system 52 or the hydrocarbon production system 12 ( fig. 1 ). as also discussed above with respect to fig. 3 , the oxidant compression system 186 is fluidly coupled to the segr gas turbine engine 150 , and provides the oxidant 68 in compressed form for combustion. the particular configuration of the oxidant compression system 186 may have a direct impact on the overall cycle efficiency of the segr gas turbine system 52 . indeed, any one or a combination of the components of the machinery 106 discussed above in table 1 may be utilized to enhance the efficiency of the operation of the oxidant compression system 186 , in turn enhancing the efficiency of the entire process of compression, combustion and exhaust gas generation. by way of non-limiting example, the oxidant compression system 186 may include features for rejecting heat generated during compression, generating electrical power from surplus energy generated by the segr gas turbine engine 150 , and extracting power in the form of electrical and/or mechanical energy for driving units that may operate in series or parallel. figs. 5-23 provide a number of embodiments directed toward enhancing the efficiency of the operation of the oxidant compression system 186 . it should be noted that certain features of the turbine-based service system 14 have been omitted for clarity, including the control system 100 having the segr gt system control 126 and machinery control 128 .
accordingly, it should be noted that all of the embodiments discussed below may be partially or completely controlled by the control system 100 , with the control system 100 using sensor feedback 130 obtained from sensors disposed on any one or a combination of the components of the oxidant compression system 186 described below. indeed, such sensor feedback 130 may enable synchronous operation of the machinery 106 so as to enhance the efficiency of each machine component and, therefore, at least the oxidant compression system 186 . moving now to fig. 5 , one embodiment of the oxidant compression system 186 is illustrated as including a main oxidant compressor (moc) 300 , the particular configuration of which is discussed in further detail below. the moc 300 is coupled to a generator 302 (e.g., a double-ended generator), which is directly driven by the segr gt system 52 . during operation, the main oxidant compressor 300 receives the oxidant 68 , and is driven by the generator 302 to compress the oxidant 68 to produce a compressed oxidant 304 . at the same time, the generator 302 , driven by the segr gt system 52 , produces electric power 74 . the electric power 74 may be used in a number of ways. for example, the electric power 74 may be provided to an electric power grid, or utilized by an additional component of the machinery 106 operating in parallel to the generator 302 . in particular, the generator 302 and the moc 300 are disposed along a shaft line 306 of the segr gt system 52 , which may also be referred to as a “train” of the segr gt system 52 . in the illustrated embodiment, the generator 302 has an input shaft 308 that receives power from the shaft 176 of the segr gt system 52 , and an output shaft 310 that provides input power to the moc 300 for oxidant compression at a particular flow rate, pressure, and temperature. that is, the output shaft 310 of the generator 302 is, or is coupled to, an input shaft 312 of the moc 300 . 
indeed, while certain embodiments discussed below are described as having an output shaft “coupled to” or “mechanically coupled to” an input shaft, to facilitate description, this is also intended to denote embodiments where the output shaft of a certain component is the input shaft for another component (i.e., the input shafts and the output shafts may be the same component or different components). thus, in the illustrated embodiment, while the output shaft 310 of the generator 302 is presently described as being coupled to the input shaft 312 of the moc 300 , this is also intended to refer to a configuration in which the output shaft 310 of the generator 302 and the input shaft 312 of the moc 300 are the same. in other words, the output shaft 310 and the input shaft 312 may be the same component, or may be different components. further, while the moc 300 is illustrated in the embodiment of fig. 5 as an axial flow compressor, the moc 300 may have any suitable compressor configuration capable of generating the compressed oxidant 304 at desired operational states (e.g., pressure, temperature). generally, the moc 300 , and any of the compressors discussed in detail below, may include one or more rows of rotating and/or stationary blading to form compression stages, which may be axial and/or radial. in some embodiments, the moc 300 may, additionally or alternatively, include one or more radial compressor stages, such as centrifugal impellers. for example, the moc 300 may include a series of axial flow stages followed by a series of radial flow stages. such a configuration may be referred to as an axi-radial or axial-radial compressor. in still further embodiments, the moc 300 may include only radial stages. in such an embodiment, the moc 300 may be a centrifugal compressor. 
thus, the moc 300 , while illustrated as a single unit housed in a single compressor casing, may actually include one, two, three or more stages housed in one, two, three or more compressor casings, with or without cooling features disposed between the compression stages. it should be noted that the moc 300 , when in an axial flow configuration, may enable the production of the compressed oxidant 304 at high discharge temperatures and at a relatively high efficiency without the use of interstage cooling. therefore, in one embodiment, the moc 300 does not include interstage cooling. it should also be noted that in the embodiment illustrated in fig. 5 , the output shaft 310 of the generator 302 may be designed to deliver the full power used by the moc 300 to generate the compressed oxidant 304 at the desired conditions. the shaft 310 may therefore have a relatively large diameter when compared to a typical electrical generator having a similar capacity. by way of non-limiting example, the diameter of the shaft 310 of the generator 302 may be between approximately 40% and 120% of the diameter of the shaft 176 of the segr gt system 52 , such as between approximately 60% and 100%, or between approximately 80% and 90%. moving now to fig. 6 , another embodiment of the oxidant compression system 186 is illustrated. in fig. 6 , the moc 300 is directly driven by the segr gt system 52 . in particular, the moc 300 in fig. 6 is a double ended compressor in which the segr gt system 52 provides input power to the moc 300 , and the moc 300 provides input power to the generator 302 . in other words, in the configuration illustrated in fig. 6 , the respective positions of the moc 300 and the generator 302 are reversed compared to the configuration in fig. 5 . thus, an output shaft 314 of the moc 300 is mechanically coupled to the input shaft 308 of the generator 302 .
such a configuration may be desirable in that the generator 302 does not drive the moc 300 , which enables a wider variety of generators (i.e., those not necessarily having oversized shafts) to be utilized. indeed, the generator 302 may be a single- or a double-ended generator that is driven by the moc 300 to produce the electric power 74 . in embodiments where the generator 302 is a double-ended generator, the generator 302 may in turn drive one or more additional features of the oxidant compression system 186 and/or the turbine-based service system 14 , such as various pumps, booster compressors, or the like. again, the moc 300 may be an axial flow compressor, a centrifugal compressor, or a combination thereof. in other words, the moc 300 may include only axial flow stages, only radial flow stages, or a combination of axial and radial stages. further, it should be noted that in the configurations illustrated in figs. 5 and 6 , because the shaft 176 directly drives the moc 300 (or directly drives a feature that in turn directly drives the moc 300 ), the moc 300 may be configured such that its operational speed is substantially the same as that of the compressor section 152 and the turbine section 156 of the gas turbine engine 150 . such a configuration, while high in efficiency, may not offer operational flexibility. furthermore, it may be difficult to realize an axial flow compressor that operates at typical gas turbine engine operating speeds. indeed, only a fraction of a flow capacity of the moc 300 may be utilized in the operation of the segr gt system 52 due at least in part to the use of exhaust gas as a diluent during combustion in addition to the compressed oxidant 304 . accordingly, it may be desirable to provide features that enable the moc 300 to operate at a different rotational speed when compared to the segr gt system.
for example, it may be desirable to operate the moc 300 at an operating speed that is different from the operating speed of the segr gt system 52 (e.g., the speed of the shaft 176 ). one such embodiment of the oxidant compression system 186 is illustrated in fig. 7 . in particular, the oxidant compression system 186 includes a gearbox 320 , which enables the moc 300 to operate at a different speed when compared to the segr gt system 52 . in this arrangement, the generator 302 directly drives the gearbox 320 , and the segr gt system 52 directly drives the generator 302 . the gearbox 320 may be a speed-increasing or a speed-decreasing gearbox that drives the moc 300 at its design speed. therefore, the moc 300 may be designed or selected so as to provide a desired amount (e.g., flow rate and pressure) of the compressed oxidant 304 to the segr gt system 52 while operating at a different speed compared to the compressor section 152 of the segr gt system 52 . for example, in one embodiment, the moc 300 may be an axial flow compressor that is similar in scale to the compressor of the compressor section 152 of the segr gt system 52 , which may also be an axial flow compressor. however, in other embodiments, the moc 300 may be smaller or larger than the compressor of the segr gt system 52 . as an example in which the moc 300 and the segr gt system 52 operate at different speeds, in a configuration in which the flow rate of the moc 300 is 40% of the design flow rate of the compressor of the compressor section 152 , the operating speed of the moc 300 may be approximately 1.6 times the operating speed of the segr gt system 52 . indeed, by way of example, the gearbox 320 may enable the moc 300 to operate at a speed that is at least 1% higher, such as between 10% and 200%, between 20% and 150%, between 30% and 100%, or between 40% and 75% higher, than the speed of the segr gt system 52 .
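the speed relationships in this passage are plain ratios: the gearbox output (moc) speed equals the gas turbine shaft speed times the gear ratio. a short sketch, with an assumed shaft speed of 3600 rpm (not stated in the disclosure), reproduces the text's example of a 1.6x ratio, i.e. a moc running 60% faster than the segr gt system 52 , which falls inside the stated 40% to 75% window:

```python
# Sketch of the gearbox 320 speed relationship: MOC speed is the gas
# turbine shaft speed times the gear ratio. The 3600 rpm shaft speed is an
# assumption for illustration; the 1.6 ratio is the example from the text.

def moc_speed(gt_speed_rpm, gear_ratio):
    """Output speed of the gearbox 320 driving the moc 300."""
    return gt_speed_rpm * gear_ratio

def percent_faster(gt_speed_rpm, gear_ratio):
    """How much faster (in percent) the MOC runs than the gas turbine."""
    return (moc_speed(gt_speed_rpm, gear_ratio) - gt_speed_rpm) / gt_speed_rpm * 100.0

gt_speed = 3600.0  # assumed segr gt shaft speed, rpm
ratio = 1.6        # example ratio from the text
print(moc_speed(gt_speed, ratio))       # 5760.0 rpm
print(percent_faster(gt_speed, ratio))  # 60.0 -> within the 40%-75% window
```

a speed-decreasing gearbox is the same arithmetic with a ratio below 1.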
conversely, in embodiments where the gearbox 320 is a speed-decreasing gearbox, the gearbox 320 , by way of example, may enable the moc 300 to operate at a speed that is at least 1% lower, such as between 10% and 90%, between 20% and 80%, between 30% and 70%, or between 40% and 60% lower, than the speed of the segr gt system 52 . in accordance with present embodiments, the gearbox 320 may have any suitable configuration. for example, in one embodiment, the gearbox 320 may be a parallel shaft gearbox in which an input shaft 322 of the gearbox 320 is not in line with, but is generally parallel to, an output shaft 324 of the gearbox 320 . in another embodiment, the gearbox 320 may be an epicyclic gearbox or other speed increasing or decreasing gearbox in which the input shaft 322 of the gearbox 320 is in line with the output shaft 324 of the gearbox 320 and, in certain embodiments, is along the shaft line 306 . furthermore, other gearbox arrangements are presently contemplated. for example, gearbox arrangements in which idler gears increase shaft separation are contemplated, and/or embodiments of gearboxes having multiple output and/or input shafts to drive other equipment or to enable the use of an additional drive, such as an additional turbine engine, are also presently contemplated. as noted above, the moc 300 may include one or more compression stages housed within a single or multiple compressor casings. fig. 8 illustrates an embodiment of the oxidant compression system 186 in which the compression stages are provided as multiple stages housed in separate casings. in particular, the illustrated oxidant compression system 186 includes a low pressure (lp) moc 330 and a high pressure (hp) moc 332 . the lp moc 330 receives the oxidant 68 (e.g., at an inlet of the lp moc 330 ) and compresses the oxidant 68 to a first pressure, producing and subsequently discharging (e.g., from an outlet of the lp moc 330 ) lp compressed oxidant 334 .
the hp moc 332 receives (e.g., at an inlet of the hp moc 332 ) and compresses the lp compressed oxidant 334 to produce the compressed oxidant 304 used by the segr gt system 52 . in the illustrated embodiment, the hp moc 332 is driven by the generator 302 , which is double-ended, to compress the low pressure compressed oxidant 334 . the generator 302 , in turn, is directly driven by the segr gt system 52 . the hp moc 332 is also double ended. thus, an input 336 (e.g., an input shaft) to the hp moc 332 is the output shaft 310 of the generator 302 , and an output 338 of the hp moc 332 (e.g., an output shaft) is an input 339 (e.g., an input shaft) of the lp moc 330 . that is, the hp moc 332 is mechanically coupled to the output shaft 310 of the generator 302 for mechanical power and in turn provides power to the lp moc 330 , which is mechanically coupled to the output shaft 338 of the hp moc 332 . the lp moc 330 may produce the low pressure compressed oxidant 334 at a pressure that is between 10% and 90% of the pressure of the compressed oxidant 304 . for example, the low pressure compressed oxidant 334 may be between 20% and 80%, 30% and 70%, or between 40% and 60% of the pressure of the compressed oxidant 304 . again, the hp moc 332 then compresses the low pressure compressed oxidant 334 to the pressure, flow, and temperature desired for use in the segr gt system 52 as the compressed oxidant 304 . it should be noted that the placement of the generator 302 is merely an example. indeed, the generator 302 may be placed in a number of locations along the segr gt train. for example, the generator 302 may be placed generally along the shaft line 306 in between the lp moc 330 and the hp moc 332 . in such an embodiment, the input shaft 308 of the generator 302 may be the output of the hp moc 332 , and the output shaft 310 of the generator 302 may be an input to the lp moc 330 . alternatively, the generator 302 may be placed at the end of the train as discussed above.
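the pressure split between the lp moc 330 and the hp moc 332 follows directly from the stated fractions: the lp discharge pressure is some fraction (10% to 90%) of the final pressure of the compressed oxidant 304 , and the hp stage supplies the remaining pressure ratio. the sketch below uses invented inlet and discharge pressures purely to illustrate the arithmetic:

```python
# Sketch of the staged compression split described above. The lp moc 330
# discharges at a chosen fraction of the final pressure of the compressed
# oxidant 304; the hp moc 332 provides the remaining pressure ratio.
# The 100 kPa inlet and 2000 kPa final pressures are invented.

def stage_pressures(inlet_kpa, final_kpa, lp_fraction):
    """Return (lp discharge pressure, lp pressure ratio, hp pressure ratio).

    lp_fraction is the lp discharge pressure as a fraction of the final
    pressure (the text gives a range of 0.10 to 0.90).
    """
    assert 0.10 <= lp_fraction <= 0.90
    lp_out = final_kpa * lp_fraction
    return lp_out, lp_out / inlet_kpa, final_kpa / lp_out

lp_out, lp_ratio, hp_ratio = stage_pressures(100.0, 2000.0, 0.5)
print(lp_out)    # 1000.0 kPa
print(lp_ratio)  # 10.0
print(hp_ratio)  # 2.0
```

the same arithmetic applies to the axial/centrifugal split of fig. 9, since the lp moc 340 there covers the same 10% to 90% discharge range.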
thus, in accordance with present embodiments, the generator 302 , the lp moc 330 , and the hp moc 332 of fig. 8 may all operate at substantially the same operating speed as the segr gt system 52 . as discussed above with respect to the moc 300 of figs. 5-7 , the lp moc 330 and the hp moc 332 may be axial flow compressors each having one or more compression stages housed within a single casing or multiple casings. indeed, any number of stages may be employed in the lp moc 330 and the hp moc 332 , with or without cooling features for interstage cooling. furthermore, the lp moc 330 and the hp moc 332 may independently be axial flow compressors, centrifugal compressors, or a combination of compression features including axial compression stages and radial compression stages. thus, the lp moc 330 and the hp moc 332 may be axi-radial or axial-radial compressors. furthermore, in one embodiment, the lp moc 330 , the hp moc 332 , and the generator 302 may be disposed within a single casing. moving now to fig. 9 , an embodiment of the oxidant compression system 186 is depicted in which main oxidant compression is divided into an axial flow lp moc 340 and a centrifugal hp moc 342 . as illustrated, the axial flow lp moc 340 is driven by the generator 302 , which is in turn directly driven by the segr gt system 52 . similarly, the centrifugal hp moc 342 is directly driven by the axial flow lp moc 340 , which is double ended. thus, the axial flow lp moc 340 is mechanically coupled to the output shaft 310 of the generator 302 , and the centrifugal hp moc 342 is mechanically coupled to an output 344 (e.g., an output shaft) of the axial flow lp moc 340 . during operation, the axial flow lp moc 340 receives the oxidant 68 and produces the low pressure compressed oxidant 334 , which is provided to the centrifugal hp moc 342 to provide staged compression (e.g., series compression). 
the centrifugal hp moc 342 then produces the compressed oxidant 304 from the low pressure compressed oxidant 334 . the axial flow lp moc 340 and/or the centrifugal hp moc 342 may be housed in one or more casings, and may include one or more compression stages. for example, the axial flow lp moc 340 may include one or more oxidant compression stages, such that the oxidant 68 is compressed along a series of axial compression stages until the oxidant reaches a desired pressure that is suitable for provision to the centrifugal hp moc 342 . as noted above with respect to the lp moc 330 of fig. 8 , the lp moc 340 may produce the low pressure compressed oxidant 334 at a pressure that is between 10% and 90% of the pressure of the compressed oxidant 304 . for example, the low pressure compressed oxidant 334 may be between 20% and 80%, 30% and 70%, or between 40% and 60% of the pressure of the compressed oxidant 304 . likewise, the centrifugal hp moc 342 may progressively compress the low pressure compressed oxidant 334 in a series of radial compression stages until the oxidant is compressed to a suitable pressure for provision to the segr gt system 52 . in a similar manner as discussed above with respect to fig. 8 , the generator 302 of fig. 9 may be placed in a variety of positions along the gt train. for example, the generator 302 , rather than being positioned between the axial flow lp moc 340 and the segr gt system 52 , may instead be placed between the centrifugal hp moc 342 and the axial flow lp moc 340 . thus, an input to the generator 302 may be the output shaft 344 of the axial flow lp moc 340 , and the output shaft 310 of the generator 302 may be the input for the centrifugal hp moc 342 . further, the generator 302 may be located at the end of the gt train. 
in such an embodiment, the centrifugal hp moc 342 may be double ended such that an input of the centrifugal hp moc 342 is the output of the axial flow lp moc 340 , and the output of the centrifugal hp moc 342 is the input for the generator 302 . as depicted in fig. 10 , the present disclosure also provides embodiments in which the speed-increasing or speed-decreasing gearbox 320 is disposed between the lp moc 330 and the hp moc 332 operating in series (e.g., staged compression). thus, the hp moc 332 and the lp moc 330 may operate at the same or different operational speeds. for example, as illustrated, the lp moc 330 may operate at substantially the same operational speed as the segr gt system 52 . however, the hp moc 332 , driven by the lp moc 330 via the gearbox 320 , may operate at a faster or slower operational speed when compared to the lp moc 330 and, concomitantly, the segr gt system 52 . for example, the hp moc 332 may operate at a speed that is between 10% and 200% of the operating speed of the segr gt system 52 . more specifically, the hp moc 332 may operate at a speed that is between approximately 20% and 180%, 40% and 160%, 60% and 140%, or 80% and 120% of the operating speed of the segr gt system 52 . in embodiments in which the hp moc 332 operates at a lower operational speed compared to the segr gt system 52 , the hp moc 332 may operate at a speed that is between approximately 10% and 90%, 20% and 80%, 30% and 70%, or 40% and 60% of the operational speed of the segr gt system 52 . conversely, in embodiments in which the hp moc 332 operates at a higher operational speed when compared to the segr gt system 52 , the hp moc 332 may operate at a speed that is at least approximately 10% greater than the operational speed of the segr gt system 52 . more specifically, the hp moc 332 may operate at a speed that is between approximately 20% and 200% greater, 50% and 150% greater, or approximately 100% greater than that of the segr gt system 52 .
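the speed ranges quoted above can be captured in a small sketch that maps the segr gt shaft speed through the gearbox 320 to an hp moc 332 shaft speed. the function name, the rpm figure, and the bounds check are illustrative assumptions rather than features of the disclosure.

```python
# hypothetical sketch: hp moc shaft speed as the segr gt speed times the
# ratio of the speed-increasing or speed-decreasing gearbox 320. the text
# quotes ratios from roughly 10% to 200% of the gt operating speed.

def hp_moc_speed(gt_rpm, gear_ratio):
    if not 0.10 <= gear_ratio <= 2.00:
        raise ValueError("gear ratio outside the 10%-200% range discussed")
    return gt_rpm * gear_ratio

print(hp_moc_speed(3600, 0.5))   # speed-decreasing gearbox: 1800.0
print(hp_moc_speed(3600, 1.5))   # speed-increasing gearbox: 5400.0
```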
in a similar manner to the embodiments discussed above with respect to figs. 5-10 , it should be noted that the generator 302 may be placed at various positions along the segr train. for example, moving to fig. 11 , the generator 302 is illustrated as being positioned between the axial flow lp moc 330 and the segr gt system 52 . thus, the generator 302 is directly driven by the segr gt system 52 , and directly drives the axial flow lp moc 330 . in other words, compared to the configuration of fig. 10 , the respective positions of the generator 302 and the lp moc 330 are reversed. further, as illustrated, the axial flow hp moc 332 is driven by the axial flow lp moc 330 via the speed-increasing or speed-decreasing gearbox 320 . again, the gearbox 320 may be any speed increasing or speed decreasing gearbox, such as a parallel shaft gearbox or an epicyclic gearbox. as discussed above with respect to fig. 10 , the present disclosure also provides embodiments including combinations of centrifugal and axial flow compressors. therefore, in one embodiment, the hp moc 332 of figs. 10 and 11 may be replaced with the centrifugal hp moc 342 . referring to fig. 12 , the centrifugal hp moc 342 is driven via the gearbox 320 by the axial flow lp moc 330 . further, as discussed above, the axial flow lp moc 330 is directly driven by the segr gt system 52 via the generator 302 . as discussed in detail above, in an alternative configuration, the axial flow lp moc 330 and the generator 302 may reverse, such that the generator 302 is located along the train between the centrifugal hp moc 342 and the axial flow lp moc 330 . furthermore, it should be noted that the present disclosure also contemplates the use of two or more centrifugal oxidant compressors. thus, in such embodiments, the axial flow lp moc 330 may be replaced with one or more centrifugal lp mocs. 
while several of the foregoing embodiments are directed to configurations of the oxidant compression system 186 in which the main oxidant compressors are arranged in a series configuration, the present disclosure also provides embodiments in which oxidant compressors are operating in parallel (e.g., parallel compression). moving now to fig. 13 , an embodiment of the oxidant compression system 186 having first and second oxidant compressors 370 , 372 configured to operate in parallel is provided. in the illustrated embodiment, the first and second mocs 370 , 372 each receive a separate influx of the oxidant 68 . as should be appreciated, the first moc 370 generates a first stream of compressed oxidant 374 and the second moc 372 generates a second stream of compressed oxidant 376 . the first and second compressed oxidant streams 374 , 376 combine along a path 378 to flow the compressed oxidant 304 to the segr gt system 52 . as described above with respect to the moc 300 , the first and second mocs may have any suitable configuration, including all-axial flow compression, axi-radial or axial-radial compression, or all-radial compression. furthermore, the first and second mocs may be substantially the same size, or may be different. that is, the first and second compressed oxidant streams may be at the same pressure and flow rate, or their respective pressures and/or flow rates may be different. by way of non-limiting example, the first and second mocs may independently produce between 10% and 90% of the total compressed oxidant 304 , with the remainder being produced by the remaining moc. for example, the first moc 370 may produce approximately 40% of the total compressed oxidant 304 , while the second moc 372 may produce the remainder, approximately 60%, or vice versa. such operational flexibility may be afforded by the use of the gearbox 320 , though in certain embodiments the gearbox 320 may not be present.
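the parallel flow split described above (each moc producing between 10% and 90% of the total compressed oxidant 304 ) reduces to a simple mass balance, sketched below. the 40%/60% split mirrors the example in the text; the total flow value is an illustrative assumption.

```python
# sketch of parallel compression: two mocs each supply a share of the total
# compressed oxidant, and the two shares sum to the combined stream 304.

def parallel_split(total_flow, first_share):
    """split the total oxidant flow between the first and second mocs."""
    m_first = first_share * total_flow
    m_second = total_flow - m_first        # remainder from the other moc
    return m_first, m_second

m1, m2 = parallel_split(100.0, 0.40)       # the 40%/60% example from the text
print(m1 + m2)   # 100.0 -- the combined stream equals the total demand
```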
in certain embodiments, one or more additional gearboxes may also be utilized. for example, an additional gearbox may be positioned between the first and second mocs 370 , 372 to enable each moc to operate at a speed independent from the other. therefore, in some embodiments, the first and second mocs 370 , 372 may operate at the same or different speeds when compared to one another, and may independently operate at the same or different speeds when compared to the segr gt system 52 . furthermore, the first and second mocs 370 , 372 may be disposed within separate casings, as illustrated, or may be disposed within the same compressor casing, depending on the particular configuration utilized (e.g., whether additional features are positioned between them). for example, in embodiments in which the first and second mocs 370 , 372 operate at a slower speed than the segr gt system 52 , their operational speed may be between 10% and 90% of the operational speed of the segr gt system 52 . furthermore, in embodiments in which the first and second mocs 370 , 372 operate at a higher speed than the segr gt system 52 , their speed may be at least 10%, at least 20%, at least 50%, at least 100%, or at least 150% greater than the operational speed of the segr gt system 52 . the present disclosure also provides embodiments of the oxidant compression system 186 in which the gearbox 320 is not present. thus, in such an embodiment, the first and second main oxidant compressors 370 , 372 may operate at substantially the same speed as the segr gt system 52 . accordingly, the first and second mocs 370 , 372 may be directly driven by the segr gt system 52 via the generator 302 . in other embodiments, the generator 302 may be placed along the gt train between the first and second mocs 370 , 372 , such that the second moc 372 is directly driven by the segr gt system 52 . therefore, the second moc 372 may directly drive the first moc 370 via the generator 302 .
further, as discussed with respect to the embodiments above, the generator 302 may be positioned at the end of the segr gt train. in such an embodiment, the first moc 370 may be double ended, such that the output of the first moc 370 provides the input power for the generator 302 . while the embodiments discussed above generally include configurations in which the oxidant compressors derive a majority or all their power from the segr gt system 52 , the present disclosure also provides embodiments in which one or more of the oxidant compressors are driven by an additional drive, such as a steam turbine or an electric motor. such embodiments are discussed with respect to figs. 14-17 . referring now to fig. 14 , an embodiment of the oxidant compression system 186 is illustrated as having the first moc 370 decoupled from the train of the segr gt system 52 . in other words, the first moc 370 is not positioned along the shaft line 306 . in particular, the first moc 370 is driven by an additional drive 390 , which may be a steam turbine, electric motor, or any other suitable prime mover. as illustrated, the first moc 370 is driven by the additional drive 390 via a first gearbox 392 , which may be any speed-increasing or speed-decreasing gearbox. indeed, the first gearbox 392 may be a parallel shaft or epicyclic gearbox. accordingly, the first moc 370 generally derives its power from a shaft 394 of the additional drive 390 . in particular, the shaft 394 of the additional drive 390 provides input power to the first gearbox 392 . the first gearbox 392 , in turn, provides input power to the first moc 370 via an output shaft 395 , which may be in-line with the shaft 394 of the additional drive 390 or may be substantially parallel to the shaft 394 . 
again, the first moc 370 and the second moc 372 operate in parallel (e.g., parallel compression) to provide the first and second streams 374 , 376 , which combine to produce the compressed oxidant 304 that is directed to the segr gt system 52 . while the first moc 370 is decoupled from the segr gt train, the second moc 372 is illustrated as deriving its energy from the segr gt system 52 . in particular, the second moc 372 is depicted as being driven by the segr gt system 52 via the generator 302 and a second gearbox 396 . the second gearbox 396 receives input power from the output shaft 310 of the generator 302 , and in turn provides output power to the second moc 372 via its shaft 398 . again, the second gearbox 396 may be a parallel shaft or epicyclic gearbox, such that its output shaft 398 is substantially parallel with its input shaft 399 (e.g., the output shaft 310 of the generator 302 ), or in-line with its input shaft 399 . thus, the second moc 372 may be driven at a different speed compared to the segr gt system 52 during operation while still producing a desired amount of the compressed oxidant 304 . in some embodiments, the first and second mocs 370 , 372 may operate at substantially the same speed, or at different speeds. indeed, the first and second mocs 370 , 372 may independently operate at a higher or lower speed than the segr gt system 52 . by way of non-limiting example, in embodiments where the first and second mocs 370 , 372 independently operate at a higher speed than the segr gt system, they may independently operate at least approximately 10% faster, such as between 10% and 200%, 50% and 150%, or approximately 100% faster. conversely, in embodiments where the first and second mocs 370 , 372 independently operate at a slower speed than the segr gt system, they may independently operate at least approximately 10% slower, such as between 10% and 90%, 20% and 80%, 30% and 70%, or 40% and 60% slower. 
furthermore, it should be noted that the de-coupling of the first moc 370 from the segr gt train may enable the additional drive 390 to power the first moc 370 as the segr gt system 52 is coming on line. for example, during a startup procedure, the segr gt system 52 may not necessarily produce sufficient power to run the second moc 372 . however, because the first moc 370 is driven by the additional drive 390 , the first moc 370 is able to produce a sufficient amount of the compressed oxidant 304 to enable combustion (e.g., stoichiometric combustion) during a startup procedure. in still further embodiments, the first and second gearboxes 392 , 396 may not be present. thus, in such embodiments, the first moc 370 may be directly driven by the additional drive 390 , and the second moc 372 may be directly driven through the generator 302 by the segr gt system 52 . when present, however, the first gearbox 392 and the second gearbox 396 may have a smaller size when compared to a typical gearbox. this is in part because each gearbox 392 , 396 simply drives one moc rather than two. furthermore, the starting load on the segr gt system 52 may be reduced, since the additional drive 390 may generate the starting load for the first moc 370 , rather than for both of the first and second mocs 370 , 372 . as noted above, in some embodiments, the additional drive 390 may be a steam turbine. the steam turbine generally derives its power from any source of steam produced within the system, such as the steam 62 generated by the hrsg 56 of the eg processing system 54 . for example, the hrsg 56 may generate the steam 62 at a first pressure (e.g., a high or medium pressure steam), and work may be extracted from the steam 62 by the steam turbine to generate steam having a second pressure, which is lower than the first (e.g., a medium or low pressure steam). in certain embodiments, the steam turbine may extract sufficient work from the steam 62 so as to generate water 64 .
in this way, the efficiency of the compression system 186 may be enhanced in that the steam turbine (i.e., the additional drive 390 ) and the hrsg 56 may each produce a feed stream for the other. similarly, in embodiments in which the additional drive 390 is an electric motor, the electric motor may derive its power from any electric power source. however, to enhance the efficiency of the oxidant compression system 186 , the electric power used by the electric motor may be the electric power 74 generated by the generator 302 , which is disposed along the segr gt train. furthermore, it should be noted that the first moc 370 and the second moc 372 , while illustrated as axial flow compressors, may each be any suitable type of compressor. for example, the first moc 370 , the second moc 372 , or a combination thereof, may be axial flow compressors, centrifugal compressors, or compressors having any number of suitable stages having axial and/or radial flow components. while the embodiments discussed above with respect to fig. 14 are provided in the context of two or more oxidant compressors operating in parallel, it should also be noted that an oxidant compressor that is operatively decoupled from the segr gt train may be fluidly coupled in series to another oxidant compressor that is coupled to the segr gt train. in other words, embodiments in which at least one oxidant compressor operates in a series configuration and is driven by the additional drive 390 are presently contemplated. for example, as illustrated in fig. 15 , which depicts an embodiment of the oxidant compression system 186 , the hp moc 332 is driven by the additional drive 390 via the first gearbox 392 . as also illustrated, the lp moc 330 is directly driven by the segr gt system 52 through the generator 302 .
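where the additional drive 390 is a steam turbine fed by the steam 62 from the hrsg 56 , a first-order estimate of the shaft power it can deliver is mass flow times specific enthalpy drop times isentropic efficiency. the sketch below is a rough illustration; the flow, enthalpy drop, and efficiency are assumptions, not figures from the disclosure.

```python
# hedged sketch: approximate shaft power available from the additional drive
# 390 when it is a steam turbine expanding hrsg steam. power is estimated as
# mass flow x isentropic enthalpy drop x isentropic efficiency (all assumed).

def steam_turbine_power_kw(m_dot_kg_s, dh_isentropic_kj_kg, eta_isentropic):
    return m_dot_kg_s * dh_isentropic_kj_kg * eta_isentropic

# e.g., 10 kg/s of steam with a 500 kj/kg available drop at 85% efficiency
print(steam_turbine_power_kw(10.0, 500.0, 0.85))   # about 4250 kw
```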
in other words, a first compression stage or first set of compression stages is driven by the segr gt system 52 , while a second compression stage or set of compression stages is driven by the additional drive 390 . in a similar manner as discussed above with respect to fig. 14 , the first gearbox 392 of fig. 15 may be present in some embodiments and not present in others. thus, the hp moc 332 may be directly driven by the additional drive 390 , or may be indirectly driven through the first gearbox 392 . further, the first gearbox 392 enables the hp moc 332 to operate at a higher or a lower speed when compared to the additional drive 390 . in embodiments where the additional drive 390 is a steam turbine, the steam may be the steam 62 produced by the hrsg 56 , improving overall cycle efficiency. alternatively, in embodiments in which the additional drive 390 is an electric motor, the electric motor may receive its power from the generator 302 , which produces the electric power 74 . accordingly, even in embodiments where such electrical coupling is present, the hp moc 332 may be considered to be drivingly de-coupled from the segr gt system 52 . as with the embodiments discussed above, the relative positions of the lp moc 330 and the double ended generator 302 may be reversed. therefore, the lp moc 330 may be directly driven by the segr gt system 52 , and its output may be the input of the generator 302 . in such an embodiment, it should be appreciated that the generator 302 may not be double-ended and may instead merely receive an input. however, it is also presently contemplated that in embodiments where the generator 302 receives its input power from the lp moc 330 , the generator 302 may drive another piece of equipment such as, for example, a pump, compressor booster, or similar machine feature. fig. 16 depicts another embodiment of the oxidant compression system 186 in which the axial flow hp moc 332 is replaced with the centrifugal hp moc 342 .
thus, the centrifugal hp moc 342 receives the lp compressed oxidant 334 from the lp moc 330 , and compresses the lp compressed oxidant 334 to produce the compressed oxidant 304 (e.g., via staged or series compression). it should be noted that any compression configuration may be utilized with either one of the oxidant compressors of the oxidant compression system 186 . therefore, while the embodiment illustrated in fig. 16 utilizes one axial flow compressor and one centrifugal compressor, any number of axial flow and/or centrifugal compressors housed in one or more compressor casings may be utilized. indeed, the centrifugal hp moc 342 may include one or more compression stages in which some, none, or all of the stages are radial or axial. likewise, the lp moc 330 , while illustrated as an axial flow compressor, may include one or more compression stages housed in one or more compressor casings in which some, none, or all of the compression stages are axial and/or radial. as with the previous configurations, it should be noted that the first gearbox 392 disposed between the centrifugal hp moc 342 and the additional drive 390 may or may not be present. the first gearbox 392 , as will be appreciated based on the foregoing discussions, enables the centrifugal hp moc 342 to operate at a different operational speed than the additional drive 390 . as also discussed above, the positions of the lp moc 330 and the generator 302 may be reversed, such that the lp moc 330 is directly driven by the segr gt system 52 , and in turn drives the generator 302 . furthermore, an additional gearbox (e.g., the second gearbox 396 ) may be positioned along the segr gt train between the lp moc 330 and the segr gt shaft 176 , so as to enable the lp moc 330 to operate at a different speed compared to the segr gt system 52 . embodiments in which the positions of the lp moc 330 and the hp moc 332 are reversed are also presently contemplated. fig.
17 illustrates one such embodiment of the oxidant compression system 186 in which the hp moc 332 is generally disposed along the segr gt train, and the lp moc 330 is de-coupled therefrom. in particular, the hp moc 332 is driven by the segr gt system 52 via the generator 302 and through the second gearbox 396 . again, the second gearbox 396 enables the hp moc 332 to be operated at a different speed when compared to the segr gt system 52 . as illustrated, the hp moc 332 generates the compressed oxidant 304 from an inlet stream of the lp compressed oxidant 334 generated by the lp moc 330 . the lp moc 330 is generally disposed along a train of the additional drive 390 which, as described above, may be a steam turbine, an electric motor, or similar drive. specifically, the lp moc 330 derives its power from the shaft 394 of the additional drive 390 through the first gearbox 392 . the first gearbox 392 enables the lp moc 330 to operate at the same or a different operational speed than the additional drive 390 . it should be noted that embodiments in which either or both of the gearboxes 392 , 396 are not present are also contemplated. thus, the hp moc 332 may be directly driven by the segr gt system 52 via the generator 302 , and the lp moc 330 may be directly driven by the additional drive 390 . furthermore, embodiments in which the positions of the hp moc 332 and the generator 302 are switched are also presently contemplated. in such embodiments, the generator 302 may be single or double ended. in embodiments in which the generator 302 is double ended, an additional feature of the oxidant compression system 186 may be driven by the generator 302 . in the embodiments discussed above in which multiple compressors are operating in series, such as embodiments in which oxidant discharged from an lp moc is delivered through an inlet of the hp moc, one or more cooling units may also be provided therebetween.
in other words, in embodiments where a series arrangement of an lp moc and an hp moc is provided, the embodiment may also include one or more cooling units disposed between the hp moc and the lp moc along a flow path of the lp compressed oxidant 334 . one embodiment of the oxidant compression system 186 having such a cooling unit is depicted in fig. 18 . in particular, in the embodiment depicted in fig. 18 , the oxidant compression system 186 includes the lp moc 330 and the hp moc 332 operating in a series arrangement (e.g., staged or series compression), wherein both of the mocs 330 , 332 are disposed along the train of the segr gt system 52 (i.e., derive all or a majority of their power from the segr gt system 52 ). the lp moc 330 is directly driven by the segr gt system 52 through the generator 302 . the hp moc 332 , on the other hand, is driven by the lp moc 330 through the gearbox 320 such that the hp moc 332 is able to operate at a different speed when compared to the lp moc 330 or the segr gt system 52 . in addition to these features, the oxidant compression system 186 also includes a spray intercooler 400 disposed along a flow path 402 of the lp compressed oxidant 334 extending from an outlet of the lp moc 330 to an inlet of the hp moc 332 . though any suitable cooling fluid may be utilized, in the illustrated embodiment, the spray intercooler 400 utilizes demineralized or polished water 404 to cool the lp compressed oxidant 334 . the demineralized or polished water 404 is generally substantially free of minerals, particulates, or other materials that may negatively affect various operating components (e.g., conduits, pumps, compressor blading and/or housing). by way of non-limiting example, water may be passed through a biological, chemical, or physical filter, or any combination thereof, to generate the polished or demineralized water.
in particular, the spray intercooler 400 utilizes psychrometric cooling to cool the lp compressed oxidant 334 by injecting a spray of the demineralized or polished water 404 into the stream 334 . the demineralized or polished water 404 vaporizes, which reduces the temperature of the lp compressed oxidant stream 334 by reducing its superheat or dew point margin. while any fluid capable of engaging in this type of cooling may be utilized, it may be desirable for the water to be demineralized or polished so as to avoid fouling or other deposit buildup within the piping of the flow path 402 . such a cooling method may be desirable in that pressure drop across conduits from the lp moc 330 to the hp moc 332 may be reduced or mitigated. in addition, such a cooling method may also obviate the need for costly heat exchange equipment. as discussed in detail above, a single casing may house one or more of the compression stages. for example, in the embodiment depicted in fig. 18 , the lp moc 330 and the hp moc 332 may be housed in a single compressor casing. in such embodiments, the present disclosure also contemplates the use of one or more cooling features disposed therebetween. thus, in some embodiments, the spray intercooler 400 may be disposed on, within, or separate from a single casing housing the lp moc 330 and the hp moc 332 . for example, the intercooler 400 may be partially or totally positioned within a casing housing the lp and hp mocs 330 , 332 , and may be configured to cool compressed oxidant in between compression stages. turning now to fig. 19 , an embodiment of the oxidant compression system 186 in which a cooler 420 provides cooling along the flow path 402 of the lp compressed oxidant 334 is provided. in particular, the cooler 420 may be an intercooler (e.g., heat exchanger) that provides interstage cooling between the lp moc 330 and the hp moc 332 .
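the psychrometric cooling described above amounts to a latent-heat balance: the sensible heat removed from the lp compressed oxidant 334 equals the heat absorbed by the vaporizing demineralized or polished water 404 . the sketch below illustrates that balance; the property values and flows are typical assumptions, not data from the disclosure.

```python
# sketch of the spray-intercooler energy balance: water spray rate needed to
# cool the oxidant stream by a target temperature drop via evaporation.
# cp and latent-heat values below are typical assumptions, not source data.

def spray_water_kg_s(m_ox_kg_s, cp_kj_kg_k, delta_t_k, h_fg_kj_kg):
    q_kw = m_ox_kg_s * cp_kj_kg_k * delta_t_k   # sensible heat to remove
    return q_kw / h_fg_kj_kg                    # absorbed as latent heat

# e.g., 50 kg/s of air-like oxidant (cp ~ 1.005 kj/kg-k), 40 k of cooling,
# latent heat of vaporization ~ 2257 kj/kg
print(round(spray_water_kg_s(50.0, 1.005, 40.0, 2257.0), 3))   # 0.891
```

the small spray rate relative to the oxidant flow hints at why this method avoids the bulk and pressure drop of a conventional heat exchanger.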
as discussed in detail above, the cooler 420 may be disposed on, in, or apart from one or more casings housing the lp moc 330 and the hp moc 332 . the cooler 420 , which may be an intercooler, utilizes cooling water 422 or another cooling medium such as ambient air to cool the lp compressed oxidant 334 through heat exchange. thus, the cooler 420 may be a heat exchanger that rejects heat to the cooling water 422 or to the ambient environment. to enable such cooling, the cooler 420 may be any suitable type of heat exchanger. by way of non-limiting example, the heat exchanger may be a shell and tube heat exchanger, an air fin-based heat exchanger, or any similar configuration. in one embodiment, it may be desirable to use such a configuration to avoid directly contacting the lp compressed oxidant 334 with water, in contrast to the spray cooling approach of fig. 18 , which may utilize polished or demineralized water as discussed above. in another embodiment, more than one unit may be used to cool the lp compressed oxidant 334 . for example, as depicted in fig. 20 , a steam generator 440 and/or a feedwater heater 442 may be disposed along the flow path 402 of the lp compressed oxidant 334 so as to provide cooling of the oxidant prior to delivery to the hp moc 332 . the steam generator 440 utilizes a feedwater supply, such as boiler feedwater, and returns a saturated steam for utilization by another machine component, such as a steam turbine. in other words, the steam generator 440 utilizes a feedwater supply and saturated steam return 444 . in one embodiment, the saturated steam return generated by the steam generator 440 may be utilized by a steam turbine used to drive one or more oxidant compressors. the feedwater heater 442 , on the other hand, utilizes a feedwater supply, such as boiler feedwater, and returns heated water, thereby utilizing a feedwater supply and return 446 .
this heated water may be used as a feed for the steam generator 440 and/or for the hrsg 56 of the eg processing system 54 . in one embodiment, the lp moc 330 produces the lp compressed oxidant 334 in a manner that enables the steam generator 440 to generate a medium pressure saturated steam. the medium pressure saturated steam may have a pressure of at least approximately 300 psig, such as between 350 psig and 500 psig, between 375 psig and 450 psig, or approximately 400 psig. the lp compressed oxidant 334 , after passing through the steam generator 440 , may then be used to heat high pressure boiler feedwater at the feedwater heater 442 . in some embodiments, the lp compressed oxidant 334 may have a pressure sufficient to generate a desired pressure level of saturated steam at the steam generator 440 , while then being cooled by the feedwater heater 442 such that the output of the compressed oxidant 304 by the hp moc 332 is at least equal to, or below, a maximum output temperature of the hp moc 332 . in addition to, or in lieu of, the embodiments discussed above, other drives (e.g., a steam turbine) may be provided along the train of the segr gt system 52 . such a configuration may be desirable to generate additional power, such as electric power during the operation of the turbine based service system 14 . for example, electric or mechanical power generated by the steam turbine may be utilized by certain components of the oxidant compression system 186 , such as by the electric motor 390 discussed above with respect to figs. 14-17 . such embodiments are discussed with respect to figs. 21-24 . moving now to fig. 21 , an embodiment similar to the configuration illustrated in fig. 5 is depicted as including the main oxidant compressor 300 , the generator 302 , and a steam turbine 460 disposed along the line 306 of the shaft 176 of the segr gt system 52 . 
in the illustrated embodiment, the steam turbine 460 is double ended, with its input shaft 462 being mechanically coupled to the shaft 176 of the segr gt system 52 and its output shaft 464 being mechanically coupled to the generator 302 . thus, the steam turbine 460 and the segr gt system 52 provide power in series to the generator 302 . the generator 302 in turn provides input power to the main oxidant compressor 300 , which compresses the oxidant 68 to produce the compressed oxidant 304 . while the illustrated embodiment depicts each of the machine components discussed above (moc 300 , generator 302 , steam turbine 460 ) as being directly driven, embodiments in which one or more gearboxes are utilized are also presently contemplated. for example, a gearbox may be positioned between the segr gt system 52 and the steam turbine 460 , between the steam turbine 460 and the generator 302 , or between the generator 302 and the moc 300 , or any combination thereof. thus, any one or a combination of the steam turbine 460 , the generator 302 , or the moc 300 may be driven at a speed that is at least 10% less than the speed of the segr gt system 52 , such as between approximately 10% and 90%, 20% and 80%, 30% and 70%, or 40% and 60% of the speed of the segr gt system 52 . conversely, any one or a combination of the steam turbine 460 , the generator 302 , or the moc 300 may be driven at a speed that is at least 10% greater than the speed of the segr gt system 52 , such as between approximately 10% and 200%, 20% and 175%, 30% and 150%, or 40% and 125% greater. in the illustrated embodiment, the steam turbine 460 is depicted as including an input denoted as “a” and an output denoted as “b.” the input a may be steam generated by one or more features of the turbine based service system 14 . by way of non-limiting example, the input a may be the steam 62 generated by the hrsg 56 of the eg processing system 54 .
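the series arrangement of fig. 21 can be summarized as a shaft power balance: the segr gt system 52 and the steam turbine 460 both feed the shaft line 306 , the moc 300 takes a mechanical load off that line, and the generator 302 converts the remainder. the sketch below is a first-order illustration only; the power levels and generator efficiency are assumed, and losses other than the generator's are ignored.

```python
# illustrative power balance for the train of fig. 21: gt and steam turbine
# power add in series; the moc load is subtracted before generator conversion.
# this is a rough sketch -- all power levels and the efficiency are assumed.

def net_electric_power_mw(p_gt_mw, p_steam_mw, p_moc_mw, eta_gen=0.985):
    p_shaft = p_gt_mw + p_steam_mw          # both prime movers on line 306
    return (p_shaft - p_moc_mw) * eta_gen   # remainder converted by generator

print(net_electric_power_mw(100.0, 20.0, 40.0))   # about 78.8 mw
```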
similarly, the output b may be a condensate generated by removing work from the input steam, and the condensate may be provided to any feature which utilizes a feedwater. by way of non-limiting example, the output water or condensate b may be provided as an input stream to the hrsg 56 , e.g., as a water source for steam generation. in other embodiments, the condensate may be used as a working or other cooling fluid, for example, in any one or a combination of the cooling units described above. furthermore, while the moc 300 is illustrated as a single unit having an axial flow configuration, the moc 300 may be divided into any number of stages such as the lp moc and hp moc described above, and those stages may be axial stages, radial stages, or any suitable combination of compression stages. furthermore, the compressors may be housed in one or more compressor casings, and may be utilized in combination with any of the cooling features, additional drive features, gearboxes, pumps, booster compressors, and so forth, described above to enhance operational efficiency of the oxidant compression system 186 . the relative positioning of the illustrated features is not limited to the particular configuration that is illustrated in fig. 21 . rather, in some embodiments, relative positions of the machine components may be reversed or otherwise re-arranged. for example, the respective positions of the generator 302 and the steam turbine 460 may be reversed, as depicted in fig. 22 . in fig. 22 , the steam turbine 460 and the segr gt system 52 both directly provide power to the generator 302 . in particular, the input shaft 462 of the steam turbine 460 is mechanically coupled to the output shaft 310 of the generator 302 . the steam turbine 460 and the segr gt system 52 also provide power in series to the moc 300 . specifically, the output shaft 464 of the steam turbine 460 is mechanically coupled to the input shaft 312 of the moc 300 .
as described above, the steam turbine 460 may utilize input steam a generated by any steam-generating features, such as the hrsg 56 , and may generate the condensate b therefrom, which may be returned to the steam-generating feature (e.g., the hrsg 56 ). in addition to reversing the respective positions of the generator 302 and the steam turbine 460 , the steam turbine 460 may be positioned at any point along the train of the segr gt system 52 . for example, as illustrated in fig. 23 , the steam turbine 460 may be located at the end of the train such that it inputs power to the output shaft 314 of the moc 300 . in other words, the output shaft 314 of the moc 300 is mechanically coupled to the input shaft 462 of the steam generator 460 . thus, as illustrated, the generator 302 drives the moc 300 , and the segr gt system 52 directly drives the generator 302 . accordingly, the segr gt system 52 and the steam turbine 460 both provide power to the moc 300 , albeit at opposing ends. during certain situations, such as during startup, steam production by the segr gt system 52 may not favor operation of the steam turbine 460 (e.g., may not be sufficient to drive the steam turbine 460 ). accordingly, in some embodiments, the steam turbine 460 may be decoupled from the segr gt system 52 during operation. for example, as illustrated in fig. 24 , the input shaft 462 of the steam turbine 460 may be coupled to a clutch 480 , which is in turn coupled to the train of the segr gt system 52 . therefore, in situations in which the amount of the steam 62 produced by the segr gt system 52 (or other steam-generating component) is insufficient to drive the steam turbine 460 , the action of the clutch 480 may de-couple the steam turbine 460 from the train. 
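the clutch behavior described for fig. 24 amounts to a threshold decision on steam production. a minimal sketch, with our own function name and a hypothetical flow threshold (the patent does not give numeric values):

```python
def steam_turbine_coupling(steam_flow, min_drive_flow):
    """De-couple the steam turbine from the train when steam production
    (e.g., during startup) is insufficient to drive it; otherwise couple it."""
    return "engaged" if steam_flow >= min_drive_flow else "disengaged"
```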
additional description

the present embodiments provide a system and method for compressing an oxidant (e.g., ambient air, oxygen-enriched air, oxygen depleted air, substantially pure oxygen) for use in exhaust gas recirculation gas turbine engines. it should be noted that any one or a combination of the features described above may be utilized in any suitable combination. indeed, all permutations of such combinations are presently contemplated. by way of example, the following clauses are offered as further description of the present disclosure:

embodiment 1
a system, having a gas turbine system, which includes a turbine combustor; a turbine driven by combustion products from the turbine combustor; and an exhaust gas compressor driven by the turbine, wherein the exhaust gas compressor is configured to compress and supply an exhaust gas to the turbine combustor; and an exhaust gas recirculation (egr) system, wherein the egr system is configured to recirculate the exhaust gas along an exhaust recirculation path from the turbine to the exhaust gas compressor. the system also includes a main oxidant compression system configured to supply compressed oxidant to the gas turbine system, and the main oxidant compression system includes: a first oxidant compressor; and a first gearbox configured to enable the first oxidant compressor to operate at a first speed different from a first operating speed of the gas turbine system.

embodiment 2
the system of embodiment 1, wherein the first gearbox includes a parallel shaft gearbox having input and output shafts that are generally parallel with one another, the input shaft is in line with a shaft line of the gas turbine system, and the output shaft is drivingly coupled to the first oxidant compressor.
embodiment 3
the system of embodiment 1, wherein the first gearbox comprises an epicyclic gearbox having input and output shafts in line with one another and a shaft line of the gas turbine system, and the output shaft is drivingly coupled to the first oxidant compressor.

embodiment 4
the system of any preceding embodiment, wherein the main oxidant compression system is at least partially driven by the gas turbine system, and the main oxidant compression system comprises a plurality of compression stages including the first oxidant compressor and a second oxidant compressor.

embodiment 5
the system of any preceding embodiment, wherein the first oxidant compressor is driven by the gas turbine system through the first gearbox.

embodiment 6
the system of any preceding embodiment, comprising: an electrical generator coupled to a shaft of the gas turbine system, wherein the first oxidant compressor is coupled to the electrical generator via the first gearbox; a drive coupled to the second oxidant compressor, wherein the drive comprises a steam turbine or an electric motor; and a second gearbox coupling the second oxidant compressor and the drive, wherein the second gearbox is configured to enable the second oxidant compressor to operate at a second speed different from a second operating speed of the drive.

embodiment 7
the system of embodiment 4, wherein the second oxidant compressor is directly driven by the gas turbine system.

embodiment 8
the system of embodiments 4 or 7, wherein the second oxidant compressor is disposed along a shaft line of the gas turbine system and coupled to an input shaft of an electrical generator, and the first oxidant compressor is coupled to an output shaft of the electrical generator via the first gearbox.
embodiment 9
the system of embodiments 4, 7, or 8, having an electrical generator disposed along a shaft line of the gas turbine system, wherein the second oxidant compressor is coupled to the electrical generator and to an input shaft of the first gearbox, and the first oxidant compressor is coupled to the second oxidant compressor via the first gearbox.

embodiment 10
the system of embodiments 4, 7, 8, or 9, having an interstage cooling system disposed along an oxidant flow path between the first and second oxidant compressors.

embodiment 11
the system of embodiment 10, wherein the interstage cooling system includes a spray system configured to output a spray along the oxidant flow path.

embodiment 12
the system of embodiments 10 or 11, wherein the interstage cooling system includes a heat exchanger disposed along the oxidant flow path, and the heat exchanger comprises a coolant path configured to circulate a coolant to absorb heat along the oxidant flow path.

embodiment 13
the system of embodiments 10, 11, or 12, wherein the interstage cooling system includes a steam generator, a feed water heater, or a combination thereof, configured to cool compressed oxidant along the oxidant flow path by transferring heat to a feed water supply, wherein the steam generator is configured to generate steam for a steam turbine generator having a steam turbine coupled to an electrical generator, and the feed water heater is configured to preheat the feed water supply for eventual supply to a heat recovery steam generator (hrsg).

embodiment 14
the system of any preceding embodiment, having a drive coupled to the first oxidant compressor, wherein the drive includes a steam turbine or an electric motor coupled to an input shaft of the first gearbox.

embodiment 15
the system of embodiments 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, or 14, wherein at least one of the first or second oxidant compressors comprises a plurality of compression stages.
embodiment 16
the system of embodiments 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, or 15, wherein at least one of the first or second oxidant compressors comprises one or more axial flow compressors, one or more centrifugal compressors, or a combination thereof.

embodiment 17
the system of embodiments 1, 2, or 3, wherein the main oxidant compression system includes a second oxidant compressor, the first and second oxidant compressors are fluidly coupled in parallel to the gas turbine system, and the second oxidant compressor is coupled to the first gearbox via the first oxidant compressor.

embodiment 18
the system of embodiments 1, 2, or 3, having: an electrical generator coupled to a shaft of the gas turbine system; and a drive coupled to the first oxidant compressor, wherein the drive includes a steam turbine or an electric motor, and the drive is coupled to an input shaft of the first gearbox; and wherein the main oxidant compression system has a second oxidant compressor coupled to the electrical generator via a second gearbox, and the first and second oxidant compressors are fluidly coupled in parallel to the gas turbine system.

embodiment 19
the system of any preceding embodiment, including a stoichiometric combustion system having the turbine combustor configured to combust a fuel/oxidant mixture in a combustion equivalence ratio of 1.0 plus or minus 0.01, 0.02, 0.03, 0.04, or 0.05 fuel to oxygen in the oxidant.

embodiment 20
the system of any preceding embodiment, including a heat recovery steam generator (hrsg) coupled to the gas turbine system, wherein the hrsg is configured to generate steam by transferring heat from the exhaust gas to a feed water.
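embodiment 19's stoichiometry condition is a tolerance check on the equivalence ratio. a one-line sketch of that check (the helper name is ours):

```python
def is_substantially_stoichiometric(phi, tol=0.05):
    """True when the fuel/oxidant equivalence ratio phi is within tol of 1.0;
    embodiment 19 allows tolerances from 0.01 up to 0.05."""
    return abs(phi - 1.0) <= tol
```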
embodiment 21
the system of embodiment 20, wherein the hrsg is fluidly coupled to a steam turbine generator having a steam turbine coupled to an electrical generator, the steam turbine is configured to drive the first oxidant compressor via the first gearbox, to drive a second oxidant compressor of the main oxidant compression system, or any combination thereof.

embodiment 22
the system of embodiments 20 or 21, wherein the egr system is configured to route the exhaust gas from the turbine, through the hrsg, and back to the exhaust gas compressor, wherein the egr system includes a blower configured to motivate the exhaust gas toward the exhaust gas compressor; a cooler configured to cool the exhaust gas; and a moisture removal unit configured to remove moisture from the exhaust gas.

embodiment 23
the system of embodiments 20, 21, or 22, wherein the hrsg includes a catalyst configured to reduce a concentration of oxygen in the exhaust gas.

embodiment 24
the system of any preceding embodiment, including an exhaust extraction system coupled to the gas turbine system, wherein the exhaust extraction system is configured to remove a portion of the exhaust gas from the gas turbine system.

embodiment 25
the system of embodiment 24, including a hydrocarbon production system fluidly coupled to the exhaust extraction system, wherein the exhaust extraction system is configured to utilize the portion of the exhaust gas as a pressurized fluid for enhanced oil recovery.

embodiment 26
the system of embodiment 24, wherein the exhaust extraction system comprises a catalyst configured to reduce a concentration of oxygen in the portion of the exhaust gas.
embodiment 27
the system of any preceding embodiment, wherein the main oxidant compression system is configured to supply the compressed oxidant as atmospheric air, oxygen enriched air having between approximately 21% and 80% by volume oxygen, oxygen depleted air having between approximately 1% and 21% by volume oxygen, or substantially pure oxygen comprising greater than 80% by volume oxygen.

embodiment 28
a system including a gas turbine system, having: a turbine combustor; a turbine driven by combustion products from the turbine combustor; and an exhaust gas compressor driven by the turbine, wherein the exhaust gas compressor is configured to compress and supply an exhaust gas to the turbine combustor. the gas turbine system also includes an exhaust gas recirculation (egr) system, wherein the egr system is configured to recirculate the exhaust gas along an exhaust recirculation path from the turbine to the exhaust gas compressor. the system also includes a main oxidant compression system configured to supply compressed oxidant to the gas turbine system, and the main oxidant compression system has a first oxidant compressor; and a second oxidant compressor, wherein the first and second oxidant compressors are driven by the gas turbine system.

embodiment 29
the system of embodiment 28, wherein an oxidant outlet of the second oxidant compressor is fluidly coupled to an oxidant inlet of the first oxidant compressor.

embodiment 30
the system of embodiments 28 or 29, wherein the first and second oxidant compressors are driven by the gas turbine system via an electrical generator drivingly coupled to a shaft of the gas turbine system, wherein the second oxidant compressor is drivingly coupled to an output shaft of the electrical generator.

embodiment 31
the system of embodiments 28, 29, or 30, wherein the first oxidant compressor comprises a centrifugal compressor and the second oxidant compressor comprises an axial flow compressor.
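embodiment 27's oxidant categories can be expressed as a classification over oxygen volume fraction. the sketch below is illustrative only; because the stated ranges meet at 21%, the boundary treatment here is our own choice:

```python
def classify_oxidant(o2_vol_pct):
    """Map oxygen content (% by volume) to the oxidant categories of embodiment 27."""
    if o2_vol_pct > 80:
        return "substantially pure oxygen"
    if o2_vol_pct > 21:
        return "oxygen enriched air"
    if o2_vol_pct == 21:
        return "atmospheric air"  # ~21% O2 treated as ambient air
    if o2_vol_pct >= 1:
        return "oxygen depleted air"
    raise ValueError("below the ~1% lower bound given in the text")
```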
embodiment 32
the system of embodiments 28, 29, 30, or 31, comprising a first gearbox coupling the first and second oxidant compressors, wherein the second oxidant compressor is drivingly coupled to an input shaft of the first gearbox and the first oxidant compressor is drivingly coupled to an output shaft of the first gearbox.

embodiment 33
the system of embodiments 28 or 29, wherein the first oxidant compressor is driven by the gas turbine system via an electrical generator, wherein the second oxidant compressor is drivingly coupled to an input shaft of the electrical generator and the first oxidant compressor is drivingly coupled to an output shaft of the electrical generator.

embodiment 34
the system of embodiments 28, 29, 30, 31, 32, or 33, including an interstage cooling system disposed along an oxidant flow path between the first and second oxidant compressors.

embodiment 35
the system of embodiment 34, wherein the interstage cooling system includes a spray system configured to output a spray along the oxidant flow path.

embodiment 36
the system of embodiments 34 or 35, wherein the interstage cooling system includes a heat exchanger disposed along the oxidant flow path, and the heat exchanger includes a coolant path configured to circulate a coolant to absorb heat along the oxidant flow path.

embodiment 37
the system of embodiments 34, 35, or 36, wherein the interstage cooling system includes a steam generator, a feed water heater, or a combination thereof, configured to cool compressed oxidant along the oxidant flow path by transferring heat to a feed water supply, wherein the steam generator is configured to generate steam for a steam turbine generator having a steam turbine coupled to an electrical generator, and the feed water heater is configured to preheat the feed water supply for eventual supply to a heat recovery steam generator (hrsg).
embodiment 38
the system of embodiments 28, 30, 31, 32, 33, 34, 35, 36, or 37, wherein the main oxidant compression system includes a first gearbox configured to enable the first oxidant compressor to operate at a first speed different from a first operating speed of the gas turbine system, the first and second oxidant compressors are fluidly coupled in parallel to the gas turbine system, and the second oxidant compressor is coupled to the first gearbox via the first oxidant compressor.

embodiment 39
the system of embodiments 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, or 38, including a stoichiometric combustion system having the turbine combustor configured to combust a fuel/oxidant mixture in a combustion equivalence ratio of 1.0 plus or minus 0.01, 0.02, 0.03, 0.04, or 0.05 fuel to oxygen in the oxidant.

embodiment 40
the system of embodiments 28, 29, 30, 31, 32, 33, 34, 35, 36, or 38, including a heat recovery steam generator (hrsg) coupled to the gas turbine system, wherein the hrsg is configured to generate steam by transferring heat from the exhaust gas to a feed water.

embodiment 41
the system of embodiment 40, wherein the hrsg is fluidly coupled to a steam turbine generator having a steam turbine coupled to an electrical generator, the steam turbine is configured to drive the first oxidant compressor via the first gearbox, to drive the second oxidant compressor of the main oxidant compression system, or any combination thereof.

embodiment 42
the system of embodiments 38, 40, or 41, wherein the egr system is configured to route the exhaust gas from the turbine, through the hrsg, and back to the exhaust gas compressor, wherein the egr system includes: a blower configured to motivate the exhaust gas toward the exhaust gas compressor; a cooler configured to cool the exhaust gas; and a moisture removal unit configured to remove moisture from the exhaust gas.
embodiment 43
the system of embodiments 38, 40, 41, or 42, wherein the hrsg comprises a catalyst configured to reduce a concentration of oxygen in the exhaust gas.

embodiment 44
the system of embodiments 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, or 43, including an exhaust extraction system coupled to the gas turbine system, wherein the exhaust extraction system is configured to remove a portion of the exhaust gas from the gas turbine system.

embodiment 45
the system of embodiment 44, including a hydrocarbon production system fluidly coupled to the exhaust extraction system, wherein the exhaust extraction system is configured to utilize the portion of the exhaust gas as a pressurized fluid for enhanced oil recovery.

embodiment 46
the system of embodiments 44 or 45, wherein the exhaust extraction system comprises a catalyst configured to reduce a concentration of oxygen in the portion of the exhaust gas.

embodiment 47
the system of embodiments 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, or 46, wherein the main oxidant compression system is configured to supply the compressed oxidant as atmospheric air, oxygen enriched air having between approximately 21% and 80% by volume oxygen, oxygen depleted air having between approximately 1% and 21% by volume oxygen, or substantially pure oxygen comprising greater than 80% by volume oxygen.

embodiment 48
a system, including a gas turbine system, having: a turbine combustor; a turbine driven by combustion products from the turbine combustor; and an exhaust gas compressor driven by the turbine, wherein the exhaust gas compressor is configured to compress and supply an exhaust gas to the turbine combustor; and an exhaust gas recirculation (egr) system, wherein the egr system is configured to recirculate the exhaust gas along an exhaust recirculation path from the turbine to the exhaust gas compressor.
the system also includes a main oxidant compression system configured to supply compressed oxidant to the gas turbine system, and the main oxidant compression system comprises one or more oxidant compressors; a heat recovery steam generator (hrsg) coupled to the gas turbine system, wherein the hrsg is configured to generate steam by transferring heat from the exhaust gas to a feed water, and the exhaust recirculation path of the egr system extends through the hrsg; and a steam turbine disposed along a shaft line of the gas turbine system and at least partially driven by the steam from the hrsg, wherein the steam turbine is configured to return condensate as at least a portion of the feedwater to the hrsg.

embodiment 49
the system of embodiment 48, wherein at least one oxidant compressor of the one or more oxidant compressors of the main oxidant compression system is disposed along the shaft line of the gas turbine system.

embodiment 50
the system of embodiments 48 or 49, wherein the steam turbine is disposed along the shaft line between the main oxidant compression system and the gas turbine system.

embodiment 51
the system of embodiments 49 or 50, having an electrical generator disposed between the steam turbine and the at least one oxidant compressor of the main oxidant compression system.

embodiment 52
the system of embodiments 48, 49, 50, or 51, having an electrical generator disposed between the steam turbine and the gas turbine system, wherein the gas turbine system is mechanically coupled to an input shaft of the electrical generator and the steam turbine is mechanically coupled to an output shaft of the electrical generator.

embodiment 53
the system of embodiments 48, 49, 50, 51, or 52, wherein the main oxidant compression system is driven by the gas turbine system, and the main oxidant compression system is positioned along the shaft line between the steam turbine and the gas turbine system.
embodiment 54
the system of embodiments 49, 50, 51, 52, or 53, including a clutch disposed between the at least one compressor of the main oxidant compression system and the steam turbine, wherein the clutch enables the steam turbine to operate at the same speed as the gas turbine system when engaged, and to operate separate from the gas turbine system when not engaged.

embodiment 55
the system of embodiments 48, 49, 50, 51, 52, 53, or 54, wherein the main oxidant compression system includes a plurality of compressors in a series arrangement of compression.

embodiment 56
the system of embodiments 48, 49, 50, 51, 52, 53, or 54, wherein the main oxidant compression system comprises a plurality of compressors in a parallel arrangement of compression.

embodiment 57
the system of embodiments 48, 49, 50, 51, 52, 53, 54, 55, or 56, wherein the main oxidant compression system comprises at least one oxidant compressor drivingly coupled to a speed-reducing or speed-increasing gearbox that enables the at least one oxidant compressor to operate at a speed that is different from an operating speed of the gas turbine system.

embodiment 58
the system of embodiments 48, 49, 50, 51, 52, 53, 54, 55, 56, or 57, wherein the hrsg comprises a catalyst configured to reduce a concentration of oxygen in the exhaust gas.

embodiment 59
the system of embodiments 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, or 58, including an exhaust extraction system coupled to the gas turbine system, wherein the exhaust extraction system is configured to remove a portion of the exhaust gas from the gas turbine system.

embodiment 60
the system of embodiment 59, including a hydrocarbon production system fluidly coupled to the exhaust extraction system, wherein the exhaust extraction system is configured to utilize the portion of the exhaust gas as a pressurized fluid for enhanced oil recovery.
embodiment 61
the system of embodiments 59 or 60, wherein the exhaust extraction system includes a catalyst configured to reduce a concentration of oxygen in the portion of the exhaust gas.

embodiment 62
the system of embodiments 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, or 61, wherein the main oxidant compression system is configured to supply the compressed oxidant as atmospheric air, oxygen enriched air having between approximately 21% and 80% by volume oxygen, oxygen depleted air having between approximately 1% and 21% by volume oxygen, or substantially pure oxygen comprising greater than 80% by volume oxygen.

embodiment 63
a system, including: a gas turbine system, having: a turbine combustor; a turbine driven by combustion products from the turbine combustor; and an exhaust gas compressor driven by the turbine, wherein the exhaust gas compressor is configured to compress and supply an exhaust gas to the turbine combustor; and an exhaust gas recirculation (egr) system, wherein the egr system is configured to recirculate the exhaust gas along an exhaust recirculation path from the turbine to the exhaust gas compressor. the system also includes a main oxidant compression system comprising one or more oxidant compressors, wherein the one or more oxidant compressors are separate from the exhaust gas compressor, and the one or more oxidant compressors are configured to supply all compressed oxidant utilized by the turbine combustor in generating the combustion products.

embodiment 64
the system of any preceding embodiment, wherein the combustion products have substantially no unburnt fuel or oxidant remaining.
embodiment 65
the system of any preceding embodiment, wherein the combustion products have less than approximately 10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 200, 300, 400, 500, 1000, 2000, 3000, 4000, or 5000 parts per million by volume (ppmv) of oxidant, unburnt fuel, nitrogen oxides (e.g., nox), carbon monoxide (co), sulfur oxides (e.g., sox), hydrogen, and other products of incomplete combustion.

this written description uses examples to disclose the invention, including the best mode, and also to enable any person skilled in the art to practice the invention, including making and using any devices or systems and performing any incorporated methods. the patentable scope of the invention is defined by the claims, and may include other examples that occur to those skilled in the art. such other examples are intended to be within the scope of the claims if they have structural elements that do not differ from the literal language of the claims, or if they include equivalent structural elements with insubstantial differences from the literal language of the claims.
173-598-530-435-128
US
[ "US" ]
G06F17/30,G06F16/36,G06F16/33
2007-12-20T00:00:00
2007
[ "G06" ]
method and apparatus for searching using an active ontology
embodiments of the present invention provide a method and apparatus for searching using an active ontology. one embodiment of a method for searching a database includes receiving a search string, where the search string comprises one or more words, generating a semantic representation of the search string in accordance with an ontology, searching the database using the semantic representation, and outputting a result of the searching.
1. a method for searching a database, comprising: receiving a search string, the search string comprising one or more words; generating a semantic representation of the search string in accordance with an ontology; searching the database using the semantic representation; and outputting a result of the searching.

2. the method of claim 1, wherein the generating comprises: splitting the search string into one or more tokens, where each of the one or more tokens represents at least one of the one or more words; parsing the one or more tokens, using the ontology; and producing an interpretation of the search string as a result of the parsing.

3. the method of claim 2, wherein the splitting comprises: identifying one or more missing search criteria in the search string; and selecting a default value for the one or more missing search criteria from at least one of: a user profile and a user search history.

4. the method of claim 2, wherein the parsing comprises: matching the one or more tokens to one or more nodes in the ontology.

5. the method of claim 2, wherein the producing comprises: producing a plurality of interpretations of the search string; assigning a weight to each of the plurality of interpretations, the weight indicating a confidence that an associated one of the plurality of interpretations is correct; and selecting a one of the plurality of interpretations with a highest weight.

6. the method of claim 1, wherein the ontology comprises: a plurality of nodes, each of the plurality of nodes representing a class or an attribute; and a plurality of links connecting the plurality of nodes, each of the plurality of links representing a relation between nodes linked thereby.

7. the method of claim 6, wherein the relation comprises at least one of: an is-a relation, a has-a relation, or a causal relation.

8. the method of claim 1, wherein the ontology is customized for a particular purpose.

9.
the method of claim 1, further comprising: storing a record comprising at least one of: the search string, the semantic representation, the result, and a time stamp indicating a reception time of the search string.

10. a computer readable storage medium containing an executable program for searching a database, where the program performs the steps of: receiving a search string, the search string comprising one or more words; generating a semantic representation of the search string in accordance with an ontology; searching the database using the semantic representation; and outputting a result of the searching.

11. the computer readable storage medium of claim 10, wherein the generating comprises: splitting the search string into one or more tokens, where each of the one or more tokens represents at least one of the one or more words; parsing the one or more tokens, using the ontology; and producing an interpretation of the search string as a result of the parsing.

12. the computer readable storage medium of claim 11, wherein the splitting comprises: identifying one or more missing search criteria in the search string; and selecting a default value for the one or more missing search criteria from at least one of: a user profile and a user search history.

13. the computer readable storage medium of claim 11, wherein the parsing comprises: matching the one or more tokens to one or more nodes in the ontology.

14. the computer readable storage medium of claim 11, wherein the producing comprises: producing a plurality of interpretations of the search string; assigning a weight to each of the plurality of interpretations, the weight indicating a confidence that an associated one of the plurality of interpretations is correct; and selecting a one of the plurality of interpretations with a highest weight.

15.
the computer readable storage medium of claim 10, wherein the ontology comprises: a plurality of nodes, each of the plurality of nodes representing a class or an attribute; and a plurality of links connecting the plurality of nodes, each of the plurality of links representing a relation between nodes linked thereby.

16. the computer readable storage medium of claim 15, wherein the relation comprises at least one of: an is-a relation, a has-a relation, or a causal relation.

17. the computer readable storage medium of claim 10, wherein the ontology is customized for a particular purpose.

18. the computer readable storage medium of claim 10, further comprising: storing a record comprising at least one of: the search string, the semantic representation, the result, and a time stamp indicating a reception time of the search string.

19. a system for searching a database, comprising: means for receiving a search string, the search string comprising one or more words; means for generating a semantic representation of the search string in accordance with an ontology; means for searching the database using the semantic representation; and means for outputting a result of the searching.

20. the system of claim 19, wherein the generating comprises: splitting the search string into one or more tokens, where each of the one or more tokens represents at least one of the one or more words; parsing the one or more tokens, using the ontology; and producing an interpretation of the search string as a result of the parsing.
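claims 5 and 14 describe weighting candidate interpretations and keeping the best one. a minimal sketch of that selection step; the record shape and field names are ours, not the patent's:

```python
def select_interpretation(interpretations):
    """Pick the candidate interpretation with the highest confidence weight."""
    return max(interpretations, key=lambda interp: interp["weight"])

candidates = [
    {"reading": "find italian restaurants", "weight": 0.8},
    {"reading": "find a person named 'italian'", "weight": 0.1},
]
best = select_interpretation(candidates)  # the 0.8-weight reading
```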
cross reference to related applications this application claims the benefit of u.s. provisional patent application ser. no. 61/015,495, filed dec. 20, 2007, which is herein incorporated by reference in its entirety. field of the invention the invention relates generally to database searching and relates more specifically to searching using an active ontology. background of the disclosure searching by keywords is well known in the field of database searching. for example, when using an internet search engine, a user typically enters one or more keywords as search terms, such that the search results will include database content associated with the keywords. often, the creator of the content will choose the keywords that will cause the content to be retrieved by a database search (e.g., by “tagging” the content with the keywords). for example, the creator of a review of a fancy italian restaurant named restaurant x may tag the review with keywords such as “italian,” “restaurant,” and “fancy” such that the review is retrieved when a user enters one or more of those keywords in a query. a drawback of this approach is that keywords may not capture all of the synonyms that users will use in practice when searching. for example, referring to the example above, the review of restaurant x might not be retrieved if the user instead enters keywords such as “italian” and “elegant” or “upscale.” these consequences are particularly significant in the field of advertising, where advertisers rely on users viewing their advertisements to generate sales. moreover, conventional database search systems that search by keywords may have trouble determining the high level intent of what a user is seeking. for example, a search system may be unable to determine that the keywords “restaurant x,” “friday,” and “8:00 pm” indicate that the user wishes to make reservations for friday at 8:00 pm at restaurant x.
thus, there is a need in the art for a method and apparatus for searching using an active ontology. summary of the invention embodiments of the present invention provide a method and apparatus for searching using an active ontology. one embodiment of a method for searching a database includes receiving a search string, where the search string comprises one or more words, generating a semantic representation of the search string in accordance with an ontology, searching the database using the semantic representation, and outputting a result of the searching. brief description of the drawings fig. 1 is a flow diagram illustrating one embodiment of a method for searching using an active ontology, according to the present invention; fig. 2 illustrates one embodiment of an exemplary active ontology that may be used to facilitate a search in accordance with the method illustrated in fig. 1 ; and fig. 3 is a high level block diagram of the present search method that is implemented using a general purpose computing device. detailed description in one embodiment, the present invention is a method and apparatus for searching using an active ontology. an “ontology”, generally, is a data structure that represents domain knowledge, where distinct classes, attributes, and relations among classes are defined. a separate engine may operate or reason on this data structure to produce certain results. in certain embodiments of the present invention, an ontology is used to select content (e.g., a set of advertisements) from a database given a user query. the approach to searching that is embodied in the present application may be of particular use in the field of advertising, although the invention is not limited as such. specifically, the semantic structure employed by embodiments of the present invention allows for improved advertisement indexing. 
moreover, the use of links (such as “suggests” and causal links) in the search ontology facilitates the prediction of upcoming relevant content or user actions, and these links can be automatically learned through use. fig. 1 is a flow diagram illustrating one embodiment of a method 100 for searching using an active ontology, according to the present invention. the basic task of the method 100 is to take a user query (i.e., search string) and return a set of relevant content (e.g., advertisements). in one embodiment, the content is sorted by the user's preferences. the method 100 is initialized at step 102 and proceeds to step 104 , where the method 100 receives a search string from a user. in one embodiment, the search string is substantially similar to a search string typically given to an online search engine (e.g., a phrase such as “find fancy italian food” or “italian food in san francisco”). in step 106 , the method 100 splits the search string into one or more tokens, each token representing at least one word in the search string. the method 100 then proceeds to step 108 and matches the tokens to nodes of an active ontology. fig. 2 , for example, illustrates one embodiment of an exemplary active ontology 200 that may be used to facilitate a search in accordance with the method 100 . as illustrated, the active ontology 200 comprises a plurality of nodes 202 1 - 202 n (hereinafter collectively referred to as “nodes 202 ”). the nodes 202 represent concepts, which may be categories or classes (e.g., as in the case of node 202 4 , which represents the concept or category “restaurant”) or attributes of the classes (e.g., as in the case of nodes 202 7 , 202 8 , and 202 n , which represent, respectively, the concepts or attributes “style,” “price range,” and “location”). 
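as a concrete illustration of steps 106 and 108, a minimal token-splitting and node-matching pass might look like the following sketch; the node names echo fig. 2, but the per-node vocabularies are invented for the example and are not part of the disclosure.

```python
# Sketch of steps 106-108: split a search string into tokens and
# activate the ontology nodes whose vocabulary matches each token.
# The node names and word lists below are illustrative assumptions.

ONTOLOGY_VOCAB = {
    "restaurant":  {"restaurant", "restaurants", "food", "dining"},
    "style":       {"italian", "sicilian", "french", "chinese"},
    "price_range": {"fancy", "cheap", "expensive", "upscale"},
}

def split_into_tokens(search_string):
    """Step 106: one token per word, lowercased."""
    return search_string.lower().split()

def match_tokens_to_nodes(tokens):
    """Step 108: return {token: node} for every token that activates
    a word-matching node; unmatched tokens (e.g. "find") are dropped."""
    matches = {}
    for token in tokens:
        for node, vocab in ONTOLOGY_VOCAB.items():
            if token in vocab:
                matches[token] = node
    return matches

tokens = split_into_tokens("find fancy italian food")
print(match_tokens_to_nodes(tokens))
# {'fancy': 'price_range', 'italian': 'style', 'food': 'restaurant'}
```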
the nodes 202 are connected by links 204 1 - 204 n (hereinafter collectively referred to as “links 204 ”) which represent the relations among the classes and attributes represented by the nodes 202 . for instance, the link 204 10 represents the fact that the class “restaurant” has an attribute of “style.” referring back to fig. 1 , the individual tokens into which the search string is split will activate word matching nodes in the active ontology. in one embodiment, the active ontology is customized for a particular purpose, such as advertising. the method 100 will try to parse the list of tokens, using the active ontology, as a whole phrase, in order to try to determine the overall intent of the user. thus, the method 100 will try to parse as many of the tokens as possible. this means that if there are multiple ambiguous interpretations of the search string, the method 100 will try to evaluate each weighted alternative based on all of the tokens derived from the search string. the interpretation with the best weight (i.e., the highest confidence) will be used to generate a semantic representation of the search string in step 110 . specifically, in step 110 , the method 100 generates a semantic representation of the search string using the ontology nodes. the ontology nodes corresponding to the best weighted interpretation will create the semantic representation of the phrase. this semantic structure will contain the contextual information that was extracted from the search string. for instance, if the search string was “find fancy italian food,” the method 100 might translate the search string into a semantic structure such as ‘find(restaurant, [style(“italian”)], [price_range(“fancy”)])’. this structure captures the user's intent to find a restaurant and it also specifies an additional constraint using a type attribute, restricting the results to those restaurants that are fancy and serve italian food. 
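a minimal sketch of step 110, assuming each candidate interpretation is weighted by the fraction of tokens it explains (the concrete weighting scheme is an assumption, not specified in the text); the best-weighted interpretation is rendered as the semantic structure quoted above.

```python
# Sketch of step 110: score candidate interpretations by how many
# tokens each one accounts for, then render the best one as the
# semantic structure quoted in the text. Weighting by token
# coverage is an illustrative assumption.

def score(interpretation, tokens):
    """Weight = fraction of tokens the interpretation explains."""
    return len(interpretation["slots"]) / len(tokens)

def to_semantic_structure(interpretation):
    slots = ", ".join(
        '[%s("%s")]' % (name, value)
        for name, value in interpretation["slots"]
    )
    return "%s(%s, %s)" % (interpretation["intent"],
                           interpretation["category"], slots)

tokens = ["find", "fancy", "italian", "food"]
candidates = [
    # "fancy" read as a price range, "italian" as a style:
    {"intent": "find", "category": "restaurant",
     "slots": [("style", "italian"), ("price_range", "fancy")]},
    # a weaker reading that only explains "italian":
    {"intent": "find", "category": "restaurant",
     "slots": [("style", "italian")]},
]
best = max(candidates, key=lambda c: score(c, tokens))
print(to_semantic_structure(best))
# find(restaurant, [style("italian")], [price_range("fancy")])
```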
in step 112 , the method 100 uses the semantic representation of the search string to search a database (e.g., a database of advertisers). that is, the method 100 searches the database for content that best matches all of the criteria embodied in the semantic representation. in the above example, for instance, a database of advertisements or reviews for restaurants (such as zagat survey, llc's zagat.com®) would be searched, restricted to those restaurants that are fancy and serve italian food. however, if the original or a subsequent search string included the additional constraint of “friday, 8:00 pm,” a semantic representation of this additional constraint might motivate search in a different database, such as a database that allows a user to make restaurant reservations (such as opentable, inc's opentable.com®), as illustrated in fig. 2 . the additional constraint of day (“friday”) and time (“8:00 pm”) changes the resultant semantic representation in a subtle way that cannot be easily mapped to traditional keyword approaches. as discussed above, the user's original search string may be ambiguous, but the method 100 will parse the search string and translate it to a precise semantic structure that can be used to construct a database query. in this way, the search string is used to search for content based on semantically meaningful attributes and not just based on keywords. the method 100 outputs the results of the database search to the user in step 114 , before terminating in step 116 . in one embodiment, the method 100 stores the results in addition to outputting them. in one embodiment, the stored results comprise a record including at least one of: the search string, the semantic representation of the search string, the search results, and a time stamp indicating when the search string was received. the record allows the results to be retrieved by the user at a later time. 
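steps 112-114 and the stored record might be prototyped as below; the database rows are invented examples, and the record fields follow the ones listed in the text (search string, semantic representation, results, time stamp).

```python
# Sketch of steps 112-114: match the parsed slots against a toy
# database and keep a timestamped record of the search. The rows
# are invented examples; the record fields follow the text.
import time

DATABASE = [
    {"name": "Trattoria Roma", "style": "italian", "price_range": "fancy"},
    {"name": "Luigi's",        "style": "italian", "price_range": "cheap"},
    {"name": "Chez Pierre",    "style": "french",  "price_range": "fancy"},
]

def search(slots):
    """Return rows matching every criterion in the semantic slots."""
    return [row for row in DATABASE
            if all(row.get(k) == v for k, v in slots.items())]

def make_record(search_string, semantic, results):
    """Stored record: string, semantic form, results, time stamp."""
    return {"search_string": search_string,
            "semantic": semantic,
            "results": results,
            "time_stamp": time.time()}

slots = {"style": "italian", "price_range": "fancy"}
results = search(slots)
print([r["name"] for r in results])   # ['Trattoria Roma']
```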
in addition, the record also allows the method 100 to learn patterns of user behavior that may assist in updating the ontology, as discussed in greater detail below. in one embodiment, if the search string received in step 104 appears unclear or incomplete (e.g., some of the search criteria are missing), the method 100 examines the user's profile or search history to select default values. for instance, if a first search string was “find fancy italian restaurants in san francisco” and a second search string is “get evening showtimes,” then the method 100 will remember the location san francisco, calif. from the first query when selecting the locations for movie theaters. also, the user's profile may specify a preference for art movies, so that preference may be added automatically to the second query. embodiments of the present invention will therefore parse a user's query and determine the higher level concepts and categories that describe what the user is seeking. these concepts are then used as an index into the database of relevant content. content that triggers on a particular concept will also be triggered on the subconcepts. for instance, a user query for “italian restaurants” will automatically trigger ads for “sicilian restaurants” as well, because “sicilian” is a subconcept of “italian.” content providers (e.g., advertisers) only need to register on the highest level category that they wish to match, and they will automatically be triggered for subcategories and their synonyms as well. referring back to fig. 2 , as discussed above, links 204 in the active ontology 200 indicate relations among the classes and attributes represented by the nodes 202 . each of these links 204 represents a specific kind of relation. in one embodiment, the types of relations represented by the links 204 in the active ontology 200 include at least one of: an is-a relation, a has-a relation, a causal relation (such as, for example, a suggests relation). 
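the subconcept triggering described above (an advertiser registers on "italian" and is also triggered for "sicilian" queries) can be sketched by walking is-a links upward; the category tree below is an invented fragment.

```python
# Sketch of subconcept triggering: content registered on a
# high-level category also fires for any of its subconcepts,
# found by walking is-a links. The tree is an invented example.

IS_A = {                     # child -> parent (is-a links)
    "sicilian": "italian",
    "tuscan":   "italian",
    "italian":  "european",
}

def ancestors(concept):
    """The concept plus every broader category reachable via is-a."""
    chain = [concept]
    while chain[-1] in IS_A:
        chain.append(IS_A[chain[-1]])
    return chain

def triggers(registered_category, query_concept):
    """An ad registered on `registered_category` fires when the query
    concept is that category or any subconcept of it."""
    return registered_category in ancestors(query_concept)

print(triggers("italian", "sicilian"))   # True: sicilian is-a italian
print(triggers("italian", "french"))     # False
```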
for example, in one embodiment, is-a relations are used to link categories (i.e., concepts in the ontology) to broader categories. in further embodiments, sets of synonyms are defined for concepts. in this way, the search string can be translated into a semantic search for content based on broader categories like “european restaurants” or “fancy restaurants” or “expensive food”. in a further embodiment, has-a relations are used to specify additional search criteria that will be associated with a concept or category. for instance, when searching for a restaurant, a city location may be a mandatory search parameter, as illustrated in fig. 2 . this is specified using a mandatory has-a link (link 204 n ) in the ontology 200 from the restaurant concept node (node 202 4 ) to the location node (node 202 n ). a price range is also a useful search parameter, but may be optional. thus, a has-a link (link 204 11 ) from the restaurant concept node (node 202 4 ) to the price range concept node (node 202 8 ) may be established and marked as optional. the concepts that have has-a links become gather-type nodes. when the user's search string is parsed, the semantic slots for these has-a links are filled in using the parsed tokens, or else default values are used from the user's profile and search history. therefore, the present invention has this detailed information available when searching a database. in further embodiments, the concepts of the present invention are used to model basic processes. in one embodiment, the ontology includes causal links or suggests links between concepts. causal links would be used if one concept directly causes another concept, or if one action usually precedes another action. suggests links are especially useful and would link user actions that often occur together but not in a particular order.
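a gather-type node's slot filling, as described for the has-a links, might be sketched as follows; the slot table mirrors fig. 2 (mandatory location, optional price range), while the default-lookup order (history before profile) is an illustrative assumption.

```python
# Sketch of a gather-type node: a concept's has-a links define its
# semantic slots, some mandatory and some optional. Missing slots
# are filled from the user's search history or profile, as the
# text describes; the lookup order is an assumption.

RESTAURANT_SLOTS = {          # slot -> mandatory?
    "location":    True,      # mandatory has-a link (link 204 n)
    "price_range": False,     # optional has-a link (link 204 11)
}

def fill_slots(parsed, profile, history):
    filled, missing = {}, []
    for slot, mandatory in RESTAURANT_SLOTS.items():
        if slot in parsed:
            filled[slot] = parsed[slot]
        elif slot in history:          # most recent query wins
            filled[slot] = history[slot]
        elif slot in profile:
            filled[slot] = profile[slot]
        elif mandatory:
            missing.append(slot)       # must still be asked for
    return filled, missing

parsed  = {"price_range": "fancy"}            # no location given
history = {"location": "san francisco"}       # from an earlier query
filled, missing = fill_slots(parsed, profile={}, history=history)
print(filled)    # {'location': 'san francisco', 'price_range': 'fancy'}
print(missing)   # []
```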
for example, the concept nodes for restaurant booking (node 202 3 ) and movie booking (node 202 2 ) could be linked bidirectionally with a suggests link (link 204 5 ), as illustrated in fig. 2 . an atm concept node (not shown), which represents a user visit to an automated teller machine (atm), could be linked with a causal link to both restaurant booking (node 202 3 ) and movie booking (node 202 2 ) because a visit to an atm often precedes dinner and a movie closely in time. in further embodiments, a system according to the present invention utilizes the process model to help determine what else might interest a given user. for example, given a search string “find restaurants,” the present invention would activate the restaurant concept node (node 202 4 ) and indirectly activate the restaurant booking (node 202 3 ) and movie (node 202 2 ) concept nodes as well. if the search string was received during evening hours, then the restaurant booking node (node 202 3 ) would have higher confidence. this in turn would increase activation of suggests-linked nodes (e.g., the movie node 202 2 ). therefore, the system would query its database for restaurants and could also produce additional results for nearby movies. each of the search results would be associated with the concepts that triggered them, so that the results for movies could be presented separately to the user. although this scenario utilizes a process model that is explicitly encoded into an ontology, those skilled in the art will appreciate that some of the links could be learned using data mining techniques from the logs of a particular user or the aggregated behavior of many users. over time, users of the present invention may ask for movies, restaurants, atms, gas stations, book stores, or the like. in one embodiment, the inventive system logs the corresponding semantic structures for each of the received search strings and the time stamps indicating when the search strings were received.
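the "find restaurants" scenario above can be sketched as spreading activation over suggests links, with an evening-hours boost raising the restaurant booking node and, through it, the suggests-linked movie node; the decay factor and boost amount are invented parameters, not values from the disclosure.

```python
# Sketch of spreading activation over suggests links. A directly
# activated node passes a fraction of its activation to linked
# nodes; a confidence boost (e.g. evening hours) on "restaurant
# booking" then raises the suggests-linked "movie" node.
# Decay and boost values are illustrative assumptions.

SUGGESTS = {                         # bidirectional suggests links
    "restaurant":         ["restaurant_booking"],
    "restaurant_booking": ["restaurant", "movie"],
    "movie":              ["restaurant_booking"],
}

def activate(seed, boost=(), decay=0.5, rounds=2):
    level = {seed: 1.0}
    for node in boost:               # e.g. evening-hours confidence
        level[node] = level.get(node, 0.0) + 0.3
    for _ in range(rounds):          # propagate along suggests links
        snapshot = dict(level)
        for node, value in snapshot.items():
            for neighbor in SUGGESTS.get(node, []):
                level[neighbor] = max(level.get(neighbor, 0.0),
                                      value * decay)
    return level

level = activate("restaurant", boost=["restaurant_booking"])
print(round(level["restaurant_booking"], 2), round(level["movie"], 2))
# 0.5 0.25
```

movie results produced this way carry a lower activation than the restaurant results that triggered them, which is one way they could be presented separately to the user.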
these logs can be scanned in temporal order, and all of the search strings that happen within various time windows can be analyzed to make co-occurrence counts. by counting and ranking those pairs of events that co-occur over different time scales, patterns of behavior would emerge over a large body of users. for example, the logs may show many occurrences of movie and restaurant queries that happen within four hours of each other. if so, then the ontology could be automatically augmented with a suggests link between those nodes. in addition, atm may also co-occur frequently with both movie and restaurant, but atm should precede movie and restaurant in time with high probability. if so, then two causal links could be added from atm to restaurant and movie. in this way, statistics could be collected for a particular user or for many users in aggregate. the system would offer related search results based on how frequently a related concept co-occurs with the user's current search string. fig. 3 is a high level block diagram of the present search method that is implemented using a general purpose computing device 300 . in one embodiment, a general purpose computing device 300 comprises a processor 302 , a memory 304 , a search module 305 and various input/output (i/o) devices 306 such as a display, a keyboard, a mouse, a modem, and the like. in one embodiment, at least one i/o device is a storage device (e.g., a disk drive, an optical disk drive, a floppy disk drive). it should be understood that the search module 305 can be implemented as a physical device or subsystem that is coupled to a processor through a communication channel. 
alternatively, the search module 305 can be represented by one or more software applications (or even a combination of software and hardware, e.g., using application specific integrated circuits (asic)), where the software is loaded from a storage medium (e.g., i/o devices 306 ) and operated by the processor 302 in the memory 304 of the general purpose computing device 300 . thus, in one embodiment, the search module 305 for database searching described herein with reference to the preceding figures can be stored on a computer readable medium or carrier (e.g., ram, magnetic or optical drive or diskette, and the like). it should be noted that although not explicitly specified, one or more steps of the methods described herein may include a storing, displaying and/or outputting step as required for a particular application. in other words, any data, records, fields, and/or intermediate results discussed in the methods can be stored, displayed, and/or outputted to another device as required for a particular application. furthermore, steps or blocks in the accompanying figures that recite a determining operation or involve a decision, do not necessarily require that both branches of the determining operation be practiced. in other words, one of the branches of the determining operation can be deemed as an optional step. while the foregoing is directed to the preferred embodiment of the present invention, other and further embodiments of the invention may be devised without departing from the basic scope thereof.
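the log-mining idea described in the detailed description — scanning stored records in temporal order, counting category pairs that co-occur within a time window, and promoting frequent pairs to suggests links — might be prototyped as follows; the window size, threshold, and log entries are invented for the example.

```python
# Sketch of learning suggests links from query logs: count pairs of
# semantic categories whose queries fall within a time window, and
# add a link when the count crosses a threshold. Window size and
# threshold are invented parameters.
from itertools import combinations
from collections import Counter

LOG = [  # (time stamp in hours, semantic category) from stored records
    (18.0, "restaurant"), (18.5, "movie"),
    (42.0, "restaurant"), (43.0, "movie"),
    (66.0, "restaurant"), (69.5, "movie"),
    (90.0, "atm"),
]

def cooccurrences(log, window_hours=4.0):
    counts = Counter()
    for (t1, c1), (t2, c2) in combinations(sorted(log), 2):
        if c1 != c2 and abs(t2 - t1) <= window_hours:
            counts[tuple(sorted((c1, c2)))] += 1
    return counts

def learned_suggests_links(log, threshold=3):
    return [pair for pair, n in cooccurrences(log).items()
            if n >= threshold]

print(learned_suggests_links(LOG))   # [('movie', 'restaurant')]
```

a causal link rather than a suggests link could be added when, in addition, one category precedes the other in time with high probability, as the text notes for atm visits.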
173-780-198-497-861
US
[ "US" ]
H04M3/42,H04M1/724,H04M3/00,H04M3/493
2006-10-31T00:00:00
2006
[ "H04" ]
method and system for service provider awareness
embodiments of the present disclosure are directed to a method and system for service provider awareness by receiving information associated with a potential call between an originator and an intended recipient, determining whether the potential call is in-network or out-of-network, notifying at least one of the originator or the intended recipient based on the determination, creating a message based on whether the potential call is in-network or out-of-network, encapsulating the message, and transmitting the message to a mobile device.
1 . a method, comprising: receiving information associated with a potential call between an originator and an intended recipient; determining whether the potential call is in-network or out-of-network; and notifying at least one of the originator or the intended recipient based on the determination. 2 . the method of claim 1 , wherein determining whether the potential call is in-network or out-of-network comprises: sending a verification request to a database for data relating to the originator and the recipient; and receiving a verification request result from the database. 3 . the method of claim 2 , wherein receiving information associated with a potential call as in-network when the verification request result indicates that data relating to both the originator and the recipient exists in the database. 4 . the method of claim 2 , wherein receiving information associated with a potential call as out-of-network when the verification request result indicates that data relating to at least one of the originator and the recipient does not exist in the database. 5 . the method of claim 1 , wherein notifying at least one of the originator or the recipient comprises providing information on whether the potential call is in-network or out-of-network. 6 . the method of claim 5 , wherein the method further comprises: creating a message based on whether the potential call is in-network or out-of-network; encapsulating the message; and transmitting the message to a mobile device. 7 . the method of claim 6 , wherein the message indicates to the mobile device of the originator that the potential call is in-network in the event that the data relating to both the originator and the intended recipient exist in the database. 8 . the method of claim 6 , wherein the message indicates to the mobile device of the originator that the potential call is out-of-network in the event that the data relating to the intended recipient does not exist in the database. 9 . 
the method of claim 6 , wherein the method further comprises providing an option to continue the potential call or to terminate the potential call. 10 . the method of claim 6 , wherein the message indicates to the mobile device of the intended recipient that the potential call is in-network in the event that the data relating to both the originator and the intended recipient exist in the database. 11 . the method of claim 6 , wherein the message indicates to the mobile device of the intended recipient that the potential call is out-of-network in the event that the data relating to the originator does not exist in the database. 12 . the method of claim 11 , wherein the method further comprises storing the data relating to the out-of-network call in a visitor database. 13 . the method of claim 6 , wherein the message is a text message. 14 . the method of claim 6 , wherein encapsulating the message comprises encapsulating the message for wireless transmission to the mobile device. 15 . the method of claim 6 , wherein transmitting the message further comprises transmitting a caller id notification and a multimedia notification that corresponds to the message, wherein the multimedia notification includes at least one of an audio pattern, a vibration pattern, an image, and a video. 16 . a computer readable media comprising code to perform the acts of the method of claim 6 . 17 . a system, comprising: a first network element that receives information associated with a potential call between an originator and an intended recipient and determines whether the potential call is in-network or out-of-network; and a second network element that notifies at least one of the originator or the intended recipient based on the determination. 18 . the system of claim 17 , wherein the first network element receives information associated with a potential call as in-network when the data relating to both the originator and the intended recipient exists in the database. 19 . 
the system of claim 17 , wherein the first network element receives information associated with a potential call as out-of-network when the data relating to at least one of the originator and the intended recipient does not exist in the database. 20 . the system of claim 17 , wherein the second network element further comprises providing information on whether the potential call is in-network or out-of-network. 21 . the system of claim 20 , wherein the second network element notifies at least one of the originator or the intended recipient by creating a message based on whether the potential call is in-network or out-of-network, encapsulating the message, and transmitting the message to a mobile device. 22 . a method, comprising: receiving information on whether a potential call between an originator and an intended recipient is in-network or out-of-network; creating a message based on the information; and transmitting the message to a mobile device. 23 . the method of claim 22 , wherein the message indicates to the mobile device of a subscriber that the potential call is an in-network call when the information indicates the existence of data relating to both the originator and the intended recipient in a database, wherein the subscriber is either the originator or the intended recipient. 24 . the method of claim 22 , wherein the message indicates to the mobile device of a subscriber that the potential call is an out-of-network call when the information indicates the existence of either one of data relating the originator or the intended recipient in a database, wherein the subscriber is either the originator or the intended recipient. 25 . the method of claim 22 , wherein transmitting the message further comprises transmitting a caller id notification and a multimedia notification that corresponds to the message, wherein the multimedia notification comprises at least one of an audio pattern, a vibration pattern, an image, and a video.
background information wireless phones and other mobile devices are very popular modes of communication for most people. calling plans for these phones and devices are typically based on a monthly payment schedule such that a flat-fee rate covers all calls for a predetermined number of minutes. night and weekend calls may be included without charge. however, most service providers charge a premium rate for every minute over the monthly allotted minutes in a calling plan. as a result, when a user runs out of minutes and/or repeatedly goes over the allotted minutes in his or her plan, the user may end up with very high bills. brief description of the drawings in order to facilitate a fuller understanding of the exemplary embodiments, reference is now made to the appended drawings. these drawings should not be construed as limiting, but are intended to be exemplary only. fig. 1 is an exemplary illustration of wireless network infrastructure, according to an embodiment of the disclosure. fig. 2 depicts an exemplary flowchart illustrating a service provider awareness method, according to an embodiment of the disclosure. fig. 3 depicts an exemplary flowchart illustrating a service provider awareness method, according to an embodiment of the disclosure. fig. 4 depicts an exemplary mobile device illustrating a service provider identification method, according to an embodiment of the disclosure. fig. 5 depicts an exemplary flowchart illustrating a service provider awareness method, according to an embodiment of the disclosure. figs. 6a and 6b depict an exemplary mobile device illustrating a service provider identification method, according to an embodiment of the disclosure. detailed description of embodiments a system and process of a preferred embodiment of the disclosure provides a service provider identification feature to subscribers within a mobile communications network.
a subscriber, who has an account with a service provider, may ascertain whether a caller is calling from the same service provider (in-network) or from a different service provider (out-of-network). also, a subscriber, who is the caller, may have the ability to identify whether the destination party is in-network or out-of-network. since in-network calls (unlike out-of-network calls) usually do not incur additional fees, a subscriber having this feature may better manage his or her monthly allotted minutes. for example, if the subscriber has a habit of running over his or her monthly allotted minutes, which may lead to excessive overage charges on his or her account, the subscriber may be tempted to upgrade to a plan that provides more minutes. however, this upgrade option may result in paying a higher monthly fee and give him or her too many unneeded minutes. if one of the reasons that the subscriber goes over the monthly allotted minutes is because he or she calls or receives calls from out-of-network callers, a service provider awareness feature may assist the subscriber in being selective with those out-of-network calls. in this example, the subscriber may actually save more money from a provider identification service than from upgrading to another plan. thus, service provider awareness, which may be similar to caller id, may be an optional add-on feature to identify whether a caller or a called party is in-network or out-of-network. a service provider may charge its subscribers for its use or it may be provided as a free or packaged service to attract more subscribers. fig. 1 is an exemplary wireless network infrastructure for a service provider, according to an embodiment of the present invention. fig. 1 depicts a system 100 for supporting wireless communications, in particular, a wireless network for providing a service provider awareness/identification feature.
as illustrated, a mobile device 110 may be coupled to one or more base transceiving stations (bts) 112 , 114 . each base transceiving station 112 , 114 may be monitored and controlled by a base station controller (bsc) 116 . a mobile switching center (msc) 118 may control the base station controller 116 . in one embodiment, an additional mobile switching center 122 may be provided. in another embodiment, mobile switching center 118 and/or 122 may include one or more visitor location registers (vlr). an authentication center (auc) 126 , an equipment identity register (eir) 128 , a home location register (hlr) 132 , and other database 134 may connect to the mobile switching center/visitor location register 118 . mobile switching center/visitor location register 118 may interface with a public switched telephone network (pstn) 124 through a gateway mobile switching center (gmsc) 120 . network identifier alert generator 130 may also be connected to mobile switching center/visitor location register 118 and the various databases 126 , 128 , 132 , 134 to identify calls as in-network or out-of-network. mobile device 110 may include a wireless device with which a subscriber may interface with a network system 100 . such a device may include a wireless phone, a personal digital assistant (pda), a computer (e.g., a laptop notebook), a gaming device, or other similar device. other various embodiments may also be considered. base transceiving stations 112 , 114 may hold radio transceivers that define a cell and may coordinate radio-link protocols with a mobile device 110 . base transceiving stations 112 , 114 may also provide a networking component of a mobile communications system from which all signals are sent and received. base transceiving stations 112 , 114 may be controlled and monitored by base station controller 116 . in turn, base station controller 116 may be controlled via mobile switching center/visitor location register 118 .
in one embodiment, additional mobile switching centers/visitor location registers, e.g., mobile switching center/visitor location register 122 , may also be provided. mobile switching center 118 may include a switching node that assumes the technical functions of a landline network switching node, for example, path search, signal path switching, and/or processing of supplementary services. additionally, if there is a requirement for a connection to a subscriber in a landline network, the request may be forwarded by mobile switching center 118 to the landline network over a switching path. other various implementations may also be provided. in order for a network system 100 to provide various services to its subscribers, mobile switching center 118 may also access a variety of databases. in one embodiment, mobile switching center 118 may connect to a subscriber database, such as a home location register 132 , which may store information that identifies subscribers using its network and which services they use. this information may be stored in a home location register 132 as data including a subscriber's customer number, services, and/or other identifiers. other various storage data and formats may also be provided. in another embodiment of the present invention, mobile switching center 118 may access information from visitor location register 118 . in order to establish a landline network connection to a mobile device, for example, the network provider may need to know where the subscriber is physically located and whether his or her mobile device is switched on. this information may be stored in visitor location register 118 . in another embodiment, the information may be stored in a home location register 132 or a combination of visitor location register 118 and a home location register 132 . 
in yet another embodiment of the present invention, mobile switching center 118 may include network elements, such as software, to determine from the data in the home location register 132 , for example, whether a call is in-network or out-of-network. determining this may include sending a verification request to the subscriber database and receiving a verification from the database. this process will be discussed in further detail below. system 100 may also include authentication center 126 , which may store algorithms, subscriber-related keys, and other similar data. in one embodiment, this information may be useful, for example, during an authentication or verification check where network system 100 may determine whether or not a subscriber is entitled to use the mobile telecommunication network. for example, the subscriber may take out a card contract or use a pre-paid mobile device, where the subscriber pre-pays the service provider for service rather than getting billed at the end of every billing cycle. in this instance, the authentication center 126 , in conjunction with other network elements, may determine whether or not the funds in a pre-paid mobile device have run out. other various embodiments may also be considered. equipment identity register 128 may comprise an optional database that may be maintained by system 100 . equipment identity register 128 may store data including details of mobile transceivers permitted on the network. in one embodiment, this information may be broken down into a plurality of groups, e.g., white, grey and black lists. the white list may include a register of all the mobile devices which are functioning reliably. the grey list may contain details about devices which may possibly be defective. the black list may hold details of devices which either have a fault or have been reported stolen. 
while databases 126 , 128 , 132 , 134 are shown as separate databases, it should be appreciated that the contents of these databases may be combined into fewer or greater numbers of databases and may be stored on one or more data storage systems and in more than one format. system 100 may also include gateway mobile switching center 120 . gateway mobile switching center 120 may provide an edge (enhanced data rates for gsm evolution) function within a public land mobile network (plmn) to terminate the public switched telephone network (pstn) 124 signalling and traffic formats. edge may provide enhanced general packet radio service (egprs), which may be used for any packet switched applications such as an internet connection. high-speed data applications such as video services and other multimedia may benefit from egprs' increased data capacity. edge may also serve as a bolt-on enhancement to general packet radio service (gprs) networks. the technology may function on any network with gprs deployed on it, provided the carrier implements the necessary upgrades. in another embodiment, gateway mobile switching center 120 may convert this to a mobile network protocol. network identifier alert generator 130 may be connected to mobile switching center/visitor location register 118 and/or the databases 126 , 128 , 132 , 134 . network identifier alert generator 130 may receive and exchange data, e.g., a verification, with the mobile switching center 118 or with the databases directly. network identifier alert generator 130 may also create and transmit messages based on data received from or exchanged with the mobile switching center 118 and/or databases. in one embodiment, network identifier alert generator 130 may receive data from the mobile switching center 118 and determine whether a potential call is in-network or out-of-network. this process will be discussed in further detail below. fig. 
2 depicts an exemplary flowchart illustrating a service provider awareness method, according to an embodiment of the disclosure. in step 210 , system 100 may identify a potential call between an originator and an intended recipient by receiving information associated with a potential call. in step 220 , the mobile switching center 118 may send a verification request to a subscriber database, e.g., home location register 132 , for data and/or information relating to the originator and the intended recipient. in step 230 , the mobile switching center 118 may receive a verification from the database. in step 240 , the potential call may be identified as an in-network call or an out-of-network call. also, an out-of-network call may be registered or stored in a visitor database, such as the vlr 118 . in this example, the stored information may be later retrieved for a quick determination that the potential call is out-of-network for calls made to or from the same number. in a preferred embodiment, a mobile switching center 118 may include network elements, such as software, to determine whether a call is in-network or out-of-network based on the verification from the subscriber database. in another embodiment, the verification may be routed from the mobile switching center 118 to a network identifier alert generator 130 . in this example, the network identifier alert generator 130 may include network elements to determine whether a call is in-network or out-of-network. a potential call between an originator and an intended recipient may be identified as an in-network call when data relating to both the originator and the intended recipient exists in the subscriber database or home location register 132 . a potential call may be identified as an out-of-network call when data relating to at least one of the originator and the intended recipient does not exist in the subscriber database (e.g., home location register 132 ). 
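steps 210-240 above amount to a membership check against the subscriber database, with out-of-network numbers cached in a visitor database for quicker later determinations. a minimal sketch of that logic, where the function name, the phone numbers, and the use of plain sets for the hlr and vlr are hypothetical stand-ins for real network databases:

```python
# Hypothetical subscriber database (HLR) and out-of-network cache (VLR).
home_location_register = {"2025550001", "2025550002"}
visitor_location_register = set()

def identify_call(originator: str, recipient: str) -> str:
    # A number already registered as out-of-network can be classified
    # without a fresh verification request (the "quick determination").
    if originator in visitor_location_register or recipient in visitor_location_register:
        return "out-of-network"
    # Steps 220-230: verification request against the subscriber database.
    if originator in home_location_register and recipient in home_location_register:
        return "in-network"
    # Step 240: register the non-subscriber number(s) for later lookups.
    for number in (originator, recipient):
        if number not in home_location_register:
            visitor_location_register.add(number)
    return "out-of-network"

print(identify_call("2025550001", "2025550002"))  # in-network
print(identify_call("2025550001", "2025555555"))  # out-of-network
print(identify_call("2025555555", "2025550002"))  # out-of-network (cached)
```

the point of the cache is visible in the third call: the non-subscriber number was stored during the second call, so the third classification short-circuits before any verification request.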
for example, when an individual subscribes with a wireless operator, he or she may be registered in the subscriber database of that operator. the subscriber database may be used for storage and management of subscriptions and may store data about subscribers of the service provider, such as a subscriber's service profile, location information, activity status, and other similar data. as a result, if data of an originator and data of an intended recipient exists in the home location register 132 , for example, they may both be identified as subscribers of the same network, and a potential call between them may be considered in-network. on the other hand, if either the data of an originator or the data of the intended recipient does not exist in the home location register 132 , they may be identified as being on different networks, and a potential call between them may be considered out-of-network. other techniques and locations within the network for determining in-network and out-of-network calls may also be used. fig. 3 depicts an exemplary flowchart illustrating a service provider awareness feature, according to an embodiment of the disclosure. in this example, an originator of a potential call may be a subscriber. when the originator/subscriber makes a potential call to an intended recipient and the potential call is determined to either be in-network or out-of-network according to the steps discussed in fig. 2 , the network identifier alert generator 130 may create a short message (e.g., a text message or a sms message) to indicate an in-network or out-of-network call, as depicted in step 310 . for example, in one embodiment, when the originator makes a call and it is determined that the call is in-network, the network identifier alert generator 130 may create an “in-network” message. 
in another embodiment, when it is determined that the call made by the originator to the intended recipient is out-of-network, the network identifier alert generator 130 may create an “out-of-network” message. in addition to messages, other variations may also be provided, such as a small icon identifier, a ring tone pattern, a vibration pattern, etc. in each of these examples, one icon and/or pattern may indicate in-network and another icon/pattern may indicate out-of-network. other various embodiments may also be provided. in step 320 , the network identifier alert generator 130 may forward the message to the msc 118 . the mobile switching center 118 may process the message, for example, by encapsulating the message into a wireless air protocol. encapsulating the message into a wireless air protocol may include formatting the message for a more rapid wireless transmission. other various embodiments may also be provided. in step 330 , the encapsulated message may then be transmitted as a service provider identification message to the mobile device 110 of the originator/subscriber of the potential call. in a preferred embodiment, the originator/subscriber may be provided an option to continue the potential call or to terminate the potential call based on the service provider identification message, as depicted in step 340 . this option may be provided at the mobile device 110 in a variety of ways, including an interactive menu, in which the subscriber may select a “continue” option to continue the call or the “terminate” option to not proceed with the call. other various menu options may also be provided, such as a recorded voice menu, a distinctive ringing pattern/sequence, and other notification. it should be appreciated that various forms of service provider identification may exist but one such illustrative example is depicted in fig. 4 . fig. 
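the create/forward/encapsulate/transmit pipeline of steps 310-330 might be sketched as three small functions. the message format and the utf-8 "encapsulation" below stand in for a real short message and wireless air protocol; they are assumptions for illustration only:

```python
def create_alert(call_type: str) -> dict:
    # Step 310: the network identifier alert generator creates a short
    # message indicating an in-network or out-of-network call.
    return {"type": "sms", "body": f"{call_type} call"}

def encapsulate(message: dict) -> bytes:
    # Step 320: the MSC processes the message, e.g. by wrapping it for
    # wireless transmission (UTF-8 bytes stand in for an air protocol).
    return message["body"].encode("utf-8")

def transmit(payload: bytes) -> str:
    # Step 330: deliver the service provider identification message to the
    # subscriber's mobile device; step 340 would then present the
    # continue/terminate option on the handset.
    return payload.decode("utf-8")

alert = create_alert("out-of-network")
print(transmit(encapsulate(alert)))  # out-of-network call
```

separating the three steps mirrors the disclosure's division of labor: the alert generator only builds content, while the mobile switching center owns the wireless formatting and delivery.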
4 depicts an exemplary mobile device 400 illustrating a service provider identification method, according to an embodiment of the disclosure. in this example, the originator/subscriber may place a call by entering a number, such as 202-555-5555 into the keypad 410 . after dialing the number, the display screen 420 may display that the call is an “out-of-network” call 422 . in one embodiment, this message 422 may be accompanied by an audible ring tone 424 or similar notification. the display screen may also display another message 426 asking the originator/subscriber whether he or she wishes to “continue” to the call. two menu options may be provided—“yes” or “no.” if the originator wishes to continue the call, he or she may push a button 412 corresponding to the “yes” option. the originator may place his or her ear by the speaker 418 and speak at the input 416 to talk to the intended recipient at 202-555-5555. at this point, out-of-network minutes and/or charges may be applied to the subscriber for proceeding with this call. if the originator/subscriber wishes to end or terminate the call, he or she may push a button 414 corresponding to the “no” option. in this case, the subscriber may decide not to incur the charges that may accompany the out-of-network call. fig. 5 depicts an exemplary flowchart illustrating a service provider awareness feature, according to an embodiment of the disclosure. in this example, an intended recipient of a potential call may be a subscriber. when the intended recipient/subscriber receives a call from an originator and the call is determined to either be in-network or out-of-network according to the steps discussed in fig. 2 , the network identifier alert generator 130 may create a short message (e.g., a text message or a sms message) to indicate an in-network call or an out-of-network call, as depicted in step 510 . 
for example, in one embodiment, when the intended recipient receives a call and it is determined that the call is in-network, the network identifier alert generator 130 may create an “in-network” message. in another embodiment, when it is determined that the call made by the originator to the intended recipient is out-of-network, the network identifier alert generator 130 may create an “out-of-network” message. in addition to messages, other variations may also be provided, such as a small icon identifier, a ring tone pattern, a vibration pattern, etc. in each of these examples, one icon and/or pattern may indicate in-network and another icon/pattern may indicate out-of-network. other various embodiments may also be provided. in step 520 , the network identifier alert generator 130 may forward the message to the msc 118 . the mobile switching center 118 may process the message, for example, by encapsulating the message into a wireless air protocol. encapsulating the message into a wireless air protocol may include formatting the message for a more rapid wireless transmission. other alternatives may also be provided. in step 530 , the encapsulated message may then be transmitted as a service provider identification message to the mobile device 110 of the intended recipient/subscriber of the potential call. in one embodiment, a caller id message may be sent along with the service provider notification message. in another embodiment, a multimedia notification may be transmitted along with and corresponding to the service provider notification message, as depicted in step 540 . the multimedia notification may include an audio pattern, a vibration pattern, an image, a video, etc. other various notifications may also be provided. it should be appreciated that various forms of service provider identification may exist but one such illustrative example is depicted in figs. 6a and 6b . since fig. 6a is similar to fig. 4 , it should be understood in relation to fig. 
4 in that the placement and relationship between elements as described in relation to fig. 4 should apply to fig. 6a as well. figs. 6a and 6b depict an exemplary mobile device 600 illustrating a service provider identification method, according to an embodiment of the disclosure. in this example, the intended recipient/subscriber may receive a call from an originator at a number, such as 202-555-5555. if the mobile device is opened, the number may be displayed on an internal screen 620 . if the mobile device is closed, the originator number may be displayed on an external display screen 630 . the internal screen 620 and/or the external screen 630 may display that the call is “in-network.” in one embodiment, the service provider notification message 622 , 632 may be accompanied by an audible ring tone similar to that depicted in fig. 4 . in the closed position, the audible ring may be outputted from an external speaker 642 . in another embodiment, an audible notification may be disabled, as depicted in 624 , 634 . in this case, the mobile device 600 may notify the intended recipient/subscriber by another notification, e.g., a vibration pattern or visual/blinking display pattern. in one embodiment, the intended recipient/subscriber may see that the call is “in-network” and simply flip open the device 600 to accept the call from the originator. in another embodiment, the display screen 620 may also display another message 626 asking the intended recipient/subscriber whether he or she wishes to “continue” to the call when he or she flips open the phone. similar to fig. 4 , two menu options may be provided—“yes” or “no.” if the intended recipient wishes to continue the call, he or she may push a button 612 corresponding to the “yes” option. if the intended recipient/subscriber wishes to end or terminate the call, he or she may push a button 614 corresponding to the “no” option. 
in this case, because the call is “in-network,” the intended recipient/subscriber may not incur any charges after accepting this call. an advantage of a service provider feature, according to an embodiment of the present invention, may include significant flexibility for users/subscribers to manage their minute usage by choosing to limit time spent on out-of-network calls. for example, it may provide them the option to ignore or terminate an out-of-network call and return the call from a wireline phone or delay the call until nights/weekends when all calls are free. such a feature may also be provided to users/subscribers as an add-on feature with calling plans. it may also be used as a promotional incentive for customers to choose a particular service provider. additionally, the feature may open up potential partnership deals with other service providers to provide a similar feature over more than one network. for example, a first service provider and a second service provider may team up so that all calls between the first and second service provider are considered in-network. it should be appreciated that a “potential call” from an originator to an intended recipient may become an activated call once at least one of the parties agrees to the terms of the call and accepts. for example, when an originator and/or the intended recipient receives a service provider identification at his or her device and presses “yes/continue” to answer the potential call, the potential call may then become a fully activated call at that point. it should be appreciated that while embodiments of the disclosure are directed to provider identification in wireless and mobile devices, other implementations may be provided as well. for example, in voice over ip (voip), subscribers of a particular network may call or receive calls free of charge while calls outside of a subscriber's network may incur charges. 
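the partnership scenario above, in which two providers treat each other's subscribers as in-network, reduces to classifying a call against the union of the providers' subscriber databases. a sketch under that assumption, with made-up provider sets and numbers:

```python
# Hypothetical subscriber databases for two partnered providers.
provider_a = {"2025550001", "2025550002"}
provider_b = {"7035550003"}

def classify(originator: str, recipient: str, *subscriber_sets: set) -> str:
    """Treat a call as in-network when both parties appear in the combined
    subscriber databases of the partnered providers."""
    combined = set().union(*subscriber_sets)
    if originator in combined and recipient in combined:
        return "in-network"
    return "out-of-network"

# With the partnership, a cross-provider call is in-network:
print(classify("2025550001", "7035550003", provider_a, provider_b))  # in-network
# Without it, the same call is out-of-network:
print(classify("2025550001", "7035550003", provider_a))              # out-of-network
```

this shows why a partnership needs no change to the classification logic itself, only to which subscriber databases are consulted.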
thus, a service provider identification feature may be utilized in voip to provide similar functionalities and benefits as discussed above. other various implementations may also be provided. in the preceding specification, various preferred embodiments have been described with reference to the accompanying drawings. it will, however, be evident that various modifications and changes may be made thereto, and additional embodiments may be implemented, without departing from the broader scope of the invention as set forth in the claims that follow. the specification and drawings are accordingly to be regarded in an illustrative rather than restrictive sense.
relevant_id: 173-805-797-381-682
earliest_claim_jurisdiction: US
jurisdiction: [ "US" ]
ipcr_codes: G02F1/1333, G02F1/1335
earliest_claim_date: 1997-07-18
earliest_claim_year: 1997
ipcr_first_three_chars: [ "G02" ]
title_en: combined spatial light modulator and phase mask for holographic storage system
a light modulator for both intensity and phase modulating coherent light on a pixel-by-pixel basis includes a modulating material responsive to an electric potential for modulating the intensity of coherent light passing through the modulating material, and electrodes for applying an electric potential across the modulating material on a pixel-by-pixel basis. the coherent light associated with a first set of pixels has a different optical path length through the modulating material than does the coherent light associated with a second set of pixels. the modulating material is a liquid crystal material. the electrodes include a set of first reflective pixel electrodes embedded in the liquid crystal material, the first reflective pixel electrodes having a first thickness, and a set of second reflective pixel electrodes embedded in the liquid crystal material, the second reflective pixel electrodes having a second thickness. the coherent light associated with the first set of pixels is reflected by the first reflective pixel electrodes, and the coherent light associated with the second set of pixels is reflected by the second reflective pixel electrodes. the reflective surfaces of the first reflective pixel electrodes are embedded in the liquid crystal material a different amount than are the reflective surfaces of the second reflective pixels. in particular, the first reflective pixel electrodes are a different thickness than the second reflective pixel electrodes.
1. a light modulator for intensity and phase modulating coherent light on a pixel-by-pixel basis, the modulator comprising: a modulating material responsive to an electric potential for modulating the intensity of coherent light passing through the modulating material; and electrodes for applying an electric potential across the modulating material on a pixel-by-pixel basis, the electrodes including: a) a first set of first reflective pixel electrodes embedded in the modulating material, the first reflective pixel electrodes having a first thickness; and b) a set of second reflective pixel electrodes embedded in the modulating material, the second reflective pixel electrodes having a second thickness; c) wherein the first thickness is different than the second thickness; wherein the coherent light associated with a first set of pixels has a different optical path length through the modulating material than does the coherent light associated with a second set of pixels. 2. the light modulator of claim 1, wherein the modulating material is a liquid crystal material. 3. the light modulator of claim 2, wherein the reflective surfaces of the first reflective pixel electrodes are embedded in the liquid crystal material a different amount than are the reflective surfaces of the second reflective pixels. 4. 
a spatial light modulator comprising: modulating material responsive to an electric potential for modulating the intensity of light passing through the modulating material; and first and second reflective electrodes embedded in the material, the first reflective electrode having a first thickness and the second reflective electrode having a second thickness, different from the first thickness, wherein the first reflective electrode is positioned so that light passing through the modulating material and impinging the first reflective electrode travels through less of the modulating material than does light passing through the modulating material and impinging the second reflective electrode. 5. the spatial light modulator of claim 4, wherein the modulating material is a liquid crystal material. 6. the spatial light modulator of claim 5, additionally comprising integrated electronics coupled to the first and second reflective electrodes for individually establishing an electric potential across the liquid crystal material at each of the reflective electrodes. 7. the spatial light modulator of claim 6, wherein the first and second reflective electrodes comprise electrodes formed on the integrated electronics. 8. the spatial light modulator of claim 4 for use with light to be modulated by the liquid crystal material having wavelength .lambda. in a vacuum, and wherein the liquid crystal material has an average refractive index n, wherein the difference between the first thickness and the second thickness is approximately .lambda./4n. 9. 
a combined spatial light modulator and phase mask for a holographic storage system, comprising: a plurality of first reflective pixel electrodes, each of which is individually activatable, and each of which has a first thickness; a plurality of second reflective pixel electrodes, each of which is individually activatable, and each of which has a second thickness different from the first thickness; and a liquid crystal material covering the first and second reflective pixel electrodes so that the optical path of light reflected by the first reflective pixel electrodes is of a different length than the optical path of light reflected by the second reflective pixel electrodes. 10. the combined spatial light modulator and phase mask of claim 9, additionally comprising: a conducting layer adjacent the liquid crystal material opposite the first and second reflective pixel electrodes; and integrated electronics connected to the first and second reflective pixel electrodes for individually activating the reflective pixel electrodes and creating an electric potential between the activated pixel electrode and the conducting layer. 11. 
a combined spatial light modulator and phase mask for a holographic storage system, comprising: a semiconductor substrate; integrated electronics formed on the semiconductor substrate; a plurality of first reflective pixel electrodes, each of which is connected to the integrated electronics to be individually activatable, and each of which has a first thickness; a plurality of second reflective pixel electrodes, each of which is connected to the integrated electronics to be individually activatable, and each of which has a second thickness, different from the first thickness; a layer of liquid crystal material covering the first and second reflective pixel electrodes, wherein the liquid crystal material has an index of refraction n; a cover glass over the layer of liquid crystal material; a conducting layer on the cover glass connected so that an electric potential is created between each activated pixel electrode and the conducting layer; and a polarizer for polarizing an incoming coherent light beam of wavelength .lambda. before the light enters the cover glass, wherein the polarizer is arranged at an angle relative to the first and second reflective pixel electrodes to reflect toward a target light reflected by the reflective pixel electrodes. 12. the combined spatial light modulator and phase mask of claim 11, wherein the difference between the first thickness and the second thickness is .lambda./4n. 13. 
a combined spatial light modulator and phase mask providing electrically variable intensity control and static phase control of a narrowband laser beam, the device comprising: a plurality of first reflective pixel electrodes, each of which has a first thickness; a plurality of second reflective pixel electrodes, each having a second thickness, different from the first thickness; and a liquid crystal material covering the first and second reflective pixel electrodes so that the optical path of light reflected by the first reflective pixel electrodes is of a different length than the optical path of light reflected by the second reflective pixel electrodes; wherein the liquid crystal material is connectible to an electrical source to vary the intensity of the reflected light in response to electrical signals applied to the liquid crystal materials; and wherein the different thickness pixel electrodes result in a corresponding difference of phase values of the reflected light.
background of the invention the present invention relates to storage systems for holographic data and images. more specifically, the present invention relates to spatial light modulators for use in storing holographic data and images. holography is a lensless, photographic method that uses coherent (laser) light to produce three-dimensional images by splitting the laser beam into two beams and recording on a storage medium, such as a photographic plate, the interference patterns made by the reference light waves reflected directly from a mirror, and the waves modulated when simultaneously reflected from the subject. in a holographic data/image storage system, the information to be stored is written into the storage medium with a spatially varying light intensity produced by the coherent interference between an information (object) beam and a reference beam. details of this process are well understood and described in the literature. see, for example, j. goodman, introduction to fourier optics, chapter 8 (mcgraw-hill, 1968). data is encoded onto the information beam by spatially modulating the intensity of the beam. a common method for intensity modulation is to use a two-dimensional array of elements (pixels) in which the properties of the individual pixels are varied to control the ratio of the light transmitted or reflected to that incident on the pixel. such a device is known as a spatial light modulator (slm). methods to achieve these objectives are well known and documented. see, for example, u. efron (ed.), spatial light modulators (dekker, 1994). fig. 1 is a cross-sectional view of a portion of a single row of pixels of a liquid crystal reflective spatial light modulator (slm) of a known type. the slm is formed on a silicon substrate 20. integrated electronics 22 are formed on the silicon substrate using conventional semiconductor planar processes. an element of the integrated electronics 22 corresponds to each pixel of the slm array. 
an individual pixel electrode 24 is electrically connected to be driven by a corresponding element of the integrated electronics 22. liquid crystal material 32 covers the pixel electrodes 24. a layer of slm cover glass 38 contains the liquid crystal material. a conducting layer 36 covers the underside of the cover glass 38. an electric field can then be produced across the liquid crystal material at a particular pixel by applying an electric potential between the particular pixel electrode 24 and the conducting layer 36. the electric potential for a particular pixel is controlled by the element of the integrated electronics 22 associated with that pixel electrode 24. the potential across the liquid crystal material 32 at a particular driven pixel electrode 24 causes the liquid crystal material to modulate the light beam at that pixel. a polarizer 40 polarizes the incoming light beam 51. after being polarized by the polarizer 40, the incoming light beam passes through the slm cover glass 38 and traverses the liquid crystal material 32. the light beam is reflected by the driven pixel electrode 24, reversing its path as reflected beam 53. the reflected beam 53 passes back through the liquid crystal material and the cover glass 38 to impinge the polarizer 40. the optical axes of the liquid crystal material 32 are oriented so that the reflected light 53 is polarized orthogonally to the incoming beam 51. the orthogonal polarization of the reflected beam 53 causes the reflected beam 53 to be reflected by the polarizer 40, rather than passing through it. the reflected beam 53 is directed toward the storage medium (not shown). a liquid crystal alignment layer 34 may be included between the liquid crystal material 32 and the cover glass conducting layer 36. the liquid crystal alignment layer provides alignment to the liquid crystal modulator medium. 
those skilled in the art understand that the performance of a holographic storage system may be improved by randomizing the phase of the information beam. see, for example, j. hong, et al., "influence of phase masks on cross talk in holographic memory," optics letters, vol. 21, no. 20, pp. 1694-96 (oct. 15, 1996). a phase mask is typically used to change the phase of the information beam and accomplish such randomization. the phase mask is placed in the path of the reflected modulated beam 53. the phase mask is constructed and aligned in the reflected modulated light beam 53 to provide a particular phase value to each pixel of the modulated information beam. thus, the distribution of the phase values in the phase mask corresponds to the distribution of the phase of the information beam. to create a particular phase distribution in the information beam, the phase values are distributed across the phase mask in a corresponding pattern. if a random phase distribution pattern is desired in the information beam, phase values may be randomly distributed across the phase mask. binary phase values of 0 and .pi. may be distributed in a random manner across the array to generate the randomization of the information beam. those skilled in the art will recognize that other phase values may be used to generate the appropriate randomization, or that other phase patterns may be desired in the information beam. each phase value applied to the information beam by the phase mask must be accurately aligned with the associated pixel of the reflected modulated beam 53. therefore, the spatial light modulator (slm) and the phase mask must be carefully aligned. misalignment of the slm and the phase mask pixels results in increased cross-talk between detector array pixels during readout of the storage medium. it has been found that in an arrangement with a ten micron pixel pitch, the alignment between the slm pixels and the phase mask pixels should be kept to about one tenth of a micron. 
summary of the invention a light modulator for both intensity and phase modulating coherent light on a pixel-by-pixel basis includes a modulating material responsive to an electric potential for modulating the intensity of coherent light passing through the modulating material, and electrodes for applying an electric potential across the modulating material on a pixel-by-pixel basis. the coherent light associated with a first set of pixels has a different optical path length through the modulating material than does the coherent light associated with a second set of pixels. the modulating material is a liquid crystal material. the electrodes include a set of first reflective pixel electrodes embedded in the liquid crystal material, the first reflective pixel electrodes having a first thickness, and a set of second reflective pixel electrodes embedded in the liquid crystal material, the second reflective pixel electrodes having a second thickness. the coherent light associated with the first set of pixels is reflected by the first reflective pixel electrodes, and the coherent light associated with the second set of pixels is reflected by the second reflective pixel electrodes. the reflective surfaces of the first reflective pixel electrodes are embedded in the liquid crystal material a different amount than are the reflective surfaces of the second reflective pixels. in particular, the first reflective pixel electrodes are a different thickness than the second reflective pixel electrodes. an object of the present invention is to provide a reflective spatial light modulator and a phase mask with precision registration for use in a holographic storage system. an object of the present invention is to provide a reflective spatial light modulator and phase mask with improved alignment between the pixels of the spatial light modulator and the pixels of the phase mask. 
an object of the present invention is to provide permanent alignment or registration between the spatial light modulator and the phase mask. an object of the present invention is to provide an integrated phase mask and reflective spatial light modulator. an object of the present invention is to form a reflective spatial light modulator and phase mask together for improved registration. an object of the present invention is to provide a spatial light modulator and a phase mask in which the pixels are permanently aligned. an object of the present invention is to provide a reflective spatial light modulator and phase mask that can be readily manufactured. an object of the present invention is to provide a reflective spatial light modulator and phase mask that can be accurately manufactured using conventional manufacturing techniques. an object of the present invention is to provide a reflective spatial light modulator and phase mask that has relatively low manufacturing costs.

brief description of the figures

fig. 1 is a cross sectional view of a portion of one row of pixel electrodes of a reflective spatial light modulator of a known type; and
fig. 2 is a cross sectional view of a portion of one row of pixel electrodes of a combined spatial modulator and phase mask constructed according to the invention.

detailed description of the preferred embodiment

a cross section of a small portion of a single row of pixels of a spatial light modulator and phase mask constructed according to the invention is shown in fig. 2. several of the elements are similar to the corresponding elements in the known type of reflective spatial light modulator shown in fig. 1. the combined slm and phase mask is formed on a silicon substrate 220. integrated electronics 222 are formed on the silicon substrate using conventional semiconductor planar processes. an element of the integrated electronics 222 corresponds to each pixel of the slm array.
each individual pixel electrode 224, 225 is electrically connected to be driven by a corresponding element of the integrated electronics 222. liquid crystal material 232 covers the pixel electrodes 224, 225. a layer of slm cover glass 238 contains the liquid crystal material. a conducting layer 236 covers the underside of the cover glass 238. an electric field can then be produced across the liquid crystal material at a particular pixel by applying an electric potential between the particular pixel electrode 224, 225 and the conducting layer 236. the electric potential for a particular pixel is controlled by the element of the integrated electronics 222 associated with that pixel electrode 224, 225. the potential across the liquid crystal material 232 at a particular pixel electrode 224, 225 causes the liquid crystal material to modulate the intensity of the light beam at that pixel.

in accordance with the illustrated embodiment of the invention, the phase mask is integrated with the slm by providing reflective pixel electrodes 224, 225 in the liquid crystal material that provide different length optical paths through the liquid crystal material for light reflected from the pixels 224 than from the pixels 225. the reflective surface of the pixel 225 is embedded into the liquid crystal material a different amount than is the reflective surface of the pixel 224. the different positioning of the reflective surfaces of the pixels 224, 225 may be achieved by using pixel electrodes 224, 225 of different thicknesses. the optimal difference between the thickness of the thinner electrodes 224 and the thicker electrodes 225 depends on the type of liquid crystal material 232. for example, the thickness difference will be different if nematic liquid crystal material is used than if ferro-electric liquid crystal material is used. for example, for a binary phase mask in which phase values of 0 and π
are to be applied to the pixels, the difference in thickness between the thin pixel electrode 224 and the thicker pixel electrode 225 may be λ/4n, in which λ is the optical wavelength of the incoming light beam 251 in a vacuum, and n is the average refractive index of the liquid crystal material 232. a polarizer 240 polarizes the incoming light beam 251, 252. after being polarized by the polarizer 240, the incoming light beam passes through the slm cover glass 238 and traverses the liquid crystal material 232. the light beam is reflected by the driven pixel electrodes 224, 225, reversing its path as reflected beam 253, 254. the reflected beam 253, 254 passes back through the liquid crystal material and the cover glass 238 to impinge the polarizer 240. the light reflected from the top of the thinner pixel electrode 224 travels through the liquid crystal material 232 a distance greater than the light reflected from the top of the thicker pixel electrode 225. in particular, the light reflected from the thinner electrode 224 has traveled a distance greater by twice the difference in thickness between the thinner electrode 224 and the thicker electrode 225. thus, the length of the optical path through the liquid crystal material for light reflected by the thicker pixel electrode 225 is shorter than the optical path through the liquid crystal material for light reflected by the thinner pixel electrode 224. the difference between the thickness of the thinner pixel electrode 224 and the thickness of the thicker pixel electrode 225 may be considered to be d. the portion of the incoming beam 252 that is reflected by the thinner pixel electrode 224 to become reflected beam 254 travels 2d farther through the liquid crystal material than does the portion of the incoming beam 251 that is reflected by the thicker pixel electrode 225 to become the reflected beam 253.
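the geometry above implies a round-trip phase difference of 2π·n·(2d)/λ between beams reflected from thin and thick electrodes, so a step height of λ/(4n) yields exactly a π shift. a minimal check, assuming an illustrative 532 nm wavelength and average index n = 1.5 (values not given in the text):

```python
import math

def phase_shift(thickness_diff_m, wavelength_m, n_avg):
    """round-trip phase difference between beams reflected from thin and
    thick electrodes: the beam off the thinner electrode travels an extra
    2*d through the liquid crystal material."""
    extra_path = 2 * thickness_diff_m  # the step height is traversed twice
    return 2 * math.pi * n_avg * extra_path / wavelength_m

wavelength = 532e-9        # illustrative vacuum wavelength, metres
n = 1.5                    # illustrative average refractive index
d = wavelength / (4 * n)   # the lambda/(4*n) thickness difference
print(phase_shift(d, wavelength, n) / math.pi)  # ≈ 1.0, i.e. a π shift
```

the same function shows why a different liquid crystal material (different n) calls for a different electrode step height to keep the same binary phase values.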
because the reflected beam portions 253, 254 have traveled different length optical paths through the liquid crystal material, they have different phases. the phase difference is determined by the difference in thickness between pixel electrodes 224, 225. thus, the reflected beams 253, 254 have been both intensity and phase modulated. a separate phase mask is not necessary. phase randomization of the light beam is achieved by randomly distributing the different thickness pixel electrodes 224, 225 across the slm array. those skilled in the art will recognize that other distributions of phase values may be used. in addition, the different thickness pixel electrodes may be arranged in any spatial pattern to achieve a corresponding spatial distribution of phase values. those skilled in the art will recognize that other phase values may be used to generate the appropriate randomization, or that other phase patterns may be desired in the information beam. different phase values may be obtained by varying the differences in thickness between the pixel electrodes. in addition, more than two phase values may be used. more than two phase values may be obtained by using pixel electrodes of more than two different thicknesses. the optical axes of the liquid crystal material 232 are oriented so that the reflected and phase shifted light 253, 254 is polarized orthogonally to the incoming beam 251, 252. the orthogonal polarization of the reflected beam 253, 254 causes the reflected beam 253, 254 to be reflected by the polarizer 240, rather than passing through it. the reflected beam 253, 254 is directed toward the storage medium (not shown). the electric potential between a particular electrode 224, 225 and the ground plane 236 may be adjusted to compensate for the difference in spacing between 1) the top of the electrode 224 and the ground conducting layer 236, and 2) the top of the electrode 225 and the ground plane 236.
such adjustment may be necessary to preserve the proper intensity modulation on the beam. a liquid crystal alignment layer 234 may be included between the liquid crystal material 232 and the cover glass conducting layer 236. the liquid crystal alignment layer provides alignment to the liquid crystal modulator medium. the combined slm and phase mask may be manufactured using conventional planar semiconductor manufacturing techniques. registration of the phase mask and the slm is accomplished during fabrication. that allows the use of conventional highly developed photolithographic techniques to produce the desired pattern by either additive or subtractive manufacturing processes. the phase mask pixels and the slm pixels of the combined modulator and phase shifter described above are permanently aligned with one another. because they are manufactured together in the same silicon manufacturing process, permanent alignment is assured. both the path length differences for the beams 251/253 and 252/254, and the registration may be measured at the wafer stage of the manufacturing process. at that stage, rework or rejection of individual slm wafers may take place, before the expensive assembly operations that are necessary to produce a fully functional slm. therefore, the expensive assembly operations may be avoided on wafers that are out of specification. out of specification performance may be detected and corrected before a fully functional slm is constructed. that will reduce overall manufacturing costs for spatial light modulators. while a preferred embodiment of the invention has been described herein, it will be appreciated that a number of modifications and variations will suggest themselves to those skilled in the pertinent arts. these variations and modifications that may suggest themselves should be considered within the spirit and scope of the present invention, as defined in the claims that follow.
relevant_id: 175-448-822-624-43X
earliest_claim_jurisdiction: US
jurisdiction: [ "JP", "TW", "US", "MX", "EP" ]
ipcr_codes_str: G21C5/06,G21C3/33,G21C3/34,G21C15/00,G21C3/356,G21C3/322,G21C5/16,G21C15/06,G21C17/00,G21D3/00
earliest_claim_date: 2010-12-28T00:00:00
earliest_claim_year: 2010
classifications_ipcr_list_first_three_chars_list: [ "G21" ]
optimized fuel support and method for manufacturing reactor core using the same
problem to be solved: to provide a fuel support body having an appropriate flow loss coefficient.
solution: in a fuel support body 148, a fluid channel passing through the fuel support body 148 is defined by an opening formed to receive a nuclear fuel bundle and entrance orifices 195a, 195b, and the fluid channel is configured for fluid flow characteristics at an inner periphery bundle position in the reactor core of a nuclear reactor. the fuel support body 148 is provided with the two entrance orifices 195a, 195b, and the two entrance orifices 195a, 195b have different configurations and associated fluid flow characteristics from each other. further, the diameters of the entrance orifices 195a, 195b differ from each other.
1. a fuel support for use in a nuclear reactor, the fuel support defining a fluid flow path through the support by an opening shaped to receive a nuclear fuel bundle and an inlet orifice, the flow path configured for a fluid flow characteristic at an inner periphery bundle position within a core of the nuclear reactor.
2. the fuel support of claim 1, wherein the fuel support defines two inlet orifices, and wherein each of the two inlet orifices has a different configuration and associated fluid flow characteristic from each other of the two inlet orifices of the fuel support.
3. the fuel support of claim 2, wherein the two orifices each have a different diameter from each other of the two orifices.
4. the fuel support of claim 1, wherein the fuel support is connected to a core plate in the nuclear reactor, the fuel support further defining an opening for a control blade.
5. the fuel support of claim 1, wherein the flow path is configured for the fluid flow characteristic by at least one of inlet orifice diameter and blockage in the flow path.
6. a reactor core for a nuclear reactor, the core comprising: a plurality of fuel supports arranged at a base of the core, each of the fuel supports defining a fluid flow path through the support and configured to receive a fuel bundle, at least three of the defined flow paths having different flow loss coefficients from each other; and a plurality of fuel bundles each seated into a corresponding one of the fuel supports.
7. the reactor core of claim 6, wherein a first of the three flow paths is located at an outer periphery of the core, wherein a second of the three flow paths is located at an inner periphery of the core, and wherein a third of the three flow paths is located in a central portion of the core.
8.
the reactor core of claim 7, wherein the first flow path begins at a first inlet orifice of a fuel support, wherein the second flow path begins at a second inlet orifice of a fuel support, and wherein the third flow path begins at a third inlet orifice of a fuel support.
9. the reactor core of claim 8, wherein the first orifice has a highest flow loss coefficient of the three orifices, and wherein the third orifice has a lowest flow loss coefficient of the three orifices.
10. the reactor core of claim 8, wherein each of the three orifices has a different diameter.
11. the reactor core of claim 6, further comprising: a core plate, each of the fuel supports being connected to and supported by the core plate.
12. the reactor core of claim 11, further comprising: at least one control blade passing through the core plate and one of the fuel supports.
13. the reactor core of claim 6, wherein the three flow paths include different blockages from each other.
14. the reactor core of claim 6, wherein a first subset of the fuel supports extends around an outer periphery of the core, a second subset of the fuel supports extends around an inner periphery of the core adjacent to the first subset of the fuel supports, and a third subset of the fuel supports extends throughout a central portion of the core within the second subset of fuel supports, and wherein the first subset of fuel supports define a first flow path of the three flow paths having a first flow loss coefficient, the second subset of fuel supports define a second flow path of the three flow paths having a second flow loss coefficient, and the third subset of fuel supports define a third flow path of the three flow paths having a third flow loss coefficient, and wherein the first, second, and third flow loss coefficients are each different from each other.
15.
the reactor core of claim 14, wherein the third flow loss coefficient is approximately 20-25% of the first flow loss coefficient, and wherein the second flow loss coefficient is approximately 40-80% of the first flow loss coefficient.
16. the reactor core of claim 14, wherein all fuel supports of the first subset define first flow paths having the first flow loss coefficient, all fuel supports of the second subset define second flow paths having the second flow loss coefficient, and all fuel supports of the third subset define third flow paths having the third flow loss coefficient.
17. the reactor core of claim 14, wherein the second subset includes a first group of fuel supports directly adjacent to the fuel supports of the first subset and a second group of fuel supports directly adjacent to the fuel supports of the first subset, and wherein the first group and the second group define flow paths having different flow loss coefficients.
18. the reactor core of claim 17, wherein the third flow loss coefficient is approximately 20-25% of the first flow loss coefficient, the flow loss coefficient of the first group is approximately 35-45% of the first flow loss coefficient, and the flow loss coefficient of the second group is approximately 75-85% of the first flow loss coefficient.
19. a method of configuring fuel supports in a nuclear core, the method comprising: modifying a flow loss coefficient for at least one bundle location in a configuration of the nuclear core; simulating core performance with the modified flow loss coefficient; analyzing the simulated core performance; and configuring at least one fuel support to achieve the modified flow loss coefficient at the at least one bundle position, if the analyzing indicates the simulated core performance is favorable.
20.
the method of claim 19, wherein the analyzing includes at least one of comparing the simulated core performance against a performance threshold and comparing the simulated core performance against a previously simulated core performance with different flow loss coefficients.
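the loop recited in claims 19 and 20 — modify a loss coefficient, simulate, analyze, and configure only if favorable — can be sketched as a simple search; the `simulate` model, the candidate coefficient sets, and the scoring threshold below are hypothetical placeholders, not part of the claims:

```python
def configure_supports(candidate_coeffs, simulate, threshold):
    """sketch of the claimed loop: try each modified set of flow loss
    coefficients, simulate core performance, analyze the result against a
    performance threshold, and keep the best passing configuration.
    `simulate` is a user-supplied performance model (hypothetical) that
    returns a single figure of merit, higher being better."""
    best_coeffs, best_score = None, float("-inf")
    for coeffs in candidate_coeffs:                    # modify step
        score = simulate(coeffs)                       # simulate step
        if score >= threshold and score > best_score:  # analyze step
            best_coeffs, best_score = coeffs, score
    return best_coeffs  # configure step (None if nothing was favorable)

# toy model (illustrative only): reward directing flow away from the periphery
def toy_model(coeffs):
    return coeffs["peripheral"] - coeffs["central"]

candidates = [
    {"peripheral": 1.0, "central": 1.0},
    {"peripheral": 1.0, "central": 0.21},
]
best = configure_supports(candidates, toy_model, threshold=0.5)
```

the alternative analysis in claim 20 — comparing against a previous simulation rather than a fixed threshold — corresponds to the `score > best_score` comparison in the loop.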
background

as shown in fig. 1 , a conventional nuclear reactor, such as a boiling water reactor (bwr), may include a reactor pressure vessel (rpv) 12 with a generally cylindrical shape. rpv 12 may be closed at a lower end by a bottom head 28 and at a top end by a removable top head 29 . a cylindrically-shaped core shroud 34 may surround reactor core 36 , which includes several nuclear fuel elements or assemblies, called bundles herein, that generate power through fission. shroud 34 may be supported at one end by a shroud support 38 and may include a removable shroud head 39 and separator tube assembly at the other end. one or more control blades 20 may extend upwards into core 36 , so as to control the fission chain reaction within fuel elements of core 36 . additionally, one or more instrumentation tubes 50 may extend into reactor core 36 from outside rpv 12 , such as through bottom head 28 , permitting instrumentation, such as neutron monitors and thermocouples, to be inserted into and enclosed within the core 36 from an external position. fuel bundles may be aligned and supported by fuel supports 48 located on a core plate 49 at the base of core 36 . fuel supports 48 may receive individual fuel bundles or groups of bundles and permit coolant flow through the same. fuel supports 48 may further permit instrumentation tubes 50 , control blades 20 , and/or other components to pass into core 36 through or between fuel supports 48 . a fluid, such as light or heavy water, is circulated up through core 36 and core plate 49 , and in a bwr, is at least partially converted to steam by the heat generated by fission in the fuel elements. the steam is separated and dried in separator tube assembly and steam dryer structures 15 and exits rpv 12 through a main steam nozzle 3 near a top of rpv 12 . other fluid coolant/moderators may be used in other reactor designs, with or without phase change. figs.
2a and 2b are detailed views of a related art fuel support 48 useable in the nuclear plant of fig. 1 , for example, that can receive and support up to four individual fuel bundles. as shown in figs. 2a and 2b , fuel support 48 includes openings 90 shaped to receive a lower end of a fuel bundle so as to support and align fuel bundles seated in fuel support 48 . openings 90 are open and permit coolant flow 80 through fuel support 48 into fuel bundles supported thereon. openings 90 receive fluid flow through fuel support 48 from inlet or lower orifices 95 , permitting fluid coolant/moderator to flow through fuel support 48 . a cruciform or other opening 21 may permit a control blade 20 to pass between bundles supported by fuel support 48 . it is understood however, that control blades 20 may not be present in every possible core location, such that opening 21 may be unfilled or nonexistent. fig. 3 is an illustration of a conventional core map, showing a quadrant of a conventional reactor core 36 . each grid location in fig. 3 represents a fuel bundle location in the core 36 , with each fuel bundle seating into an associated opening in a fuel support. each grid location in fig. 3 is identified with a number showing fuel support inlet orifice configuration in a conventional reactor core 36 . that is, orifices 95 ( figs. 2a & 2b ) conventionally have two different sizes, or diameters, to achieve two different flow rates through the core. grid locations marked with a “1” in core 36 of fig. 3 correspond to locations with orifices sized for central fuel bundles. orifices for central fuel bundles at “1” locations are larger, permitting increased coolant and moderator flow through fuel supports and fuel bundles at the associated location. orifices for peripheral fuel bundles at “2” locations are smaller, permitting less coolant and moderator flow into bundles at the periphery. in this way, as shown in fig. 
3 , conventional cores 36 may have standard, larger fluid flow through central fuel bundles with a lower level of fluid flow occurring in the outermost ring or periphery of fuel bundles in the core.

summary

example embodiments are directed to fuel supports and reactor cores including the same. example embodiment fuel supports include an inlet orifice that permits a coolant/moderator to flow through the support into an associated fuel bundle seated into the support, and the inlet orifice is specially designed to achieve a desired fluid flow characteristic, such as coolant/moderator flow rate through the associated fuel bundle. the desired fluid flow characteristic may be determined based on a position of a bundle associated with the inlet orifice within a core of the nuclear reactor. any number of differently-configured inlet orifices, having different associated fluid flow characteristics, may be used throughout the core and in individual supports. example embodiment fuel support configurations may include different inlet orifice diameters or use of flow blockages such as filters, venturis, choke plates, etc., to achieve a desired flow loss coefficient or flow rate under known conditions, for example. example embodiment fuel support may be positioned within a core plate in the nuclear reactor, permitting coolant/moderator flow and potentially a control blade and instrumentation tubes to pass through or between the fuel supports. several example embodiment fuel supports may be placed at the base of the reactor core, each support having physical configuration to achieve a desired flow characteristic at the associated fuel bundle position. for example, three different configurations may be used at outer core periphery, inner core periphery, and central portions of the core.
the configuration at the outer periphery—those positions at the edge of the core and not surrounded by fuel bundles on each side—may have a highest flow loss coefficient so as to limit coolant/moderator flow to periphery bundles requiring less moderation and heat transfer. the configuration at an inner periphery, defined herein as the two or three bundle positions immediately inside the outer periphery, may have intermediate flow loss coefficients, and the configuration in the central portion may have the lowest flow loss coefficients, providing the highest levels of coolant/moderator flow to central bundles at higher power levels. example methods configure flow path characteristics of fuel supports in a nuclear core. example methods may include modifying flow loss coefficients at particular bundle locations, simulating core performance with the modified flow loss coefficients, analyzing the simulated core performance, and/or configuring at least one fuel support to achieve the modified flow loss coefficients. analyzing may be performed by comparing simulated core performance against desired performance characteristics or comparing simulated core performance against a previously simulated core performance with different flow loss coefficients, in an iterative manner.

brief description of drawings

fig. 1 is an illustration of a conventional nuclear fuel reactor.
figs. 2a and 2b are two different views of a conventional fuel support.
fig. 3 is an illustration of a core map of a conventional reactor core.
fig. 4 is an illustration of an example embodiment fuel support.
fig. 5 is an illustration of a core map of an example embodiment reactor core.
fig. 6 is a graph of experimental results using example embodiment versus conventional fuel supports.
fig. 7 is a flow chart illustrating example methods.

detailed description

hereinafter, example embodiments will be described in detail with reference to the attached drawings.
however, specific structural and functional details disclosed herein are merely representative for purposes of describing example embodiments. for example, although example embodiments and methods are described in connection with a boiling water reactor (bwr), it is understood that example embodiments and methods are useable with several other reactor types, including pwrs, esbwrs, heavy-water reactors, breeder reactors, etc. all using a fluid coolant and/or moderator. the example embodiments may be embodied in many alternate forms and should not be construed as limited to only example embodiments set forth herein. it will be understood that, although the terms first, second, etc. may be used herein to describe various elements, these elements should not be limited by these terms. these terms are only used to distinguish one element from another. for example, a first element could be termed a second element, and, similarly, a second element could be termed a first element, without departing from the scope of example embodiments. as used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items. it will be understood that when an element is referred to as being “connected,” “coupled,” “mated,” “attached,” or “fixed” to another element, it can be directly connected or coupled to the other element or intervening elements may be present. in contrast, when an element is referred to as being “directly connected” or “directly coupled” to another element, there are no intervening elements present. other words used to describe the relationship between elements should be interpreted in a like fashion (e.g., “between” versus “directly between”, “adjacent” versus “directly adjacent”, etc.). as used herein, the singular forms “a,” “an,” and “the” are intended to include the plural forms as well, unless the language explicitly indicates otherwise. 
it will be further understood that the terms “comprises”, “comprising,” “includes,” and/or “including,” when used herein, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. it should also be noted that in some alternative implementations, the functions/acts noted may occur out of the order noted in the figures or described in the specification. for example, two figures or steps shown in succession may in fact be executed in parallel and concurrently or may sometimes be executed in the reverse order or repetitively, depending upon the functionality/acts involved.

example embodiments

fig. 4 is an illustration of example embodiment fuel support 148 that includes inlet orifices 195 with optimized fluid flow properties. example embodiment fuel supports are shown as similar to, and may be used in place of, conventional fuel supports 48 shown in figs. 2a and 2b to provide alignment, support, and coolant/moderator flow to fuel bundles seated in the supports; for example, an optional control blade opening 121 and/or four fuel bundle openings 190 may be present in example embodiment fuel support 148 to match conventional fuel support characteristics. it is understood, however, that example embodiment fuel supports may have several different features and configurations, including physical shape, opening number, control blade accommodation, etc., from conventional fuel supports 48 ( figs. 2a & 2b ) and example embodiment fuel support 148 shown in fig. 4 . example embodiment fuel supports 148 include one or more inlet orifices 195 that are specially sized or configured to optimize fluid flow rates through associated fuel bundles, based on position within a nuclear core.
individual inlet orifices 195 a, b may each be uniquely configured and differ from each other, or may be substantially the same. inlet orifices 195 of different example embodiment fuel supports 148 within a core may be configured the same as some other inlet orifices in other fuel supports or be entirely unique, all based on the desired flow characteristics at the position of the inlet orifices 195 . that is, although example embodiment fuel support 148 is shown with two inlet orifices 195 a and 195 b having different respective diameters da and db, it is understood that all inlet orifices 195 in example embodiment fuel support 148 could have a same diameter and other aspects, while differing from other inlet orifices in other example embodiment fuel supports positioned elsewhere in a core and not shown in fig. 4 . example methods for determining orifice configuration of example embodiment fuel supports and associated fluid flow characteristics are discussed following example embodiments below. inlet orifices 195 of example embodiment fuel supports 148 are physically configured to provide a desired flow level of fluid coolant/moderator through an associated bundle during plant operation. the configuration may be achieved in several ways. for example, diameter d a of inlet orifice 195 a may be set during fabrication of fuel support 148 to permit a desired level of coolant flow 180 a therethrough. similarly, diameter d a may be achieved or adjusted following fabrication through machining or remolding, for example. or, for example, diameter d b of inlet orifice 195 b may be achieved or adjusted through addition of an insert, such as an annular choke plate, that reduces diameter d b and achieves a desired lower flow rate 180 b therethrough. 
additionally, inserts, baffles, filters and/or any other structure may be used in example embodiment fuel support 148 , on either side of inlet orifices 195 , to affect fluid flow loss coefficients of, and a resulting amount of fluid flow through, a given inlet orifice 195 to a desired level during plant operation. for example, a flow restrictor or blockage may be placed in a flow path prior to or in opening 190 to adjust an amount of coolant/moderator flowing into an associated inlet orifice 195 and ultimately through the fuel bundle seated into the associated opening 190 . the levels of fluid coolant/moderator flow permitted by various diameters and/or other configurations of inlet orifices 195 in example embodiment fuel supports 148 may be set at any desired level. local flow loss coefficients caused by these configurations on a given fluid may provide a universal metric to compare individual inlet orifice 195 functionality in an operating nuclear plant. for a universal inlet pressure and fluid, a higher loss coefficient correlates with less fluid moderator/coolant flow through an orifice and associated fuel bundle, resulting in less moderation and fuel usage while directing more flow to other bundles. higher loss coefficients may be achieved in example embodiment fuel supports 148 by decreasing inlet orifice 195 diameter and/or providing other flow-interrupting structures within inlet orifice 195 or fuel support 148 , as discussed above. under the same universal inlet pressure and coolant/moderator fluid, a lower loss coefficient correlates with increased fluid moderator/coolant flow through an orifice and associated fuel bundle, resulting in greater moderation and fission energy generation while decreasing flow available to other bundles. lower loss coefficients may be achieved in example embodiment fuel supports 148 by increasing inlet orifice 195 diameter and/or removing flow-interrupting structures in example embodiment fuel supports 148 . 
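the inverse relationship between loss coefficient and flow described above can be made concrete with the textbook minor-loss model ΔP = K·ρ·v²/2, under which, at a fixed pressure drop and orifice area, velocity (and thus flow) scales as 1/√K; this model is a standard engineering assumption, not a formula stated in the text:

```python
import math

def relative_flow(k_orifice, k_reference):
    """relative flow through two identical-area orifices at the same
    pressure drop and fluid, using dP = K * rho * v**2 / 2, which gives
    v proportional to 1/sqrt(K). higher K means less flow."""
    return math.sqrt(k_reference / k_orifice)

# a central orifice with ~21% of the outer-periphery loss coefficient
print(relative_flow(0.21, 1.0))  # ≈ 2.18x the outer-periphery flow
```

the sketch reproduces the qualitative behavior in the text: decreasing the loss coefficient (larger diameter, fewer obstructions) increases flow to a bundle, while increasing it diverts flow to other bundles.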
several different types of orifice configurations may be used together on a same fuel support 148 or even on a same inlet orifice 195 , based on the flow characteristics desired of that orifice. fig. 5 is an illustration of a core map showing how a quadrant of an example embodiment core 236 may be populated with example embodiment fuel supports 148 ( fig. 4 ). each grid position in fig. 5 corresponds to a fuel bundle location and associated fuel support inlet orifice 195 ( fig. 4 ) providing a coolant/moderator flow 180 into the bundle. as such, it is understood that example embodiment fuel supports 148 ( fig. 4 ) may span one or more grid positions of fig. 5 , depending on the number of orifices and shape of example embodiment fuel supports. although example embodiment core 236 is shown with 19×19 radial fuel bundles in a quadrant, it is understood that other numbers of fuel bundles and core shapes are useable with example embodiments and methods, including bwr, esbwr, abwr, pwr, non-lwr designs, and/or any other type of reactor design where fuel supports with orifices are useable. as shown in fig. 5 , each grid location is associated with a unique orifice configuration denoted by a numeral “1,” “2,” “3,” or “4” in the grid location. as shown in the legend of fig. 5 , inlet orifices at “1” locations have a central configuration, with the lowest loss coefficients, achieved with largest orifice diameters and/or fewest flow obstructions, for example. inlet orifices at “2” locations have an outer peripheral configuration, with the highest loss coefficients and smallest orifice diameters/most flow obstructions. as a specific example, the loss coefficient for orifices in example embodiment fuel supports at “1” central locations may be approximately 20-25%, such as 21%, of the loss coefficient for orifices at “2” peripheral locations; i.e., greater flow losses and less flow occur at “2” positions. 
orifices at “3” and “4” inner periphery positions, defined herein as the two or three bundle positions immediately inside the outer periphery, have intermediate loss coefficients, between those of orifices at “1” and “2” central and peripheral positions. for example, orifices at “3” positions may have approximately 77-83%, such as 80%, the loss coefficient of orifices at “2” peripheral positions, and orifices at “4” positions may have approximately 37-43%, such as 40%, the loss coefficient of orifices at “2” peripheral positions. these unique loss coefficients of different orifices in fuel supports may be achieved through the different configuring techniques discussed above for changing the flow parameters discussed above, including varying orifice diameter, adding/removing flow obstructions, etc. in this way, example embodiment core 236 includes example embodiment fuel supports having several types of orifices with intermediate variations of loss coefficients, from outer periphery orifices with the highest loss coefficients to inner central orifices with the lowest loss coefficients. the quadrant shown in fig. 5 may be mirrored about three axes to produce a full example embodiment core 236 that is symmetrical in orifice layout about these axes. bundles at peripheral and intermediate positions “2,” “3,” and “4” in example embodiment core 236 may possess lower fuel enrichment (through age or initial enrichment) and suffer from increased neutron loss at core boundaries, resulting in lower fission energy production. due to the lower power levels at peripheral and inner peripheral positions, less moderator/coolant flow may be required to maintain bundles at these positions at operating temperature and maximum power production. example embodiment core 236 provides higher loss coefficients, and thus less flow, for fluid moderator/coolant through intermediate bundles with orifices at “3” and “4” locations, compared to conventional cores, such as the core shown in fig. 
3 , which provide full, central orifices for the same bundles at intermediate, inner periphery locations. in this way, example embodiment core 236 may direct more moderator/coolant to bundles at central locations “1,” while directing less moderator/coolant to bundles at intermediate peripheral locations “3” or “4,” with the same whole-core flow rates, compared to conventional cores such as those shown in fig. 3 . bundles at central “1” locations may have higher enrichment and power rates compared to intermediate or peripheral locations “2,” “3,” or “4”. bundles at central “1” locations may thus benefit from the increased neutron moderation and fluid energy transfer from the fluid coolant/moderator in example embodiment core 236 . further, near end of operating cycles, bundles in a given core are more depleted in fissionable material, and bundles at peripheral and intermediate positions “2,” “3,” and “4” in example embodiment core 236 may possess especially low fuel enrichment due to age and lower initial enrichment. operators in end of cycle conditions may increase total core flow so as to provide additional moderator to the depleted bundles, sustaining a fission chain reaction for several more days beyond what typical or rated core flow would be able to sustain. however, due to the lower power levels from low enrichment and neutron loss at peripheral and inner peripheral positions, the increased moderator/coolant flow in end of cycle conditions may be wasted on peripheral and inner peripheral positions and result in wet moderator with high moisture carryover to pass through the core through these positions. example embodiment core 236 provides higher loss coefficients for increased core flow, and thus even less flow at end of cycle conditions using increased core flow, for fluid moderator/coolant through intermediate bundles with orifices at “3” and “4” locations, compared to conventional cores. 
in this way, example embodiment core 236 may further decrease moisture carry-over and increase steam quality and plant efficiency for plants operating with increased core flow to extend cycle life. other example embodiment core configurations are achievable with example embodiment fuel supports and individualized orifices therein. for example, as shown in scenarios 1-3 below in fig. 6 , only a single type of intermediate orifice may be used at both “3” and “4” inner periphery positions of example embodiment core 236 in order to increase fuel support standardization or achieve other core flow characteristics. or, for example, inlet orifices with higher loss coefficients may be used at controlled locations, which are bundle positions directly adjacent to a control blade that typically require less moderation and coolant. using example embodiment fuel supports to restrict coolant/moderator flow at controlled positions may further decrease moisture carryover and/or provide additional flow to higher-energy fuel bundles to increase plant efficiency. because of the flexibility offered by example embodiment fuel supports, almost any desired flow characteristics can be achieved with proper fuel bundle configuration, resulting in an example embodiment core configuration having desired coolant flow, and thus energy generating or safety-margin complying, properties. the inventors compared example embodiment fuel supports and cores, with more than two different inlet orifice and thus bundle flow characteristics, with conventional cores having only two, central and peripheral, inlet orifice flow characteristics. fig. 6 is a graph of the results showing the percent difference of fluid coolant/moderator flow through example embodiment fuel supports in central locations of example embodiment cores versus conventional fuel support orifices in central locations of conventional cores (bars). fig. 
6 further shows the change in minimum critical power ratio (mcpr, a ratio between power levels producing critical boiling transition in a single fuel bundle versus operating power levels) value between the same scenarios (line and points). to generate the results of fig. 6 , five esbwr cores having a same whole-core flow rate of 77 mlb/hr and energy density of 54 kw/l for 100% rated power were simulated using a known panacea (tracg04/panac11) core thermodynamic code. each core contained 1132 bundles at associated positions, such as the layout of example embodiment core 236 shown in fig. 5 . the only parameter varied in the simulations was the fuel support configuration at specific locations to achieve different loss coefficients and flow rates at different core locations, as done with example embodiment cores. the exception is scenario 5, which used the same parameters as scenario 4, but with a different operation cycle length and reduced reload fuel requirements in combination with an optimized fuel design. table 1 summarizes the varied parameters of scenarios 1-5, shown in fig. 6 :

  scenario    intermediate (3, 4)-to-peripheral (2)   central (1)-to-peripheral (2)
              loss coefficient ratio                  loss coefficient ratio
  reference   0.23                                    0.23
  1           0.40                                    0.20
  2           0.60                                    0.19
  3           0.80                                    0.19
  4           0.40 (4), 0.80 (3)                      0.19
  5*          0.40 (4), 0.80 (3)                      0.19

the simulated channel flow and mcpr values for scenarios 1-5 were compared against those of the reference scenario, and the percentage change or value difference was graphed in fig. 6 . as shown in fig. 6 , each example embodiment core using more than two different types of orifices showed significant improvement in channel flow (at least 3% increase) and mcpr (at least 0.02 improvement). at current uranium costs, every 0.01 mcpr improvement in an operating commercial light water reactor translates to approximately $400,000.00 in reduced fuel costs.
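Assuming the idealized single-orifice relation sketched earlier (flow proportional to 1/sqrt(K) at a fixed driving pressure), the table 1 ratios can be turned into a directional estimate of central channel flow. This toy calculation ignores the whole-core coupling that the panacea simulation captures, so it indicates only the direction of the change, not the simulated percentage figures:

```python
import math

# Table 1 loss coefficient ratios (relative to peripheral "2" orifices).
# Scenarios 4 and 5 use two intermediate values, listed here as tuples.
scenarios = {
    "reference": {"intermediate": (0.23,), "central": 0.23},
    "1": {"intermediate": (0.40,), "central": 0.20},
    "2": {"intermediate": (0.60,), "central": 0.19},
    "3": {"intermediate": (0.80,), "central": 0.19},
    "4": {"intermediate": (0.40, 0.80), "central": 0.19},
    "5": {"intermediate": (0.40, 0.80), "central": 0.19},
}

def idealized_central_flow_gain(scenario):
    """Relative change in central-orifice flow versus the reference, under
    the single-orifice relation flow ~ 1/sqrt(K) at fixed driving pressure.
    A real estimate requires the coupled core thermodynamic simulation."""
    k_ref = scenarios["reference"]["central"]
    k = scenarios[scenario]["central"]
    return math.sqrt(k_ref / k) - 1.0

# Every example scenario lowers the central ratio below the reference 0.23,
# so the idealized relation predicts more central channel flow in each case.
assert all(idealized_central_flow_gain(s) > 0 for s in scenarios if s != "reference")
```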
as such, example embodiment fuel supports with variable orifice characteristics and flow rates based on core position of the orifice and associated bundle may be used in example embodiment reactor cores to increase reactor efficiency.

example methods

example methods generate nuclear core configurations having customized fuel supports to achieve several different desired levels of coolant/moderator flow within the core. as shown in fig. 7 , in s 100 a known core configuration, including fuel characteristics, bundle location, core operating parameters, etc. is identified for optimization. for example, a program may receive input of several reactor core operational characteristics in s 100 . in s 110 , one or more loss coefficients are proposed or modified for one or more associated bundle positions in the known core configuration. for example, a user may input or alter bundle location flow loss coefficients, or a computer processor may iteratively cycle through all potential flow loss coefficients within an acceptable range, in s 110 . the resulting core with modified core flow coefficients for each bundle position is then simulated with a thermodynamic reactor modeling code in s 120 . the results, including variables such as central bundle flow rate and mcpr, are determined by the simulator and output for analysis or comparison in s 130 . for example, a user or computer program may determine if the resulting core operational parameters exceed a minimum performance threshold or compare the operational results against previous results from previous iterations with different flow loss coefficients in s 130 . actions s 100 , s 110 , s 120 , and/or s 130 may then optionally be repeated for any number of iterations until an acceptable or best flow loss coefficient map is determined for a particular core.
in s 140 , the accepted core flow loss coefficient map is achieved by identifying fuel support configurations that possess the accepted flow loss coefficients for each core position. example embodiment fuel supports may then be fabricated or otherwise configured to achieve the identified flow loss coefficients for each core location in s 140 . example methods including s 100 -s 140 may be executed for each bundle location within a core or only a subset of bundle locations of interest. alternatively, example methods may be executed only with respect to a particular bundle in order to, for example, optimize core operating characteristics or fix a limiting problem with respect to the particular bundle location. similarly, example methods may be used as an integral part of core design or as a separate step performed alternatively and/or iteratively with other known methods of core design. for example, a known core design program may output a core map using fuel bundle characteristics and core parameters using uniform orifice configuration and associated flow loss coefficients. example methods including s 100 -s 140 may then be performed on some or all fuel bundle locations involved in the map, changing their operational characteristics including flow loss coefficient. the core design program may then be re-executed with the modified characteristics, and this core configuring involving example and other core optimization methods may continue until no further optimization is possible or desired. or, example methods may be used as an integral part of otherwise known core design methods, treating flow loss coefficient parameters affected by orifice configuration as additional variables in the core design process. it is also recognized that one or more actions s 100 -s 140 may be executed by different programs or parties in the fuel services and licensee context. 
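The s 100 - s 140 loop described above can be sketched as an iterative search. The simulator stub, scoring function, threshold, and candidate maps below are hypothetical stand-ins for a real thermodynamic code such as the panacea code mentioned earlier:

```python
def simulate_core(core_config, loss_coefficients):
    """Stand-in for s 120: a real implementation would run a core
    thermodynamic code and return channel flow, mcpr, and similar results.
    This toy objective simply rewards lower central loss coefficients."""
    return {"mcpr_margin": 1.0 - min(loss_coefficients.values())}

def optimize_loss_coefficients(core_config, candidates, threshold):
    """Iterate s 110 - s 130 until an acceptable loss coefficient map is
    found, then return it for fuel support configuration in s 140."""
    best = None
    for loss_map in candidates:                          # s 110: propose/modify
        results = simulate_core(core_config, loss_map)   # s 120: simulate
        if best is None or results["mcpr_margin"] > best[1]["mcpr_margin"]:
            best = (loss_map, results)                   # s 130: compare/output
        if best[1]["mcpr_margin"] >= threshold:
            break                                        # acceptable map found
    return best[0]                                       # s 140: accepted map

core = {"bundles": 1132}                                 # s 100: known config
candidate_maps = [{"central": 0.23}, {"central": 0.20}, {"central": 0.19}]
accepted = optimize_loss_coefficients(core, candidate_maps, threshold=0.85)
assert accepted == {"central": 0.19}
```

As the text notes, such a loop may run over all bundle positions, a subset of interest, or a single limiting bundle, and may alternate with other core design steps.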
example embodiments and methods thus being described, it will be appreciated by one skilled in the art that example embodiments may be varied through routine experimentation and without further inventive activity. for example, it is readily appreciated upon reading the above disclosure that other core configurations and fuel support shapes and capacities than those of the specific example embodiments described may be achieved. variations are not to be regarded as a departure from the spirit and scope of the example embodiments, and all such modifications as would be obvious to one skilled in the art are intended to be included within the scope of the following claims.
181-745-181-801-202
US
[ "EP", "US", "KR" ]
C09K11/06,H05B33/10,H01L51/00,H01L51/50,C07F5/02,C07C211/54
2012-09-24T00:00:00
2012
[ "C09", "H05", "H01", "C07" ]
organic compounds containing b-n heterocycles
in certain embodiments, the invention provides boron-nitrogen heterocycles having formula (i): wherein one of the e 1 and e 2 is n, and one of the e 1 and e 2 is b; wherein e 3 and e 4 are carbon; wherein ring y and ring z are 5-membered or 6-membered carbocyclic or heterocyclic aromatic rings fused to ring x; wherein r 2 and r 3 represent mono, di, tri, tetra substitutions or no substitution; wherein r 2 and r 3 are each independently selected from various substituents; and wherein any two adjacent r 2 and r 3 are optionally joined to form a ring, which may be further substituted. in certain embodiments, the invention provides devices, such as organic light emitting devices, that comprise such boron-nitrogen heterocycles.
a compound having the formula (i): wherein one of the e 1 and e 2 is n, and one of the e 1 and e 2 is b; wherein e 3 and e 4 are carbon; wherein ring y and ring z are 5-membered or 6-membered carbocyclic or heterocyclic aromatic rings fused to ring x; wherein r 2 and r 3 represent mono, di, tri, tetra substitutions or no substitution; wherein r 2 and r 3 are each independently selected from the group consisting of hydrogen, deuterium, halide, alkyl, cycloalkyl, heteroalkyl, arylalkyl, alkoxy, aryloxy, amino, silyl, alkenyl, cycloalkenyl, heteroalkenyl, alkynyl, aryl, heteroaryl, acyl, carbonyl, carboxylic acids, ester, nitrile, isonitrile, sulfanyl, sulfinyl, sulfonyl, phosphino, and combinations thereof; wherein r 1 is selected from the group consisting of halide, alkyl, cycloalkyl, heteroalkyl, arylalkyl, alkoxy, aryloxy, amino, silyl, alkenyl, cycloalkenyl, heteroalkenyl, alkynyl, aryl, heteroaryl, acyl, carbonyl, carboxylic acids, ester, nitrile, isonitrile, sulfanyl, sulfinyl, sulfonyl, phosphino, and combinations thereof; wherein any two adjacent r 2 and r 3 are optionally joined to form a ring, which may be further substituted; and wherein ring y is selected from the group consisting of: wherein e 5 is selected from the group consisting of nr, o, s, and se; and wherein r is selected from the group consisting of halide, alkyl, cycloalkyl, heteroalkyl, arylalkyl, alkoxy, aryloxy, amino, silyl, alkenyl, cycloalkenyl, heteroalkenyl, alkynyl, aryl, heteroaryl, acyl, carbonyl, carboxylic acids, ester, nitrile, isonitrile, sulfanyl, sulfinyl, sulfonyl, phosphino, and combinations thereof. the compound of claim 1, wherein e 1 is b, and e 2 is n. the compound of claim 1, wherein e 1 is n, and e 2 is b.
the compound of claim 1, wherein ring y is the compound of claim 1, wherein ring z is selected from the group consisting of: wherein e 5 is selected from the group consisting of nr, o, s, and se; and wherein r is selected from the group consisting of halide, alkyl, cycloalkyl, heteroalkyl, arylalkyl, alkoxy, aryloxy, amino, silyl, alkenyl, cycloalkenyl, heteroalkenyl, alkynyl, aryl, heteroaryl, acyl, carbonyl, carboxylic acids, ester, nitrile, isonitrile, sulfanyl, sulfinyl, sulfonyl, phosphino, and combinations thereof. the compound of claim 1, wherein ring z is the compound of claim 1, wherein the compound has the formula the compound of claim 1, wherein the compound is a compound having formula (ii): wherein r 21 and r 22 are independently aryl or heteroaryl, each of which may be further substituted; r 23 and r 24 are independently aryl or heteroaryl, each of which may be further substituted; and g is arylene, heteroarylene, or combinations thereof. the compound of claim 8, which is: the compound of claim 1, wherein the compound is a compound having formula (iii): wherein r 31 and r 32 independently are aryl or heteroaryl, which may be further substituted; wherein g 1 , g 2 , and g 3 independently are arylene or heteroarylene. the compound of claim 10, which is: the compound of claim 1, wherein the compound is a compound having formula (iv): wherein g 4 is arylene or heteroarylene. the compound of claim 12, which is: or the compound of claim 1, wherein the compound is a compound of formula (v): wherein r 51 is aryl or heteroaryl. the compound of claim 14, which is: or the compound of claim 1, wherein the compound is a compound of formula (vi): wherein r 61 and r 62 independently are aryl or heteroaryl. the compound of claim 16, which is: the compound of claim 1, wherein the compound is a compound of formula (vii): wherein r 71 and r 72 independently are aryl or heteroaryl.
the compound of claim 18, which is: the compound of claim 1, wherein the compound is a compound having formula (viii): wherein g 5 is arylene or heteroarylene; and r 81 and r 82 are independently aryl or heteroaryl, each of which is substituted with at least one acyclic or cyclic aliphatic substituent. the compound of claim 20, which is:
field of the invention

the present invention relates to organic light emitting devices (oleds). in particular, the invention relates to boron-nitrogen heterocycles that may have improved electrochemical and photophysical properties, especially when used in an oled.

background

opto-electronic devices that make use of organic materials are becoming increasingly desirable for a number of reasons. many of the materials used to make such devices are relatively inexpensive, so organic opto-electronic devices have the potential for cost advantages over inorganic devices. in addition, the inherent properties of organic materials, such as their flexibility, may make them well suited for particular applications such as fabrication on a flexible substrate. examples of organic opto-electronic devices include organic light emitting devices (oleds), organic phototransistors, organic photovoltaic cells, and organic photodetectors. for oleds, the organic materials may have performance advantages over conventional materials. for example, the wavelength at which an organic emissive layer emits light may generally be readily tuned with appropriate dopants. oleds make use of thin organic films that emit light when voltage is applied across the device. oleds are becoming an increasingly interesting technology for use in applications such as flat panel displays, illumination, and backlighting. several oled materials and configurations are described in u.s. pat. nos. 5,844,363 , 6,303,238 , and 5,707,745 . one application for phosphorescent emissive molecules is a full color display. industry standards for such a display call for pixels adapted to emit particular colors, referred to as "saturated" colors. in particular, these standards call for saturated red, green, and blue pixels. color may be measured using cie coordinates, which are well known to the art.
one example of a green emissive molecule is tris(2-phenylpyridine) iridium, denoted ir(ppy) 3 , which has the following structure: in this, and later figures herein, we depict the dative bond from nitrogen to metal (here, ir) as a straight line. as used herein, the term "organic" includes polymeric materials as well as small molecule organic materials that may be used to fabricate organic opto-electronic devices. "small molecule" refers to any organic material that is not a polymer, and "small molecules" may actually be quite large. small molecules may include repeat units in some circumstances. for example, using a long chain alkyl group as a substituent does not remove a molecule from the "small molecule" class. small molecules may also be incorporated into polymers, for example as a pendent group on a polymer backbone or as a part of the backbone. small molecules may also serve as the core moiety of a dendrimer, which consists of a series of chemical shells built on the core moiety. the core moiety of a dendrimer may be a fluorescent or phosphorescent small molecule emitter. a dendrimer may be a "small molecule," and it is believed that all dendrimers currently used in the field of oleds are small molecules. as used herein, "top" means furthest away from the substrate, while "bottom" means closest to the substrate. where a first layer is described as "disposed over" a second layer, the first layer is disposed further away from substrate. there may be other layers between the first and second layer, unless it is specified that the first layer is "in contact with" the second layer. for example, a cathode may be described as "disposed over" an anode, even though there are various organic layers in between. as used herein, "solution processible" means capable of being dissolved, dispersed, or transported in and/or deposited from a liquid medium, either in solution or suspension form. 
a ligand may be referred to as "photoactive" when it is believed that the ligand directly contributes to the photoactive properties of an emissive material. a ligand may be referred to as "ancillary" when it is believed that the ligand does not contribute to the photoactive properties of an emissive material, although an ancillary ligand may alter the properties of a photoactive ligand. as used herein, and as would be generally understood by one skilled in the art, a first "highest occupied molecular orbital" (homo) or "lowest unoccupied molecular orbital" (lumo) energy level is "greater than" or "higher than" a second homo or lumo energy level if the first energy level is closer to the vacuum energy level. since ionization potentials (ip) are measured as a negative energy relative to a vacuum level, a higher homo energy level corresponds to an ip having a smaller absolute value (an ip that is less negative). similarly, a higher lumo energy level corresponds to an electron affinity (ea) having a smaller absolute value (an ea that is less negative). on a conventional energy level diagram, with the vacuum level at the top, the lumo energy level of a material is higher than the homo energy level of the same material. a "higher" homo or lumo energy level appears closer to the top of such a diagram than a "lower" homo or lumo energy level. as used herein, and as would be generally understood by one skilled in the art, a first work function is "greater than" or "higher than" a second work function if the first work function has a higher absolute value. because work functions are generally measured as negative numbers relative to vacuum level, this means that a "higher" work function is more negative. on a conventional energy level diagram, with the vacuum level at the top, a "higher" work function is illustrated as further away from the vacuum level in the downward direction. 
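The sign conventions just described can be condensed into a small sketch. The energy values in eV are illustrative only and do not come from the disclosure:

```python
# Orbital energies (HOMO/LUMO) are negative relative to the vacuum level
# (0 eV), and "higher" means closer to vacuum; work functions are also
# negative versus vacuum, but "higher" means a larger absolute value.
def is_higher_orbital(e1_ev, e2_ev):
    """A HOMO or LUMO level is higher when closer to the vacuum level."""
    return e1_ev > e2_ev  # e.g. -5.2 eV is higher than -5.6 eV

def ionization_potential(homo_ev):
    """IP measured as a negative energy relative to vacuum, per the text:
    a higher HOMO corresponds to an IP of smaller absolute value."""
    return homo_ev

def is_higher_work_function(wf1_ev, wf2_ev):
    """For work functions, 'higher' means more negative (larger |value|)."""
    return abs(wf1_ev) > abs(wf2_ev)

assert is_higher_orbital(-5.2, -5.6)  # -5.2 eV HOMO is the higher level
assert abs(ionization_potential(-5.2)) < abs(ionization_potential(-5.6))
assert is_higher_work_function(-5.1, -4.7)  # more negative = higher
```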
thus, the definitions of homo and lumo energy levels follow a different convention than work functions. more details on oleds, and the definitions described above, can be found in us pat. no. 7,279,704 . de 2012 000 064 a1 describes organic light emitting devices comprising heterocyclic compounds with boron-triamine moieties.

summary of the invention

the invention is described in the independent claims; preferred embodiments are described in the dependent claims. boron-nitrogen heterocycles are described having the formula (i): wherein one of the e 1 and e 2 is n, and one of the e 1 and e 2 is b; wherein e 3 and e 4 are carbon; wherein ring y and ring z are 5-membered or 6-membered carbocyclic or heterocyclic aromatic rings fused to ring x; wherein r 2 and r 3 represent mono, di, tri, tetra substitutions or no substitution; wherein r 2 and r 3 are each independently selected from the group consisting of hydrogen, deuterium, halide, alkyl, cycloalkyl, heteroalkyl, arylalkyl, alkoxy, aryloxy, amino, silyl, alkenyl, cycloalkenyl, heteroalkenyl, alkynyl, aryl, heteroaryl, acyl, carbonyl, carboxylic acids, ester, nitrile, isonitrile, sulfanyl, sulfinyl, sulfonyl, phosphino, and combinations thereof; wherein r 1 is selected from the group consisting of halide, alkyl, cycloalkyl, heteroalkyl, arylalkyl, alkoxy, aryloxy, amino, silyl, alkenyl, cycloalkenyl, heteroalkenyl, alkynyl, aryl, heteroaryl, acyl, carbonyl, carboxylic acids, ester, nitrile, isonitrile, sulfanyl, sulfinyl, sulfonyl, phosphino, and combinations thereof; and wherein any two adjacent r 2 and r 3 are optionally joined to form a ring, which may be further substituted. in some embodiments, e 1 is b, and e 2 is n. in some other embodiments, e 1 is n, and e 2 is b.
in some embodiments, ring y is selected from the group consisting of: wherein e 5 is selected from the group consisting of nr, o, s, and se; and wherein r is selected from the group consisting of halide, alkyl, cycloalkyl, heteroalkyl, arylalkyl, alkoxy, aryloxy, amino, silyl, alkenyl, cycloalkenyl, heteroalkenyl, alkynyl, aryl, heteroaryl, acyl, carbonyl, carboxylic acids, ester, nitrile, isonitrile, sulfanyl, sulfinyl, sulfonyl, phosphino, and combinations thereof. in some further embodiments, ring y is in some embodiments, ring z is selected from the group consisting of: wherein e 5 is selected from the group consisting of nr, o, s, and se; and wherein r is selected from the group consisting of halide, alkyl, cycloalkyl, heteroalkyl, arylalkyl, alkoxy, aryloxy, amino, silyl, alkenyl, cycloalkenyl, heteroalkenyl, alkynyl, aryl, heteroaryl, acyl, carbonyl, carboxylic acids, ester, nitrile, isonitrile, sulfanyl, sulfinyl, sulfonyl, phosphino, and combinations thereof. in some further embodiments, ring z is in some embodiments, r 1 , r 2 , and r 3 are independently selected from the group consisting of phenyl, pyridine, triazine, pyrimidine, phenanthrene, naphthalene, anthracene, triphenylene, pyrene, chrysene, fluoranthene, carbazole, dibenzothiophene, dibenzofuran, dibenzoselenophene, aza-carbazole, aza-dibenzothiophene, aza-dibenzofuran, and aza-dibenzoselenophene, each of which may be further substituted. in some embodiments, the boron-nitrogen heterocycles have the formula where the variables have the meanings defined above. in some embodiments, the boron-nitrogen heterocycles have formula (ii): wherein r 21 and r 22 are independently aryl or heteroaryl, each of which may be further substituted; r 23 and r 24 are independently aryl or heteroaryl, each of which may be further substituted; and g is arylene, heteroarylene, or combinations thereof.
in some such embodiments, g is phenylene, pyridine, biphenylene, triazine, pyrimidine, triphenylene, naphthylene, anthracene, chrysene, pyrene, fluoranthene, phenanthrene, carbazole, dibenzothiophene, dibenzofuran, dibenzoselenophene, aza-carbazole, aza-dibenzothiophene, aza-dibenzofuran, aza-dibenzoselenophene, or combinations thereof. in some embodiments, the boron-nitrogen heterocycle is a compound, which is in some embodiments, the boron-nitrogen heterocycle is a compound having formula (iii): wherein r 31 and r 32 independently are aryl or heteroaryl, which may be further substituted; and wherein g 1 , g 2 , and g 3 independently are arylene or heteroarylene. in some such embodiments, g 1 , g 2 , and g 3 independently are phenylene, pyridine, biphenylene, triazine, pyrimidine, triphenylene, naphthylene, anthracene, chrysene, pyrene, fluoranthene, phenanthrene, carbazole, dibenzothiophene, dibenzofuran, dibenzoselenophene, aza-carbazole, aza-dibenzothiophene, aza-dibenzofuran, aza-dibenzoselenophene, or combinations thereof. in some embodiments, the boron-nitrogen heterocycle is a compound, which is: which is: in some embodiments, the boron-nitrogen heterocycle is a compound having formula (iv): wherein g 4 is arylene or heteroarylene. in some such embodiments, g 4 is phenylene, pyridine, biphenylene, triazine, pyrimidine, triphenylene, naphthylene, anthracene, chrysene, pyrene, fluoranthene, phenanthrene, carbazole, dibenzothiophene, dibenzofuran, dibenzoselenophene, aza-carbazole, aza-dibenzothiophene, aza-dibenzofuran, aza-dibenzoselenophene, or combinations thereof. in some embodiments, the boron-nitrogen heterocycle is a compound, which is: or in some embodiments, the boron-nitrogen heterocycle is a compound having formula (v): wherein r 51 is aryl or heteroaryl. 
in some such embodiments, r 51 is phenyl, pyridyl, biphenylyl, triazinyl, pyrimidinyl, triphenylyl, naphthyl, anthracenyl, chrysene, pyrene, fluoranthene, phenanthrene, carbazole, dibenzothiophene, dibenzofuran, dibenzoselenophene, aza-carbazole, aza-dibenzothiophene, aza-dibenzofuran, aza-dibenzoselenophene, or combinations thereof. in some embodiments, the boron-nitrogen heterocycle is a compound, which is: or in some embodiments, the boron-nitrogen heterocycle is a compound having formula (vi): wherein r 61 and r 62 independently are aryl or heteroaryl. in some such embodiments, r 61 is phenyl, pyridyl, biphenylyl, triazinyl, pyrimidinyl, triphenylyl, naphthyl, anthracenyl, chrysene, pyrene, fluoranthene, phenanthrene, carbazole, dibenzothiophene, dibenzofuran, dibenzoselenophene, aza-carbazole, aza-dibenzothiophene, aza-dibenzofuran, aza-dibenzoselenophene, or combinations thereof; and r 62 is phenyl, pyridyl, biphenylyl, triazinyl, pyrimidinyl, triphenylyl, naphthyl, anthracenyl, chrysene, pyrene, fluoranthene, phenanthrene, carbazole, dibenzothiophene, dibenzofuran, dibenzoselenophene, aza-carbazole, aza-dibenzothiophene, aza-dibenzofuran, aza-dibenzoselenophene, or combinations thereof. in some embodiments, the boron-nitrogen heterocycle is a compound, which is: in some embodiments, the boron-nitrogen heterocycle is a compound having formula (vii): wherein r 71 and r 72 independently are aryl or heteroaryl.
in some such embodiments, r 71 is phenyl, pyridyl, biphenylyl, triazinyl, pyrimidinyl, triphenylyl, naphthyl, anthracenyl, chrysene, pyrene, fluoranthene, phenanthrene, carbazole, dibenzothiophene, dibenzofuran, dibenzoselenophene, aza-carbazole, aza-dibenzothiophene, aza-dibenzofuran, aza-dibenzoselenophene, or combinations thereof; and r 72 is phenyl, pyridyl, biphenylyl, triazinyl, pyrimidinyl, triphenylyl, naphthyl, anthracenyl, chrysene, pyrene, fluoranthene, phenanthrene, carbazole, dibenzothiophene, dibenzofuran, dibenzoselenophene, aza-carbazole, aza-dibenzothiophene, aza-dibenzofuran, aza-dibenzoselenophene, or combinations thereof. in some embodiments, the boron-nitrogen heterocycle is a compound, which is: in some embodiments, the boron-nitrogen heterocycle is a compound having formula (viii): wherein g 5 is arylene or heteroarylene; and r 81 and r 82 are independently aryl or heteroaryl, each of which is substituted with at least one acyclic or cyclic aliphatic substituent. in some such embodiments, g 5 is phenylene, pyridine, biphenylene, triazine, pyrimidine, triphenylene, naphthylene, anthracene, chrysene, pyrene, fluoranthene, phenanthrene, carbazole, dibenzothiophene, dibenzofuran, dibenzoselenophene, aza-carbazole, aza-dibenzothiophene, aza-dibenzofuran, aza-dibenzoselenophene, or combinations thereof. in some embodiments, the boron-nitrogen heterocycle is a compound, which is: furthermore, devices are described that include boron-nitrogen heterocycles, such as those described in the foregoing paragraphs. in some embodiments, the invention provides a first device comprising a first organic light emitting device, which further comprises: an anode; a cathode; and an organic layer disposed between the anode and the cathode, which comprises a boron-nitrogen heterocycle according to any of the above embodiments. in some embodiments, the first device is a consumer product. in some embodiments, the first device is an organic light emitting device (oled). 
in some embodiments, the first device comprises a lighting panel. the boron-nitrogen heterocycles can serve various roles within a device. in some embodiments, the first device comprises an organic layer that is an emissive layer, where the emissive layer comprises an emissive dopant, which is a boron-nitrogen heterocycle of any of the above embodiments. in some such embodiments, the first device is a delayed fluorescence device. in some other embodiments, the first device comprises an organic layer that is an emissive layer, and also comprises a host. in some such embodiments, the host is a boron-nitrogen heterocycle of any of the above embodiments. in some such embodiments, the organic layer comprises an emissive dopant. in some embodiments, the emissive dopant is a fluorescent dopant. in some embodiments, the emissive dopant is a transition metal complex having at least one ligand or part of a ligand if the ligand is more than bidentate, where the more than bidentate ligand is selected from the group consisting of: and wherein r a , r b , r c , and r d may represent mono, di, tri, or tetra substitution, or no substitution; and wherein r a , r b , r c , and r d are independently selected from the group consisting of hydrogen, deuterium, halide, alkyl, cycloalkyl, heteroalkyl, arylalkyl, alkoxy, aryloxy, amino, silyl, alkenyl, cycloalkenyl, heteroalkenyl, alkynyl, aryl, heteroaryl, acyl, carbonyl, carboxylic acids, ester, nitrile, isonitrile, sulfanyl, sulfinyl, sulfonyl, phosphino, and combinations thereof; and wherein two adjacent substituents of r a , r b , r c , and r d are optionally joined to form a fused ring or form a multidentate ligand. in some embodiments, the organic layer in the first device is a hole injecting layer or a hole transporting layer. in some other embodiments, the organic layer is an electron injecting layer or an electron transporting layer. in some embodiments, the organic layer is an exciton blocking layer or a charge blocking layer. 
brief description of the drawings
fig. 1 shows an organic light emitting device. fig. 2 shows an inverted organic light emitting device that does not have a separate electron transport layer. fig. 3 shows an embodiment of a boron-nitrogen heterocycle.
detailed description
generally, an oled comprises at least one organic layer disposed between and electrically connected to an anode and a cathode. when a current is applied, the anode injects holes and the cathode injects electrons into the organic layer(s). the injected holes and electrons each migrate toward the oppositely charged electrode. when an electron and hole localize on the same molecule, an "exciton," which is a localized electron-hole pair having an excited energy state, is formed. light is emitted when the exciton relaxes via a photoemissive mechanism. in some cases, the exciton may be localized on an excimer or an exciplex. non-radiative mechanisms, such as thermal relaxation, may also occur, but are generally considered undesirable. the initial oleds used emissive molecules that emitted light from their singlet states ("fluorescence") as disclosed, for example, in u.s. pat. no. 4,769,292. fluorescent emission generally occurs in a time frame of less than 10 nanoseconds. more recently, oleds having emissive materials that emit light from triplet states ("phosphorescence") have been demonstrated. baldo et al., "highly efficient phosphorescent emission from organic electroluminescent devices," nature, vol. 395, 151-154, 1998 ("baldo-i") and baldo et al., "very high-efficiency green organic light-emitting devices based on electrophosphorescence," appl. phys. lett., vol. 75, no. 3, 4-6 (1999) ("baldo-ii"). phosphorescence is described in more detail in u.s. pat. no. 7,279,704 at cols. 5-6. fig. 1 shows an organic light emitting device 100. the figures are not necessarily drawn to scale.
device 100 may include a substrate 110, an anode 115, a hole injection layer 120, a hole transport layer 125, an electron blocking layer 130, an emissive layer 135, a hole blocking layer 140, an electron transport layer 145, an electron injection layer 150, a protective layer 155, a cathode 160, and a barrier layer 170. cathode 160 is a compound cathode having a first conductive layer 162 and a second conductive layer 164. device 100 may be fabricated by depositing the layers described, in order. the properties and functions of these various layers, as well as example materials, are described in more detail in us 7,279,704 at cols. 6-10. more examples for each of these layers are available. for example, a flexible and transparent substrate-anode combination is disclosed in u.s. pat. no. 5,844,363. an example of a p-doped hole transport layer is m-mtdata doped with f4-tcnq at a molar ratio of 50:1, as disclosed in u.s. patent application publication no. 2003/0230980. examples of emissive and host materials are disclosed in u.s. pat. no. 6,303,238 to thompson et al. an example of an n-doped electron transport layer is bphen doped with li at a molar ratio of 1:1, as disclosed in u.s. patent application publication no. 2003/0230980. u.s. pat. nos. 5,703,436 and 5,707,745 disclose examples of cathodes including compound cathodes having a thin layer of metal such as mg:ag with an overlying transparent, electrically-conductive, sputter-deposited ito layer. the theory and use of blocking layers is described in more detail in u.s. pat. no. 6,097,147 and u.s. patent application publication no. 2003/0230980. examples of injection layers are provided in u.s. patent application publication no. 2004/0174116, which is incorporated by reference in its entirety. a description of protective layers may be found in u.s. patent application publication no. 2004/0174116. fig. 2 shows an inverted oled 200.
the device includes a substrate 210, a cathode 215, an emissive layer 220, a hole transport layer 225, and an anode 230. device 200 may be fabricated by depositing the layers described, in order. because the most common oled configuration has a cathode disposed over the anode, and device 200 has cathode 215 disposed under anode 230, device 200 may be referred to as an "inverted" oled. materials similar to those described with respect to device 100 may be used in the corresponding layers of device 200. fig. 2 provides one example of how some layers may be omitted from the structure of device 100. the simple layered structure illustrated in figs. 1 and 2 is provided by way of non-limiting example, and it is understood that embodiments of the invention may be used in connection with a wide variety of other structures. the specific materials and structures described are exemplary in nature, and other materials and structures may be used. functional oleds may be achieved by combining the various layers described in different ways, or layers may be omitted entirely, based on design, performance, and cost factors. other layers not specifically described may also be included. materials other than those specifically described may be used. although many of the examples provided herein describe various layers as comprising a single material, it is understood that combinations of materials, such as a mixture of host and dopant, or more generally a mixture, may be used. also, the layers may have various sublayers. the names given to the various layers herein are not intended to be strictly limiting. for example, in device 200, hole transport layer 225 transports holes and injects holes into emissive layer 220, and may be described as a hole transport layer or a hole injection layer. in one embodiment, an oled may be described as having an "organic layer" disposed between a cathode and an anode. 
this organic layer may comprise a single layer, or may further comprise multiple layers of different organic materials as described, for example, with respect to figs. 1 and 2. structures and materials not specifically described may also be used, such as oleds comprised of polymeric materials (pleds) such as disclosed in u.s. pat. no. 5,247,190 to friend et al. by way of further example, oleds having a single organic layer may be used. oleds may be stacked, for example as described in u.s. pat. no. 5,707,745 to forrest et al. the oled structure may deviate from the simple layered structure illustrated in figs. 1 and 2. for example, the substrate may include an angled reflective surface to improve out-coupling, such as a mesa structure as described in u.s. pat. no. 6,091,195 to forrest et al., and/or a pit structure as described in u.s. pat. no. 5,834,893 to bulovic et al. unless otherwise specified, any of the layers of the various embodiments may be deposited by any suitable method. for the organic layers, preferred methods include thermal evaporation, ink-jet, such as described in u.s. pat. nos. 6,013,982 and 6,087,196, organic vapor phase deposition (ovpd), such as described in u.s. pat. no. 6,337,102 to forrest et al., and deposition by organic vapor jet printing (ovjp), such as described in u.s. patent application ser. no. 10/233,470. other suitable deposition methods include spin coating and other solution based processes. solution based processes are preferably carried out in nitrogen or an inert atmosphere. for the other layers, preferred methods include thermal evaporation. preferred patterning methods include deposition through a mask, cold welding such as described in u.s. pat. nos. 6,294,398 and 6,468,819, and patterning associated with some of the deposition methods such as ink-jet and ovjp. other methods may also be used. the materials to be deposited may be modified to make them compatible with a particular deposition method.
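The layer sequences of devices 100 and 200 described above can be modeled as ordered stacks listed in deposition order. The following Python sketch is purely illustrative (the layer names and the `is_inverted` check are not part of the disclosure); it records the stack of device 100 and shows how the "inverted" configuration of device 200 differs, namely that the cathode is deposited before the anode:

```python
# Deposition order of device 100, substrate first (reference numerals in comments).
# This listing is an illustrative model of the text above, not a claimed structure.
DEVICE_100_STACK = [
    "substrate",           # 110
    "anode",               # 115
    "hole_injection",      # 120
    "hole_transport",      # 125
    "electron_blocking",   # 130
    "emissive",            # 135
    "hole_blocking",       # 140
    "electron_transport",  # 145
    "electron_injection",  # 150
    "protective",          # 155
    "cathode",             # 160 (compound: conductive layers 162 and 164)
    "barrier",             # 170
]

# Stack of inverted device 200 (numerals 210-230).
DEVICE_200_STACK = ["substrate", "cathode", "emissive", "hole_transport", "anode"]

def is_inverted(stack):
    """An OLED stack is 'inverted' when the cathode is deposited before the anode."""
    return stack.index("cathode") < stack.index("anode")

print(is_inverted(DEVICE_100_STACK))  # False: conventional order
print(is_inverted(DEVICE_200_STACK))  # True: device 200 of fig. 2
```

As in the text, the model permits layers to be omitted (device 200 has no separate electron transport layer) while the anode/cathode ordering still distinguishes the two configurations.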
for example, substituents such as alkyl and aryl groups, branched or unbranched, and preferably containing at least 3 carbons, may be used in small molecules to enhance their ability to undergo solution processing. substituents having 20 carbons or more may be used, and 3-20 carbons is a preferred range. materials with asymmetric structures may have better solution processability than those having symmetric structures, because asymmetric materials may have a lower tendency to recrystallize. dendrimer substituents may be used to enhance the ability of small molecules to undergo solution processing. devices fabricated in accordance with embodiments of the present invention may further optionally comprise a barrier layer. one purpose of the barrier layer is to protect the electrodes and organic layers from damaging exposure to harmful species in the environment including moisture, vapor and/or gases, etc. the barrier layer may be deposited over, under or next to a substrate, an electrode, or over any other parts of a device including an edge. the barrier layer may comprise a single layer, or multiple layers. the barrier layer may be formed by various known chemical vapor deposition techniques and may include compositions having a single phase as well as compositions having multiple phases. any suitable material or combination of materials may be used for the barrier layer. the barrier layer may incorporate an inorganic or an organic compound or both. the preferred barrier layer comprises a mixture of a polymeric material and a non-polymeric material as described in u.s. pat. no. 7,968,146 , pct pat. application nos. pct/us2007/023098 and pct/us2009/042829 . to be considered a "mixture", the aforesaid polymeric and non-polymeric materials comprising the barrier layer should be deposited under the same reaction conditions and/or at the same time. the weight ratio of polymeric to non-polymeric material may be in the range of 95:5 to 5:95. 
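The stated 95:5 to 5:95 weight-ratio window for the polymeric/non-polymeric barrier mixture is equivalent to requiring that the polymeric fraction lie between 5% and 95% by weight. A minimal sketch of that equivalence (the function name and inputs are hypothetical, for illustration only):

```python
def barrier_ratio_in_range(polymeric_wt, non_polymeric_wt):
    """Return True if the polymeric:non-polymeric weight ratio lies
    within the 95:5 to 5:95 window described above."""
    total = polymeric_wt + non_polymeric_wt
    if total <= 0:
        raise ValueError("weights must be positive")
    fraction = polymeric_wt / total  # polymeric weight fraction of the mixture
    return 0.05 <= fraction <= 0.95

print(barrier_ratio_in_range(95, 5))   # True: boundary of the window
print(barrier_ratio_in_range(50, 50))  # True: 1:1 mixture
print(barrier_ratio_in_range(99, 1))   # False: outside 95:5
```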
the polymeric material and the non-polymeric material may be created from the same precursor material. in one example, the mixture of a polymeric material and a non-polymeric material consists essentially of polymeric silicon and inorganic silicon. devices fabricated in accordance with embodiments of the invention may be incorporated into a wide variety of consumer products, including flat panel displays, computer monitors, medical monitors, televisions, billboards, lights for interior or exterior illumination and/or signaling, heads up displays, fully transparent displays, flexible displays, laser printers, telephones, cell phones, personal digital assistants (pdas), laptop computers, digital cameras, camcorders, viewfinders, micro-displays, vehicles, a large area wall, theater or stadium screen, or a sign. various control mechanisms may be used to control devices fabricated in accordance with the present invention, including passive matrix and active matrix. many of the devices are intended for use in a temperature range comfortable to humans, such as 18 degrees c. to 30 degrees c., and more preferably at room temperature (20-25 degrees c.). the materials and structures described herein may have applications in devices other than oleds. for example, other optoelectronic devices such as organic solar cells and organic photodetectors may employ the materials and structures. more generally, organic devices, such as organic transistors, may employ the materials and structures. the terms halo, halogen, alkyl, cycloalkyl, alkenyl, alkynyl, arylalkyl, heterocyclic group, aryl, aromatic group, and heteroaryl are known to the art, and are defined in us 7,279,704 at cols. 31-32. aromatic six-membered boron-nitrogen heterocycles that are isoelectronic with benzene can have significantly different properties in comparison to their carbocyclic analogues, especially where the heterocycle contains a single b-n substitution.
examples of such compounds include 1,2-azaborine, 1,3-azaborine, 1,4-azaborine, and substituted variants thereof. embodiments of the present invention include polycyclic boron-nitrogen heterocycles, which have more complex structures than azaborine. in many instances, such polycyclic boron-nitrogen heterocycles can exhibit desirable photophysical and electrochemical properties, particularly in comparison to their analogues that lack a b-n bond. this may be due at least in part to the strong dipole moment, which is induced by the effect of the boron-nitrogen substitution on the overall electronegativity and resonance of the resulting compound. polycyclic boron-nitrogen heterocycles can be synthesized by extending and modifying the principles described in liu et al., angew. chem. int. ed., vol. 51, pp. 6074-6092 (2012) and abbey et al., j. am. chem. soc., vol. 133, pp. 11508-11511 (2011). carbazole and carbazole-containing compounds can serve as important building blocks for organic electronic materials. for example, carbazole-containing compounds can be used as: host materials for both fluorescent and phosphorescent oleds; fluorescent emitters; photoconductors; and active materials for organic tfts and solar cells. incorporating b-n into a carbazole structure, as shown below, can yield certain beneficial properties. for example, the homo/lumo energy levels and photophysical properties can be modified due to the donor-acceptor nature of the b-n structure, which may make such compounds more suitable for applications in organic electronic devices. certain applications may require a lower triplet energy than that provided by carbazole or a carbazole-containing compound. thus, in such instances, the use of a b-n substituted carbazole can lead to a lower triplet and increased stabilization. such changes can lead to a more balanced charge transport profile, and can also provide for better injection properties.
boron-nitrogen heterocycles are described having the formula (i): wherein one of e 1 and e 2 is n, and the other is b; wherein e 3 and e 4 are carbon; wherein ring y and ring z are 5-membered or 6-membered carbocyclic or heterocyclic aromatic rings fused to ring x; wherein r 2 and r 3 represent mono, di, tri, tetra substitutions or no substitution; wherein r 2 and r 3 are each independently selected from the group consisting of hydrogen, deuterium, halide, alkyl, cycloalkyl, heteroalkyl, arylalkyl, alkoxy, aryloxy, amino, silyl, alkenyl, cycloalkenyl, heteroalkenyl, alkynyl, aryl, heteroaryl, acyl, carbonyl, carboxylic acids, ester, nitrile, isonitrile, sulfanyl, sulfinyl, sulfonyl, phosphino, and combinations thereof; wherein r 1 is selected from the group consisting of halide, alkyl, cycloalkyl, heteroalkyl, arylalkyl, alkoxy, aryloxy, amino, silyl, alkenyl, cycloalkenyl, heteroalkenyl, alkynyl, aryl, heteroaryl, acyl, carbonyl, carboxylic acids, ester, nitrile, isonitrile, sulfanyl, sulfinyl, sulfonyl, phosphino, and combinations thereof; and wherein any two adjacent r 2 and r 3 are optionally joined to form a ring, which may be further substituted. in some embodiments, e 1 is b, and e 2 is n. in some other embodiments, e 1 is n, and e 2 is b. in some embodiments, ring y is selected from the group consisting of: wherein e 5 is selected from the group consisting of nr, o, s, and se; and wherein r is selected from the group consisting of halide, alkyl, cycloalkyl, heteroalkyl, arylalkyl, alkoxy, aryloxy, amino, silyl, alkenyl, cycloalkenyl, heteroalkenyl, alkynyl, aryl, heteroaryl, acyl, carbonyl, carboxylic acids, ester, nitrile, isonitrile, sulfanyl, sulfinyl, sulfonyl, phosphino, and combinations thereof.
in some further embodiments, ring y is in some embodiments, ring z is selected from the group consisting of: wherein e 5 is selected from the group consisting of nr, o, s, and se; and wherein r is selected from the group consisting of halide, alkyl, cycloalkyl, heteroalkyl, arylalkyl, alkoxy, aryloxy, amino, silyl, alkenyl, cycloalkenyl, heteroalkenyl, alkynyl, aryl, heteroaryl, acyl, carbonyl, carboxylic acids, ester, nitrile, isonitrile, sulfanyl, sulfinyl, sulfonyl, phosphino, and combinations thereof. in some further embodiments, ring z is in some embodiments, r 1 , r 2 , and r 3 are independently selected from the group consisting of phenyl, pyridine, triazine, pyrimidine, phenanthrene, naphthalene, anthracene, triphenylene, pyrene, chrysene, fluoranthene, carbazole, dibenzothiophene, dibenzofuran, dibenzoselenophene, aza-carbazole, aza-dibenzothiophene, aza-dibenzofuran, and aza-dibenzoselenophene, each of which may be further substituted. in some embodiments, the boron-nitrogen heterocycles have the formula where the variables have the meanings defined above. in some embodiments, the boron-nitrogen heterocycles have formula (ii): wherein r 21 and r 22 are independently aryl or heteroaryl, each of which may be further substituted; r 23 and r 24 are independently aryl or heteroaryl, each of which may be further substituted; and g is arylene, heteroarylene, or combinations thereof. in some such embodiments, g is phenylene, pyridine, biphenylene, triazine, pyrimidine, triphenylene, naphthylene, anthracene, chrysene, pyrene, fluoranthene, phenanthrene, carbazole, dibenzothiophene, dibenzofuran, dibenzoselenophene, aza-carbazole, aza-dibenzothiophene, aza-dibenzofuran, aza-dibenzoselenophene, or combinations thereof. in some embodiments, such compounds are hole injection or hole transporting materials.
in some embodiments, the boron-nitrogen heterocycle is a compound, which is in some embodiments, the boron-nitrogen heterocycle is a compound having formula (iii): wherein r 31 and r 32 independently are aryl or heteroaryl, which may be further substituted; and wherein g 1 , g 2 , and g 3 independently are arylene or heteroarylene. in some such embodiments, g 1 , g 2 , and g 3 independently are phenylene, pyridine, biphenylene, triazine, pyrimidine, triphenylene, naphthylene, anthracene, chrysene, pyrene, fluoranthene, phenanthrene, carbazole, dibenzothiophene, dibenzofuran, dibenzoselenophene, aza-carbazole, aza-dibenzothiophene, aza-dibenzofuran, aza-dibenzoselenophene, or combinations thereof. in some embodiments, such compounds are hole injection or hole transporting materials. in some embodiments, the boron-nitrogen heterocycle is a compound, which is: which is: in some embodiments, the boron-nitrogen heterocycle is a compound having formula (iv): wherein g 4 is arylene or heteroarylene. in some such embodiments, g 4 is phenylene, pyridine, biphenylene, triazine, pyrimidine, triphenylene, naphthylene, anthracene, chrysene, pyrene, fluoranthene, phenanthrene, carbazole, dibenzothiophene, dibenzofuran, dibenzoselenophene, aza-carbazole, aza-dibenzothiophene, aza-dibenzofuran, aza-dibenzoselenophene, or combinations thereof. in some embodiments, such compounds are host materials. in some embodiments, the boron-nitrogen heterocycle is a compound, which is: or in some embodiments, the boron-nitrogen heterocycle is a compound having formula (v): wherein r 51 is aryl or heteroaryl. in some such embodiments, r 51 is phenyl, pyridyl, biphenylyl, triazinyl, pyrimidinyl, triphenylyl, naphthyl, anthracenyl, chrysene, pyrene, fluoranthene, phenanthrene, carbazole, dibenzothiophene, dibenzofuran, dibenzoselenophene, aza-carbazole, aza-dibenzothiophene, aza-dibenzofuran, aza-dibenzoselenophene, or combinations thereof. in some embodiments, such compounds are host materials. 
in some other embodiments, such compounds are fluorescent emitters. in some embodiments, the boron-nitrogen heterocycle is a compound, which is: or in some embodiments, the boron-nitrogen heterocycle is a compound having formula (vi): wherein r 61 and r 62 independently are aryl or heteroaryl. in some such embodiments, r 61 is phenyl, pyridyl, biphenylyl, triazinyl, pyrimidinyl, triphenylyl, naphthyl, anthracenyl, chrysene, pyrene, fluoranthene, phenanthrene, carbazole, dibenzothiophene, dibenzofuran, dibenzoselenophene, aza-carbazole, aza-dibenzothiophene, aza-dibenzofuran, aza-dibenzoselenophene, or combinations thereof; and r 62 is phenyl, pyridyl, biphenylyl, triazinyl, pyrimidinyl, triphenylyl, naphthyl, anthracenyl, chrysene, pyrene, fluoranthene, phenanthrene, carbazole, dibenzothiophene, dibenzofuran, dibenzoselenophene, aza-carbazole, aza-dibenzothiophene, aza-dibenzofuran, aza-dibenzoselenophene, or combinations thereof. in some embodiments, such compounds are host materials. in some other embodiments, such compounds are fluorescent emitters. in some embodiments, the boron-nitrogen heterocycle is a compound, which is: in some embodiments, the boron-nitrogen heterocycle is a compound having formula (vii): wherein r 71 and r 72 independently are aryl or heteroaryl. in some such embodiments, r 71 is phenyl, pyridyl, biphenylyl, triazinyl, pyrimidinyl, triphenylyl, naphthyl, anthracenyl, chrysene, pyrene, fluoranthene, phenanthrene, carbazole, dibenzothiophene, dibenzofuran, dibenzoselenophene, aza-carbazole, aza-dibenzothiophene, aza-dibenzofuran, aza-dibenzoselenophene, or combinations thereof; and r 72 is phenyl, pyridyl, biphenylyl, triazinyl, pyrimidinyl, triphenylyl, naphthyl, anthracenyl, chrysene, pyrene, fluoranthene, phenanthrene, carbazole, dibenzothiophene, dibenzofuran, dibenzoselenophene, aza-carbazole, aza-dibenzothiophene, aza-dibenzofuran, aza-dibenzoselenophene, or combinations thereof. in some embodiments, such compounds are host materials.
in some embodiments, the boron-nitrogen heterocycle is a compound, which is: in some embodiments, the boron-nitrogen heterocycle is a compound having formula (viii): wherein g 5 is arylene or heteroarylene; and r 81 and r 82 are independently aryl or heteroaryl, each of which is substituted with at least one acyclic or cyclic aliphatic substituent. in some such embodiments, g 5 is phenylene, pyridine, biphenylene, triazine, pyrimidine, triphenylene, naphthylene, anthracene, chrysene, pyrene, fluoranthene, phenanthrene, carbazole, dibenzothiophene, dibenzofuran, dibenzoselenophene, aza-carbazole, aza-dibenzothiophene, aza-dibenzofuran, aza-dibenzoselenophene, or combinations thereof. in some embodiments, such compounds are fluorescent emitters. in some embodiments, the boron-nitrogen heterocycle is a compound, which is: furthermore, devices are described that include boron-nitrogen heterocycles, such as those described in the foregoing paragraphs. in some embodiments, the invention provides a first device comprising a first organic light emitting device, which further comprises: an anode; a cathode; and an organic layer disposed between the anode and the cathode, which comprises a boron-nitrogen heterocycle according to any of the above embodiments. in some embodiments, the first device is a consumer product. in some embodiments, the first device is an organic light emitting device (oled). in some embodiments, the first device comprises a lighting panel. the boron-nitrogen heterocycles can serve various roles within a device. in some embodiments, the first device comprises an organic layer that is an emissive layer, where the emissive layer comprises an emissive dopant, which is a boron-nitrogen heterocycle of any of the above embodiments. in some such embodiments, the first device is a delayed fluorescence device. in some other embodiments, the first device comprises an organic layer that is an emissive layer, and also comprises a host. 
in some such embodiments, the host is a boron-nitrogen heterocycle of any of the above embodiments. in some such embodiments, the organic layer comprises an emissive dopant. in some embodiments, the emissive dopant is a fluorescent dopant. in some embodiments, the emissive dopant is a transition metal complex having at least one ligand or part of a ligand if the ligand is more than bidentate, where the more than bidentate ligand is selected from the group consisting of: and wherein r a , r b , r c , and r d may represent mono, di, tri, or tetra substitution, or no substitution; and wherein r a , r b , r c , and r d are independently selected from the group consisting of hydrogen, deuterium, halide, alkyl, cycloalkyl, heteroalkyl, arylalkyl, alkoxy, aryloxy, amino, silyl, alkenyl, cycloalkenyl, heteroalkenyl, alkynyl, aryl, heteroaryl, acyl, carbonyl, carboxylic acids, ester, nitrile, isonitrile, sulfanyl, sulfinyl, sulfonyl, phosphino, and combinations thereof; and wherein two adjacent substituents of r a , r b , r c , and r d are optionally joined to form a fused ring or form a multidentate ligand. in some embodiments, the organic layer in the first device is a hole injecting layer or a hole transporting layer. in some other embodiments, the organic layer is an electron injecting layer or an electron transporting layer. in some embodiments, the organic layer is an exciton blocking layer or a charge blocking layer.
combination with other materials
hil/htl: a hole injecting/transporting material to be used in the present invention is not particularly limited, and any compound may be used as long as the compound is typically used as a hole injecting/transporting material.
examples of such a material include, but are not limited to: a phthalocyanine or porphyrin derivative; an aromatic amine derivative; an indolocarbazole derivative; a polymer containing fluorohydrocarbon; a polymer with conductivity dopants; a conducting polymer, such as pedot/pss; a self-assembly monomer derived from compounds such as phosphonic acid and silane derivatives; a metal oxide derivative, such as moo x ; a p-type semiconducting organic compound, such as 1,4,5,8,9,12-hexaazatriphenylenehexacarbonitrile; a metal complex; and a cross-linkable compound. examples of aromatic amine derivatives used in hil or htl include, but are not limited to, the following general structures: each of ar 1 to ar 9 is selected from the group consisting of aromatic hydrocarbon cyclic compounds such as benzene, biphenyl, triphenyl, triphenylene, naphthalene, anthracene, phenalene, phenanthrene, fluorene, pyrene, chrysene, perylene, azulene; the group consisting of aromatic heterocyclic compounds such as dibenzothiophene, dibenzofuran, dibenzoselenophene, furan, thiophene, benzofuran, benzothiophene, benzoselenophene, carbazole, indolocarbazole, pyridylindole, pyrrolodipyridine, pyrazole, imidazole, triazole, oxazole, thiazole, oxadiazole, oxatriazole, dioxazole, thiadiazole, pyridine, pyridazine, pyrimidine, pyrazine, triazine, oxazine, oxathiazine, oxadiazine, indole, benzimidazole, indazole, indoxazine, benzoxazole, benzisoxazole, benzothiazole, quinoline, isoquinoline, cinnoline, quinazoline, quinoxaline, naphthyridine, phthalazine, pteridine, xanthene, acridine, phenazine, phenothiazine, phenoxazine, benzofuropyridine, furodipyridine, benzothienopyridine, thienodipyridine, benzoselenophenopyridine, and selenophenodipyridine; and the group consisting of 2 to 10 cyclic structural units which are groups of the same type or different types selected from the aromatic hydrocarbon cyclic group and the aromatic heterocyclic group and are bonded to each other directly or via at least one of oxygen atom, nitrogen
atom, sulfur atom, silicon atom, phosphorus atom, boron atom, chain structural unit and the aliphatic cyclic group. wherein each ar is further substituted by a substituent selected from the group consisting of hydrogen, deuterium, halide, alkyl, cycloalkyl, heteroalkyl, arylalkyl, alkoxy, aryloxy, amino, silyl, alkenyl, cycloalkenyl, heteroalkenyl, alkynyl, aryl, heteroaryl, acyl, carbonyl, carboxylic acids, ester, nitrile, isonitrile, sulfanyl, sulfinyl, sulfonyl, phosphino, and combinations thereof. ar 1 to ar 9 may be independently selected from the group consisting of: k is an integer from 1 to 20; x 101 to x 108 are c (including ch) or n; z 101 is nar 1 , o, or s; ar 1 has the same definition as above. examples of metal complexes used in hil or htl include, but are not limited to, the following general formula: met is a metal; (y 101 -y 102 ) is a bidentate ligand, y 101 and y 102 are independently selected from c, n, o, p, and s; l 101 is another ligand; k' is an integer value from 1 to the maximum number of ligands that may be attached to the metal; and k'+k" is the maximum number of ligands that may be attached to the metal. (y 101 -y 102 ) may be a 2-phenylpyridine derivative. (y 101 -y 102 ) may be a carbene ligand. met may be selected from ir, pt, os, and zn. the metal complex may have a smallest oxidation potential in solution vs. fc + /fc couple less than about 0.6 v.
host: the light emitting layer of the organic el device of the present invention preferably contains at least a metal complex as light emitting material, and may contain a host material using the metal complex as a dopant material. examples of the host material are not particularly limited, and any metal complexes or organic compounds may be used as long as the triplet energy of the host is larger than that of the dopant.
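The host-selection rule just stated (the host's triplet energy must be larger than the dopant's) can be sketched as a simple filter. The compound names and triplet energies below are placeholders for illustration, not materials or values from this disclosure:

```python
# Hypothetical triplet energies (T1, in eV) for candidate host materials.
CANDIDATE_HOSTS = {"host_A": 2.9, "host_B": 2.4, "host_C": 3.1}

def suitable_hosts(hosts, dopant_t1_ev):
    """Keep only hosts whose triplet energy exceeds the dopant's,
    per the criterion stated above; returned sorted by name."""
    return sorted(name for name, t1 in hosts.items() if t1 > dopant_t1_ev)

# For a dopant with T1 = 2.7 eV, only host_A and host_C qualify.
print(suitable_hosts(CANDIDATE_HOSTS, 2.7))  # ['host_A', 'host_C']
```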
while the table below categorizes host materials as preferred for devices that emit various colors, any host material may be used with any dopant so long as the triplet criterion is satisfied. metal complexes used as hosts preferably have the following general formula: met is a metal; (y 103 -y 104 ) is a bidentate ligand, y 103 and y 104 are independently selected from c, n, o, p, and s; l 101 is another ligand; k' is an integer value from 1 to the maximum number of ligands that may be attached to the metal; and k'+k" is the maximum number of ligands that may be attached to the metal. the metal complexes may be: (o-n) is a bidentate ligand, having metal coordinated to atoms o and n. met may be selected from ir and pt. (y 103 -y 104 ) may be a carbene ligand. examples of organic compounds used as hosts are selected from the group consisting of aromatic hydrocarbon cyclic compounds such as benzene, biphenyl, triphenyl, triphenylene, naphthalene, anthracene, phenalene, phenanthrene, fluorene, pyrene, chrysene, perylene, azulene; the group consisting of aromatic heterocyclic compounds such as dibenzothiophene, dibenzofuran, dibenzoselenophene, furan, thiophene, benzofuran, benzothiophene, benzoselenophene, carbazole, indolocarbazole, pyridylindole, pyrrolodipyridine, pyrazole, imidazole, triazole, oxazole, thiazole, oxadiazole, oxatriazole, dioxazole, thiadiazole, pyridine, pyridazine, pyrimidine, pyrazine, triazine, oxazine, oxathiazine, oxadiazine, indole, benzimidazole, indazole, indoxazine, benzoxazole, benzisoxazole, benzothiazole, quinoline, isoquinoline, cinnoline, quinazoline, quinoxaline, naphthyridine, phthalazine, pteridine, xanthene, acridine, phenazine, phenothiazine, phenoxazine, benzofuropyridine, furodipyridine, benzothienopyridine, thienodipyridine, benzoselenophenopyridine, and selenophenodipyridine; and the group consisting of 2 to 10 cyclic structural units which are groups of the same type or different types selected from the aromatic hydrocarbon
cyclic group and the aromatic heterocyclic group and are bonded to each other directly or via at least one of oxygen atom, nitrogen atom, sulfur atom, silicon atom, phosphorus atom, boron atom, chain structural unit and the aliphatic cyclic group. wherein each group is further substituted by a substituent selected from the group consisting of hydrogen, deuterium, halide, alkyl, cycloalkyl, heteroalkyl, arylalkyl, alkoxy, aryloxy, amino, silyl, alkenyl, cycloalkenyl, heteroalkenyl, alkynyl, aryl, heteroaryl, acyl, carbonyl, carboxylic acids, ester, nitrile, isonitrile, sulfanyl, sulfinyl, sulfonyl, phosphino, and combinations thereof. the host compound may contain at least one of the following groups in the molecule: r 101 to r 107 are independently selected from the group consisting of hydrogen, deuterium, halide, alkyl, cycloalkyl, heteroalkyl, arylalkyl, alkoxy, aryloxy, amino, silyl, alkenyl, cycloalkenyl, heteroalkenyl, alkynyl, aryl, heteroaryl, acyl, carbonyl, carboxylic acids, ester, nitrile, isonitrile, sulfanyl, sulfinyl, sulfonyl, phosphino, and combinations thereof; when it is aryl or heteroaryl, it has the same definition as the ar groups mentioned above. k is an integer from 1 to 20; k'" is an integer from 0 to 20. x 101 to x 108 are selected from c (including ch) or n. z 101 and z 102 are selected from nr 101 , o, or s.
hbl: a hole blocking layer (hbl) may be used to reduce the number of holes and/or excitons that leave the emissive layer. the presence of such a blocking layer in a device may result in substantially higher efficiencies as compared to a similar device lacking a blocking layer. also, a blocking layer may be used to confine emission to a desired region of an oled. the compound used in hbl may contain the same molecule or the same functional groups used as host described above.
the compound used in hbl may contain at least one of the following groups in the molecule: k is an integer from 1 to 20; l 101 is another ligand, k' is an integer from 1 to 3. etl: electron transport layer (etl) may include a material capable of transporting electrons. electron transport layer may be intrinsic (undoped), or doped. doping may be used to enhance conductivity. examples of the etl material are not particularly limited, and any metal complexes or organic compounds may be used as long as they are typically used to transport electrons. the compound used in etl may contain at least one of the following groups in the molecule: r 101 is selected from the group consisting of hydrogen, deuterium, halide, alkyl, cycloalkyl, heteroalkyl, arylalkyl, alkoxy, aryloxy, amino, silyl, alkenyl, cycloalkenyl, heteroalkenyl, alkynyl, aryl, heteroaryl, acyl, carbonyl, carboxylic acids, ester, nitrile, isonitrile, sulfanyl, sulfinyl, sulfonyl, phosphino, and combinations thereof, when it is aryl or heteroaryl, it has the similar definition as ar's mentioned above. ar 1 to ar 3 has the similar definition as ar's mentioned above. k is an integer from 1 to 20. x 101 to x 108 is selected from c (including ch) or n. the metal complexes used in etl may contain, but are not limited to, the following general formula: (o-n) or (n-n) is a bidentate ligand, having metal coordinated to atoms o, n or n, n; l 101 is another ligand; k' is an integer value from 1 to the maximum number of ligands that may be attached to the metal. in any above-mentioned compounds used in each layer of the oled device, the hydrogen atoms can be partially or fully deuterated. thus, any specifically listed substituent, such as, without limitation, methyl, phenyl, pyridyl, etc. encompasses undeuterated, partially deuterated, and fully deuterated versions thereof. similarly, classes of substituents such as, without limitation, alkyl, aryl, cycloalkyl, heteroaryl, etc.
also encompass undeuterated, partially deuterated, and fully deuterated versions thereof. in addition to and/or in combination with the materials disclosed herein, many hole injection materials, hole transporting materials, host materials, dopant materials, exciton/hole blocking layer materials, electron transporting and electron injecting materials may be used in an oled. non-limiting examples of the materials that may be used in an oled in combination with materials disclosed herein are listed in table 1 below. table 1 lists non-limiting classes of materials, non-limiting examples of compounds for each class, and references that disclose the materials. table 1 material examples of material publications hole injection materials phthalocyanine and porphyrin compounds appl. phys. lett. 69, 2160 (1996 ) starburst triarylamines j. lumin. 72-74, 985 (1997 ) cf x fluorohydrocarbon polymer appl. phys. lett. 78, 673 (2001 ) conducting polymers (e.g., pedot:pss, polyaniline, polythiophene) synth. met. 87, 171 (1997 ) wo2007002683 phosphonic acid and silane sams us20030162053 triarylamine or polythiophene polymers with conductivity dopants ep1725079a1 and organic compounds with conductive inorganic compounds, such as molybdenum and tungsten oxides us20050123751 sid symposium digest, 37, 923 (2006 ) wo2009018009 n-type semiconducting organic complexes us20020158242 metal organometallic complexes us20060240279 cross-linkable compounds us20080220265 polythiophene based polymers and copolymers wo 2011075644 ep2350216 hole transporting materials triarylamines (e.g., tpd, α-npd) appl. phys. lett. 51, 913 (1987 ) us5061569 ep650955 j. mater. chem. 3, 319 (1993 ) appl. phys. lett. 90, 183503 (2007 ) appl. phys. lett. 90, 183503 (2007 ) triarylamine on spirofluorene core synth. met. 91, 209 (1997 ) arylamine carbazole compounds adv. mater.
6, 677 (1994 ), us20080124572 triarylamine with (di)benzothiophene/(di)ben zofuran us20070278938 , us20080106190 us20110163302 indolocarbazoles synth. met. 111, 421 (2000 ) isoindole compounds chem. mater. 15, 3148 (2003 ) metal carbene complexes us20080018221 phosphorescent oled host materials red hosts arylcarbazoles appl. phys. lett. 78, 1622 (2001 ) metal 8-hydroxyquinolates (e.g., alq 3 , balq) nature 395, 151 (1998 ) us20060202194 wo2005014551 wo2006072002 metal phenoxybenzothiazole compounds appl. phys. lett. 90, 123509 (2007 ) conjugated oligomers and polymers (e.g., polyfluorene) org. electron. 1, 15 (2000 ) aromatic fused rings wo2009066779 , wo2009066778 , wo2009063833 , us20090045731 , us20090045730 , wo2009008311 , us20090008605 , us20090009065 zinc complexes wo2010056066 chrysene based compounds wo2011086863 green hosts arylcarbazoles appl. phys. lett. 78, 1622 (2001 ) us20030175553 wo2001039234 aryltriphenylene compounds us20060280965 us20060280965 wo2009021126 poly-fused heteroaryl compounds us20090309488 us20090302743 us20100012931 donor acceptor type molecules wo2008056746 wo2010107244 aza-carbazole/dbt/dbf jp2008074939 us20100187984 polymers (e.g., pvk) appl. phys. lett. 77, 2280 (2000 ) spirofluorene compounds wo2004093207 metal phenoxybenzooxazole compounds wo2005089025 wo2006132173 jp200511610 spirofluorene-carbazole compounds jp2007254297 jp2007254297 indolocarbazoles wo2007063796 wo2007063754 5-member ring electron deficient heterocycles (e.g., triazole, oxadiazole) j. appl. phys. 90, 5048 (2001 ) wo2004107822 tetraphenylene complexes us20050112407 metal phenoxypyridine compounds wo2005030900 metal coordination complexes (e.g., zn, al with n^n ligands) us20040137268 , us20040137267 blue hosts arylcarbazoles appl. phys. 
lett, 82, 2422 (2003 ) us20070190359 dibenzothiophene/dibenz ofuran-carbazole compounds wo2006114966 , us20090167162 us20090167162 wo2009086028 us20090030202 , us20090017330 us20100084966 silicon aryl compounds us20050238919 wo2009003898 silicon/germanium aryl compounds ep2034538a aryl benzoyl ester wo2006100298 carbazole linked by nonconjugated groups us20040115476 aza-carbazoles us20060121308 high triplet metal organometallic complex us7154114 phosphorescent dopants red dopants heavy metal porphyrins (e.g., ptoep) nature 395, 151 (1998 ) iridium(iii) organometallic complexes appl. phys. lett. 78, 1622 (2001 ) us2006835469 us2006835469 us20060202194 us20060202194 us20070087321 us20080261076 us20100090591 us20070087321 adv. mater. 19, 739 (2007 ) wo2009100991 wo2008101842 us7232618 platinum(ii) organometallic complexes wo2003040257 us20070103060 osmium(iii) complexes chem. mater. 17, 3532 (2005 ) ruthenium(ii) complexes adv. mater. 17, 1059 (2005 ) rhenium (i), (ii), and (iii) complexes us20050244673 green dopants iridium(iii) organometallic complexes inorg. chem. 40, 1704 (2001 ) and its derivatives us20020034656 us7332232 us20090108737 wo2010028151 ep1841834b us20060127696 us20090039776 us6921915 us20100244004 us6687266 chem. mater. 16, 2480 (2004 ) us20070190359 us 20060008670 jp2007123392 wo2010086089 , wo2011044988 adv. mater. 16, 2003 (2004 ) angew. chem. int. ed. 2006, 45, 7800 wo2009050290 us20090165846 us20080015355 us20010015432 us20100295032 monomer for polymeric metal organometallic compounds us7250226 , us7396598 pt(ii) organometallic complexes, including polydentated ligands appl. phys. lett. 86, 153505 (2005 ) appl. phys. lett. 86, 153505 (2005 ) chem. lett. 34, 592 (2005 ) wo2002015645 us20060263635 us20060182992 us20070103060 cu complexes wo2009000673 us20070111026 gold complexes chem. commun. 2906 (2005 ) rhenium(iii) complexes inorg. chem. 
42, 1248 (2003 ) osmium(ii) complexes us7279704 deuterated organometallic complexes us20030138657 organometallic complexes with two or more metal centers us20030152802 us7090928 blue dopants iridium(iii) organometallic complexes wo2002002714 wo2006009024 us20060251923 us20110057559 us20110204333 us7393599 , wo2006056418 , us20050260441 , wo2005019373 us7534505 wo2011051404 us7445855 us20070190359 , us20080297033 us20100148663 us7338722 us20020134984 angew. chem. int. ed. 47, 1 (2008 ) chem. mater. 18, 5119 (2006 ) inorg. chem. 46, 4308 (2007 ) wo2005123873 wo2005123873 wo2007004380 wo2006082742 osmium(ii) complexes us7279704 organometallics 23, 3745 (2004 ) gold complexes appl. phys. lett.74,1361 (1999 ) platinum(ii) complexes wo2006098120 , wo2006103874 pt tetradentate complexes with at least one metal-carbene bond us7655323 exciton/hole blocking layer materials bathocuprine compounds (e.g., bcp, bphen) appl. phys. lett. 75, 4 (1999 ) appl. phys. lett. 79, 449 (2001 ) metal 8-hydroxyquinolates (e.g., balq) appl. phys. lett. 81, 162 (2002 ) 5-member ring electron deficient heterocycles such as triazole, oxadiazole, imidazole, benzoimidazole appl. phys. lett. 81, 162 (2002 ) triphenylene compounds us20050025993 fluorinated aromatic compounds appl. phys. lett. 79, 156 (2001 ) phenothiazine-s-oxide wo2008132085 silylated five-membered nitrogen, oxygen, sulfur or phosphorus dibenzoheterocycles wo2010079051 aza-carbazoles us20060121308 electron transporting materials anthracene-benzoimidazole compounds wo2003060956 us20090179554 aza triphenylene derivatives us20090115316 anthracene-benzothiazole compounds appl. phys. lett. 89, 063504 (2006 ) metal 8-hydroxyquinolates (e.g., alq 3 , zrq 4 ) appl. phys. lett. 51, 913 (1987 ) us7230107 metal hydroxybenzoquinolates chem. lett. 5, 905 (1993 ) bathocuprine compounds such as bcp, bphen, etc. appl. phys. lett. 91, 263503 (2007 ) appl. phys. lett. 
79, 449 (2001 ) 5-member ring electron deficient heterocycles (e.g., triazole, oxadiazole, imidazole, benzoimidazole) appl. phys. lett. 74, 865 (1999 ) appl. phys. lett. 55, 1489 (1989 ) jpn. j. appl. phys. 32, l917 (1993 ) silole compounds org. electron. 4, 113 (2003 ) arylborane compounds j. am. chem. soc. 120, 9714 (1998 ) fluorinated aromatic compounds j. am. chem. soc. 122, 1832 (2000 ) fullerene (e.g., c60) us20090101870 triazine complexes us20040036077 zn (n^n) complexes us6528187 experimental example 1 - compound synthesis the boron-nitrogen heterocycles can be made by any suitable method. in some embodiments, such compounds can be made in a manner consistent with scheme 1, shown on the following page. such methods may be modified as applied to the synthesis of any particular compound, based on the knowledge of persons of skill in the art. example 2 - computational examples dft calculations were carried out with the gaussian software package, using the b3lyp functional and the cep-31g basis set, for the compounds shown in table 2 below. table 2 shows the calculated values for the homo and the lumo, and shows the respective homo-lumo gap, as well as the wavelengths of light corresponding to the singlet s 1 and triplet t 1 transitions and the calculated dipole of the compounds. the boron-nitrogen heterocycle showed stabilization of the triplet (t 1 ) state without any significant change in the singlet (s 1 ) state relative to carbazole. table 2 compound structure homo (ev) lumo (ev) gap (ev) dipole (debye) t 1 (nm) s 1 (nm) -5.51 -0.80 4.70 0.44 430 298 -5.44 -0.64 4.80 1.65 389 299
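the gap column of table 2 follows directly from the homo and lumo values, and the wavelength columns are energies expressed in nm. as an illustration outside the patent text, a short python sketch of that arithmetic (the ev-to-nm conversion constant is standard physics; the function names are ours, not the patent's):

```python
# sketch of the arithmetic behind table 2. values below are copied from the
# table; the conversion constant is the standard hc product in ev*nm.

EV_NM = 1239.84  # wavelength(nm) = 1239.84 / energy(ev)

def homo_lumo_gap(homo_ev, lumo_ev):
    """energy gap in ev between the homo and lumo levels."""
    return lumo_ev - homo_ev

def transition_wavelength_nm(energy_ev):
    """wavelength of light corresponding to a transition energy in ev."""
    return EV_NM / energy_ev

# first row of table 2: homo -5.51 ev, lumo -0.80 ev -> gap of about 4.70 ev
# (the table rounds from the unrounded dft values)
gap = homo_lumo_gap(-5.51, -0.80)
```

note that the t 1 and s 1 wavelengths in the table come from the calculated transition energies, which need not equal the homo-lumo gap.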
182-882-859-673-409
US
[ "CN", "MX", "CA", "EP", "US", "SG", "IL", "PH", "WO" ]
A61B5/00,A61B5/332,A61B5/308
2014-11-14T00:00:00
2014
[ "A61" ]
systems and methods for performing electrocardiograms
a system for performing an electrocardiogram (ecg) can include a handheld electrocardiograph device having a right arm electrode, a left arm electrode, and a left leg electrode, and can be configured to receive signals from the electrodes and to send data based on the electrode signals to a mobile electronic device. the mobile electronic device can be configured to process and analyze the received information to provide ecg data, such as 6-lead ecg data. the mobile electronic device can analyze the ecg data to provide diagnostic information. the mobile electronic device can transfer the ecg data to a remote computing system, which can analyze the ecg data to provide diagnostic information.
1 . a system for performing an electrocardiogram comprising: an electrocardiograph device, comprising: a housing comprising three electrodes, wherein the three electrodes are not coupled to the device with wires exterior to the housing; a communication interface; one or more computer readable storage media; program instructions stored on the one or more computer readable storage media that, when executed by a controller, direct the controller to: receive signals from the three electrodes; analyze the signals to determine signal-related data; and transmit the signal-related data, via the communication interface, to a coupled interpretive device. 2 . the system of claim 1 , wherein the electrocardiograph device is a portable handheld device. 3 . the system of claim 1 , wherein the three electrodes comprise a right arm electrode, a left arm electrode, and a left leg electrode. 4 . the system of claim 1 , wherein the electrocardiograph device further comprises: one or more amplifiers configured to amplify analog signals received from the three electrodes; and program instructions stored on the one or more computer readable storage media that, when executed by the controller, direct the controller to receive and analyze amplified analog signals from the one or more amplifiers. 5 . the system of claim 1 , wherein the electrocardiograph device further comprises: an analog signal processor configured to perform analog signal processing on analog signals received from the three electrodes; and program instructions stored on the one or more computer readable storage media that, when executed by the controller, direct the controller to receive and analyze processed analog signals from the analog signal processor. 6 . 
the system of claim 1 , wherein the electrocardiograph device further comprises: an analog-to-digital converter configured to convert analog signals from the three electrodes to digital signals; and program instructions stored on the one or more computer readable storage media that, when executed by the controller, direct the controller to receive and analyze the digital signals from the analog-to-digital converter. 7 . the system of claim 1 , wherein the electrocardiograph device further comprises program instructions stored on the one or more computer readable storage media that, when executed by the controller, direct the controller to perform digital signal processing on the signals received from the three electrodes. 8 . the system of claim 1 , further comprising: a mobile electronic device comprising: a second communication interface; second program instructions stored on second computer readable storage media that, when executed by second controller, direct the second controller to: receive the signal-related data transmitted by the communication interface of the electrocardiograph device; and provide 6-lead electrocardiogram data based at least in part on the data received by the second communication interface. 9 . the system of claim 8 , wherein the 6-lead electrocardiogram data includes lead i, lead ii, lead iii, avr, avl, and avf. 10 . the system of claim 8 , wherein the mobile electronic device further comprises program instructions stored on the second computer readable storage media that, when executed by the second controller, direct the second controller to: display user interface elements configured to receive input from a user to initiate an electrocardiogram procedure; and in response to receiving the input from the user, sending an instruction to the electrocardiograph device to initiate the electrocardiogram procedure. 11 . 
the system of claim 10 , wherein the system delays sending the instruction to initiate the electrocardiogram procedure by a delay time after receiving the input from the user. 12 . the system of claim 8 , wherein the mobile electronic device further comprises program instructions stored on the second computer readable storage media that, when executed by the second controller, direct the second controller to: display user interface elements configured to output information based on the 6-lead electrocardiogram data. 13 . the system of claim 8 , wherein the mobile electronic device further comprises program instructions stored on the second computer readable storage media that, when executed by the second controller, direct the second controller to: display user interface elements configured to receive input from a user to assign each of the three electrodes to a particular limb; and in response to receiving the input from the user, sending an instruction to the electrocardiograph device to associate readings from each of the three electrodes to the particular limb. 14 . the system of claim 8 , further comprising a remote computing system comprising: third program instructions stored on third computer readable storage media that, when executed by third controller, direct the third controller to: receive the 6-lead electrocardiogram data provided from the mobile electronic device; and analyze the 6-lead electrocardiogram data; and provide diagnostic information. 15 . the system of claim 14 , wherein the mobile electronic device further comprises program instructions stored on the second computer readable storage media that, when executed by the second controller, direct the second controller to: receive the diagnostic information provided from the remote computing system; and display user interface elements configured to output the diagnostic information provided from the remote computing system. 16 . 
the system of claim 8 , wherein the electrocardiograph device is removably attached to the mobile electronic device. 17 . the system of claim 1 , wherein the signal-related data includes 6-lead electrocardiogram data comprising lead i, lead ii, lead iii, avr, avl, and avf. 18 . the system of claim 1 , wherein the three electrodes are fixed immovably to the outside of the housing of the electrocardiograph device. 19 . the system of claim 1 , wherein the three electrodes comprise dry electrodes. 20 . the system of claim 19 , wherein the electrocardiograph device further comprises program instructions stored on the one or more computer readable storage media that, when executed by the controller, direct the controller to compensate for higher impedance of the dry electrodes.
cross-reference to a related application this application claims the priority benefit of u.s. provisional application ser. no. 62/080,203, filed nov. 14, 2014, which is incorporated herein by reference in its entirety. background of the invention the electrocardiogram (ecg or ekg) is recognized as one of the most successful and important tools for rapid, noninvasive assessments of cardiac conditions. the resting 12-lead ecg (standard 12-lead ecg) recordings have been used to determine cardiac conditions in the presence of conflicting or ambiguous clinical symptoms. a 12-lead ecg can be obtained by attaching 10 electrodes to a patient: 4 limb lead electrodes are attached to limbs (left and right wrist, left and right ankle) and 6 precordial lead electrodes are attached to the torso. this configuration allows for recording leads i, ii, and v i (where i=1 to 6), and calculating leads iii, avr, avl and avf. electrocardiographs can be used to display/print ecg waveforms and for generating clinical statements based on diagnostic criteria derived from ecg measurements. interpretation of an ecg is performed by electrocardiogram waveform analysis and can sometimes be performed by a serial comparison of a current ecg to a previously recorded ecg. however, the resting 12-lead ecg obtained in the hospital or doctor's office can have limitations imposed by the recording environment. everyday life, exercise, stress and a number of physiological conditions can elicit cardiac problems that can be masked or are not present during recordings on the human body at rest. therefore, a stress test and ambulatory recordings can be used as additional sources of information on cardiac status. during a stress test, limb electrodes can be moved to the torso to reduce noise and artifacts caused by movement of long wires, muscle activity, and an unstable electrode-skin interface.
moreover, the acquisition of cardiac signals from a patient while in a non-hospital setting can be hampered by a variety of circumstances. to obtain high-quality ecg recordings, the electrode-skin interface needs to be stable, otherwise noise and artifacts can distort the recording of signals. furthermore, in some situations, it is impractical to attach electrodes and wires to the body of a patient in motion. in ambulatory settings, it can be impractical to record with a large number of wires, so a small recorder can be used to record only a few ecg channels. brief summary of the invention systems and techniques are disclosed for obtaining electrocardiogram recordings with a portable handheld device that enables obtaining 6-lead electrocardiogram data. obtaining 6-lead electrocardiogram data requires a device capable of recording leads i and ii simultaneously in standard ecg mode. such recordings require connection of the device with a patient's left arm, right arm, and left leg, therefore various embodiments disclosed herein relate to electrocardiograph devices that can have three electrodes (e.g., three dry electrodes). to obtain recordings, left and right hand device electrodes can be held by left and right hands and the third electrode can be pressed against the left leg. the device's third electrode can be pressed against the skin, for example, just above the knee or above the ankle. in some embodiments, 3-electrode electrocardiograph devices can be coupled with mobile electronic devices that can provide 6-lead electrocardiogram data. in some cases, a mobile electronic device can display user interface elements configured to output information based on the 6-lead electrocardiogram data. in certain embodiments, a remote computing system may receive the 6-lead electrocardiogram data and provide diagnostic information. this summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. 
this summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter. brief description of the drawings fig. 1 shows an example embodiment of a system for performing electrocardiogram (ecg or ekg) recordings. fig. 2 shows an example embodiment of a handheld electrocardiograph device. fig. 3 shows an example embodiment of a handheld electrocardiograph device that is removably attached to a mobile electronic device. fig. 4 shows a flow chart of an example method for performing electrocardiogram recordings. fig. 5 shows a flow chart of an example method of operating an electrocardiogram system. fig. 6 shows an example embodiment of a user interface for an electrocardiogram system. fig. 7 shows a block diagram illustrating components of a computing device or system used in some implementations of systems and techniques for performing an electrocardiogram. detailed description of the invention various embodiments disclosed herein relate to a portable handheld electrocardiograph device with three dry electrodes allowing for recordings of left arm (la), right arm (ra) and left leg (ll) signals, which can be used to obtain 6 ecg leads (i, ii, iii, avr, avf, and avl), as discussed herein. various embodiments relate to medical instrumentation and information systems. a handheld electrocardiographic device, as disclosed herein, can provide the ability to record limb leads and auxiliary limb leads (e.g., 6 leads total) from subjects in ambulatory settings, which can be comparable to the 12 leads recorded using a standard 12-lead electrocardiograph in hospital settings. use of a presently available single-lead handheld ecg device in ambulatory settings has limited diagnostic value compared to 12-lead ecg recorders. it can provide basic heart monitoring and it can be useful for characterizing various arrhythmias.
when a recording is made between the left and right hand, it represents lead i (i=la-ra) and it is equivalent to only lead i of a standard 12-lead ecg. a device capable of recording leads i and ii simultaneously in standard ecg mode would increase diagnostic yield compared to using a single-lead device. as recordings of lead ii (ii=ll−ra) require connection of the device to the left leg, various embodiments disclosed herein relate to ecg devices that can have three electrodes (e.g., three dry electrodes). to obtain recordings, the device's left and right hand electrodes can be held by the subject's left and right hands and the third electrode can be pressed against the left leg. the device's third electrode can be pressed against the skin, for example, just above the knee or above the ankle. ecg signals i and ii obtained from the electrodes can be amplified and digitized (e.g., by a microcontroller with an internal analog-to-digital converter). data can then be transferred (e.g., via serial interface and bluetooth module) to a mobile electronic device (e.g., a cellular phone) for initial display and storage. the mobile electronic device (e.g., a cellular phone) can perform initial processing and transmit data to a remote computing system (e.g., an ecg server or ecg cloud service) for interpretation, serial comparison, and analysis. various embodiments disclosed herein can relate to a handheld electrocardiographic device for simultaneous acquisition of six leads (limb leads and auxiliary limb leads). the device can include three dry electrodes for obtaining ecg signals i and ii from a subject. signals i and ii can be obtained in the same manner as on a traditional 12-lead electrocardiograph. leads iii and auxiliary leads avr, avl and avf can be calculated (e.g., based on leads i and ii).
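as an illustrative sketch (not code from the patent), the calculation of leads iii, avr, avl and avf from simultaneously recorded leads i and ii follows the standard einthoven/goldberger relations; the function name is ours:

```python
# illustrative only: deriving the six limb leads of a 6-lead ecg from the two
# directly recorded leads, using the standard einthoven/goldberger relations.

def derive_six_leads(lead_i, lead_ii):
    """given simultaneous samples of lead i (la - ra) and lead ii (ll - ra),
    return all six limb leads as per-sample lists."""
    pairs = list(zip(lead_i, lead_ii))
    lead_iii = [ii - i for i, ii in pairs]    # iii = ii - i (einthoven)
    avr = [-(i + ii) / 2 for i, ii in pairs]  # avr = -(i + ii) / 2
    avl = [i - ii / 2 for i, ii in pairs]     # avl = i - ii / 2
    avf = [ii - i / 2 for i, ii in pairs]     # avf = ii - i / 2
    return {"i": list(lead_i), "ii": list(lead_ii), "iii": lead_iii,
            "avr": avr, "avl": avl, "avf": avf}
```

for any sample, i + iii = ii and avr + avl + avf = 0, which is a useful self-check on the derivation.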
to emphasize ambulatory use, the conventional wet electrodes (usually silver-silver chloride ag/agcl) and skin preparation that hospitals use are replaced with dry electrodes requiring no skin preparation. lead i is defined as la−ra, and can be obtained by holding the device's left and right electrodes with both the left and right hands while the device is faced down. lead ii is defined as ll−ra. lead ii can be obtained by holding the device with both the left and right hands while simultaneously pressing the third electrode against the skin just above the subject's knee or ankle. the electrodes can be connected to amplifiers. the input of the amplifiers can be designed to accept signals from the dry electrodes. the output of the amplifiers can be connected to an analog-to-digital converter (adc). digital data from the adc can be connected to a microcontroller. data from the microcontroller can be sent to a communication interface (e.g., a bluetooth interface) for transmission to a mobile electronic device (e.g., a cellular phone). the mobile electronic device can be used for initial data evaluation and/or to transfer data to a remote computing system (e.g., an ecg server). the data on the remote computing system (e.g., ecg server) can be evaluated (e.g., by automatic algorithms) and the diagnosis/results can be sent to the end user or to a doctor or other medical professional. fig. 1 shows an example of a system 100 for performing electrocardiogram (ecg or ekg) recordings according to some embodiments. the system 100 can be configured to perform a 6-lead ecg. the system 100 can include an electrocardiograph (ecg) device 102 and a mobile electronic device 104 , and in some embodiments the system 100 can include a remote computing system 106 . the ecg device 102 can include three electrodes, such as a right arm electrode 108 , a left arm electrode 110 , and a left leg electrode 112 . 
in some embodiments, the system 100 can use fewer electrodes than a traditional 12-lead ecg, which would use ten electrodes, which can facilitate performance of the ecg procedure, especially for ecg procedures performed by a patient himself or herself. the system 100 can be configured to perform an ecg procedure (e.g., a 6-lead ecg) without using a right leg electrode, a v 1 electrode, a v 2 electrode, a v 3 electrode, a v 4 electrode, a v 5 electrode, or a v 6 electrode, which would ordinarily be used for a traditional 12-lead ecg. the system 100 can be configured to perform the ecg procedure using only the three electrodes 108 , 110 , and 112 . in some embodiments, the electrodes 108 , 110 , and 112 can be dry electrodes, which can be configured to be used by a patient without applying a gel between the electrodes and the skin and/or with little or no skin preparation (e.g., shaving, cleaning, sanding, etc.). in some embodiments, the use of dry electrodes can result in higher impedance, and the system 100 (e.g., with amplifier 114 ) can be configured to compensate for the higher impedance that can result from the use of dry electrodes instead of wet electrodes, which would generally be used for a traditional 12-lead ecg. in some embodiments, the electrodes 108 , 110 , and 112 can be made of stainless steel (e.g., low-carbon stainless steel such as 316l grade stainless steel). various other conductive materials can be used for the electrodes 108 , 110 , and 112 , such as gold, silver, copper, aluminum, metal alloys, and various other suitably conductive materials. in some embodiments, one or more wet electrodes can be used, but the use of dry electrodes can facilitate the performance of quick ecg recording procedures, especially those performed by the patient using a mobile device without direct involvement of a medical professional. 
the ecg device 102 can include one or more amplifiers 114 configured to amplify signals (e.g., analog signals) from the electrodes 108 , 110 , and 112 . in some embodiments, each electrode 108 , 110 , and 112 has a corresponding amplifier 114 that is configured to amplify the signals from that electrode. in some embodiments, a single amplifier 114 can amplify the signals from two or all three of the electrodes 108 , 110 , and 112 . the one or more amplifiers 114 can be configured to amplify the signals to compensate for impedance, which may be produced, e.g., by the use of dry electrodes. the ecg device 102 can include a signal processor 116 , which can be configured to perform one or more signal processing operations on the signals received from the right arm electrode 108 , from the left arm electrode 110 , and from the left leg electrode 112 (e.g., on the amplified analog signals output by the one or more amplifiers 114 ). in some cases, the signal processor 116 can be configured to perform analog signal processing operations. in some embodiments, the signal processor 116 can be configured to compare and calculate signals from the different electrodes 108 , 110 , and 112 . for example, a first lead (lead i) can be based at least in part on a voltage difference measured (e.g., by the signal processor 116 ) between the left arm electrode 110 and the right arm electrode 108 , and a second lead (lead ii) can be based at least in part on a voltage difference measured (e.g., by the signal processor 116 ) between the left leg electrode 112 and the right arm electrode 108 . in some embodiments, the signal processor 116 can be configured to perform one or more signal processing operations to improve the signal-to-noise ratio for the signals. in some embodiments, the signal processor 116 can be configured to perform one or more signal processing operations to remove or reduce baseline wander. 
in some embodiments, the signal processor 116 can be configured to perform one or more signal processing operations to compensate for impedance (e.g., produced by the use of dry electrodes). the ecg device 102 can include an analog-to-digital converter (adc) 118 , which can be configured to convert analog signals (e.g., received from the signal processor 116 , from the one or more amplifiers 114 , or directly from the electrodes 108 , 110 , and 112 ) to digital signals. the ecg device 102 can include a controller 120 . the controller 120 can be a processor or processing system as described herein. in some embodiments, the ecg device 102 can include memory 122 , which can store executable program instructions that can be executed by the controller 120 to implement various methods, operations, and features described herein. memory 122 can be a type of computer readable storage media as described herein. in some embodiments, the controller 120 can store data to the memory 122 . for example, data corresponding to the digital signals received over time can be stored on the memory 122 for use in signal processing operations that depend on previous signals. results of signal processing and/or data analysis can be stored in the memory 122 , and can be accessed by the controller 120 and used for later calculations. in some embodiments, data received and/or generated (e.g., by the controller 120 ) can be stored on the memory 122 so that it can be periodically transmitted by the communication interface 124 (e.g., as packets of data). in some embodiments, the controller 120 can be a digital controller and can be configured to receive digital signals (e.g., digital signals output by the adc).
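for illustration only (the patent does not specify converter parameters), a sketch of how an adc such as item 118 might quantize an electrode voltage; the 12-bit resolution and 2.4 v input span below are hypothetical values of ours:

```python
# illustrative adc model, not the patent's design: an assumed 12-bit converter
# with a hypothetical 2.4 v input span centered on 0 v.

N_BITS = 12
V_RANGE = 2.4  # assumed full-scale input span, volts

def to_counts(volts):
    """quantize a voltage in [-V_RANGE/2, +V_RANGE/2) to an unsigned count,
    clamping out-of-range inputs to the converter rails."""
    levels = 2 ** N_BITS
    frac = (volts + V_RANGE / 2) / V_RANGE
    return max(0, min(levels - 1, int(frac * levels)))

def to_volts(counts):
    """invert the mapping, reporting the center of the quantization step."""
    levels = 2 ** N_BITS
    return (counts + 0.5) / levels * V_RANGE - V_RANGE / 2
```

with these assumed parameters the step size is about 0.6 mv, so millivolt-scale ecg signals would in practice be amplified (e.g., by amplifiers 114) before conversion.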
the controller 120 can receive, for example, digital representations of the signals from the right arm electrode 108 , the left arm electrode 110 , and the left leg electrode 112 , or of the amplified (from 114 ) and/or signal-processed (from 116 ) versions of the original analog signals from the electrodes 108 , 110 , and 112 . the controller 120 can receive separate signals corresponding to the three electrodes 108 , 110 , and 112 , or the controller 120 can receive signals that represent information from different combinations of the electrodes 108 , 110 , and 112 (e.g., signals associated with the voltage differences between electrodes). for example, in some embodiments, the controller 120 can receive a digital signal representing a voltage difference between the left arm electrode 110 and the right arm electrode 108 and a digital signal representing a voltage difference between the left leg electrode 112 and the right arm electrode 108 . the controller 120 can perform one or more signal processing operations (e.g., digital signal processing on the digital signals received). the controller 120 can perform one or more digital signal processing operations to remove or reduce baseline wander, to improve the signal-to-noise ratio, to compensate for impedance (e.g., produced by the use of dry electrodes), etc. in some embodiments, the controller 120 can perform one or more linear phase filtering operations (e.g., recursive or non-recursive linear phase filtering). the controller 120 can analyze the signals received by the controller 120 . for example, the controller 120 can compare and analyze signals corresponding to the different electrodes 108 , 110 , and 112 , for example, to determine a voltage difference between the left arm electrode 110 and the right arm electrode 108 (lead i) and/or to determine a voltage difference between the left leg electrode 112 and the right arm electrode 108 (lead ii). 
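The baseline-wander removal mentioned above can be illustrated with a minimal sketch: a centered moving average is a symmetric (and therefore linear-phase) FIR low-pass, so subtracting it from the signal acts as a linear-phase high-pass that removes slow drift while preserving the faster ECG content. The function name, window length, and example values below are illustrative assumptions, not details taken from the source.

```python
def remove_baseline_wander(samples, window):
    """Subtract a centered moving average (a linear-phase FIR
    low-pass) from the signal, suppressing slow baseline drift
    while leaving faster features such as R-peaks."""
    if window % 2 == 0:
        raise ValueError("window must be odd for a centered average")
    half = window // 2
    out = []
    for i in range(len(samples)):
        lo = max(0, i - half)
        hi = min(len(samples), i + half + 1)
        baseline = sum(samples[lo:hi]) / (hi - lo)
        out.append(samples[i] - baseline)
    return out

# Illustrative data: a slow linear drift plus one spike.
# After filtering, interior drift points go to ~0 while the
# spike survives.
signal = [0.1 * i for i in range(9)]
signal[4] += 5.0  # an "R-peak"-like spike
filtered = remove_baseline_wander(signal, window=3)
```

A production filter would be designed for the sampling rate and the spectral band of baseline wander (typically below about 0.5 Hz); the structure of the computation is the same.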
in some embodiments, the controller 120 can determine the 6 leads for a 6-lead ecg, as discussed herein. the ecg device 102 can include a communication interface 124 , which can be configured to enable the ecg device 102 to communicate with the communication interfaces of coupled interpretive devices, e.g., the communication interface 130 on the mobile electronic device 104 , the communication interface 140 on the remote computing system 106 , and/or other external systems for reporting results (e.g., a hospital information system or a doctor email system). the controller 120 can send data to the communication interface 124 for transmission to coupled interpretive devices and/or external devices and systems. the communication interfaces 124 , 130 , and 140 described herein can be wireless communication interfaces as described with respect to device 1000 ( fig. 7 ). communication interfaces 124 , 130 , and 140 can include, for example, wi-fi, bluetooth, bluetooth low energy (ble), near field communication (nfc), 3g, and 4g. in some embodiments, the communication interface 124 , 130 , 140 can use a wire or cable or physical communication port to communicate data. for example, the ecg device 102 can include an electrical connector (e.g., a micro-usb connector or lightning connector) that is configured to engage a corresponding port on the mobile electronic device 104 (e.g., a micro-usb port or lightning port), or on another external device or system, to communicate information between the devices and/or systems. other communication methods can be used as well. for example, the communication interfaces 124 , 130 , and 140 can be configured to transfer data via an audio input port or microphone. the ecg device 102 can be a portable device, such as an accessory for use with the mobile electronic device 104 . the ecg device 102 can include a battery 126 , which can facilitate the portable nature of the ecg device 102 . other power sources can be used. 
for example, the ecg device 102 can receive electrical power from an external power source (e.g., a wall outlet), or a battery 138 of the mobile electronic device 104 can supply electrical power to the ecg device 102 , for instance, when the ecg device 102 and the mobile electronic device 104 are coupled via a wire or cable (e.g., via a micro-usb or lightning connection) or passive charging system. the mobile electronic device 104 can be a mobile phone (e.g., a smart phone), a tablet computer, a laptop computer, or other computing device. the mobile electronic device 104 can include a communication interface 130 as discussed. the communication interface 130 can be configured to send and/or receive information to and/or from the ecg device 102 (e.g., via a first communication protocol, which can have a relatively short range, such as bluetooth, ble, or nfc). the communication interface 130 can be configured to send and/or receive information to and/or from a remote computing system 106 (e.g., using a second communication protocol, which can have a relatively long range, such as wi-fi, 3g, 4g, tcp/ip over ethernet, the internet, etc.). in some embodiments, the mobile electronic device 104 can operate as a middleman to relay information between the ecg device 102 and the remote computing system 106 (or another external device or system). the mobile electronic device 104 can include a controller 132 . the controller 132 can be a processor or processing system as described herein. in some embodiments, the mobile electronic device 104 can include memory 134 , which can store executable instructions that can be executed by the controller 132 to implement various methods, operations, and features described herein. in some embodiments, the controller 132 can store data to the memory 134 . for example, data corresponding to the digital signals received over time can be stored on the memory 134 for use in signal processing operations that depend on previous signals. 
results of signal processing and/or data analysis can be stored in the memory 134 , and can be accessed by the controller 132 to be used for later calculations. in some embodiments, data received and/or generated (e.g., by the controller 132 ) can be stored on the memory 134 , such as for archiving, for later reference, or to be periodically transmitted by the communication interface 130 (e.g., as packets of data). memory 134 can be a type of computer readable storage media as described herein. in some embodiments, the controller 132 can run an application or program (which can be stored on memory 134 ), which can perform the ecg processing, as described herein. in some embodiments, an application or program can run remotely (e.g., on the remote computing system 106 , using cloud computing, or as software as a service (saas)) to perform the ecg procedure. the controller 132 can be configured to perform one or more signal processing operations (e.g., digital signal processing) on the data received from the ecg device 102 . the controller 132 can perform one or more digital signal processing operations to remove or reduce baseline wander, to improve the signal-to-noise ratio, to compensate for impedance (e.g., produced by the use of dry electrodes), etc. in some embodiments, the controller 132 can perform one or more linear phase filtering operations (e.g., recursive or non-recursive linear phase filtering). the controller 132 can analyze data (e.g., received from the ecg device 102 ). for example, the controller 132 can compare signals corresponding to the different electrodes 108 , 110 , and 112 , for example, to determine a voltage difference between the left arm electrode 110 and the right arm electrode 108 (lead i) and/or to determine a voltage difference between the left leg electrode 112 and the right arm electrode 108 (lead ii). in some embodiments, the controller 132 can determine the 6 leads for a 6-lead ecg, as discussed herein. 
the controller 132 can provide a 6-lead ecg having three limb leads: lead i, lead ii, and lead iii, and three augmented limb leads: augmented vector right (avr), augmented vector left (avl), and augmented vector foot (avf). the 6 leads can be represented by the following equations: lead i=la−ra; lead ii=ll−ra; lead iii=ll−la; augmented vector right (avr)=ra−½(la+ll); augmented vector left (avl)=la−½(ra+ll); and augmented vector foot (avf)=ll−½(ra+la). in the equations above, la can correspond to a voltage of the left arm electrode 110 , ra can correspond to a voltage of the right arm electrode 108 , and ll can correspond to a voltage of the left leg electrode 112 . in some embodiments, the system 100 does not produce the precordial leads, which would normally be produced by a 12-lead ecg. in some embodiments, lead iii, avr, avl, and avf can be calculated based on lead i and lead ii, as set forth in the following equations (wherein “i” corresponds to lead i and “ii” corresponds to lead ii): lead iii=ii−i; avr=−(i+ii)/2; avl=i−ii/2; and avf=ii−i/2. in some embodiments, the controller 132 can perform analysis on the ecg data (e.g., the 6-lead ecg data) to determine a heart rate, to make a determination of normal heart rhythm, and/or to diagnose one or more disorders. in some embodiments, data, algorithms, and methods that are established for analysis of 12-lead ecg data can be used to analyze the 6-lead ecg data (which can include 6 of the same leads as a traditional 12-lead ecg). the mobile electronic device 104 can include a user interface 136 , which can be configured to receive input from a user and/or to output information to a user. in some embodiments, the user interface 136 can include one or more user input elements (e.g., buttons, switches, etc.), a microphone (e.g., for receiving dictated instructions), a display, a touchscreen display, a speaker, etc. 
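The lead equations above translate directly into code. The sketch below computes all six leads from simultaneous electrode voltages; the electrode values are arbitrary illustrative numbers, and the function name is not from the source.

```python
def six_leads(ra, la, ll):
    """Compute the three limb leads and three augmented limb leads
    from simultaneous voltages at the right arm (ra), left arm (la),
    and left leg (ll) electrodes, per the equations above."""
    return {
        "I": la - ra,
        "II": ll - ra,
        "III": ll - la,
        "aVR": ra - (la + ll) / 2,
        "aVL": la - (ra + ll) / 2,
        "aVF": ll - (ra + la) / 2,
    }

# Arbitrary illustrative electrode voltages (e.g., in mV)
leads = six_leads(ra=0.2, la=0.5, ll=0.9)
```

Because III = II − I, aVR = −(I + II)/2, aVL = I − II/2, and aVF = II − I/2 hold algebraically, only leads I and II need to be measured or transmitted; the other four can be derived on any downstream device.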
the user interface 136 can receive an instruction (e.g., via input from a user) to initiate an ecg recording procedure. the communication interface 130 of the mobile electronic device 104 can send an instruction to the ecg device 102 to initiate the ecg procedure. the user interface 136 can provide instructions to a user for performing the ecg or related activities (e.g., to hold or touch the electrodes 108 , 110 , and 112 , to wait during a delay period or while signals are collected, to contact a doctor or emergency services, etc.). the user interface 136 can report information to a user (e.g., a heart rate, an ecg tracing, an indication of normal rhythm, a diagnosis, etc.). the mobile electronic device can include a battery 138 , which can facilitate the portable nature of the mobile electronic device 104 . other power sources can be used. for example, the mobile electronic device 104 can receive electrical power from an external power source (e.g., a wall outlet). the communication interface 130 of the mobile electronic device 104 can be configured to send ecg data (e.g., 6-lead ecg data) to the remote computing system 106 (using the communication interface 140 ), as discussed herein. the remote computing system 106 can be a server, a computer, or other computing system. the remote computing system 106 can include a controller 142 . the controller 142 can be a processor or processing system as described herein. in some embodiments, the remote computing system 106 can include memory 144 , which can store executable program instructions that can be executed by the controller 142 to implement various methods, operations, and features described herein. memory 144 can be a type of computer readable storage media as described herein. in some embodiments, the controller 142 can store data to the memory 144 . 
for example, data corresponding to the digital signals received over time can be stored on the memory 144 for use in signal processing operations that depend on previous signals. results of signal processing and/or data analysis can be stored in the memory 144 , and can be accessed by the controller 142 to be used for later calculations. in some embodiments, data received and/or generated (e.g., by the controller 142 ) can be stored on the memory 144 , such as for archiving, for later reference, etc. in some embodiments, the controller 142 can run an application or program (which can be stored on memory 144 ), which can perform the ecg procedure (e.g., using cloud computing, or as software as a service (saas)). the controller 142 of the remote computing system 106 can execute program instructions for performing an analysis on the ecg data (e.g., the 6-lead ecg data), which can be received from the mobile electronic device 104 , for example, to determine a heart rate, to make a determination of normal heart rhythm, and/or to diagnose one or more disorders. in some embodiments, data, algorithms, and methods that are established for analysis of 12-lead ecg data can be used to analyze the 6-lead ecg data (which can include 6 of the same leads as a traditional 12-lead ecg). in some embodiments, the remote computing system 106 may have access to data and program instructions (e.g., stored in memory 144 ) that are not directly accessible to the mobile electronic device 104 and/or more resources such as more powerful processor(s), so that the remote computing system 106 can perform more thorough analysis on the ecg data than would be performed on the mobile electronic device 104 . 
in some embodiments, the mobile electronic device 104 can perform an initial analysis (which can be performed relatively quickly on the local device) on the ecg data to make one or more initial determinations (e.g., regarding diagnosis and rhythm analysis), and the remote computing system 106 can perform a more detailed analysis (which may take longer time due to transmission of data, backlog of analysis requests, complexity of algorithms for data analysis, and/or the volume of calculations needed for the detailed analysis). in some implementations, the controller 142 of the remote computing system 106 can be used to perform various signal processing and data analysis tasks described herein (e.g., digital signal processing, improvement of signal-to-noise ratio, removal or reduction of baseline wander, compensation for impedance, linear phase filtering, providing a 6-lead ecg), especially for embodiments where an application or program that performs the ecg procedure runs on the remote computing system 106 (e.g., using cloud computing or saas). in some embodiments, the ecg device 102 can have a relatively low power processor (e.g., controller 120 ) as compared to the processor(s) of the mobile electronic device 104 and/or the remote computing system 106 (e.g., controllers 132 and 142 ), and the ecg device 102 can have more limited resources (e.g., less battery power, less memory storage, etc.) than the mobile electronic device 104 and/or the remote computing system 106 . accordingly, in some implementations, the system 100 is configured to minimize or reduce the operations performed by the ecg device 102 and may preferentially perform operations on the mobile electronic device 104 and/or the remote computing system 106 . in some embodiments, the ecg device 102 can be configured to perform the digital signal processing and analysis, as discussed herein, because the signals are converted to digital data before being transmitted from the ecg device 102 . 
in some embodiments, the ecg device 102 is configured to perform operations to reduce the amount of data to be transferred by the communication interface 124 , which can save power and time during the transfer of data from the ecg device 102 . for example, in some embodiments, the ecg device 102 is configured to send data corresponding to two voltage differences (leads i and ii) instead of sending data corresponding to three signals from the three electrodes 108 , 110 , and 112 . any or all of these variations of raw, transformed, or interpreted signal data may be referred to as “signal-related data.” the system 100 shown and described in connection with fig. 1 can be modified in various ways. for example, in some embodiments, the ecg device 102 and the mobile electronic device 104 can be combined into a single device (which can be a dedicated ecg and mobile device). in some embodiments, an ecg system can include a single device that performs the functions of the ecg device 102 , the mobile electronic device 104 , and the remote computing system 106 (e.g., a 6-lead ecg machine such as for use in a hospital or doctor's office). embodiments of a combined ecg system may include fewer components than three separate devices to eliminate redundancy (e.g., controller 120 , 132 , and 142 may be combined into a single controller, communication interface 124 , 130 , and 140 into a single communication interface, etc.). fig. 2 shows an example embodiment of an ecg device 102 . the ecg device 102 can include a housing 150 , which can house or enclose various components of the ecg device 102 (e.g., the one or more amplifiers 114 , the signal processor 116 , the analog-to-digital converter 118 , the controller 120 , the memory 122 , the communication interface 124 , and the battery 126 ). the ecg device 102 can include the right arm electrode 108 , the left arm electrode 110 , and the left leg electrode 112 , which can be exposed to facilitate contact to the patient's skin. 
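The data-reduction idea above (transmitting two lead values per sample instead of three raw electrode signals) can be sketched as follows. The wire format here, pairs of little-endian int16 values, is a hypothetical choice for illustration; the source does not specify an encoding.

```python
import struct

def pack_leads(lead_i, lead_ii):
    """Pack paired lead I / lead II samples as little-endian int16
    values: two values per sample period instead of three raw
    electrode signals. (Hypothetical wire format for illustration.)"""
    return b"".join(struct.pack("<hh", a, b)
                    for a, b in zip(lead_i, lead_ii))

def unpack_leads(payload):
    """Unpack the payload and reconstruct lead III = II - I on the
    receiving side, so nothing is lost by omitting a third signal."""
    pairs = [struct.unpack_from("<hh", payload, off)
             for off in range(0, len(payload), 4)]
    lead_i = [p[0] for p in pairs]
    lead_ii = [p[1] for p in pairs]
    lead_iii = [b - a for a, b in zip(lead_i, lead_ii)]
    return lead_i, lead_ii, lead_iii

packet = pack_leads([100, -50], [300, 20])
i_rx, ii_rx, iii_rx = unpack_leads(packet)
```

Each sample period costs 4 bytes rather than 6, a one-third reduction, which matters for a battery-powered device transmitting over a low-bandwidth link such as BLE.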
the electrodes 108 , 110 and 112 can be positioned on the bottom of the housing 150 . the right arm electrode 108 can be positioned on a right side to facilitate contact to the patient's right arm or hand. the left arm electrode 110 can be positioned on a left side to facilitate contact to the patient's left arm or hand. the left leg electrode 112 can be positioned in a central portion, to facilitate contact to the patient's left leg (e.g., to the left knee, left ankle, or left foot). the left leg electrode 112 can be positioned closer to the patient than the right arm electrode 108 and/or the left arm electrode 110 , when the electrodes face downward with the right arm electrode 108 on the right and the left arm electrode 110 on the left, which can reduce undesired contact between the electrodes 108 , 110 , and 112 with undesired parts of the patient's body and/or can reduce undesired contact between the parts of the patient's body being monitored, which could interfere with the readings for the ecg procedure. in some embodiments, the right arm electrode 108 and the left arm electrode 110 may be positioned differently, e.g., in reverse of the configuration shown in fig. 2 . this positioning may vary in accordance with the chosen placement of the left leg electrode 112 of the device on the left leg, for example, on the dorsal or ventral sides of the left leg. in some embodiments, the designation of the pads as right arm electrode 108 and left arm electrode 110 may be selectable or configurable by the user, for instance via a user interface provided on the ecg device 102 or the mobile electronic device 104 . in some embodiments, mobile electronic device 104 can include a user interface that can receive input from the user to assign or reassign one of the three electrodes to a particular limb. 
the designation can then be transmitted, in some cases, to the ecg device 102 via a communication interface (e.g., 124 , 130 ) to associate a particular electrode to a limb reading. advantageously, a user may be able to reassign the function of a physical electrode to a limb designation that is more comfortable to the user for gathering a particular signal. for example, if the user is more comfortable reading the left leg signal from the dorsal (rear) side of the leg, then the right and left arm electrodes may be assigned to the right and left electrodes, respectively, when the ecg device 102 is facing up. on the other hand, if the user is more comfortable reading the left leg signal from the ventral (front) side of the leg, then the right and left arm electrodes may be assigned to the left and right electrodes (i.e., opposite), respectively, when the ecg device 102 is facing up. in some embodiments, the electrodes 108 , 110 , and 112 can be positioned immovably on the housing and immovably with respect to each other. in some embodiments, the ecg device 102 does not include wires or cables outside of the housing 150 that couple to the electrodes 108 , 110 , and 112 . the electrodes 108 , 110 , and 112 can be positioned close to each other to facilitate the portable and compact nature of the ecg device 102 , and the electrodes 108 , 110 , and 112 can be spaced apart sufficiently to reduce the likelihood of unintended contact between the electrodes 108 , 110 , and 112 (e.g., such as a body part contacting two or more of the electrodes simultaneously). 
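The reassignment of physical pads to limb designations described above amounts to maintaining a small mapping. The sketch below is a minimal illustration; the pad names, default assignment, and class name are assumptions, since the source only describes that the RA/LA designation can be swapped (e.g., for dorsal versus ventral left-leg placement).

```python
class ElectrodeMap:
    """Maps physical electrode pads to limb designations
    (illustrative names; not from the source)."""

    def __init__(self):
        # Assumed default: device facing up, RA pad on the right
        self.assignment = {"pad_right": "RA",
                           "pad_left": "LA",
                           "pad_center": "LL"}

    def swap_arms(self):
        """Reassign RA and LA, e.g., when the user prefers reading
        the left-leg signal from the other side of the leg."""
        self.assignment["pad_right"], self.assignment["pad_left"] = (
            self.assignment["pad_left"], self.assignment["pad_right"])

    def limb_signal(self, pad_readings, limb):
        """Return the reading associated with a limb designation."""
        for pad, designation in self.assignment.items():
            if designation == limb:
                return pad_readings[pad]
        raise KeyError(limb)

m = ElectrodeMap()
readings = {"pad_right": 1.0, "pad_left": 2.0, "pad_center": 3.0}
ra_before = m.limb_signal(readings, "RA")
m.swap_arms()
ra_after = m.limb_signal(readings, "RA")
```

In the described system, such a mapping would live on the mobile electronic device or the ECG device and be updated when the user reassigns electrodes through the user interface.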
the electrodes 108 , 110 , and 112 can be spaced apart by a distance that is at least about 3 mm, at least about 5 mm, at least about 10 mm, at least about 25 mm, at least about 50 mm or more, less than or equal to about 100 mm, less than or equal to about 75 mm, less than or equal to about 50 mm, less than or equal to about 25 mm, less than or equal to about 10 mm or less, although values outside these ranges can be used in some instances. fig. 3 shows an example embodiment of an ecg device 102 that is removably attachable to a mobile electronic device 104 (e.g., a smart phone). the mobile electronic device 104 can include an attachment mechanism 152 , which can be configured to interface with an attachment mechanism 154 on the ecg device 102 to removably attach the ecg device 102 to the mobile electronic device 104 (e.g., onto the back side of the mobile electronic device 104 such as on a side opposite the display on a smart phone). the attachment mechanisms 152 and 154 can use sliding engagement, a snap fit, a clamp, etc. to removably couple the ecg device 102 to the mobile electronic device 104 . in some embodiments, only one or the other of the ecg device 102 and the mobile electronic device 104 may include an attachment mechanism. in the embodiment illustrated in fig. 3 , the attachment mechanism 152 can include rails or guides 156 a and 156 b (e.g., formed at the sides of a raised platform) that are configured to slidably engage rails or guides 158 a and 158 b (e.g., formed at the sides of a recessed slot in the housing 150 ). various alternatives are possible for the attachment mechanisms. for example, in some embodiments, the ecg device 102 can be incorporated into a protective case that is configured to enclose at least a portion of the mobile electronic device 104 . fig. 4 is a flowchart of an example method 200 of operation for performing an ecg procedure in accordance with some embodiments herein. 
at block 202 , the system can receive a command (e.g., from a user, which can be the patient) to start an ecg procedure. the command from the user can be received by the user interface 136 on the mobile electronic device, although in some embodiments, the ecg device can include a user input element configured to receive a user command to start an ecg procedure. in some embodiments, the ecg device 102 can receive an instruction to start an ecg procedure from the mobile electronic device 104 (e.g., via the communication interfaces 124 and 130 ). at block 204 , the method 200 can include a delay, which can give the user time to position the electrodes 108 , 110 , and 112 into contact with the proper body portions (e.g., since the patient can be the user that issued the start command such as by pressing a button on the mobile electronic device 104 ). the delay can be between about 1 second and about 10 seconds, between about 2 seconds and about 5 seconds, or about 3 seconds, although other amounts of delay outside these ranges can be used in some instances. in some cases the delay may terminate when the user issues a continue command, e.g., by interacting with an element of the user interface on the touchscreen of the mobile electronic device 104 or by pressing a button on the housing of the mobile electronic device 104 or the ecg device 102 . at block 206 , signals from the electrodes 108 , 110 , and 112 can be received, as discussed herein. at block 208 , the signals from the electrodes 108 , 110 , and 112 can be amplified, as discussed herein. the amplification can compensate for impedance (such as produced by the use of dry electrodes). the amplification can be performed on analog signals received from the electrodes 108 , 110 , and 112 . at block 210 , analog signal processing can be performed, such as described in connection with the signal processor 116 . at block 212 , the analog signals can be converted to digital signals, such as by the analog-to-digital converter 118 , as discussed herein. 
at block 214 , digital signal processing can be performed, such as by the controller 120 , as discussed herein. at block 216 , the ecg device 102 can communicate data (e.g., raw or interpreted data, depending on embodiment) to the mobile electronic device 104 (e.g., via the communication interfaces 124 and 130 ). at block 218 , the mobile electronic device 104 can perform signal processing on the received data, as discussed herein. at block 220 , the mobile electronic device 104 can perform data analysis, such as to produce ecg data (e.g., 6-lead ecg data), to analyze the data to provide a heart rate, to provide a determination of normal or abnormal heart rhythm, and/or to provide a diagnosis of a heart disorder. at block 222 , information can be reported (e.g., to the user/patient, to a doctor, or other entity such as a hospital information system). the memory 134 can include information to facilitate reporting to external devices and systems, such as a doctor email address, hospital information system access information, etc. in some embodiments, information can be reported to a user via the user interface 136 on the mobile electronic device 104 . at block 224 , data can be communicated to a remote computing system 106 (e.g., via the communication interfaces 130 and 140 ). in some embodiments, the ecg data (e.g., 6-lead ecg data) can be transmitted to the remote computing system 106 , for example, for further processing and/or analysis (blocks 226 and 228 ). in some embodiments, the mobile electronic device 104 can send information to the remote computing system 106 regarding initial determinations made by the analysis performed by the mobile electronic device 104 , and the remote computing system 106 can perform additional analysis to confirm or refute the initial determinations made by the mobile electronic device 104 . 
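The heart-rate determination mentioned in the analysis step can be illustrated with a deliberately simple sketch: detect local maxima above a threshold (a crude stand-in for a real R-peak detector) and convert the mean R-R interval to beats per minute. Real devices use far more robust detection (e.g., Pan-Tompkins-style algorithms); the function name, threshold, and synthetic trace below are illustrative assumptions.

```python
def heart_rate_bpm(samples, fs, threshold):
    """Estimate heart rate by finding local maxima above a threshold
    (a crude R-peak stand-in), then averaging the R-R intervals.
    fs is the sampling rate in Hz; returns None if fewer than two
    peaks are found."""
    peaks = [i for i in range(1, len(samples) - 1)
             if samples[i] > threshold
             and samples[i] >= samples[i - 1]
             and samples[i] > samples[i + 1]]
    if len(peaks) < 2:
        return None
    rr = [(b - a) / fs for a, b in zip(peaks, peaks[1:])]
    return 60.0 / (sum(rr) / len(rr))

# Synthetic trace: one spike every 250 samples at 250 Hz,
# i.e., one beat per second.
ecg = [0.0] * 1000
for i in (100, 350, 600, 850):
    ecg[i] = 1.0
bpm = heart_rate_bpm(ecg, fs=250, threshold=0.5)
```

On real signals this would run on a filtered lead (e.g., after baseline-wander removal), and the rhythm-analysis and diagnosis steps described above would examine interval variability and waveform morphology rather than just the average rate.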
at block 230 , information, including the results of any additional analysis, can be reported (e.g., to the user/patient, to a doctor, or to another entity such as a hospital information system). the memory 144 at the remote computing system 106 can include information to facilitate reporting to external devices and systems, such as a doctor email address, hospital information system access information, etc. reporting information can be transferred from the remote computing system 106 to the mobile electronic device 104 (via the communication interfaces 130 and 140 ) for reporting to the user (e.g., via the user interface 136 ). many variations are possible. for example, various operations shown and described in connection with fig. 4 can be omitted, combined with other operations, or separated into sub-operations, and additional operations can be added. fig. 5 is a flowchart showing an example embodiment of a method of use 300 for an ecg system. at block 302 , the user can provide an instruction to start an ecg procedure (e.g., via the user interface 136 on the mobile electronic device 104 ). at block 304 , the user can contact the right arm electrode 108 to a portion of the user's right arm, such as by holding the ecg device 102 with a right thumb or finger on the right arm electrode 108 . at block 306 , the user can contact the left arm electrode 110 to a portion of the user's left arm, such as by holding the ecg device 102 with a left thumb or finger on the left arm electrode 110 . at block 308 , the user can contact the left leg electrode 112 to a portion of the user's left leg, such as by holding the ecg device 102 such that the left leg electrode 112 contacts the user's left leg (e.g., at the left knee or left ankle). 
at block 310 , the user can hold the contact with the electrodes 108 , 110 , and 112 for the duration of the ecg procedure, for example, until instructed via the user interface 136 that the procedure is completed such as with a visual, auditory, or tactile signal from the mobile electronic device 104 or ecg device 102 . fig. 6 shows an example embodiment of a user interface 400 for an ecg system, which can be used, for example, for the user interface 136 , described herein. the user interface can be implemented on a display, such as a touch screen display, of a mobile electronic device 104 . the user interface 400 can include a user input element (e.g., a digital button on a touch screen display) for initiating an ecg procedure, such as a start button 404 . the user interface 400 can include a notification element 406 to notify the user of a delay period after receipt of a command to start an ecg procedure, such as a displayed count down from 3 to 2 to 1. the user interface 400 can include an ecg tracing portion 408 , which can be configured to show ecg tracing information during the ecg procedure, which can alert the user that the ecg procedure is being performed. in some embodiments, the ecg tracing portion 408 can display information that is unprocessed or only partially processed, which can result in the ecg tracing portion displaying a graphical representation that does not necessarily look like a normal ecg waveform, but which can inform the user that the system is successfully gathering information from the electrodes 108 , 110 , 112 . the user interface 400 can include an ecg waveform portion 410 , which can display a processed ecg tracing (e.g., for a single beat). the processed ecg tracing shown by portion 410 can be an average or weighted average based on some or all of the ecg data that was collected and processed. the user interface 400 can display heart rate information 412 . 
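The "average or weighted average" beat shown in the ECG waveform portion can be sketched as follows, assuming the beats have already been segmented and aligned on the R-peak upstream; the function name and example values are illustrative assumptions.

```python
def average_beat(beats, weights=None):
    """Combine equal-length, R-peak-aligned beat segments into one
    representative waveform. Optional weights can favor cleaner
    beats (e.g., down-weighting beats with motion artifact)."""
    if weights is None:
        weights = [1.0] * len(beats)
    total = sum(weights)
    return [sum(w * b[i] for w, b in zip(weights, beats)) / total
            for i in range(len(beats[0]))]

# Two tiny aligned "beats" for illustration
beats = [[0.0, 1.0, 0.0], [0.0, 3.0, 0.0]]
plain = average_beat(beats)
weighted = average_beat(beats, weights=[3.0, 1.0])
```

Averaging aligned beats suppresses uncorrelated noise, which is why the single displayed beat can look cleaner than any individual beat in the recording.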
the user interface 400 can include a reporting portion for displaying commands or reports for the user. for example, as shown in element 414 , the reporting portion can report to the user that the ecg process was completed, that a normal heart rhythm was determined, and that the user's doctor was notified. the user interface 400 can have an options section 416 , which can enable the user to change various options and parameters of the system. for example, the user can set an email address or other contact information for a doctor to be notified of ecg results, the user can change the delay time, etc. fig. 7 shows a block diagram illustrating components of a computing device or system used in some implementations of techniques and systems for performing an electrocardiogram as described herein. for example, components of the system, including an electrocardiograph device, mobile electronic device, and/or remote computing system may be implemented as described with respect to device 1000 . device 1000 can itself include one or more computing devices. the hardware can be configured according to any suitable computer architectures such as symmetric multi-processing (smp) architecture or non-uniform memory access (numa) architecture. the device 1000 can include a processing system 1001 , which may include a processing device such as a central processing unit (cpu) or microprocessor and other circuitry that retrieves and executes software 1002 from storage system 1003 . processing system 1001 may be implemented within a single processing device but may also be distributed across multiple processing devices or sub-systems that cooperate in executing program instructions. a controller, such as might be found on one or more system devices, can be a processing system or processor as described herein. 
examples of processing system 1001 include general purpose central processing units, application specific processors, and logic devices, as well as any other type of processing device, combinations, or variations thereof. the one or more processing devices may include multiprocessors or multi-core processors and may operate according to one or more suitable instruction sets including, but not limited to, a reduced instruction set computing (risc) instruction set, a complex instruction set computing (cisc) instruction set, or a combination thereof. in certain embodiments, one or more digital signal processors (dsps) may be included as part of the computer hardware of the system in place of or in addition to a general purpose cpu. storage system 1003 may comprise any computer readable storage media readable by processing system 1001 and capable of storing software 1002 including, e.g., processing instructions performing an electrocardiogram as described herein. storage system 1003 may include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information, such as computer readable instructions, data structures, program modules, or other data. examples of storage media include random access memory (ram), read only memory (rom), magnetic disks, optical disks, cds, dvds, flash memory, solid state memory, phase change memory, 3d-xpoint memory, or any other suitable storage media. certain implementations may involve either or both virtual memory and non-virtual memory. in no case do storage media consist of a propagated signal. in addition to storage media, in some implementations, storage system 1003 may also include communication media over which software 1002 may be communicated internally or externally. storage system 1003 may be implemented as a single storage device but may also be implemented across multiple storage devices or sub-systems co-located or distributed relative to each other. 
storage system 1003 may include additional elements capable of communicating with processing system 1001 . software 1002 may be implemented in program instructions and, among other functions, may, when executed by device 1000 in general or processing system 1001 in particular, direct device 1000 or processing system 1001 to operate as described herein for performing an electrocardiogram. software 1002 may provide program instructions 1004 that implement components for performing an electrocardiogram. software 1002 may implement on device 1000 components, programs, agents, or layers that implement in machine-readable processing instructions 1004 the methods and techniques described herein. in general, software 1002 may, when loaded into processing system 1001 and executed, transform device 1000 overall from a general-purpose computing system into a special-purpose computing system customized to perform an electrocardiogram in accordance with the techniques herein. indeed, encoding software 1002 on storage system 1003 may transform the physical structure of storage system 1003 . the specific transformation of the physical structure may depend on various factors in different implementations of this description. examples of such factors may include, but are not limited to, the technology used to implement the storage media of storage system 1003 and whether the computer-storage media are characterized as primary or secondary storage. software 1002 may also include firmware or some other form of machine-readable processing instructions executable by processing system 1001 . software 1002 may also include additional processes, programs, or components, such as operating system software and other application software. 
device 1000 may represent any computing system on which software 1002 may be staged and from where software 1002 may be distributed, transported, downloaded, or otherwise provided to yet another computing system for deployment and execution, or yet additional distribution. device 1000 may also represent other computing systems that may form a necessary or optional part of an operating environment for the disclosed techniques and systems, e.g., remote computing system or mobile electronic device. a communication interface 1005 may be included, providing communication connections and devices that allow for communication between device 1000 and other computing systems (not shown) over a communication network or collection of networks (not shown) or the air. examples of connections and devices that together allow for inter-system communication may include network interface cards, antennas, power amplifiers, rf circuitry, transceivers, and other communication circuitry. the connections and devices may communicate over communication media to exchange communications with other computing systems or networks of systems, such as metal, glass, air, or any other suitable communication media. the aforementioned communication media, network, connections, and devices are well known and need not be discussed at length here. it should be noted that many elements of device 1000 may be included in a system-on-a-chip (soc) device. these elements may include, but are not limited to, the processing system 1001 , a communications interface 1005 , and even elements of the storage system 1003 and software 1002 . alternatively, or in addition, the functionality, methods and processes described herein can be implemented, at least in part, by one or more hardware modules (or logic components). 
for example, the hardware modules can include, but are not limited to, application-specific integrated circuit (asic) chips, field programmable gate arrays (fpgas), system-on-a-chip (soc) systems, complex programmable logic devices (cplds) and other programmable logic devices now known or later developed. when the hardware modules are activated, the hardware modules perform the functionality, methods and processes included within the hardware modules. in some cases, one or more capabilities, e.g., the processing system, storage system, and communication interface may be included on a single device such as a microcontroller. furthermore, while certain types of user interfaces and controls are described herein for illustrative purposes, other types of user interfaces and controls may be used. a user interface may be generated on a local computer or on a mobile device, or it may be generated from a service or cloud server and sent to a client for rendering, e.g., in a browser or “app.” certain aspects of the invention provide the following non-limiting embodiments: example 1 a system for performing an electrocardiogram comprising: an electrocardiograph device, comprising: a housing comprising three electrodes, wherein the three electrodes are not coupled to the device with wires exterior to the housing; a communication interface; one or more computer readable storage media; program instructions stored on the one or more computer readable storage media that, when executed by a controller, direct the controller to: receive signals from the three electrodes; analyze the signals to determine signal-related data; transmit the signal-related data, via the communication interface, to a coupled interpretive device. example 2 the system of example 1, wherein the electrocardiograph device is a portable handheld device. 
example 3 the system of any of examples 1-2, wherein the electrocardiograph device further comprises: one or more amplifiers configured to amplify analog signals received from the three electrodes; and program instructions stored on the one or more computer readable storage media that, when executed by the controller, direct the controller to receive and analyze amplified analog signals from the one or more amplifiers. example 4 the system of any of examples 1-3, wherein the electrocardiograph device further comprises: an analog signal processor configured to perform analog signal processing on analog signals received from the three electrodes; and program instructions stored on the one or more computer readable storage media that, when executed by the controller, direct the controller to receive and analyze processed analog signals from the analog signal processor. example 5 the system of any of examples 1-4, wherein the electrocardiograph device further comprises: an analog-to-digital converter configured to convert analog signals from the three electrodes to digital signals; and program instructions stored on the one or more computer readable storage media that, when executed by the controller, direct the controller to receive and analyze the digital signals from the analog-to-digital converter. example 6 the system of any of examples 1-5, wherein the electrocardiograph device further comprises program instructions stored on the one or more computer readable storage media that, when executed by the controller, direct the controller to perform digital signal processing on the signals received from the three electrodes. 
example 7 the system of any of examples 1-6, further comprising: a mobile electronic device comprising: a second communication interface; second program instructions stored on second computer readable storage media that, when executed by second controller, direct the second controller to: receive the signal-related data transmitted by the communication interface of the electrocardiograph device; and provide 6-lead electrocardiogram data based at least in part on the data received by the second communication interface. example 8 the system of example 7, wherein the 6-lead electrocardiogram data includes lead i, lead ii, lead iii, avr, avl, and avf. example 9 the system of any of examples 7-8, wherein the mobile electronic device further comprises program instructions stored on the second computer readable storage media that, when executed by the second controller, direct the second controller to: display user interface elements configured to receive input from a user to initiate an electrocardiogram procedure; and in response to receiving the input from the user, sending an instruction to the electrocardiograph device to initiate the electrocardiogram procedure. example 10 the system of example 9, wherein the system delays sending the instruction to initiate the electrocardiogram procedure by a delay time after receiving the input from the user. example 11 the system of any of examples 7-10, wherein the mobile electronic device further comprises program instructions stored on the second computer readable storage media that, when executed by the second controller, direct the second controller to: display user interface elements configured to output information based on the 6-lead electrocardiogram data. 
example 12 the system of any of examples 7-11, further comprising a remote computing system comprising: third program instructions stored on third computer readable storage media that, when executed by a third controller, direct the third controller to: receive the 6-lead electrocardiogram data provided from the mobile electronic device; and analyze the 6-lead electrocardiogram data; and provide diagnostic information. example 13 the system of example 12, wherein the mobile electronic device further comprises program instructions stored on the second computer readable storage media that, when executed by the second controller, direct the second controller to: receive the diagnostic information provided from the remote computing system; and display user interface elements configured to output the diagnostic information provided from the remote computing system. example 14 the system of any of examples 7-13, wherein the electrocardiograph device is removably attached to the mobile electronic device. example 15 the system of any of examples 1-14, wherein the signal-related data includes 6-lead electrocardiogram data comprising lead i, lead ii, lead iii, avr, avl, and avf. example 16 the system of any of examples 1-15, wherein the three electrodes are fixed immovably to the outside of the housing of the electrocardiograph device. example 17 the system of any of examples 1-16, wherein the three electrodes comprise dry electrodes. example 18 the system of example 17, wherein the electrocardiograph device further comprises program instructions stored on the one or more computer readable storage media that, when executed by the controller, direct the controller to compensate for higher impedance of the dry electrodes. example 19 the system of any of examples 1-18, wherein the three electrodes comprise a right arm electrode, a left arm electrode, and a left leg electrode.
example 20 the system of any of examples 7-18, wherein the mobile electronic device further comprises program instructions stored on the second computer readable storage media that, when executed by the second controller, direct the second controller to: display user interface elements configured to receive input from a user to assign each of the three electrodes to a particular limb; and in response to receiving the input from the user, sending an instruction to the electrocardiograph device to associate readings from each of the three electrodes to the particular limb. the embodiments discussed herein are provided by way of example, and various modifications can be made to the embodiments described herein. certain features that are described in this disclosure in the context of separate embodiments can also be implemented in combination in a single embodiment. conversely, various features that are described in the context of a single embodiment can be implemented in multiple embodiments separately or in various suitable subcombinations. also, features described in connection with one combination can be excised from that combination and can be combined with other features in various combinations and subcombinations. various features can be added to the example embodiments disclosed herein. also, various features can be omitted from the example embodiments disclosed herein. similarly, while operations are depicted in the drawings or described in a particular order, the operations can be performed in a different order than shown or described. other operations not depicted can be incorporated before, after, or simultaneously with the operations shown or described. in certain circumstances, parallel processing or multitasking can be used. also, in some cases, the operations shown or discussed can be omitted or recombined to form various combinations and subcombinations.
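examples 8 and 15 above list the six limb leads (lead i, lead ii, lead iii, avr, avl, and avf) derived from three electrodes. as a sketch only, the standard einthoven and goldberger relations compute all six from the right-arm (ra), left-arm (la), and left-leg (ll) electrode potentials; the function name and units are illustrative, and the patent does not state that exactly these formulas are used internally:

```python
def six_lead_ecg(ra, la, ll):
    """Derive the six limb leads from three limb-electrode potentials.

    ra, la, ll: instantaneous potentials (same units) at the right-arm,
    left-arm, and left-leg electrodes.
    """
    lead_i = la - ra            # Einthoven lead I
    lead_ii = ll - ra           # Einthoven lead II
    lead_iii = ll - la          # Einthoven lead III (= II - I)
    avr = ra - (la + ll) / 2    # Goldberger augmented leads
    avl = la - (ra + ll) / 2
    avf = ll - (ra + la) / 2
    return {"I": lead_i, "II": lead_ii, "III": lead_iii,
            "aVR": avr, "aVL": avl, "aVF": avf}
```

note that by construction lead iii equals lead ii minus lead i, and the three augmented leads sum to zero, which is why three electrodes suffice for six leads.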
relevant_id: 183-851-530-147-600
earliest_claim_jurisdiction: US
jurisdiction: US, CN, EP, WO
ipcr_codes: G05F1/67, H02J1/00, H02J3/38, H02M7/49, H02J1/10, H02M3/00, H02M7/48, H02M1/00
earliest_claim_date: 2012-05-25
earliest_claim_year: 2012
ipcr_first_three_chars: G05, H02
circuit for interconnected direct current power sources
controlling a power converter circuit for a direct current (dc) power source is disclosed. the power converter may be operative to convert input power received from the dc power source to an output power and to perform maximum power point tracking of the power source. the power converter is adapted to provide the output power to a load that also performs maximum power point tracking.
1 . an apparatus comprising: a power converter having input terminals and output terminals and being operative to convert input power received from a direct current power source at said input terminals to an output power at said output terminals; an input sensor coupled to the input terminals and configured to sense an input parameter which includes input current, input voltage or the input power; and a control circuit coupled to the input terminals and configured to maximize said input power to about at maximum power point at said input terminals based on the input parameter, wherein, for at least a time interval, the control circuit is configured to set input power or output power to measurably less than the maximum power point and after said time interval, the control circuit is configured to set input power or output power to about equal to the maximum power point to enable an external maximum power point tracking circuit to track the output power. 2 - 23 . (canceled)
background embodiments described in this application relate generally to control of power production from distributed current sources such as direct current (dc) power sources. recent interest in renewable energy has led to increased research in systems for distributed generation of energy, such as photovoltaic cells (pv), fuel cells and batteries. various inconsistencies in manufacturing may cause two otherwise identical sources to provide different output characteristics. similarly, two such sources may react differently to operating conditions, e.g. load and/or environmental conditions, e.g. temperature. in installations, different sources may also experience different environmental conditions, e.g., in solar power installations some panels may be exposed to full sun, while others may be shaded, thereby delivering different power output. in a multiple battery installation, some of the batteries may age differently, thereby delivering different power output. brief summary various embodiments relate to power conversion in a distributed energy system that may have some of the characteristics described above. while the various embodiments may be applicable to any distributed power system, the following discussion turns to solar energy so as to provide a better understanding by way of example without limitation to other applications. distributed power systems are described, including a power converter circuit for a direct current (dc) power source such as one or more photovoltaic panels, photovoltaic substrings or photovoltaic cells. a load, e.g. a grid-tied inverter, may be connected by dc power lines to receive the harvested power from one or more of the power converter circuits. according to an aspect, the power converter circuit may include a direct current to direct current (dc/dc) power converter configured to convert dc power received on a dc/dc power converter input from the photovoltaic panel(s) to a dc/dc power converter output.
the circuit may include a control circuit, which is configured to sense input voltage and/or input current and to determine input power received on the dc/dc power converter input (output power from the photovoltaic panel). the control circuit may be configured to maximize the input power by operating the power source (e.g., photovoltaic panel) at a current and voltage that is tracked to maximize the power yield of the power source, or its maximum power point. since the maximum power point tracking is performed at the input of the power converter, the output voltage or current of the power converter is not fully constrained. while the power output from the dc/dc converter is about equal to the input power from the photovoltaic panel times the efficiency of the conversion, the voltage and current at the output of the dc/dc power converter may be set, determined and/or controlled by the load or by a control circuit at the input of the load. the load may be an inverter adapted to convert the dc power to alternating current (ac) at the frequency of the grid. according to an aspect, the inverter does not utilize a maximum power point tracking (mppt) module since the maximum power from each dc source is already tracked individually for each panel by the control circuits. the inverter may have a control block at its input which sets the input voltage at a convenient value, optionally a predetermined value, and/or optionally a constant value, e.g. 400 volts, for instance to maximize the efficiency of the load, e.g. inverter, or to minimize power loss in the dc lines. however, many commercially available inverter modules already include integrated mppt tracking circuits designed for use with conventional photovoltaic distributed power systems that do not include individual mppt tracking for each power source as described above.
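input-side tracking of the kind described above (a control circuit that senses input voltage/current and maximizes input power) is commonly implemented with a perturb-and-observe loop. the following is a minimal sketch under assumed interfaces, not the patent's circuit: `read_input_power` and `set_duty` are hypothetical callbacks standing in for the converter's sensing and control hardware.

```python
def perturb_and_observe(read_input_power, set_duty, duty,
                        step=0.01, iterations=100):
    """Illustrative input-side MPPT loop: perturb the converter duty cycle
    and keep stepping in the direction that increases sensed panel power."""
    set_duty(duty)
    last_power = read_input_power()
    direction = 1
    for _ in range(iterations):
        duty = min(max(duty + direction * step, 0.0), 1.0)
        set_duty(duty)
        power = read_input_power()
        if power < last_power:
            direction = -direction  # stepped past the peak; reverse
        last_power = power
    return duty
```

in steady state the loop oscillates within about one step of the maximum power point, which is the usual trade-off between tracking speed and ripple in perturb-and-observe schemes.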
it would be desirable that standard commercially available inverters with integrated mppt modules be compatible with the dc/dc power converter circuits with the control circuits, which individually maximize power from the dc power sources, e.g. photovoltaic panels. however, since the control circuit maintains the photovoltaic panel at its maximum power point, the power output of the dc/dc converters may not present to the input of the inverter a power peak that can be tracked by the inverter's integrated mppt as current or voltage at the output of the dc/dc converter varies. as a result, an mppt module, if present at the inverter input may not be able to stabilize and lock onto any particular voltage that maximizes power at the input to the inverter. as a result, the mppt module of the inverter if used in a system according to aspects may force the input to the inverter to an extreme voltage (or current), and/or become unstable and considerable power may be lost. thus, there is a need for and it would be advantageous to have power converter circuits which operate universally with all or most types of inverters whether equipped with an mppt module or not and for a load equipped with a control block which sets input voltage to the load to a convenient optionally constant value as described above. various methods, systems and/or devices are disclosed herein, which provide a power converter circuit including a power converter connectible to a direct current (dc) power source such as a photovoltaic panel. the direct current (dc) power source may include one or more photovoltaic solar cells or solar panels interconnected in series and/or in parallel. the power converter includes input terminals adapted for connecting to the direct current (dc) power source and output terminals. the power converter may be operative to convert input power received from the dc power source at the power converter input terminals to an output power at the power converter output terminals. 
the power converter may have a control circuit connected at the power converter input terminals so that during operation of the power converter, the control circuit sets the input voltage or the input current at the power converter input terminals to maximize the input power, e.g., to perform maximum power point tracking (mppt). a maximum power point tracking circuit may also be connected to the power converter output terminals. the power converter may include multiple like power converter circuits series connected at their output terminals into serial strings. the serial strings may be parallel connected and input to the load via the maximum power point tracking circuit. the load, having load input terminals and load output terminals, may be configured to receive power from the power converter, e.g., via the maximum power point tracking circuit connected to the power converter output terminals. the load may be an inverter or a dc/dc power converter. according to different features: a. the output voltage of the power converter may be sensed. the control circuit may be configured to set the input power received at the input terminals of the power converter to a maximum power only at a predetermined output voltage point or output voltage range or at a predetermined output current point or output current range. away from the predetermined output voltage or predetermined output current, the control circuit may be configured to set the input power received at the input terminals to less than the maximum available power. in this way, the maximum power point tracking circuit operatively connected to the output terminals of the power converter may stably track the predetermined voltage and/or current point or range. b. the control circuit may be configured to set the input power received at the input terminals of the power converter to a maximum power. a power attenuator may be connected to the output terminals of the power converter.
the power attenuator may be configured to attenuate power output at output voltages other than at a predetermined output voltage range (or a predetermined output current range) and not to attenuate output power at the predetermined output voltage or current point or range. the maximum power point tracking circuit may be connected to the attenuated power output. the maximum power point tracking circuit may be configured to lock onto the maximum power point at the predetermined output voltage range or at the predetermined output current range. the load may be typically configured for receiving power from the power converter via the power attenuator and via the maximum power point tracking circuit connected to the attenuated power output. c. the control circuit may be configured to set the input power received at the input terminals of the power converter to the maximum power point of the power source. a control circuit connected to the input terminals is configured to vary the voltage conversion ratio defined as the ratio of input voltage to output voltage of the power converter. the voltage conversion ratio may be varied or perturbed to slowly approach maximum power on the output terminals. the term “slowly” as used herein is relative to the response time of the mppt circuit associated with the load (e.g., at the output of the power converter). the conversion ratio may be selected to achieve maximum power. since the output power from the power converter approaches slowly maximum power, the mppt circuit associated with the load responds accordingly and locks onto the predetermined output voltage at maximum output power. d. the maximum power point tracking circuit associated with the load during the course of its operation may perturb its voltage or current input (output to the power converter). 
the power converter may include a control circuit to set the input power received at the input terminals of the power converter to the maximum power point and a control circuit configured to sense output voltage. the conversion ratio of the power conversion is slowly varied by the control circuit to slowly approach the selected conversion ratio and the predetermined output voltage at the maximum power point. e. the features of paragraphs c and d are not exclusive and may be used in combination. if a change in output voltage at the output of the power converter is sensed, then the conversion ratio of the power conversion is slowly varied by the control circuit to slowly approach the selected conversion ratio and the predetermined output voltage. otherwise, if a substantial change in output voltage is not sensed, the control circuit is configured to vary the output voltage to slowly approach the desired conversion ratio while the mppt circuit approaches the maximum power point. brief description of the drawings the present disclosure is illustrated by way of example and not limited in the accompanying figures in which like reference numerals indicate similar elements and in which: fig. 1 illustrates a conventional centralized power harvesting system using dc power sources; fig. 2 illustrates current versus voltage characteristic curves for one serial string of dc sources; fig. 3 illustrates a distributed power harvesting system, according to embodiments, using dc power sources; figs. 4a and 4b illustrate the operation of the system of fig. 3 under different conditions, according to embodiments; fig. 4c illustrates a distributed power harvesting system, according to embodiments, wherein the inverter controls the input current; fig. 5 illustrates a distributed power harvesting system, according to other embodiments, wherein the voltage at the input of the inverter is controlled; fig. 6 illustrates an exemplary dc-to-dc converter according to embodiments; fig.
7 illustrates a power converter including control features according to various embodiments; fig. 8a illustrates graphically behavior of power output from solar panels as a function of output current in a conventional system; fig. 8b illustrates graphically power input or output versus output current from one photovoltaic module or a system of series/parallel connected photovoltaic modules and/or strings; fig. 8c illustrates a block diagram of a distributed power harvesting system according to various embodiments; fig. 8d illustrates graphically power output as a function of current modified according to various embodiments; fig. 8e illustrates a circuit for modifying output power according to various embodiments; fig. 8f illustrates a process of power conversion and tracking maximum power, according to various embodiments; fig. 8g illustrates a process for operating an inverter equipped with an mppt module according to various embodiments; fig. 9 illustrates a simplified block diagram of a distributed power harvesting system according to various embodiments; fig. 9a and fig. 9b illustrate processes performed in parallel at the power source and at the maximum power point tracking circuit, respectively, according to various embodiments; fig. 9c illustrates graphically variation of power output from one or more photovoltaic modules as a function of time, according to various embodiments; fig. 10a and fig. 10b illustrate processes performed in parallel at the photovoltaic module and maximum power point tracking circuit, respectively, according to various embodiments. the foregoing and/or other aspects will become apparent from the following detailed description when considered in conjunction with the accompanying drawing figures. detailed description reference will now be made in detail to embodiments, examples of which are illustrated in the accompanying drawings, wherein like reference numerals refer to the like elements throughout.
the embodiments are described below to explain examples by referring to the figures. a conventional installation of solar power system 10 is illustrated in fig. 1 . since the voltage provided by each individual solar panel 101 may be low, several panels may be connected in series to form a string of panels 103 . for a large installation, when higher current may be utilized, several strings 103 may be connected in parallel to form the overall system 10 . solar panels 101 may be mounted outdoors, and their leads may be connected to a maximum power point tracking (mppt) module 107 and then to an inverter 104 . the mppt 107 may be implemented as part of the inverter 104 . the harvested power from the dc sources may be delivered to the inverter 104 , which converts the fluctuating direct-current (dc) into alternating-current (ac) having a desired voltage and frequency at the inverter output, which may be, e.g., 110v or 220v at 60 hz, or 220v at 50 hz. in some examples, inverters that produce 220v may then have their output split into two 110v feeds in an electric box. the ac current from the inverter 104 may then be used for operating electric appliances or fed to the power grid. alternatively, if the installation is not tied to the grid, the power extracted from inverter 104 may be directed to a conversion and charge/discharge circuit to store the excess power created as charge in batteries. in case of a battery-tied application, the inversion stage might be skipped altogether, and the dc output of the mppt stage 107 may be fed into the charge/discharge circuit. as noted above, each solar panel 101 supplies relatively very low voltage and current. a challenge facing the solar array designer may be to produce a standard ac current at 120v or 220v root-mean-square (rms) from a combination of the low voltages of the solar panels.
the delivery of high power from a low voltage may utilize very high currents, which may cause large conduction losses on the order of the second power of the current (i²). furthermore, a power inverter, such as the inverter 104 , which may be used to convert dc current to ac current, may be most efficient when its input voltage is slightly higher than its output rms voltage multiplied by the square root of 2. hence, in many applications, the power sources, such as the solar panels 101 , may be combined in order to reach the correct voltage or current. a common method may be to connect the power sources in series in order to reach the desirable voltage and in parallel in order to reach the desirable current, as shown in fig. 1 . a large number of the panels 101 may be connected into a string 103 and the strings 103 may be connected in parallel to the power inverter 104 . the panels 101 may be connected in series in order to reach the minimal voltage for the inverter. multiple strings 103 may be connected in parallel into an array to supply higher current, so as to enable higher power output. while this configuration may be advantageous in terms of cost and architecture simplicity, several drawbacks have been identified for such architecture. one drawback may be inefficiencies caused by non-optimal power draw from each individual panel, as explained below. the output of the dc power sources may be influenced by many conditions. therefore, to maximize the power draw from each source, one may need to draw the combination of voltage and current that provides the peak power for the currently prevailing conditions of the power source. as conditions change, the combination of voltage and current draw may need to be changed as well. fig. 2 illustrates an example of one serial string of dc sources, e.g., solar panels 101 a - 101 d, and mppt circuit 107 integrated with inverter 104 .
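two quantitative points made above can be checked directly: the inverter's dc input should exceed the ac peak voltage (rms times the square root of 2), and conduction loss grows with the square of the current (i²r). a small sketch, with illustrative function names:

```python
import math

def min_dc_link_voltage(ac_rms):
    """The inverter's DC input must exceed the AC peak, i.e. rms * sqrt(2)."""
    return ac_rms * math.sqrt(2)

def conduction_loss(power_w, voltage_v, resistance_ohm):
    """I^2 * R loss when delivering power_w at voltage_v through a wire
    of resistance resistance_ohm."""
    current = power_w / voltage_v
    return current ** 2 * resistance_ohm
```

for a 220 v rms output the dc link must therefore exceed roughly 311 v, and delivering the same power at 10 times the voltage cuts the conduction loss by a factor of 100, which is why series connection to a high string voltage is attractive.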
the current versus voltage (iv) characteristics are plotted ( 210 a - 210 d ) to the left of each dc source 101 . for each dc source 101 , the current decreases as the output voltage increases. at some voltage value, the current goes to zero, and in some applications may assume a negative value, meaning that the source becomes a sink. bypass diodes may be used to prevent the source from becoming a sink. the power output of each source 101 , which may be equal to the product of current and voltage (p=i*v), varies depending on the voltage across the source. at a certain current and voltage, close to the falling off point of the current, the power reaches its maximum. it may be desirable to operate a power generating power source (e.g., photovoltaic panel, cell, etc.) at this maximum power point. the purpose of the mppt may be to find this point and operate the system at this point to draw the maximum power from the sources. in a typical, conventional solar panel array, different algorithms and techniques may be used to optimize the integrated power output of the system 10 using the mppt module 107 . the mppt module 107 may receive the current extracted from all of the solar panels together and may track the maximum power point for this current to provide the maximum average power such that if more current is extracted, the average voltage from the panels starts to drop, thus lowering the harvested power. mppt module 107 maintains a current that yields the maximum average power from the overall system 10 . however, since sources 101 a - 101 d may be connected in series to a single mppt 107 , the mppt may select a single power point, which would be somewhat of an average of the maximum power points (mpp) of each of the serially connected sources. in practice, it may be very likely that the mppt would operate at an i-v point that may be optimum to only a few or none of the sources. in the example of fig. 
2 , each of the sources operates at the same current since the sources are connected in series, but the maximum power point for each source (indicated by a dot on curves 210 a - 210 d ) may be at different currents. thus, the current operating point selected by mppt 107 may be the maximum power point for source 101 b, but may be off the maximum power point for sources 101 a, 101 c and 101 d. consequently, the arrangement may not be operated at the best achievable efficiency. turning back to the example of system 10 of fig. 1 , fixing a predetermined constant output voltage from the strings 103 may cause solar panels 101 to supply lower output power than otherwise possible. further, each string 103 carries a single current that is passed through all of solar panels 101 along string 103 . if solar panels 101 are mismatched due to manufacturing differences, aging, malfunction, or placement under different shading conditions, the current, voltage and power output of each panel may be different. forcing a single current through all of panels 101 of string 103 may cause individual panels 101 to work at a non-optimal power point and can also cause panels 101 which are highly mismatched to generate “hot spots” due to the high current flowing through them. due to these and other drawbacks of conventional centralized methods of mppt, panels 101 may be matched improperly. in some cases, external diodes may be used to bypass panels 101 that are highly mismatched. in conventional multiple string configurations all strings 103 may be composed of exactly the same number of solar panels and panels 101 may be selected of the same model and may be installed at exactly the same spatial orientation, being exposed to the same sunlight conditions at all times. installation according to these constraints may be very costly. 
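the mismatch loss described above can be illustrated numerically. the following sketch is not from the description; it uses an assumed linear i-v model and illustrative panel parameters to compare a single string-wide mppt (one common current forced through four mismatched sources, as in fig. 2) against harvesting each source at its own maximum power point.

```python
# Illustrative sketch (assumed linear I-V model, hypothetical panel values):
# a single common string current cannot hit every source's MPP at once.

def power_at(current, i_sc, v_oc):
    """Power of a source with a linear I-V curve at a given string current."""
    if current >= i_sc:
        return 0.0
    voltage = v_oc * (1.0 - current / i_sc)
    return current * voltage

# assumed (short-circuit current, open-circuit voltage) for four mismatched panels
panels = [(8.0, 40.0), (7.0, 40.0), (6.0, 40.0), (5.0, 40.0)]

def best_common_current(panels, steps=1000):
    """Single-MPPT case: scan for the one current maximizing total string power."""
    best_i, best_p = 0.0, 0.0
    for k in range(1, steps):
        i = 8.0 * k / steps
        p = sum(power_at(i, isc, voc) for isc, voc in panels)
        if p > best_p:
            best_i, best_p = i, p
    return best_i, best_p

def per_panel_mpp(panels):
    """Per-panel MPPT: for a linear I-V curve the MPP sits at i_sc/2, v_oc/2."""
    return sum((isc / 2) * (voc / 2) for isc, voc in panels)

_, p_string = best_common_current(panels)
p_distributed = per_panel_mpp(panels)
print(round(p_string, 1), round(p_distributed, 1))
```

under these assumptions the per-panel harvest exceeds the best single-current harvest, and the gap grows as the sources become more mismatched.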
during installation of a solar array according to the conventional configurations 10 , the installer can verify the correctness of the installation and performance of the solar array by using test equipment to check the current-voltage characteristics of each panel, each string and the entire array. in practice, however, individual panels and strings may be either not tested at all or tested only prior to connection. current measurement may be performed by a series connection to the solar array such as with a series resistor in the array, which is typically not convenient. instead, typically only high-level pass/fail testing of the overall installation is performed. after the initial testing of the installation, the solar array may be connected to inverter 104 , which may include a monitoring module, which monitors performance of the entire array. the performance information gathered from monitoring within inverter 104 may include integrated power output of the array and the power production rate, but the information lacks any fine details about the functioning of individual solar panels 101 . therefore, the performance information provided by monitoring at the inverter 104 may be insufficient to understand if power loss may be due to environmental conditions, from malfunctions or from poor installation or maintenance of the solar array. furthermore, integrated information may not pinpoint which of solar panels 101 are responsible for a detected power loss. fig. 3 illustrates a distributed power harvesting configuration 30 , according to an embodiment. configuration 30 enables connection of multiple power sources, for example, solar panels 101 a - 101 d, into a single power supply. in one aspect, the series string of all of the solar panels may be coupled to an inverter 304 . in another aspect, several serially connected strings of solar panels may be connected to a single inverter 304 . 
the inverter 304 may be replaced by other elements, such as, e.g., a charging regulator for charging a battery bank. in configuration 30 , each solar panel 101 a - 101 d may be connected to a separate power converter circuit 305 a - 305 d. one solar panel 101 together with its connected power converter circuit forms a module, e.g., photovoltaic module 302 (only one of which is labeled). each converter 305 a - 305 d adapts optimally to the power characteristics of the connected solar panel 101 a - 101 d and transfers the power efficiently from converter input to converter output. the converters 305 a - 305 d may be buck converters, boost converters, buck/boost converters, flyback or forward converters, etc. the converters 305 a - 305 d may also contain a number of component converters, for example a serial connection of a buck and a boost converter. each converter 305 a - 305 d may include a control circuit 311 that receives a feedback signal, not from the converter's output current or voltage, but rather from the converter's input coming from the solar panel 101 . an input sensor measures an input parameter, such as input power, input current and/or input voltage, and sets the input power. an example of such a control circuit may be a maximum power point tracking (mppt) circuit. the mppt circuit of the converter locks the input voltage and current from each solar panel 101 a - 101 d to its optimal power point. in the converters 305 a - 305 d, according to aspects, a controller within converter 305 monitors the voltage and current at the converter input terminals and determines the pulse width modulation (pwm) of the converter in such a way that maximum power may be extracted from the attached panel 101 a - 101 d. the controller of the converter 305 dynamically tracks the maximum power point at the converter input. 
in various aspects, the feedback loop of control circuit 311 may be closed on the input power in order to track maximum input power rather than closing the feedback loop on the output voltage as performed by conventional dc-to-dc voltage converters (e.g., mppt 107 ). as a result of having a separate control circuit 311 in each converter 305 a - 305 d, and consequently for each solar panel 101 a - 101 d, each string 303 in system 30 may have a different number or different brand of panels 101 a - 101 d connected in series. control circuit 311 of fig. 3 continuously maximizes power on the input of each solar panel 101 a - 101 d to react to changes in temperature, solar radiance, shading or other performance factors that impact that particular solar panel 101 a - 101 d. as a result, control circuit 311 within the converters 305 a - 305 d harvests the maximum possible power from each panel 101 a - 101 d and transfers this power as output power regardless of the parameters impacting the other solar panels. as such, the embodiments shown in fig. 3 continuously track and maintain the input current and the input voltage to each converter 305 at the maximum power point of the connected dc power source. the maximum power of the dc power source that may be input to converter 305 may be also output from converter 305 . the converter output power may be at a current and voltage different from the converter input current and voltage. while maintaining the total power given the minor power loss due to inefficiency of the power conversion, the output current and output voltage from converter 305 may be responsive to requirements of the series connected portion of the circuit. in one embodiment, the outputs of converters 305 a - 305 d may be series connected into a single dc output that forms the input to the load, in this example, inverter 304 . the inverter 304 converts the series connected dc output of the converters into an ac power supply. 
the load, in this case inverter 304 , may regulate the voltage at the load's input using control circuit 320 . this may be, in this example, an independent control loop 320 that holds the input voltage at a predetermined set value, e.g. 400 volts. consequently, the input current of inverter 304 may be dictated by the available power, and this may be the current that flows through all serially connected dc sources. while the outputs of the dc-dc converters 305 are constrained by current and/or voltage regulation at the input of inverter 304 , the current and voltage input to power converter circuit 305 may be independently controlled using control circuit 311 . aspects provide a system and method for combining power from multiple dc power sources 101 into a distributed power supply. according to these aspects, each dc power source 101 , e.g. photovoltaic panel 101 , may be associated with a dc-dc power converter 305 . modules formed by coupling the dc power sources 101 to their associated converters 305 may be coupled in series to provide a string of modules. the string of modules may be then coupled to inverter 304 having its input voltage fixed. a maximum power point tracking control circuit 311 in each converter 305 harvests the maximum power from each dc power source 101 and transfers this power as output from power converter 305 . for each converter 305 , the input power may be converted to the output power, such that the conversion efficiency may be 95% or higher in some situations. further, the controlling may be performed by fixing the input current or input voltage of the converter to the maximum power point and allowing the output voltage of the converter to vary. for each power source 101 , one or more sensors may monitor the input power level to the associated converter 305 . 
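the input-side feedback described above can be sketched as a simple simulation. this is an illustrative toy model, not the circuit of control circuit 311: the panel is given an assumed linear i-v curve, the duty cycle is treated as directly setting the current drawn from the panel, and the string current value is hypothetical. the loop regulates the converter *input* voltage toward the maximum power point, while the output voltage simply floats to whatever power conservation dictates (vout = p / istring).

```python
# Toy sketch (assumed linear panel model, hypothetical numbers): a feedback
# loop closed on the converter INPUT, as described for control circuit 311.

V_OC, I_SC = 40.0, 8.0      # assumed open-circuit voltage / short-circuit current
V_MPP = V_OC / 2            # MPP voltage of a linear I-V curve
I_STRING = 4.6              # assumed current dictated by the series string

def panel_voltage(i_in):
    """Assumed linear I-V curve: voltage falls as more current is drawn."""
    return V_OC * (1.0 - i_in / I_SC)

duty = 0.1                  # duty cycle modeled as setting the current draw
for _ in range(200):
    i_in = I_SC * duty
    v_in = panel_voltage(i_in)
    # feedback on the INPUT voltage: draw more current while v_in exceeds V_MPP
    duty += 0.002 if v_in > V_MPP else -0.002

i_in = I_SC * duty
v_in = panel_voltage(i_in)
v_out = (v_in * i_in) / I_STRING   # output voltage floats with string current
print(round(v_in, 1), round(v_out, 1))
```

the loop settles at the panel's mpp voltage, and the output voltage is whatever the series-connected portion of the circuit requires, matching the text's separation of input control from output behavior.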
in some embodiments, a microcontroller may perform the maximum power point tracking and control in each converter 305 by using pulse width modulation to adjust the duty cycle used for transferring power from the input to the output. an aspect may provide a greater degree of fault tolerance, maintenance and serviceability by monitoring, logging and/or communicating the performance of each solar panel. in various embodiments, the microcontroller that may be used for maximum power point tracking may also be used to perform the monitoring, logging and communication functions. these functions allow for quick and easy troubleshooting during installation, thereby significantly reducing installation time. these functions may be also beneficial for quick detection of problems during maintenance work. aspects allow easy location, repair, or replacement of failed solar panels. when repair or replacement may not be feasible, bypass features provide increased reliability. in an aspect, arrays of solar cells are provided where the power from the cells may be combined. each converter 305 may be attached to a single solar cell, or a plurality of cells connected in series, in parallel, or both, e.g., parallel connection of strings of serially connected cells. in an embodiment, each converter 305 may be attached to one or more panels of a photovoltaic string. however, while applicable in the context of solar power technology, the aspects may be used in any distributed power network using dc power sources. for example, they may be used in batteries with numerous cells or hybrid vehicles with multiple fuel cells on board. the dc power sources may be solar cells, solar panels, electrical fuel cells, electrical batteries, and the like. further, although the discussion below relates to combining power from an array of dc power sources into a source of ac voltage, the aspects may also apply to combining power from dc sources into another dc voltage. 
in conventional dc-to-dc voltage converters, a controller within the converter may monitor the current or voltage at the input, and the voltage at the output. the controller may also determine the appropriate pulse width modulation (pwm) duty cycle to fix the output voltage to the predetermined value by increasing the duty cycle if the output voltage drops. accordingly, the conventional converter may include a feedback loop that closes on the output voltage and uses the output voltage to further adjust and fine-tune the output voltage from the converter. as a result of changing the output voltage, the current extracted from the input may be also varied. figs. 4a and 4b illustrate an operation of the system of fig. 3 under different conditions, according to embodiments. an exemplary configuration 40 may be similar to configuration 30 of fig. 3 . in the example shown, ten dc power sources 101 / 1 through 101 / 10 may be connected to ten power converters 305 / 1 through 305 / 10 , respectively. the modules formed by the dc power sources 101 and their connected converters 305 may be coupled together in series to form a string 303 . in one embodiment, the series-connected converters 305 may be coupled to a dc-to-ac inverter 404 . the dc power sources may be solar panels 101 and the example may be discussed with respect to solar panels as one illustrative case. each solar panel 101 may have a different power output due to manufacturing tolerances, shading, or other factors. for the purpose of the present example, an ideal case may be illustrated in fig. 4a , where efficiency of the dc-to-dc conversion may be assumed to be 100% and the panels 101 may be assumed to be identical. in some aspects, efficiencies of the converters may be quite high and range at about 95%-99%. so, the assumption of 100% efficiency may not be unreasonable for illustration purposes. 
moreover, according to embodiments, each of the dc-dc converters 305 may be constructed as a power converter, i.e., it transfers to its output the entire power it receives in its input with very low losses. power output of each solar panel 101 may be maintained at the maximum power point for the panel by a control loop 311 within the corresponding power converter 305 . in the example shown in fig. 4a , all of panels 101 may be exposed to full sun illumination and each solar panel 101 provides 200 w of power. consequently, the mppt loop may draw the current and voltage level that will transfer the entire 200 w from the panel to its associated converter 305 . that is, the current and voltage dictated by the mppt form the input current iin and input voltage vin to the converter. the output voltage may be dictated by the constant voltage set at the inverter 404 , as will be explained below. the output current iout would then be the total power, i.e., 200 w, divided by the output voltage vout. referring back to conventional system 10 , figs. 1 and 2 , the input voltage to load 104 varies according to the available power. for example, when a lot of sunshine may be available in a solar installation, the voltage input to inverter 104 can vary even up to 1000 volts. consequently, as sunshine illumination varies, the voltage varies with it, and the electrical components in inverter 104 (or other power supplier or load) may be exposed to varying voltage. this tends to degrade the performance of the components and may ultimately cause them to fail. on the other hand, by fixing or limiting the voltage or current to the input of the load or power supplier, e.g., inverter 304 , the electrical components may always be exposed to the same voltage or current and possibly have extended service life. 
for example, the components of the load (e.g., capacitors, switches and coil of the inverter) may be selected so that at the fixed input voltage or current they operate at, say, 60% of their rating. this may improve the reliability and prolong the service life of the component, which may be critical for avoiding loss of service in applications such as solar power systems. as noted above, according to an embodiment, the input voltage to inverter 404 may be controlled by inverter 404 (in this example, kept constant), by way of control loop 420 (similar to control loop 320 of inverter 304 above). for the purpose of this example, assume the input voltage may be kept at 400v (ideal value for inverting to 220 vac). since it is assumed that there may be ten serially connected power converters, each providing 200 w, the input current to the inverter 404 is 2000 w/400v=5 a. thus, the current flowing through each of the converters 305 / 1 - 305 / 10 may be 5 a. this means that in this idealized example each of converters 305 provides an output voltage of 200 w/5 a=40v. now, assume that the mppt for each panel 101 (assuming perfect matching panels) dictates that the maximum power point voltage for each panel is vmpp=32v. this means that the input voltage to each converter 305 would be 32v, and the input current would be 200 w/32v=6.25 a. we now turn to another example, where system 40 may be still maintained at an ideal mode (i.e., perfectly matching dc sources and entire power may be transferred to inverter 404 ), but the environmental conditions may be different for different panels. for example, one dc source may be overheating, may be malfunctioning, or, as in the example of fig. 4b , the ninth solar panel 101 / 9 may be shaded and consequently produces only 40 w of power. since all other conditions as in the example of fig. 4a are kept, the other nine solar panels 101 may be unshaded and still produce 200 w of power. 
the power converter 305 / 9 includes mppt to maintain the solar panel 101 / 9 operating at the maximum power point, which may be now lowered due to the shading. the total power available from the string may be now 9×200 w+40 w=1840 w. since the input to inverter 404 may be still maintained at 400v, the input current to inverter 404 will now be 1840 w/400v=4.6 a. this means that the output of all of the power converters 305 / 1 - 305 / 10 in the string may be at 4.6 a. therefore, for the nine unshaded panels, the converters will output 200 w/4.6 a=43.5v. on the other hand, the converter 305 / 9 attached to the shaded panel 101 / 9 will output 40 w/4.6 a=8.7v. checking the math, the input to inverter 404 can be obtained by adding nine converters providing 43.5v and one converter providing 8.7v, i.e., (9×43.5v)+8.7v=400v. the output of the nine non-shaded panels would still be controlled by the mppt as in fig. 4a , thereby standing at 32v and 6.25 a. on the other hand, since the ninth panel 101 / 9 is shaded, assume its mpp voltage dropped to 28v. consequently, the output current of the ninth panel is 40 w/28v=1.43 a. as can be seen by this example, all of the panels may be operated at their maximum power point, regardless of operating conditions. as shown by the example of fig. 4b , even if the output of one dc source drops dramatically, system 40 still maintains relatively high power output by fixing the voltage input to the inverter, and controlling the input to the converters independently so as to draw power from each dc source at the mpp. as can be appreciated, the benefits of the topology illustrated in figs. 4a and 4b may be numerous. for example, the output characteristics of the serially connected dc sources, such as solar panels, need not match. consequently, the serial string may utilize panels from different manufacturers or panels installed on different parts of the roofs (i.e., at different spatial orientation). 
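the arithmetic of the shaded-panel example above can be checked directly. the following sketch simply reproduces the fig. 4b numbers (nine unshaded 200 w panels, one shaded 40 w panel, inverter input held at 400v) and verifies that the converter output voltages sum back to the inverter's fixed input voltage.

```python
# Numeric check of the fig. 4b example (illustrative, not circuit code).

V_INVERTER = 400.0
panel_powers = [200.0] * 9 + [40.0]          # nine unshaded panels + one shaded

p_total = sum(panel_powers)                  # 9*200 w + 40 w = 1840 w
i_string = p_total / V_INVERTER              # 1840 w / 400 v = 4.6 a
converter_vouts = [p / i_string for p in panel_powers]

print(p_total, i_string)                     # total power and string current
print(round(converter_vouts[0], 1))          # unshaded converter output voltage
print(round(converter_vouts[-1], 1))         # shaded converter output voltage
print(round(sum(converter_vouts), 1))        # outputs sum back to 400 v

# panel-side values are set independently by each converter's mppt:
print(round(200.0 / 32.0, 2))                # unshaded panel current at vmpp=32v
print(round(40.0 / 28.0, 2))                 # shaded panel current at vmpp=28v
```

the check also makes the text's point visible: the string current is common (4.6 a) while each panel's own current is set by its mppt, independent of the others.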
moreover, if several strings are connected in parallel, it may not be necessary that the strings match; rather each string may have different panels or a different number of panels. this topology may also enhance reliability by alleviating the hot spot problem. as shown in fig. 4b , the output of the shaded panel 101 / 9 is 1.43 a, while the current at the output of the unshaded panels is 6.25 a. this discrepancy in current when the components are series connected may cause a large current to be forced through the shaded panel, which may cause overheating and malfunction at this component. however, by the exemplary aspects of the topology shown, the input voltage may be set independently, and the power draw from each panel to its converter may be set independently according to the panel's mpp at each point in time. thus, the current at each panel may be independent of the current draw from the serially connected converters. it may be easily realized that since the power may be optimized independently for each panel, panels may be installed in different facets and directions in building-integrated photovoltaic (bipv) installations. thus, the problem of low power utilization in building-integrated installations may be solved, and more installations may now be profitable. the described system may also easily solve the problem of energy harvesting in low light conditions. even small amounts of light may be enough to make the converters 305 operational, and they then start transferring power to the inverter. if small amounts of power are available, there may be a low current flow, but the voltage will be high enough for the inverter to function, and the power may indeed be harvested. according to embodiments, inverter 404 may include a control loop 420 to maintain an optimal voltage at the input of inverter 404 . in the example of fig. 4b , the input voltage to inverter 404 may be maintained at 400v by the control loop 420 . 
the converters 305 may be transferring substantially all (e.g., >95%) of the available power from the solar panels to the input of the inverter 404 . as a result, the input current to the inverter 404 may be dependent only on the power provided by the solar panels and the regulated set, i.e., constant, voltage at the inverter input. conventional inverter 104 , shown in fig. 1 and fig. 2 , may have a very wide input voltage range to accommodate changing conditions, for example a change in luminance, temperature and aging of the solar array. this may be in contrast to inverter 404 that may be designed according to aspects. the inverter 404 does not utilize a wide input voltage range and may be therefore simpler to design and more reliable. this higher reliability may be achieved, among other factors, by the fact that there may be no voltage spikes at the input to the inverter and thus the components of the inverter experience lower electrical stress and may last longer. when the inverter 404 may be a part of a circuit, the power from the panels may be transferred to a load that may be connected to the inverter. to enable the inverter 404 to work at its optimal input voltage, any excess power produced by the solar array, and not used by the load, may be dissipated. excess power may be handled by selling the excess power to the utility company if such an option is available. for off-grid solar arrays, the excess power may be stored in batteries. yet another option may be to connect a number of adjacent houses together to form a micro-grid and to allow load-balancing of power between the houses. if the excess power available from the solar array is not stored or sold, then another mechanism may be provided to dissipate excess power. the features and benefits explained with respect to figs. 4a and 4b stem, at least partially, from having inverter 404 control the voltage provided at its input. 
conversely, a design may be implemented where inverter 404 controls the current at its input. such an arrangement may be illustrated in fig. 4c . fig. 4c illustrates an embodiment where the inverter controls the input current. power output of each solar panel 101 may be maintained at the maximum power point for the panel by a control loop within the corresponding power converter 305 . in the example shown in fig. 4c , all of the panels may be exposed to full sun illumination and each solar panel 101 provides 200 w of power. consequently, the mppt loop will draw the current and voltage level that will transfer the entire 200 w from the panel to its associated converter. that is, the current and voltage controlled by the mppt form the input current iin and input voltage vin to the converter. the output voltage of the converter may be determined by the constant current set at the inverter 404 , as will be explained below. the output voltage vout would then be the total power, i.e., 200 w, divided by the output current iout. as noted above, according to an embodiment, the input current to inverter 404 may be controlled by the inverter by way of control loop 420 . for the purpose of this example, assume the input current is kept at 5 a. since it is assumed that there may be ten serially connected power converters, each providing 200 w, the input voltage to the inverter 404 is 2000 w/5 a=400v. thus, the current flowing through each of the converters 305 / 1 - 305 / 10 may be 5 a. this means that in this idealized example each of the converters provides an output voltage of 200 w/5 a=40v. now, assume that the mppt for each panel (assuming perfect matching panels) controls the mpp voltage of the panel to vmpp=32v. this means that the input voltage to each converter would be 32v, and the input current would be 200 w/32v=6.25 a. consequently, similar advantages have been achieved by having inverter 404 control the current, rather than the voltage. 
however, unlike the conventional art, changes in the output of the panels may not cause changes in the current flowing to the inverter, as that may be set by the inverter itself. therefore, if inverter 404 is designed to keep the current or the voltage constant, then regardless of the operation of the panels, the current or voltage at the input of inverter 404 will remain constant. fig. 5 illustrates a distributed power harvesting system 50 , according to other embodiments, using dc power sources. fig. 5 illustrates multiple strings 303 coupled together in parallel. each of strings 303 may be a series connection of multiple modules and each of the modules includes a dc power source 101 that may be coupled to a converter 305 . the dc power source may be a solar panel. the output of the parallel connection of the strings 303 may be connected, again in parallel, to a shunt regulator 506 and a load 504 . the load 504 may be an inverter as with the embodiments of figs. 4a and 4b . shunt regulators automatically maintain a constant voltage across their terminals. the shunt regulator 506 may be configured to dissipate excess power to maintain the input voltage at the input to the inverter 504 at a regulated level and prevent the inverter input voltage from increasing. the current which flows through shunt regulator 506 complements the current drawn by inverter 504 in order to ensure that the input voltage of the inverter may be maintained at a constant level, for example at 400v. by fixing the inverter input voltage, the inverter input current may be varied according to the available power draw. this current may be divided between the strings 303 of the series connected converters. when each converter 305 includes a control loop 311 maintaining the converter input voltage at the maximum power point of the associated dc power source, the output power of converter 305 may be determined. the converter power and the converter output current together may determine the converter output voltage. 
the converter output voltage may be used by a power conversion circuit in the converter for stepping up or stepping down the converter input voltage to obtain the converter output voltage from the input voltage as determined by the mppt. fig. 6 illustrates an illustrative example of dc-to-dc converter 305 according to embodiments. dc-to-dc converters may be conventionally used to either step down or step up a varied or constant dc voltage input to a higher or a lower constant voltage output, depending on the requirements of the circuit. however, in the embodiment of fig. 6 the dc-dc converter may be used as a power converter, i.e., transferring the input power to output power, with the input voltage varying according to the maximum power point, while the output current is dictated by the constant input voltage to inverter 304 , 404 , or 504 . that is, the input voltage and current may vary at any time and the output voltage and current may vary at any time, depending on the operating condition of the dc power sources. the converter 305 may be connected to a corresponding dc power source 101 at input terminals 614 and 616 . the converted power of the dc power source 101 may be output to the circuit through output terminals 610 and 612 . between the input terminals 614 and 616 and the output terminals 610 and 612 , the remainder of the converter circuit may be located, which includes input and output capacitors 620 and 640 , back flow prevention diodes 622 and 642 and a power conversion circuit including a controller 606 and an inductor 608 . the inputs 616 and 614 may be separated by a capacitor 620 , which may act as an open circuit to a dc voltage. the outputs 610 and 612 may be also separated by a capacitor 640 that also acts as an open circuit to dc output voltage. these capacitors may be dc blocking or ac-coupling capacitors that short circuit when faced with alternating current of a frequency, which may be selectable. 
capacitor 640 coupled between the outputs 610 and 612 may also operate as a part of the power conversion circuit discussed below. diode 642 may be coupled between the outputs 610 and 612 with a polarity such that current may not backflow into the converter 305 from the positive lead of the output 612 . diode 622 may be coupled between the positive output lead 612 through inductor 608 , which acts as a short for dc current, and the negative input lead 614 with such a polarity to prevent a current from the output 612 to backflow into the solar panel 101 . the dc power source 101 may be a solar panel, solar cell, string of solar panels, or a string of solar cells. a voltage difference may exist between the wires 614 and 616 due to the electron-hole pairs produced in the solar cells of panel 101 . converter 305 may maintain maximum power output by extracting current from the solar panel 101 at its peak power point by continuously monitoring the current and voltage provided by the panel and using a maximum power point tracking algorithm. controller 606 may include an mppt circuit or algorithm for performing the peak power tracking. peak power tracking and pulse width modulation, pwm, may be performed together to achieve the desired input voltage and current. the mppt in the controller 606 may be any conventional mppt, such as, e.g., perturb and observe (p&o), incremental conductance, etc. however, notably, the mppt may be performed on the panel directly, i.e., at the input to the converter, rather than at the output of the converter. the generated power may be then transferred to the output terminals 610 and 612 . the outputs of multiple converters 305 may be connected in series, such that the positive lead 612 of one converter 305 may be connected to the negative lead 610 of the next converter 305 (e.g., as shown in fig. 4a ). in fig. 6 , the converter 305 may be shown as a buck plus boost converter. 
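perturb and observe, named above as one conventional mppt option for controller 606, can be sketched in a few lines. this is a generic textbook form of the algorithm, not the controller's implementation; the quadratic panel curve and the step size are illustrative assumptions, whereas a real controller would measure the panel's actual volts and amps each cycle.

```python
# Minimal perturb-and-observe (P&O) sketch against an assumed panel curve.

def panel_power(v):
    """Assumed panel power curve with its maximum (200 w) at v = 30."""
    return max(0.0, 200.0 - 0.5 * (v - 30.0) ** 2)

def perturb_and_observe(v=20.0, step=0.5, iters=100):
    """Perturb the operating voltage; reverse direction whenever power falls."""
    p_prev = panel_power(v)
    direction = 1.0
    for _ in range(iters):
        v += direction * step
        p = panel_power(v)
        if p < p_prev:          # power dropped: the perturbation overshot the MPP
            direction = -direction
        p_prev = p
    return v

v_mpp = perturb_and_observe()
print(round(v_mpp, 1))
```

as the sketch shows, p&o does not settle exactly on the mpp but oscillates within one perturbation step of it, which is why the step size trades tracking speed against steady-state ripple.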
the term “buck plus boost” as used herein refers to a buck converter directly followed by a boost converter as shown in fig. 6, which may also appear in the literature as a “cascaded buck-boost converter”. if the voltage is to be lowered, the boost portion may be shorted (e.g., fet switch 650 statically closed). if the voltage is to be raised, the buck portion may be shorted (e.g., fet switch 630 statically closed). the term “buck plus boost” differs from the buck/boost topology, a classic topology that may be used when voltage is to be raised or lowered. the efficiency of the buck/boost topology may be inherently lower than that of a buck plus boost converter. additionally, for given requirements, a buck/boost converter may need bigger passive components than a buck plus boost converter in order to function. therefore, the buck plus boost topology of fig. 6 may have a higher efficiency than the buck/boost topology. however, the circuit of fig. 6 may have to continuously decide whether it is bucking (operating the buck portion) or boosting (operating the boost portion). in some situations, when the desired output voltage is similar to the input voltage, both the buck and boost portions may be operational. the controller 606 may include a pulse width modulator (pwm) or a digital pulse width modulator (dpwm) to be used with the buck and boost converter circuits. the controller 606 controls both the buck converter and the boost converter and determines whether a buck or a boost operation is to be performed. in some circumstances both the buck and boost portions may operate together. that is, as explained with respect to the embodiments of figs. 4a and 4b, the input voltage and input current may be selected independently of the selection of output current and output voltage. moreover, the selection of either input or output values may change at any given moment depending on the operation of the dc power sources. therefore, in the embodiment of fig.
6 the converter may be constructed so that at any given time a selected value of input voltage and input current may be up-converted or down-converted depending on the output requirement. in one implementation, an integrated circuit (ic) 604 may be used that incorporates some of the functionality of converter 305. ic 604 may be a single asic able to withstand harsh temperature extremes present in outdoor solar installations. asic 604 may be designed for a high mean time between failures (mtbf) of more than 25 years. however, a discrete solution using multiple integrated circuits may also be used in a similar manner. in the exemplary embodiment shown in fig. 6, the buck plus boost portion of the converter 305 may be implemented as the ic 604. practical considerations may lead to other segmentations of the system. for example, in one embodiment, the ic 604 may include two ics: one analog ic, which handles the high currents and voltages in the system, and one simple low-voltage digital ic, which includes the control logic. the analog ic may be implemented using power fets (which may alternatively be implemented in discrete components), fet drivers, a/ds, and the like. the digital ic may form the controller 606. in the exemplary circuit shown, the buck converter includes the input capacitor 620, transistors 628 and 630, a diode 622 positioned in parallel to transistor 628, and an inductor 608. the transistors 628 and 630 may each have a parasitic body diode 624 and 626, respectively. in the exemplary circuit shown, the boost converter includes the inductor 608, which may be shared with the buck converter, transistors 648 and 650, a diode 642 positioned in parallel to transistor 650, and the output capacitor 640. the transistors 648 and 650 may each have a parasitic body diode 644 and 646, respectively.

fig. 7 illustrates another illustrative embodiment of a power converter 305, according to embodiments. fig.
7 highlights, among others, a monitoring and control functionality of a dc-to-dc converter 305 , according to embodiments. a dc voltage source 101 is also shown in the figure. portions of a simplified buck and boost converter circuit are shown for converter 305 . the portions shown include the switching transistors 728 , 730 , 748 and 750 and the common inductor 708 . each of the switching transistors may be controlled by a power conversion controller 706 . the power conversion controller 706 includes the pulse-width modulation (pwm) circuit 733 , and a digital control machine 743 including a protection portion 737 . the power conversion controller 706 may be coupled to microcontroller 790 , which includes an mppt algorithm 719 , and may also include a communication module 709 , a monitoring and logging module 711 , and a protection module 735 . a current sensor 703 may be coupled between the dc power source 101 and the converter 305 , and output of the current sensor 703 may be provided to the digital control machine 743 through an associated analog to digital converter 723 . a voltage sensor 704 may be coupled between the dc power source 101 and the converter 305 and output of the voltage sensor 704 may be provided to the digital control machine 743 through an associated analog to digital converter 724 . the current sensor 703 and the voltage sensor 704 may be used to monitor current and voltage output from the dc power source, e.g., the solar panel 101 . the measured current and voltage may be provided to the digital control machine 743 and may be used to maintain the converter input power at the maximum power point. the pwm circuit 733 controls the switching transistors of the buck and boost portions of the converter circuit. the pwm circuit may be a digital pulse-width modulation (dpwm) circuit. 
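the buck-versus-boost decision described earlier — operate the buck portion, the boost portion, or both when the desired output voltage is close to the input voltage — can be sketched as a simple mode selector. the threshold band around a unity conversion ratio is an illustrative assumption, not a value from the disclosure:

```python
def select_mode(v_in, v_out_target, band=0.05):
    """Choose the operating mode of a buck plus boost converter.

    When the target output is well below the input, only the buck stage
    switches (the boost FET is statically closed); when well above, only
    the boost stage switches (the buck FET is statically closed); within
    a band around a unity conversion ratio both stages operate.
    The band width is an illustrative assumption.
    """
    ratio = v_out_target / v_in
    if ratio < 1.0 - band:
        return "buck"        # boost FET statically closed
    if ratio > 1.0 + band:
        return "boost"       # buck FET statically closed
    return "buck-boost"      # both stages active near unity ratio
```

a real controller such as 606/706 would make this decision continuously as the mppt moves the input operating point and the series string dictates the output voltage.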
outputs of the converter 305 taken at the inductor 708 and at the switching transistor 750 may be provided to the digital control machine 743 through analog to digital converters 741 , 742 , so as to control the pwm circuit 733 . a random access memory (ram) module 715 and a non-volatile random access memory (nvram) module 713 may be located outside the microcontroller 790 but coupled to the microcontroller 790 . a temperature sensor 779 and one or more external sensor interfaces 707 may be coupled to the microcontroller 790 . the temperature sensor 779 may be used to measure the temperature of the dc power source 101 . a physical interface 717 may be coupled to the microcontroller 790 and used to convert data from the microcontroller into a standard communication protocol and physical layer. an internal power supply unit 739 may be included in the converter 305 . in various embodiments, the current sensor 703 may be implemented by various techniques used to measure current. in one embodiment, the current measurement module 703 may be implemented using a very low value resistor. the voltage across the resistor will be proportional to the current flowing through the resistor. in another embodiment, the current measurement module 703 may be implemented using current probes, which use the hall effect to measure the current through a conductor without adding a series resistor. after translating the current measurement to a voltage signal, the data may be passed through a low pass filter and then digitized. the analog to digital converter associated with the current sensor 703 may be shown as the a/d converter 723 in fig. 7 . aliasing effect in the resulting digital data may be avoided by selecting an appropriate resolution and sample rate for the analog to digital converter. if the current sensing technique does not utilize a series connection, then the current sensor 703 may be connected to the dc power source 101 in parallel. 
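the shunt-resistor current measurement described above amounts to Ohm's law plus adc scaling. a minimal sketch follows; the reference voltage, bit depth, shunt value, and amplifier gain are all illustrative assumptions, not component values from the patent:

```python
def adc_counts_to_current(counts, v_ref=3.3, bits=12, r_shunt=0.005, gain=20.0):
    """Convert an ADC reading across an amplified shunt resistor to amperes.

    The voltage across a very low value shunt resistor is proportional
    to the current through it. Here that voltage is assumed to be
    amplified by `gain` before being digitized by an ADC with `bits`
    of resolution and `v_ref` full scale. All component values are
    illustrative.
    """
    v_adc = counts * v_ref / (2 ** bits - 1)   # voltage seen by the ADC
    v_shunt = v_adc / gain                     # undo the amplifier gain
    return v_shunt / r_shunt                   # Ohm's law: I = V / R
```

with these assumed values, a full-scale reading corresponds to 3.3 v at the adc, 0.165 v across the shunt, and therefore 33 a of panel current.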
in one embodiment, the voltage sensor 704 uses simple parallel voltage measurement techniques in order to measure the voltage output of the solar panel. the analog voltage may be passed through a low pass filter in order to minimize aliasing. the data may be then digitized using an analog to digital converter. the analog to digital converter associated with the voltage sensor 704 may be shown as the a/d converter 724 in fig. 7 . the a/d converter 724 has sufficient resolution to generate an adequately sampled digital signal from the analog voltage measured at the dc power source 101 that may be a solar panel. the current and voltage data collected for tracking the maximum power point at the converter input may be used for monitoring purposes also. an analog to digital converter with sufficient resolution may correctly evaluate the panel voltage and current. however, to evaluate the state of the panel, even low sample rates may be sufficient. a low-pass filter makes it possible for low sample rates to be sufficient for evaluating the state of the panel. the current and voltage data may be provided to the monitoring and logging module 711 for analysis. temperature sensor 779 enables the system to use temperature data in the analysis process. the temperature may be indicative of some types of failures and problems. furthermore, in the case that the power source may be a solar panel, the panel temperature may be a factor in power output production. the one or more optional external sensor interfaces 707 enable connecting various external sensors to the converter 305 . external sensors 707 may be used to enhance analysis of the state of the solar panel 101 , or a string or an array formed by connecting the solar panels 101 . examples of external sensors 707 include ambient temperature sensors, solar radiance sensors, and sensors from neighboring panels. external sensors may be integrated into the converter 305 instead of being attached externally. 
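the low-pass filtering step that precedes digitization (and makes low sample rates sufficient for state monitoring) can be modeled digitally as a first-order IIR filter. this is a generic exponential-smoothing sketch, not the analog filter of the disclosure; the smoothing coefficient is an illustrative assumption:

```python
def low_pass(samples, alpha=0.1):
    """First-order IIR low-pass filter (exponential smoothing).

    `alpha` sets the effective cutoff: smaller values filter more
    aggressively. Shown only to illustrate how band-limiting a noisy
    sensor signal suppresses high-frequency content before sampling.
    """
    out = []
    y = samples[0]           # initialize state to the first sample
    for x in samples:
        y = y + alpha * (x - y)   # move a fraction alpha toward the input
        out.append(y)
    return out
```

a constant (dc) input passes through unchanged, while a rapidly alternating component is attenuated — which is exactly the property that prevents aliasing when the filtered signal is digitized at a low rate.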
in one embodiment, the information acquired from the current and voltage sensors 703 , 704 and the optional temperature and external sensors 707 may be transmitted to a central analysis station for monitoring, control, and analysis using the communications interface 709 . the central analysis station is not shown in the figure. the communication interface 709 connects a microcontroller 790 to a communication bus. the communication bus can be implemented in several ways. in one embodiment, the communication bus may be implemented using an off-the-shelf communication bus such as ethernet or rs422. other methods such as wireless communications or power line communications, which could be implemented on the power line connecting the panels, may also be used. if bidirectional communication is used, the central analysis station may request the data collected by the microcontroller 790 . alternatively or in addition, the information acquired from sensors 703 , 704 , 707 may be logged locally using the monitoring and logging module 711 in local memory such as the ram 715 or the nvram 713 . analysis of the information from sensors 703 , 704 , 707 enables detection and location of many types of failures associated with power loss in solar arrays. smart analysis can also be used to suggest corrective measures such as cleaning or replacing a specific portion of the solar array. analysis of sensor information can also detect power losses caused by environmental conditions or installation mistakes and prevent costly and difficult solar array testing. consequently, in one embodiment, the microcontroller 790 simultaneously maintains the maximum power point of input power to the converter 305 from the attached dc power source or solar panel 101 based on the mppt algorithm in the mppt module 719 , and manages the process of gathering the information from sensors 703 , 704 , 707 . 
the collected information may be stored in the local memory 713 , 715 and transmitted to an external central analysis station. in one embodiment, the microcontroller 790 may use previously defined parameters stored in the nvram 713 in order to operate converter 305 . the information stored in the nvram 713 may include information about the converter 305 such as serial number, the type of communication bus used, the status update rate and the id of the central analysis station. this information may be added to the parameters collected by the sensors before transmission. converters 305 may be installed during the installation of the solar array or retrofitted to existing installations. in both cases, converters 305 may be connected to a panel junction connection box or to cables connecting the panels 101 . each converter 305 may be provided with the connectors and cabling to enable easy installation and connection to solar panels 101 and panel cables. in one embodiment, physical interface 717 may be used to convert to a standard communication protocol and physical layer so that during installation and maintenance, the converter 305 may be connected to one of various data terminals, such as a computer or pda. analysis may then be implemented as software, which will be run on a standard computer, an embedded platform or a proprietary device. the installation process of converters 305 may include connecting each converter 305 to a solar panel 101 . one or more of sensors 703 , 704 , 707 may be used to ensure that the solar panel 101 and the converter 305 may be properly coupled together. during installation, parameters such as serial number, physical location and the array connection topology may be stored in the nvram 713 . these parameters may be used by analysis software to detect future problems in solar panels 101 and arrays. when the dc power sources 101 are solar panels, one of the problems facing installers of photovoltaic solar panel arrays may be safety. 
the solar panels 101 may be connected in series during the day when there is sunlight. therefore, at the final stages of installation, when several solar panels 101 may be connected in series, the voltage across a string of panels may reach dangerous levels. voltages as high as 600 v may be common in domestic installations; thus, the installer faces a danger of electrocution. the converters 305 that may be connected to the panels 101 may use built-in functionality to prevent such a danger. for example, the converters 305 may include circuitry, or a hardware or software safety module, that limits the output voltage to a safe level until a predetermined minimum load is detected. only after detecting this predetermined load does the microcontroller 790 ramp up the output voltage from the converter 305. another method of providing a safety mechanism may be to use communications between the converters 305 and the associated inverter for the string or array of panels. this communication, which may be, for example, power line communication, may provide a handshake before any significant or potentially dangerous power level is made available. thus, the converters 305 would wait for an analog or digital release signal from the inverter in the associated array before transferring power to the inverter. the above methodology for monitoring, control and analysis of the dc power sources 101 may be implemented on solar panels, on strings or arrays of solar panels, or for other power sources such as batteries and fuel cells.

reference is now made to fig. 8a, which illustrates graphically the behavior of power output from solar panels 101 in fig. 2 (which is input to inverter module 104) as a function of current in conventional system 10. power increases approximately linearly up to a current at which a maximum power point (mpp) may be found, which may be some average over the mpp points of all connected solar panels 101.
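the output-voltage limiting safety mechanism described above — cap the converter output at a safe level until a predetermined minimum load is detected — can be sketched as a simple gating function. the safe-voltage and minimum-load thresholds are illustrative assumptions, not values from the disclosure:

```python
def safe_output_voltage(load_current, v_nominal, v_safe=1.0, i_min_load=0.5):
    """Limit converter output to a safe voltage until a minimum load is seen.

    Until at least `i_min_load` amperes are drawn (indicating the string
    is actually connected to a load such as an inverter), the output is
    capped at `v_safe` volts; only then is the nominal output voltage
    allowed. Thresholds are illustrative.
    """
    if load_current < i_min_load:
        return v_safe        # installer-safe voltage during installation
    return v_nominal         # minimum load detected: ramp to nominal
```

in the embodiment described, the microcontroller 790 would additionally ramp the output up gradually rather than switching it instantaneously, and the handshake-based alternative would replace the load-current test with a release signal from the inverter.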
conventional mppt module 107 locks (e.g., converges) on to the maximum power point. reference is now also made to fig. 8b, which illustrates graphically power input or power output versus output current from series/parallel connected modules 302 or strings 303 (fig. 3). it may be readily seen that, by virtue of control circuit 311 in modules 302, power as a function of current output may be approximately constant. similarly, power as a function of voltage output may be approximately constant. it would be desirable and advantageous to have a system in which modules 302 and/or string 303 of fig. 3 operate with the conventional inverter 104 equipped with an mppt module 107 of fig. 2. however, as shown in fig. 8b, mppt 107 does not have a maximum power peak (versus current or voltage) onto which to lock, and mppt circuit 107 may become unstable with varying or oscillating current/voltage at the input of inverter module 104. in order to avoid this potential instability, according to a feature of various aspects, a maximum power at a particular output voltage or current may be output or presented, at least for a time period, to conventional inverter module 104 equipped with mppt module 107. reference is now made to fig. 8c, which illustrates in a simplified block diagram a photovoltaic distributed power harvesting system 80 including photovoltaic panels 101a-101d connected respectively to power converter circuits 305a-305d. solar panel 101 together with its associated power converter circuit 305 forms photovoltaic module 302. each converter 305a-305d adapts to the power characteristics of the connected solar panel 101a-101d and transfers the power efficiently from converter input to converter output. each converter 305a-305d includes control circuit 311 that receives a feedback signal from the input from solar panel 101. control circuit 311 may be a maximum power point tracking (mppt) control loop.
the mppt loop in converter 305 locks the input voltage and current from each solar panel 101a-101d to its optimal power point (i.e., to converge on the maximum power point). system 80 includes a series and/or parallel connection between the outputs of strings 303 and the input of a conventional inverter 104 with an integrated mppt module 107. inverter 104 with integrated mppt module 107 is designed to be connected directly to the series/parallel-connected outputs of conventional solar panels 101 as in conventional system 10 of fig. 1. referring back to fig. 7, mppt algorithm 719 of microcontroller 790 in converters 305 may, in various embodiments, provide a slight maximum of input power at a predetermined output voltage, current, or conversion ratio into mppt 107. the input power into mppt 107 may be maximized at a predetermined value of output voltage or current. in one embodiment, as shown in fig. 8d, the maximum at the predetermined maximum power point may be very slight, with a total variation of just a few percent to several percent over the entire input range of current or voltage of inverter 104. in other embodiments, a circuit 81 disposed between panels 101 or strings 303 and inverter 104 may be used to present mppt module 107 with a maximum power point onto which to lock (e.g., converge). reference is now made to fig. 8e, which illustrates an embodiment of circuit 81 for generating a maximum power point at the input of mppt module 107 in configuration 80 (fig. 8c), according to an embodiment. circuit 81 may be a power attenuator interposed between parallel-connected strings 303 and mppt module 107. circuit 81 may include a non-linear current sink “f” configured to draw a small amount of current at a particular voltage or voltage range from the dc power line connecting strings 303 to mppt module 107. the output of current sink “f” may be fed into the positive input of operational amplifier a1.
the output of operational amplifier a1 feeds the base of transistor t1, the emitter of which may be connected and fed back to the negative input of operational amplifier a1. the collector of transistor t1 connects to the positive dc power line. the negative dc power line may be connected to the emitter of transistor t1 through a shunt resistor rs.

reference is now made to fig. 8f, which illustrates a simplified method for operating modules 302 and/or strings 303 with inverter 104 equipped with an mppt module 107. reference is also made again to figs. 6 and 7. the output voltage of power converter 305 is sensed (step 801) across output terminals 610 and 612. control circuit 311 may be configured to set (step 803) the input power received at the input terminals 614/616 to a maximum power for a predetermined output voltage point or voltage range, or at a predetermined output current point or current range. the predetermined values may be stored in memory 713 and/or 715 or may be received through communications interface 709. away from the predetermined output voltage or predetermined output current, the control circuit may be configured to set (step 803) the input power received at the input terminals to less than the maximum available power (i.e., decrease the input power in response to the difference between the output current and the predetermined current increasing, and increase the input power towards the maximum available power in response to the difference between the output current and the predetermined current decreasing). in certain variations, the predetermined output current values may be selected such that the output power of module 302 or string 303 is as shown in fig. 8d. the predetermined output voltage values versus output power may be selected in a similar way. while fig.
8d illustrates one possible embodiment, other embodiments may present mppt module 107 with other output power versus current (or voltage) curves that have one or more local maxima to which the mppt 107 can track and lock (e.g., converge). in this way, maximum power point tracking circuit 107, if present, may stably track (step 805) the voltage and/or current point or range. when a maximum is reached (decision block 807), mppt tracking circuit 107 locks (step 809) onto the power point (e.g., the “predetermined point” in fig. 8d). reference is now made to fig. 9, which illustrates in a simplified block diagram a photovoltaic distributed power harvesting system 90 including photovoltaic panels 101a-101d connected respectively to power converter circuits 905a-905d. one solar panel 101 together with its associated connected power converter circuit 905 forms a photovoltaic module 902. each converter 905a-905d adapts to the power characteristics of the connected solar panel 101a-101d and transfers the power efficiently from converter input to converter output. each converter 905a-905d includes a control circuit 900 that receives a feedback signal from input sensor 904. specifically, input current sensors and/or voltage sensors 904 are used to provide the feedback to control circuit 900. control circuit 900 may also receive a signal from output current and/or output voltage sensors 906. inverter 104 with integrated mppt module 107 is designed to be connected directly to the series/parallel-connected outputs of conventional solar panels 101 as in conventional system 10 of fig. 1. although photovoltaic modules 902 may be designed to be integrated with inverters 304, it may be advantageous that each panel module 902 also be integrated with a respective conventional inverter (similar to inverter 104) between the converter 905 output and the serially connected outputs of module 902 (not illustrated).
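the shaping of input power versus output current described for step 803 — draw full available power at the predetermined output current and progressively less away from it, creating a slight peak for a downstream mppt to lock onto — can be sketched as follows. the linear attenuation and its slope are illustrative assumptions; the disclosure only requires that input power decrease as the deviation from the predetermined current grows:

```python
def input_power_setpoint(i_out, i_predetermined, p_max, slope=0.02):
    """Shape converter input power so output power peaks at a chosen current.

    At the predetermined output current the full available power is
    drawn; away from it the input power is reduced in proportion to the
    deviation, creating a (slight) maximum for a downstream MPPT to
    track. `slope` controls how slight the peak is; values are
    illustrative.
    """
    deviation = abs(i_out - i_predetermined)
    factor = max(0.0, 1.0 - slope * deviation)   # 1.0 at the peak, falls off linearly
    return p_max * factor
```

with a small `slope`, the resulting power-versus-current curve varies by only a few percent over the inverter's input range, matching the "very slight" maximum described for fig. 8d.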
system 90 includes a series and/or parallel connection between the outputs of strings 903 and the input of a conventional inverter 104 with an integrated mppt module 107. reference is now made to fig. 8g, which illustrates another method 821 for operating modules 902 and/or strings 903 with inverter 104 equipped with an mppt module 107. in step 823, a scan is made by control circuit 900 by varying the voltage conversion ratio between the input voltage and the output voltage (vout) of a power converter circuit 905. during the variation, multiple measurements may be made (step 825) of the input and/or output power (e.g., by measuring input and output current and voltage) of converter 905 for the different voltage conversion ratios that are set by control circuit 900 during the variation. the power measurements made for each different voltage conversion ratio may then be used to determine (step 827) the maximum power point of the connected photovoltaic source. from the determination of the maximum power point of the connected photovoltaic source, the voltage conversion ratio for the maximum point may be used to set (step 829) the conversion ratio for continued operation of converter 905. the continued operation of converter 905 continues for a time period (step 831) before another variation of the voltage conversion ratio is applied in step 823. reference is now made to the flow diagrams of figs. 9a and 9b, according to various aspects. power converter 905 may control output voltage by varying (step 811) the output voltage from power converter 905. the input voltage to power converter 905 may be maintained at the maximum power point. the conversion ratio, defined as the ratio of input voltage to output voltage, may be varied or perturbed to slowly approach (step 811) maximum power on the output terminals. the term “slowly” as used herein is relative to the response time of mppt circuit 107 associated with load 104.
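the scan/measure/select sequence of method 821 (steps 823-829) can be sketched as a sweep over candidate conversion ratios that keeps the ratio yielding the highest measured power. the callable standing in for "set the ratio, then sample power" is a hypothetical hook, not an API from the disclosure:

```python
def scan_for_best_ratio(measure_power, ratios):
    """Sweep candidate voltage conversion ratios and keep the best one.

    `measure_power` stands in for setting a conversion ratio on the
    converter and then sampling its power; here it may be any callable
    taking a ratio and returning watts. This mirrors the scan (step 823),
    measure (step 825), and determine-maximum (step 827) sequence of
    method 821 in simplified form.
    """
    best_ratio, best_power = None, float("-inf")
    for r in ratios:
        p = measure_power(r)          # set the ratio, then sample power
        if p > best_power:
            best_ratio, best_power = r, p
    return best_ratio, best_power
```

the returned ratio would then be held (step 829) for a period of continued operation (step 831) before the next scan.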
the conversion ratio or output voltage may be selected. by adjusting the conversion ratio of the power converter, the efficiency of the converter can be adjusted, thereby increasing or decreasing the output power for a received input power. thus, in one example, while a maximum power point is maintained at the power converter input, the output can be adjusted to increase the output power to provide a maximum power point for mppt 107 (e.g., the predetermined point in fig. 8d). since the output power from power converter 905 slowly approaches maximum power, mppt circuit 107 responds accordingly and locks onto the output voltage at maximum output power. referring now to fig. 9b, in the meantime mppt circuit 107 associated with load 104 tracks the slow variation of output power from photovoltaic modules 902. in fig. 9c, a graph is shown which indicates the slow variation of output power from photovoltaic modules 902, which varies typically over many seconds (dt). according to various embodiments, the processes of figs. 9a and 9b may be performed in conjunction with other previously described embodiments to move the maximum power point presented to the inputs of mppt circuit 107. for example, the maximum point illustrated in fig. 8d (or another maximum point) may be shifted to a different current and/or voltage such that maximum power is maintained over changing power production and conversion conditions (e.g., light, temperature, faults, etc.) of systems 30/40/50/80/90. the rate of adapting the system (e.g., moving the peak) is slower than the tracking rate of mppt 107, such that the mppt maintains lock (e.g., convergence) on the current/voltage/power at the input of inverter 104 within the power peak (e.g., the “maximum point” in fig. 8d). reference is now made to figs. 10a and 10b, which together illustrate another process that allows systems 30/90 to be integrated with inverter 104 equipped with mppt circuit 107. in fig.
10a, mppt circuit 107 perturbs (step 191) the voltage or current across string 303. control circuit 900 senses (step 195) the voltage or current perturbation of mppt circuit 107. control circuit 900, via sensor 906, in step 197 slowly maximizes output power at a particular voltage conversion ratio of converter 905. input power from a photovoltaic panel 101 may be maximized. in decision block 817, it is determined whether maximum output power has been reached, and in step 193 mppt 107 locks onto the maximum output power. the articles “a” and “an”, as used hereinafter, are intended to mean and be equivalent to “one or more” or “at least one”; for instance, “a direct current (dc) power source” means “one or more direct current (dc) power sources”. aspects of the disclosure have been described in terms of illustrative embodiments thereof. while illustrative systems and methods as described herein embodying various aspects of the present disclosure are shown, it will be understood by those skilled in the art that the disclosure is not limited to these embodiments. modifications may be made by those skilled in the art, particularly in light of the foregoing teachings. for example, each of the features of the aforementioned illustrative examples may be utilized alone or in combination or sub-combination with elements of the other examples. for example, any of the above-described systems and methods or parts thereof may be combined with the other methods and systems or parts thereof described above. for example, one of ordinary skill in the art will appreciate that the steps illustrated in the illustrative figures may be performed in other than the recited order, and that one or more steps illustrated may be optional in accordance with aspects of the disclosure. it will also be appreciated and understood that modifications may be made without departing from the true spirit and scope of the present disclosure.
the description is thus to be regarded as illustrative instead of restrictive on the present disclosure.
183-865-680-086-679
US
[ "US" ]
B29B13/02,B29B17/00,B30B9/30
1990-02-07T00:00:00
1990
[ "B29", "B30" ]
apparatus for the thermal densification of thermoplastic articles
an apparatus for thermally densifying thermoplastic articles, particularly those of the expendable type. the apparatus includes a thermally insulated cabinet having mounted within it a two-section hopper and a convection heating system. the apparatus is adapted for heating the thermoplastic articles placed within the chamber to a temperature effective for the thermal densification of the thermoplastic articles. the convection heating system creates a circulating air flow between the hopper and the cabinet walls. the air flow insulates the hot hopper from the apparatus cabinet and densifies the thermoplastic articles. the apparatus also renders any food particles in or on the thermoplastic articles bacterially inert.
1. an apparatus for the batch thermal densification of thermoplastic articles, comprising: a) a thermally insulated closeable cabinet including a top access with a lid for loading thermoplastic articles and a lower access for the removal of densified thermoplastic material; b) a hopper mounted within the cabinet, the hopper comprised of: i) an upper section; ii) a lower pan chamber having a top and a bottom, the lower pan chamber being in communication with the upper section and the access, wherein the lower pan chamber contains a separate front access; and iii) an air distribution plenum located at the top of the lower pan chamber; c) a convection heating system comprising: i) an air heat exchange chamber having an upper section and a lower section and positioned between the cabinet and the hopper wherein the air heat exchange chamber is adapted to create a circulating air flow around the hopper; ii) an upper heater chamber adjacent to the hopper and in communication with the air heat exchange chamber, the upper heater chamber being adapted to create a circulating air flow around the hopper and into the air distribution plenum; iii) a heater in the heater chamber adapted to heat the circulating air flow to a temperature effective for the thermal densification of thermoplastic articles in the hopper; and iv) an air temperature control responsive to a temperature sensor located in the upper heater chamber, wherein the air temperature control and the means for heating cooperate to maintain the upper heater chamber circulating air flow at a temperature effective for the thermal densification of thermoplastic articles; g) a removable pan located in the lower pan chamber for collecting densified thermoplastic material; and h) a pan heater system adapted to maintain the pan at a temperature suitable for the even filling of the pan with densified thermoplastic material responsive to a pan temperature sensor. 2. 
the apparatus of claim 1 wherein the means for heating the circulating air comprises at least one electrical resistance heater. 3. the apparatus of claim 2 wherein the means for heating the circulating air is adapted to maintain the circulating air within a range of about 149° c. to 232° c. 4. the apparatus of claim 3 further comprising an exhaust air dilution system comprising: a) at least one set of dilution air inlets; b) an exhaust chamber in communication with the at least one set of dilution air inlets; e) a dilution air fan connected to the exhaust duct; and f) means for increasing air flow through the dilution air system when the top access is opened. 5. the apparatus of claim 4 wherein the means for increasing air flow comprises: a) an exhaust duct in communication with the exhaust chamber and located upstream of the dilution air fan; b) an exhaust duct baffle pivotally mounted within the exhaust duct, wherein the exhaust duct baffle contains an opening therethrough; and c) means for pivoting the exhaust duct baffle within the exhaust duct when the top access is opened. 6. the apparatus of claim 5 wherein the means for pivoting is a linkage assembly connected at an upper end to a guide hinge member and at a lower end to the exhaust duct baffle. 7. the apparatus of claim 6 further comprising a means for indicating that the pan is full of densified thermoplastic material. 8. the apparatus of claim 1, wherein the temperature effective for thermal densification renders food remains present in or on the thermoplastic articles bacterially inert.
field of the invention

the present invention relates to an apparatus for increasing the bulk density of discarded thermoplastic articles. more particularly, this invention relates to an apparatus employing a thermal process to densify discarded thermoplastic articles and to render food remains attached to such thermoplastic articles bacterially inert.

background of the invention

although expendable thermoplastic packaging is preferred by suppliers and consumers alike for many applications, many people are now concerned over the disposal of such packaging as landfill space becomes increasingly scarce. packaging materials and containers make up approximately 30 percent of our municipal solid waste stream, with packaging produced from thermoplastics accounting for approximately 13 percent of those packaging materials and containers. greater emphasis is now being placed on the recycling of packaging materials as an important means of reducing our solid waste load. a significant economic problem exists in the collection of plastic packaging of low bulk density. for example, the typical blow-molded one gallon milk bottle produced from high density polyethylene (hdpe) weighs only 60 grams yet occupies a volume in excess of 230 cubic inches. this equates to a bulk density on the order of less than 1 lb/ft³, whereas hdpe in solid block form has a density of approximately 60 lbs/ft³. this difference is even more pronounced for packaging produced from foamed polystyrene, where container bulk densities on the order of 0.25 lb/ft³ are typical even though the density of the polystyrene in solid block form is approximately the same as that of hdpe. newly made foamed polystyrene food containers nested in stacks weigh about 4 to 6 lbs/ft³, which is as dense as the product can be made without destroying its intended use. it is difficult to get discarded material this dense even with some compaction.
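The bulk-density figures in the background above (a 60 g, roughly 230 in³ hdpe milk bottle coming out at just under 1 lb/ft³) follow from straightforward unit conversion. The sketch below is purely illustrative; the function name and structure are our own and not part of the patent.

```python
GRAMS_PER_POUND = 453.592
CUBIC_INCHES_PER_CUBIC_FOOT = 12 ** 3  # 1728

def bulk_density_lb_per_ft3(mass_grams: float, volume_cubic_inches: float) -> float:
    """Bulk density (lb/ft^3) of an article of given mass and envelope volume."""
    mass_lb = mass_grams / GRAMS_PER_POUND
    volume_ft3 = volume_cubic_inches / CUBIC_INCHES_PER_CUBIC_FOOT
    return mass_lb / volume_ft3

# the blow-molded one gallon milk bottle from the text: 60 g in 230 in^3
bottle = bulk_density_lb_per_ft3(60, 230)  # just under 1 lb/ft^3
```

Solid-block hdpe at about 60 lb/ft³ is therefore some sixty times denser than the empty bottle, which is the collection-economics gap the densifier is meant to close.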
as such, it is generally not economically feasible for the recycler to pick up discarded containers from consumers or businesses without some form of incentive to do so. the plastic food packaging that often contains food residue poses further problems. the landfill disposal of thermoplastic packaging is also impacted to some extent by low bulk density. although the problem at the landfill is certainly lessened by the fact that the thermoplastic articles are greatly compacted by the weight of compacting equipment and of subsequently disposed loads, they contribute to the volume of waste in the landfills and add to the cost of collecting and hauling such articles to the disposal site. an industry which has seen a rapid increase in the use of thermoplastic packaging is the fast-food industry. thermoplastic packaging offers many highly desirable characteristics and good economic value. foamed polystyrene is used to form serving trays, hot drink cups, sandwich containers, containers for segregated hot and cold food, and compartmentalized hot food containers. a typical fast-food restaurant may use approximately 20 pounds of foamed polystyrene packaging per day. this small weight is still noteworthy given the fact that the typical sandwich container weighs less than 6 grams or approximately 1/100 of a pound. even if it is assumed that one-half of this packaging material is taken off the premises of the restaurant in the form of carry-out items, a significant bulk volume of material (an equivalent of more than 750 sandwich containers) is left on site for disposal by the restaurant each day. although the volume of material that must be handled in this case is quite large, the weight of recoverable polystyrene material is exceedingly small. if such a restaurant were to sell its recovered thermoplastic material to a recycler, the cost of collecting and transporting this material could easily exceed its value. 
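The restaurant arithmetic above can be checked the same way. Assuming the text's figures (about 20 lb of packaging per day, roughly 6 g per sandwich container, half carried off the premises), a hypothetical sketch:

```python
GRAMS_PER_POUND = 453.592

def containers_left_on_site(daily_packaging_lb: float = 20.0,
                            container_grams: float = 6.0,
                            carry_out_fraction: float = 0.5) -> float:
    """Approximate count of containers disposed of on the premises per day."""
    on_site_lb = daily_packaging_lb * (1.0 - carry_out_fraction)
    return on_site_lb * GRAMS_PER_POUND / container_grams

# with the text's figures this comes to just over 750 containers per day
```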
an added problem is that many of the post-consumer thermoplastic containers have residual food waste present on their inner walls. without very quick collection and recycling, bacterial activity can present a health problem. this complicates the collection process at many locations where very low tonnage is discarded each day. the sheer volume due to the low bulk density and the need to move the material to avoid health and safety issues make accumulating economical amounts for recycling prohibitive. another problem area is the disposal of thermoplastic waste at sea. at sea, waste materials are often collected and disposed of by dumping overboard. while much of the waste will decompose with time, or sink to the bottom of the sea, thermoplastic packaging materials generally do neither and eventually wash up on shore. the problems associated with collecting and storing low bulk density material at sea are more acute because of the limited space available for such tasks. therefore, what is needed to address the disposal problems associated with thermoplastic packaging and containers of low bulk density is an apparatus and method for densifying these discarded thermoplastic articles and for rendering any food remains on these articles bacterially inert.
summary of the invention

according to the present invention, there is provided an apparatus for thermally densifying thermoplastic articles comprising a chamber for placing thermoplastic articles within, means for heating the thermoplastic articles placed within the chamber to a temperature effective for thermal densification of the thermoplastic articles, a temperature sensing means located proximate to the heating means, and means for controlling the heating means responsive to the temperature sensing means so as to provide a temperature within a range from about the temperature effective for thermally densifying thermoplastic articles to a temperature below the point of significant thermal degradation of the thermoplastic articles. in a preferred embodiment, the apparatus of the present invention is effective to render any food remains in or on the thermoplastic articles bacterially inert. a process for the thermal densification of thermoplastic articles is also provided.

therefore, it is an object of the present invention to provide an apparatus to increase the bulk density of discarded thermoplastic articles, packaging and waste materials through a thermal process. it is another object of the present invention to provide an apparatus capable of thermally densifying thermoplastic waste materials produced at a commercial or manufacturing location. it is a further object of the present invention to provide an apparatus for the thermal densification of thermoplastic articles that would render food remains present on such articles bacterially inert. it is still another object of the present invention to provide an apparatus for the thermal densification of thermoplastic waste having utility aboard naval vessels. yet another object of the present invention is to provide an apparatus for the thermal densification of thermoplastic waste or articles having a greatly reduced requirement for insulation in the walls thereof.
another object of the present invention is to provide an apparatus for the thermal densification of thermoplastic waste or articles which utilizes circulating air to insulate the outside surfaces of the apparatus from its hot interior. it is a still further object of the present invention to provide an effective process for the thermal densification of thermoplastic articles. it is yet another object of the present invention to provide a process for the thermal densification of thermoplastic articles that would render food remains present on such articles bacterially inert. it is a still further object of the present invention to provide a process for the thermal densification of thermoplastic waste at sea. other objects and the several advantages of the present invention will become apparent to those skilled in the art upon a reading of the specification and the claims appended thereto.

brief description of the drawings

fig. 1 is a frontal view of one embodiment of a thermal densification apparatus according to the present invention.
fig. 2 is a side view of the fig. 1 embodiment of a thermal densification apparatus.
fig. 3 is a view along section a--a of the fig. 1 embodiment of a thermal densification apparatus.
fig. 4 is a frontal view of a second embodiment of a thermal densification apparatus according to the present invention.
fig. 5 is a side view of the fig. 4 embodiment of a thermal densification apparatus.
fig. 6 is a view along section b--b of the fig. 4 embodiment of a thermal densification apparatus.
fig. 7 is a view in perspective of an alternate embodiment of a thermal densification apparatus according to the present invention.
fig. 8 is a side view of a refuse collection vehicle having two thermal densification systems installed therein.
fig. 9 is an enlarged view in perspective of a portion of the two thermal densification systems of the refuse collection vehicle.
fig. 10 is a front view of a third embodiment of the thermal densification apparatus of the present invention.
fig. 11 is a perspective view of the third embodiment.
fig. 12 is a side elevation of the third embodiment.
fig. 13 is a rear elevation of the third embodiment with the rear cover panels of the controls/blower cabinet and certain components removed.
fig. 13a is a sectional view taken along 13--13 in fig. 13.
fig. 14 is a sectional view of the third embodiment taken along 14--14 in fig. 12.
fig. 15 is a partial cut away perspective view of the third embodiment showing air flow therethrough.
fig. 16 is a sectional elevation taken along 15--15 in fig. 14.
fig. 17 is a partial cut away view of the third embodiment from the side showing the pan carrier assembly.
fig. 18 is a phantom view of the third embodiment including arrows that trace air flow through the apparatus.
fig. 19 is a second phantom view of the third embodiment shown without the cabinet.
fig. 20 is a third phantom view of the third embodiment showing the air flow from the air distribution plenum through the hopper and out of the unit.

detailed description of the invention

the present invention relates to an apparatus and process for the thermal densification of thermoplastic articles, particularly those articles of the expendable type. the apparatus is adapted for primary use at a commercial establishment, such as a fast-food restaurant or a grocery store, and is sized to easily handle the thermal densification of thermoplastic waste produced during a day's business activities. the present invention is best understood by reference to the appended figures, which are given by way of example and not of limitation. referring now to fig. 1, a frontal view of one embodiment of a thermal densification apparatus 1, according to the present invention, is shown.
the apparatus shown is of a size appropriate for use in a commercial establishment, having the ability to house at least one large trash bag (approximately 20 to 40 gallons) of non-densified thermoplastic waste materials inside. the apparatus of fig. 1 has an upper section 2 which has an opening at its top, covered by cover 6. thermoplastic articles are placed inside apparatus 1 by first opening cover 6 by lifting handle 8, causing cover 6 to pivot away from the front of the unit on a pair of hinge members 10. (hinge members 10 may be viewed in more detail by reference to fig. 2.) upon completion of the process of the present invention, the fully densified thermoplastic material may be removed by opening door 18 of lower section 4, using handle 20, and sliding out removable pan 34. (removable pan 34 is shown in fig. 3.) heated process air is exhausted through exhaust stack 16 affixed to exhaust port housing 14. fresh air for use in exhaust stream air dilution (optional) enters at fresh air entry 15. legs 22 and adjustable feet 23 are provided to elevate the bottom of the apparatus from contact with the ground or floor upon which it is located and to level the apparatus. also, casters can be mounted to legs 22 to facilitate movement of the apparatus. the legs also permit air to circulate underneath the apparatus. referring now to fig. 2, a side view of the fig. 1 embodiment of a thermal densification system is presented. as shown, hinge stop 28 is provided to prevent cover 6 from pivoting completely behind the thermal densification unit. enclosure 24 is shown mounted to the rear of the thermal densification unit. enclosure 24 houses the electrical controls for the unit, which include switching systems, temperature controllers, a fusing system, electrical wiring, and an electrical junction board (not shown). access to these components is provided by hinged cover 26. 
for an extra measure of safety, contact switch 11 is provided to disable the heating system when cover 6 is opened for loading. section line a--a is shown for reference thereto from fig. 3. a sectioned view of thermal densification apparatus 1 is presented in fig. 3. as indicated, the section is taken along the line a--a, referred to in fig. 2. the apparatus, including interior walls 50 which in part define heated air circulation chamber 54 and densification chamber 52, cover 6, outer cabinet sections 2 and 4, exhaust stack 16 and exhaust housing 14, is constructed of sheet metal. a wide variety of material is suitable for this application. for example, galvanized steel, aluminum, cold rolled steel, and stainless steel are excellent materials for constructing the apparatus of the present invention with cold rolled steel and stainless steel particularly preferred materials. removable pan 34 may also be constructed of the same sheet metal material used to fabricate the apparatus. pan 34 may be provided with tapered side walls, as shown, to facilitate removal of solidified material. to further facilitate removal of material, the interior surfaces of pan 34 may be coated with a non-stick surface coating such as industrial teflon or stoneware. additional details concerning the apparatus and its operation will now be described by reference to fig. 3. thermoplastic articles are placed within the apparatus through cover 6, and enter densification chamber 52 to begin processing. such articles may be loaded either by dumping individual articles into the apparatus, loosely, or by placing a thermoplastic trash bag which contains such articles inside. 
as can be appreciated, when seeking to segregate materials for recycling by thermoplastic material type, it may not always be desirable to discard the bag together with the articles, as the thermoplastic bag material may differ from the thermoplastic articles which it contains, resulting in significant contamination of the densified material and reducing its recovery value. when the thermoplastic articles are sufficiently reduced in size, the denser mass material will drop into pan 34. thermoplastic articles in chamber 52 are initially heated by a forced-air heating system which is comprised of air inlet blower 30, one to two electrical resistance heaters 36, heated air circulation chamber 54 and a plurality of hot air inlets 32 leading into chamber 52. as can be envisioned, by virtue of the placement of air inlet blower 30, the heated air will circulate around heated air circulation chamber 54, which is defined in part by the outer walls of densification chamber 52, in a counterclockwise manner and pass through hot air inlets 32, at flow rates related to the resistance to flow imparted at each inlet by the thermoplastic articles undergoing the densification process and other factors. greater densification is achieved by the combination of heating provided by resistance heaters 36 which heat circulating air in chamber 52 and by resistance heaters 38 located at the bottom of removable pan 34. heated air is exhausted through exhaust outlet 42 which leads to exhaust port 44 and exhaust stack 16. the air to be exhausted reaches exhaust outlet 42 through upward migration from the densification chamber 52 and pan 34. exhaust and odor dilution may be provided by the use of fresh air pulled in and mixed with the exhaust from exhaust port 44 through the use of optional exhaust dilution fan 46. optional filter 48 can also be employed in the exhaust stream to remove any smoke particles in the air exhausted from the apparatus. 
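Because scfm is referenced to standard conditions (and is therefore roughly proportional to mass flow), the effect of exhaust dilution fan 46 can be estimated with a flow-weighted average. The function below is an illustrative back-of-the-envelope sketch, not part of the patent; the 75 °F ambient figure is an assumption.

```python
def diluted_exhaust_temp_f(process_scfm: float, process_temp_f: float,
                           dilution_scfm: float, ambient_temp_f: float) -> float:
    """Flow-weighted mixing temperature of process exhaust and dilution air."""
    total_scfm = process_scfm + dilution_scfm
    return (process_scfm * process_temp_f
            + dilution_scfm * ambient_temp_f) / total_scfm

# 25 scfm of ~325 F process air diluted with 100 scfm of 75 F fresh air
# lands near 125 F at the stack
```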
with regard to heating requirements, while electrical resistance heating is particularly preferred, any type of heater capable of heating the contents of the apparatus to a temperature effective for the thermal densification of thermoplastic articles is acceptable. a type of electrical heating element which has been demonstrated to have utility in this application in chamber 54 is a serpentine-wound resistance heater. these elements can operate on 120 volts or 240 volts ac, depending upon the wattage used. as can be envisioned, a plurality of these heating elements can be used to more evenly heat the apparatus. as shown in fig. 3, a total of four heating elements are provided, with elements 36 located in heated air circulation chamber 54 to provide forced air heating and elements 38 located below removable pan 34 to provide additional heating mainly from the bottom of the pan. a particularly preferred heating arrangement provides a total heating capacity of from 4,000 to 12,000 watts, an amount effective to thermally densify thermoplastic articles even when the apparatus is used outdoors during severe winter conditions. temperature sensors 68 and 70 are provided for monitoring system temperatures. while thermocouples are preferred for use as temperature sensors 68 and 70, thermistors, pyrometers and the like are also acceptable. although an inner air circulation chamber temperature monitoring arrangement is shown for temperature sensor 68, it is known that other arrangements, such as an inner or outer chamber surface monitoring arrangement, would produce entirely acceptable results. surface monitoring is particularly pertinent to the location of temperature sensor 70 which can be attached to bottom wall 72 of lower heating chamber 74, as shown, or mounted so as to contact the bottom of pan 34. 
the outputs of temperature sensors 68 and 70 are fed into temperature controllers (not shown, but located within enclosure 24), creating temperature feedback loops capable of assuring that the heating provided is at a level effective for thermoplastic thermal densification, but not so high as to chemically decompose or ignite the thermoplastic contents of the apparatus. the temperature controller can be of the adjustable variety, such as those marketed by eurotherm, inc. or tempco, inc., permitting the safe and effective thermal densification of a wide variety of thermoplastic materials. the temperature setting for the process of the present invention will generally be one which is at least effective for the thermal densification of the thermoplastic articles placed within the apparatus. while this will generally be a temperature of at least about 250° f., it is preferred that the temperature not exceed a value which would alter the molecular weight of the thermoplastic articles by an amount exceeding 50% of their original molecular weight. in no case should the temperature selected be one which produces thermal ignition of the thermoplastic or other material in the apparatus. to minimize process energy requirements and keep the apparatus from becoming excessively hot to the touch, insulation should be advantageously utilized. as shown in fig. 3, a preferred arrangement employs insulative panels adjacent to virtually all heated areas of the apparatus. as may be seen, insulative panel 56 is located within the walls of cover 6, insulative panels 58 surround the upper portion of chamber 52, insulative panels 62 surround heated air circulation chamber 54, insulative panels 60 surround the mid portion of chamber 52, as well as the top portion of heated air circulation chamber 54, insulative panels 64 surround the sides of removable pan 34, and insulative panel 66 is located within the walls of the bottom of the apparatus.
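The temperature feedback loop described above — heat until the sensor reaches the setpoint, back off before decomposition or ignition — behaves like a simple hysteresis (on/off) thermostat. The class below is a hypothetical sketch of that control logic only; the patent itself relies on commercial adjustable controllers, and the deadband value here is invented.

```python
class HysteresisController:
    """On/off heater control with a deadband below the setpoint."""

    def __init__(self, setpoint_f: float, deadband_f: float = 10.0):
        self.setpoint_f = setpoint_f    # e.g. ~300-350 F per the text
        self.deadband_f = deadband_f    # hypothetical; not specified in the patent
        self.heater_on = False

    def update(self, sensed_temp_f: float) -> bool:
        """Return the heater state for the latest sensor reading."""
        if sensed_temp_f >= self.setpoint_f:
            self.heater_on = False      # never overshoot toward decomposition
        elif sensed_temp_f <= self.setpoint_f - self.deadband_f:
            self.heater_on = True       # reheat once below the deadband
        return self.heater_on
```

A real controller would add fail-safes (such as the contact switch that disables heating when the cover opens), which this sketch omits.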
since the temperature required to thermally densify most thermoplastic materials will normally be in excess of about 250° f. (usually about 300° f. to about 350° f.), the insulative material selected should be one able to withstand such temperatures. fiberglass-based insulation is one such material known to have utility in this application. to prevent the build-up of excessive moisture and fumes within the apparatus during use, a flow-through ventilation system is provided. in its essential elements, this system consists of outlet 42, exhaust port 44, located within exhaust port housing 14, and exhaust stack 16 in communication with exhaust port 44. air is pulled into the apparatus by inlet blower 30, which is adjusted to circulate air through the unit at a flow rate of about 15 to 35 scfm, with a flow rate of about 25 scfm preferred. optionally, exhaust dilution fan 46 may be used to provide a flow of fresh air into the exhaust system for diluting the exhaust heat which passes into the exhaust port from exhaust outlet 42. exhaust dilution fan 46 should be capable of flowing at least about 100 scfm for optimal effectiveness. another option is the use of a filter element 48, which can be of the activated charcoal-type, to remove any smoke particles from the exhaust. it is also within the scope of the present invention to provide an inert gas ventilation system (not shown), rather than a fan-assisted ventilation system. pressurized nitrogen can be effectively used in this regard. the use of inert gas can provide an additional measure of safety in the operation of the process of this invention. a second embodiment of the present invention is depicted in fig. 4. referring now to fig. 4, a frontal view of thermal densification apparatus 100 is shown. as was the case with the previously described embodiment, the apparatus of fig.
4 is of a size appropriate for use in a commercial establishment, having the ability to house one large trash bag (approximately 20 to 40 gallons) of non-densified thermoplastic waste materials inside. the apparatus of fig. 4 has cabinet 102, which houses inside chambers 150 and 152 (see fig. 6), into which thermoplastic articles are placed for thermal densification by first lifting handle 108 of cover 106, causing cover 106 to pivot away from the front of the unit on a pair of hinge members 110 (see fig. 5.) once inside, the thermoplastic articles pass into and through chambers 150 and 152 and into pan 134 through opening 180 (see fig. 6). referring now to figs. 4 and 6, the densified thermoplastic material in pan 134 may be removed by opening door 118, using handle 120, and sliding out removable pan 134. when the thermoplastic material is cooled, it may be removed from pan 134 by turning the pan upside down. pan 134 is provided with tapered side walls to facilitate removing the material. to further facilitate removal of material, the interior surfaces of pan 134 may be coated with a non-stick surface coating such as industrial teflon or stoneware. hot air is exhausted through exhaust port 144, located within exhaust port housing 114, shown in fig. 6. legs 122 are provided to elevate the bottom of the apparatus from contact with the ground or floor upon which it is located, permitting air to circulate underneath the apparatus. legs are also provided with adjustable feet 123 which enable the apparatus to be leveled upon an uneven surface. also, casters can be mounted to legs 122 to facilitate movement of the apparatus. referring now to fig. 5, a side view of the fig. 4 embodiment is presented. as shown, hinge stop 128 is provided to prevent cover 106 from pivoting completely behind the thermal densification unit. enclosure 124 is shown mounted to the rear of the thermal densification unit. 
enclosure 124 houses the electrical controls for the unit, which, as with the previously described embodiment of fig. 1, include a switching system, temperature controller, fusing system, electrical wiring and an electrical connection junction board (not shown). access to these components is provided by hinged cover 126. contact switch 111 is provided to disable the heating system when cover 106 is opened for loading. section line b--b is shown for reference thereto from fig. 6. a sectioned view of the thermal densification apparatus of the fig. 4 embodiment is presented in fig. 6. as indicated, the section is taken along the line b--b, referred to in fig. 5. the apparatus, including outer-cabinet 102, cover 106, pre-shrink chamber 150, conical densification chamber 152 and heat chamber 154 is constructed of sheet metal. again, a wide variety of material is suitable for this application, with cold rolled steel and stainless steel particularly preferred. removable pan 134 may also be constructed of the same sheet metal material used to fabricate the apparatus and coated with a non-stick material on the interior surfaces. additional details concerning the apparatus and its operation will now be described by reference to fig. 6. thermoplastic articles are placed within apparatus 100 through cover 106, and into pre-shrink chamber 150 to begin processing. as before, the articles may be loaded either by dumping individual articles into the apparatus, loosely, or by placing a thermoplastic trash bag which contains such articles inside. pre-shrinking is effected through heated air which radiates upward from heated densification chamber walls 156, as well as by the flow of air, heated as described below, which emanates from ports 132 and 174. when the heat collapses the thermoplastic articles to a sufficient degree, the thermoplastic articles in pre-shrink chamber 150 will pass to densification chamber 152. 
when the articles pass to densification chamber 152, they continue to be heated by hot air and by heated sidewalls 156. the heating system located in heat chamber 154 is comprised of air inlet blower 130, two electrical resistance heaters 136, heated air circulation chamber 154, heated chamber walls 156 and a plurality of hot air outlets 132 and 140. again, by virtue of the placement of inlet blower 130, the heated air circulates around heated air chamber 154, which is defined in part by the outer walls of densification chamber 152, in a counterclockwise manner and passes through hot air inlets 132 and 140 at flow rates related to the resistance to flow imparted at each outlet by the thermoplastic articles undergoing the densification process and other factors. the material, in a more dense state, flows down the chamber walls 156 through opening 180 and into pan 134. to facilitate material flow, chamber walls 156 may be coated with a non-stick material such as industrial teflon. orifice 180 may be sized based upon the overall dimensions of the typical articles to be densified. in other words, the articles generally should not drop through to the pan without first being subjected to the densification process. accordingly, when fast-food type thermoplastic articles are to be the primary articles to be densified, it is preferred that orifice 180 be sized within a range of from about 2 inches to about 8 inches in diameter, with about 4 inches to about 6 inches in diameter particularly preferred. should orifice 180 be formed to be substantially non-circular in cross section, its cross-sectional area should be sized to fall within the range of the cross-sectional areas of the preferred circular orifices. greater densification is achieved as the thermoplastic articles lose air cells and shape and the material passes through the narrowing conical densification chamber 152 and exits through orifice 180 as a viscous material into removable pan 134. 
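The orifice-sizing guidance above (circular diameters of about 2 to 8 inches, with non-circular orifices matched by cross-sectional area) reduces to simple geometry. The helper names below are hypothetical, for illustration only:

```python
import math

def circular_orifice_area_in2(diameter_in: float) -> float:
    """Cross-sectional area (in^2) of a circular orifice."""
    return math.pi * (diameter_in / 2.0) ** 2

def noncircular_orifice_in_range(area_in2: float) -> bool:
    """True if a non-circular orifice falls within the preferred area range."""
    return circular_orifice_area_in2(2.0) <= area_in2 <= circular_orifice_area_in2(8.0)
```

A 4-inch orifice, at the low end of the particularly preferred range, has an area of about 12.6 in².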
here the material is further heated by hot air from ports 140 and flows, forming a block of densified material. referring still to fig. 6, to achieve good performance from the apparatus of the present invention, it is preferred that the sidewalls 156 be fabricated to have an angle c, measured from a vertical plane through the apparatus, as shown, which falls within a range of angles from one which enables the bulk density of the articles to be increased by at least about 100 percent of original bulk density for the level of heat provided, up to one which permits the densified material to still flow downward without significant material accumulating on sidewalls 156. it is preferred that angle c fall within a range of from about 15° to 45°, with an angle c of 20° to 25° being particularly preferred. heated air is exhausted through exhaust outlet 142 which leads to exhaust port 144 and exhaust pipe 116. exhaust air may reach exhaust outlet 142 in several ways, including upward radiation. another route is by the upward heated air flow through orifice 180 and/or through tubes 170 which are fed by heated air flowing from heated chamber 154 through outlets 140. exhaust heat and odor dilution may be provided by the use of fresh air pulled in and mixed with exhaust from exhaust port 144 through the use of an optional exhaust dilution fan, not shown, but similar to that depicted for the embodiment of figs. 1 through 3. optional filter 148 can also be employed in the exhaust stream to remove any smoke particles from the air exhausted from the apparatus. again, with regard to heating requirements, any type of heater capable of heating the contents of the apparatus to a temperature effective for the thermal densification of thermoplastic articles is acceptable. the electrical heating element described above as being preferred in the fig. 1 embodiment has also been demonstrated to have utility in this embodiment.
again, this is a serpentine-wound resistance heater. these elements can operate on 120 or 240 volts depending on the wattage used. as shown, two such heating elements are used to evenly heat the apparatus, these being depicted in fig. 6 as heating elements 136. heating elements 136 are located in heating chamber 154. this particularly preferred heating arrangement provides a total heating capacity of about 4,000 to 10,000 watts, an amount effective to thermally densify thermoplastic articles even when the apparatus is used outdoors during severe winter conditions. temperature sensor 168 is provided for monitoring the temperature of the heating system. although a particular monitoring arrangement is depicted, it is known that other arrangements would produce entirely acceptable results. the output of temperature sensor 168 is fed into a temperature controller (not shown, but located within enclosure 124), creating a temperature feedback loop capable of assuring that the heating provided is at a level effective for the thermal densification of thermoplastic materials, but not so high as to chemically decompose or ignite the thermoplastic, or other materials, placed within the apparatus. the temperature controller can be of the same adjustable variety as those previously described, permitting the safe and effective thermal densification of a wide variety of thermoplastic materials. the temperature setting used for the second embodiment of the present invention will again be one which is at least effective for the thermal densification of the thermoplastic articles placed within the apparatus. as before, while this will generally be a temperature of at least about 250° f., it is preferred that the temperature not exceed a value which would alter the molecular weight of the thermoplastic articles by an amount exceeding 50% of their original molecular weight.
in no case should the temperature selected be one which produces thermal ignition of the thermoplastic material. to minimize process energy requirements and keep the apparatus from becoming excessively hot to the touch, insulation is recommended. as shown in fig. 6, a preferred arrangement employs insulative panels adjacent to a majority of the areas heated. as shown, insulative panel 160 is located within the walls of cover 106, insulative panels 158 surround pre-shrink chamber 150 and heated chamber 152, insulative panels 164 surround the sides of removable pan 134, and insulative panel 166 is located within the walls of the bottom of the apparatus. since the temperature required to thermally densify most thermoplastic materials will normally be in excess of about 250° f. (usually about 300° f. to 350° f.), the insulative material selected should be one able to withstand such temperatures, with fiberglass-based insulation being one preferred material. to prevent the build-up of excessive heat and fumes within the apparatus during use, a flow-through ventilation system is also provided in this embodiment of the present invention. in its essential elements, this system consists of inlet fan 130, the forced air heating system previously described, exhaust outlet 142, exhaust port 144, located within exhaust port housing 114, and exhaust pipe 116. optionally, an exhaust dilution fan, not shown, may be provided to mix fresh air into the exhaust system and dilute the exhaust air which passes into exhaust port 144. also, optional filter element 148, which can be of the activated charcoal-type, can be provided. moreover, an inert gas ventilation system (not shown) can be provided, rather than a fan-assisted ventilation system. turning to figs. 10 and 11, a front elevation and a frontal perspective view of a third embodiment of the thermal densification apparatus of the present invention are illustrated.
this apparatus is also sized for use in a commercial establishment with the ability to accept one large trash bag of non-densified thermoplastic articles. this embodiment incorporates significant improvements to the operating principles of the previously described embodiments. this embodiment is comprised of a thermally insulated, closed cabinet, a hopper disposed or mounted within the cabinet and a convection heating system. referring to fig. 15, it can be seen that the hopper has an upper section 361 and a lower pan chamber 371 in communication with the upper section 361. the upper section 361 is generally funnel-shaped and is formed by flat sloping walls 366 which intersect vertical walls 367. the convection heating system includes an air heat exchanging chamber having an upper section 350 and a lower section 354, an upper heater chamber 359, means for heating circulating air flow therethrough 368, and a forced-air inlet blower 285 (fig. 13). except for the insulating air space, each of the chambers is adapted to create a generally circular air flow therethrough about the hopper. the system further includes air distribution plenum 370 shown in figs. 14, 15 and 16 which is positioned at the top of the lower pan chamber 371 and is in open communication therewith as described in more detail herein below. turning now to fig. 11, which shows a front perspective view, and fig. 13, which shows a rear elevation of the apparatus with the rear door and cover removed, it can be seen that a control panel 276 is positioned on the front of the apparatus and contains indicating lights which communicate to the operator the current operational status of the apparatus. the control panel 276 is connected to the electronic control circuitry 282 located at the rear of the apparatus via wiring in the external raceway 277. a removable door 279 provides access to the electronic control circuitry 282 and the forced-air inlet blower 285 (see fig. 13).
an exhaust chamber shown generally at 405 is located adjacent to the cabinet 278 and is accessed by removing a separate cover. the electronic control circuitry 282 for the apparatus is located in the left section of the cabinet. the forced-air inlet blower 285 is adjacent the control circuitry separated by a partition. turning now to figs. 14, 15 and 17, there is shown the lower pan chamber 371 of the hopper which houses a pan carriage assembly shown generally at 500. the lower pan chamber 371 is supported on the floor of the apparatus by feet 501. the pan carriage assembly 500 holds the removable pan 134 (fig. 15) in which the densified thermoplastic material or waste is collected. the assembly is comprised of a pan carrier 504 which is movably suspended above the floor of the apparatus between two upright supports 502, two leaf springs 503 and a coil spring 505 located on each side of the pan carrier. as the pan fills with densified material, the pan carrier swings down and towards the rear of the apparatus. travel stops 509 located on each side of the pan carrier may be incorporated to ensure that the pan carrier does not touch the back wall of the pan chamber. during its rearward movement, the pan carrier engages a micro-switch 506 that causes the illumination of an indicating light on the control panel 276. the weight and indirectly the size of the block of densified material collected in the pan 134 may be controlled by adjustment means 510 (fig. 13) which changes the amount of force required to engage the micro-switch 506. during the densification process the thermoplastic articles or waste contained in the hopper upper section 361 and lower pan chamber 371 tend to initially densify into a mounded shape in the pan 134 due in part to the self-insulating property of some thermoplastic materials such as polystyrene. the mounded shape is caused by the material at the bottom of the pan 134 becoming insulated from the heat introduced near the surface of the mound. 
the mounding is also a byproduct of the relatively low temperatures used to densify the thermoplastic material. it would be possible to use higher temperatures in the convection heating system. however, problems with ignition of paper mixed in with the thermoplastic materials and thermal degradation of the thermoplastic materials could result. this problem is addressed by the pan heater 507 (fig. 17) which is located underneath the pan carrier 504. the heater provides sufficient heat to cause the densified material to spread across the bottom of the pan 134. the operation of this heater is also controlled by the electronic control circuitry 282 which cycles the heater on and off as needed in response to the output of temperature sensor 508 secured to the bottom of the pan carrier 504. the same temperature sensor 508 or an additional sensor may also be used to detect overtemperature conditions. in a preferred embodiment, the temperature sensor 508 in conjunction with the electronic circuitry 282 controls the operation of the pan heater 507 to maintain a temperature of about 149.degree. c. to 232.degree. c. (300.degree. f. to 450.degree. f.) at the base of the pan carrier. in this embodiment the heater is a tubular resistance-type electric heater having a power rating of about 1200 to about 2000 watts. more preferably a heater power of about 1800 watts is used. a particularly preferred unit is the model ch-810xx manufactured by the chromalox corporation. the practice of the present invention includes the use of other types and shapes of heater elements to address the mounding problem. the power rating of the heater may easily be adjusted by one of ordinary skill in the art depending on factors that include the size of pan used and whether the apparatus will be used indoors or outdoors. referring now to figs.
10 and 11, additional components of this embodiment include a lower door 281 having mounted thereon a push-pull handle and latch assembly 280 that permits easy, one-hand operation for opening and closing. a secondary handle 283 is also provided. door supports 284 are constructed of a resilient material capable of cushioning the impact of the supports against the floor as the door is lowered into an open position. the supports also serve to hold the open door in a horizontal position to facilitate the removal of the pan 134 containing the densified thermoplastic material. an exhaust chamber 405 is depicted in fig. 13 at the rear of the apparatus. the exhaust chamber 405 holds the exhaust pipe 404 which is attached at an upper end to the exhaust port 352 (fig. 14) in the hopper upper section. air exits the exhaust pipe 404 through a metal-sock filter 407 which traps any particulate matter in the air stream. exhaust duct 400 is attached to an opening in the lower end of the exhaust chamber. the exhaust duct 400 has an inside diameter of about 15.24 cm to 20.32 cm (6 to 8 in). an exhaust baffle 403 is pivotally mounted within the exhaust duct 400. linkage assembly 402 is connected at an upper end to a guide hinge member 322 and is connected at a lower end to the exhaust baffle. for the duct diameter specified above, the baffle contains an opening 410 of about 5.08 cm to 10.16 cm (2 to 4 in). during normal operation with the lid 305 closed the baffle 403 is closed and positioned perpendicular to the air flow through the exhaust duct 400. as can be seen in fig. 13a, the baffle thus restricts the cross sectional area of the exhaust duct 400 to that of the opening 410. (baffle 403 has been shown smaller than actual size for clarity.) as the lid is opened, the action of the linkage assembly 402 rotates the baffle 90 degrees to be parallel to air flow and thus greatly increase the volume of air pulled through the exhaust duct 400. in the side view of fig.
12, there is shown a novel hinge assembly indicated generally at 320. the hinge assembly 320 is comprised of main hinge member 321, guide hinge member 322, and first hinge spring 323, second hinge spring 324, and hinge guide 325. hinge members 321, 322 are pivotally mounted at upper pivot points 332, 333 to the lid 305 and at lower pivot points 334, 335 to the side panel 303 of the apparatus. a guide hinge slot 326 is provided in the guide hinge member 322 at that member's lower end. in similar fashion the upper ends of the hinge springs 323, 324 are pivotally connected to the lid at the same point that the hinge members 321, 322 are so mounted. the lower end of the first hinge spring 323 is pivotally mounted to the guide hinge member 322 about midway down its length. the lower end of the second hinge spring 324 is pivotally mounted to the hinge guide 325 which is rigidly attached to the side panel 303. the hinge guide 325 provides a stop to limit the backward travel of the main hinge member 321 and therefore the lid during opening. the hinge guide 325 also prevents side to side movement of the hinge members 321, 322 and thus the lid 305. the guide hinge slot 326 is provided to permit the lid to be adjusted to be parallel to gasket 304 to ensure a tight seal therebetween. as the lid is initially installed, the connection between the lower end of the guide hinge member 322 and the side panel 303 is left loose. as the lid is securely aligned atop the gasket 304, the guide hinge member 322 will travel a short distance downwardly so that its lower pivot point 333 may move up from the bottom of the slot 326. after lid alignment is complete, the lower end of the guide hinge member 322 is pivotally secured to the side panel 303. as the gasket 304 wears during use, the guide hinge member connection may be loosened to permit vertical adjustment of lid 305 against the gasket 304 and then retightened for operation.
a tight fit between lid 305 and gasket 304 is important to prevent vapors/odors from escaping the unit during use. a significant aspect of the adjustable feature of this hinge assembly is that the closed lid may be adjusted vertically without changing the location of the hinge member pivot points on the apparatus. note that the location of pivot point 333 within the slot 326 will vary, but the location of the pivot point's attachment to the side panel will not vary. adjustment of the hinge assembly is accomplished quickly and simply. as can be seen in fig. 12, when the lid 305 is closed, the first hinge spring 323 is under tension and the second hinge spring 324 is relaxed. the lid is supported by hinge members 321 and 322 as it is opened. because the substantial weight of the lid may make controlling the opening motion difficult, the second hinge spring 324 dampens the momentum of the lid as it is opened. simultaneously, the first hinge spring 323 which had been under tension is gradually relaxed. the hinge assembly operation is reversed as the lid is closed with the first hinge spring 323 serving to dampen the closing momentum. each of the hinge springs serves to reduce the force required to overcome the inertia of the lid 305 at rest while the opposing spring dampens the momentum generated once the lid is in motion. it should be noted that the novel hinge assembly of the present invention causes the lid of the apparatus to move upwardly as it is first opened. this upward motion causes the lid to clear the gasket 304 without sliding contact therewith and is a result of the non-vertical alignment of the attachment points of the upper ends and the lower ends of the individual hinge members. wear on the gasket 304 is substantially reduced by this arrangement. the hinge assembly moves the lid 305 rearwardly and downwardly in an arc above the air sweep chamber 328 and the controls/blower cabinet 278.
the lid 305 comes to rest in a final position that requires minimal space behind the apparatus. the hinging motion ensures that the hot lower surface of the lid is positioned well away from the operator when the lid is fully opened. moreover, the location of the center of gravity of the lid 305 in the fully open position prevents the lid from closing accidentally. the internal structure and novel convection heating system of this preferred embodiment will now be described in greater detail with reference to figs. 12 through 20. fig. 14 is a sectional view of the apparatus taken along 14--14 in fig. 12. fig. 15 is a partial cutaway perspective view of the apparatus taken from the opposite front corner of that shown in fig. 11. fig. 16 is a sectional view taken along 16--16 in fig. 14. fig. 17 is a partial cutaway view of the apparatus taken from a side view. figs. 18 through 20 are schematic representations of the air flow through the apparatus which is shown in phantom. in each of these figures some elements may be omitted for the sake of clarity. however, all of the essential elements of the apparatus are shown. referring now to the preferred embodiment shown in figs. 12, 14, 15 and 18, outside air 330 is drawn by the forced-air inlet blower 285 through openings 331 in the controls/blower cabinet 278 and pushed into the apparatus. from the blower outlet shown schematically at 351, the outside air is directed into the upper section 350 of an air heat exchanging chamber by baffles 357, 358. at this point in the upper section 350 the incoming air flows across the exhaust pipe 404. the air moves in a counterclockwise direction around the circumference of the hopper upper section 361 as illustrated in fig. 15 by arrows a. the air is preheated during this circuit by heat radiating from the upper section 361 of the hopper. this first air circuit around the apparatus is also schematically illustrated by arrows a in figs. 15 and 18. referring now to fig. 
14, as the air returns to the rear of the unit it is directed downward along the top of inclined baffle 353 to the lower section 354 of the air heat exchanging chamber. the air then makes a second circuit around the apparatus through the lower section 354 following the route illustrated by arrows b in figs. 15 and 18. this second circuit is quite extensive encompassing a volume that extends downwardly around the outside of upper heater chamber 359 (located adjacent to the hopper) and around the outside and bottom of the lower pan chamber 371. the circulating air is collected at the rear of the apparatus along the underside of inclined baffle 353 and vertical baffle 355. a floor baffle 372 (figs. 14, 16, and 18) extends vertically from the cabinet floor 374 to the underside of the lower pan chamber 371 and horizontally from the vertical baffle 355 towards the front of the apparatus. this baffle ensures that the air at the bottom of the apparatus must move from the rear to the front of the apparatus prior to entering port 356. in the lower section 354 the air absorbs and carries away heat radiating through the air heat exchange chamber inner walls 365 from the upper heaters 368 and from the pan box chamber 371. the air moving through the upper and lower sections of the air heat exchanging chamber along with the insulating air space 362 between the cabinet side panel 303 and the air heat exchange chamber outer wall 364 serve two purposes. first, they reduce the amount of energy required to raise the air to a temperature effective for densification by preheating the air before it contacts the heating elements described below. second, they eliminate the need for insulated side panels in the apparatus. from the lower section 354 of the air heat exchange chamber the preheated air passes through port 356 and is guided by baffle 363 into the upper heater chamber 359. 
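the energy benefit of preheating the incoming air during its first two circuits can be illustrated with a rough sensible-heat estimate. the flow rate below is taken from the exhaust figure quoted elsewhere in this description (about 68 m.sup.3 /h, i.e. 40 cfm); the inlet, preheat, and target temperatures, the function name, and the constant air properties are assumptions for illustration only.

```python
def heater_power_w(inlet_c, target_c, flow_m3_per_h=68.0,
                   air_density_kg_m3=1.2, cp_j_per_kg_k=1005.0):
    """Approximate steady-state electrical power needed to raise an
    air stream from inlet_c to target_c (sensible heat only)."""
    mass_flow_kg_s = flow_m3_per_h * air_density_kg_m3 / 3600.0
    return mass_flow_kg_s * cp_j_per_kg_k * (target_c - inlet_c)

no_preheat = heater_power_w(20.0, 200.0)   # cold outside air, ~4.1 kW
preheated = heater_power_w(70.0, 200.0)    # after two preheating circuits, ~3.0 kW
```

under these assumed temperatures the two preheating circuits cut the upper-heater duty by roughly a quarter, which is consistent with the stated advantages of reduced energy use and less frequent heater cycling.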
the upper heater chamber 359 is defined by the inner walls 365 of the air heat exchanger chamber and the hopper upper section walls 366. baffle 363 directs the air through a sharp turn to the right and into direct contact with the upper heaters 368. a desirable serpentine coil heater for this application is the model tri-95xx, part number 393875658001 manufactured by the chromalox corporation. in the upper heater chamber 359 the air makes a third circuit around the hopper (depicted by arrows c in fig. 18) and is raised to a temperature effective for the thermal densification of thermoplastic articles. upper heater baffles 369 are positioned between the upper heaters and the hopper walls 366, 367 to prevent the creation of hot spots opposite the heaters in the walls of the upper section 361 of the hopper. the upper heater baffles 369 distribute the radiant energy generated by the upper heaters 368 in a uniform fashion over a wide area. the hopper upper section 361 is heated by a combination of hot air circulation through the upper heater chamber 359, radiant heat from the upper heaters 368, and hot air rising from the hopper lower pan chamber 371. the hot air exits the upper heater chamber 359 through exit port 380 (fig. 18) located at the rear thereof. the temperature of the exiting air is monitored by sensors placed in or near the exit port 380. these probes are connected to the electronic control circuitry 282 for the purpose of cycling the upper heaters 368 on and off as needed to maintain the circulating air at a temperature effective for thermal densification. in this preferred embodiment, the air exiting the upper heater chamber 359 is maintained at about 149.degree. c. to 232.degree. c. (300.degree. f. to 450.degree. f.). additional sensors may also be provided to operate a high temperature shut down or for maintenance/repair temperature monitoring. from the upper heater chamber 359 the air passes through opening 380 (fig. 
18) and enters an air distribution plenum 370 located at the top of the lower pan chamber 371. the air distribution plenum 370 distributes the now hot air evenly over the thermoplastic articles in the lower pan chamber 371 through a plurality of openings in the bottom plate 375 of the plenum 370. the air contacts the articles in the lower pan chamber 371 and rises upwardly through the upper hopper section 361 contacting the articles contained therein (see fig. 20). the air then exits the upper hopper chamber through exhaust port 352 (see fig. 14) and into the exhaust pipe 404. the hot air causes the thermoplastic articles to shrink, collapse and collect into the removable pan 134 to form a solid block of densified material. any moisture contained in the thermoplastic articles is driven off by the hot air and carried out of the apparatus. referring again to figs. 11 through 13 and 17, it can be seen that this embodiment employs a dilution air system which includes a separate external dilution air blower (not shown), an air sweep baffle 327, an air sweep chamber 328, exhaust chamber 405, exhaust duct 400 and exhaust duct baffle 403. the air sweep chamber 328 is defined as the space between the top of the controls/blower cabinet 278 and a cabinet cover 329. it should be noted that the air sweep chamber 328 is not an essential element of the dilution air system. the routing of the dilution air is dependent on the position of the lid 305. when the lid 305 is closed, outside dilution air 336 is drawn into the air sweep chamber 328 through a plurality of angled intakes 408 in the cabinet cover located under the air sweep baffle 327. the baffle may be fitted with a spring 308 shown in fig. 17 to hold it in position. as illustrated by arrows g in fig. 13, the dilution air then travels across the air sweep chamber 328 and downwardly through entry port 406 into the exhaust chamber 405. 
dilution air is pulled through the exhaust chamber 405 by the external blower at a significantly higher flow rate than that of the air exiting the hopper. in this preferred embodiment the separate external blower is a radial blade blower having a capacity of about 4,026 m.sup.3 /h (2375 scfm). the air flow through the exhaust pipe is about 59.5 to 76.5 m.sup.3 /h (35 to 45 cfm). the dilution air cools the air exiting the hopper upper section 361 and condenses any moisture contained therein. as described herein above, the exhaust duct baffle 403 is in a vertical position when the lid 305 is closed so that air flow therethrough is restricted to the opening 410 in the baffle. returning to fig. 16 it can be seen that the entry point into the apparatus for dilution air changes when the lid is opened. as the lid rotates to the open position, it contacts spring member 309 which is attached to the top of the air sweep baffle 327. the baffle 327 is thereby rotated so as to cover angled intakes 408 and expose horizontal intakes 409 (fig. 16) located atop the air sweep chamber 328. the horizontal intakes 409 are located directly behind the gasket 304 along the rear of the unit. these horizontal intakes 409 are covered and not used when the lid is closed. simultaneously, the linkage assembly 402 (fig. 13) rotates the exhaust duct baffle 403 to a horizontal position within exhaust duct 400 substantially increasing its available cross-sectional area. the greatly increased air flow thereby created pulls odors/vapors 307 escaping from the hopper towards the rear of the unit away from the operator. the open lid acts as an exhaust hood to collect and direct the odors/vapors 307 through horizontal intakes 409 to the exhaust chamber 405 so that they can be removed from the apparatus. another function of the dilution air system in this preferred embodiment is to cool the exhaust pipe 404.
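the dilution effect can be quantified directly from the flow figures above: the external blower moves roughly 2,375 scfm while the hopper exhaust carries only 35 to 45 cfm, so the hot, odor-laden stream is diluted by a factor of roughly 50 to 70. a quick arithmetic check (variable names are illustrative):

```python
blower_cfm = 2375.0          # external radial-blade blower capacity
exhaust_cfm = (35.0, 45.0)   # air flow range through the exhaust pipe

# dilution factor at the low-flow and high-flow ends of the exhaust range
dilution = [blower_cfm / q for q in exhaust_cfm]   # ~68x and ~53x
```

this order-of-magnitude excess of dilution air is what allows the exhaust pipe to run through the chamber without creating hot-to-touch exterior surfaces, as noted in the following paragraph of the description.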
although the air in the exhaust pipe is much cooler than that in the warmest section of the interior of the apparatus, operation of the apparatus over several hours generates a great deal of heat in the exhaust chamber 405. the cool dilution air permits the exhaust pipe to be positioned within the apparatus without creating any hot-to-touch surfaces on the exterior thereof. although a preferred embodiment of an air dilution system has been described herein above, there exist alternative means to implement certain elements of the system. for example, a two-speed external fan could be used to create the increased air flow that is desirable when the lid 305 is opened. in such a configuration the fan would normally operate at a lower speed but would be switched to a high speed by a sensor that would detect the lid in an open position. although it is believed that such an arrangement would not be as efficient as that in the preferred embodiment, use of such a fan could eliminate the need for the baffle pivotally mounted in the exhaust duct. it should be understood that inter alia the dilution air system just described is intended to dilute the strength of food odors emanating from the apparatus during operation. odor generation is a particular problem when the apparatus is used in commercial establishments in close proximity to customers. however, it is possible to install a bank of units in a remote central location to which thermoplastic articles may be transported. in such a remote setting, odor dilution can be eliminated and the exhausts from the bank of units may be vented to atmosphere. accordingly, the dilution air system shown in this preferred embodiment is not an essential element of the invention. this third embodiment has advantages over those described above. the need for expensive insulative panels adjacent to the heated areas is greatly reduced in this embodiment.
the lid 305 and the door 281 are the only two areas protected by insulated panels in this embodiment. the sides of the apparatus remain cool to touch as the heat generated inside the apparatus is carried away by the swirling circulating air flow. it is a characteristic of this embodiment that the thermal insulation of the apparatus is provided primarily by a combination of air space and circulating air flow. a further advantage of the circulating air flow of the present invention is that the incoming air entering the upper heater chamber is preheated during its first two circuits around the circumference of the apparatus. accordingly, the upper heaters 368 can raise the preheated air to a temperature effective for thermal densification utilizing less electrical energy because they cycle on with less frequency. reducing the total on time also extends the life of the upper heaters 368. referring now to fig. 7, an alternate embodiment of a thermal densification apparatus of a type useful in a commercial establishment is depicted. the apparatus consists of a container 201 for placing thermoplastic articles 213 therein, the container also having a cover 205. heating for the thermal densification process is provided by heating unit 202, which may be of the electrical resistance type or any other type capable of heating the contents of the container to a temperature effective for the thermal densification of thermoplastic articles 213. a type of electrical heating unit which has been demonstrated to have utility in this application is one commercially marketed by mcmaster-carr of chicago, ill. and sold as a drum platform heater. preferred is such a heating unit having a power rating of about 1500 watts which operates on 115 volts or 230 volts ac. as may be envisioned, this type of heating unit provides heating of container 201 mainly from the container bottom, which is the manner of heating particularly preferred.
as is known to those skilled in the art, it would be difficult to heat from the sides of container 201, as opposed to heating from the bottom, due to the insulative value of the thermoplastic articles to be densified. moreover, side heating may complicate cleanup of container 201 following use. the platform heater can also be used in conjunction with a circumferential band heater. such heaters are also marketed by mcmaster-carr of chicago, ill. when used with the platform heater, the band heater is placed around container 201 at its bottom. a preferred band heater is one having a power rating of about 750 watts which operates on 115 volts or 230 volts. temperature sensor 203 is provided for monitoring the outer surface temperature of container 201. although an outer surface temperature monitoring arrangement is depicted, it is known that other arrangements, such as an inner surface monitoring arrangement, would produce entirely acceptable results. the output of temperature sensor 203 is fed into temperature controller 204, creating a temperature feedback loop assuring that the heating provided is of a level capable of effective thermoplastic thermal densification, but not so high as to chemically decompose or ignite the contents of container 201. controller 204 can be of the adjustable variety, permitting the safe and effective thermal densification of a wide variety of thermoplastic materials. again, the temperature used for the process of the present invention will generally be one which is at least effective for the thermal densification of the thermoplastic articles 213 placed within container 201. while this will generally be a temperature of at least about 250.degree. f. (usually about 300.degree. f. to about 350.degree. f.), it is preferred that the temperature does not exceed a value which would alter the molecular weight of the thermoplastic articles by an amount exceeding 50% of their original molecular weight.
in no case should the temperature selected be one which produces thermal ignition of the thermoplastic material. to minimize process energy requirements, insulation (not shown) may be advantageously utilized on the outer surface of container 201. to further assist in the thermal densification process, vented plate or screen 212 can optionally be provided. vented plate 212 serves two purposes, the first being to exert a downward force on the thermoplastic articles undergoing the densification process, keeping them in intimate contact with the hot inner surface of container 201 and the pool or slurry of already densified thermoplastic material; the second being to reduce system heat loss. the material selected for plate 212 should be one able to withstand process temperatures. as such, iron, steel and stainless steel are preferred materials, with stainless steel and stainless steel screen particularly preferred. when screen material is utilized a frame may be required to hold such material. such a frame should be of a weight sufficient to achieve the first purpose stated above. if an expanded metal screen is utilized, no frame may be required due to the rigidity and weight normally possessed by such material. to prevent the build-up of fumes within the apparatus during use, a fan assisted, flow-through ventilation system is provided in a preferred embodiment of the present invention. this system consists in its essential elements of screened inlet 206, and diametrically opposed outlet duct 207. outlet duct 207 is shown in fig. 7 as having a flange 208 for the mounting of exhaust fan 209 thereon. mounted to the flange of exhaust fan 209 is filter element 210. filter element 210 can be of the paper-type, activated charcoal-type, or the like. it is also within the scope of the present invention to provide an inert gas ventilation system (not shown), rather than a fan-assisted ventilation system. pressurized nitrogen can be effectively used in this regard.
the use of inert gas can provide an additional measure of safety in the practice of the process of this invention. also, casters 214 can be provided to facilitate movement of the apparatus. as may be envisioned, the apparatus depicted in fig. 7 will find utility in fast food restaurants, where the densification and recycling of polystyrene foam is a chief concern; at grocery stores, where the densification and recycling of thermoplastic grocery sacks and other plastics is desired; and, aboard ships where the disposal of thermoplastic waste at sea is becoming an ever-increasing concern among environmentalists. at most, only minor modifications to the basic apparatus would be required to adapt the thermal densification unit of the present invention to one of these applications. when adapted for use in a refuse collection vehicle, the thermal densification apparatus of fig. 7 will generally differ only in that it will be configured for housing within a separate compartment of the body of that vehicle and be capable of mobile operation. an example of such an embodiment is depicted in fig. 8. as shown in fig. 8, the body 221 of refuse collection vehicle 220 is equipped with two segregated thermal densification units 222 and 223. the two units are provided for the purpose of thermoplastic segregation. for example, as indicated in fig. 8, thermal densification unit 222 is dedicated to increasing the bulk density of plastic milk jugs and like material containers, which are generally produced from hdpe, while thermal densification unit 223 is used for the densification of miscellaneous thermoplastics. inserted within the body compartments of body 221 are removable containers 224 and 225 which are mounted upon separate heating elements 226 and 227. heating elements 226 and 227 can advantageously be controlled by separate controllers using separate temperature sensing means (not shown) similar to those previously described. 
such an arrangement would permit the tailoring of separate thermal densification units to the materials sought to be densified by each. of course, such electrical equipment would have to be adapted to mobile use, which could be accomplished through the use of a dc to ac inverter, as those skilled in the art would clearly recognize. each unit is also shown equipped with vented plates 228 and 229 which serve to place a downward force on the material to be densified while also assisting in system heat retention. vented plates 228 and 229 are shaped to essentially conform to the shape of containers 224 and 225 and may also be equipped with handles to aid in the use thereof. as with the thermal compaction unit of fig. 7, vented plates 228 and 229 may also be constructed of a screen material so long as the resultant elements achieve the purpose of exerting a downward force on the thermoplastic articles. each body compartment is shown having a hinged door (230 and 231), which can be closed when the vehicle is travelling from one pick-up site to another. door 230 is shown in the opened position. the thermal compaction units of the vehicle shown in fig. 8 are provided with a cross-flow ventilation system to prevent the build-up of fumes within the units. as indicated, a single system can be utilized to ventilate both thermal compaction units. the system shown provides a screened air inlet vent 232, a co-communication path 233 between containers 224 and 225, exhaust pipe section 234, fan 235 and atmospheric duct 236. further clarification regarding the details of this arrangement may be obtained by referring to fig. 9 which provides an enlarged perspective view of key thermal densification system elements including removable containers 224 and 225, heating elements 226 and 227, as well as the ventilation system just described.
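adapting the heaters for mobile operation gives a sense of the electrical demand involved. the rough sizing estimate below uses the heater ratings quoted in this description for the platform and band heaters; the nominal vehicle battery voltage and inverter efficiency are assumptions for illustration only.

```python
platform_heater_w = 1500.0   # platform heater rating, from the text
band_heater_w = 750.0        # optional circumferential band heater rating
vehicle_v = 12.0             # assumed nominal vehicle battery voltage
inverter_eff = 0.85          # assumed inverter conversion efficiency

load_w = platform_heater_w + band_heater_w              # 2250 W per unit
battery_current_a = load_w / (inverter_eff * vehicle_v)  # ~220 A draw
```

the resulting battery-side current on the order of a couple of hundred amperes per unit illustrates why the electrical equipment must be specifically adapted for mobile use, for example by engine-driven generation or a dedicated high-current inverter circuit.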
as may be envisioned, the present invention will find utility in fast-food restaurants, where the densification and recycling of polystyrene foam is a chief concern; at grocery stores, distribution centers and warehouses for the densification of thermoplastic packaging and containers; in fabricating facilities utilizing thermoplastic materials; aboard ships where the disposal of thermoplastic waste at sea is becoming an ever-increasing concern; and elsewhere. at most, only minor modifications to the basic apparatus would be required to adapt the present invention to any of these applications. when segregated plastic densification is to be practiced, it may be advantageous to place the relevant spi (the society of the plastics industry) recycling code upon the resultant resin block to aid in recycling. this can be accomplished by placing a metal die plate at the bottom of the container prior to densification. upon cooling the molten material, the spi code will be imprinted on the block. this procedure can also be utilized with any embodiment of the present invention. the following example further illustrates the essential features of the apparatus and method of the present invention. as will be apparent to those skilled in the art, the conditions used in the example are not meant to limit the scope of the invention. example 1 this example demonstrates the ability of the apparatus and method of the present invention to effectively increase the bulk density of thermoplastic material, in particular, polystyrene foam articles. a thermal densification unit of the type shown in figs. 4 through 6, having an interior volume of approximately 40 gallons, was designed and fabricated. the unit was constructed of cold rolled and galvanized steel. the heating system had a total capacity of 8000 watts, using commercially available heating elements. a thermocouple was located as shown in fig. 
6, the output of which was connected to the input of a commercially available temperature controller. fiberglass insulation was employed. to demonstrate the effectiveness of the unit at handling an average day's plastic waste for a typical, high volume fast-food restaurant, ten, 30-gallon bags of polystyrene foam containers (1000 containers) were obtained. such containers are produced by, and available from, mobil chemical co. of canandaigua, n.y. the controller of the unit's heater was set to provide a temperature of about 300.degree. f. at the location of the thermocouple. the containers were dumped into the unit at the rate of one 30-gallon bag (100 containers) every 5-10 minutes until all 10 bags of containers had been put into the apparatus. the resultant block of polystyrene melt was then permitted to cool and solidify. the cooled material shrank away from the walls of the removable pan allowing easy removal from the pan. the cooled thermoplastic block had a volume of 0.296 ft.sup.3 and weighed approximately 12 pounds. since the average closed, hinged-lid container, prior to the thermal densification process of the present invention, had an average bulk density of approximately 0.25 lbs./ft.sup.3, it can be seen that very significant densification was obtained. the bulk density of the resultant block was approximately 40.6 lbs/ft.sup.3. as may be envisioned, if the typical fast-food restaurant generates 10-12 pounds of thermoplastic waste per day, a thermal densification unit of the type used in the above example can easily handle such waste in a period of approximately one to one and a half hours, with the store personnel only required to turn on the apparatus, load it, shut it off and remove the densified material from the pan. only a few additional minutes are required each day to operate the densifier. the unit can be turned off overnight, which will permit the block of plastic to solidify for removal the next morning. 
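the densification arithmetic of example 1 can be checked in a few lines. all inputs are taken from the example itself; note that 12 pounds over 0.296 cubic feet works out to roughly 40.5 lbs/ft.sup.3, so the stated 40.6 figure implies the block weighed very slightly over 12 pounds.

```python
# Sanity check of the example 1 figures (inputs from the text above).
weight_lb = 12.0        # weight of the cooled block
volume_ft3 = 0.296      # volume of the cooled block
initial_density = 0.25  # lbs/ft^3, loose hinged-lid foam containers

final_density = weight_lb / volume_ft3             # ~40.5 lbs/ft^3
volume_reduction = final_density / initial_density # ~162x densification

print(round(final_density, 1))   # 40.5
print(round(volume_reduction))   # 162
```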
the blocks so produced can be easily stored on site for pickup by a recycler, or returned to a central location by an unloaded supply truck making routine deliveries to the commercial location equipped with a thermal densification unit of the present invention. example 2 this example demonstrates the ability of an alternate embodiment of the apparatus and method of the present invention to effectively increase the bulk density of thermoplastic material. a thermal densification unit of the type shown in fig. 7, having an interior volume of approximately 40 gallons, was designed and fabricated. the container employed was cylindrical, having an internal diameter of approximately 20 inches, and constructed of stainless steel. the container was fitted with a cover, also of stainless steel. such a container is available from mcmaster-carr of chicago, ill. the heater was a conventional resistance-type drum platform heater, such a heater being available from mcmaster-carr of chicago, ill. a thermocouple was affixed to the outer skin of the container, near its base, the output of which was connected to the input of the controller. the exterior surface of the container was insulated. the unit built for this example had no ventilation system. to demonstrate the effectiveness of the unit at handling an average day's plastic waste for a typical, high volume fast food restaurant, 10, 30-gallon bags of polystyrene foam cartons (1000 cartons) were obtained. such cartons are produced by and available from mobil chemical co. of greenwich, conn. the controller of the unit's heater was set to provide a temperature of about 400.degree. f. at the inner surface of the container. the cartons were dumped into the stainless steel container at the rate of one 30-gallon bag (100 cartons) every 15-20 minutes until all 10 bags of cartons had been dumped into the container. the resultant pool of polystyrene melt was then cooled and solidified. 
in cooling the material for removal from the container, the following process was used: a) turn off the heat, b) cover the material with water to a 6 inch depth, and c) allow material to sit under water until cool. the cooled material shrank away from the walls of the container, released and floated to the surface for easy removal from the container. the cooled disk measured 20 inches in diameter and was 1-1/2 to 1-3/4 inches thick. the disk weighed approximately 12 pounds. since the average carton, prior to the thermal densification process of the present invention, had an average bulk density of approximately 0.25 lbs./ft.sup.3, it can be seen that very significant densification was obtained, since the bulk density of the resultant disk was on the order of approximately 40.6 lbs/ft.sup.3. as may be envisioned, if the typical fast food restaurant generates 10-12 pounds of thermoplastic waste per day, a thermal densification unit of the type used in the instant example can easily handle such waste in a period of approximately 2-1/2 hours, with the only involvement from store personnel being the loading of the unit, such loading requiring less than about a minute per bag. the unit can be turned off overnight, which will permit the pool of plastic to solidify for removal the next morning. the disks so produced can be easily stored on site for pick-up by a recycler, or returned to a central location by an unloaded supply truck making routine deliveries to the commercial location equipped with the thermal densification unit of the present invention. example 3 this example illustrates the ability of the apparatus and method of the present invention to effectively increase the bulk density of polyethylene bottles after use. 
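as a cross-check, the disk geometry reported in example 2 is consistent with the stated bulk density. taking the midpoint of the reported thickness range (an assumption for the calculation):

```python
import math

# Disk dimensions from example 2; midpoint thickness is an assumption.
diameter_in = 20.0
thickness_in = (1.5 + 1.75) / 2.0  # midpoint of 1-1/2 to 1-3/4 inches
weight_lb = 12.0

# Cylinder volume in cubic inches, converted to cubic feet (12^3 = 1728).
volume_ft3 = math.pi * (diameter_in / 2.0) ** 2 * thickness_in / 1728.0

print(round(volume_ft3, 3))              # 0.295 ft^3
print(round(weight_lb / volume_ft3, 1))  # 40.6 lbs/ft^3
```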
for this example, a bench top laboratory apparatus was configured utilizing a conventional hot plate with temperature controller, an insulated stainless steel beaker of a size capable of containing a one-gallon milk bottle prior to densification, a thermocouple and a pyrometer. the thermocouple was located on the outside surface of the beaker, near its bottom. the hot plate was set to control the temperature of the beaker to approximately 400.degree. f. polyethylene milk bottles having an initial bulk density of approximately 0.08 lbs/ft.sup.3 were introduced one at a time into the laboratory densification apparatus. space constraints necessitated this manner of introduction. a total of 8 one-gallon bottles were densified in this manner with the resultant pool permitted to cool and solidify. the resultant block of material was weighed and found to have a bulk density of about 12 lbs/ft.sup.3, equating to a volumetric densification on the order of 150 times original density. when the process of the present invention is to be practiced aboard a ship, recycling of densified material may not be practical and disposal at sea still desired. to assure that the densified material does not float, it may be necessary to increase its specific gravity. to accomplish this, material of higher specific gravity can be added to the molten thermoplastic prior to cooling. lead shot and the like can be utilized for this purpose. while the apparatus of the present invention has been described as having utility primarily at commercial facilities, aboard ships and in refuse collection vehicles, other applications are within the scope of this invention. for example, envisioned is a unit sized and equipped for household use. such units would have particular utility in locations where refuse trucks of the type incorporating the apparatus of the present invention were not available. 
although the present invention has been described with preferred embodiments, it is to be understood that modifications and variations may be utilized without departing from the spirit and scope of this invention, as those skilled in the art will readily understand. such modifications and variations are considered to be within the purview and scope of the appended claims.
185-075-273-642-451
US
H01C7/12,H02H1/00,H02H1/04,H02H3/22,H02H9/06
2003-06-18T00:00:00
methods and apparatus to protect against voltage surges
a cable device includes an integrated surge protection circuit. in the event that communication signals conveyed by the cable include (potentially damaging) transient voltages, the surge protection circuit integrated in the cable suppresses the transient voltages at a distance from a corresponding electronic circuit to which the cable is attached. consequently, potentially damaging voltage transients imparted on the communication signals are clamped before reaching potentially sensitive inputs of the electronic circuit.
1. a surge protection device comprising: a cable that supports conveying communication signals to an input of an electronic circuit, the cable including: a surge protection circuit integrated to suppress transient voltages imparted on the communication signals; and a conductor that extends a ground reference associated with the electronic circuit through the cable to the surge protection circuit, the surge protection circuit including clamping circuits to suppress the transient voltages of the communication signals onto the conductor associated with the cable to prevent damaging the input of the electronic circuit; and a connector that removably attaches to a connector of another cable, the connector of the other cable including internal terminations that identify a mode in which to receive the communication signals at the electronic circuit, a state of the internal terminations conveyed to the electronic circuit via internal conductors of the cable. 2. a surge protection device as in claim 1 , wherein the cable includes at least one internal conductor allocated for supporting reception of mode bit information at the electronic circuit whose state identifies a protocol associated with the communication signals received at the input of the electronic circuit. 3. a surge protection device as in claim 1 , wherein the cable is an extension cable for linking an original cable to the electronic circuit and communication signals conveyed by the original cable are susceptible to high voltage transients. 4. a surge protection device as in claim 1 , wherein the conductor is a cylindrical conductive shield surrounding internal twisted pairs of wires in the cable that carry the communication signals to the electronic circuit. 5. a surge protection device as in claim 1 , wherein the surge protection circuit is located between 2 and 30 inches from the electronic circuit when the cable is attached to convey the communication signals to the electronic circuit. 6. 
a surge protection device as in claim 1 further comprising: moldable plastic that encapsulates the surge protection circuit. 7. a surge protection device as in claim 1 , wherein the cable includes multiple conductors to support communication signals according to multiple serial communication protocols. 8. a surge protection device as in claim 1 , wherein the surge protection device is disposed in a connector assembly at an end of the cable opposite the end of the cable that couples to the electronic circuit. 9. a surge protection device as in claim 1 , wherein a voltage transient present on the other cable and directed to the electronic circuit on one of the communication signals is clamped by the surge protection circuit prior to being received by the input of the electronic circuit, the conductor of the cable that extends the ground reference from the electronic circuit including a path from the surge protection circuit through a wire of the cable to the ground reference associated with the electronic circuit. 10. a surge protection device as in claim 9 , wherein the connector that removably attaches is a first connector and wherein the connector of the other cable is a second connector, the ground reference associated with the electronic circuit extending from the conductor coupled to the surge protection circuit through the first connector to the second connector, the first and second connector being coupled to each other, the second connector including a mechanism to receive the internal terminations that, if a respective one is present in the second connector, connects a respective one of the internal conductors of the cable to the ground reference of the electronic circuit such that the electronic circuit can identify a logic state associated with the respective one of the internal conductors. 11. 
a surge protection device as in claim 10 , wherein the internal terminations present in the mechanism of the second connector indicate a respective configuration of the other cable and, in response to identifying a configuration of the other cable, the electronic circuit configuring the input to receive the communication signals according to the respective configuration as indicated by the internal terminations present in the mechanism of the second connector. 12. a surge protection device as in claim 11 , wherein the electronic circuit includes pull-up resistors that pull-up a respective one of the internal conductors to a given non-zero voltage value unless a respective internal termination is present in the mechanism of the second connector coupling the respective one of the internal conductors to the ground reference associated with the electronic circuit; and wherein the cable includes twisted pairs of wires for conveying the communication signals from the first connector to the electronic circuit and the conductor is a shield wrapped around the twisted pairs of wires. 13. 
a surge protection device comprising: a cable means that supports conveying communication signals to an input of an electronic circuit, the cable means including: a surge protection circuit means integrated to suppress transient voltages imparted on the communication signals; and a conductor means that extends a ground reference associated with the electronic circuit through the cable to the surge protection circuit means, the surge protection circuit means including clamping circuits to suppress the transient voltages of the communication signals onto the conductor means associated with the cable means to prevent damaging the input of the electronic circuit; and a connector means that removably attaches to a connector of another cable, the connector of the other cable including internal terminations that identify a mode in which to receive the communication signals at the electronic circuit, a state of the internal terminations conveyed to the electronic circuit via internal conductors of the cable. 14. a method comprising: providing a cable including internal conductors for conveying communication signals to an input of an electronic circuit in a communication system susceptible to potentially damaging transient voltages; fabricating one end of the cable to couple the internal conductors to the electronic circuit; integrating a surge protection circuit into the cable for suppressing transient voltages associated with the communication signals on the internal conductors; and allocating a conductor in the cable to extend a ground reference associated with the electronic circuit to the surge protection circuit integrated with the cable, the surge protection circuit suppressing high voltage transients of the communication signals through a clamping circuit onto the conductor to prevent damaging the input of the electronic circuit. 15. 
a method as in claim 14 further comprising: surrounding the internal conductors of the cable with a cylindrical shield that extends to a ground reference associated with the electronic circuit to provide a path for dissipating voltage transients from the surge protection circuit. 16. a method as in claim 14 further comprising: disposing the surge protection circuit to be a distance between 2 and 30 inches from the electronic circuit. 17. a method as in claim 14 further comprising: disposing the surge protection circuit including a printed circuit board and corresponding voltage clamping circuits into a connector assembly at an end of the cable to clamp high voltage transients associated with the communication signals to a ground reference prior to otherwise reaching the electronic circuit. 18. a method as in claim 14 further comprising: encapsulating at least a portion of the surge protection circuit with moldable plastic. 19. a method as in claim 14 further comprising: utilizing the cable to support communication signals according to multiple serial communication protocols. 20. a method as in claim 14 further comprising: disposing the surge protection circuit in a connector assembly at an end of the cable opposite the end of the cable that couples to the electronic circuit. 21. 
a method as in claim 14 further comprising: fabricating the cable to be an extension cable that provides a connection between an original cable that, without the extension cable, would connect a remote device transmitting the communication signals to the electronic circuit; the original cable including a shunt circuit that is selectively populated with at least one respective shunt component to set mode bits of the original cable, the mode bits being read by the electronic circuit through the extension cable and indicating a configuration of the original cable; and fabricating the extension cable to include internal conductors that convey the respective mode bits of the original cable to the electronic circuit. 22. a method as in claim 21 further comprising: surrounding the internal conductors of the cable with a shield that extends to a ground reference associated with the electronic circuit to provide a path for dissipating voltage transients from the surge protection circuit, the internal conductors being twisted pairs of wires for conveying the communication signals between original cable and the electronic circuit, the ground reference being extended to the shunt circuit of the original cable such that the ground reference from the electronic circuit is utilized by both the shunt circuit and the surge protection circuit. 23. 
in a system susceptible to potentially damaging voltage transients, a method comprising: uncoupling an original cable supporting conveyance of communication signals to an electronic circuit device; providing an extension cable including a transient voltage suppression circuit disposed thereon; coupling one end of the extension cable to the original cable and another end of the extension cable to the electronic circuit device to provide a path for communication signals between the original cable and the electronic circuit device, the transient voltage suppression circuit disposed in the extension cable suppressing transient voltages imparted on the communication signals; and wherein coupling the extension cable to the electronic circuit device includes plugging one end of the extension cable into the electronic circuit device to provide a conductive path between the transient voltage suppression circuit of the extension cable and a ground reference associated with the electronic circuit device. 24. a method as in claim 23 further comprising: utilizing the extension cable to support communication signals according to multiple serial communication protocols. 25. a method as in claim 23 , wherein coupling the extension cable to the original cable establishes a connection of a shunt circuit of the original cable to the conductive path of the extension cable such that the shunt circuit in the original cable has a corresponding connection through the extension cable to the ground reference of the electronic circuit, the shunt circuit including at least one internal termination that pulls a respective signal from the electronic circuit to ground. 26. 
a method as in claim 25 further comprising: populating the shunt circuit with the at least one internal termination to indicate the configuration of the original cable such that the electronic circuit can read a status of mode bits associated with the shunt circuit and identify how to configure itself to receive the communication signals transmitted from a remote device to the electronic circuit over the original cable and the extension cable.
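the mode-bit scheme recited in claims 12, 25 and 26 — pull-up resistors that read high unless a shunt termination in the cable connector ties the conductor to the ground reference — can be sketched in a few lines. the conductor count and the protocol names mapped to each bit pattern below are illustrative assumptions, not values taken from the patent.

```python
def read_mode_bits(terminations_present, num_conductors=3):
    """Each mode conductor is pulled up inside the electronic circuit,
    so it reads 1 unless a shunt termination in the cable connector
    shorts it to the ground reference, in which case it reads 0."""
    return tuple(0 if i in terminations_present else 1
                 for i in range(num_conductors))

# Hypothetical mapping from bit patterns to cable configurations.
ASSUMED_MODES = {
    (1, 1, 1): "no cable attached (all lines pulled high)",
    (0, 1, 1): "serial protocol A",
    (1, 0, 1): "serial protocol B",
}

bits = read_mode_bits({0})   # termination present on conductor 0 only
print(bits)                  # (0, 1, 1)
print(ASSUMED_MODES[bits])   # serial protocol A
```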
background of the invention cables have long been used to transfer signals between computers and other electrical systems. depending on an operating environment, a cable and/or its internal signals may be susceptible to power or voltage surges. for example, surges can be caused by lightning, static electricity, temporary ground differences, or even glitches in power supply sources. failing to provide adequate protection against these transient voltage spikes (received through cables) typically results in a substantial amount of damage to electronic equipment every year. lightning strikes can cause temporary ground differences between two or more communicating devices, disrupting communication and causing circuit damage. for example, during normal operation, a remote device may send a communication signal referenced to its corresponding (remote) ground. depending on a system's configuration, a local device may not be able to receive the communication signal unless its ground reference approximates that of the remote ground reference. however, during a lightning strike, the remote ground reference may substantially increase for a brief instant of time, thereby imparting excessive voltage onto a communication signal transmitted to the local device. if the local device is not properly protected, it may be damaged as a result of the excessive voltage imparted on the communication signal (caused by the lightning strike). to protect against surges, a conventional approach involves dissipating power surges via suppression circuits that clamp an input voltage to a level that does not cause damage to a corresponding electronic circuit that receives the signal. suppression circuits include transzorbs, zener diodes, arrestor devices such as metal oxide varistors, carbon blocks, thyristors, gas discharge tubes and the like. 
typically, these clamping circuits are disposed directly on a circuit board including sensitive functional circuitry that needs protection against potentially damaging surges. another technique of protecting against surges involves the use of an optical isolator disposed in series with an electrical cable. such a device converts an electrical signal potentially including transient voltage spikes to an optical signal. the optical signal is then converted back to an electrical signal and transmitted to a target device. generally, optical devices support protocols such as rs-232. a more sophisticated method of protecting against power surges is to employ detector circuits that detect the presence of a lightning storm during which a surge is likely to occur. in response to detecting such a dangerous condition, the detector circuits cause electronic equipment to be mechanically disconnected (via relays) from an external cable connection while the threat of the surge (e.g., a lightning storm) remains present. after the threat has subsided, the equipment is then reconnected to the cable. summary unfortunately, there are deficiencies associated with conventional methods of suppressing transient voltage spikes imparted on cables (or corresponding internal electrical signals) that may otherwise couple to and damage circuit boards. for example, circuit board space constraints may not allow the inclusion of voltage suppression circuits directly on a circuit board to protect against voltage spikes received through the cable. even if space is available and a (susceptible) circuit board can be redesigned to include appropriate voltage suppression circuits, the on-board solution of including protection circuitry directly on the circuit board does not address the high cost of retrofitting or replacing unprotected circuit boards already in the field. 
consequently, vulnerable circuit boards such as those supporting potentially life-critical applications must be replaced with surge-protected circuit boards. additionally, a conventional technique of employing an optical isolator in series with a cable has deficiencies. for example, such devices are often quite slow and therefore do not provide proper communication bandwidth. according to yet another conventional technique, disconnecting a cable from corresponding equipment during a threatening condition may render the equipment inoperable for extended periods of time. in most situations (such as life critical applications), this is unacceptable. it is an advancement in the art to provide a cable including an integrated surge protection circuit in order to reduce or eliminate damage caused by transient voltages received over a cable or its internal signals. accordingly, one embodiment of the present invention is directed towards a cable device integrated to include a (transient voltage) surge protection circuit. in the event that, e.g., communication signals conveyed by the cable include transient voltages, the suppression circuit integrated in the cable suppresses the transient voltages at a distance from a corresponding electronic circuit to which the cable is attached. more specifically, the surge protection circuit in the cable includes one or more clamping circuits coupled to a ground reference. for example, a conductor such as a cylindrical conductive shield associated with the cable provides a path between the surge protection circuit and a ground reference of the electronic circuit. when transient high voltages are imparted on communication signals in the cable, the surge protection circuit integrated in the cable clamps the voltage transients of the communication signals so that they do not otherwise cause damage to inputs of the electronic circuit. 
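to a first approximation, the clamping elements named earlier (transzorbs, zener diodes, metal oxide varistors and the like) behave as hard bidirectional voltage limiters: signals inside the clamp window pass unchanged, and anything beyond is held at the clamp level while the excess current is shunted to ground. the 15 v clamp level below is an assumed value for illustration only; real parts are chosen to sit just above the normal signaling voltage.

```python
CLAMP_V = 15.0  # assumed bidirectional clamp level, in volts

def clamp(v_in, clamp_v=CLAMP_V):
    """Idealized clamping element: pass normal levels, limit transients."""
    return max(-clamp_v, min(v_in, clamp_v))

print(clamp(3.3))     # 3.3   -- a normal logic-level signal is untouched
print(clamp(800.0))   # 15.0  -- a lightning-induced transient is clamped
print(clamp(-500.0))  # -15.0 -- negative transients are clamped as well
```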
that is, during the clamping process (via clamping circuits in the surge protection circuit), current associated with the transient voltage travels on the conductor of the cable to the ground reference associated with the electronic circuit. consequently, the high voltage transients are clamped before reaching inputs of the electronic circuit. according to one embodiment, the surge protection circuit is located between 2 and 30 inches from an end of the cable that attaches to the electronic circuit. thus, the surge protection circuit integrated with the cable suppresses transient voltages prior to reaching the electronic circuit. according to another embodiment, the clamping circuit is disposed on a printed circuit board (or flex circuit) integrated into a connector assembly of the cable at an end of the cable opposite the end that couples to the electronic circuit. to provide protection against environmental elements, the printed circuit board including the clamping circuits can be encapsulated with moldable plastic. the cable and integrated surge suppression circuit can be an extension cable. for example, a communication system may initially include an original cable that conveys communication signals to an electronic circuit. to protect the electronic circuit from potentially damaging voltage transients, one end of the extension cable is plugged into the original cable and the other end is plugged into the electronic circuit to provide a path for communication signals between the original cable and the electronic circuit device. the transient voltage suppression circuit disposed in the extension cable suppresses transient voltages imparted on the communication signals so that they do not damage the electronic circuit. in one embodiment, the cable includes at least one internal conductor allocated for supporting a reception of mode bit information at the electronic circuit. 
a state of the mode bit information identifies a protocol associated with the communication signals received over the cable at the input of the electronic circuit. the surge protection device optionally includes a connector that removably attaches to a connector of another cable. the connector of the other cable may include internal terminations defining mode bit information identifying a mode in which to receive the communication signals at the electronic circuit. a state of the internal terminations is conveyed to the electronic circuit via internal conductors of the cable. clamping circuits can be provided to protect against voltage transients on the internal signals of the cable defining mode bits. brief description of the drawings the foregoing and other objects, features and advantages of the invention will be apparent from the following more particular description of preferred embodiments of the invention, as illustrated in the accompanying drawings in which like reference characters refer to the same parts throughout the different views. the drawings are not necessarily drawn to scale, emphasis instead being placed upon illustrating the principles of the present invention. fig. 1 is a block diagram of a communication system including a surge protection device to prevent damage to inputs of an electronic circuit. fig. 2 is a pictorial diagram of a surge protection circuit integrated into a cable. fig. 3a is a pictorial diagram of one end of a cable integrated to include a surge protection circuit. fig. 3b is a pictorial diagram of another end of a cable integrated to include a surge protection circuit. fig. 4 is a circuit diagram of a voltage clamping circuit. fig. 5 is a circuit diagram of a voltage clamping circuit. fig. 6 is a flowchart illustrating a method of fabricating a cable to include a surge protection circuit. 
detailed description an embodiment of the present invention is directed towards a cable device integrated to include a (transient voltage) surge protection circuit. in the event that communication signals conveyed by the cable include (damaging) transient voltages, the surge protection circuit integrated in the cable suppresses the transient voltages at a distance from a corresponding electronic circuit to which the cable is attached. for example, a conductor such as a cylindrical conductive shield associated with the cable provides a path between the surge protection circuit and a ground reference of the electronic circuit. when transient voltages are imparted on communication signals in the cable, the surge protection circuit clamps the voltage transients and current associated with the transient voltage travels on the conductor of the cable to the ground reference of the electronic circuit. consequently, high voltage (and potentially damaging) transients imparted on the communication signals are clamped before reaching an input of the electronic circuit. although the techniques described herein are suitable for use in communication systems, and particularly in applications employing protection against transient voltage surges received on communication signals, the techniques are also well-suited for other applications employing surge protection. fig. 1 is a block diagram of communication system 100 for transmitting and receiving communication signals between remote device 110 and local device 120 according to an embodiment of the invention. as shown, communication system 100 includes remote device 110 , local device 120 , electrical cable 134 (such as a ‘smart’ cable produced by cisco), and electrical cable 144 . based on coupling provided by electrical cables 134 , 144 , a connective path extends between remote device 110 and local device 120 . 
one end of electrical cable 134 couples directly to remote device 110 while the other end includes connector 136 that couples to connector 142 at one end of electrical cable 144. at the opposite end from connector 142, electrical cable 144 includes connector 146 that couples electrical cable 144 to connector 148 of local device 120. thus, in one respect, cable 144 acts as an extension cable to couple electrical cable 134 to local device 120. surge protection circuit 150 disposed or integrated in electrical cable 144 suppresses transient voltages imparted on electrical cable 134 to protect local device 120 against potential damage. for example, remote device 110 generates electrical signals (referenced with respect to remote ground 190) such as communication signals through electrical cable 134 and electrical cable 144 to local device 120. during normal operation, when there are no lightning 199 strikes, remote ground 190 and local ground 180 are approximately equal. thus, local device 120 can receive and decipher electrical signals because its receiver circuitry is referenced to approximately the same ground as that of the remote device 110. however, during a lightning storm, lightning 199 causes remote ground reference 190 to increase (or decrease) dramatically compared to local ground reference 180. similarly, lightning 199 may strike in a region causing local ground reference 180 to change with respect to remote ground reference 190. this is largely due to voltage differential 195 (gradient) produced by lightning 199. for example, charged particles at the remote ground reference 190 cause it to increase or decrease.
as a result of a large difference between remote ground reference 190 and local ground reference 180 during lightning 199 , electrical signals generated by remote device 110 through electrical cable 134 include potentially damaging transient voltages because remote device 110 generates electrical signals with respect to its own remote ground reference 190 . surge protection circuit 150 integrated into electrical cable 144 protects local device 120 from potentially damaging transient voltages caused by environmental conditions such as lightning 199 (static electricity discharge, etc.). for example, surge protection circuit 150 suppresses high transient voltages imparted through electrical cable 134 before they would otherwise reach and damage potentially sensitive electrical inputs of local device 120 . this is discussed more particularly in connection with the following figures. fig. 2 is a pictorial diagram illustrating a technique of suppressing transient voltages according to an embodiment of the invention. as shown, electrical cable 144 between respective connectors 142 , 146 measures length, l. according to one embodiment, l is between two and thirty inches in length. in a typical application, electrical cable 144 including connectors 142 , 146 is eight inches in length. consequently, transient voltages imparted on signals in electrical cable 134 (conveyed through connectors 136 , 142 bound for electronic circuit 205 ) are suppressed by surge protection circuit 150 between two and thirty inches from electronic circuit 205 . length, l, however may be more than thirty inches or less than two inches depending on the application. typically electrical cable 134 is between 3 and a hundred or more feet long. figs. 3a and 3b are pictorial diagrams illustrating surge protection circuit 150 in relation to electrical cables 134 , 144 and electronic circuit 205 . as shown (in fig. 
3a ), electrical cable 134 includes one or multiple twisted pairs of wires 250 that couple to connector 136. connector 136 includes termination points 271, 281, shunt(s) 290, contacts 270, 280, and ground path 291. connector 142 includes ground path 291, signal paths 251, 261, and ground reference contacts 272, 282. surge protection circuit 150 includes circuit board 210, ground paths 291, vias (such as electrical nodes of a layered circuit board) 273, 274, 283, 284, clamping circuits 220, 230, circuit traces 252, 262, and barrels (such as electrical nodes of a layered circuit board) 275, 276, 277, 285, 286, 287. in fig. 3b, electrical cable 144 includes twisted pairs of wire 253, 263, shield(s) 295, and connector 146. electronic circuit 205 includes connector 148, ground path 296, resistors r 207, r 208, and signal interfaces 240, 242. as discussed in connection with components in both figs. 3a and 3b, electrical cable 134 and electrical cable 144 convey electrical signals from remote device 110 to local device 120 (including electronic circuit 205). initially, remote device 110 transmits signals onto twisted pair of wires 250 to connector 136. signal path 251 conveys electrical signals through connectors 136, 142 to traces 252 and barrels 275, 276 of circuit board 210. in addition to traces 252, barrels 275, 276 of circuit board 210 couple to receiving ends of twisted pair of wires 253 of cable 144. in turn, twisted pair of wires 253 couple to connector 146 at the other end of electrical cable 144 and signal interface 242 of electronic circuit 205. in this way, electrical signals from cable 134 and, more specifically, remote device 110, extend through cable 144 to electronic circuit 205. in one embodiment, remote device 110 drives one or multiple serial differential communication signals such as those based on rs-232, rs-449, etc. through electrical cables 134, 144 to signal interface 242 of electronic circuit 205.
in the event that lightning 199 strikes in a vicinity of remote device 110, electrical signals conveyed on twisted pair of wires 250 potentially include transient high voltages. such transient high voltages from twisted pair of wires 250 travel along signal path 251 (along with the communication signal itself) to circuit board traces 252 of circuit board 210. generally, clamping circuits 220 clamp transient voltages to a non-harmful threshold voltage level such as +/−16 volts. the associated ground reference of clamping circuits 220 extends from circuit board 210 to ground reference 180 of electronic circuit 205. for example, one end of clamping circuits 220 connects directly through vias 273, 274 to ground path 291. cable 144 includes a conductor such as shield 295 (such as braided wire and/or metal foil) to couple ground path 291 (of circuit board 210) to ground path 296 of electronic circuit 205. more specifically, ground path 291 (such as a planar ground reference) disposed in a layer of circuit board 210 (such as a perforated circuit board, flexible circuit board, etc.) electrically connects an end of clamping circuits 220 and vias 273, 274 to barrel 277 (such as a through-hole trace contact for soldering a conductor 294). conductor 294 electrically connects barrels 277, 287 to shield 295. in turn, shield 295 electrically couples through connectors 146, 148 to contacts 202 associated with connector 148. contacts 202 of connector 148 couple shield 295 to ground path 296 (such as a ground plane of electronic circuit 205), which in turn is electrically connected through contact 297 to ground reference 180. consequently, during a lightning 199 strike, clamping circuits 220 of surge protection circuit 150 clamp transient voltages (such as 100 volt spikes) before they would otherwise reach sensitive inputs of signal interface 242 of electronic circuit 205.
thus, the addition of electrical cable 144 (extension cable) in series with electrical cable 134 (original cable) not only enables one to position remote device 110 and local device 120 farther apart from each other, it also provides a level of protection against voltage surges without having to retrofit electronic circuit 205 with surge protection circuitry on its front end prior to inputs of signal interfaces 240, 242. to provide protection against environmental elements, printed circuit board 210 and corresponding electronic components such as clamping circuits 220, 230 are encapsulated with moldable (non-conductive) plastic. electrical cable 144 also includes a plastic nonconductive coating (insulation). consequently, circuit board 210 appears as a portion of electrical cable 144 between connectors 142 and 146. in one embodiment, connector 136 associated with cable 134 includes one or multiple shunts 290 (e.g., jumpers, zero ohm resistors, low impedance conductors, etc.) to identify one of potentially different types of setups associated with electrical cable 134. in one application, connector 136 includes up to four shunts 290. an example of shunts 290 and how they may be used in connector 136 is more particularly shown in u.s. pat. no. 6,004,150 issued on dec. 21, 1999 to chapman, et al. based on the configuration of supporting 4 mode bits in connector 136 (two are shown in fig. 3a, namely, signal paths 261-1 and 261-2), a corresponding cable 134 can be configured as one of up to sixteen different types of cables. thus, a cable itself and the presence of shunts 290 indicate information about the cable type. electronic circuit 205 reads a status of whether shunts 290 are present in connector 136 for each corresponding dedicated signal to, in turn, configure itself to communicate (transmit and receive) information according to one of multiple protocols.
for example, different pairs of wire in cable 134 will be dedicated (by electronic circuit 205) to transmitting and/or receiving data information according to one or more selected protocols based on a setting of mode bits. as shown, connector 136 supports two mode bits, one of which is set to a logic low state (the path associated with signal path 261-1 including shunt 290) and the other of which is set to a logic high state (the path associated with signal path 261-2 not including shunt 290). electronic circuit 205 senses a corresponding type of cable 134 by detecting a presence of shunts 290 at signal interface 240. for example, pull-up resistors r 207 and r 208 (such as 1000 ohm resistors) pull up respective voltages imparted at circuit traces 324 on electronic circuit 205. in the event that a corresponding shunt 290 is present in connector 136, a corresponding trace is pulled to ground or logic low. for example, a trace of electronic circuit 205 (such as trace 324-1 coupled to wire 263-1 connected to trace 262-1 of circuit board 210) is pulled down to ground via a path including shunt 290 and ground path 291 (through connectors 136, 142) to contact 282. ground path 291 is coupled to contact 282, which is coupled to local ground reference 180 of electronic circuit 205 through a circuit path including conductor 294, shield 295, contacts 202, signal path 296, and via 297, similar to the path previously discussed for use in clamping circuits 220. thus, as shown, circuit trace 324-1 is pulled to ground through shunt 290 across termination point 281 and contact 280 of connector 136. notably, there is no shunt 290 present across termination point 271 and contact 270 of connector 136. thus, a circuit path including circuit trace 324-2, wire 263-2, and trace 262-2 to termination point 271 is not pulled down to ground reference 180, and a corresponding voltage sensed on circuit trace 324-2 at signal interface 240 is a logic high voltage.
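the shunt-sensing scheme above amounts to reading an n-bit code: a pull-up resistor drives each trace to logic 1, and an installed shunt grounds the trace to logic 0. a minimal sketch of this decoding (the function name and the bit ordering are assumptions for illustration, not part of the patent):

```python
def decode_cable_type(shunt_present):
    """decode mode bits as sensed at a signal interface such as 240.

    shunt_present: list of booleans, index 0 = lowest-order mode bit.
    an installed shunt grounds its trace, so shunt -> logic 0;
    otherwise the pull-up resistor yields logic 1.
    returns (sensed_bits, cable_type), treating the sensed logic
    levels as a binary number.
    """
    bits = [0 if shunt else 1 for shunt in shunt_present]
    cable_type = sum(bit << i for i, bit in enumerate(bits))
    return bits, cable_type

# two mode bits as in fig. 3a: the trace for path 261-1 has a shunt
# (reads logic 0), the trace for path 261-2 has none (reads logic 1)
bits, cable_type = decode_cable_type([True, False])
```

with four mode bits this distinguishes 2**4 = 16 codes, matching the "up to sixteen different types of cables" noted above.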
similar to traces 252 and a corresponding circuit path to electronic circuit 205, traces 324 of electronic circuit 205 and, more specifically, inputs of signal interface 240 are also protected from transient voltages. for example, traces 324 electrically couple through connectors 146, 148 to corresponding wires 263 of electrical cable 144. wires 263 of electrical cable 144 in turn couple to circuit traces 262 and corresponding clamping circuits 230. in the event of a transient voltage on circuit traces 262, clamping circuits 230 clamp the transient voltage via a circuit path including ground path 291, conductor 294, shield 295, connectors 146, 148, ground path 296, and via 297 to local ground reference 180, similar to that previously discussed for clamping circuits 220. consequently, inputs of signal interface 240 are also protected from potentially damaging transient voltages. fig. 4 is a diagram of clamping circuit 230 according to an embodiment of the invention. as shown, clamping circuit 230 includes diodes d 310, d 312, and d 314. in operation, contact 283 electrically connects to ground path 291. thus, during the occurrence of positive transient voltages (potentially caused by lightning 199) imparted on contact 285, diode d 312 turns on (forward biased) and diode d 314, such as a fast-acting, low-capacitance zener diode, clamps the voltage at its characteristic reverse breakdown voltage, such as +6 volts. during the occurrence of a negative transient voltage (potentially caused by lightning 199) imparted at contact 285, diode d 310 turns on (forward biased) to clamp the voltage at contact 285 so that it does not go below −1 volt. consequently, wires 263 and corresponding inputs of signal interface 240 are protected against voltage transients.
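to a first approximation, clamping circuit 230 behaves as a piecewise transfer function: voltages inside the clamp window pass through, while transients are limited to roughly the −1 volt and +6 volt levels given above. a simplified sketch (idealized; it ignores series diode forward drops, response time, and source impedance):

```python
def clamp_230(v_in, v_neg=-1.0, v_pos=6.0):
    """idealized clamp: output limited to the [v_neg, v_pos] window."""
    return max(v_neg, min(v_pos, v_in))

# a 100-volt lightning-induced spike is limited to about +6 v,
# while a normal-range signal passes through unchanged
spike = clamp_230(100.0)
signal = clamp_230(3.3)
```

the symmetric clamping circuit 220 of fig. 5 can be modeled the same way with v_neg = −16.0 and v_pos = +16.0.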
depending on the embodiment, clamping circuit 230 may include components such as transzorbs, fast-acting tvs zener diodes, arrestor devices such as metal oxide varistors, carbon blocks, thyristors, gas discharge tubes and the like. fig. 5 is a diagram of clamping circuit 220 according to one embodiment of the invention. as shown, clamping circuit 220 includes diodes d 410 , d 412 , d 420 , and d 422 . generally, node 275 of circuit board 210 is protected against transient voltages. for example, during a negative transient voltage at node 275 , diode d 412 turns on (forward breakdown voltage=1.0 v) as well as d 410 (e.g., reverse breakdown voltage=−15 v). clamping circuit 220 thus clamps a voltage at node 275 so that it does not go below −16 volts. diodes d 420 and d 422 are symmetrically disposed as d 410 and d 412 but in a reverse direction between node 275 and ground path 273 . thus, node 275 is protected against positive transient voltages (e.g., +16 volts). depending on the embodiment, clamping circuit 220 may include components such as transzorbs, fast acting diodes, fast-acting tvs zener diodes, arrestor devices such as metal oxide varistors, carbon blocks, thyristors, gas discharge tubes and the like. fig. 6 is a flow chart for fabricating cable 144 according to an embodiment of the invention. in step 610 , an assembler provides a cable 144 including internal conductors (such as wires 253 , 263 ) for conveying communication signals to electronic circuit 205 disposed in communication system 100 susceptible to potentially damaging transient voltages. in step 620 , the assembler produces one end of cable 144 to include connector 146 for coupling cable 144 to electronic circuit 205 . in step 630 , the assembler integrates surge protection circuit 150 into cable 144 for suppressing transient voltages associated with communication system 100 at a distance from electronic circuit 205 . 
in step 640 , the assembler allocates a conductor in the cable (such as shield 295 ) to extend a ground reference of electronic circuit 205 to circuit board 210 of surge protection circuit 150 . as previously discussed, surge protection circuit 150 suppresses transient high voltages on traces 252 , 262 to prevent damage to inputs (or outputs) of electronic circuit 205 . while this invention has been particularly shown and described with references to preferred embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.
185-468-647-350-363
US
[ "US", "JP" ]
G01B7/31,H01L21/67,G01R31/26,H01L21/68
1984-07-31T00:00:00
1984
[ "G01", "H01" ]
use of an electronic vernier for evaluation of alignment in semiconductor processing
an electronic vernier is presented which detects and quantifies misalignment between layers of material deposited upon a semiconducting wafer. verniers may be constructed which evaluate alignment between two conducting layers, between two conducting layers and an insulating layer and between a semiconducting layer and a capacitive layer. circuitry is described which shows how output from a vernier may be detected and quantified in order to evaluate the amount of misalignment.
1. a device for evaluating alignment of a first layer on an integrated circuit with a second layer on the integrated circuit, the device comprising: a detecting means for electrically detecting misalignment between the first layer and the second layer, the detecting means including, a first plurality of conducting material sections on the first layer, and a second plurality of conducting material sections on the second layer, wherein each section of a first group of the first plurality of conducting material sections is electrically coupled to a corresponding section of a first group of the second plurality of conducting material sections, and each section of a second group of the first plurality of conducting material sections is electrically insulated from a corresponding section of a second group of the second plurality of conducting material sections; and an output means, coupled to the detecting means for producing an output which is an encoded representation of a quantity of misalignment, wherein the output means detects which sections are within the first group of the first plurality of conducting material sections and which sections are within the second group of the first plurality of conducting material sections in order to produce the encoded representation of the quantity of misalignment. 2. a device as in claim 1 wherein the first plurality of conducting material sections are arranged in a sequence so that when the first layer is aligned with the second layer, consecutive sections from the second group of the first plurality of conducting material sections separate some sections in the first group of the first plurality of conducting material sections from other sections in the first group of the first plurality of conducting material sections. 3. 
a device for evaluating alignment of a first layer on an integrated circuit with a second layer on the integrated circuit, wherein the first layer comprises conducting material and the second layer comprises insulating material, wherein the integrated circuit has a third layer comprising conducting material, and wherein the second layer overlays the third layer and the first layer overlays the second layer, the device comprising: detecting means for electrically detecting misalignment between the first layer and the second layer, the detecting means including, a first plurality of conducting material sections on the first layer; a second plurality of conducting material sections on the third layer; output means, coupled to the detecting means for producing an output which is an encoded representation of a quantity of misalignment; wherein each section of a first group of the first plurality of conducting material sections is electrically coupled through windows within the insulating material to a corresponding section of a first group of the second plurality of conducting material sections, and each section of a second group of the first plurality of conducting material sections is electrically insulated by the insulating material from a corresponding section of a second group of the second plurality of conducting material sections. 4. a device as in claim 3 wherein the output means determines which sections are within the first group of the first plurality of conducting material sections and which sections are within the second group of the first plurality of conducting material sections in order to produce the encoded representation of the quantity of misalignment. 5. 
a device as in claim 4 wherein the first plurality of conducting material sections are arranged in a sequence so that when the first layer is aligned with the second layer, consecutive sections from the second group of the first plurality of conducting material sections separate some sections in the first group of the first plurality of conducting material sections from other sections in the first group of the first plurality of conducting material sections. 6. a device for evaluating alignment of a first layer on an integrated circuit with a second layer on the integrated circuit, the device comprising: detecting means for electrically detecting misalignment between the first layer and the second layer, the detecting means comprising, a plurality of capacitive material sections on the second layer, and, a plurality of semiconducting material sections on the first layer, each semiconducting material section having a first end, a second end, and a channel, each channel in a first group of the plurality of semiconducting material sections having a first conductivity value and each channel in a second group of the plurality of semiconducting material having a second conductivity value, the conductivity value of each channel depending upon the relative position of the semiconducting material section containing each channel with capacitive material sections associated with each semiconducting material section; and, output means, coupled to the detecting means for producing an output which is an encoded representation of a quantity of misalignment. 7. 
a device as in claim 6 wherein the detecting means additionally comprises a current source coupled to the first end of each of the semiconducting material sections, and a plurality of current detection means each current detecting means coupled to the second end of the semiconducting material sections, for detecting whether each semiconducting material section has the first conductivity value or the second conductivity value. 8. a device as in claim 7 wherein the plurality of semiconducting material sections are arranged in a sequence so that when the first layer is aligned with the second layer, consecutive sections from the second group of the plurality of semiconducting material sections separate some sections in the first group of the plurality of semiconducting material sections from other sections in the first group of the plurality of semiconducting material sections.
background processing integrated circuits requires the deposit of many layers of material upon a semiconductor wafer. precise alignment of each layer with every other layer is necessary to assure correct functioning of a finished product. traditionally this alignment has been done by an operator examining a circuit under a microscope or by other means using optics. summary of the invention in accordance with the preferred embodiments of the present invention an electronic vernier for the evaluation of alignment of layers in semiconductor processing is provided. the embodiments provided include verniers for aligning a first conducting layer to a second conducting layer; a conducting layer to a non-conducting layer; and a semiconducting layer to a capacitive layer. brief description of the drawings figs. 1a-1f show an embodiment of a vernier for aligning two conductive layers. figs. 2a and 2b show a second embodiment of a vernier for aligning two conductive layers in accordance with a preferred embodiment of the present invention. figs. 3a and 3b show an embodiment of a vernier for aligning a non-conductive layer between two conductive layers. figs. 4a and 4b show an embodiment of a vernier for aligning a semiconducting layer to a capacitive layer. description of the preferred embodiment fig. 1a shows a design for an electronic digital vernier for integrated circuit (ic) process evaluation. conducting strips 101-108 are part of a first conducting layer on an ic. conducting strips 111-118 are part of a second conducting layer on the ic. the second layer is adjacent to the first layer. as can be seen from fig. 1a conducting strip 101 is in contact with conducting strip 111, conducting strip 102 is in contact with conducting strip 112, conducting strip 103 is in contact with conducting strip 113, conducting strip 104 is in contact with conducting strip 114, and conducting strip 105 is in contact with conducting strip 115. 
there is no contact between conducting strips 106 and 116, 107 and 117, and 108 and 118. alignment may be evaluated as follows. conducting strips 101-108 are held at a voltage vdd (logic 1) by a voltage source 120. conducting strips 111-118 are individually connected to a node 151 of a detection circuit 150, shown in fig. 1c. detection circuit 150 consists of a voltage meter 153 and a resistance 152 coupling node 151 to a reference voltage (logic 0). when node 151 is connected to conducting strips 111-115, voltage meter 153 detects a logic 1. when node 151 is connected to conducting strips 116-118, voltage meter 153 detects a logic 0. thus detection circuit 150 detects a voltage transition between conducting strip 115 and conducting strip 116. in fig. 1b, the second conducting layer has been moved to the right relative to the first conducting layer. therefore, conducting strips 111-118 have moved to the right with respect to conducting strips 101-108. now, when conducting strips 111-118 are individually connected to node 151, detection circuit 150 detects a voltage transition between conducting strip 113 and conducting strip 114. determination of where a voltage transition occurs, therefore, indicates the relative positioning of the first and second conducting layers. by placing verniers such as that shown in fig. 1a vertically (in the y direction) and horizontally (in the x direction) on an ic, it is possible to determine alignment of layers in both the x direction and the y direction. fig. 1d is a block diagram showing a vernier 181 with 32 vernier elements, labelled ve31-ve0 (only vernier elements ve31-ve21 and ve0 are shown), and additional circuitry which could be incorporated on an integrated circuit. vernier outputs from vernier elements ve31-ve0 are coupled to a debouncing circuit 182. in fig. 1d, example values 185 are given for vernier outputs.
that is, the outputs of vernier elements ve31-ve27 are labelled "1" ("1" represents a logic 1), the output for vernier element ve26 is labelled "0" ("0" represents a logic 0), and the outputs of vernier elements ve25-ve0 are labelled "0/1" ("0/1" means the output can be either a logic 0 or a logic 1). debouncing circuit 182 comprises debouncing elements de31-de0 (only debouncing elements de31-de21 and de0 are shown). generally, debouncing circuit 182 scans output from the vernier elements starting with output from vernier element ve31. as long as vernier elements ve31-ve0 output a logic 1, corresponding debouncing elements de31-de0 output a logic 0. however, once a debouncing element detects a logic 0 output from a vernier element, the remaining debouncing elements output a logic 1. example values 186, corresponding to example values 185, are given for debouncing circuit 182. as can be seen, these values are logic 0 for debouncing elements de31-de27 and logic 1 for the rest of the debouncing elements. from the above description it can be seen that debouncing circuit 182 ensures that in the outputs from its debouncing elements de31-de0 there is at most a single transition from logic 0 to logic 1. the location of the transition is at the highest-order output of vernier elements ve31-ve0 that contains a logic 0. a detector circuit 183 receives output from debouncing circuit 182. detector circuit 183 comprises detecting elements dt31-dt0 (only detecting elements dt31-dt21 and dt0 are shown). at the detecting element corresponding to the location where the output from debouncing circuit 182 makes its transition from logic 0 to logic 1, detector circuit 183 produces a logic 1. for all other detecting elements, detector circuit 183 produces a logic 0, as shown.
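the debounce-detect-encode chain described above can be sketched in software. this sketch (a software analogy of the hardware circuits, not part of the patent) takes raw vernier outputs with the highest-order element first, forces at most one 0-to-1 transition as debouncing circuit 182 does, and returns the index of the highest-order element reading logic 0, which is what binary encoder 184 reports:

```python
def debounce(vernier_out):
    """mimic debouncing circuit 182: output 0 until the first logic-0
    vernier output is seen, then output 1 for all remaining elements."""
    seen_zero, out = False, []
    for bit in vernier_out:          # highest-order element first
        if bit == 0:
            seen_zero = True
        out.append(1 if seen_zero else 0)
    return out

def encode_transition(vernier_out):
    """mimic detector 183 plus encoder 184: return the element index
    (n-1 down to 0) where the debounced outputs switch from 0 to 1,
    or None if no element reads logic 0."""
    deb = debounce(vernier_out)
    n = len(deb)
    for pos, d in enumerate(deb):
        if d == 1:
            return n - 1 - pos
    return None

# example values 185: ve31-ve27 read 1, ve26 reads 0, lower bits arbitrary
outputs = [1, 1, 1, 1, 1, 0] + [1, 0] * 13
```

encode_transition(outputs) yields 26, matching the binary coded number 188 discussed below.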
example values 187, corresponding to example values 186 and 185, show a logic 1 at the output of detecting element dt26. the logic 1 output of detecting element dt26 corresponds to the transition from logic 0 to logic 1 which occurs at the output of debouncing element de26. a binary encoder 184 receives output from detector circuit 183 and produces a binary coded number 188 which indicates the location of the transition from logic 0 to logic 1 in the output of debouncing circuit 182. binary coded number 188 corresponds to example values 187, i.e., binary coded number 188 is 11010 (base 2), which is equivalent to 26 (base 10), i.e., the location where the output of debouncing circuit 182 makes its transition from logic 0 to logic 1. fig. 1e shows an embodiment of debouncing elements within debouncing circuit 182, and detecting elements within detector circuit 183. debouncing elements 161-165 illustrate how debouncing elements de31-de0 may be constructed. for example, debouncing element 161 has an input 161i from a vernier element and an input 161c from a prior debouncing element. a transistor pair 168 operates as an inverter, and transistor pair 169 operates as a switch. when input 161i is at logic 0, transistor pair 169 is switched "off" and a transistor 166 is switched "on" so that a logic 1 (represented by a "+" in fig. 1e) is propagated through to debouncing output 161o and to an input 162c of debouncing element 162. when input 161i is at logic 1, transistor 166 is switched "off" and transistor pair 169 is switched "on". transistor pair 169 thus propagates the value on input 161c through to debouncing output 161o and input 162c. debouncing elements 162-164 operate in a manner similar to debouncing element 161. detecting elements 171-174 illustrate how detecting elements dt31-dt0 may be constructed. a transistor 176 within detecting element 171 acts as a switch.
when an input 171c from a prior detecting element is at logic 0, the value on output 161o is propagated through to an output 171o. when input 171c is at logic 1, a depletion transistor 177 pulls the output 171o to logic 0 (logic 0 is represented by the "ground" in fig. 1e). an input 172c to detecting element 172 is coupled to transistor 176 as shown. detecting elements 172-174 operate in a manner similar to detecting element 171. in fig. 1a, each pair of conducting strips--e.g. 101 and 111, 102 and 112, 103 and 113, etc.--forms a vernier element. in each vernier element, proceeding from left to right across fig. 1a, the conducting strip on the second layer (conducting strips 111-118) is shifted to the right an incremental distance 142--see fig. 1f--relative to the conducting strip on the first layer (conducting strips 101-108). incremental distance 142 is uniform throughout all the vernier elements. fig. 1f shows how incremental distance 142 can be calculated using two vernier elements. a first distance 131 is the distance from a leading edge 111a of conducting strip 111 to a leading edge 101a of conducting strip 101. a second distance 132 is the distance from a leading edge 112a of conducting strip 112 to a leading edge 102a of conducting strip 102. incremental distance 142 is the difference in length between first distance 131 and second distance 132. incremental distance 142 may be used in conjunction with binary coded number 188 to determine a quantity of misalignment. for example, if incremental distance 142 has a value d, and binary coded number 188 has a value v1 but would have had a value v0 if the first and second layers had been properly aligned, then the quantity of misalignment m can be calculated using the following formula: m = d × |v0 − v1|. fig. 2a shows an alternate arrangement of conducting strips.
conducting strips 201-212 are part of a first conducting layer and conducting strips 221-232 are part of a second conducting layer. in fig. 2a, conducting strips 201-204 and 209-212 make contact with conducting strips 221-224 and 229-232 respectively. there is no contact between conducting strips 205-208 and conducting strips 225-228. thus in fig. 2a there are two transitions, a first transition between conducting strips 224 and 225, and a second transition between conducting strips 228 and 229. in fig. 2b, the second conducting layer has been moved to the right relative to the first conducting layer. thus in fig. 2b the first transition occurs between conducting strips 222 and 223, and the second transition occurs between conducting strips 226 and 227. utilizing a vernier with two transitions, as in fig. 2a, allows for greater process independence. for instance, if conducting strips 101-108 and 111-118 in fig. 1a are formed by etching, under etch or over etch might result in a change in the location of the transition thereby introducing uncertainty into the determination of alignment. in fig. 2a, under etch or over etch may result in a change in location of both transition points, but the relative center of the transition points will remain in the same location. the relative center of the transition points may then be used to determine the alignment of the layers. the verniers discussed above were designed to work between two conducting layers. verniers may also be constructed which can be used for non-conducting layers. for instance, fig. 3a shows a vernier element which may be used to construct a vernier for alignment of a first conducting layer having a conducting strip 301, a non-conducting layer 302 having a window 302a, and a second conducting layer having a conducting strip 303. in fig. 3a conducting strip 301 makes contact with conducting strip 303 through window 302a. in fig. 3b, the vernier element of fig. 
3a is shown with non-conducting layer 302 shifted to the left relative to the first and second conducting layers. in fig. 3b conducting strip 301 does not make electrical contact with conducting strip 303 because window 302a has shifted relative to conducting strips 301 and 303. using the vernier element shown in figs. 3a and 3b, a vernier similar to the vernier of figs. 2a and 2b may be constructed. fig. 4a shows a vernier element which may be used to construct a vernier for alignment of two semiconducting layers: an island (diffusion) layer having a semiconducting strip 401 and a poly-silicon layer having a gate 403. through contacts 404, gate 403 is biased to a reference voltage (logic 0). the portion of semiconducting strip 401 immediately under gate 403 is biased to its non-conducting state. however a conducting channel 402 exists which allows conduction between locations 401a and 401b on semiconducting strip 401. in fig. 4b, the vernier element of fig. 4a is shown with the island layer shifted to the right relative to the poly-silicon layer. thus, in fig. 4b, conducting channel 402 disappears and there is no conduction between locations 401a and 401b. using the vernier element shown in figs. 4a and 4b, a vernier similar to the vernier of figs. 2a and 2b may be constructed.
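the process-independence argument for the two-transition vernier of figs. 2a/2b — both transition points move under over- or under-etch, but their midpoint (the "relative center") does not — can be illustrated numerically. a small sketch with invented function names and made-up readings:

```python
def transitions(bits):
    # Indices at which consecutive vernier-element readings change value.
    return [i for i in range(1, len(bits)) if bits[i] != bits[i - 1]]

def relative_center(bits):
    # Midpoint of the two transition points; reading the center instead of
    # a single transition lets etch variation cancel out.
    t = transitions(bits)
    assert len(t) == 2, "expected exactly two transitions, as in fig. 2a"
    return (t[0] + t[1]) / 2

nominal    = [1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1]   # contacts 1-4 and 9-12
overetched = [1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1]   # both edges pushed out
# relative_center is 6.0 for both readings: the center did not move
```

the over-etched reading shifts both transitions by one element in opposite directions, yet the computed center is unchanged, which is the robustness the text claims.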
185-633-700-340-049
IT
[ "WO", "JP", "ES", "EP", "US", "KR", "PL" ]
F17C1/16,F17C1/06,F17C13/08,F17C1/00,F16J12/00,F17C1/04,F17C1/02,F17C13/06,B60K15/03,F17C13/00
2010-08-09T00:00:00
2010
[ "F17", "F16", "B60" ]
gas cylinder
a gas cylinder (1 ) internally defining a gas storage space (2) able to be closed by a stop valve (3) comprises a rigid wall (4) made from composite material having a reinforcing layer (5) containing reinforcing fibres and an inner surface (6) and a flexible sealing wall (13) connected to the rigid wall (4) through a mouth (9) and suitable for adhering in pressing contact against the inner surface (6) of the rigid wall (4).
claims 1. gas cylinder (1) internally defining a gas storage space (2) able to be closed by a stop valve (3), wherein said cylinder (1 ) comprises: - a rigid wall (4) made from composite material having a reinforcing layer (5) containing reinforcing fibres and an inner surface (6) that defines an inner space (7) accessible through an access opening (8) formed in the rigid wall (4), - a tubular mouth (9) connected to the access opening (8) of the rigid wall (4) and configured to receive the stop valve (3) in communication with the gas storage space (2), - a flexible sealing wall (13) arranged in the inner space (7) and connected to the rigid wall (4) only through the mouth (9), said flexible sealing wall (13) internally defining said gas storage space (2) and being suitable for adhering in pressing contact against the inner surface (6) of the rigid wall (4), wherein the rigid wall (4) is permeable to air so as to allow a complete expansion of the flexible wall (13) in the inner space (7) and the inner surface (6) is formed from a sliding layer (14) of the rigid wall (4) different from the reinforcing layer (5) and that allows sliding in pressing contact between the flexible sealing wall (13) and the rigid wall (4). 2. gas cylinder (1) according to claim 1 , wherein the mouth (9) comprises a cylinder connection portion (11 ) suitable for removably engaging a mouth connection seat (12) formed from the rigid wall (4) at the opening (8). 3. gas cylinder (1) according to claim 1 , wherein the mouth (9) comprises a cylinder connection portion (11 ) suitable for removably engaging a mouth connection seat (12) formed from a body (20) initially distinct and then connected with the rigid wall (4). 4. 
gas cylinder (1 ) according to any one of the previous claims, wherein the reinforcing fibres of the reinforcing layer (5) have a tensile strength of over 4500 mpa and a modulus of elasticity of over 200 gpa and wherein said reinforcing layer (5) comprises a content of reinforcing fibres within the range from 50%vol to 70%vol, preferably from 55%vol to 65%vol, even more preferably about 60%vol. 5. gas cylinder (1 ) according to any one of the previous claims, wherein said inner surface (6) is smooth and without edges or steps, with the exception of the edge of the access opening (8). 6. gas cylinder (1 ) according to any one of the previous claims, wherein said sliding layer (14) comprises a thermoplastic synthetic material selected from the group comprising polyethylene, pet, polyester, polyvinylchloride, polytetrafluoroethylene. 7. gas cylinder (1 ) according to any one of the previous claims, wherein said sliding layer (14) comprises a substance selected from the group consisting of: - lubricant powder containing nano-particles suitable for reducing the friction coefficient between the rigid wall (4) and the flexible sealing wall (13), - lubricant fluid, - lubricant grease, - lubricant oils, - lubricant gels. 8. gas cylinder (1) according to any one of the previous claims, wherein the sliding layer (14) is fixed to the reinforcing layer (5) by means of moulding of the sliding layer (14) and subsequent winding of the reinforcing layer (5) around the sliding layer (14). 9. gas cylinder (1 ) according to any one of the previous claims, wherein the sliding layer (14) is fixed to the reinforcing layer (5) by means of blow-moulding in a mould containing said reinforcing layer (5). 10. gas cylinder (1 ) according to any one of the previous claims, wherein the sliding layer (14) is fixed to the reinforcing layer (5) by means of spraying. 11. 
gas cylinder (1 ) according to any one of the previous claims, wherein the sliding layer (14) is fixed to the reinforcing layer (5) by means of dip coating. 12. gas cylinder (1) according to any one of the previous claims, wherein the flexible sealing wall (13) forms a deformable bag with an opening connected to the mouth (9) so as to form a sealing wall (13) - mouth (9) group that is prefabricated and able to be reversibly connected to the rigid wall (4). 13. gas cylinder (1) according to any one of the previous claims, wherein the flexible sealing wall (13) is connected to the mouth (9) through connection means selected from the group comprising: - vulcanized join, - co-moulded join, - gluing, - mechanical join through crimping or screw clamping. 14. gas cylinder (1 ) according to claim 1, wherein the mouth (9) with the flexible sealing wall (13) are irreversibly connected to such a rigid wall (4). 15. gas cylinder (1 ) according to any one of the previous claims, wherein said sliding layer (14) comprises a fabric. 16. gas cylinder (1 ) according to any one of the previous claims, wherein the rigid wall (4) comprises a tubular portion (15) with at least one widened first tubular section (21) and at least one second tubular section (22) adjacent to and narrower than the first tubular section (21 ). 17. gas cylinder (1 ) according to claim 15, wherein said flexible sealing wall (13) has a shape that, in the deflated state, is incompatible with the shape of the tubular portion (15) of the rigid wall (4) and, in the inflated state, adapts to the shape of the rigid wall (4). 18. 
cylinder space - gas cylinder group (32), comprising: - one or more gas cylinders (1 ) according to one of claims 16 and 17, - a support and containment structure (33) able to be fixed to a bearing structure of an application and configured to at least partially receive said gas cylinders (1 ), - one or more locking members (34) anchored to the support and containment structure (33) and suitable for at least partially wrapping around said gas cylinders (1 ) so as to lock them to the support and containment structure (33), wherein said locking members (34) are partially received in a circumferential seat (35) of the gas cylinder (1) formed by its narrow portion (22) and the widened portion (21 ) extends into a gap (36) defined by the locking member (34) and the support and containment structure (33).
description "gas cylinder" [0001] the object of the present invention is a gas cylinder made from synthetic or composite material for storing gas under pressure. known gas cylinders made from composite material usually comprise an inner layer, for example made from steel or synthetic material, which ensures the impermeability to the stored gas, and an outer layer made from composite material reinforced with fibres that ensures the mechanical resistance of the cylinder to the operating pressures, as well as a mouth that forms a through opening from the inside to the outside of the cylinder and a seat for receiving a valve for opening and closing the through opening. [0002] the gases in the cylinders are classified as compressed gases if their critical temperature is below -50 °c, like hydrogen or oxygen, as liquefied gases if the critical temperature is above -50 °c, like lpg, and as dissolved gases, such as acetylene dissolved in acetone. [0003] the cylinders are intended for many uses and the standards for making them and testing them vary according to the application. amongst the main applications of gas cylinders, we can cite the storage of liquefied or compressed gases for vehicular transport, domestic or industrial uses, the storage of compressed or liquefied gases for industrial use, holding tanks for compressed air, the storage of breathable mixtures for respirators, the storage of medical gases and fire extinguishers. [0004] thanks to the use of different materials for the functions of impermeabilising and mechanical resistance to pressure, composite gas cylinders have a very low weight/containment capacity ratio compared to steel gas cylinders.
[0005] however, the relatively complex structure of composite gas cylinders and the interaction between the different material of the impermeabilising layer, of the reinforcing layer and of the mouth involve problems of sealing the cylinder and phenomena of deterioration of the synthetic materials and of the interface areas between the inner layer, the outer layer and the mouth, in particular in case of extended operating times. [0006] such structural and seal deterioration is worsened by the difference of the thermal expansion coefficients of the materials of the inner impermeabilisation layer and of the outer reinforcing layer (which are connected together over the entire surface), which involves a cyclical stressing of the two layers and by their interfacing due to the prevention of their free and independent thermal deformation. [0007] the purpose of the present invention is therefore to provide a composite gas cylinder, having features such as to improve the impermeability in case of long operating periods and to avoid phenomena of structural deterioration with reference to the prior art. 
[0008] this and other purposes are achieved through a gas cylinder internally defining a gas storage space able to be closed by a stop valve, wherein said cylinder comprises: - a rigid wall made from composite material having a reinforcing layer containing reinforcing fibres and an inner surface that defines an inner space accessible through an access opening formed in the rigid wall, - a tubular mouth able to be removably connected to the access opening of the rigid wall and configured to receive the stop valve in communication with the gas storage space, - a flexible sealing wall arranged in the inner space and connected to the rigid wall only through the mouth, said flexible sealing wall internally defining the gas storage space and being suitable for adhering in pressing contact against the inner surface of the rigid wall, wherein the rigid wall is permeable to air so as to allow a complete expansion of the flexible wall in the inner space and the inner surface is formed from a sliding layer of the rigid wall that is different from the reinforcing layer and that allows sliding in pressing contact between the flexible sealing wall and the rigid wall. [0009] thanks to the freedom of movement in pressing contact between the flexible sealing wall and the rigid wall and to their connection in a single area at the access opening of the rigid wall it is possible to avoid the phenomena of deterioration of the interfaces between the different layers and materials of the cylinder and the mouth, maintaining an excellent seal also in case of very long operating times. [0010] moreover, thanks to the removable connection of the mouth to the access opening of the rigid wall it is possible to replace the flexible sealing wall and gain access to the inner space of the rigid wall in order to inspect and regenerate the sliding layer. 
[0011] in order to better understand the present invention and to appreciate its advantages, some non-limiting example embodiments will be described hereafter, with reference to the figures, in which: [0012] figure 1 is a longitudinal section view of a gas cylinder according to one embodiment of the invention; [0013] figure 2 is a partial longitudinal section view of a gas cylinder according to one further embodiment of the invention; [0014] figure 3 is a partial longitudinal section view of a gas cylinder according to one further embodiment of the invention; [0015] figures 4 and 5 illustrate cylinder spaces with gas cylinders according to figures 2 and 3; [0016] figure 6 is a partial longitudinal section view of a gas cylinder according to one further embodiment of the invention; [0017] figure 7 illustrates a cylinder space with gas cylinder according to figure 6; [0018] figure 8 is a partial longitudinal section view of a gas cylinder according to one further embodiment of the invention. [0019] with reference to the figures, a gas cylinder (hereafter "cylinder") is wholly indicated with reference numeral 1. [0020] cylinder 1 internally defines a gas storage space 2 able to be closed by a stop valve 3 and comprises a rigid outer wall 4 made from composite material that has a reinforcing layer 5 containing reinforcing fibres and an inner surface 6 that defines an inner space 7 accessible through an access opening 8. such an access opening 8 preferably constitutes the only opening of the rigid outer wall 4 suitable for receiving a mouth for a valve and/or directly the valve. 
[0021] in the present description of the invention, the term "composite material" denotes a material with a non-homogeneous structure, consisting of the assembly of two or more different substances, physically separated by a clear interface of zero thickness and equipped with substantially different physical and chemical properties that remain separate and distinct at macroscopic and structural level. in particular, the composite material may comprise fibres of natural or artificial materials, for example glass fibres, carbon fibres, ceramic fibres, aramid fibres, such as kevlar®, embedded in a matrix made from a preferably but not necessarily synthetic material, for example thermoplastic like nylon® and abs or thermosetting like epoxy resins or polyester resin or metals, such as for example aluminium, titanium and alloys thereof or a ceramic material, generally silicon carbide or alumina. [0022] according to one embodiment, the rigid wall 4 comprises a tubular portion 15, preferably substantially cylindrical and extending along a longitudinal axis l of the cylinder 1 , a bottom portion 16, for example spherical or elliptical cap-shaped, which connects to a lower end of the tubular portion 15 and defines the inner space 7 on a lower side 17 of the cylinder 1 , as well as an upper portion 18, for example shaped like an ogive, which connects to an upper end of the tubular portion 15 and defines the inner space 7 on an upper side 19 of the cylinder 1 opposite to the lower side 17. [0023] cylinder 1 further comprises a tubular mouth 9 able to be removably connected to the access opening 8 of the rigid wall 4 and configured to receive the stop valve 3 in communication with the gas storage space 2. 
[0024] according to one embodiment, the mouth 9 may be made from synthetic or metallic material and may comprise a valve seat 10, for example a frusto-conical or cylindrical internal threading with a seat for a gasket, for the connection of the valve 3 in the mouth 9, as well as a cylinder connection portion 11, for example an external threading or a bayonet portion suitable for engaging a corresponding mouth connection seat 12 formed from the rigid wall 4 at the opening 8. [0025] according to one embodiment, the connection seat 12 comprises a threading formed from the reinforcing layer 5 or a body 20, optionally threaded, made of several distinct pieces or as a single block, removably connected (for example through screw clamping) with one edge of the opening 8 formed by the reinforcing layer 5 after its completion and setting. this facilitates and speeds up the winding process of the reinforcing layer and the entire assembly of the gas cylinder. [0026] alternatively, such a body 20 may be connected to the rigid wall through at least partial over-winding of the reinforcing layer. [0027] the connection between the mouth 9 and the rigid wall 4 is configured to withstand the forces resulting from the pressure of the gas in the area of the opening 8, but it is not necessarily impermeable to gases and it could even be configured so as to create an area of programmed permeability of the rigid wall 4. [0028] a flexible sealing wall 13, shaped like an inflatable bag impermeable to gases that are intended to be stored, is arranged in the inner space 7 and connected to the rigid wall 4 only through the mouth 9 and only at the mouth 9. the flexible sealing wall 13 internally defines the gas storage space 2 and is suitable for adhering in pressing contact against the inner surface 6 of the rigid wall 4.
[0029] according to one embodiment, in an undeformed state the flexible sealing wall 13 has a shape defined by a tubular portion 25, preferably substantially cylindrical and extending along a longitudinal axis l of the cylinder 1 , a bottom portion 26, for example shaped like a spherical or elliptical cap, which connects to a lower end of the tubular portion 25 and defines the gas storage space 2 on the lower side 17 of the cylinder 1 , as well as an upper portion 28, for example nose cone-shaped, which connects to an upper end of the tubular portion 25 and defines the gas storage space 2 on the upper side 19 of the cylinder 1 opposite the lower side 17. [0030] according to one aspect of the invention, the rigid wall 4 is permeable to air so as to allow a complete expansion of the flexible sealing wall 13 in the inner space 7 and the inner surface 6 is formed by a sliding layer 14 of the rigid wall different from the reinforcing layer and that allows sliding in pressing contact between the flexible sealing wall 13 and the rigid wall 4. in one embodiment, the sliding layer 14 has no fibres. according to an alternative embodiment, the sliding layer 14 comprises a fabric that will be described later on. [0031] thanks to the freedom of movement in sliding contact between the flexible sealing wall and the rigid wall and their connection in a single area at the access opening of the rigid wall it is possible to avoid the phenomena of deterioration of the interfaces between the different layers and materials of the cylinder and the mouth, maintaining an excellent seal also in case of very long operating times. [0032] moreover, thanks to the removable connection of the mouth to the access opening of the rigid wall it is possible to replace the flexible sealing wall and gain access to the inner space of the rigid wall in order to inspect and regenerate the sliding layer.
[0033] the rigid outer wall 4 performs the function of withstanding the internal pressure exerted by the stored gas and it may be manufactured through winding filaments of continuous carbon fibres impregnated with epoxy resin on a mandrel. the mandrel itself may then be removed, for example through dissolving, mechanical crumbling or disassembly in the case of a mandrel in many pieces. [0034] alternatively, the mandrel around which the reinforcing layer 5 is wound stays integrated in the rigid wall 4 and forms a layer thereof, for example the aforementioned sliding layer 14 or an intermediate layer (not illustrated) between the reinforcing layer 5 and the sliding layer 14. [0035] the reinforcing fibres of the reinforcing layer 5 have a tensile strength of over 4500 mpa, preferably from 4800 mpa to 5200 mpa and a modulus of elasticity of over 200 gpa, preferably from 200 to 250 gpa. [0036] advantageously, the reinforcing layer 5 comprises a content (volumetric) of reinforcing fibres within the range from 50%vol to 70%vol, preferably from 55%vol to 65%vol, even more preferably of about 60%vol, in which the rest of the volume is formed by the matrix that may be an epoxy or vinyl ester resin made to set through a heat treatment, for example heating to about 120° for about 5 hours. [0037] the sliding layer 14 advantageously has a smooth exposed surface without edges or steps (with the exception of the edge of the preferably single access opening 8) and it may comprise a synthetic material, preferably thermoplastic, for example selected from the group comprising polyethylene, polyester, pet (polyethylene terephthalate), polyvinylchloride, polytetrafluoroethylene. in accordance with an embodiment, the sliding layer 14 comprises a fabric of fibres or natural or synthetic filaments, for example made from polyester, which may be further coated or directly exposed in the inner space 7.
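the reinforcing-layer figures quoted above (fibre tensile strength over 4500 mpa, modulus of elasticity over 200 gpa, fibre content between 50%vol and 70%vol) amount to a simple acceptance rule. a minimal sketch — the function name and the sample values are invented for illustration:

```python
def reinforcing_layer_ok(tensile_mpa, modulus_gpa, fibre_vol_pct):
    # Acceptance rule from the text: strength > 4500 MPa, modulus > 200 GPa,
    # fibre volume fraction within 50-70 %vol (preferably around 60 %vol).
    return (tensile_mpa > 4500
            and modulus_gpa > 200
            and 50.0 <= fibre_vol_pct <= 70.0)

# A high-strength carbon-fibre lay-up in the preferred ranges passes:
# reinforcing_layer_ok(5000, 230, 60) -> True
# A weaker fibre fails the strength requirement:
# reinforcing_layer_ok(4000, 230, 60) -> False
```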
[0038] alternatively or additionally, the sliding layer 14 may comprise a fluid, for example lubricant gel or grease, or a lubricant powder containing for example nano-particles suitable for reducing the friction coefficient between the rigid wall 4 and the flexible sealing wall 13. [0039] the sliding layer 14 may be fixed to the rigid wall 4 through: - blow-moulding in a mould consisting of the reinforcing layer 5 with one or more optional intermediate layers and/or - moulding of the sliding layer 14 (for example through a different mould from the reinforcing layer) and subsequent winding of the reinforcing layer 5 around the sliding layer 14 and/or - spraying and/or - dip coating that in the present case provides for the temporary filling of the inner space 7 with a coating liquid or with a coating powder that deposits the sliding layer 14 on the semi-worked product of the rigid wall 4. [0040] the flexible sealing wall 13 is configured like a deformable bag, for example made from rubber, and it has a (preferably single) opening connected to the mouth 9, for example through fusion, vulcanization, co-moulding, gluing or mechanically through crimping or screw clamping, so as to advantageously form a sealing wall 13 - mouth 9 group that is prefabricated and able to be (preferably removably) connected to the rigid wall 4. as already stated earlier, the sealing wall 13 - mouth 9 group may be removable and replaceable. [0041] according to an alternative embodiment, structurally very simple and cost-effective but less versatile, the mouth 9 with the flexible sealing wall 13 are directly incorporated in the mandrel during the winding of the reinforcing layer 5 of the rigid wall 4 and are irreversibly connected to such a rigid wall 4. 
[0042] according to a further aspect of the present invention, the tubular portion 15 of the rigid wall 4 comprises at least one first widened tubular section 21 and at least one second tubular section 22 that is adjacent to and narrower than the first widened tubular section 21. [0043] thanks to the widened tubular section 21 adjacent to the narrow tubular section 22 it is possible to receive clamps or locking brackets in the narrow tubular section 22 and exploit the areas adjacent to the locking brackets for the storage of the gas. [0044] thanks to the separation of the flexible sealing wall from the non-cylindrical rigid wall and to the possibility of sliding in pressing contact between them, the variations in diameters of the rigid wall cannot form areas of potential permeability of the sealing wall. in this way, it is possible to reconcile the contrasting requirements of seal and of walls of irregular shape. [0045] advantageously, the tubular portion 25 of the flexible sealing wall 13 does not present, in the state not inflated by the pressure of the gas, narrow tubular sections adjacent to widened tubular sections or, in other words, in the non-inflated state, the shape of the tubular portion 25 of the flexible sealing wall 13 does not follow the outline or the shape of the tubular portion 15 of the rigid wall 4. [0046] this makes it possible to manufacture, stock and use few ranges of flexible sealing walls for a large number of variants of gas cylinders with rigid walls of different shapes. [0047] in accordance with a further aspect of the present invention (fig. 3), the narrow sections 22 form a plurality of preferably two annular necks each of which is defined on both sides by respective widened tubular sections 21. [0048] this particular configuration of the cylinder allows a correct positioning of the locking clamps and prevents them from accidentally sliding in the longitudinal direction l of the gas cylinder 1. 
[0049] in order to best reconcile the requirements of stressing the tubular portion 15 as much as possible as a "tight membrane" and of it having a non-cylindrical shape such as to exploit to the maximum the available space for storing the gas, the narrow section 22 or, preferably, all of the narrow sections 22 respectively form a circular cylindrical central ring 31 having a constant diameter along the longitudinal axis l and two side transition rings 27 that connect the central ring 31 to the adjacent widened tubular sections 21 , forming a circumferential step. [0050] advantageously, the side rings 27, in a longitudinal section plane that comprises the longitudinal axis l, have a shape with double curvature (fig. 2) or frustum of cone 24 (fig. 6) and a longitudinal extension l1 shorter than the longitudinal extension l2 of the cylindrical central ring 31. in order to reduce the flexional stresses of the narrowed areas it is advantageous to provide for the longitudinal extension l2 of the central ring to be shorter than the sum of the longitudinal extensions l1 of the side rings, i.e. l2<2l1. [0051] the widened tubular sections 21 also preferably form one or more circular cylindrical rings with constant diameter along the longitudinal axis l of the cylinder 1. [0052] in accordance with an embodiment (fig. 2), the tubular portion 15 forms a single first widened section 21 and a single second narrow section 22 adjacent to the widened section 21 and separated from it by an internally bevelled circumferential step 23 or by an internally bevelled frusto-conical joint 24, formed from a single side ring 27 of the narrow section 22 that may be shaped as described earlier. [0053] in this way, the gas cylinder 1 has an overall shape tapered with a step that allows it to be fixed through one or more locking brackets in the tapered area without leaving unused spaces in the cylinder space of the application, for example a gas-powered vehicle. 
[0054] in accordance with a further aspect of the present invention (fig. 8), two narrow sections 22 form respectively an upper section and a lower section of the tubular portion 15, between which a widened tubular section 21 extends forming an intermediate section, preferably substantially central, of such a tubular portion 15. [0055] as visible from the figures, the thickness of the tubular portion 15 of the rigid wall 4 does not substantially vary, in other words the inner surface 6 of the rigid wall 4, in the area of the tubular portion 15, substantially follows the shape of its outer surface 30, so that the shaping of the tubular portion 15 to the external space conditions translates into a maximisation of the gas storage space 2 inside the cylinder 1. [0056] according to a further aspect of the present invention (figure 8), advantageously able to be made in combination with any one or all of the characteristics described up to here, the mouth 9 of the gas cylinder 1 comprises an internally threaded inner portion 29 projecting inside the gas storage space 2 defined by the flexible sealing wall 13. [0057] thanks to the configuration of the threaded mouth 9 that at least partially projects inside the gas storage space 2, it is possible to also exploit at least one part of the cylinder height, in any case necessary to screw in the stop valve, to store the gas. [0058] figures 4 and 5 illustrate embodiments of cylinder space - cylinder groups 32 for gas applications, for example vehicles.
the cylinder space - cylinder group 32 comprises one or more gas cylinders 1 according to what has been described previously, a support and containment structure 33 able to be fixed to or formed at a bearing structure of the application (for example of a vehicle) and configured to at least partially receive such gas cylinders, as well as one or more locking members 34, preferably clamps or locking brackets, anchored or able to be anchored to the support and containment structure 33 and suitable for at least partially wrapping around said gas cylinders 1 to lock them to the support and containment structure 33. such locking members 34 have at least one portion received in a circumferential seat 35 formed from the narrow section 22 in the outer surface 30 of the gas cylinder 1. moreover, at least one of the widened sections 21 of the tubular portion 15 of the rigid wall 4 extends in a gap 36 (otherwise unused), defined between two locking members 34 or between a locking member 34 and the support and containment structure 33. [0059] figure 7 illustrates a further cylinder space - cylinder group 32, in which the support and containment structure 33 comprises side walls inclined with respect to one another and/or interruptions due to functional elements 37 (for example reinforcing stays or tubes) and the cylinder(s) 1 adapt to the shape of the containment structure 33 thanks to the presence of the widened and narrow sections 21 , 22. [0060] of course, a man skilled in the art may bring further modifications and variants to the gas cylinder made from composite material and to the cylinder space-gas cylinder group according to the present invention, in order to satisfy contingent and specific requirements, all of which are covered by the scope of protection of the invention, as defined by the following claims.
189-334-057-103-030
US
[ "US", "WO" ]
G09G3/3291,G09G3/3233,G09G3/3258,G06F3/038,G09G3/325,G09G5/10,G09G3/32,G09G3/22
2013-03-15T00:00:00
2013
[ "G09", "G06" ]
amoled displays with multiple readout circuits
the oled voltage of a selected pixel is extracted from the pixel current produced when the pixel is programmed so that the pixel current is a function of the oled voltage. one method for extracting the oled voltage is to first program the pixel in a way that the current is not a function of oled voltage, and then in a way that the current is a function of oled voltage. during the latter stage, the programming voltage is changed so that the pixel current is the same as the pixel current when the pixel was programmed in a way that the current was not a function of oled voltage. the difference in the two programming voltages is then used to extract the oled voltage.
1 - 25 . (canceled) 26 . a system for determining a pixel parameter of a pixel in an array of pixels in a display, the system comprising: a controller adapted to: determine a final first input electrical signal provided to the pixel which results in a first output electrical signal from the pixel equal to a predetermined electrical signal; and extract the pixel parameter with use of the final first input electrical signal, wherein one of the first output electrical signal and the predetermined electrical signal is a function of the pixel parameter. 27 . the system of claim 26 , wherein the controller is adapted to determine the final first input signal by being adapted to: measure the first output electrical signal from the pixel in response to providing the first input electrical signal to the pixel, while varying the first input electrical signal until reaching the final first input electrical signal provided to the pixel when the first output electrical signal equals the predetermined electrical signal. 28 . the system of claim 26 wherein the predetermined electrical signal is a predetermined known reference current and the first output electrical signal is a first output current which is a function of the pixel parameter. 29 . the system of claim 26 wherein the predetermined electrical signal is a previously measured second output electrical signal, the second output electrical signal previously output from the pixel in response to a second input electrical signal provided to the pixel. 30 . the system of claim 29 wherein the controller is adapted to extract the pixel parameter with use of the second input electrical signal provided to the pixel. 31 . the system of claim 30 wherein the controller is adapted to extract the pixel parameter from a difference between the final first input electrical signal provided to the pixel and the second input electrical signal. 32 . 
the system of claim 29 wherein the controller is further adapted to, prior to the determining of the final first input electrical signal, setting the second input electrical signal provided to the pixel to generate the measured second output electrical signal, wherein only one of the first output electrical signal and the predetermined electrical signal is a function of the pixel parameter. 33 . the system of claim 26 wherein the first output electrical signal is a function of the pixel parameter, and wherein the controller is further adapted to: at an earlier time previous to the extracting of the pixel parameter, determine a final third input electrical signal provided to the pixel which results in a third output electrical signal equal to the predetermined electrical signal, wherein one of the predetermined electrical signal and the third output electrical signal is a function of the pixel parameter at the earlier time, and extract the pixel parameter at the earlier time with use of the final third input electrical signal; and extract the pixel parameter with use of the final third input electrical signal provided to the pixel and the final first input electrical signal provided to the pixel and the pixel parameter at the earlier time. 34 . the system of claim 33 wherein only one of the predetermined electrical signal and the third output electrical signal is a function of the pixel parameter at the earlier time. 35 . the system of claim 26 , wherein the pixel comprises a light-emitting device and the pixel parameter is the operational voltage v oled of the light-emitting device. 36 . 
a method of determining a pixel parameter of a pixel in an array of pixels in a display, the method comprising: determining a final first input electrical signal provided to the pixel which results in a first output electrical signal from the pixel equal to a predetermined electrical signal; and extracting the pixel parameter with use of the final first input electrical signal provided to the pixel, wherein one of the first output electrical signal and the predetermined electrical signal is a function of the pixel parameter. 37 . the method of claim 36 wherein determining the final first input electrical signal further comprises: while measuring the first output electrical signal from the pixel in response to providing the first input electrical signal to the pixel, varying the first input electrical signal until reaching the final first input electrical signal provided to the pixel when the first output electrical signal equals the predetermined electrical signal. 38 . the method of claim 36 wherein the predetermined electrical signal is a predetermined known reference current and the first output electrical signal is a first output current which is a function of the pixel parameter. 39 . the method of claim 36 wherein the predetermined electrical signal is a previously measured second output electrical signal, the second output electrical signal previously output from the pixel in response to a second input electrical signal provided to the pixel. 40 . the method of claim 39 wherein the extracting comprises extracting the pixel parameter with use of the second input electrical signal provided to the pixel. 41 . the method of claim 40 wherein the extracting comprises extracting the pixel parameter from a difference between the final first input electrical signal provided to the pixel and the second input electrical signal provided to the pixel. 42 . 
the method of claim 39 further comprising: prior to the determining the final first input electrical signal, setting the second input electrical signal provided to the pixel to generate the measured second output electrical signal, wherein only one of the first output electrical signal and the predetermined electrical signal is a function of the pixel parameter. 43 . the method of claim 36 wherein the first output electrical signal is a function of the pixel parameter, the method further comprising: at an earlier time previous to the extracting of the pixel parameter, determining a final third input electrical signal provided to the pixel which results in a third output electrical signal from the pixel equal to the predetermined electrical signal, wherein one of the predetermined electrical signal and the third output electrical signal is a function of the pixel parameter at the earlier time, and extracting the pixel parameter at the earlier time with use of the final third input electrical signal provided to the pixel; and extracting the pixel parameter with use of the final third input electrical signal provided to the pixel and the final first input electrical signal provided to the pixel and the pixel parameter at the earlier time. 44 . the method of claim 43 wherein only one of the predetermined electrical signal and the third output electrical signal is a function of the pixel parameter at the earlier time. 45 . the method of claim 36 , wherein the pixel comprises a light-emitting device and the pixel parameter is the operational voltage v oled of the light-emitting device.
cross reference to related applications this application claims the benefit of u.s. provisional application no. 61/787,397, filed mar. 15, 2013 which is hereby incorporated by reference herein in its entirety. field of the invention the present disclosure generally relates to circuits for use in displays, particularly displays such as active matrix organic light emitting diode displays having multiple readout circuits for monitoring the values of selected parameters of the individual pixels in the displays. background displays can be created from an array of light emitting devices each controlled by individual circuits (i.e., pixel circuits) having transistors for selectively controlling the circuits to be programmed with display information and to emit light according to the display information. thin film transistors (“tfts”) fabricated on a substrate can be incorporated into such displays. tfts tend to demonstrate non-uniform behavior across display panels and over time as the displays age. compensation techniques can be applied to such displays to achieve image uniformity across the displays and to account for degradation in the displays as the displays age. some schemes for providing compensation to displays to account for variations across the display panel and over time utilize monitoring systems to measure time dependent parameters associated with the aging (i.e., degradation) of the pixel circuits. the measured information can then be used to inform subsequent programming of the pixel circuits so as to ensure that any measured degradation is accounted for by adjustments made to the programming. such monitored pixel circuits may require the use of additional transistors and/or lines to selectively couple the pixel circuits to the monitoring systems and provide for reading out information. the incorporation of additional transistors and/or lines may undesirably decrease pixel-pitch (i.e., “pixel density”). 
summary in accordance with one embodiment, the oled voltage of a selected pixel is extracted from the pixel current produced when the pixel is programmed so that the pixel current is a function of the oled voltage. one method for extracting the oled voltage is to first program the pixel in a way that the current is not a function of oled voltage, and then in a way that the current is a function of oled voltage. during the latter stage, the programming voltage is changed so that the pixel current is the same as the pixel current when the pixel was programmed in a way that the current was not a function of oled voltage. the difference in the two programming voltages is then used to extract the oled voltage. another method for extracting the oled voltage is to measure the difference between the current of the pixel when it is programmed with a fixed voltage in both methods (being affected by oled voltage and not being affected by oled voltage). this measured difference and the current-voltage characteristics of the pixel are then used to extract the oled voltage. a further method for extracting the shift in the oled voltage is to program the pixel for a given current at time zero (before usage) in a way that the pixel current is a function of oled voltage, and save the programming voltage. to extract the oled voltage shift after some usage time, the pixel is programmed for the given current as was done at time zero. to get the same current as at time zero, the programming voltage needs to change. the difference in the two programming voltages is then used to extract the shift in the oled voltage. here one needs to remove the effect of tft aging from the second programming voltage first; this is done by programming the pixel without oled effect for a given current at time zero and after usage. the difference in the programming voltages in this case is the tft aging, which is subtracted from the calculated difference in the aforementioned case. 
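the first method above (adjust the programming voltage until the oled-dependent current matches the oled-independent sample, then read the oled voltage off the voltage difference) can be sketched numerically. this is a toy model, not the patented circuit: the quadratic tft law, all parameter values (vt, beta, vd1, vb, the true oled voltage) and the use of bisection for the adjust-and-repeat loop are illustrative assumptions, and the read-transistor drop vds3 is neglected.

```python
def pixel_current(vgs, vt=1.2, beta=0.5e-6):
    """Assumed saturation-mode drive current: ids = beta * (vgs - vt)^2."""
    dv = vgs - vt
    return beta * dv * dv if dv > 0 else 0.0

V_OLED_TRUE = 3.1  # hidden device value the procedure should recover

def sample_without_oled(vd1, vb):
    # Monitor line holds the source at vb with the OLED off,
    # so vgs = vd1 - vb is independent of v_oled (vds3 neglected).
    return pixel_current(vd1 - vb)

def sample_with_oled(vd2):
    # Source settles to the OLED operating voltage: vgs = vd2 - v_oled.
    return pixel_current(vd2 - V_OLED_TRUE)

def extract_v_oled(vd1=4.0, vb=0.0):
    """Adjust vd2 until both current samples match, then
    v_oled = vd2 - (vd1 - vb) since equal currents imply equal vgs."""
    i_ref = sample_without_oled(vd1, vb)
    lo, hi = 0.0, 10.0          # search window for vd2; bisection stands
    for _ in range(60):          # in for the text's adjust-and-repeat loop
        vd2 = 0.5 * (lo + hi)
        if sample_with_oled(vd2) < i_ref:
            lo = vd2             # current too small: raise vd2
        else:
            hi = vd2
    return vd2 - (vd1 - vb)
```

because equal currents force equal gate-source voltages, the recovered value falls directly out of the programming-voltage difference, without knowing vt or beta.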
in one implementation, the current effective voltage v oled of a light-emitting device in a selected pixel is determined by supplying a programming voltage to the drive transistor in the selected pixel to supply a first current to the light-emitting device (the first current being independent of the effective voltage v oled of the light-emitting device); measuring the first current; supplying a second programming voltage to the drive transistor in the selected pixel to supply a second current to the light-emitting device, the second current being a function of the current effective voltage v oled of the light-emitting device; measuring the second current and comparing the first and second current measurements; adjusting the second programming voltage to make the second current substantially the same as the first current; and extracting the value of the current effective voltage v oled of the light-emitting device from the difference between the first and second programming voltages. in another implementation, the current effective voltage v oled of a light-emitting device in a selected pixel is determined by supplying a first programming voltage to the drive transistor in the selected pixel to supply a first current to the light-emitting device in the selected pixel (the first current being independent of the effective voltage v oled of the light-emitting device), measuring the first current, supplying a second programming voltage to the drive transistor in the selected pixel to supply a second current to the light-emitting device in the selected pixel (the second current being a function of the current effective voltage v oled of the light-emitting device), measuring the second current, and extracting the value of the current effective voltage v oled of the light-emitting device from the difference between the first and second current measurements. 
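the current-measurement variant just described (same programming voltage in both modes, then use the pixel's current-voltage characteristics) can be sketched under the same assumed quadratic model; beta, vt and the voltages below are illustrative values, not taken from the text.

```python
from math import sqrt

BETA = 0.5e-6   # A/V^2, assumed gain factor of the quadratic model
VT = 1.3        # V, assumed threshold voltage

def ids(vgs):
    """Assumed saturation law: ids = BETA * (vgs - VT)^2."""
    dv = max(vgs - VT, 0.0)
    return BETA * dv * dv

def v_oled_from_currents(i1, i2, vb=0.0):
    # sqrt(i1/BETA) - sqrt(i2/BETA)
    #   = (vp - vb - VT) - (vp - v_oled - VT) = v_oled - vb,
    # so both vp and VT cancel out of the estimate.
    return vb + sqrt(i1 / BETA) - sqrt(i2 / BETA)

# simulate the two fixed-voltage measurements
vp, v_true = 8.0, 3.2
i1 = ids(vp - 0.0)      # current independent of v_oled (source held at vb = 0)
i2 = ids(vp - v_true)   # current depending on v_oled (source floats to v_oled)
estimate = v_oled_from_currents(i1, i2)
```

a convenient property of the quadratic model is that the drive transistor's threshold voltage drops out, so only the gain factor has to be known to convert the current difference back to a voltage.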
in a modified implementation, the current effective voltage v oled of a light-emitting device in a selected pixel is determined by supplying a first programming voltage to the drive transistor in the selected pixel to supply a predetermined current to the light-emitting device at a first time (the first current being a function of the effective voltage v oled of the light-emitting device), supplying a second programming voltage to the drive transistor in the selected pixel to supply the predetermined current to the light-emitting device at a second time following substantial usage of the display, and extracting the value of the current effective voltage v oled of the light-emitting device from the difference between the first and second programming voltages. in another modified implementation, the current effective voltage v oled of a light-emitting device in a selected pixel is determined by supplying a predetermined programming voltage to the drive transistor in the selected pixel to supply a first current to the light-emitting device (the first current being independent of the effective voltage v oled of the light-emitting device), measuring the first current, supplying the predetermined programming voltage to the drive transistor in the selected pixel to supply a second current to the light-emitting device (the second current being a function of the current effective voltage v oled of the light-emitting device), measuring the second current, and extracting the value of the current effective voltage v oled of the light-emitting device from the difference between the first and second currents and current-voltage characteristics of the selected pixel. in a preferred implementation, a system is provided for controlling an array of pixels in a display in which each pixel includes a light-emitting device. 
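the usage-shift method (compare the programming voltage needed for a given current at time zero with the voltage needed after usage), including the tft-aging subtraction described in the summary, can be sketched as follows. again this is a hedged toy model: the quadratic law, the assumption that only vt and v_oled drift, and every numeric value are illustrative.

```python
from math import sqrt

BETA = 0.5e-6  # A/V^2, assumed gain factor (taken as unchanged by aging)

def prog_voltage(i_target, vt, source_v):
    """Data voltage yielding i_target: vd = vt + sqrt(i/BETA) + source_v."""
    return vt + sqrt(i_target / BETA) + source_v

def delta_v_oled(i_target, vt0, voled0, vt_t, voled_t, vb=0.0):
    # OLED-independent programming (source held at vb) isolates TFT aging:
    dvt = prog_voltage(i_target, vt_t, vb) - prog_voltage(i_target, vt0, vb)
    # OLED-dependent programming mixes the TFT shift and the OLED shift:
    raw = (prog_voltage(i_target, vt_t, voled_t)
           - prog_voltage(i_target, vt0, voled0))
    return raw - dvt   # subtract the TFT contribution
```

in this model the controller never sees vt or v_oled directly; it only records the four programming voltages, and the shifts fall out of their differences.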
each pixel includes a pixel circuit that comprises the light-emitting device, which emits light when supplied with a voltage v oled ; a drive transistor for driving current through the light-emitting device according to a driving voltage across the drive transistor during an emission cycle, the drive transistor having a gate, a source and a drain and characterized by a threshold voltage; and a storage capacitor coupled across the source and gate of the drive transistor for providing the driving voltage to the drive transistor. a supply voltage source is coupled to the drive transistor for supplying current to the light-emitting device via the drive transistor, the current being controlled by the driving voltage. a monitor line is coupled to a read transistor that controls the coupling of the monitor line to a first node that is common to the source side of the storage capacitor, the source of the drive transistor, and the light-emitting device. a data line is coupled to a switching transistor that controls the coupling of the data line to a second node that is common to the gate side of the storage capacitor and the gate of the drive transistor. 
a controller coupled to the data and monitor lines and to the switching and read transistors is adapted to: (1) during a first cycle, turn on the switching and read transistors while delivering a voltage vb to the monitor line and a voltage vd 1 to the data line, to supply the first node with a voltage that is independent of the voltage across the light-emitting device, (2) during a second cycle, turn on the read transistor and turn off the switching transistor while delivering a voltage vref to the monitor line, and read a first sample of the drive current at the first node via the read transistor and the monitor line, (3) during a third cycle, turn off the read transistor and turn on the switching transistor while delivering a voltage vd 2 to the data line, so that the voltage at the second node is a function of v oled , and (4) during a fourth cycle, turn on said read transistor and turn off said switching transistor while delivering a voltage vref to said monitor line, and read a second sample of the drive current at said first node via said read transistor and said monitor line. the first and second samples of the drive current are compared and, if they are different, the first through fourth cycles are repeated using an adjusted value of at least one of the voltages vd 1 and vd 2 , until the first and second samples are substantially the same. the foregoing and additional aspects and embodiments of the present invention will be apparent to those of ordinary skill in the art in view of the detailed description of various embodiments and/or aspects, which is made with reference to the drawings, a brief description of which is provided next. brief description of the drawings the foregoing and other advantages of the invention will become apparent upon reading the following detailed description and upon reference to the drawings. fig. 
1 is a block diagram of an exemplary configuration of a system for driving an oled display while monitoring the degradation of the individual pixels and providing compensation therefor. fig. 2a is a circuit diagram of an exemplary pixel circuit configuration. fig. 2b is a timing diagram of first exemplary operation cycles for the pixel shown in fig. 2a . fig. 2c is a timing diagram of second exemplary operation cycles for the pixel shown in fig. 2a . fig. 3 is a circuit diagram of another exemplary pixel circuit configuration. fig. 4 is a block diagram of a modified configuration of a system for driving an oled display using a shared readout circuit, while monitoring the degradation of the individual pixels and providing compensation therefor. fig. 5 is an example of measurements taken by two different readout circuits from adjacent groups of pixels in the same row. while the invention is susceptible to various modifications and alternative forms, specific embodiments have been shown by way of example in the drawings and will be described in detail herein. it should be understood, however, that the invention is not intended to be limited to the particular forms disclosed. rather, the invention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the invention as defined by the appended claims. detailed description fig. 1 is a diagram of an exemplary display system 50 . the display system 50 includes an address driver 8 , a data driver 4 , a controller 2 , a memory 6 , a supply voltage 14 , and a display panel 20 . the display panel 20 includes an array of pixels 10 arranged in rows and columns. each of the pixels 10 is individually programmable to emit light with individually programmable luminance values. the controller 2 receives digital data indicative of information to be displayed on the display panel 20 . 
the controller 2 sends signals 32 to the data driver 4 and scheduling signals 34 to the address driver 8 to drive the pixels 10 in the display panel 20 to display the information indicated. the plurality of pixels 10 associated with the display panel 20 thus comprise a display array (“display screen”) adapted to dynamically display information according to the input digital data received by the controller 2 . the display screen can display, for example, video information from a stream of video data received by the controller 2 . the supply voltage 14 can provide a constant power voltage or can be an adjustable voltage supply that is controlled by signals from the controller 2 . the display system 50 can also incorporate features from a current source or sink (not shown) to provide biasing currents to the pixels 10 in the display panel 20 to thereby decrease programming time for the pixels 10 . for illustrative purposes, the display system 50 in fig. 1 is illustrated with only four pixels 10 in the display panel 20 . it is understood that the display system 50 can be implemented with a display screen that includes an array of similar pixels, such as the pixels 10 , and that the display screen is not limited to a particular number of rows and columns of pixels. for example, the display system 50 can be implemented with a display screen with a number of rows and columns of pixels commonly available in displays for mobile devices, monitor-based devices, and/or projection-devices. each pixel 10 includes a driving circuit (“pixel circuit”) that generally includes a driving transistor and a light emitting device. hereinafter the pixel 10 may refer to the pixel circuit. the light emitting device can optionally be an organic light emitting diode (oled), but implementations of the present disclosure apply to pixel circuits having other electroluminescence devices, including current-driven light emitting devices. 
the driving transistor in the pixel 10 can optionally be an n-type or p-type amorphous silicon thin-film transistor, but implementations of the present disclosure are not limited to pixel circuits having a particular polarity of transistor or only to pixel circuits having thin-film transistors. the pixel circuit can also include a storage capacitor for storing programming information and allowing the pixel circuit to drive the light emitting device after being addressed. thus, the display panel 20 can be an active matrix display array. as illustrated in fig. 1 , the pixel 10 illustrated as the top-left pixel in the display panel 20 is coupled to a select line 24 i , a supply line 26 i , a data line 22 j , and a monitor line 28 j . a read line may also be included for controlling connections to the monitor line. in one implementation, the supply voltage 14 can also provide a second supply line to the pixel 10 . for example, each pixel can be coupled to a first supply line 26 charged with vdd and a second supply line 27 coupled with vss, and the pixel circuits 10 can be situated between the first and second supply lines to facilitate driving current between the two supply lines during an emission phase of the pixel circuit. the top-left pixel 10 in the display panel 20 can correspond to a pixel in the display panel in an “ith” row and “jth” column of the display panel 20 . similarly, the top-right pixel 10 in the display panel 20 represents an “ith” row and “mth” column; the bottom-left pixel 10 represents an “nth” row and “jth” column; and the bottom-right pixel 10 represents an “nth” row and “mth” column. each of the pixels 10 is coupled to appropriate select lines (e.g., the select lines 24 i and 24 n ), supply lines (e.g., the supply lines 26 i and 26 n ), data lines (e.g., the data lines 22 j and 22 m ), and monitor lines (e.g., the monitor lines 28 j and 28 m ). 
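the row/column line sharing described above (select and supply lines per row, data and monitor lines per column) can be captured by a small helper; the array names below are hypothetical, chosen only to echo the reference numerals, and do not come from the patent.

```python
def pixel_lines(row, col):
    """Lines serving the pixel at (row, col): row-shared select/supply,
    column-shared data/monitor, per the addressing scheme in the text."""
    return {
        "select":  f"sel[{row}]",    # cf. select lines 24i..24n
        "supply":  f"vdd[{row}]",    # cf. supply lines 26i..26n
        "data":    f"vdata[{col}]",  # cf. data lines 22j..22m
        "monitor": f"mon[{col}]",    # cf. monitor lines 28j..28m
    }
```

two pixels in the same column share data and monitor wiring, which is why a shared readout circuit per column (or per group of columns) suffices.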
it is noted that aspects of the present disclosure apply to pixels having additional connections, such as connections to additional select lines, and to pixels having fewer connections, such as pixels lacking a connection to a monitoring line. with reference to the top-left pixel 10 shown in the display panel 20 , the select line 24 i is provided by the address driver 8 , and can be utilized to enable, for example, a programming operation of the pixel 10 by activating a switch or transistor to allow the data line 22 j to program the pixel 10 . the data line 22 j conveys programming information from the data driver 4 to the pixel 10 . for example, the data line 22 j can be utilized to apply a programming voltage or a programming current to the pixel 10 in order to program the pixel 10 to emit a desired amount of luminance. the programming voltage (or programming current) supplied by the data driver 4 via the data line 22 j is a voltage (or current) appropriate to cause the pixel 10 to emit light with a desired amount of luminance according to the digital data received by the controller 2 . the programming voltage (or programming current) can be applied to the pixel 10 during a programming operation of the pixel 10 so as to charge a storage device within the pixel 10 , such as a storage capacitor, thereby enabling the pixel 10 to emit light with the desired amount of luminance during an emission operation following the programming operation. for example, the storage device in the pixel 10 can be charged during a programming operation to apply a voltage to one or more of a gate or a source terminal of the driving transistor during the emission operation, thereby causing the driving transistor to convey the driving current through the light emitting device according to the voltage stored on the storage device. 
generally, in the pixel 10 , the driving current that is conveyed through the light emitting device by the driving transistor during the emission operation of the pixel 10 is a current that is supplied by the first supply line 26 i and is drained to a second supply line 27 i . the first supply line 26 i and the second supply line 27 i are coupled to the supply voltage 14 . the first supply line 26 i can provide a positive supply voltage (e.g., the voltage commonly referred to in circuit design as “vdd”) and the second supply line 27 i can provide a negative supply voltage (e.g., the voltage commonly referred to in circuit design as “vss”). implementations of the present disclosure can be realized where one or the other of the supply lines (e.g., the supply line 27 i ) is fixed at a ground voltage or at another reference voltage. the display system 50 also includes a monitoring system 12 . with reference again to the top left pixel 10 in the display panel 20 , the monitor line 28 j connects the pixel 10 to the monitoring system 12 . the monitoring system 12 can be integrated with the data driver 4 , or can be a separate stand-alone system. in particular, the monitoring system 12 can optionally be implemented by monitoring the current and/or voltage of the data line 22 j during a monitoring operation of the pixel 10 , and the monitor line 28 j can be entirely omitted. additionally, the display system 50 can be implemented without the monitoring system 12 or the monitor line 28 j . the monitor line 28 j allows the monitoring system 12 to measure a current or voltage associated with the pixel 10 and thereby extract information indicative of a degradation of the pixel 10 . 
for example, the monitoring system 12 can extract, via the monitor line 28 j , a current flowing through the driving transistor within the pixel 10 and thereby determine, based on the measured current and based on the voltages applied to the driving transistor during the measurement, a threshold voltage of the driving transistor or a shift thereof. the monitoring system 12 can also extract an operating voltage of the light emitting device (e.g., a voltage drop across the light emitting device while the light emitting device is operating to emit light). the monitoring system 12 can then communicate signals 32 to the controller 2 and/or the memory 6 to allow the display system 50 to store the extracted degradation information in the memory 6 . during subsequent programming and/or emission operations of the pixel 10 , the degradation information is retrieved from the memory 6 by the controller 2 via memory signals 36 , and the controller 2 then compensates for the extracted degradation information in subsequent programming and/or emission operations of the pixel 10 . for example, once the degradation information is extracted, the programming information conveyed to the pixel 10 via the data line 22 j can be appropriately adjusted during a subsequent programming operation of the pixel 10 such that the pixel 10 emits light with a desired amount of luminance that is independent of the degradation of the pixel 10 . in an example, an increase in the threshold voltage of the driving transistor within the pixel 10 can be compensated for by appropriately increasing the programming voltage applied to the pixel 10 . fig. 2a is a circuit diagram of an exemplary driving circuit for a pixel 110 . the driving circuit shown in fig. 2a is utilized to calibrate, program and drive the pixel 110 and includes a drive transistor 112 for conveying a driving current through an organic light emitting diode (oled) 114 . 
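the threshold-voltage extraction and programming compensation described above can be sketched by inverting the quadratic saturation law ids = β(vgs − vt)²; this is a minimal sketch under that assumed model, with beta as an illustrative constant.

```python
from math import sqrt

BETA = 0.5e-6  # A/V^2, assumed device constant

def extract_vt(i_measured, vgs_applied):
    """Invert ids = BETA * (vgs - vt)^2 for the threshold voltage,
    given a measured current and the known applied gate-source voltage."""
    return vgs_applied - sqrt(i_measured / BETA)

def compensated_vgs(i_target, vt_now):
    """Programming voltage that restores i_target after vt has drifted:
    an increase in vt is matched by an equal increase in vgs."""
    return vt_now + sqrt(i_target / BETA)
```

the round trip is consistent: measuring a pixel, extracting its current vt, and reprogramming with `compensated_vgs` reproduces the target current regardless of how far vt has shifted.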
the oled 114 emits light according to the current passing through the oled 114 , and can be replaced by any current-driven light emitting device. the oled 114 has an inherent capacitance c oled . the pixel 110 can be utilized in the display panel 20 of the display system 50 described in connection with fig. 1 . the driving circuit for the pixel 110 also includes a storage capacitor 116 and a switching transistor 118 . the pixel 110 is coupled to a select line sel, a voltage supply line vdd, a data line vdata, and a monitor line mon. the driving transistor 112 draws a current from the voltage supply line vdd according to a gate-source voltage (vgs) across the gate and source terminals of the drive transistor 112 . for example, in a saturation mode of the drive transistor 112 , the current passing through the drive transistor 112 can be given by ids=β(vgs−vt) 2 , where β is a parameter that depends on device characteristics of the drive transistor 112 , ids is the current from the drain terminal to the source terminal of the drive transistor 112 , and vt is the threshold voltage of the drive transistor 112 . in the pixel 110 , the storage capacitor 116 is coupled across the gate and source terminals of the drive transistor 112 . the storage capacitor 116 has a first terminal, which is referred to for convenience as a gate-side terminal, and a second terminal, which is referred to for convenience as a source-side terminal. the gate-side terminal of the storage capacitor 116 is electrically coupled to the gate terminal of the drive transistor 112 . the source-side terminal 116 s of the storage capacitor 116 is electrically coupled to the source terminal of the drive transistor 112 . thus, the gate-source voltage vgs of the drive transistor 112 is also the voltage charged on the storage capacitor 116 . 
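the saturation-mode law just quoted can be written as a one-line helper; the parameter values are illustrative, not device data from the patent.

```python
def drive_current(vgs, vt=1.2, beta=0.5e-6):
    """ids = beta * (vgs - vt)^2 in saturation; no conduction below vt."""
    dv = vgs - vt
    return beta * dv * dv if dv > 0 else 0.0
```

the quadratic dependence is what makes the extraction procedures later in the text workable: current differences map back to voltage differences via a square root.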
as will be explained further below, the storage capacitor 116 can thereby maintain a driving voltage across the drive transistor 112 during an emission phase of the pixel 110 . the drain terminal of the drive transistor 112 is connected to the voltage supply line vdd, and the source terminal of the drive transistor 112 is connected to (1) the anode terminal of the oled 114 and (2) a monitor line mon via a read transistor 119 . a cathode terminal of the oled 114 can be connected to ground or can optionally be connected to a second voltage supply line, such as the supply line vss shown in fig. 1 . thus, the oled 114 is connected in series with the current path of the drive transistor 112 . the oled 114 emits light according to the magnitude of the current passing through the oled 114 , once a voltage drop across the anode and cathode terminals of the oled achieves an operating voltage (v oled ) of the oled 114 . that is, when the difference between the voltage on the anode terminal and the voltage on the cathode terminal is greater than the operating voltage v oled , the oled 114 turns on and emits light. when the anode-to-cathode voltage is less than v oled , current does not pass through the oled 114 . the switching transistor 118 is operated according to the select line sel (e.g., when the voltage on the select line sel is at a high level, the switching transistor 118 is turned on, and when the voltage sel is at a low level, the switching transistor is turned off). when turned on, the switching transistor 118 electrically couples node a (the gate terminal of the driving transistor 112 and the gate-side terminal of the storage capacitor 116 ) to the data line vdata. the read transistor 119 is operated according to the read line rd (e.g., when the voltage on the read line rd is at a high level, the read transistor 119 is turned on, and when the voltage rd is at a low level, the read transistor 119 is turned off). 
when turned on, the read transistor 119 electrically couples node b (the source terminal of the drive transistor 112 , the source-side terminal of the storage capacitor 116 , and the anode of the oled 114 ) to the monitor line mon. fig. 2b is a timing diagram of exemplary operation cycles for the pixel 110 shown in fig. 2a . during a first cycle 150 , both the sel line and the rd line are high, so the corresponding transistors 118 and 119 are turned on. the switching transistor 118 applies a voltage vd1, which is at a level sufficient to turn on the drive transistor 112 , from the data line vdata to node a. the read transistor 119 applies a monitor-line voltage vb, which is at a level that turns the oled 114 off, from the monitor line mon to node b. as a result, the gate-source voltage vgs is independent of v oled (vgs = vd1 − vb − vds3, where vds3 is the voltage drop across the read transistor 119 ). the sel and rd lines go low at the end of the cycle 150 , turning off the transistors 118 and 119 . during the second cycle 154 , the sel line is low to turn off the switching transistor 118 , and the drive transistor 112 is turned on by the charge on the capacitor 116 at node a. the voltage on the read line rd goes high to turn on the read transistor 119 and thereby permit a first sample of the drive transistor current to be taken via the monitor line mon, while the oled 114 is off. the voltage on the monitor line mon is vref, which may be at the same level as the voltage vb in the previous cycle. during the third cycle 158 , the voltage on the select line sel is high to turn on the switching transistor 118 , and the voltage on the read line rd is low to turn off the read transistor 119 . thus, the gate of the drive transistor 112 is charged to the voltage vd2 of the data line vdata, and the source of the drive transistor 112 is set to v oled by the oled 114 .
consequently, the gate-source voltage vgs of the drive transistor 112 is a function of v oled (vgs = vd2 − v oled). during the fourth cycle 162 , the voltage on the select line sel is low to turn off the switching transistor, and the drive transistor 112 is turned on by the charge on the capacitor 116 at node a. the voltage on the read line rd is high to turn on the read transistor 119 , and a second sample of the current of the drive transistor 112 is taken via the monitor line mon. if the first and second samples of the drive current are not the same, the programming voltage vd2 on the vdata line is adjusted, and the sampling and adjustment operations are repeated until the second sample of the drive current is the same as the first sample. when the two samples of the drive current are the same, the two gate-source voltages should also be the same, which means that after some operation time t, the change in v oled between time 0 and time t is δv oled = v oled(t) − v oled(0) = vd2(t) − vd2(0). thus, the difference between the two programming voltages vd2(t) and vd2(0) can be used to extract the oled voltage. fig. 2c is a modified schematic timing diagram of another set of exemplary operation cycles for the pixel 110 shown in fig. 2a , for taking only a single reading of the drive current and comparing that value with a known reference value. for example, the reference value can be the desired value of the drive current derived by the controller to compensate for degradation of the drive transistor 112 as it ages. the oled voltage v oled can be extracted by measuring the difference between the pixel currents when the pixel is programmed with fixed voltages in both methods (one affected by v oled and one not affected by v oled). this difference and the current-voltage characteristics of the pixel can then be used to extract v oled. during the first cycle 200 of the exemplary timing diagram in fig. 2c , the select line sel is high to turn on the switching transistor 118 , and the read line rd is low to turn off the read transistor 119 . the data line vdata supplies a voltage vd2 to node a via the switching transistor 118 . during the second cycle 201 , sel is low to turn off the switching transistor 118 , and rd is high to turn on the read transistor 119 . the monitor line mon supplies a voltage vref to node b via the read transistor 119 , while a reading of the value of the drive current is taken via the read transistor 119 and the monitor line mon. this read value is compared with the known reference value of the drive current and, if the read value and the reference value of the drive current are different, the cycles 200 and 201 are repeated using an adjusted value of the voltage vd2. this process is repeated until the read value and the reference value of the drive current are substantially the same, and then the adjusted value of vd2 can be used to determine v oled. fig. 3 is a circuit diagram of two of the pixels 110a and 110b like those shown in fig. 2a but modified to share a common monitor line mon, while still permitting independent measurement of the driving current and oled voltage separately for each pixel. the two pixels 110a and 110b are in the same row but in different columns, and the two columns share the same monitor line mon. only the pixel selected for measurement is programmed with valid voltages, while the other pixel is programmed to turn off the drive transistor 12 during the measurement cycle. thus, the drive transistor of one pixel will have no effect on the current measurement in the other pixel. fig. 4 illustrates a drive system that utilizes a readout circuit (roc) 300 that is shared by multiple columns of pixels while still permitting the measurement of the driving current and oled voltage independently for each of the individual pixels 10 . although only four columns are illustrated in fig. 4 , it will be understood that a typical display contains a much larger number of columns. multiple readout circuits can be utilized, with each readout circuit serving multiple columns, so that the number of readout circuits is significantly less than the number of columns. only the pixel selected for measurement at any given time is programmed with valid voltages, while all the other pixels sharing the same gate signals are programmed with voltages that cause the respective drive transistors to be off. consequently, the drive transistors of the other pixels will have no effect on the current measurement being taken of the selected pixel. also, when the driving current in the selected pixel is used to measure the oled voltage, the measurement of the oled voltage is also independent of the drive transistors of the other pixels. when multiple readout circuits are used, multiple levels of calibration can be used to make the readout circuits identical. however, there are often remaining non-uniformities among the readout circuits that measure multiple columns, and these non-uniformities can cause steps in the measured data across any given row. one example of such a step is illustrated in fig. 5 , where the measurements 1a-1j for columns 1-10 are taken by a first readout circuit, and the measurements 2a-2j for columns 11-20 are taken by a second readout circuit. it can be seen that there is a significant step between the measurements 1j and 2a for the adjacent columns 10 and 11, which are taken by different readout circuits.
to adjust this non-uniformity between the last of a first group of measurements made in a selected row by the first readout circuit, and the first of an adjacent second group of measurements made in the same row by the second readout circuit, an edge adjustment can be made by processing the measurements in a controller coupled to the readout circuits and programmed to: (1) determine a curve fit for the values of the parameter(s) measured by the first readout circuit (e.g., values 1a-1j in fig. 5 ), (2) determine a first value 2a′ of the parameter(s) of the first pixel in the second group from the curve fit for the values measured by the first readout circuit, (3) determine a second value 2a of the parameter(s) measured for the first pixel in the second group from the values measured by the second readout circuit, (4) determine the difference (2a′ − 2a), or "delta value," between the first and second values for the first pixel in the second group, and (5) adjust the values of the remaining parameter(s) 2b-2j measured for the second group of pixels by the second readout circuit, based on the difference between the first and second values for the first pixel in the second group. this process is repeated for each pair of adjacent pixel groups measured by different readout circuits in the same row. the above adjustment technique can be executed on each row independently, or an average row may be created based on a selected number of rows. then the delta values are calculated based on the average row, and all the rows are adjusted based on the delta values for the average row. another technique is to design the panel in a way that the boundary columns between two readout circuits can be measured with both readout circuits. then the pixel values in each readout circuit can be adjusted based on the difference between the values measured for the boundary columns by the two readout circuits.
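the five edge-adjustment steps above can be sketched as follows; a least-squares straight line stands in for the unspecified curve fit, and the sample row values are illustrative assumptions:

```python
def edge_adjust(group1, group2):
    """Remove the step between two readout circuits measuring one row.

    group1: values measured by the first readout circuit.
    group2: values measured by the second readout circuit; group2[0] is
    the first pixel of the second group.

    (1) fit a curve (here: a least-squares line) to the group1 values,
    (2) extrapolate it to predict the first pixel of the second group,
    (3) take that pixel's value as measured by the second readout circuit,
    (4) delta value = predicted - measured,
    (5) shift every group2 value by the delta value.
    """
    n = len(group1)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(group1) / n
    slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, group1)) \
        / sum((x - x_mean) ** 2 for x in xs)
    intercept = y_mean - slope * x_mean
    predicted = slope * n + intercept    # extrapolate to the first column of group2
    delta = predicted - group2[0]        # step between the two readout circuits
    return [v + delta for v in group2]

# illustrative row: group1 follows a gentle trend, group2 carries a +0.5 offset
group1 = [1.0, 1.1, 1.2, 1.3, 1.4]
group2 = [2.0, 2.1, 2.2]                 # true continuation would be 1.5, 1.6, 1.7
print(edge_adjust(group1, group2))       # ≈ [1.5, 1.6, 1.7] up to float rounding
```

the same routine would be applied to each adjacent pair of pixel groups in a row, or to an average row as described above.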
if the variations are not too great, a general curve fitting (or low pass filter) can be used to smooth the rows, and then the pixels can be adjusted based on the difference between the real rows and the created curve. this process can be executed for all rows based on an average row, or for each row independently as described above. the readout circuits can be corrected externally by using a single reference source (or calibrated sources) to adjust each roc before the measurement. the reference source can be an outside current source or one or more pixels calibrated externally. another option is to measure a few sample pixels coupled to each readout circuit with a single measurement readout circuit, and then adjust all the readout circuits based on the difference between the original measurements and the values measured by the single measurement readout circuit. while particular embodiments and applications of the present invention have been illustrated and described, it is to be understood that the invention is not limited to the precise construction and compositions disclosed herein and that various modifications, changes, and variations will be apparent from the foregoing descriptions without departing from the spirit and scope of the invention as defined in the appended claims.
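the iterative programming-and-sampling loop of fig. 2b (adjust vd2 until the second current sample matches the first, then read the oled voltage off the programming voltages) can be sketched end to end. the square-law pixel model, the neglect of the read-transistor drop vds3, and all voltage values below are illustrative assumptions, not part of the disclosure:

```python
def drive_current(vgs, vt=1.0, beta=2e-6):
    """Illustrative square-law model: ids = beta*(vgs - vt)**2 in
    saturation, zero below threshold."""
    return beta * max(vgs - vt, 0.0) ** 2

def extract_v_oled(vd1, vref, v_oled_true, step=1e-3):
    """First sample: OLED held off, vgs = vd1 - vref (independent of the
    OLED voltage).  Then raise the second programming voltage vd2 until
    the second sample (vgs = vd2 - v_oled) matches the first.  At the
    match, vd2 - v_oled = vd1 - vref, so v_oled = vd2 - vd1 + vref."""
    target = drive_current(vd1 - vref)   # first current sample
    vd2 = vd1
    while drive_current(vd2 - v_oled_true) < target:
        vd2 += step                      # adjust vd2 and re-sample
    return vd2 - vd1 + vref              # extracted OLED voltage

# v_oled_true stands in for the physical OLED; the driver sees only currents
print(extract_v_oled(vd1=4.0, vref=0.0, v_oled_true=3.2))  # ≈ 3.2 (within one step)
```

repeating the extraction at time t and comparing vd2(t) with vd2(0) gives the aging shift δv oled = vd2(t) − vd2(0), as in the derivation above.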
190-700-095-547-284
US
[ "CN", "JP", "WO", "KR", "MX", "US", "AU" ]
H01L21/56,H01L23/28
1999-03-03T00:00:00
1999
[ "H01" ]
process for underfilling a controlled collapse chip connection (c4) integrated circuit package with an underfill material that is heated to a partial gel state
a partial gel step in the underfilling of an integrated circuit that is mounted to a substrate. the process involves dispensing a first underfill material and then heating the underfill material to a partial gel state. the partial gel step may reduce void formation and improve adhesion performance during moisture loading.
claims what is claimed is: 1. a process for underfilling an integrated circuit that is mounted to a substrate, comprising: dispensing a first underfill material which becomes attached to the integrated circuit and the substrate; and, heating the first underfill material to a partial gel state. 2. the process of claim 1, further comprising the step of dispensing a second underfill material which becomes attached to the integrated circuit and the substrate. 3. the process as recited in claim 2, wherein the second underfill material is dispensed in a pattern which surrounds the first underfill material . 4. the process as recited in claim 1, wherein the first underfill material flows between the integrated circuit and the substrate. 5. the process as recited in claim 1, further comprising the step of heating the substrate before dispensing the first underfill material . 6. the process as recited in claim 5, wherein the substrate is heated to a temperature greater than a temperature of the first underfill material at the partial gel state. 7. the process as recited in claim 1, further comprising the step of mounting the integrated circuit to the substrate with a solder bump. 8. the process as recited in claim 7, further comprising the step of attaching a solder ball to the substrate. 9. a process for mounting and underfilling an integrated circuit to a substrate, comprising: baking a substrate; mounting an integrated circuit to a substrate; and, dispensing a first underfill material which becomes attached to the integrated circuit and the substrate. 10. the process as recited in claim 9, further comprising the step of dispensing a second underfill material which becomes attached to the integrated circuit and the substrate. 11. the process as recited in claim 9, further comprising the step of mounting the integrated circuit to the substrate with a solder bump. 12. 
the process as recited in claim 11, further comprising the step of attaching a solder ball to the substrate. 13. a process for mounting and underfilling an integrated circuit to a substrate, comprising: baking a substrate; mounting an integrated circuit to the substrate; dispensing a first underfill material which becomes attached to the integrated circuit and the substrate; heating the first underfill material to a partial gel state; dispensing a second underfill material which becomes attached to the integrated circuit and the substrate. 14. the process as recited in claim 13 , further comprising the step of mounting the integrated circuit to the substrate with a solder bump. 15. the process as recited in claim 14, further comprising the step of attaching a solder ball to the substrate.
a process for underfilling a controlled collapse chip connection (c4) integrated circuit package with an underfill material that is heated to a partial gel state background of the invention 1. field of the invention the present invention relates to an integrated circuit package. 2. background information integrated circuits are typically assembled into a package that is soldered to a printed circuit board. figure 1 shows a type of integrated circuit package that is commonly referred to as flip chip or c4 package. the integrated circuit 1 contains a number of solder bumps 2 that are soldered to a top surface of a substrate 3. the substrate 3 is typically constructed from a composite material which has a coefficient of thermal expansion that is different than the coefficient of thermal expansion for the integrated circuit. any variation in the temperature of the package may cause a resultant differential expansion between the integrated circuit 1 and the substrate 3. the differential expansion may induce stresses that can crack the solder bumps 2. the solder bumps 2 carry electrical current between the integrated circuit 1 and the substrate 3 so that any crack in the bumps 2 may affect the operation of the circuit 1. the package may include an underfill material 4 that is located between the integrated circuit 1 and the substrate 3. the underfill material 4 is typically an epoxy which strengthens the solder joint reliability and the thermo-mechanical moisture stability of the ic package. the package may have hundreds of solder bumps 2 arranged in a two dimensional array across the bottom of the integrated circuit 1. the epoxy 4 is typically applied to the solder bump interface by dispensing a single line of uncured epoxy material along one side of the integrated circuit. the epoxy then flows between the solder bumps. the epoxy 4 must be dispensed in a manner that covers all of the solder bumps 2. 
it is desirable to dispense the epoxy 4 at only one side of the integrated circuit to insure that air voids are not formed in the underfill. air voids weaken the structural integrity of the integrated circuit/substrate interface. additionally, the underfill material 4 must have good adhesion strength with both the substrate 3 and the integrated circuit 1 to prevent delamination during thermal and moisture loading. the epoxy 4 must therefore be a material which is provided in a state that can flow under the entire integrated circuit/substrate interface while having good adhesion properties. the substrate 3 is typically constructed from a ceramic material. ceramic materials are relatively expensive to produce in mass quantities. it would therefore be desirable to provide an organic substrate for a c4 package. organic substrates tend to absorb moisture which may be released during the underfill process. the release of moisture during the underfill process may create voids in the underfill material. organic substrates also tend to have a higher coefficient of thermal expansion compared to ceramic substrates that may result in higher stresses in the die, underfill and solder bumps. the higher stresses in the epoxy may lead to cracks during thermal loading which propagate into the substrate and cause the package to fail by breaking metal traces. the higher stresses may also lead to die failure during thermal loading and increase the sensitivity to air and moisture voiding. the bumps may extrude into the voids during thermal loading, particularly for packages with a relatively high bump density. it would be desirable to provide a c4 package that utilizes an organic substrate. summary of the invention one embodiment of the present invention is an integrated circuit package which may include an integrated circuit that is mounted to a substrate. 
the package may include an underfill material that is attached to the integrated circuit and the substrate and a fillet which seals the underfill material. brief description of the drawings figure 1 is a side view of an integrated circuit package of the prior art; figure 2 is a top view of an embodiment of an integrated circuit package of the present invention; figure 3 is an enlarged side view of the integrated circuit package; figure 4 is a schematic showing a process for assembling the integrated circuit package. detailed description of the invention referring to the drawings more particularly by reference numbers, figures 2 and 3 show an embodiment of an integrated circuit package 10 of the present invention. the package 10 may include a substrate 12 which has a first surface 14 and a second opposite surface 16. an integrated circuit 18 may be attached to the first surface 14 of the substrate 12 by a plurality of solder bumps 20. the solder bumps 20 may be arranged in a two-dimensional array across the integrated circuit 18. the solder bumps 20 may be attached to the integrated circuit 18 and to the substrate 12 with a process commonly referred to as controlled collapse chip connection (c4). the solder bumps 20 may carry electrical current between the integrated circuit 18 and the substrate 12. in one embodiment the substrate 12 may include an organic dielectric material. the package 10 may include a plurality of solder balls 22 that are attached to the second surface 16 of the substrate 12. the solder balls 22 can be reflowed to attach the package 10 to a printed circuit board (not shown). the substrate 12 may contain routing traces, power/ground planes, vias, etc. which electrically connect the solder bumps 20 on the first surface 14 to the solder balls 22 on the second surface 16. the integrated circuit 18 may be encapsulated by an encapsulant (not shown).
additionally, the package 10 may incorporate a thermal element (not shown) such as a heat slug or a heat sink to remove heat generated by the integrated circuit 18. the package 10 may include a first underfill material 24 that is attached to the integrated circuit 18 and the substrate 12. the package 10 may also include a second underfill material 26 which is attached to the substrate 12 and the integrated circuit 18. the second underfill material 26 may form a circumferential fillet that surrounds and seals the edges of the ic and the first underfill material 24. the sealing function of the second material 26 may inhibit moisture migration, cracking of the integrated circuit and cracking of the first underfill material. the first underfill material 24 may be an epoxy produced by shin-itsu of japan under the product designation semicoat 5230-jp. the semicoat 5230-jp material provides favorable flow and adhesion properties. the second underfill material 26 may be an anhydride epoxy produced by shin-itsu under the product designation semicoat 122x. the semicoat 122x material has lower adhesion properties than the semicoat 5230-jp material, but much better fracture/crack resistance. figure 4 shows a process for assembling the package 10. the substrate 12 may be initially baked in an oven 28 to remove moisture from the substrate material. the substrate 12 is preferably baked at a temperature greater than the process temperatures of the remaining underfill process steps to insure that moisture is not released from the substrate 12 in the subsequent steps. by way of example, the substrate 12 may be baked at 163 degrees centigrade (°c). after the baking process, the integrated circuit 18 may be mounted to the substrate 12. the integrated circuit 18 is typically mounted by reflowing the solder bumps 20. the first underfill material 24 may be dispensed onto the substrate 12 along one side of the integrated circuit 18 at a first dispensing station 30.
the first underfill material 24 may flow between the integrated circuit 18 and the substrate 12 under a wicking action. by way of example, the first underfill material 24 may be dispensed at a temperature of 110 to 120°c. there may be a series of dispensing steps to fully fill the space between the integrated circuit 18 and the substrate 12. the package 10 may be moved through an oven 32 to complete a flow out and partial gel of the first underfill material 24. by way of example, the underfill material 24 may be heated to a temperature of 120-145°c in the oven 32 to partially gel the underfill material 24. partial gelling may reduce void formation and improve the adhesion between the integrated circuit 18 and the underfill material 24. the improvement in adhesion may decrease moisture migration and delamination between the underfill material 24 and the ic 18 as well as delamination between the underfill material 24 and the substrate 12. the reduction in void formation may decrease the likelihood of bump extrusion during thermal loading. the package may be continuously moved through the oven 32 which heats the underfill material during the wicking process. continuously moving the substrate 12 during the wicking process decreases the time required to underfill the integrated circuit and thus reduces the cost of producing the package. the substrate 12 can be moved between stations 30 and 34 and through the oven 32 on a conveyer (not shown). the second underfill material 26 may be dispensed onto the substrate 12 along all four sides of the integrated circuit 18 at a second dispensing station 34. the second material 26 may be dispensed in a manner which creates a fillet that encloses and seals the first material 24. by way of example, the second underfill material 26 may be dispensed at a temperature of approximately 80 to 120°c. the first 24 and second 26 underfill materials may be cured into a hardened state.
the materials may be cured at a temperature of approximately 150 °c. after the underfill materials 24 and 26 are cured, solder balls 22 may be attached to the second surface 16 of the substrate 12. while certain exemplary embodiments have been described and shown in the accompanying drawings, it is to be understood that such embodiments are merely illustrative of and not restrictive on the broad invention, and that this invention not be limited to the specific constructions and arrangements shown and described, since various other modifications may occur to those ordinarily skilled in the art.
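the process sequence described above can be summarized in a short sketch; the step ordering and the rule that the substrate bake should exceed the later process temperatures come from the description, while the representative temperatures use the example values given (a sketch, not a process specification):

```python
# representative temperatures (deg C) from the example values in the text;
# the mount/reflow temperature is not given, so it is left as None
PROCESS_STEPS = [
    ("bake substrate",            163),   # drives out moisture before underfill
    ("mount ic (reflow bumps)",   None),  # reflow temperature not specified here
    ("dispense first underfill",  120),   # 110-120 deg C dispense, wicks under the die
    ("partial gel in oven",       145),   # 120-145 deg C flow-out and partial gel
    ("dispense second underfill", 120),   # 80-120 deg C fillet seals the first material
    ("cure both underfills",      150),   # ~150 deg C cure to a hardened state
]

def bake_hot_enough(steps):
    """The initial substrate bake should exceed the temperature of every
    later step, so that no moisture is released during those steps."""
    bake_temp = steps[0][1]
    later = [t for _, t in steps[1:] if t is not None]
    return all(bake_temp > t for t in later)

print(bake_hot_enough(PROCESS_STEPS))  # → True (163 deg C exceeds 120, 145, 120, 150)
```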
191-314-257-291-244
US
[ "WO", "US" ]
A45F5/02,B23P19/04,F16B2/20
2009-06-05T00:00:00
2009
[ "A45", "B23", "F16" ]
self-locking clip
a self-locking clip may include a first member configured to attach to an item, the first member having a tongue arranged adjacent the item when the first member is attached to the item, the tongue configured to allow an article to be received between the tongue and the item when the first member is attached to the item; a second member pivotally attached to the first member, the second member having at least one engagement surface for engaging the article, when the first member is attached to the item and the article is received between the tongue and the item; and a bias member arranged to bias the second member to pivot in a direction to urge the at least one engagement surface toward the article to engage the article, when the first member is attached to the item and the article is received between the tongue and the item.
what is claimed is: 1. a clip for attachment to an item and for selectively securing the item to an article, the clip comprising: a first member configured to attach to an item, the first member having a tongue that is arranged adjacent the item when the first member is attached to the item, the tongue configured to allow an article to be received between the tongue and the item when the first member is attached to the item; a second member pivotally attached to the first member, the second member having at least one engagement surface for engaging the article, when the first member is attached to the item and the article is received between the tongue and the item; and a bias member arranged to bias the second member to pivot in a direction to urge the at least one engagement surface of the second member toward the article to engage the article, when the first member is attached to the item and the article is received between the tongue and the item. 2. the clip as recited in claim 1 , wherein the tongue is biased toward the item, when the first member is attached to the item. 3. the clip as recited in claim 1, wherein the first member comprises a base portion having a first surface and a second surface, the second surface attachable to the item, the first surface facing away from the item when the second surface is attached to the item; and wherein the tongue is biased toward the first surface of the base. 4. the clip as recited in claim 1, wherein the first member comprises a base portion having a first surface and a second surface, the second surface attachable to the item, the first surface facing away from the item when the second surface is attached to the item; and wherein the bias member is arranged to bias the second member to pivot the at least one engagement surface of the second member toward the first surface of the base portion. 5. 
the clip as recited in claim 4, wherein the second member includes a surface for receiving a force from a user in a direction for pivoting the second member relative to the first member against a bias force of the bias member, to urge the at least one engagement surface of the second member away from the first surface of the base portion. 6. the clip as recited in claim 1, wherein the bias member comprises a portion of the first member configured to provide a bias force against the second member. 7. the clip as recited in claim 1, wherein the first member and the bias member comprise a unitary structure formed from a single piece of metal. 8. the clip as recited in claim 1, wherein the at least one engagement surface of the second member comprises two engagement surfaces. 9. the clip as recited in claim 1, wherein the at least one engagement surface of the second member has a generally pointed tip for engaging the article when the first member is attached to the item and the article is received between the tongue and the item. 10. the clip as recited in claim 1, wherein the second member includes a surface for receiving a force from a user in a direction for pivoting the second member relative to the first member against a bias force of the bias member, to urge the at least one engagement surface of the second member away from the article, when the first member is attached to the item and the article is received between the tongue and the item. 11. 
the clip as recited in claim 1 , wherein the second member has a surface configured to receive a force from the article to pivot the second member relative to the first member in the direction to urge the at least one engagement surface of the second member toward the article and increase a force of engagement of the at least one engagement surface on the article to further inhibit the article from being removed from between the tongue and the item, when the first member is attached to the item and the article is received between the tongue and the item. 12. the clip as recited in claim 1, wherein the second member includes a surface for receiving a force from a user in a direction for pivoting the second member relative to the first member against a bias force of the bias member; and wherein the second member is configured to allow the article to be engaged by the at least one engagement surface of the second member before receiving the force from the user on the surface of the second member. 13. the clip as recited in claim 1, wherein the second member is configured to automatically engage the article with the at least one engagement surface as the article is received between the at least one engagement surface and the item. 14. 
14. a method of making a clip for attachment to an item and for selectively securing the item to an article, the method comprising: configuring a first member to attach to an item; providing the first member with a tongue in a position to be adjacent the item when the first member is attached to the item, the tongue configured to allow an article to be received between the tongue and the item when the first member is attached to the item; attaching a second member to the first member for pivotal movement of the second member relative to the first member; providing the second member with at least one engagement surface for engaging the article when the first member is attached to the item and the article is received between the tongue and the item; and arranging a bias member in a position to bias the second member to pivot relative to the first member in a direction to urge the at least one engagement surface of the second member toward the article to engage the article when the first member is attached to the item and the article is received between the tongue and the item.

15. the method as recited in claim 14, wherein providing the first member with the tongue comprises providing a bias force to bias the tongue toward the item when the first member is attached to the item.

16. the method as recited in claim 15, wherein configuring the first member comprises configuring a base portion having a first surface and a second surface, the second surface attachable to the item, the first surface facing away from the item when the second surface is attached to the item; and wherein providing the first member with the tongue comprises biasing the tongue toward the first surface of the base.

17. the method as recited in claim 14, wherein configuring the first member comprises configuring a base portion having a first surface and a second surface, the second surface attachable to the item, the first surface facing away from the item when the second surface is attached to the item; and wherein arranging the bias member comprises arranging the bias member to bias the second member to pivot the at least one engagement surface of the second member toward the first surface of the base portion.

18. the method as recited in claim 17, wherein providing the second member comprises providing a body having a surface for receiving a force from a user in a direction for pivoting the body relative to the first member against a bias force of the bias member, to urge the at least one engagement surface away from the first surface of the base portion.

19. the method as recited in claim 14, wherein the bias member comprises a portion of the first member configured to provide a bias force against the second member.

20. the method as recited in claim 14, wherein providing the first member comprises forming the first member and the bias member as a unitary structure from a single piece of metal.

21. the method as recited in claim 14, wherein the at least one engagement surface comprises two engagement surfaces.

22. the method as recited in claim 14, wherein the at least one engagement surface of the second member has a generally pointed tip for engaging the article when the first member is attached to the item and the article is received between the tongue and the item.

23. the method as recited in claim 14, wherein providing the second member comprises providing a body having a surface for receiving a force from a user in a direction for pivoting the body relative to the first member against a bias force of the bias member, to urge the at least one engagement surface in a direction away from the article when the first member is attached to an item and an article is received between the tongue and the item.
self-locking clip

cross-reference to related patent applications

[0001] this application claims priority from u.s. provisional application no. 61/217,878, filed 06/05/2009, incorporated herein by reference in its entirety.

background

1. field of the invention

[0002] embodiments of the present invention generally relate to clip members, and, in specific embodiments, to self-engaging clip members.

2. related art

[0003] there are many different organizational and attachment systems, including clips, for carrying and securing various items such as personal equipment, mobile devices, and accessories to a user or an article associated with the user, such as a pants pocket, belt, purse, jacket, and/or the like. however, most clips designed for attaching items such as personal equipment, mobile devices, and accessories hold onto the user or article with tension and friction.

[0004] some clips lock to a user or an article. these clips may include a strong spring that allows the clips to lock to the article. however, these clips can be more complicated and expensive to build. in addition, the holding force of the clip is only as strong as that of its spring. accordingly, the spring cannot be too strong or the user will not be able to press and/or depress the spring. likewise, if the spring is not strong enough, the clip cannot be properly secured to the article. furthermore, these locking clips can require the user to depress the spring (and overcome the force of the spring) to move the clip to an unlocked position in order to insert or engage (or otherwise lock) the clip to the attached article. moreover, over time the pressure from a strong spring-loaded locking clip can damage the material or article (e.g., pants or jacket pocket, gear bag, nylon strap, vest, purse, belt, boot, and/or the like) to which it is clipped.
summary of the disclosure

[0005] a clip for attachment to an item and for selectively securing the item to an article may include, but is not limited to, a first member, a second member, and a bias member. the first member may be configured to attach to an item. the first member may have a tongue that is arranged adjacent the item when the first member is attached to the item. the tongue may be configured to allow an article to be received between the tongue and the item when the first member is attached to the item. the second member may be pivotally attached to the first member. the second member may have at least one engagement surface for engaging the article, when the first member is attached to the item and the article is received between the tongue and the item. the bias member may be arranged to bias the second member to pivot in a direction to urge the at least one engagement surface of the second member toward the article to engage the article, when the first member is attached to the item and the article is received between the tongue and the item.

[0006] in various embodiments, the tongue may be biased toward the item, when the first member is attached to the item. in various embodiments, the first member may include a base portion having a first surface and a second surface. the second surface may be attachable to the item. the first surface may face away from the item when the second surface is attached to the item. the tongue may be biased toward the first surface of the base.

[0007] in various embodiments, the first member may include a base portion having a first surface and a second surface. the second surface may be attachable to the item. the first surface may face away from the item when the second surface is attached to the item. the bias member may be arranged to bias the second member to pivot the at least one engagement surface of the second member toward the first surface of the base portion.
in some embodiments, the second member may include a surface for receiving a force from a user in a direction for pivoting the second member relative to the first member against a bias force of the bias member, to urge the at least one engagement surface of the second member away from the first surface of the base portion.

[0008] in various embodiments, the bias member may include a portion of the first member configured to provide a bias force against the second member. in various embodiments, the first member and the bias member comprise a unitary structure formed from a single piece of metal. in various embodiments, the at least one engagement surface of the second member may include two engagement surfaces.

[0009] in various embodiments, the at least one engagement surface of the second member may have a generally pointed tip for engaging the article when the first member is attached to the item and the article is received between the tongue and the item. in various embodiments, the second member may include a surface for receiving a force from a user in a direction for pivoting the second member relative to the first member against a bias force of the bias member, to urge the at least one engagement surface of the second member away from the article, when the first member is attached to the item and the article is received between the tongue and the item.

[0010] in various embodiments, the second member may have a surface configured to receive a force from the article to pivot the second member relative to the first member in the direction to urge the at least one engagement surface of the second member toward the article and increase a force of engagement of the at least one engagement surface on the article to further inhibit the article from being removed from between the tongue and the item, when the first member is attached to the item and the article is received between the tongue and the item.
in various embodiments, the second member may include a surface for receiving a force from a user in a direction for pivoting the second member relative to the first member against a bias force of the bias member. the second member may be configured to allow the article to be engaged by the at least one engagement surface of the second member before receiving the force from the user on the surface of the second member. in various embodiments, the second member may be configured to automatically engage the article with the at least one engagement surface as the article is received between the at least one engagement surface and the item.

[0011] a method of making a clip for attachment to an item and for selectively securing the item to an article may include, but is not limited to, any one of or combination of: (i) configuring a first member to attach to an item; (ii) providing the first member with a tongue in a position to be adjacent the item when the first member is attached to the item, the tongue configured to allow an article to be received between the tongue and the item when the first member is attached to the item; (iii) attaching a second member to the first member for pivotal movement of the second member relative to the first member; (iv) providing the second member with at least one engagement surface for engaging the article when the first member is attached to the item and the article is received between the tongue and the item; and (v) arranging a bias member in a position to bias the second member to pivot relative to the first member in a direction to urge the at least one engagement surface of the second member toward the article to engage the article when the first member is attached to the item and an article is received between the tongue and the item.
[0012] in various embodiments, providing the first member with the tongue may comprise providing a bias force to bias the tongue toward the item when the first member is attached to the item. in some embodiments, configuring the first member may comprise configuring a base portion having a first surface and a second surface. the second surface may be attachable to the item. the first surface may face away from the item when the second surface is attached to the item. providing the first member with the tongue may comprise biasing the tongue toward the first surface of the base.

[0013] in various embodiments, configuring the first member may comprise configuring a base portion having a first surface and a second surface. the second surface may be attachable to the item. the first surface may face away from the item when the second surface is attached to the item. arranging the bias member may comprise arranging the bias member to bias the second member to pivot the at least one engagement surface of the second member toward the first surface of the base portion. in some embodiments, providing the second member may comprise providing a body having a surface for receiving a force from a user in a direction for pivoting the body relative to the first member against a bias force of the bias member, to urge the at least one engagement surface away from the first surface of the base portion.

[0014] in various embodiments, the bias member may include a portion of the first member configured to provide a bias force against the second member. in various embodiments, providing the first member may comprise forming the first member and the bias member as a unitary structure from a single piece of metal. in various embodiments, the at least one engagement surface comprises two engagement surfaces.
[0015] in various embodiments, the at least one engagement surface of the second member may have a generally pointed tip for engaging the article when the first member is attached to the item and the article is received between the tongue and the item. in various embodiments, providing the second member may comprise providing a body having a surface for receiving a force from a user in a direction for pivoting the body relative to the first member against a bias force of the bias member, to urge the at least one engagement surface in a direction away from the article when the first member is attached to an item and an article is received between the tongue and the item.

brief description of the drawings

[0016] fig. 1 is a top, elevated view of a clip member according to an embodiment of the present invention;

[0017] fig. 2 is a side view of a clip member in a locked position according to an embodiment of the present invention;

[0018] fig. 3 is a side, elevated view of a clip member in a locked position according to an embodiment of the present invention;

[0019] fig. 4 is a side view of a clip member in an unlocked position according to an embodiment of the present invention;

[0020] fig. 5 is a side view of a clip member in a locked position according to an embodiment of the present invention;

[0021] fig. 6 is a top, elevated view of a clip member according to an embodiment of the present invention; and

[0022] fig. 7 is a side, elevated view of a clip member engaged with an exemplary article of clothing according to an embodiment of the present invention.

detailed description

[0023] figs. 1-3 illustrate a clip member 10 according to an embodiment of the present invention. in particular, fig. 1 is a top, elevated view of the clip member. fig. 2 is a side view of the clip member 10 in a locked position (to receive and securely hold an article). fig. 3 is a side, elevated view of the clip member 10 in a locked position.
[0024] the clip member 10 may be used to clip various items such as personal equipment, mobile devices, and accessories including, but not limited to, tools, electronic devices, weapons, gear pouches, safety equipment, and/or the like (or a product used with such an item, such as, but not limited to, a mobile phone case, key ring or chain, and/or the like) to an article such as (but not limited to) a pants pocket (e.g., 101 in fig. 7) or other pocket (of varying shapes and sizes), a shirt collar, pants waistband, belt, hat, purse, briefcase, gear bag, backpack, molle compatible webbing system, a shoe or boot, an elastic band, and/or the like. in specific embodiments, the clip member 10 may be molle (modular lightweight load-carrying equipment) compatible. in other embodiments, the item attachable to the clip member need not be a personal or otherwise portable device. for example, the clip member 10 may be attached to objects other than a person (or objects carried by a person), including, but not limited to, a wall or other mounting surface, filing cabinet, desk or other furniture, refrigerator or other appliance, and/or the like.

[0025] the first member 20 and/or the second member 30 (or any one or more part thereof) may be made of any suitably rigid material (or combination of materials) including, but not limited to, metal, alloy, plastic, rubber, composite material (e.g., carbon fiber), wood, ceramic, and/or the like.

[0026] with reference to figs. 1-3, the clip member 10 may include a first member 20 and a second member 30. the first member 20 may include a base 28 and a body 21. in some embodiments, the base 28 may be integral to the body 21, for example, formed together. in other embodiments, the base 28 may be connected to the body 21 in any suitable manner, for example (but not limited to) with one or more screws (or other fasteners), glue, epoxy, double sided bonding tape, molded or welded together, and/or the like.
the base 28 may have a first surface 28a and a second surface 28b. the first surface 28a may face the body 21.

[0027] in addition, the shape of the base 28 may be substantially rectangular as shown in figs. 1-3. in other embodiments, however, the base 28 may have any suitable shape and size. in some embodiments, the base 28 (or a portion thereof or the body 21) may be attached to an item (e.g., 100 in fig. 5), such as a mobile phone, pager, mobile device, and/or the like or a product used with the item (e.g., mobile phone case, key ring, and/or the like) in any suitable manner including (but not limited to) with one or more screws (or other fasteners), glue, epoxy, double sided bonding tape, molded or welded together, and/or the like. for instance, the base 28 may include an adhesive material on the second side 28b of the base 28 (e.g., the side facing away from the body 21) that adheres to the item. as another example, the base 28 may include or be used with a magnet for attracting to the base 28 (or other component) and/or the item. the magnet may be magnetically coupled to a metallic item or the like to couple the clip member 10 to the metallic item. in further embodiments, the magnet may include an adhesive material (as above) on an opposite side from the side coupled to the clip member 10.

[0028] in particular embodiments, the base 28 (or a portion thereof) or the body 21 may be molded or welded to or otherwise made part of the item. for instance, a portion of the clip member 10 (e.g., the base 28 or portion thereof) may be molded into the item or other product, such as a rigid (or soft) mobile phone case and/or the like. for example, when the phone case is manufactured, the base 28 may be encased in plastic (or other material) (e.g., figs. 4-5).
accordingly, when the clip member 10 is clipped to an article, the article passes between the tongue 27 and the surface of the case, and when locked, the article is pressed against the surface of the case by the engagement ends 34 (discussed later). thus, in various embodiments, the clip member 10 (or portions thereof) may be a component or otherwise part of a manufactured item (or product). it should be noted that reference to the base 28 or a surface thereof (e.g., first surface 28a) of the clip member 10 may also include embodiments in which the item itself is the base or provides the surface against which the engagement ends 34 press an article (as described later).

[0029] in other embodiments, the base 28 (or the body 21 itself) may be attached, mounted, assembled, or formed integral with an item in any suitable manner. for instance, the clip member 10 and/or the base 28 can be assembled into a product such as a pocketknife or a flashlight (or other item). in some embodiments, the base 28 may be, include, or be formed with an attachment body (not shown) configured to fit to the item (e.g., flashlight) in any suitable manner including, but not limited to, a snap fitting, friction fitting, press fitting, with an adhesive, fastener, and/or the like. for example, for a clip member 10 configured to fit to a flashlight or other cylinder-shaped item, the attachment body may be (but is not limited to) a "c"-shaped body (to fit or snap to the flashlight) or an annular ring (to slip over an end of the flashlight). in further examples, the attachment body may be received into a groove or the like of the flashlight (or other item). the above examples relate to flashlights and other cylinder-shaped items. however, in other embodiments, the clip member 10 may be configured to fit to any sized, shaped, and/or dimensioned item. as another example, the base 28 (or the body 21 itself) may be fastened (e.g., screwed) to a handle of a foldable pocketknife or the like.
[0030] in various embodiments, the body 21 may include a tongue 27 or the like. the tongue 27 may be biased with a bias force toward the first surface 28a of the base 28 (i.e., in a direction toward the article when the article is received between the body 21 and the item, as described later). the bias force may be selected or otherwise set based on the material used to make the first member 20 (or the tongue 27 by itself). in various embodiments, the bias force of the tongue 27 may be less than a bias force provided by a bias member 25, as described later.

[0031] in the embodiments shown in figs. 1-3, the tongue 27 may have a substantially rectangular shape extending in a longitudinal direction. however, in other embodiments, the tongue 27 may have any suitable shape, curvature, and/or decoration (e.g., figs. 4-6). in various embodiments, the tongue 27 may be integral with the body 21. in other embodiments, the tongue 27 may be connected to the body 21 in any suitable manner, for example, as described with respect to the body 21 and the base 28.

[0032] in various embodiments, the body 21 (including the tongue 27) and the base 28 may each have substantially the same length dimension (e.g., fig. 1). in other embodiments, the length dimensions of the body 21 and the base 28 may be different. for instance, the length dimension of the body 21 may be greater than the length dimension of the base 28. alternatively, the length dimension of the body 21 may be less than the length dimension of the base 28 (e.g., fig. 5). for instance, a longer base 28 could allow for facilitated attachment of a key ring or the like to an end (or other suitable location) of the base 28.

[0033] the first member 20 (e.g., the base 28, the body 21, and the tongue 27) may be for receiving an article such as (but not limited to) fabric of a pants pocket (e.g., 101 in fig.
7) or other pocket (of varying shapes and sizes), a shirt collar, pants waistband, any suitable portion of an article of clothing, belt, hat, purse, briefcase, gear bag, backpack, molle compatible webbing system, a shoe or boot, an elastic band, and/or the like. in particular, the first member 20 may be placed over the article to receive a portion of the article into the first member 20.

[0034] the second member 30 may be supported on the first member 20 for movement toward and away from the first surface 28a of the base 28. in some embodiments, the first member 20 may include one or more pivot points 26 (or rotational axes) about which the second member 30 pivots (or rotates) relative to the first member 20. the pivot points 26 may include one or more protrusions received in one or more recesses 36 of the second member 30 to allow one end of the second member 30 to pivot toward and away from the first surface 28a of the base 28. in other embodiments, the locations of the protrusion(s) and recess(es) may be reversed such that the second member 30 may include protrusions (not shown) received in one or more recesses (not shown) of the first member 20 to allow the end of the second member 30 to pivot toward and away from the first surface 28a of the base 28.

[0035] the second member 30 may include a pressing area 33 for selectively moving the second member 30 toward and away from the first surface 28a of the base 28. accordingly, the second member 30 can be selectively pivoted by applying a force upon the pressing area 33 of the second member 30. the second member 30 may also include one or more arms 32 for movement with the second member 30. each of the arms 32 may include an engagement end 34 for contacting or otherwise operatively engaging the first surface 28a of the base 28 (or a portion of the article positioned between the first surface 28a of the base 28 and the second member 30).
[0036] in some embodiments, in the locked position (e.g., before an article is received into the clip member 10), the engagement ends 34 (and/or the arms 32) and the first surface 28a of the base 28 may form an angle a1 of less than 90 degrees (facing a receiving direction in which an article may be received into the clip member 10) (e.g., fig. 2). the formed angle a1, for example, may help guide the article under the engagement ends 34 (and/or the arms 32) as the article is received into the clip member 10 (in the receiving direction (relative to the clip member 10)). this may urge the engagement ends 34 upward and allow continued movement of the article in the receiving direction. as such, a force provided by the article (in the receiving direction) may be sufficient to overcome a bias force of a bias member 25 (described below) to urge the engagement ends 34 away from the base 28 (e.g., away from the locked position). once the force provided by the article (in the receiving direction) is less than the bias force of the bias member 25 (e.g., the article is no longer moving in the receiving direction), the bias member 25 may urge the engagement ends 34 toward the base 28 (e.g., toward the locked position) to securely hold the article in place.

[0037] in some embodiments, in the locked position (e.g., an article is received between the engagement ends 34 and the base 28 (or item)), the engagement ends 34 (and/or the arms 32) and the first surface 28a of the base 28 may form an angle a2 of less than 90 degrees (facing opposite the receiving direction in which the article was received) (e.g., fig. 2). the formed angle a2 may cause a force, which is provided from the article as the user (or other force) attempts to remove the article (in a removal direction, which is opposite the receiving direction), to (further) urge the engagement ends 34 toward the first surface 28a of the base 28.
accordingly, the engagement ends 34 may be urged further toward the base 28, and thus increase the holding ability of the clip member 10 in direct proportion to the force with which the user (or the like) is attempting to remove the article.

[0038] in various embodiments, the engagement ends 34 may be pointed. in other embodiments, the engagement ends 34 may be, but are not limited to, smooth, round, flat (e.g., to be parallel with the first surface 28a of the base 28 when the engagement ends 34 engage the first surface 28a (or the article to which the clip member 10 is being clipped)), and/or the like. in some embodiments, the first surface 28a of the base 28 may include one or more recesses (not shown) for receiving at least a portion of the engagement ends 34 and/or a portion of the article pushed into the recess(es) by the engagement ends 34. such embodiments, for example, may provide additional locking or gripping between the engagement ends 34 and the base 28 of the clip member 10. in other embodiments, each of the engagement ends 34 may include a recess or other curvature for receiving a protrusion (not shown) or the like arranged on the first surface 28a of the base 28. as above, such embodiments, for example, may provide additional locking or gripping between the engagement ends 34 and the base 28 of the clip member 10.

[0039] in some embodiments, the engagement ends 34 may be sized and shaped as appropriate. in some embodiments, for example, the engagement ends 34 (and/or the one or more arms 32) may be formed as one continuous member for engaging an article received into the first member 20. in some embodiments, the engagement ends 34 may extend in the same direction as the arms 32. in other embodiments, the engagement ends 34 may protrude from the arms 32, for example toward the first surface 28a of the base 28 (e.g., fig. 2).
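the self-energizing behavior of the arm angles described above can be sketched with a simple, illustrative force balance. the symbols below (insertion force $F_{in}$, pull-out force $F_{out}$, bias preload $F_b$, and clamping force $N$) are assumptions introduced only for illustration and do not appear in the specification:

\[
\text{insertion:}\quad F_{in}\tan(a_1) > F_b \;\Rightarrow\; \text{the engagement ends ride up and admit the article,}
\]
\[
\text{removal:}\quad N \approx F_b + F_{out}\cot(a_2), \qquad \cot(a_2) > 0 \ \text{for}\ a_2 < 90^\circ,
\]

so under this sketch the clamping force $N$ grows in direct proportion to the pull-out force $F_{out}$, consistent with the statement that the holding ability of the clip member increases with the force attempting to remove the article, while insertion requires only enough force to overcome the bias preload of the bias member.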
[0040] in various embodiments, the clip member 10 may include one or more friction members (not shown) strategically arranged along the clip member 10, for example, to provide further holding or gripping strength. for instance, a friction member may be arranged on each of the engagement ends 34 (and/or the recesses for receiving the engagement ends 34) to increase friction with the article, and thus the holding ability of the clip member 10 to the article to which the clip member 10 is clipped. the friction member(s) may be made of any suitable material (or combination of materials) for providing friction, such as, but not limited to, rubber and/or the like. in some embodiments, the clip member 10 may include one or more portions with a rough or textured surface or the like for providing increased friction between the clip member 10 and the article. for example, the first surface 28a of the base 28 may have one or more rough or textured surfaces to increase friction between the article and the base 28.

[0041] a bias member 25 (e.g., spring, leaf spring, or the like) may be positioned to provide a bias force on the second member 30 (e.g., on a bottom surface of the second member 30 opposite the pressing area 33). in the illustrated embodiments, the bias member 25 is positioned between the first member 20 and the second member 30. however, in other embodiments, the bias member 25 may be positioned at any suitable location for providing a bias force on the second member 30.

[0042] the bias member 25 may urge the pressing area 33 of the second member 30 upward (in the orientation of figs. 1-3) to cause the arms 32 to pivot downward (in the orientation of figs. 1-3) about the pivots 26. as the arms 32 pivot downward, the engagement ends 34 engage (i.e., contact and hold) the article positioned between the first surface 28a of the base 28 and the second member 30.
the bias member 25 may be configured or otherwise selected to provide a bias force sufficient to leverage the pressing area 33 (and the arms 32 and engagement ends 34) of the second member 30 upward (in the orientation of figs. 1-3), yet allow a user to apply a force greater than the bias force upon the pressing area 33 to urge and move the second member 30 in the opposite direction against the bias force of the bias member 25.

[0043] in some embodiments, the bias member 25 may be made integral with the body 21 of the first member 20. for instance, the bias member 25 may be a resilient portion of the body 21 arranged to provide the bias force (or similar tension) on the second member 30. in such embodiments, the user may apply a force (e.g., press) on the pressing area 33 to pivot the arms 32 upward. when the user stops pressing on the pressing area 33 (or the force provided by the resilient portion otherwise becomes greater than the force on the pressing area 33), the resilient portion may urge the arms 32 downward toward the first surface 28a of the base 28. in such embodiments, for example, when the pressing area 33 is pressed, the pressing area 33 may urge the resilient portion toward the base 28. once released, the resilient portion may once again move away from the base 28. in some embodiments, the body 21 may include at least one groove or cutout area separating the body 21 (or the tongue 27) and the resilient portion.

[0044] in particular embodiments, the resilient portion may extend or otherwise protrude from the body 21 (in a direction angled away from the base 28 and toward the tongue 27) to contact or otherwise engage the underside of the pressing area 33. as such, the resilient portion extends away from the base 28 at an angle of less than 90 degrees (facing the direction in which an article may be received) (e.g., fig. 2).
in further embodiments, the second member 30 (including the pressing area 33, the arms 32, and the engagement ends 34) may be supported on the first member 20 to be at an angle of less than 90 degrees (facing the direction in which an article may be received) (e.g., fig. 2). thus, in these embodiments, for example, the resilient portion and the second member 30 may be orientated in substantially the same (or parallel) directions. as discussed, such a configuration may allow for a self-locking clip member 10.

[0045] in other embodiments, the resilient portion of the base 28 may extend or otherwise protrude from the tongue 27 (in a direction angled away from the base 28 and away from the tongue 27) to come in contact with the underside of the pressing area 33. as such, the resilient portion extends away from the base 28 at an angle of greater than 90 degrees (facing the direction in which an article may be received) (e.g., fig. 5). put another way, the resilient portion extends away from the base 28 at an angle of less than 90 degrees (facing the direction in which an article may be removed) (e.g., fig. 5). thus, in these embodiments, for example, the resilient portion and the second member 30 may be orientated in opposing directions. however, these embodiments operate similarly to those in which the resilient portion and the second member 30 are orientated in substantially the same (or parallel) directions (e.g., fig. 2).

[0046] in other embodiments, the bias member may be a coil spring (or any other suitable biasing member) positioned between the first member 20 and the second member 30 (or at any other suitable location to provide a biasing force) to allow the coil spring to provide the bias force on the second member 30.

[0047] to use the clip member 10, the user may first slide or otherwise move the tongue 27 of the clip member 10 over an article so that the article is positioned between the base 28 and the tongue 27 of the body 21.
the user may continue to slide or otherwise move the clip member 10 relative to the article beyond the engagement ends 34 of the arms 32 of the second member 30. as the article is moved toward the engagement ends 34 of the arms 32, the article may press on the arms 32, forcing the engagement ends 34 of the arms 32 up and away from the first surface 28a of the base 28. in other words, a force from the clip member 10 being moved relative to the article may be sufficient to overcome the bias force of the bias member 25 to urge and move the arms 32 upward and allow the article to pass between the engagement ends 34 and the base 28. in various embodiments, if desired, the pressing area 33 may be pressed to raise the arms 32 and the engagement ends 34 of the arms 32, whereupon the article may be received into the clip member 10 past the engagement ends 34 of the arms 32.

[0048] once the clip member 10 is no longer being moved relative to the article, the bias member 25 may urge the arms 32 downward so that the engagement ends 34 contact (or otherwise operatively engage) and securely hold the article. in other words, once the bias force of the bias member 25 is greater than the force from the clip member 10 being moved relative to the article, the bias member 25 may urge the arms 32 downward to engage the article. the engagement ends 34 may substantially prevent the article from being moved (relative to the clip member 10) in an opposite direction from which the article was inserted into the clip member 10 without damaging the article, unless the pressing area 33 is pressed by the user to overcome the bias force of the bias member 25 and pivot the arms 32 upward to disengage the article. in addition, an opposing force created from an attempt to move the article (relative to the clip member 10) in the opposite direction may further urge the arms 32 of the second member 30 downward toward the article and the first surface 28a of the base 28.
thus, as the opposing force increases, the holding strength or ability of the clip member 10 may increase accordingly. [0049] thus, in various embodiments, a clip member 10 may include a first member 20 moveable relative to a second member 30. the second member 30 may include one or more arms 32 with a pressing area 33 positioned on top (or other suitable location) of the clip member 10. the one or more arms 32 may be appropriately biased (e.g., spring-loaded) to urge engagement ends 34 of the arms 32 to securely grip an article to which the clip member 10 is being clipped. for instance, a bias member 25 may be configured or otherwise selected to provide a force on the second member 30 sufficient to move or leverage the engagement ends 34 of the arms 32 into a position where the engagement ends 34 contact the article (e.g., pants pocket) to which the user wants to clip the clip member 10. in these embodiments, for example, the clip member 10 may not hold and lock the article based solely (or at all) on the bias force of the bias member 25. instead, as previously discussed, the configuration of the second member 30, which may include, for example, providing the one or more arms 32 and/or the engagement ends 34 having a certain angle, allows the clip member 10 to receive the article without depressing the pressing area 33 and allows the clip member 10 to clip to and securely hold the article received in the clip member 10. thus, the clip member 10 may automatically engage or lock the article as the article is received. [0050] once the clip member 10 is clipped to the article, attempting to forcibly pull out or release the article to which the clip member 10 is clipped without depressing the pressing area 33 of the second member 30 of the clip member 10 may cause the arms 32 (and/or the engagement ends 34) to remain in the locked position and force the arms 32 further toward the base 28 of the clip member 10 to grip the article with added strength. 
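Viewed abstractly, the engagement logic the preceding paragraphs describe reduces to comparing applied forces against the bias force of the bias member 25: a press on the pressing area 33 or a sliding insertion force greater than the bias force raises the arms 32, while a pull on a clipped article only adds to the grip. The sketch below models that comparison; the function name and the force values are illustrative assumptions, not part of the disclosure.

```python
# Illustrative force-comparison model of the self-locking clip described above.
# All names and numeric forces are assumptions for this sketch.

def clip_state(bias_force: float,
               insertion_force: float = 0.0,
               press_force: float = 0.0,
               pull_force: float = 0.0) -> str:
    """Return the state of the engagement ends for the given forces."""
    if press_force > bias_force:
        return "unlocked"        # pressing area 33 overcomes the bias member 25
    if insertion_force > bias_force:
        return "arms raised"     # article slides past the engagement ends 34
    # A pull on a clipped article urges the arms further down (wedging action),
    # so the effective grip grows with the opposing force.
    grip = bias_force + pull_force
    return f"locked (grip={grip:.1f})"

print(clip_state(bias_force=5.0, insertion_force=6.0))  # article being inserted
print(clip_state(bias_force=5.0, pull_force=8.0))       # pull increases grip
print(clip_state(bias_force=5.0, press_force=6.0))      # user releases article
```

The point of the model is the asymmetry: insertion and deliberate pressing work against the bias force, but pulling the article out works with it.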
the stronger the force at which the article is forcibly pulled (without depressing the pressing area 33), the stronger the force at which the clip member 10 holds or grips the article in place. [0051] in various embodiments, the one or more arms 32 may be configured to be self-activating in that the user does not need to unlock or otherwise ready the clip member 10 before clipping the clip member 10 to an article. therefore, when the user wants to clip the clip member 10 to an article, the user may simply slide the clip member 10 over the article whereupon the clip member 10 automatically engages or locks the article in place. [0052] in further embodiments, locating the pressing area 33 on top of the clip member 10 (in the orientation of figs. 1-3) conveniently allows the user to depress the pressing area 33 to release the article simultaneously as the user grasps and deploys the article to which the clip member 10 is clipped. [0053] figs. 4-6 illustrate a clip member 10' according to an embodiment of the present invention. the clip member 10' and components thereof may be substantially the same as described above with respect to the clip member 10 (figs. 1-3), however, with a different configuration, for example, for the base 28', the body 21', and the bias member 25'. fig. 4 is a side view of the clip member 10' in an unlocked position (to allow release of a clipped article) and mounted to an item 100. fig. 5 is a side view of the clip member 10' in a locked position (to receive and securely hold an article). fig. 6 is a top, elevated view of the clip member 10'. [0054] with reference to fig. 4, the clip member 10' is shown in the unlocked position, which may be obtained, for instance, by pressing on the pressing area 33' (e.g., with a user's finger) with sufficient force to overcome the bias force provided by the bias member 25' to urge the arms 32' to pivot (about the pivots 26') upward (in the orientation of fig.
4), thus disengaging (e.g., no longer contacting) the engagement ends 34' of the arms 32' from an article to which the clip member 10' is clipped. accordingly, the user may remove the clip member 10' from the article. [0055] with reference to fig. 5, once the pressing area 33' is released (or the force upon the pressing area 33' otherwise becomes less than the bias force of the bias member 25'), the bias member 25' may urge the arms 32' to pivot downward into the locked or engaged position. as discussed, in this position, the clip member 10' (with or without the attached item 100) may be clipped to an article by moving the clip member 10' over the article to place the engagement ends 34' of the arms 32' of the second member 30' over the article. thus, in some embodiments, the engagement ends 34' may press against the article to press the article against some other surface (e.g., the surface of the item 100) other than the base 28'. in addition, in this position, in a case where an article is already received in the clip member 10', the engagement ends 34' of the second member 30' may securely hold the article in place and substantially prevent the article from being removed from the clip member 10'. [0056] in various embodiments, such as those shown in figs. 4-6, the bias member 25' may face in a direction opposite to that faced by the bias member 25 of the clip member 10 (e.g., figs. 1-3). in addition, as noted, various components, such as (but not limited to) the base 28', the one or more arms 32', the engagement ends 34', and the tongue 37' may be sized, shaped, and/or decorated in any suitable manner, for example, as shown in figs. 4-6. [0057] fig. 7 is a side, elevated view of the clip member 10' of figs. 4-6 (and/or the clip member 10 of figs.
1-3) shown in the locked position and clipped onto an article, such as a pants pocket 101, with material of the pants pocket 101 received into the clip member 10' to clip the clip member 10' (and an item (e.g., 100 in fig. 4) attached to the clip member 10') to the pants pocket 101. [0058] the clip member 10' is shown as facing from (or clipped to) an outside surface of the pants pocket 101 (and thus an attached item, such as a mobile phone or the like, may be inside the pants pocket 101). however, the user may optionally clip the clip member 10' to an inside surface of the pants pocket 101 and face inside the pants pocket 101 (and thus an attached item may be outside the pants pocket 101). [0059] accordingly, various embodiments provide a convenient, yet secure way to attach and detach a wide variety of items (personal equipment, mobile devices, and accessories including, but not limited to, tools, electronic devices, weapons, gear pouches, and safety equipment) to a user or an article associated with the user. [0060] with reference to figs. 1-7, it should be noted that the terms "locked position," "engaged position," and similar terms may be used interchangeably. furthermore, such terms, such as "locked position," unless otherwise noted, may refer to a position or state of the clip member 10 (or the clip member 10') in which it is ready to be clipped to an article or the like and/or to a position or state of the clip member 10 in which the clip member 10 is already clipped to the article. [0061] the components, such as the first member 20 and the second member 30, of the clip member 10 (or the clip member 10') may be made according to known methods for making clips or the like. for instance, the first member 20 and/or the second member 30 may be stamped out, cut, or otherwise formed from a piece of metal (or other material) and then bent into appropriate form. as another example, the first member 20 and/or the second member 30 may be molded into appropriate form.
[0062] in various embodiments, the clip member 10 (or 10'), item, and article may be connected together or disconnected in any suitable order. for instance, the user may first attach the clip member 10, for example as previously described, to the item, and then clip the clip member 10 and the item to the article. alternatively, the clip member 10 may be clipped to the article, and then be attached to the item. likewise, to disconnect the components, the user may first unclip the clip member 10 from the article. then, the user may detach the clip member 10 from the item (or clip the clip member 10 and the item to another article). alternatively, the user may detach the clip member 10 from the item and then unclip the article (or attach the clip member 10 and the article to another item). [0063] in various embodiments, components of the clip member 10 (or 10') may be assembled or disassembled in any suitable order. for example, the attachment body of the base 28 may be attached to a flashlight (or other item). then the clip member 10 may be clipped to the article. then the clip member 10 or the article may be connected with the attachment body and the flashlight. [0064] the embodiments disclosed herein are to be considered in all respects as illustrative, and not restrictive of the invention. the present invention is in no way limited to the embodiments described above. various modifications and changes may be made to the embodiments without departing from the spirit and scope of the invention. the scope of the invention is indicated by the attached claims, rather than the embodiments. various modifications and changes that come within the meaning and range of equivalency of the claims are intended to be within the scope of the invention.
192-250-912-207-931
US
[ "CA", "US" ]
E02D27/42
1992-07-20T00:00:00
1992
[ "E02" ]
railroad signal foundation and method of producing, transporting and erecting same
a railroad signal foundation assembly 10 comprises a foundation 11 having a concrete base 12, a pillar 13 comprised of tiers of concrete spider blocks 19 mounted upon the base, and a concrete crown 14 mounted upon the pillar. the foundation assembly has a pallet 15 mounted between and extending laterally outward from the crown and base, and mounting bands 16 which secure the pallet aside the pillar. the foundation may be transported as an assembled unit upon the pallet to an erection site, dismounted from the pallet and lowered into a ground hole without a worker being in the hole during its erection.
1. a railroad signal foundation assembly comprising a pillar, a base mounted to one end of said pillar extending laterally outward therefrom, a crown mounted to another end of said pillar extending laterally outward therefrom, and a pallet mounted aside said pillar nested between said base and said crown, said pallet being sized and shaped to support said base, said pillar and said crown above a support surface in an assembled condition for storage and transportation to an erection site. 2. a method of producing, transporting and erecting a railroad signal foundation comprising the steps of: (a) assembling a foundation by mounting a base to one end of a pillar and mounting a crown to an opposite end of the pillar; (b) mounting a pallet aside the pillar so as to be nested between and extending laterally outward from the base and crown; (c) transporting the palletized foundation to an erection site with the foundation supported atop the pallet; (d) dismounting the foundation from the pallet; and (e) lowering the foundation in an upright orientation into a ground hole. 3. the method of claim 2 further comprising the step of digging the ground hole sized sufficiently wide to receive the foundation in its upright orientation but insufficient to accommodate a worker beside the foundation.
technical field this invention relates generally to foundations for railroad signal and traffic control devices, and to methods of producing, transporting and erecting such foundations. background of the invention today there exists a vast number of railroad crossings where automotive roads and highways cross railroad tracks. in early times signs were erected at such crossings to warn automotive vehicle drivers of the railroad crossing and thereby avoid the possibility of collision with a train. later such signs were made larger and equipped with flashing lights. major crossings were equipped with barrier bars that were automatically raised and lowered in response to the sensed presence of a train. the increase in the size of these signs and signals, and the addition of barrier bars to crossing signals, has meant that these apparatuses have had to be supported on stronger foundations in the ground aside the railroad crossings. railroad signal foundations have heretofore been constructed in a number of manners. some foundations have been formed by merely digging a hole in the ground and filling the hole with concrete to which upright signal masts have been anchored. this has been costly in that it is required that mixed concrete be transported in fluid form to each site. in more recent years railroad crossing signal and traffic control foundations have been made of precast, steel reinforced, concrete components erected one atop the other in a ground hole. this has typically been done by digging a hole in the ground adjacent a railroad crossing. with workers located both within the hole and above the ground, the foundations have been erected piece by piece by positioning a base on the floor of the hole upon which a relatively slender pillar is built with interlocking blocks to approximately ground level. a crown, sometimes referred to as a doughnut, to which a signal mast may be mounted, is finally mounted atop the pillar and the hole filled.
foundations of the type just described have proven to be very hazardous and costly to construct. not only is working in a deep hole in the earth inherently dangerous, but the workers have had to manipulate heavy concrete components as they are successively each lowered by cable into the holes in close proximity to the workers. many workers have been injured and even killed from time to time from earth avalanches and mishaps in offloading and manipulating the individual concrete components as the foundation is erected within the hole. additionally, working under such hazardous conditions has caused the time necessary to erect such foundations to be substantial. accordingly, it is seen that a railroad signal and traffic control foundation has long remained needed that may be produced, transported and erected in a safe and cost efficient manner. it is to the provision of such therefore that the present invention is primarily directed. summary of the invention in a preferred form of the invention a railroad signal foundation assembly comprises a pillar, a base mounted to one end of the pillar which extends laterally outward therefrom, and a crown mounted to another end of the pillar which also extends laterally outward therefrom. the assembly also has a pallet mounted aside the pillar nested between the base and the crown. the pallet is sized and shaped to support the base, pillar and crown above a support surface in an assembled condition for storage and transportation to an erection site. in another preferred form of the invention a method of producing, transporting and erecting a railroad signal foundation comprises the steps of assembling a foundation by mounting a base to one end of a pillar and mounting a crown to an opposite end of the pillar. a pallet is mounted aside the pillar nested between and extending laterally outward from the base and crown. 
the palletized foundation is transported as an assembly to an erection site with the foundation supported atop the pallet. the foundation is then dismounted from the pallet and lowered in an upright orientation into a ground hole. brief description of the drawing fig. 1 is a perspective view of a railroad signal foundation assembly embodying principles of the invention in a preferred form. fig. 2 is a perspective view of the foundation assembly of fig. 1 being offloaded from an underlying pallet. fig. 3 is a perspective view of the foundation assembly of fig. 1 shown laid upon its side. fig. 4 is a bottom view of the foundation assembly of fig. 1 shown being moved by a forklift truck. fig. 5 is a bottom view of the foundation assembly of fig. 1 shown upon a flatbed truck adjacent other foundations of like construction. fig. 6 is a perspective view of the foundation assembly of fig. 1 shown with its pallet unfastened from the foundation. fig. 7 is a perspective view of the foundation assembly of fig. 1 with the foundation shown being raised from the pallet. fig. 8 is a perspective view of the foundation of fig. 1 shown being lowered into a hole in the ground. fig. 9 is a perspective view of the foundation assembly of fig. 1 in an erected orientation. fig. 10 is a perspective view of the foundation assembly of fig. 1 with portions removed to reveal internal components. detailed description with reference next to the drawing, there is shown in fig. 9 a railroad signal and traffic foundation assembly 10 of the present invention. the assembly 10 has a foundation 11 comprising a base 12, a pillar 13, and a crown 14, all of which are made of precast concrete structures, and a wooden pallet 15 and metallic mounting bands or straps 16. four guide rods 18 extend from the steel reinforced concrete base 12 through the pillar 13. the pillar itself is comprised of four tiers of interlocked spider blocks 19 with unshown transverse, open top channels.
each tier thus has two conventional, steel reinforced, concrete spider blocks mounted transversely to each other in log-cabin fashion with each block oriented diagonally across the square shaped base so that the base extends laterally outward from the pillar. each spider block has two tapered holes therethrough that receive the guide rods 18. in assembling the foundation, the spider blocks are lowered one by one into place upon the base 12 and upon each other by passing them down along the guide rods 18 with the pair of each tier fitted together. the crown 14 is mounted atop the pillar 13 with the guide rods 18 passing through four tapered holes which extend through the crown and which are oriented about a large central hole 24. such a crown is therefore often referred to as a doughnut. the crown is sized so as to extend laterally from the pillar. the crown, which is of frusto-conical shape, is ruggedized with an annular array of reinforcing steel rods 21. it has two removable lifting eyes 22 threadably mounted into threaded holes 23 in its top. nuts 30 are mounted on the guide rods 18 flushly atop the crown to secure the base, pillar and crown components together as a complete foundation. the pallet 15 is comprised of a pair of wooden base boards 25, a pair of wooden mounting boards 26 oriented generally parallel to the base boards 25, and three sets of wooden cross boards 27. two cross boards 28 of each set are mounted between the base boards 25 and the mounting boards 26 and one cross board of each set is mounted atop the mounting boards 26. as shown in figs. 1 and 9, the pallet 15 is mounted aside the pillar 13 nested between the base 12 and the crown 14 with the base boards 25 of the pallet extending laterally outward beyond the base and crown. three flexible, metallic, mounting bands 16 are mounted tightly about the pillar and pallet so as to secure them together.
mounted in this manner, the pallet closely abuts the crown and base thereby preventing longitudinal movement of the pallet along the pillar. transportation of the foundation assembly may best be understood by sequential reference to figs. 1-8. in fig. 1 the foundation assembly 10 is shown stowed on a small pallet 32 positioned beneath the base 12. to transport the foundation assembly the lifting eyes 22 are threaded into the crown holes 23 so that the foundation assembly may be lifted and lowered with a chain 31 coupled to the lifting eyes. in doing this, the foundation assembly is tilted and lowered off the small pallet 32, as shown being done in fig. 2, until the foundation is supported horizontally upon its pallet 15 upon the ground, as shown in fig. 3. the foundation assembly may then be moved with the use of a forklift truck, as shown in fig. 4, onto a flatbed truck for transportation to an erection site, as shown in fig. 5. again, it should be noted that the foundation is supported upon the pallet over the surface of the flatbed truck, thereby preventing the concrete components of the foundation from contacting the hard flatbed surface so as to avoid chipping and breakage. the wooden pallet also acts as a cushion between the foundation and the truck. the foundation assembly 10 may now be transported compactly along with assemblies of like construction without fear of the foundations toppling over. once the assembly arrives at its erection site it is removed from the truck and lowered onto the ground. the mounting bands 16 are then cut as shown in fig. 6 so that the foundation may be lifted from the pallet. in doing this a chain 33 is coupled to the lifting eyes 22 and the foundation 11 raised to an upright position as shown being done in fig. 7. the foundation is then lowered into a hole in the ground, as shown in fig. 8, having a width somewhat greater than the width of the base and a level floor. 
it should be noted that there is too small a space between the base and the earth wall of the hole to accommodate a worker. this provides a safety measure as it prevents one from entering the ground hole against standing instructions of his supervisor or foreman during foundation erection. also, moving and erecting the foundation as an assembled unit eliminates the need for dangerous manipulations of numerous concrete blocks within the confines of a ground hole, as with the construction of foundations in the past. once the foundation is properly positioned within the hole, excavated dirt is tightly packed about it so that only an upper portion of the crown is typically exposed above the ground. finally, the lifting eyes are removed and the railroad signal or traffic control mast is mounted atop the crown. in a preferred embodiment, the foundation has a height of approximately 5 1/2 feet and a weight of approximately 1,600 pounds. each spider block has a height of one foot and a weight of 120 pounds. the base 12 of the foundation measures 30 inches square while the crown has a width of 26 inches. therefore, the hole in the ground should measure at least somewhat larger than 30 inches square. the foundation may be made taller or shorter merely by adding or removing one or more tiers of spider blocks from the pillar. it thus is seen that a new railroad signal and traffic control foundation and assembly, and a new method of producing, transporting and erecting such, is now provided that overcomes problems long associated with those of the prior art. it should be understood however that many modifications, additions and deletions may be made thereto without departure from the spirit and scope of the invention as set forth in the following claims.
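The dimensions given in the preferred embodiment imply a simple linear relationship between tier count, height, and weight: four tiers give a 5 1/2-foot, 1,600-pound foundation, and each one-foot tier contributes two 120-pound spider blocks. The sketch below works that arithmetic out; the base-plus-crown figures are derived from those numbers rather than stated directly in the text.

```python
# Sketch of the tier-based sizing arithmetic described above.
# Stated: 4 tiers -> 5 1/2 ft and 1,600 lb; each 1-ft spider block weighs
# 120 lb; two blocks per tier. Base+crown contributions are back-computed.

BLOCK_HEIGHT_FT = 1.0
BLOCK_WEIGHT_LB = 120
BLOCKS_PER_TIER = 2
BASE_AND_CROWN_HEIGHT_FT = 5.5 - 4 * BLOCK_HEIGHT_FT                     # 1.5 ft
BASE_AND_CROWN_WEIGHT_LB = 1600 - 4 * BLOCKS_PER_TIER * BLOCK_WEIGHT_LB  # 640 lb

def foundation_size(tiers: int) -> tuple[float, int]:
    """Height (ft) and weight (lb) for a foundation with the given tier count."""
    height = BASE_AND_CROWN_HEIGHT_FT + tiers * BLOCK_HEIGHT_FT
    weight = BASE_AND_CROWN_WEIGHT_LB + tiers * BLOCKS_PER_TIER * BLOCK_WEIGHT_LB
    return height, weight

print(foundation_size(4))  # the preferred embodiment: (5.5, 1600)
print(foundation_size(5))  # one extra tier: (6.5, 1840)
```

Adding or removing a tier thus shifts the foundation by one foot and 240 pounds, which is the flexibility the passage above attributes to the spider-block construction.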
193-234-907-009-275
EP
[ "EP", "AU", "WO" ]
H02J13/00,H04B3/54
1999-05-03T00:00:00
1999
[ "H02", "H04" ]
a data communication device
a device for data communication over a power supply line (1), said device comprising a first communication unit (2) and a set of second communication units (5-1,5-2,...), each of said units having a first connection member (7) provided to be connected to an electrical power supply line, said first and second communication units being provided with a message generator for generating a message and with modulating means for modulating and demodulating according to a powerline data transmission modulation, each of said second units have a second connection member provided to be connected to a load (6) powered by said power supply line, each of said second units further comprise a sensor provided for determining a power value by measuring an electrical power consumed by said connected load, said message generator being provided for inserting said power value into said message and for inserting an address assigned to said first unit into said message.
a data communication device comprising a first communication unit and a set of second communication units, each of said units having a first connection member provided to be connected to an electrical power supply line, said first and second communication units being provided with a message generator for generating a message and with modulating means for forming a message modulated according to a powerline data transmission modulation, said first and second communication units being further provided for transmitting said modulated message via said power supply line, said modulating means being further provided for demodulating a received modulated message, characterised in that each of said second units have a second connection member provided to be connected to a load which load is provided to be powered by said power supply line, each of said second units further comprise a sensor provided for determining a power value by measuring an electrical power consumed by said connected load, said second units being further provided with a memory for storing, at least temporarily, said power value, said message generator being connected to said memory and provided for retrieving said power value and for inserting said power value into said message, said message generator being further provided for inserting an address assigned to said first unit into said message. a data communication device as claimed in claim 1, characterised in that said first and second connection member of said second units are connected with each other within said second unit. a data communication device as claimed in claim 1 or 2, characterised in that said first communication unit comprises an i/o interface, connectable to an external i/o interface of a data processing unit. 
a data communication device as claimed in any one of the claims 1 to 3, characterised in that said second unit comprises a relay which is provided with a first control input for receiving a first instruction signal, said message generator being provided for generating said first instruction signal upon receipt of a first message generated by said first communication unit. a data communication device as claimed in any one of the claims 1 to 4, characterised in that said second unit comprises a triac control which is provided with a second control input for receiving a second instruction signal, said message generator being provided for generating said second instruction signal upon receipt of a second message generated by said first communication unit. a data communication device as claimed in any one of the claims 1 to 5, characterised in that said second unit comprises a control member which is provided with a third control input for receiving a third instruction signal, said message generator being provided for generating said third instruction signal upon receipt of a third message generated by said first communication unit. a data communication device as claimed in any one of the claims 1 to 6, characterised in that each message generator of each second communication unit is provided with an addressing element provided for assigning an address to its second communication unit. a data communication device as claimed in claim 7, characterised in that said addressing element comprises an address generator provided for generating addresses and supplying them to said message generator which is provided for inserting a received address into a generated message.
the present invention relates to a data communication device, comprising a first communication unit and a set of second communication units, each of said units having a first connection member provided to be connected to an electrical power supply line, said first and second communication units being provided with a message generator for generating a message and with modulating means for forming a message modulated according to a power line data transmission modulation, said first and second communication units being further provided for transmitting said modulated message via said power supply line, said modulating means being further provided for demodulating a received modulated message. such a data communication device is known and used for example for monitoring electrical power consumption. generally the first communication unit is installed at the authority delivering the electrical power and the second units are installed in the consumer's current meter. the measured consumed electrical power value is then inserted into the message generated by the message generator of the second unit. since that message has to be transmitted over a noisy medium, such as the mains, it is necessary to modulate the message according to a powerline data transmission modulation in order to obtain a reliable data transmission. the known device generally operates according to a master-slave relationship, whereby the first communication unit is the master and the second units are the slaves. a drawback of the known devices is that they do not enable a power consumption management of individual loads connected to the power supply line, nor an individual control of the loads. the management and control of individual loads within a same local environment, such as a building or a vehicle, needs an additional communication medium where only digital information and no power is present.
it is an object of the present invention to realise a data communication device enabling a control and a power measurement of individual loads within a same local environment while still using the electrical power supply line as a medium for digital information communication. a data communication device according to the present invention is therefore characterised in that each of said second units have a second connection member, provided to be connected to a load, which load is provided to be powered by said power supply line, each of said second units further comprise a sensor, provided for determining a power value by measuring an electrical power consumed by said connected load, said second units being further provided with a memory for storing, at least temporarily, said power value, said message generator being connected to said memory and provided for retrieving said power value and for inserting said power value into said message, said message generator being further provided for inserting an address assigned to said first unit into said message. the fact that the second units comprise a second connection member enables to assign to each individual electrical load a dedicated second communication unit. the sensor, which belongs to each second unit, enables to determine the amount of power consumed by the load to which the second unit is assigned. the availability of a memory and the possibility to insert an address into the message, which comprises the measured value, provides to the second unit the possibility to generate its own messages and to send them on their own initiative to the first unit. control of the individual loads is then possible. a first preferred embodiment of a data communication device according to the invention is characterised in that said first and second connection member of said second units are connected with each other within said second unit. 
in such a manner the power circulates through the second unit towards the load and the second unit can be switched between the mains' local connection point and the electrical connection of the load. a second preferred embodiment of a data communication device according to the invention is characterised in that said second unit comprises a relay, which is provided with a first control input for receiving a first instruction signal, said message generator being provided for generating said first instruction signal upon receipt of a first message generated by said first communication unit. the relay enables to manage individually the power consumption of the load in function of the consumed power. preferably, each message generator of each second communication unit is provided with an addressing element provided for assigning an address to its second communication unit. this offers a convenient way for addressing each of the second communication units. the invention will now be described in more details with reference to the accompanying drawings illustrating an embodiment of a data communication device according to the present invention. in the drawings: figure 1 and 2 illustrate two different set-ups of a communication device according to the present invention connected to the mains; and figure 3 illustrates a second communication unit. in the drawings a same reference has been assigned to a same or an analogous element. in the embodiments illustrated in figure 1 and 2, a first communication unit 2 and two second communication units 5-1 and 5-2 are connected to an electrical power supply line 1. in the illustrated example only two second communication units are shown, but it will be clear that more than two units or only a single second communication unit can belong to the set of second communication units. 
the electrical power supply line 1 can be formed by the mains, an electrical cable bundle in a vehicle furnishing the electrical power to the different electric components, or a power line output of an electrical power generator. each of the communication units 2, 5-1 and 5-2 is provided with a first connection member 7 for connection with the power supply line 1. the loads 6-1 and 6-2 are connected to a second connection member 9 of their respective second unit 5-1, 5-2 in order to be supplied with the electric power available on the supply line 1. according to another embodiment, the loads could be directly connected to the mains 1 and the second connection member of the second unit would then be connected to a current sensing coil, branched on the supply line part to which the load is connected. the electrical loads 6-1 or 6-2 can be formed by any electrical load such as, for example, a lamp, a heater, a motor, a computer, etc. in the embodiment illustrated in figure 1, the first communication unit 2 comprises an i/o interface 10 connected via a communication bus 4 to an i/o interface 11 of a data processing unit 3, for example a personal computer. the i/o interfaces are for example formed by standard rs 232 interfaces. in the embodiment illustrated in figure 2, however, a load 8, such as for example an alarm generator, operating with power supplied by the mains, is connected to an output of the first communication unit 2. it is also possible to integrate the first and the second communication unit in a single device. figure 3 illustrates schematically the construction of a second communication unit 5 as part of a data communication device according to the invention. the electrical power, supplied by the power supply line 1, is fed to a coupling circuit 15 and to a power station 17, for example formed by a transformer. the power station 17 is galvanically separated from the supply line, for example by means of an inductive coupling.
the power station 17 is provided for powering a microcontroller 20 and a memory 21 with an operating power of, for example, 5 v. the coupling circuit 15 is connected to a powerline communication unit 16. the latter also has a data communication gate connected to the microcontroller 20. the memory is for example formed by a ram or an eeprom. an operating member 18, comprising a sensor provided for determining a power value by measuring the electrical power input at the i/o interface 7, is also connected to the power supply line. that operating member 18 is also provided with a data communication gate connected to the microcontroller 20. a load control unit 19 is further connected to the mains and also comprises a data communication gate connected to the microcontroller 20. the load 6 is connectable to the load control unit 19. in the case that the load is directly connected to the mains, the second unit has an additional input 22, indicated with dotted lines, so that the connection between the load control unit 19 and the load can be omitted, as well as the load control unit 19 itself. the coupling circuit 15 is a passive circuit that is provided for galvanically separating the supply line 1 from the second unit. the communication unit 16 is provided with means for modulating and demodulating data, according to a powerline data transmission modulation, to be transmitted or received over the power supply line. the modulating and demodulating means are for example provided for applying the modulation and demodulation according to a spread spectrum modulation or according to a combination of a frequency shift keying (fsk) and a phase shift keying (psk) modulation. the communication unit is also provided for adding check bits to a message issued by the microcontroller.
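the frequency shift keying mentioned above can be sketched as follows. this is a toy illustration only: the carrier frequencies, bit rate and sample rate are invented for the example, since the text leaves the modulation parameters open.

```python
import math

def fsk_modulate(bits, f0=60_000.0, f1=66_600.0, bit_rate=1200, sample_rate=480_000):
    """toy binary fsk: each bit selects one of two carrier frequencies.
    the numeric parameters are illustrative, not taken from the patent."""
    samples = []
    per_bit = sample_rate // bit_rate
    phase = 0.0  # continuous phase across bit boundaries avoids discontinuities
    for bit in bits:
        f = f1 if bit else f0
        for _ in range(per_bit):
            phase += 2 * math.pi * f / sample_rate
            samples.append(math.sin(phase))
    return samples
```

a real powerline modem would combine this with psk or spread spectrum and couple the signal onto the mains through the passive coupling circuit 15.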
a physical layer operative within the communication unit 16 comprises these means for modulating and demodulating and is also in communication with a datalink layer provided for acknowledging, or not, received messages. the load control unit 19, optionally and dependent on the functions to be performed, also comprises for example a relay, a switch, a triac control and/or a control member. suppose now that the load 6 is for example formed by a lamp or a heating device, connected to the second connection member 9 of the second communication unit 5. the electrical current supplied to the load is measured by the sensor which belongs to the operating member 18. the sensor is for example formed by an ampere meter switched within the power line or inductively coupled to the power line. the sensor is provided either to measure the current or the consumed power continuously, or to measure by sampling at the request of the microcontroller 20. preferably the current measurement is continuous, which makes it possible to permanently monitor the current flowing towards the load and to detect abnormal current peaks, for example caused by a lightning impact. the value of the measured current or power is fetched under control of the application layer of the microcontroller 20 and temporarily stored in the memory 21. that value is fetched either by sampling at predetermined times or on request of the first communication unit. that value is then inserted into a message generated by a message generator which belongs to the microcontroller 20 and the communication unit 16. the generated message is supplied to the communication unit 16 in order to be modulated according to the selected powerline data transmission modulation and to be transmitted via the power supply line to the first communication unit 2. if the first and second communication unit form a single unit, the transmission is realised inside the unit.
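the sampling and peak-detection behaviour described above can be sketched as follows. the class, field and threshold names are hypothetical; the patent only specifies that the sensor measures (continuously or on request) and that the value is stored, at least temporarily, in the unit's memory 21.

```python
class PowerMonitor:
    """sketch of the second unit's current monitoring (hypothetical api)."""

    def __init__(self, sensor, memory, peak_threshold_amps=10.0):
        self.sensor = sensor            # callable, e.g. an ampere meter in the line
        self.memory = memory            # dict standing in for the unit's memory 21
        self.peak_threshold = peak_threshold_amps

    def sample(self):
        """one measurement cycle: store the value, flag abnormal peaks."""
        amps = self.sensor()            # current flowing towards the load
        self.memory["last_current"] = amps
        # an abnormal peak (e.g. caused by a lightning impact) is flagged
        # so the application layer of the microcontroller can react
        self.memory["peak_detected"] = amps > self.peak_threshold
        return amps
```

in use, the microcontroller's application layer would call `sample()` at predetermined times or on request of the first communication unit, then build a message from `memory["last_current"]`.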
upon receipt of such a message by the first communication unit 2, the latter demodulates the message by means of its own communication unit 16. the demodulated message is then read and processed by the microcontroller 20 of the first unit. depending on the contents of the received message, the microcontroller either generates another message to be sent to the second unit, or a signal to activate the load connected to the first unit, or stores data in its own memory. the production of a message will now be described in more detail. suppose the first communication unit 2 requests the status of the second communication unit 5-i. the message generator of the first unit will then produce a status message with the node address of the second unit 5-i. that message will for example comprise the following fields:
- source node address, for example 32 bits, indicating the node address of the first unit;
- source unit address, for example 8 bits, indicating the address of the element inside the unit requesting the information, in this case the microcontroller;
- destination node address, for example 32 bits, indicating the node address of the second unit 5-i;
- destination unit address, for example 8 bits, indicating in this case the microcontroller of the second unit 5-i;
- a message code indicating the kind of message, in this example the status;
- a data field;
- a checksum for verification purposes.
after production, the message is modulated and transmitted via the power supply line 1. only second unit 5-i will accept the message, as only this unit will recognise its destination node address. the other units will, after demodulating and checking the destination node address, ignore the message since it was not addressed to them. upon receipt of the message, the microcontroller of the second unit 5-i will decode the message code and the data field and recognise that the status is requested.
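a minimal sketch of how that field layout could be serialised, using the "for example" sizes from the text (32-bit node addresses, 8-bit unit addresses and message code). the 1-byte additive checksum is purely illustrative and is not the patent's actual check-bit scheme.

```python
import struct

def build_message(src_node, src_unit, dst_node, dst_unit, code, data: bytes):
    """pack the example fields: source node/unit address, destination
    node/unit address, message code, data field, then a toy checksum."""
    header = struct.pack(">IBIBB", src_node, src_unit, dst_node, dst_unit, code)
    body = header + data
    checksum = sum(body) & 0xFF      # illustrative only
    return body + bytes([checksum])

def accept(message: bytes, my_node: int) -> bool:
    """a unit ignores messages whose destination node address is not its own."""
    # destination node address sits after the 4-byte source node address
    # and the 1-byte source unit address
    (dst_node,) = struct.unpack_from(">I", message, 5)
    return dst_node == my_node
```

this mirrors the filtering described above: after demodulation, every second unit checks the destination node address and only the addressed unit processes the message.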
moreover the microcontroller will read the contents of memory 21 in order to fetch the indicated power consumption value. the microcontroller will, upon receipt of that value, either enter that value in the data field of the message to be generated or perform a check operation in order to verify whether this value is to be considered a normal value. in the latter case a message "normal" or "not normal" will be generated and introduced into the data field of the message to be produced. the message generator will then prepare a message with fields analogous to the one sent by the first unit. of course the destination address will now be the one of the first unit. when the message is formed, it is modulated and sent to the first unit via supply line 1. the second units are also provided to generate messages on their own initiative, independently from the first unit. there is thus no master-slave relation between the first and second units. suppose now the second unit comprises, in addition to its sensor, also a relay for controlling the power supplied to the load. suppose further that the sensor has unit address u01 and the relay has unit address u00. the microcontroller regularly checks the value of the consumed power measured by the sensor. the microcontroller can be programmed to send that value at predetermined time intervals to the first unit, or only when the measured value exceeds a predetermined threshold value. once the microcontroller has decided to send a message to the first unit, the message generator will be activated for generating a message in order to inform the first unit.
the message generator will now for example form the following message:
- source node address: sna = a00010002 (address of the second unit)
- source unit address: sua = u01
- destination node address: dna = a00010001 (first unit)
- destination source address: dsa = uff (microcontroller first unit)
- message code: mc = status
- data field: df = value of consumed power
- check bits
this message will now lead to the formation of the following binding in the node of the second unit: ra00010001uff = u01s, where r stands for a remote address. the first unit will receive this message and react as a function of its programming. suppose for example that the indicated value leads to a switch-over of the relay in the second unit which issued the message. the message generator will then generate the following message:
- sna = a00010001
- sua = uff
- dna = a00010002
- dsa = u00
- mc = e01; event
- df = fa3d0107; switch over
- check bits
the binding in the node will be ga00010001u01e01 = u00fa3d0107s. this message will then be received by the second unit. the microcontroller will generate a first instruction signal upon receipt of this message generated by the first communication unit. this first instruction signal will then be supplied to a first control input of the relay, which will then switch over. in an analogous manner a dimmer or a control member could be controlled. the dimmer and the control member have respective second and third control inputs for receiving second and third instruction signals generated by the microcontroller of the second unit.
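the exchange above (second unit reports consumed power, first unit decides, relay switches over) can be sketched as follows. the threshold values and the decision rule are hypothetical: the patent deliberately leaves the first unit's programming open.

```python
def second_unit_report(consumed_power, memory):
    """second unit: store the measurement and decide whether to report.
    reporting only above a threshold is one of the two programmed options
    the text mentions (the other being fixed time intervals)."""
    memory["last_power"] = consumed_power
    threshold = memory.get("report_threshold", 1000.0)  # hypothetical, watts
    if consumed_power > threshold:
        return {"mc": "status", "df": consumed_power}   # message to first unit
    return None

def first_unit_react(message):
    """first unit: if the reported value is judged abnormal, answer with
    an event message instructing the relay (unit address u00) to switch over."""
    if message and message["df"] > 1500.0:              # hypothetical rule
        return {"dsa": "u00", "mc": "e01", "df": "switch over"}
    return None
```

on receipt of the event message, the second unit's microcontroller would raise the first instruction signal on the relay's control input, as described above.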
193-868-732-366-135
US
[ "US" ]
B26D3/28,B29C65/00
1986-05-12T00:00:00
1986
[ "B26", "B29" ]
splicing method for tire sheet material
an overlapping layer (24) of a first body of deformable material is pressed by an edge (78) of each of a plurality of blade members (42) into an overlapped layer (26) of a second body of resilient material forming slots (74) in the second body with ribs (88) of the first body displaced therein. the blade members (42) are removed while the ribs (88) are retained in the slots (74) and gripped by the surrounding resilient material. the blade members (42) may be adjustably clamped in a cartridge (40) which is removable from the splicing apparatus (10) for adjustment and replacement of the blade members (42) so that they conform to the surface profile (82) and splice interface line (80) of the bodies to be spliced.
1. a splicing method for attaching an overlapping upper layer of a first body of deformable material to an overlapped lower layer of a second body of resilient material by a plurality of spaced-apart blade members comprising: (a) pressing an edge of each of said blade members against said upper layer of said first body and into said lower layer of said second body to form a slot in said second body and form a rib of said first body displaced into said slot; (b) retracting said edge of each of said blade members from said rib and said slot; and (c) holding said first body against said second body during retraction of said edge of each of said blade members from said rib and said slot so that said rib is gripped in said slot by said resilient material of said second body surrounding said rib. 2. the splicing method of claim 1 wherein said edge of each of said blade members is elongated and said upper layer of said first body terminates in a splice interface line over said lower layer of said second body further comprising: (a) pressing said edge of each of said blade members against said upper layer of said first body with said edge extending across said splice interface line whereby at least a portion of said first body is engaged by said edge for attaching said first body to said second body. 3. the splicing method of claim 2 wherein said first body is of resilient material having substantially the same thickness as said second body further comprising: (a) cutting said overlapping upper layer of said first body and said overlapped lower layer of said second body at an angle to the surface of said first body and said second body to provide sloping mating splice ends; (b) joining said splice ends at said splice interface line and pressing said edge of each of said blade members against said first body into the sloping mating splice end of said second body to form said slot and displace said rib of the sloping mating splice end of said first body into said slots. 4. 
the splicing method of claim 3 wherein said overlapping upper layer of said first body and said overlapped layer of said second body have a variable thickness and a predetermined surface profile along said splice interface line whereby said splice interface line has an angular configuration with portions extending in different directions at the surface of said first body depending on the thickness of said overlapping layer of said first body and said overlapped layer of said second body further comprising: (a) selecting each of said blade members to conform with the angular configuration of said splice interface line; and (b) setting the position of said edge of each of said blade members across said splice interface line to conform with said surface profile for simultaneous pressing of said blade members against said overlapping layer of said first body and simultaneous removal of said edge of each of said blade members from said rib and said slot. 5. the splicing method of claim 4 wherein said splice interface line is straight in a central portion of substantially constant thickness further comprising: (a) holding said first body against said second body at said central portion of substantially constant thickness. 6. the splicing method of claim 1 further comprising holding said first body between at least some of said blade members to prevent pulling of said rib out of said slot. 7.
the splicing method of claim 3 wherein said overlapping upper layer of said first body is one end of a tire tread cut to length from a tread stock member and said overlapped lower layer of said second body is the other end of said tire tread further comprising: (a) wrapping said tread stock around a tire casing built on a cylindrical tire building drum with the sloping mating splice end of said first body in overlapping mating engagement with the sloping mating splice end of said second body; (b) pressing said edge of each of said blade members in a radial direction against said overlapping upper layer of said first body relative to said drum to form said slot in said second body and displace said rib of said first body into said slot; (c) retracting said edge of each of said blade members in a radial direction; and (d) holding said one end of said tire tread against said other end of said tire tread to prevent retraction of said rib and said slot. 8. the splicing method of claim 4 wherein setting the position of said edge of each of said blade members to conform with the profile of said first body and said second body at said splice interface line includes the steps of: (a) placing a third body having the same profile and splice interface line as said first body and said second body on a supporting surface; (b) positioning said blade members in a supporting blade member cartridge over said third body and urging each of said blade members into a position in contact with said surface of said third body; and (c) clamping each of said blade members in said blade member cartridge in said position in contact with said surface of said third body.
this invention relates generally, as indicated, to a method and apparatus for splicing low rolling resistance polymer compounds, or any compounds for which there is a low "tack" coefficient in the uncured state or where the use of an adhesive is prohibitive. this method and apparatus is especially desirable for splicing of tire treads. heretofore, tire tread compounds have had a high "tack" coefficient in the uncured state or an adhesive has been used so that the traditional tread splice stitchers, which press the tread ends together, have been used. with the method and apparatus of this invention, there is provided a mechanical bond of the tread splice that does not rely on the "tackiness" of the tread material. the resilience of the tread material is utilized to grip ribs of a top layer displaced into slots of a lower layer formed by pressing edges of blade members against portions of the top layer into portions of the bottom layer and then holding the top layer against the bottom layer while the blade members are removed. with this construction, the blade members may be adjustably clamped in a cartridge which is removable for adjusting and replacing the blade members so that they conform to the surface profile and splice interface line of the upper and lower layers. 
in accordance with one aspect of the invention there is provided a splicing method for attaching an overlapping upper layer of a first body of deformable material to an overlapped lower layer of a second body of resilient material by a plurality of spaced-apart blade members comprising: (a) pressing an edge of each of the blade members against said upper layer of the first body and into the lower layer of the second body to form a slot in the second body and form a rib of the first body displaced into the slot; (b) retracting the edge of each of the blade members from the rib and the slot; and (c) holding the first body against the second body during retraction of the edge of each of the blade members from the rib and the slot so that the rib is gripped in the slot by the resilient material of the second body surrounding the rib. in accordance with another aspect of the invention there is provided splicing apparatus for attaching an overlapping upper layer of a first body of deformable material and an overlapped lower layer of a second body of resilient material comprising: (a) support means for supporting the overlapped lower layer of the second body; (b) blade means at a spaced-apart position from the support means adjacent the overlapping upper layer of the first body; (c) means for moving the blade means toward the support means to press an edge of each of the blade members through the upper layer of the first body and into the lower layer of the second body to form a slot in the second body and displace a rib of the first body into the slot; (d) means for moving the blade means away from the support means to retract the blade members from the rib and the slot; and (e) means for holding the upper layer of the first body against the lower layer of the second body on the support means while retracting the blade members from the rib and the slot so that the rib is gripped in the slot by the resilient material of the second body.
to the accomplishment of the foregoing and related ends, the invention, then, comprises the features hereinafter fully described and particularly pointed out in the claims, the following description and the annexed drawings setting forth in detail a certain illustrative embodiment of the invention, this being indicative, however, of but one of the various ways in which the principles of the invention may be employed. in the drawings: fig. 1 is an end view of a tread splice stitcher apparatus embodying the invention taken along line 1--1 in fig. 2, shown in position for splicing a tire tread on a tire casing supported on a tire building drum with parts being sectioned and broken away. fig. 2 is a front elevation taken along the line 2--2 in fig. 1, with parts being broken away. fig. 3 is a plan view taken along the line 3--3 in fig. 2 with parts being broken away. fig. 4 is a fragmentary enlarged section view taken along the line 4--4 in fig. 3 but showing one of the blades in the splicing position after forming a slot in the lower overlapped body and displacing a rib of the upper overlapping body into the slot. fig. 5 is a view like fig. 4 showing the rib in the slot gripped by the resilient material of the lower overlapped body after retraction of the blade. fig. 6 is a front elevation of the blade cartridge after removal from the stitcher head positioned over a setup table supporting a spliced portion of a tread for adjusting and replacing the blade members. fig. 7 is a sectional view taken along the line 7--7 in fig. 6 showing one of the blade members in the adjusted and replaced condition. fig. 8 is a view like fig. 7 taken along the line 8--8 in fig. 6 showing another one of the blade members. referring to figs. 1, 2 and 3, a splicing apparatus such as tread splice stitcher 10 is shown in operating position adjacent a cylindrical tire building drum 12 on which a tire casing 14 having beads 16 and plies 18 of reinforcing fabric has been built. 
a layer of tread stock 20 is wrapped around the tire casing 14 on the drum 12 and has a spliced portion 22 where an upper layer 24 of one end of the tread stock 20 is in overlapping relationship with a lower layer 26 at the other end of the tread stock. in this embodiment the stock 20 has been wound on a spool and is fed to a conveyor where it is cut to length by a tread cutter providing cut ends with surfaces at an angle other than 90 degrees to the surface of the tread stock so that the ends will be overlapping at the spliced portion 22 as shown in figs. 1 and 3. the tread stock 20 of this embodiment is a low rolling resistance polymer compound which has a low "tack" coefficient in the uncured state. this material is deformable and resilient but does not have enough tack to stick the upper layer 24 against the lower layer 26 even under pressure. the tread splice stitcher 10 has a mounting plate 28 which may be connected to a rod 30 connected to a piston of a piston and cylinder assembly or other actuating means for moving the mounting plate toward and away from the drum 12. end plates 32 and 34 are fastened to the mounting plate 28 at each end and connected by side plates 36 and 38 defining a splicing blade member cartridge 40 for containing a plurality of blade members 42 as shown in figs. 7 and 8. each of the blade members 42 has a splicing blade 44 mounted in a blade holder 46. each blade holder 46 has a slot 48 through which a clamping bolt 50 extends. the clamping bolt 50 is located in holes in the end plates 32 and 34 and has a nut 52 threaded on one end for clamping the blade members 42 in position between the end plates. the end plates 32 and 34 may be fastened to the mounting plate 28 by screws 54 so that the splicing blade member cartridge 40 may be removed from the mounting plate for adjustment and replacement of the blade members 42. 
a holding means such as retaining plate 56 is located under the splicing blade member cartridge 40 and is fastened to side members 58 and 60 by bolts 62 and nuts 64 threaded on the bolts. the side members 58 and 60 extend from the retaining plate 56 between the side plates 36 and 38 of the splicing blade member cartridge 40 to flanges 66 and 68 spaced from the mounting plate 28. resilient means such as compression springs 70 are interposed between the flanges 66, 68 and the mounting plate 28 for urging the retaining plate 56 against the upper layer 24 of the spliced portion 22. bolts 72 extend through holes in the mounting plate 28 and flanges 66 and 68 as well as through the spring 70 interposed between the flanges and mounting plate for supporting the side members 58 and 60 and the retaining plate 56 while at the same time permitting sliding movement of the flanges on the bolts. nuts 73 are threaded on the bolts 72 for supporting the side members 58 and 60 and the retaining plate 56. other resilient means such as a piston and cylinder may be used instead of the spring 70, bolts 72 and nuts 73, if desired. referring to figs. 2 and 3, the retaining plate 56 has slots 74 through which at least some of the splicing blades 44 extend. the retaining plate 56 may have a length the same as the length of the splicing blade member cartridge 40; however, in this embodiment, the tread stock 20 has a profile such that the outer portions are raised and accordingly the retaining plate only has a length equal to the length of a flat portion 76 at the center of the tread stock member 20. as shown in figs. 1, 2 and 3, the blade holders 46 are clamped between the end plates 32 and 34 so that the ends of the blades 44 touch the tread stock 20 and conform to the profile of the tread stock. also, as shown in fig. 3, the splicing blades 44 are positioned so that an edge 78 of each of the splicing blades 44 extends over a splice interface line 80 at a surface 82 of the tread stock 20. 
the edge 78 of the blade 44 is preferably curved at the sides 79 to prevent cutting of the tread stock 20 and reduce the force necessary to penetrate the tread stock. with the apparatus of this invention, the blade holders 46 can be adjusted and replaced to conform with the profile of the surface 82 and the splice interface line 80 for a particular tread stock 20 by removing the splicing blade member cartridge 40. this is accomplished by first retracting the tread splice stitcher 10 away from the drum 12 by moving the rod 30 with the actuating means (not shown). the retaining plate 56 is then removed by turning the nuts 64 to remove them from the bolts 62 which may then be pulled out of the holes in the retaining plate and side members 58 and 60. then by removing the screws 54, the splicing blade member cartridge 40 may be pulled away from the side members 58 and 60 and moved to a position over a setup table 84, as shown in fig. 6, and over a spliced portion 22' of a tread stock 20' in which the surface 82' has the same profile as the tread stock 20 to be spliced. the splice interface line 80', as shown in figs. 7 and 8, will also be the same as the splice interface line 80 for the tread stock 20 to be stitched. with the splicing blade member cartridge 40 in the same relative position over the tread stock 20' as it is when mounted on the mounting plate 28, the blade holders 46 are selected so that the splicing blades 44 are in overlapping relationship with the splice interface line 80' as shown in figs. 7 and 8. the blade holders 46 may have means for adjusting the position of the splicing blades 44 or the blade holders may be made with the blades at different positions as shown in figs. 7 and 8. 
after the blade holders 46 have been selected for the position of the blades 44 relative to the splice interface line 80', the bolt 50 is inserted through the slots 48 and the blades 44 allowed to rest on the surface 82' of the spliced portion 22', at which time the nut 52 is threaded on the bolt 50 and the holders clamped between the end plates 32 and 34. the splicing blade member cartridge 40 may then be placed over the side members 58 and 60 and the screws 54 threaded in the end plates 32 and 34 to hold the cartridge on the mounting plate 28. the retaining plate 56 may then be placed over the side members 58 and 60, the bolts 62 inserted and the nuts 64 threaded on the bolts. in operation, the tread stock 20 is wound on a spool and fed to a conveyor where it is cut to length by a tread cutter. the tread cutter cuts the ends at an angle to the surface 82. the tread stock 20 is then wrapped around the tire casing 14 and the ends spliced at the spliced portion 22 providing the first body or upper layer 24 overlapping the second body or lower layer 26. the drum 12 is rotated to a position under the tread splice stitcher 10 with the splice interface line 80 under the blades 44 so that when the blades contact the surface 82 at least some of the material of the upper layer 24 will be under the blades. after the tread splice stitcher 10 is in the splicing position, the piston and cylinder assembly moves the rod 30 and the mounting plate 28 toward the drum 12, pressing the edge 78 of each of the blades 44 in a radial direction against the upper layer 24 to form a slot 86 in the lower layer 26, as shown in fig. 4, and displace a rib 88 of the upper layer into the slot. the tread splice stitcher 10 is then retracted by moving the rod 30 radially away from the drum 12, which retracts the edge 78 of each of the blade members 42.
during the retraction of the blade members 42, the retaining plate 56 is pressed against the surface 82 of the tread stock 20 by the springs 70 so that the ribs 88 remain in the slots 86 and are gripped in the slots by the resilient material of the lower layer 26. as shown in figs. 4 and 5, the upper layer 24 is of a deformable material with sufficient strength to withstand the pressure of the edge 78 of the blade 44 without severing the material. as shown in figs. 4, 5, 7 and 8 the edge 78 of the blade 44 is curved and has a blunt surface so that the upper layer 24 may be deformed without cutting. in this process the retaining plate 56 extends along the flat portion 76 of the tread stock surface 82 and it has been found that this is sufficient to hold the upper layer 24 against the lower layer 26 during retraction of the blades 44. it is evident, however, that the retaining plate 56 may be longer and have a configuration conforming to the surface of the raised portions of the surface 82 of the tread stock 20, if desired. with this apparatus the blades 44 penetrate the lower layer 26 approximately 1/16 to 1/8 in. (0.159 to 0.318 cm.) and a satisfactory mechanical bond is provided without the necessity of using an adhesive. the blades 44 are spaced apart about 1/4 in. (0.635 cm); however, the spacing may be increased or decreased depending upon the material of the tread stock 20 and the power available for pressing the blades against the tread stock surface 82. it is understood that the length of the blade edge 78 may be less than that shown and the blade edge may be curved at the sides 79 only or completely across the blade, as shown in figs. 7 and 8, depending upon the material to be spliced and the power available for penetrating the material of the tread stock 20, displacing the ribs 88 and forming the slots 86.
it can be seen that with an elongated edge 78 of the blade 44, the accuracy of alignment of the tread splice stitcher 10 with the splice interface line 80 need not be as precise because only a portion of the edge of the blade need be in pressing engagement with the upper layer 24 to provide the necessary mechanical bond. while a certain representative embodiment and details have been shown for the purpose of illustrating the invention, it will be apparent to those skilled in the art that various other changes and modifications may be made therein without departing from the scope of the invention.
193-973-286-563-95X
KR
[ "US", "EP", "CN", "KR" ]
G06F3/033,G06F3/00,G06F3/01,G06F3/048,G06F3/0482,G06F3/0488,G06F3/14,H04M1/72454,G06F3/0484,H04Q7/32,H04W88/02,G06F3/041,G06F9/00
2007-04-20T00:00:00
2007
[ "G06", "H04" ]
editing of data using mobile communication terminal
an electronic device, a mobile communication terminal, and a method and computer program product for editing data are disclosed. the method of editing data of an electronic device displays an item list comprising at least one item, detects first and second touch inputs, and detects a change in a distance between first and second touch input points. the method executes a predetermined function related to the item list according to the detected distance change.
1. a method of editing data of an electronic device having a memory, the method comprising: retrieving, by the electronic device, an item list from the memory, the item list comprising a plurality of items; displaying, by the electronic device, the item list on a touch screen of the electronic device; detecting, by the electronic device, two touch inputs at two touch input points, respectively, wherein at least one of the plurality of items is interposed in an area between the two touch input points; detecting, by the electronic device, a difference between a first distance separating the two touch input points at a first time and a second distance separating the two touch input points at a second time; and as a result of the detected difference between the first and second distances separating the two touch input points at the first and second times, respectively: storing in the memory at least one new item to the item list in the area between the first and second touch input points, or deleting from the memory at least one of the plurality of items that is interposed in the area between the first and second touch input points, wherein the at least one new item is stored in the memory only as a result of the detected difference between the first and second distances corresponding to the first distance being smaller than the second distance, and wherein the at least one item is deleted from the memory only as a result of the detected difference between the first and second distances corresponding to the first distance being greater than the second distance. 2. the method of claim 1 , wherein the item list comprises a phonebook list containing a name and at least one contact information as one of items therein. 3. the method of claim 1 , wherein the displaying of an item list comprises arranging and displaying the plurality of items in a matrix form or a sequential index form on the touch screen. 4. 
the method of claim 1 , further comprising: displaying one of a save and an over-write prompt. 5. an electronic device, comprising: a memory; a touch screen; and a controller configured to: retrieve an item list from the memory, the item list comprising a plurality of items, display the item list on the touch screen, detect two touch inputs at two touch input points, respectively, detect a difference between a first distance separating the two touch input points at a first time and a second distance separating the two touch input points at a second time, wherein at least one of the plurality of items is interposed in an area between the two touch input points, and as a result of the detected difference between the first and second distances separating the two touch input points at the first and second times, respectively: store in the memory at least one new item to the item list in the area between the first and second touch input points, or delete from the memory at least one of the plurality of items that is interposed in the area between the first and second touch input points, wherein the at least one new item is stored in the memory only as a result of the detected difference between the first and second distances corresponding to the first distance being smaller than the second distance, and wherein the at least one item is deleted from the memory only as a result of the detected difference between the first and second distances corresponding to the first distance being greater than the second distance. 6. the device of claim 5 , wherein the item list comprises a phonebook information list and the predetermined function relates to editing of the phonebook information list. 7. the device of claim 5 , wherein said controller is configured to cause the display to display one of a save and an over-write prompt.
cross reference to related applications this application is related to and claims priority under 35 u.s.c. §119(a) to patent application no. 10-2007-0039000 filed in republic of korea on apr. 20, 2007, the entire contents of which are hereby incorporated by reference. background 1. field this document relates to editing of data using a mobile communication terminal. 2. related art a touch screen, which is used as a user interface for an electronic device, functions as an input and output device. thus, the touch screen is excellent in space efficiency. further, the touch screen can provide easier information accessibility since a user may enter a desired menu or information by directly touching the touch screen. the touch screen recognizes the presence of a touch and a touch position on a displayed screen and processes the corresponding touch event. a simple touch of a user may enable the touch screen to process a corresponding function related to the touch event such as selecting a predetermined area on the screen, or executing a menu provided on the screen. however, electronic devices of the related art can not support user functions such as an editing function including deleting or adding data, because a separate editing menu or a menu key button has to be manipulated in addition to a touch operation for selecting the data. summary an aspect of this document is to provide an electronic device, a method of editing data using the same, and a mobile communication terminal and computer program product that can easily edit data using a touch screen. in one general aspect, there is a method and computer program product for editing data of an electronic device, comprising: displaying an item list comprising at least one item; detecting first and second touch inputs; detecting a change in a distance between first and second touch input points; and executing a predetermined function related to the item list according to the detected distance change. 
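The editing method summarized above can be sketched in code. This is a minimal illustration, not the patent's implementation: the function and variable names are assumptions, and the touch points are simplified to (x, y) tuples sampled at two times. The core idea from the summary is preserved: an increase in the distance between the two touch points maps to an add function, a decrease maps to a delete function.

```python
import math

def distance(p1, p2):
    """Euclidean distance between two touch points given as (x, y) tuples."""
    return math.hypot(p1[0] - p2[0], p1[1] - p2[1])

def classify_gesture(points_at_t1, points_at_t2):
    """Classify the two-touch edit gesture described in the summary.

    Each argument is a pair of touch points sampled at one time.
    Returns 'add' when the points spread apart, 'delete' when they
    pinch together, and None when the distance does not change.
    """
    d1 = distance(*points_at_t1)
    d2 = distance(*points_at_t2)
    if d2 > d1:
        return "add"      # spreading gesture: open an input window / add item
    if d2 < d1:
        return "delete"   # pinching gesture: delete the interposed item(s)
    return None           # no distance change: no edit function executed
```

In a real terminal the controller would also track which list items lie between the two touch points, since the claims tie the add/delete action to the area between the input points.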
in another aspect, there is an electronic device comprising: a touch screen for displaying an item list comprising at least one item; and a controller for detecting a change in a distance between first and second touch input points that are input through the touch screen and executing a predetermined function related to the item list according to the detected distance change. in another aspect, there is a mobile communication terminal comprising: a memory for storing phonebook information comprising a name and contact information; a touch screen for displaying phonebook information stored in the memory in a list form; and a controller for detecting a change in a distance between first and second touch input points that are input through the touch screen and executing a predetermined function related to editing of a phonebook information list displayed in the touch screen according to the detected distance change. further features will be apparent from the following description, comprising the drawings, and the claims. brief description of the drawings the details of one or more implementations are set forth in the accompanying drawings and the description below. in the entire description of this document, like reference numerals represent corresponding parts throughout various figures. fig. 1 is a block diagram illustrating a configuration of an electronic device in an implementation; fig. 2 is a diagram illustrating an input process in a method of editing data in an implementation; fig. 3 is a diagram illustrating an input process in a method of editing data in another implementation; fig. 4 is a block diagram illustrating a configuration of a mobile communication terminal in an implementation; fig. 5 is a flowchart illustrating a method of editing data of a mobile communication terminal in an implementation; fig. 6 is a flowchart illustrating a method of editing data of a mobile communication terminal in another implementation; fig. 
7 is a diagram illustrating a process of editing data of a mobile communication terminal in an implementation; fig. 8 is a diagram illustrating a process of editing data of a mobile communication terminal in another implementation; fig. 9 is a diagram illustrating a process of editing data of a mobile communication terminal in another implementation; and fig. 10 is a diagram illustrating a process of editing data of a mobile communication terminal in another implementation. detailed description hereinafter, implementations of an electronic device, a method and computer program product for editing data using the same, and a mobile communication terminal will be described in detail with reference to the accompanying drawings. in describing this document, when it is regarded that descriptions about a related well-known function or configuration are not necessary for understanding a major point of this document, individual descriptions thereof are omitted. fig. 1 is a block diagram illustrating a configuration of an electronic device in an implementation. as shown in fig. 1 , the electronic device comprises a display 20 for displaying a screen, an input 10 for processing a touch signal input by a user, a memory 40 for storing various data items for displaying in the display 20 , and a controller 30 for controlling the display 20 to display a data item stored in the memory 40 or a data item generated by performing a process, and controlling to edit a data item displayed in the display 20 according to a plurality of touch signals that is input through the input 10 , and to display an editing state on a screen of the display 20 . the input 10 comprises a touch panel 12 for detecting touch of the user, and an input processor 14 for recognizing a selection position and a moving direction of a touch signal detected in the touch panel 12 and providing the recognized information to the controller 30 . 
the touch panel 12 can detect a touch signal of predetermined points such as ‘a’ point and ‘b’ point and use various technologies such as a resistance method, a capacitance method, an infrared ray method, a surface acoustic wave (saw) method, an electromagnetic method, and a near field imaging (nfi) method. the touch panel 12 uses a transparent panel having a touch response surface and is mounted on a visible surface, for example a liquid crystal display (lcd) 22 of the display 20 to form a touch screen. a touch signal detected by the input 10 comprises a touch signal that is simultaneously input at first and second points and in which a distance between two touch points changes. accordingly, the input processor 14 receives a touch signal from the touch panel 12 , processes information such as an input position and a moving direction of the touch signal, and provides the processed information to the controller 30 . the display 20 comprises an lcd 22 for providing an image and an image processor 24 for converting a data signal provided from the controller 30 to a video signal that can display in the lcd 22 and displaying the converted video signal in the lcd 22 . the display 20 can use a plasma display panel (pdp), and an organic light-emitting diode (oled), in addition to the lcd 22 . the memory 40 stores various data items that can display in the display 20 , and the data item comprises data stored by the user and data provided in a terminal. the data item comprises a picture, an image, phonebook information, and memo information stored by the user, and graphic data and thumbnail data provided in a terminal. the data item indicates various kinds of data that can be edited, i.e. that can be deleted, added, and corrected by the user. the controller 30 controls to display a data item stored in the memory 40 or a data item generated by performing a predetermined function in a list of a text index form or a thumbnail matrix form. 
accordingly, the controller 30 controls the display 20 to display a text data item input by the user in an index form or a data item such as a picture and an image in a matrix form using the corresponding thumbnail image. further, the controller 30 controls the display 20 to display a data item such as an image or a picture. the controller 30 detects first and second touch inputs generated in the input 10 and detects a change in a distance between two input points, thereby performing a function related to an item list displayed in the display 20 . a distance between the first and second touch input points can be increased or decreased. when a data item is displayed in a list of an index form or a matrix form, if a distance between the first and second touch input points decreases, the controller 30 recognizes the decrease of a distance as a data deletion signal and deletes a data item. for example, when the user selects both sides of a predetermined data item and inputs a touch signal in a direction for decreasing a distance between both sides and thus a distance between the touch input points decreases, the controller 30 deletes the corresponding item. if a distance between the first and second touch input points increases, the controller 30 controls to display an input window for adding data. accordingly, the user can input data to add to a displaying list, and the controller 30 adds the input data to the displaying list. further, when an image is being displayed, the controller 30 processes touch signals with an image data editing signal for editing color characteristics such as brightness, contrast, gamma, hue, saturation, and sharpness of an image. accordingly, when touch signals are input in a direction for decreasing a distance between the first and second touch input points, the controller 30 controls to decrease color characteristics of a displaying image. 
when touch signals are input in a direction for increasing a distance between the first and second touch input points, the controller 30 controls to increase color characteristics of a displaying image. in this way, the electronic device can use an editing function of adding data to, deleting data from, or correcting data of a list through an operation of increasing or decreasing a distance between touch input points that are input through a touch screen. fig. 2 is a diagram illustrating an input process in a method of editing data of an electronic device in an implementation and illustrates a method of editing a data item provided in a list of an index form. as shown in fig. 2 , a data item stored in the memory 40 or generated by a processing can be displayed on a list screen 50 . for example, an item such as a phonebook, a memo, event setting, and a ringtone can be displayed on the list screen 50 . in order to add new data to a list, the user can select ‘a’ point and ‘b’ point on the list screen 50 and input a touch signal in a direction for separating two points from each other, i.e. in a direction for increasing a distance between touch input points. accordingly, the controller 30 processes the input touch signal as a list addition signal and controls to display a data addition window so that the user adds new data to a list. for example, when a phone book list is being displayed, the controller 30 controls to display a phone number input window, and when a memo list is being displayed, the controller 30 controls to display a memo input window. further, the controller 30 can control to display an input window in an area between ‘a’ point and ‘b’ point, which are a starting point of the first and second touch signals. in this case, the user can insert a desired data item at the middle of the list. fig. 
3 is a diagram illustrating an input process in a method of editing data of an electronic device in another implementation and illustrates a method of editing a data item provided in a matrix form. as shown in fig. 3 , a data item stored in the memory 40 or generated by a processing can be displayed on a matrix screen 60 . for example, a data item such as a picture photographed by the user, a received multimedia card, and a desktop screen image can be arranged in the display 20 in a matrix form using the corresponding thumbnail image. in order to delete a predetermined data item, the user selects ‘c’ point and ‘d’ point, which are a surrounding area of an item to delete among data items arranged in a matrix form and inputs a touch signal in a direction for approaching two touch points, i.e. a direction for decreasing a distance between touch input points. accordingly, according to a touch signal for decreasing a distance between touch input points, the controller 30 deletes a data item interposed in the corresponding reference area, and deletes, when a plurality of data items is interposed in an area between touch points, all data items. fig. 4 is a block diagram illustrating a configuration of a mobile communication terminal in an implementation. as shown in fig. 4 , the mobile communication terminal comprises a communication module 101 for wireless transmission and reception, an audio processor 103 for processing an audio signal input/output through a microphone mic and a speaker spk, a memory 140 for storing a data item, a touch screen 100 for displaying user input and a screen, and a terminal controller 130 for controlling units through a predetermined signal line 104 for transmitting a control signal and a data signal and executing general functions of a mobile communication terminal. 
the touch screen 100 comprises a display 120 for displaying data and an input 110 that is a transparent panel having a touch response screen and that is mounted on a visible surface of the display 120 . the display 120 displays a processing screen according to user input, a state of a device, and function execution. the display 120 can use devices such as an lcd, a pdp, and an oled. the input 110 recognizes a selection position and a selection direction of a touch signal input by the user and provides a touch signal to the terminal controller 130 . the input 110 can be embodied using various technologies such as a resistance method, a capacitance method, an infrared ray method, a saw method, an electromagnetic method, and a nfi method. the memory 140 stores data items that can be displayed in the display 120 . the data item comprises data that can be edited, i.e. that can be stored, deleted, or corrected by the user, such as phonebook data 142 , thumbnail data 144 , and picture data 146 . the terminal controller 130 controls to display a data item stored in the memory 140 according to a touch signal input through the touch screen 100 or a data item generated by performing a predetermined function in a list of an index form or a matrix form. accordingly, the terminal controller 130 controls the display 120 to display the phonebook data 142 in an index form or a picture data item in a matrix form using the thumbnail data 144 of the picture data 146 . further, the terminal controller 130 controls the display 120 to display the picture data 146 . the terminal controller 130 can detect the first and second touch signals input through the touch screen 100 , detect a change in a distance between touch points, delete or correct data displayed in the touch screen 100 according to a detection result, and add new data. the first touch signal and the second touch signal can be input in a direction for increasing or decreasing a distance between both touch points. 
when a data item is displayed in a list, the terminal controller 130 processes a touch signal of a direction for decreasing a distance between a first touch point and a second touch point as a data deletion signal. when a touch signal of a direction for increasing a distance between the first touch point and the second touch point is detected, the terminal controller 130 controls to display an input window for adding data. accordingly, the user can input data to add to a displaying list, and the terminal controller 130 adds data input through the input window to the displaying list. further, when an image is being displayed, the terminal controller 130 processes touch signals with an image data editing signal for editing color characteristics such as brightness, contrast, gamma, hue, saturation, and sharpness of an image. accordingly, when a touch signal of a direction for decreasing a distance between the first touch point and the second touch point is input, the terminal controller 130 controls to decrease color characteristics of a displaying image. when a touch signal of a direction for increasing a distance between the first touch point and the second touch point is input, the terminal controller 130 controls to increase color characteristics of a displaying image. fig. 5 is a flowchart illustrating a method of editing data of a mobile communication terminal in an implementation and illustrates a case of deleting or adding a data item. the terminal controller 130 controls the display 120 of the touch screen 100 to display a predetermined item list by user selection (s 110 ). the terminal controller 130 controls to display an item list such as the phonebook data 142 in a text index form, or the picture data 146 in a list of a matrix form using the corresponding thumbnail data 144 . the terminal controller 130 detects first touch input and second touch input that are input through the touch screen 100 (s 120 ). 
the terminal controller 130 detects a change in a distance between first and second touch input points (s 130 ). if touch signals are input in a direction for decreasing a distance between the first touch point and the second touch point (s 140 ), the terminal controller 130 recognizes the input touch signals as a data deletion signal and deletes the corresponding item (s 150 ). the touch signals of a distance decreasing direction indicate touch signals of a direction for approaching the touch points, i.e. a direction for decreasing a distance between the touch points after a surrounding area of an item to be deleted by the user is touched. accordingly, the terminal controller 130 can delete a data item interposed in an area between the touch points and can delete, when a plurality of data items is interposed between touch points, all of the corresponding data items. if touch signals are input in a direction for increasing a distance between the first touch point and the second touch point (s 160 ), the terminal controller 130 recognizes the input touch signals as a data addition signal and controls to display an input window for adding a data item (s 170 ). at various points of the process, a confirmation prompt may be displayed to request that the user confirm the change is desired. also, a save and/or over-write prompt may be displayed to request that the user indicate if the change should be saved as a separate file or if the previous file should be over-written. fig. 6 is a flowchart illustrating a method of editing data of a mobile communication terminal in another implementation and illustrates a case of editing image data among data items. the terminal controller 130 controls to display an image such as a user album picture in the display 120 of the touch screen 100 by user selection (s 210 ). the terminal controller 130 detects first touch input and second touch input that are input through the touch screen 100 (s 220 ). 
the terminal controller 130 detects a change in a distance between first and second touch input points (s 230 ). if touch signals are input in a direction for decreasing a distance between the first touch point and the second touch point (s 240 ), the terminal controller 130 controls to decrease color characteristics of a displaying image (s 250 ). for example, the terminal controller 130 controls to decrease color characteristics such as brightness, contrast, gamma, hue, saturation, and sharpness of a displaying image. if touch signals are input in a direction for increasing a distance between the first touch point and the second touch point (s 260 ), the terminal controller 130 controls to increase color characteristics of a displaying image (s 270 ). thereafter, the terminal controller 130 determines whether an image edited by the user is stored (s 280 ). if an image edited by the user is stored, the terminal controller 130 stores the edited image in the memory 140 (s 290 ). the mobile communication terminal in an implementation controls to fluctuate color characteristics such as brightness, contrast, gamma, hue, saturation, and sharpness of an image in interlock with a fluctuation in a distance between the first touch point and the second touch point that are input through the touch screen 100 . accordingly, the user can edit color characteristics of an image with a simple touch operation without using a separate editing menu. 
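The image-editing flow above (steps s 230 to s 290) ties a color characteristic to the change in distance between the two touch points. The sketch below illustrates that interlock for brightness only, on an image simplified to a flat list of 0-255 pixel values; the function name, the linear scaling, and the sensitivity parameter are all illustrative assumptions, since the patent does not specify how distance maps to the characteristic.

```python
def adjust_brightness(pixels, d_start, d_end, sensitivity=0.01):
    """Scale 0-255 pixel values in proportion to the touch-distance change.

    d_start and d_end are the distances between the two touch points at the
    beginning and end of the gesture. Spreading (d_end > d_start) brightens,
    pinching (d_end < d_start) darkens, matching steps s 240-s 270.
    """
    factor = 1.0 + sensitivity * (d_end - d_start)
    # Clamp each scaled value back into the valid 0-255 range.
    return [max(0, min(255, round(p * factor))) for p in pixels]
```

A production version would operate per channel on the decoded bitmap and would only commit the result to memory after the user confirms the store step (s 280).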
in the method of editing an image, a case of fluctuating color characteristics according to a change in a distance between touch points is described, however in the touch screen 100 in which an image is displayed, the method of editing an image may control, when touch signals are input in a direction for decreasing a distance between the first touch point and the second touch point, to delete the corresponding image, and when touch signals are input in a direction for increasing a distance between the first touch point and the second touch point, to store the corresponding image. fig. 7 is a diagram illustrating a process of editing data of a mobile communication terminal in an implementation and illustrates a case of editing a phonebook list 200 . as shown in fig. 7 , in the phonebook list 200 , a name, a mobile phone number, and a home phone number of a specific person input by the user can be displayed as a data item. accordingly, when the user touches ‘a’ point and ‘b’ point, which are random points in the phonebook list and inputs a touch signal in a direction for increasing a distance between two touch points, the terminal controller 130 determines that a function of adding data to the phonebook list 200 is selected. accordingly, the terminal controller 130 provides an input interface 220 for inputting a new phonebook data item, and the user can input data to add to the input interface 220 . when the user inputs and stores “mobile phone 2 ” and the corresponding phone number, the terminal controller 130 controls to display a phonebook list 240 to which “mobile phone 2 ” and the corresponding phone number are added. fig. 8 is a diagram illustrating a process of editing data of a mobile communication terminal in another implementation and illustrates a case of editing a communication list 300 . as shown in fig. 8 , in the communication list 300 , a user's phone call records can be sequentially provided. 
when the user selects ‘a’ point and ‘b’ point in the communication list 300 and inputs a touch signal in a direction for decreasing a distance between two touch points, the terminal controller 130 determines that a function of deleting data of the communication list 300 is selected. accordingly, the terminal controller 130 controls to display a deletion selection window 310 for selecting whether to delete no. 2 item and no. 3 item interposed in an area between ‘a’ point and ‘b’ point. the terminal controller 130 controls to delete the corresponding items (no. 2 item and no. 3 item) by user deletion selection and display an editing result through a processing result window 320 . fig. 9 is a diagram illustrating a process of editing data of a mobile communication terminal in another implementation and illustrates a case of editing a picture list 400 . as shown in fig. 9 , in the picture list 400 , a thumbnail data 144 corresponding to each picture data 146 are arranged in a matrix form. when the user selects ‘a’ point and ‘b’ point in the picture list 400 and inputs a touch signal in a direction for decreasing a distance between two touch points, the terminal controller 130 determines that a function of deleting data from the picture list 400 is selected. accordingly, the terminal controller 130 controls to delete an item interposed between ‘a’ point and ‘b’ point and display a picture list in which the corresponding item is edited. fig. 10 is a diagram illustrating a process of editing data of a mobile communication terminal in another implementation and illustrates a case of editing color characteristics of an image. as shown in fig. 10 , the picture data 146 stored in an album menu or a picture menu can be displayed on a touch screen 500 by user selection. 
if the user selects two points on the touch screen 500 and inputs a touch signal in a direction for increasing a distance between two touch points, the terminal controller 130 controls to increase color characteristics, for example, brightness of an image displayed on the touch screen 500 . the terminal controller 130 controls a process of changing brightness in interlock with an input operation of a touch signal, thereby increasing only brightness of a reference area opened by a touch signal. accordingly, the user can easily compare original screen brightness and adjusted screen brightness and be interested in the process of changing brightness. thereafter, when editing of brightness characteristics is complete and an editing content is stored, the terminal controller 130 stores image data in which brightness characteristic is edited in the memory 140 . color characteristics such as temperature, contrast, gamma, hue, saturation, exposure and sharpness as well as brightness can be edited with the same method. when the terminal controller 130 determines that a distance between two touch points selected on the touch screen 500 decreases, the terminal controller 130 controls to decrease color characteristics of an image displayed on the touch screen 500 . changes to the image may be saved as a replacement image, as a supplementary image data file, or as a supplementary image. changed and original images may be displayed next to one another for comparison and further editing. other image characteristics such as focus, matte, edge blur, sepia, straightness, etc. may be added or deleted with the same method. zooming and/or cropping of images may also be achieved with the same method (e.g., by touching and separating two touch points, an image may be zoomed and/or cropped so that the portions outside of the touch points are deleted, and the portions inside the touch points are retained.) also, images may be cut by touching and bringing together two touch points. 
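The crop variant just described retains the portion of the image between the two touch points and discards the rest. A minimal sketch, assuming the image is a row-major grid of pixel values and each touch point is an (x, y) pixel coordinate (both assumptions; the patent leaves the representation unspecified):

```python
def crop_between_points(image, p1, p2):
    """Keep only the axis-aligned rectangle spanned by touch points p1 and p2.

    image is a list of rows (row-major grid); p1 and p2 are (x, y) pixel
    coordinates in either order. The corner pixels are included, and
    everything outside the spanned rectangle is deleted, as in the
    spread-to-crop gesture described above.
    """
    x_lo, x_hi = sorted((p1[0], p2[0]))
    y_lo, y_hi = sorted((p1[1], p2[1]))
    return [row[x_lo:x_hi + 1] for row in image[y_lo:y_hi + 1]]
```

Sorting the coordinates makes the result independent of which finger touched first, so the same routine serves both touch orders.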
for a software implementation, the embodiments described herein may be implemented with separate software modules, such as procedures and functions, each of which perform one or more of the functions and operations described herein. the software codes can be implemented with a software application written in any suitable programming language and may be stored in memory, and executed by a controller or processor. as described above, in an implementation, an electronic device, a method of editing data, and a mobile communication terminal can easily edit data using a touch screen. other features will be apparent from the description and drawings, and from the claims.
197-262-692-355-683
US
[ "US" ]
E02B17/02
1992-07-06T00:00:00
1992
[ "E02" ]
offshore drilling platform support
an improved columnar support for offshore drilling platforms intended for installation in a body of water such as the ocean, comprising a cylindrical shaft of unitary cementitious construction, a frustrum shaped hollow capital having a first peripheral flange, a frustrum shaped hollow base having a second peripheral flange and seals, which may include removable plugs in the open ends of the capital and the base for creating a scuttle opening therein to admit water and sink the columnar support at a designated position on the sea floor.
1. a columnar offshore drilling platform support for installation in a body of water, comprising, a cylindrical shaft of unitary cementitious construction, a frustrum shaped hollow capital having at least one open end, a frustrum shaped hollow base having at least one open end, means sealing the open ends of the capital and the base, at least one of which sealing means includes a removable plug for creating a scuttle opening therein and wherein the capital includes a first peripheral frustrum flange, said first flange having a continuous circular outer edge forming the first flange perimeter, and wherein the base includes a second peripheral frustrum flange, said second flange having a continuous circular outer edge forming the second flange perimeter, said first and second peripheral frustrum flanges being sized to allow the support to be rolled on said frustrum flanges in a forward direction. 2. the support of claim 1 and further including longitudinal surface fluting carried by the shaft for wave breaking, ice breaking and wave energy dissipation. 3. the support of claim 2 and further including a plurality of vertically aligned holes in the first and second flanges for receiving and guiding anchor pilings. 4. the support of claim 3 where the bottom surface of the second flange is scalloped to accommodate uneven ocean floor surfaces.
background of the invention the prior art, as represented by the aforesaid previously issued patent, contemplates an offshore drilling platform support comprising a single prestressed concrete column formed of stacked annular segments and having top and bottom segments carrying radial ribs, the lower of which serves to stabilize the column on the sea bottom while the ribs protruding from the top segment serve to support the structure comprising the drilling platform. vertically aligned holes in the upper and lower ribs provide guidance sleeves for the anchoring piles which are driven into the sea floor. to fabricate or assemble the column, the various segments are attached together at sea on a construction barge, a limiting and inconvenient aspect of the precast concrete column concept. the primary object of the improvement comprising the present invention is to provide a light weight offshore platform support column of the same general nature as that previously disclosed but which is now designed to be fully fabricated on shore, easily placed into the sea and then towed to the location for its final installation. description of the drawings fig. 1 is a perspective view of an off-shore drilling platform which includes a platform support constructed in accordance with the instant invention. fig. 2 is a perspective view of the platform support of the present invention. fig. 3 is a cross-sectional view of the support taken along lines 3--3 of fig. 2. fig. 4 is a cross-sectional view of the support taken along lines 4--4 of fig. 2 and showing an exemplary piling positioned in the vertically aligned holes in the top and bottom flanges. fig. 5a is a diagrammatic view showing the support end view as it would be rolled down a slope into the water. fig. 5b is a diagrammatic side view of the support of the present invention in tow behind a tugboat. fig. 
5c is a diagrammatic side view of the support showing the removal of the top end plug with a sequential position of the column shown in dotted lines as water fills the column to sink it bottom first so as to achieve an upright position on the ocean floor. fig. 5d is a diagrammatic side view of the support in an upright position on the ocean floor. detailed description of the invention an offshore drilling platform 10 having a columnar support 12 constructed in accordance with the present invention is illustrated in fig. 1. the columnar support 12 comprises a shaft 14 which is substantially cylindrical in its cross section except for the exterior fluting 15. between the top of the shaft 14 and the drilling platform 10 is a hollow frustrum shaped capital 16 having a peripheral flange 18 which forms the boundary of a planar covering 19. the base of the column is formed of a hollow frustrum shaped segment 20 having a peripheral flange 22 and a planar floor 24. the fluting is provided for additional columnar strength in addition to its ice breaking and wave energy dissipation functions. the column may preferably be constructed of concrete. the top and bottom flanges 18 and 22 are each provided with vertically aligned and spaced apart holes 25 and 27 respectively. after the column is lowered to its position and made vertically erect, pilings 31 are placed into the aligned holes 25 and 27 and driven into the sea floor in order to anchor the platform in position. the top flange holes 25 serve as sleeve guides for the driving of the piles until the piles are substantially implanted in the sea floor, then the piles may be driven on through the top flange holes 25 and down to a level near the bottom flange holes 27 or they may be cut off along a plane proximate to the bottom flange 22. the circular perimeter of the frustrum flanges 18 and 22 provides novel implications for the problems of moving the heavy column from its fabricating location to the sea. 
preferably, the construction of the support 12 would be done at a site slightly elevated from and close to the shore where, after the support is completed, it can be rolled into the sea, as shown in fig. 5a, using the peripheral flanges 18 and 22 as rolling rims which support the column shaft 14 as an "axle". once in the water, the top covering 19 and bottom floor 24 prevent the entry of water into the interior of the support, making it buoyant enough to be towed on the surface of the ocean by a tug 30, as shown in fig. 5b. plugs 32 and 34 are disposed respectively in the top covering 19 and the bottom floor 24. these plugs are removed sequentially when the support is sited at its erection location in order to allow the column to fill with sea water and sink, as shown in fig. 5c. if the topmost plug 32 is removed first, water will flood the interior of the column, going toward the bottom portion, and the bottom will sink first, establishing the proper position of the support on the ocean floor. the lower plug may be removed if necessary to complete the flooding operation. the former necessity for providing a substantially flat area on which the support may rest on the sea floor is less rigorous with the instant support than with the prior art models. the bottom surface of the lower flange is scalloped or intermittently relieved, as at 35, in order to better accommodate the irregularities of the drifting sands of the sea floor. although not shown, the risers and conduits from the wells are intended to pass through the interior of the support structure, as in the prior art, penetrating both the floor 24 and the top covering 19 through the use of plugs or valves.
197-718-044-017-961
US
[ "WO", "KR", "US", "JP" ]
B23Q15/22,G03F7/20,G05D3/12,H01L21/30,G05B13/02
1990-05-21T00:00:00
1990
[ "B23", "G03", "G05", "H01" ]
partially constrained minimum energy state controller
a lithographic system (10), having a stepper (16) on a block of granite (12) attached to the ground using isolation legs (32), includes a servo control system (34, 38, 40) for providing signals to a motor (18) associated with the stepper (16) to control its movement. the control system (34, 38, 40) includes means (38, 40) to calculate a filter formula used to generate the control signals, which formula is calculated in response to the initial velocity of the stepper (16) and a certain time between discrete movements, so that a minimum amount of energy is utilized. the filter formula requires that the difference in the velocity of the base (12) and the stepper (16) is zero at the time the stepper (16) reaches the final position. where time is short, both the base (12) and the stepper (16) initially move with the same velocity and end with the same velocity, and an array processor (40) calculates the time varying filter formula.
the embodiments of the invention in which an exclusive property or privilege is claimed are defined as follows: 1. in a system (10) having a movable member (16) for being moved by controllable moving means (18) over a surface (14) of a non-secured base (12), said system (10) further having position detecting means (28) for providing signals manifesting the position of said movable member (16) relative to said base (12), characterized by means (34, 38, 40) for providing a control signal to said movable member (16) to move said movable member (16) to a desired position on said base (12), said control signal being such that the relative difference in the velocity of said base (12) and said movable member (16) is zero at the time said movable member (16) reaches said desired position. 2. the invention according to claim 1 characterized in that said means (34, 38, 40) for providing further includes filter means (34) for filtering said control signal. 3. the invention according to claim 1 or 2 characterized in that said means (34, 38, 40) for providing includes means (34) responsive to the position of said movable member (16) and to a filter signal for generating said control signal and means (38, 40) for calculating and generating said filter signal for reducing the energy to move said movable member (16) between the initial position and the desired position of said movable member (16) within a certain time period and for providing said calculated signal to said means for generating (34). 4. the invention according to claim 3 characterized in that the velocity of said movable member (16) and of said base (12) is finite at the time said movable member (16) reaches said desired position. 5. the invention according to claim 1 characterized in that said control signal is provided in response to the desired trajectory of said movable member (16) and the time required for said movable member (16) to reach said desired position. 6. 
the invention according to claim 1, 2 or 5 characterized in that the velocity of said movable member (16) and of said base (12) is finite at the time said movable member (16) reaches said desired position. 7. the invention according to claim 6 characterized in that said means for providing (34, 38, 40) includes means (34) responsive to the position of said movable member (16) and to a filter signal for generating said control signal and means (38, 40) for calculating and generating said filter signal for reducing the energy to move said movable member (16) between the initial position and the desired position of said movable member (16) within a certain time period and for providing said calculated signal to said means (34) for generating. 8. a lithography machine (10) for providing signals for exposing a resist covered semiconductor wafer (26), one section at a time, as one step in the process of fabricating semiconductor devices, said machine (10) being characterized by a base (12) having at least one flat surface (14); isolation means (32) affixed between said base (12) and mechanical ground so as to permit said base (12) to move with respect to ground with a certain damped spring restoring force; means (20) for generating exposing energy along a given path, said means (20) for generating energy being solidly affixed to said base (12); a stepper (16) for holding said wafer (26), said stepper (16) being positioned to be moved over said one surface (14); motor means (18) connected to said base (12) and said stepper (16) for moving said stepper (16) relative to said base (12) from an initial position to a desired position, said movement being determined by a control signal applied thereto; and means to generate said control signal (34, 38, 40) so that at the end of the movement of said stepper (16), the difference in the velocity of said stepper (16) and base (12) is zero. 9. 
the invention according to claim 8 characterized in that said means to generate (34, 38, 40) said control signal responds to a filter formula and a trajectory formula for generating said control signals in a manner to reduce the energy expended by said motor means (18) in moving said stepper (16) to said desired position. 10. the invention according to claim 8 characterized in that said means to generate (34, 38, 40) said control signal is determined in response to the initial velocity of said stepper (16) and base (12). 11. the invention according to claim 8, 9 or 10 characterized in that said means to generate (34, 38, 40) said control signal responds to a time varying filter formula and a trajectory formula for generating said control signals to reduce the energy expended by said motor means (18) in moving said stepper (16) to said desired position, said filter formula being based upon the initial velocity of said stepper (16) and base (12). 12. the invention according to claim 11 characterized in that said filter formula is based on an initial condition that said base (12) and said stepper (16) are moving at the same velocity. 13. a method of moving a member (16) over a surface (14) of a non-secured base (12) characterized by the steps of detecting the position of said member (16) relative to said base (12) and providing a control signal to said movable member (16) to move said movable member (16) to a desired position on said base (12), said control signal being such that the relative difference in the velocity of said base (12) and said movable member (16) is zero at the time said movable member (16) reaches said desired position. 14. the method according to claim 13 further including the step of providing a filter formula, said control signal being provided in response to said filter formula and said detected position. 15. 
the method according to claim 14 characterized in that said step of providing a filter formula includes calculating said filter signal for reducing the error between the desired displacement and the actual displacement of said movable member (16) and for providing said calculated signal to said means for generating (34, 38, 40). 16. the method according to claim 15 characterized in that said step of calculating said filter signal includes reducing the energy to move said member (16) from the initial position to the desired position of said movable member (16) within a certain time period. 17. the method according to claim 16 characterized in that said control signal is provided to move said movable member (16) such that the velocity of said movable member (16) and of said base (12) is finite at the time said movable member (16) reaches said desired position. 18. the method according to claim 13, 14 or 15 characterized in that said control signal is provided to move said movable member (16) such that the velocity of said movable member (16) and of said base (12) is finite at the time said movable member (16) reaches said desired position.
partially constrained minimum energy state controller this invention relates to a motor controller and more particularly to such a controller having a servo system used for moving one element over a floating second element, such as a lithography stepper over a base isolated from ground motion, so as to bring the two elements to relative rest within a given time using a minimum of energy. in the past, lithography machines have been built utilizing a large base, which base included an extremely heavy granite block firmly attached to the ground, and which block had one very flat and smooth surface. most lithography systems typically include a mechanism, known as a stepper, for holding and moving a wafer in discrete steps over the flat surface of the granite base so that one section at a time of the wafer is exposed to energy. in the past, the energy of choice was ultraviolet light. however recent advances in semiconductor technology have resulted in the necessity to utilize shorter wavelength energy sources, such as x-rays, in order to pattern smaller and smaller features on the wafers. the ability to pattern smaller features on semiconductor wafers has created a whole new set of constraints on lithographic systems. one such constraint is that it now is necessary to isolate the granite block from simple and everyday ground motion because of the high degree of precision required in the semiconductor fabrication processes. such motion, for example, may result from vehicles, such as large trucks or trains, in the neighborhood where the lithographic machine is being used. with traditional lithographic machine construction, any slight ground movement is translated directly to the machine and could cause misalignments or other problems in the modern day precision machines. further, in the case of a point source x-ray lithography system, the gap between the mask and wafer is critical and is maintained nominally at twenty microns. 
thus, slight ground movements can cause problems in maintaining the integrity of the mask to wafer gap. one solution to the ground motion problem discussed above is to isolate the granite base from the ground on isolation legs, similar in construction to shock absorbers used on heavy vehicles. inclusion of these shock absorbers unfortunately causes a new problem of a damped oscillation of the granite base whenever the stepper is moved. servo-controllers have been used in the past to solve these problems and generally operate by providing reverse acceleration signals to the item being moved to reduce the base oscillations. however, these techniques require significant time and energy in order to bring the entire system to rest so that it can be useful. what is needed is a more acceptable system which permits more frequent movements to occur and with the utilization of less energy. in accordance with one aspect of this invention, there is provided, in a system having a movable member for being moved by controllable moving means over a surface of a non-secured base and further having position detecting means for providing signals manifesting the position of the movable member relative to the base, characterized by means for providing a control signal to the movable member to move the movable member to a desired position on the base. the control signal is such that the relative difference in the velocity of the base and the movable member is zero at the time the movable member reaches the desired position. 
one preferred embodiment of this invention is hereafter described with specific reference being made to the following figures, in which: figure 1 is a mechanical block diagram of the various elements of a lithography system; figure 2(a) and 2(b) constitute a pair of graphs with respect to time of the distance d moved by the stepper assembly shown in figure 1 in response to the force f utilizing the control systems of the prior art; and figure 3(a) and 3(b) constitute a pair of graphs with respect to time of the distance d moved by the stepper assembly shown in figure 1 in response to the force f when utilizing the inventive control system of this invention. referring to figure 1, a mechanical block diagram of a lithography machine 10 is shown. for example, lithography machine 10 may be similar to the x-ray lithography machine described in united states patent 4,870,668 in the name of robert d. frankel et al and entitled, "gap sensing/adjustment apparatus and method for a lithography machine". the basic structural element upon which machine 10 is built is a massive base 12, such as a block of granite, having the upper surface 14 thereof polished so as to be very flat and smooth. a stepper assembly 16 is positioned to be movable over surface 14 over air bearings included therewith. thus, there is only a small, but finite, amount of friction between stepper assembly 16 and surface 14. an induction motor 18 is further mounted on base 12 and may be energized in a conventional manner to drive stepper assembly 16 in both the x and y directions. an energy source 20 is also included with lithography machine 10. source 20 may be an x-ray source, as described in the aforementioned frankel et al patent, or it may be an ultraviolet light source. source 20 is mechanically affixed to base 12, as indicated by the dashed line between source 20 and base 12. 
source 20 includes a mask 22 through which energy, such as x-rays 24, are passed to expose a pattern on a resist coated wafer 26. mask 22 should be capable of permitting features as small as one half of a micron to be exposed on wafer 26. because semiconductor devices are made of various layers of such features, it is necessary to be able to align mask 22 with respect to wafer 26 to within one tenth of a micron, or less. in addition, where lithography machine 10 is an x-ray type lithography machine, it is important to maintain the gap between mask 22 and wafer 26 at a precise and constant value. also positioned on surface 14 of base 12 is an interferometer 28 and a mirror 30, associated with interferometer 28, is positioned on stepper assembly 16. in practice, two interferometers 28 and two mirrors 30, one for the x direction and one for the y direction, will be provided, although only one of each is shown in figure 1. as is well known, an interferometer and its associated mirror measures distance very accurately. thus, by knowing the starting position of stepper assembly 16, the movement of stepper assembly 16 can be monitored. base 12 is maintained above the ground by six (only three of which are shown) isolation legs 32. each of the legs 32 may include servo valve assisted gas springs and the legs 32 are designed for supporting loads up to 6000 or more pounds. when each leg 32 is affixed to base 12, both x and y lateral damped motion can occur. thus, base 12 becomes isolated from the ground. the purpose of legs 32 is to absorb ground motion and thereby prevent such ground motion from interfering with the precision required in order to fabricate semiconductor components. stepper assembly 16 is moved by motor 18 and motor 18 is, in turn, controlled by signals provided by main motor and interferometer controller 34. 
main controller 34 also receives signals from interferometer 28 and uses the information manifested by the interferometer 28 signals, as well as other information discussed hereafter, in determining the appropriate motor control signals it provides to motor 18. because of the coupling between legs 32 and base 12 and the fact that a certain amount of friction exists between stepper 16 and surface 14, each time stepper assembly 16 is moved, a certain amount of force is transferred to base 12, and thereby causes base 12 to move in the direction of movement of stepper assembly 16. the movement of base 12, of course, is because it is not solidly affixed to the ground in view of the presence of isolation legs 32. the movement of base 12, in turn, then causes a transfer of force to stepper assembly 16, thereby causing additional movement of stepper assembly 16. of course, the transferred movement becomes damped with respect to time and ultimately, the lithography system 10 settles to a stationary and stable state. in order to have stepper assembly 16 move to a desired location and expose a pattern at the desired section of wafer 26, one must wait until the entire system settles and then make adjustments to stepper assembly 16 to correct for movements caused by the forces transferred between stepper assembly 16 and base 12. the time to settle, of course, depends upon the physical characteristics of the system 10, such as the motor 18 torque constant, the masses of the stepper assembly 16 and base 12, the spring constant restoring force and damping restoring force of the isolation legs 32, and any other resonances in the system. of course, each of these characteristics can be measured and the proper amount of energy to be applied to motor 18 can be calculated in order to bring lithographic system 10 to rest at the earliest time. 
in such event, an external controller 38 may be used to generate a filter signal and to apply such filter signal to main controller 34 to be used by the main controller 34 when generating the signals used to control motor 18. according to the subject invention, the energy to be applied to the motor 18 may be calculated so as to minimize the energy applied and yet permit settling within a given time period. more specifically, the invention includes selecting the path of stepper assembly 16 to be followed and the form of the filter to be used for generating the proper motor control signals that cause stepper assembly 16 to arrive at the correct position on surface 14 within the time allowed. in modern control theory, the state of a system can be described as a vector, as follows: x = [stepper absolute position; stepper absolute velocity; base absolute position; base absolute velocity]. if the lateral spring restoring force of the isolation legs 32 is defined as k and the lateral damping restoring force of the isolation legs 32 is defined as d, the mass of the stepper assembly 16 as ms and the mass of the base as mb, then: dx/dt = [stepper absolute velocity; stepper absolute acceleration; base absolute velocity; base absolute acceleration] = ax + cu, where u is the force applied to the motor 18 and where, in practice, the above vector equations should be calculated to account for the fact that discrete time samples of the system are utilized, and it thus takes the form: x(n) = m x(n-1) + b u(n). when utilizing a modern state controller, such as controller 34, the initial state of the system is x(0) and the goal is to find the sequence of motor control signals u(n) which will yield a desired final state xf at a final instant of time, n = nf. normally, the final state xf is chosen so that xf = [desired ending position; 0; 0; 0]. under this constraint, the minimum energy which can be put in the system to achieve the desired result is: sum from n=1 to nf of u(n)^2, which yields a solution: g = [sum from i=1 to nf of m^(nf-i) b b^t (m^(nf-i))^t]^(-1) [xd - m^(nf) x(0)] and u(k) = b^t (m^(nf-k))^t g. this is a well known result of control theory. in using the above solution to move stepper assembly 16 to a desired position, it is wasteful of both time and/or energy in getting the base 12 back to its home position and at rest at the same time that stepper assembly 16 arrives at its desired position. however, for the lithographic applications described above, the extra condition is not needed. what is needed is that, at the time an exposure is to occur, the relative position of stepper assembly 16 be at the desired position on base 12 and that the stepper assembly 16 and base 12 be at rest with respect to one another. this may be viewed as the final constraint, that is k(xf - xd) = 0, where xf is the final state of the system and xd = [desired ending position; 0; 0; 0]. in general, if k is any matrix whose rank is equal to its number of rows, then the minimization of energy under the partial constraint can be solved by lagrangian multipliers as follows: minimize sum from n=1 to nf of u(n)^2 + l k(xf - xd) under the constraint k(xf - xd) = 0. this equation may then be linearly solved. if xf = g k^t q + m^(nf) x(0), where g = sum from i=1 to nf of m^(nf-i) b b^t (m^(nf-i))^t and q = (k g k^t)^(-1) (k xd - k m^(nf) x(0)), then the sequence of motor control signals provided by main controller 34 to motor 18 may be expressed as: u(k) = b^t (m^(nf-k))^t k^t q. referring again to figure 1, the manner of obtaining the best energy/time performance is accomplished by providing external controller 38 coupled with main controller 34. main controller 34 includes a digital filter to provide a correction factor to the signals provided to drive motor 18. external controller 38 is used to calculate a digital filter formula and to apply that formula to the digital filter in main controller 34. 
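the "well known result of control theory" cited in the description can be illustrated numerically. the following pure-python sketch is not the patent's controller: it computes the fully constrained minimum-energy sequence u(k) = b^t (m^(nf-k))^t g, with g obtained from the finite-horizon controllability gramian. the two-state double-integrator m and b and the unit time step are assumptions standing in for the four-state stepper/base dynamics, which is why a 2x2 inverse suffices here:

```python
def mat_mul(A, B):
    """multiply two matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

def mat_pow(M, p):
    """M raised to a non-negative integer power (identity for p = 0)."""
    n = len(M)
    R = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for _ in range(p):
        R = mat_mul(R, M)
    return R

def inv2(A):
    """inverse of a 2x2 matrix."""
    (a, b), (c, d) = A
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def min_energy_controls(M, B, x0, xd, nf):
    """control sequence u(1..nf) minimizing sum u(n)^2 subject to x(nf) = xd,
    where x(n) = M x(n-1) + B u(n): u(k) = B^T (M^(nf-k))^T g with
    g = G^(-1) (xd - M^nf x0) and G the finite-horizon controllability gramian."""
    n = len(M)
    G = [[0.0] * n for _ in range(n)]
    for i in range(1, nf + 1):
        PB = mat_mul(mat_pow(M, nf - i), B)  # column vector M^(nf-i) B
        for r in range(n):
            for c in range(n):
                G[r][c] += PB[r][0] * PB[c][0]
    drift = mat_mul(mat_pow(M, nf), [[v] for v in x0])  # M^nf x(0)
    resid = [[xd[i] - drift[i][0]] for i in range(n)]
    g = mat_mul(inv2(G), resid)
    return [mat_mul(mat_mul(transpose(B), transpose(mat_pow(M, nf - k))), g)[0][0]
            for k in range(1, nf + 1)]

# double-integrator stand-in: position/velocity state, unit time step
M = [[1.0, 1.0], [0.0, 1.0]]
B = [[0.5], [1.0]]
us = min_energy_controls(M, B, x0=[0.0, 0.0], xd=[10.0, 0.0], nf=5)
x = [[0.0], [0.0]]
for u in us:  # simulate x(n) = M x(n-1) + B u(n) forward
    x = [[sum(M[i][k] * x[k][0] for k in range(2)) + B[i][0] * u] for i in range(2)]
```

simulating the returned sequence drives the state from [0, 0] to the target [10, 0] to machine precision; the partially constrained variant derived in the text would instead use u(k) = B^T (M^(nf-k))^T K^T q with q = (K G K^T)^(-1) (K xd - K M^nf x(0)).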
in addition, external controller 38 calculates the required trajectory for the movement of stepper assembly 16. at the required time for movement, main controller 34 receives the calculated filter information and trajectory information from external controller 38 and provides the proper motor control signals to motor 18. referring now to figures 2(a), 2(b), 3(a) and 3(b), curves with respect to time (t) of the displacement (d) of stepper assembly 16 and the force (f) required to cause such displacement are shown. specifically, figure 2(a) represents the trajectory, that is, the relative distance between the mirror 30 and interferometer 28 as stepper assembly 16 approaches a movement of a distance d, where the movement of stepper assembly 16 utilizes the minimum amount of force or energy, as shown in figure 2(b). as seen, the time required for all movement to cease is relatively long, primarily due to the movement of base 12. figure 3(a) represents the trajectory for a move of stepper assembly 16 using additional energy, but requiring a much shorter trajectory. as seen, the trajectory is relatively smooth and approaches the desired location d in time t. in order to accomplish the figure 3(a) trajectory, the curve of the force f (energy) applied to motor 18 is shown in figure 3(b). as seen, a small force in the generally desired direction, followed by a negative force opposite to the desired direction and finally a larger force in the desired direction occurs. generally, the smaller t, the greater the total amount of energy expended. thus, the proper value of t should be used in the calculation of the last given equation by external controller 38 in order to use the minimum energy necessary to obtain a desired movement in a desired time. in some instances, the time t becomes so short, that an unacceptably large amount of energy is expended. 
in order to reduce the energy, the calculation may be modified to consider the actual set of circumstances. in the simple situation, where the stepper assembly 16 comes to complete rest prior to the exposure, the calculations are basic and can be handled by a conventional microprocessor in external controller 38. however, as the time t becomes shorter, other factors must be considered. for example, it is not necessary that the stepper assembly 16 come to a complete rest, relative to ground, between movements. it is only necessary that the desired position of stepper assembly have been reached and that the relative velocity between the stepper assembly 16 and base 12 be zero. thus, it is possible that both the base 12 and stepper assembly 16 can be moving in unison at the time the mask 22 and the proper section of wafer 26 are aligned and exposed. with this additional constraint removed, filter formulas to permit shorter times t can be calculated. however, this involves calculations in a time varying formula and a variance in the trajectory dependent upon variable initial conditions. the computational load for this calculation is beyond the capacity of a conventional microprocessor and to assist in these calculations, a floating point co-processor 40 is provided. co-processor 40 will be coupled directly with main controller 34 and will be provided with the real time interferometer 28 position readings. with this information, co-processor 40 will calculate the necessary trajectory and filter formula and provide this information to main controller 34.
198-522-794-307-417
US
[ "US" ]
G01N25/20,G01K17/00,G01K17/04,G01N25/48
2015-07-17T00:00:00
2015
[ "G01" ]
system and method for the direct calorimetric measurement of laser absorptivity of materials
a method and system for calorimetrically measuring the temperature-dependent absorptivity of a homogeneous material dimensioned to be thin and flat with a predetermined uniform thickness and a predetermined porosity. the system includes a material holder adapted to support and thermally isolate the material to be measured, an irradiation source adapted to uniformly irradiate the material with a beam of electromagnetic radiation, and an irradiation source controller adapted to control the irradiation source to uniformly heat the material during a heating period, followed by a cooling period when the material is not irradiated. a thermal sensor measures temperature of the material during the heating and cooling periods, and a computing system first calculates temperature-dependent convective and radiative thermal losses of the material based on the measured temperature of the material during the cooling period when beam intensity is zero, followed by calculation of the temperature-dependent absorptivity of the material based on the temperature-dependent convective and radiative thermal losses determined from the cooling period.
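the two-stage evaluation described in the abstract, first backing out temperature-dependent losses from the cooling curve (when beam intensity is zero), then solving the heating-phase energy balance for absorptivity, can be sketched as follows. this is an illustrative sketch only; the lumped energy balance C*dT/dt = alpha*P_beam - P_loss(T), the nearest-neighbour loss lookup, and all numeric values are assumptions, not the patent's algorithm:

```python
def loss_table_from_cooling(times, temps, heat_capacity):
    """during cooling the beam is off, so C * dT/dt = -P_loss(T);
    tabulate (T, P_loss) pairs from finite differences of the cooling curve."""
    table = []
    for i in range(len(times) - 1):
        dTdt = (temps[i + 1] - temps[i]) / (times[i + 1] - times[i])
        t_mid = 0.5 * (temps[i] + temps[i + 1])
        table.append((t_mid, -heat_capacity * dTdt))
    return table

def absorptivity(temp, dTdt_heating, loss_table, heat_capacity, beam_power):
    """heating-phase balance C * dT/dt = alpha * P_beam - P_loss(T),
    with P_loss taken from the nearest cooling-curve entry."""
    p_loss = min(loss_table, key=lambda entry: abs(entry[0] - temp))[1]
    return (heat_capacity * dTdt_heating + p_loss) / beam_power

# synthetic check: linear losses P_loss = k * (T - T_amb), known alpha = 0.4
C, k, t_amb, p_beam = 2.0, 0.1, 20.0, 10.0
times, temps, T = [], [], 80.0
for n in range(400):  # euler-simulated cooling curve, 0.1 s steps
    times.append(n * 0.1)
    temps.append(T)
    T -= 0.1 * k * (T - t_amb) / C
# heating-phase slope at T = 50 consistent with alpha = 0.4:
dTdt_heat = (0.4 * p_beam - k * (50.0 - t_amb)) / C
alpha = absorptivity(50.0, dTdt_heat, loss_table_from_cooling(times, temps, C), C, p_beam)
```

running the synthetic check recovers the known absorptivity of 0.4 to within a fraction of a percent, since the cooling-curve lookup reproduces the assumed linear loss law; in a real measurement the loss table would combine the convective and radiative contributions without needing to separate them.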
1 . a calorimetric method of measuring the temperature-dependent absorptivity of a material, comprising: providing a homogeneous material dimensioned to be thin and flat with a predetermined uniform thickness and a predetermined porosity; thermally isolating the material; uniformly irradiating the material with a beam of electromagnetic radiation having a predetermined intensity to uniformly heat the material during a heating period, followed by a cooling period when the material is not irradiated and the intensity of the beam is zero; measuring the temperature of the material during the heating and cooling periods; determining temperature-dependent convective and radiative thermal losses of the material based on the measured temperature of the material during the cooling period when beam intensity is zero, and the thickness and porosity of the material; and determining the temperature-dependent absorptivity of the material based on the temperature-dependent convective and radiative thermal losses determined from the cooling period, the thickness and porosity of the material, and the intensity of the beam. 2 . the method of claim 1 , wherein the material is a powder or liquid provided in a thermally conductive container having a container volume with a uniform and shallow depth, and with the powder or liquid conformably filling the container volume to capacity to have the predetermined thickness and the predetermined porosity; and wherein the temperature of the powder or liquid is measured by measuring the temperature of the container. 3 . the method of claim 2 , wherein the container has an opening defined by a rim into the container volume; and wherein the powder is made to conformably fill the container volume to capacity by using an implement adapted to remove excess powder in the container volume when run along the rim of the opening. 4 . 
the method of claim 2 , wherein the powder or liquid material is thermally isolated by suspending the powder or liquid-filled container on at least two wires suspended between end supports. 5 . the method of claim 1 , wherein a solid material is thermally isolated by suspending the solid material on at least two wires suspended between end supports. 6 . a system for calorimetrically measuring the temperature-dependent absorptivity of a homogeneous material dimensioned to be thin and flat with a predetermined uniform thickness and a predetermined porosity, comprising: a material holder adapted to support and thermally isolate the material to be measured; an irradiation source adapted to uniformly irradiate the material with a beam of electromagnetic radiation; an irradiation source controller adapted to control the irradiation source to uniformly heat the material with a beam having a predetermined intensity during a heating period, followed by a cooling period when the material is not irradiated and the intensity of the beam is zero; a thermal sensor adapted to measure temperature of the material during the heating and cooling periods; and a computing system operably connected to the thermal sensor, and adapted to determine temperature-dependent convective and radiative thermal losses of the material based on the measured temperature of the material during the cooling period when beam intensity is zero, and the thickness and porosity of the material, and to determine the temperature-dependent absorptivity of the material based on the temperature-dependent convective and radiative thermal losses determined from the cooling period, the thickness and porosity of the material, and the intensity of the beam. 7 . 
the system of claim 6 , wherein the material holder comprises a thermally conductive container having a container volume with a uniform and shallow depth so that a powder or liquid material conformably filling the container volume to capacity has the predetermined thickness and the predetermined porosity; and wherein the thermal sensor is adapted to measure temperature of the powder or liquid material by measuring the temperature of the container. 8 . the system of claim 7 , wherein the container has an opening defined by a rim into the container volume, and the system further comprises an implement adapted to remove excess powder in the container volume when run along the rim of the opening. 9 . the system of claim 7 , wherein the material holder further comprises at least two wires suspended between end supports for suspending the powder or liquid-filled container thereon to thermally isolate the powder or liquid material. 10 . the system of claim 6 , wherein the material holder further comprises at least two wires suspended between end supports for suspending a solid material thereon to thermally isolate the solid material. 11 . the system of claim 6 , wherein the computing system is operably connected to the irradiation controller for receiving start and stop times of the irradiation source for identifying the heating and cooling periods.
federally sponsored research or development the united states government has rights in this invention pursuant to contract no. de-ac52-07na27344 between the united states department of energy and lawrence livermore national security, llc for the operation of lawrence livermore national laboratory. background additive manufacturing is a rapidly growing field with a wide range of applications. presently, there exists a need for the measurement of powder absorptivity for use in the analysis and modeling of laser-material interactions in the additive manufacturing build process. the absorptivity of powder depends not only on material type and laser wavelength; it is sensitive to the size distribution of the particles, particle shape, powder thickness and porosity. existing systems, such as for example the system described in absorptance of powder materials suitable for laser sintering by n. tolochko et al. (rapid prototyping journal, vol. 6, pp. 155-160, 2000), measure the reflected light from the powder with the help of an integrating sphere and are typically complex. the distribution of the scattered light is unknown and even the small absorption in the integrating sphere coating can affect the result. there is therefore a need for a simple, compact system and method for performing fast, accurate measurements of laser absorptivity of powders. 
summary one aspect of the present invention includes a calorimetric method of measuring the temperature-dependent absorptivity of a material, comprising: providing a homogeneous material dimensioned to be thin and flat with a predetermined uniform thickness and a predetermined porosity; thermally isolating the material; uniformly irradiating the material with a beam of electromagnetic radiation having a predetermined intensity to uniformly heat the material during a heating period, followed by a cooling period when the material is not irradiated and the intensity of the beam is zero; measuring the temperature of the material during the heating and cooling periods; determining temperature-dependent convective and radiative thermal losses of the material based on the measured temperature of the material during the cooling period when beam intensity is zero, and the thickness and porosity of the material; and determining the temperature-dependent absorptivity of the material based on the temperature-dependent convective and radiative thermal losses determined from the cooling period, the thickness and porosity of the material, and the intensity of the beam. 
other aspects of the calorimetric method of measuring the temperature-dependent absorptivity of a material of the present invention include one or more of the following: (1) wherein the material is a powder or liquid provided in a thermally conductive container having a container volume with a uniform and shallow depth, and with the powder or liquid conformably filling the container volume to capacity to have the predetermined thickness and the predetermined porosity; and wherein the temperature of the powder or liquid is measured by measuring the temperature of the container; (2) wherein the container has an opening defined by a rim into the container volume; and wherein the powder is made to conformably fill the container volume to capacity by using an implement adapted to remove excess powder in the container volume when run along the rim of the opening; (3) wherein the powder or liquid material is thermally isolated by suspending the powder or liquid-filled container on at least two wires suspended between end supports; and (4) wherein a solid material is thermally isolated by suspending the solid material on at least two wires suspended between end supports. 
another aspect of the present invention includes a system for calorimetrically measuring the temperature-dependent absorptivity of a homogeneous material dimensioned to be thin and flat with a predetermined uniform thickness and a predetermined porosity, comprising: a material holder adapted to support and thermally isolate the material to be measured; an irradiation source adapted to uniformly irradiate the material with a beam of electromagnetic radiation; an irradiation source controller adapted to control the irradiation source to uniformly heat the material with a beam having a predetermined intensity during a heating period, followed by a cooling period when the material is not irradiated and the intensity of the beam is zero; a thermal sensor adapted to measure temperature of the material during the heating and cooling periods; and a computing system operably connected to the thermal sensor, and adapted to determine temperature-dependent convective and radiative thermal losses of the material based on the measured temperature of the material during the cooling period when beam intensity is zero, and the thickness and porosity of the material, and to determine the temperature-dependent absorptivity of the material based on the temperature-dependent convective and radiative thermal losses determined from the cooling period, the thickness and porosity of the material, and the intensity of the beam. 
other aspects of the system of the present invention include one or more of the following: (1) wherein the material holder comprises a thermally conductive container having a container volume with a uniform and shallow depth so that a powder or liquid material conformably filling the container volume to capacity has the predetermined thickness and the predetermined porosity; and wherein the thermal sensor is adapted to measure temperature of the powder or liquid material by measuring the temperature of the container; (2) wherein the container has an opening defined by a rim into the container volume, and the system further comprises an implement adapted to remove excess powder in the container volume when run along the rim of the opening; (3) wherein the material holder further comprises at least two wires suspended between end supports for suspending the powder or liquid-filled container thereon to thermally isolate the powder or liquid material; (4) wherein the material holder further comprises at least two wires suspended between end supports for suspending a solid material thereon to thermally isolate the solid material; and (5) wherein the computing system is operably connected to the irradiation controller for receiving start and stop times of the irradiation source for identifying the heating and cooling periods. the present invention is generally directed to a system and method for the direct calorimetric measurement of powder absorptivity which eliminates the effect of convective and radiative loss. generally, a thin and flat homogeneous material is provided and thermally isolated. the material may be a powder, liquid, or solid (i.e. bulk solid). in the case of measuring powders or liquids for absorptivity, a flat container or tray (e.g. disc-shaped container) having a shallow container volume is provided and filled with powder or liquid to produce a thin layer of powder or liquid in the container volume. 
dimensioned in this manner, the material's dimensions as well as the porosity are predetermined and known based on the filled-to-capacity container volume and a measured weight of the material filling the container volume. the material is also thermally isolated by various methods. the material is then uniformly irradiated by laser or diode array light to uniformly heat the material during a heating period, followed by a cooling period in which irradiation is removed and beam intensity is zero. the temperature of the material is measured using thermal sensors operably connected (by contact or remotely) to the container. a computing system, which may include for example a processor, data storage, display, etc., first calculates and determines temperature-dependent convective and radiative thermal losses of the material based on the measured temperature of the material during the cooling period when beam intensity is zero, and the thickness and porosity of the material. based on the temperature-dependent convective and radiative thermal losses determined from the cooling period, the computing system then calculates and determines the temperature-dependent absorptivity of the material. the system provides the temperature-dependent absorptivity measurements up to the powder melting point. these and other implementations and various features and operations are described in greater detail in the drawings, the description and the claims. brief description of the drawings fig. 1 is a schematic view of an example embodiment of the system of the present invention. fig. 2 is a flow chart of an example embodiment of the method of the present invention. detailed description turning now to the drawings, fig. 1 shows an example embodiment of the system for the direct calorimetric measurement of the laser absorptivity of a material of the present invention, generally indicated at 10 . 
the system is shown having a material container or tray 11 having an open cavity for receiving a material to be measured (shown at 13 filling the cavity). the open cavity is shown having a width w 1 and a depth d 1 , and the material container/tray is shown having a thickness d 2 between the material to be measured and an outer surface where thermal sensors 17 are shown positioned at various locations to determine temperature and temperature uniformity of the material container/tray. the container is also shown supported by a thermal isolation structure, generally shown at 12 . the container 11 and the thermal isolation structure 12 may be characterized together as a material holder adapted to support and thermally isolate the material to be measured. it is appreciated that the thermal sensor(s) 17 are shown schematically connected to the material container/tray, and that the connection may be direct and physical, such as for example if thermocouples are directly attached, or may be indirect and remote, such as for example if a remote sensor is utilized. the system is also shown including an irradiation source controller 16 which is adapted to control a uniform irradiation source 14 to uniformly heat the material with a beam 15 having a predetermined intensity during a heating period, followed by a cooling period when the material is not irradiated and the intensity of the beam is zero. a computing system 18 is also provided operably connected to the thermal sensors and adapted to use the thermal readings to determine the absorptivity of the material, as further described below. in one example embodiment, the computing system may also be operably connected to the irradiation controller (shown by broken line in fig. 1 ) for receiving start and stop times of the irradiation source for identifying the heating and cooling periods. fig. 
2 shows an example embodiment of a method for the direct calorimetric measurement of the laser absorptivity of a material of the present invention. first, a thin flat homogeneous material is provided at 20 , which may be a powder, liquid, or a bulk solid. in the case of powders in particular, a thin layer of powder may be placed on a thin disc made from refractory metal or other highly thermally conductive material. and various powder laying techniques may be used, including for example, a hard blade, a soft blade and roller. each technique will result in a different powder distribution with significant differences in porosity. in the case of bulk solids, the solid material is provided having predetermined dimensions. next at 21 , the material sample provided in the material container is then thermally isolated, such as by a thermal insulator structure (e.g. a wire suspension). the thermal insulator structure may also be constructed from a refractory metal or other type of material that does not absorb a significant amount of the incident radiation nor affect the temperature distribution in the target. at 22 , the thermally isolated material is then uniformly irradiated by a uniform laser or diode array beam during the heating period, followed by the unirradiated cooling period. next at 23 , measurements of the temperature evolution are made during heating and cooling in order to eliminate the effect of the convective and radiative losses, and thereby directly measure material absorptivity. for powder materials in particular, the special design of the flat container provides direct powder porosity measurements and the possibility of performing the measurements up to the powder melting point. the temperature changes are measured by the thermal sensors (e.g. thermocouples connected to the rear side/outer surface of the disc, or a remote thermal sensor directed at the rear side/outer surface of the disc). 
and the heating process may be, for example, on the scale of seconds, which is sufficiently slow so that the temperature will be uniform through the material and uniform across the disc due to irradiation uniformity. next at 24 , a computing system determines temperature-dependent convective and radiative thermal losses of the material based on temperature measurements during the cooling period. consider a thin layer of powder with thickness d 1 on a flat disc of refractory metal with thickness d 2 uniformly illuminated by light with intensity i. for absorptivity of powder (or melt), assuming uniform temperature through the foil, the temperature evolution is given by equation (1), where a(t) is the absorptivity, q(t) the thermal losses including convective and radiative, ρ the density, c the specific heat, d the thickness, and subscript “1” for powder and “2” for the substrate. note: an important parameter in this process is the powder density (porosity). the disc is filled with powder and the extra material is removed by a hard blade, soft blade or roller pin rolled along the rim, mimicking the powder deposition in a real system. weighing the disc with powder and without allows one to determine the powder weight and porosity. consider a flat top, finite duration heating pulse. first, the temperature during cooling when i=0 is measured, which determines the convective and radiative losses q(t). finally, at 25 , the computing system determines the temperature-dependent absorptivity of the material based on the previously determined convective and radiative thermal losses according to equation (1). in particular, equation (1) is used to determine the temperature-dependent absorptivity considering the temperature evolution during the heating stage. also, the powder density (porosity) required in equation (1) is determined from the dimensions and parameters of the material contained in the container/tray of the present invention. 
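the two-step procedure above can be sketched numerically. the following is a minimal sketch, not the patented implementation: it assumes the standard lumped energy balance (ρ1c1d1 + ρ2c2d2)·dT/dt = a(T)·i − q(T), estimates q(T) from the cooling curve where i = 0, and then recovers a(T) from the heating curve. the function name, argument names, and units are illustrative.

```python
import numpy as np

def absorptivity_from_calorimetry(t_heat, T_heat, t_cool, T_cool, C_areal, I):
    """Two-step extraction of the temperature-dependent absorptivity a(T).

    C_areal: areal heat capacity rho1*c1*d1 + rho2*c2*d2 [J/(K*m^2)]
    I: beam intensity during the heating period [W/m^2]

    Step 1 (cooling, I = 0): q(T) = -C_areal * dT/dt
    Step 2 (heating):        a(T) = (C_areal * dT/dt + q(T)) / I
    """
    # step 1: losses q(T) from the cooling curve (temperature is decreasing,
    # so reverse the arrays to give np.interp an increasing abscissa)
    dTdt_cool = np.gradient(T_cool, t_cool)
    q_cool = -C_areal * dTdt_cool
    q_of_T = lambda T: np.interp(T, T_cool[::-1], q_cool[::-1])
    # step 2: absorptivity from the heating curve, with losses added back
    dTdt_heat = np.gradient(T_heat, t_heat)
    return (C_areal * dTdt_heat + q_of_T(T_heat)) / I
```

with synthetic data generated from a known constant absorptivity and linear (newtonian) losses, the routine recovers the input absorptivity to within the accuracy of the numerical derivatives.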
the target disc is dimensioned so that the open cavity for receiving the material has a predetermined width (w 1 in fig. 1 ) and a rim with a predetermined height (d 1 ) to determine the powder thickness. the height is determined by the shallow depth of the container volume of the disc-shaped container/tray. the disc is filled with powder and the extra material is removed by a blade or roller moved along the rim, mimicking the powder deposition in the real system. by weighing the disc with powder (using a weight scale not shown in the figures) and without it, the powder weight filling the volume πhd 2 /4 can be determined, as well as the porosity of the material. various types of materials may be measured for absorptivity, including powders, liquids, and solids. however, the material is dimensioned to be thin and flat with a predetermined uniform thickness, e.g. 1% variation of thickness, and predetermined porosity. one example thickness range of the material is 0.50 μm-2 mm. some example thicknesses for various materials include: 0.2 mm and 0.5 mm for ti foil, 0.1 mm for ss powder, and 1.6 mm for ofhc cu foil. the material also preferably has a uniform material composition (homogeneity). for example, the irradiated area of the sample must be large compared to the areas of impurities or different material phases. regarding the material holder, various types of designs may be utilized which provide support and thermal isolation to the material being measured. in some embodiments, the material holder may include a thermally conductive container or tray having a container volume for containing a powder or liquid type material. various material types which are good thermal conductors may be used for the container to enable the container to exhibit the same temperature as the material it is containing. in some embodiments, refractory metals or carbon (graphite) may be used to withstand the high applied temperatures. 
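the weighing-based porosity determination described above amounts to simple bookkeeping. a minimal sketch follows, assuming a cylindrical cavity of diameter d and depth h (volume πd²h/4); the function name and units are illustrative:

```python
import math

def powder_porosity(m_filled_g, m_empty_g, d_cm, h_cm, rho_solid_g_cm3):
    """Porosity from weighing the disc with and without powder.

    The cavity is treated as a cylinder of diameter d_cm and depth h_cm,
    so its volume is pi * d^2 * h / 4.
    """
    volume_cm3 = math.pi * d_cm ** 2 * h_cm / 4.0
    # apparent (tap/spread) density of the powder layer
    rho_powder = (m_filled_g - m_empty_g) / volume_cm3
    return 1.0 - rho_powder / rho_solid_g_cm3
```

for example, a powder of a solid whose density is about 8 g/cm³ (roughly stainless steel) that spreads to an apparent density of 4 g/cm³ has a porosity of 0.5.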
furthermore, the material holder must also operate to thermally isolate the material being measured. for example, if a solid bulk material is measured, a thermally conductive container is not required, and in this case the material holder may include a thermally insulating support structure, such as for example a fused silica structure. even in the case where a thermally conductive container is used to contain a powder or liquid material, such a thermally insulating support structure may also be utilized to thermally isolate the powder or liquid-filled container. some example configurations for thermally isolating the material include providing a wire suspension, i.e. at least two wires suspended between end supports, upon which a solid material or a material-filled container is suspended. the wire suspension may be constructed with, for example, tungsten wire. in another example embodiment, a refractory metal holder may be used having 3 equally spaced thin arms free to move on knife-edge supports away from the irradiation area. regarding the thermal sensor, various types of thermal sensors may be used, including contact type thermal sensors such as, for example, thermocouples, as well as non-contact/remote thermal sensors, such as for example infrared cameras and fiber-optic pyrometers. regarding the irradiation source, it is adapted to uniformly irradiate the sample. uniformity must be sufficiently high so that the temperature of the material remains uniform across and throughout the material. some example sources include lasers (diode, fiber, solid-state, dye, gas), leds, auxiliary heating sources (electrical, lamp, rf), etc. example intensities include the range of about 1-1000 w/cm 2 cw intensity on the material sample depending on the temperature range desired. and the irradiation sources may operate at various wavelengths. for example, most of the modern laser processing is done with 1 μm light (welding, cutting and additive manufacturing applications). 
second harmonics and wavelengths of ˜0.8 μm and 1.5 μm may also be considered. the computing system of the present invention may be implemented with a central processing unit and memory, and may include input devices (e.g. keyboard and pointing devices), output devices (e.g. display devices), and storage devices (e.g. disk drives). the computing system may also be implemented with software programmed to perform the method described herein for calculating the absorptivity of the material. although the description above contains many details and specifics, these should not be construed as limiting the scope of the invention but as merely providing illustrations of some of the presently preferred embodiments of this invention. other implementations, enhancements and variations can be made based on what is described and illustrated in this patent document. the features of the embodiments described herein may be combined in all possible combinations of methods, apparatus, modules, systems, and computer program products. certain features that are described in this patent document in the context of separate embodiments can also be implemented in combination in a single embodiment. conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination. similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. 
moreover, the separation of various system components in the embodiments described above should not be understood as requiring such separation in all embodiments. therefore, it will be appreciated that the scope of the present invention fully encompasses other embodiments which may become obvious to those skilled in the art. in the claims, reference to an element in the singular is not intended to mean “one and only one” unless explicitly so stated, but rather “one or more.” all structural and functional equivalents to the elements of the above-described preferred embodiment that are known to those of ordinary skill in the art are expressly incorporated herein by reference and are intended to be encompassed by the present claims. moreover, it is not necessary for a device to address each and every problem sought to be solved by the present invention, for it to be encompassed by the present claims. furthermore, no element or component in the present disclosure is intended to be dedicated to the public regardless of whether the element or component is explicitly recited in the claims. no claim element herein is to be construed under the provisions of 35 u.s.c. 112, sixth paragraph, unless the element is expressly recited using the phrase “means for.”
199-269-142-492-610
SE
[ "KR", "CN", "US", "EP", "WO", "JP" ]
H01L29/06,H01L29/12,H01L29/775,B82B1/00,H01L29/778,H01L33/06,B82Y20/00,B82Y99/00,H01L29/423,H01L33/00,H01L33/02,H01L29/66,H01L29/861,H01L33/20,H01L35/18
2008-04-15T00:00:00
2008
[ "H01", "B82" ]
nanowire wrap gate devices
the present invention provides a semiconductor device comprising at least a first semiconductor nanowire (105) having a first lengthwise region (121) of a first conductivity type, a second lengthwise region (122) of a second conductivity type, and at least a first wrap gate electrode (111) arranged at the first region (121) of the nanowire (105) in order to vary the charge carrier concentration in the first lengthwise region (121) when a voltage is applied to the first wrap gate electrode (111). preferably a second wrap gate electrode (112) is arranged at the second lengthwise region (122). thereby tuneable artificial junctions (114) can be accomplished without substantial doping of the nanowire (105).
1 . a semiconductor device comprising at least a first semiconductor nanowire, wherein the device comprises a first lengthwise region of a first conductivity type, a second lengthwise region of a second conductivity type, and at least a first wrap gate electrode arranged at the first region of the device in order to vary the charge carrier concentration in at least a first portion of the nanowire device associated with the first lengthwise region when a voltage is applied to the first wrap gate electrode, wherein at least the first lengthwise region is arranged in said first nanowire. 2 . the semiconductor device according to claim 1 , wherein the second lengthwise region is arranged in sequence with the first lengthwise region along the length of the nanowire. 3 . the semiconductor device according to claim 1 , wherein the second lengthwise region is arranged in a second nanowire being in electrical contact with the first nanowire. 4 . the semiconductor device according to claim 1 , wherein a second wrap gate electrode is arranged at the second lengthwise region to vary the charge carrier concentration in at least a portion associated with the second lengthwise region when a voltage is applied to the second wrap gate electrode. 5 . the semiconductor device according to claim 1 , wherein the first lengthwise region and the second lengthwise region are of the same conductivity type. 6 . the semiconductor device according to claim 5 , wherein at least the first lengthwise region and the second lengthwise region are homogenous with respect to composition and/or doping. 7 . the semiconductor device according to claim 5 , wherein the first and the second lengthwise regions comprise at least two heterostructure segments of different composition. 8 . 
the semiconductor device according to claim 1 , comprising an artificial lengthwise junction at an interface between the first lengthwise region and the second lengthwise region, with different conductivity type on each side of the junction and with the portion on one side thereof, the junction being formed when the voltage is applied. 9 . the semiconductor device according to claim 8 , wherein the artificial lengthwise junction is a pn junction. 10 . the semiconductor device according to claim 1 , wherein the first lengthwise region and the second lengthwise region are of different conductivity type. 11 . the semiconductor device according to claim 10 , wherein an interface between the first lengthwise region and the second lengthwise region, with the portion on one side thereof, comprises a lengthwise junction with different conductivity type on each side of the junction and the first wrap gate electrode is adapted to move the lengthwise junction when the voltage is applied. 12 . the semiconductor device according to claim 1 , wherein the first nanowire comprises a third lengthwise region, the first lengthwise region being placed between the second and third lengthwise regions, and wherein one or more wrap gate electrodes are adapted to control the width and position of a depletion region between a p-type region and an n-type region. 13 . the semiconductor device according to claim 4 , wherein the nanowire comprises an artificial junction formed by the first region having the first wrap gate electrode and the second region having the second wrap gate electrode, being adapted to vary the charge carrier concentration so that either of the first and second regions is a p-type region, and the other is an n-type region. 14 . 
the semiconductor device according to claim 1 , wherein said regions and one or more wrap gate electrodes provide an artificial pn or pin junction for the production of light, the active region being adapted to be moved between heterostructure segments of different composition and/or dimension to produce light having different wavelength. 15 . the semiconductor device according to claim 1 , wherein said regions and one or more wrap gate electrodes provide an artificial pn junction for the production of light, the active region being adapted to be moved along a nanowire segment of a graded composition to produce light having different wavelength. 16 . the semiconductor device according to claim 1 , wherein the nanowire comprises a core and at least a first shell layer forming a radial heterostructure, and the first wrap gate electrode is adapted to be used for varying the charge carrier concentration in a radial direction of the first lengthwise region of said first nanowire when a voltage is applied to the first wrap gate electrode. 17 . the semiconductor device according to claim 16 , wherein the radial heterostructure is adapted to comprise an active region to produce light when the voltage is applied. 18 . the semiconductor device according to claim 1 , wherein at least the first lengthwise region of the first nanowire comprises a magnetic semiconductor material having ferromagnetic properties that can be varied by the variation of the charge carrier concentration of the first lengthwise region. 19 . the semiconductor device according to claim 18 , wherein the first wrap gate electrode is arranged at the first region of the first nanowire to switch the ferromagnetism in the first region on and off. 20 . the semiconductor device according to claim 1 , wherein said nanowires are epitaxially arranged on a substrate, and the nanowires are protruding from the substrate. 21 . 
the semiconductor device according to claim 1 , wherein the first nanowire comprises a sequence of quantum wells distributed along the length thereof and one or more wrap gate electrodes are arranged at different positions along the length of the nanowire to provide tuning of an active region to produce light to any of the quantum wells. 22 . a method of modulating the properties of a first nanowire using at least a first wrap gate electrode arranged at a first region of the first nanowire wherein the method comprises a step of varying at least one of a charge carrier concentration or type or the ferromagnetic properties of the first region of said first nanowire when a voltage is applied to the first wrap gate electrode. 23 . the method according to claim 22 , wherein the step of varying the charge carrier concentration and/or type is adapted to provide an artificial pn or pin junction when the voltage is applied to the first wrap gate electrode.
technical field of the invention the present invention relates to nanowire-based semiconductor devices in general and to nanowire-based semiconductor devices that require tailored properties with regards to band gap, charge carrier type and concentration, ferromagnetic properties, etc. in particular. background of the invention semiconductor devices have, until recently, been based on planar technology, which imposes constraints in terms of miniaturization and choices of suitable materials, as described further below. the development of nanotechnology and, in particular, the emerging ability to produce nanowires has opened up new possibilities for designing semiconductor devices having improved properties and making novel devices which were not possible with planar technology. such semiconductor devices can benefit from certain nanowire-specific properties, such as 2d, 1d, or 0d quantum confinement, flexibility in axial material variation due to less lattice matching restrictions, antenna properties, ballistic transport, wave guiding properties etc. however, in order to manufacture semiconductor devices, such as field effect transistors, light emitting diodes, semiconductor lasers, and sensors, from nanowires, the ability to form doped regions in the nanowires is crucial. this is appreciated when considering the basic pn junction, a structure which is a critical part of several semiconductor devices, where a built-in voltage is obtained by forming p-doped and n-doped regions adjacent to each other. in nanowire-based semiconductor devices, pn junctions along the length of a nanowire are provided by forming lengthwise segments of different composition and/or doping. this kind of tailoring of the bandgap along the nanowire can for example also be used to reduce both the source-to-gate and gate-to-drain access resistance of a nanowire-based field effect transistor by using lengthwise segments of different bandgap and/or doping level. 
commonly the bandgap is altered by using heterostructures comprising lengthwise segments of different semiconductor materials having different band gap. in addition, the doping level and type of dopant can be varied along the length during, or after, growth of the nanowire. during growth dopants can be introduced in gas phase and after growth dopants can be incorporated into the nanowire by diffusion or the charge carrier concentration can be influenced by so called modulation doping from surrounding layers. in u.s. pat. no. 5,362,972, a wrap gate field effect transistor is disclosed. the wrap gate field effect transistor comprises a nanowire of which a portion is surrounded, or wrapped, by a gate. the nanowire acts as a current channel of the transistor and an electrical field generated by the gate is used for transistor action, i.e. to control the flow of charge carriers along the current channel. from the international application wo 2008/034850 it is appreciated that by doping of the nanowire n-channel, p-channel, enhancement or depletion types of transistors can be formed. in the international application wo 2006/135336, heterostructure segments are further introduced in the nanowire of a wrap gate field effect transistor in order to improve properties such as current control, threshold voltage control and current on/off ratio. the doping of nanowires is challenging due to several factors. for example, physical incorporation of dopants into a crystalline nanowire may be inhibited and the charge carrier concentration obtained from a certain dopant concentration may be lower than expected from doping of corresponding bulk semiconductor materials. for nanowires grown from catalytic particles, using e.g. the so-called vls (vapor-liquid-solid) mechanism, the solubility and diffusion of the dopant in the catalytic particle will also influence the dopant incorporation. 
one related effect, with similar long term consequences for nanowires in general, is the out-diffusion of dopants in the nanowire to surface sites. this effect is enhanced by the high surface to volume ratio of the nanowire. surface depletion effects, decreasing the volume of the carrier reservoir, will also be increased due to the high surface to volume ratio of the nanowire. summary of the invention in view of the foregoing, it is an object of the present invention to provide an improvement of semiconductor devices comprising nanowires with regards to properties related to doping of the nanowires. this is achieved by the semiconductor device and the method as defined in the independent claims. in a first aspect of the invention a semiconductor device comprising at least a first semiconductor nanowire is provided. the nanowire has a first lengthwise region of a first conductivity type, a second lengthwise region of a second conductivity type, and at least a first wrap gate electrode arranged at said first region. said wrap gate electrode is adapted to vary the charge carrier concentration in at least a first portion of the nanowire associated with the first lengthwise region when a voltage is applied to the first wrap gate electrode. the second lengthwise region may be arranged in sequence with the first lengthwise region along the length of the nanowire or in a second nanowire that is electrically connected to the first nanowire. additional wrap gates can be arranged at the second lengthwise region or other regions in order to vary the charge carrier concentration along the length of the nanowire. the first nanowire of the semiconductor device may comprise a core and at least a first shell layer forming a radial heterostructure, which may be used to produce light. in one embodiment of the invention the semiconductor device is adapted to work as a thermoelectric element. 
in a second aspect of the invention a semiconductor device comprising a nanowire that comprises a ferromagnetic material is provided in order for the semiconductor device to work as e.g. a memory device. this is attained by applying a voltage to a wrap gate electrode arranged at a region of the nanowire in order to change the charge carrier concentration such that the ferromagnetic properties of the ferromagnetic material change. thanks to the invention it is possible to replace conventional doping or avoid substantial doping of semiconductor devices, and nanowire-based semiconductor devices in particular, with local gating and inversion. by way of example this enables the formation of an improved pn junction without space charges in the depletion region as in conventional devices and tunable semiconductor devices, such as wavelength tunable leds (light emitting diodes). embodiments of the invention are defined in the dependent claims. other objects, advantages and novel features of the invention will become apparent from the following detailed description of the invention when considered in conjunction with the accompanying drawings and claims. brief description of the drawings preferred embodiments of the invention will now be described with reference to the accompanying drawings, wherein: figs. 1 a - b are schematic illustrations of a nanowire having a wrap gate electrode for variation of the conductivity of the nanowire according to the invention; figs. 2 a - b are schematic illustrations of a nanowire having a double wrap gate for formation of an artificial pn junction according to the invention; figs. 3 a - i are schematic illustrations showing the effect of the activation of the wrap gate electrodes in some embodiments of the present invention; figs. 4 a - c are schematic diagrams of conversion of a depleted nanowire to a nanowire comprising an artificial pn junction according to the invention; figs. 
5 a - b are schematic illustrations of nanowires comprising a plurality of quantum wells according to the present invention; fig. 6 is a schematic illustration of a nanowire comprising a radial heterostructure according to the present invention and a pl-diagram from excitation of such a structure; and fig. 7 a - b are schematic illustrations of a thermoelectric element according to the present invention. detailed description of embodiments the embodiments of the present invention are based on nanostructures including so-called nanowires. for the purpose of this application, nanowires are to be interpreted as having nanometre dimensions in their width and diameter and typically having an elongated shape that provides a one-dimensional nature. such structures are commonly also referred to as nanowhiskers, nanorods, nanotubes, one-dimensional nanoelements, etc. the basic process of nanowire formation on substrates by particle assisted growth or the so-called vls (vapour-liquid-solid) mechanism described in u.s. pat. no. 7,335,908, as well as different types of chemical beam epitaxy and vapour phase epitaxy methods, are well known. however, the present invention is limited to neither such nanowires nor the vls process. other suitable methods for growing nanowires are known in the art and are for example shown in international application no. wo 2007/104781. from this it follows that nanowires may be grown without the use of a particle as a catalyst. thus selectively grown nanowires and nanostructures, etched structures, other nanowires, and structures fabricated from nanowires are also included. nanowires are not necessarily homogeneous along the length thereof. the nanometer dimensions enable not only growth on substrates that are not lattice matched to the nanowire material, but also heterostructures can be provided in the nanowire. 
the heterostructure(s) consists of a segment of a semiconductor material of different constitution than the adjacent part or parts of the nanowire. the material of the heterostructure segment(s) may be of different composition and/or doping. the heterojunction can either be abrupt or graded. the present invention is based on the use of a wrap gate electrode to control the charge carrier concentration of at least a portion of a nanowire that is used as transport channel in a semiconductor device in order to modulate the properties of the nanowire. referring to fig. 1 a , a semiconductor device according to the present invention comprises at least a first semiconductor nanowire 105 forming a transport channel of the semiconductor device, a first lengthwise region 121 of a first conductivity type, a second lengthwise region 122 of a second conductivity type, and at least a first wrap gate electrode 111 arranged at the first lengthwise region 121 of the first nanowire 105 in order to vary the charge carrier concentration in at least a portion of the nanowire associated with the first lengthwise region 121 when a voltage is applied to the first wrap gate electrode 111 . the first wrap gate electrode 111 encloses at least a portion of the nanowire 105 with a dielectric material (not shown) in-between. the effect of this gating is dependent on the voltage applied and the specific design of the semiconductor device, and the first gate electrode 111 and the nanowire 105 in particular, but for example it may cause a change of the charge carrier concentration in the complete first lengthwise region. the change of charge carrier concentration may be made to such an extent that the charge carrier type of a portion of the nanowire changes. this enables creation of different “artificial” devices, such as artificial pn-junctions. the change of charge carrier concentration can also be used to change ferromagnetic properties of the nanowire. 
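the strength of such a gating effect can be estimated by treating the wrap gate and the enclosed nanowire segment as a coaxial capacitor. the following sketch uses illustrative dimensions and oxide permittivity (assumptions for illustration, not values from this disclosure) to estimate how many carriers a given gate voltage induces per unit length of wire:

```python
import math

# Back-of-the-envelope estimate of gate-induced carrier density in a
# wrap-gated nanowire, modeled as a coaxial capacitor.
# All dimensions and material values below are illustrative assumptions.
EPS0 = 8.854e-12   # vacuum permittivity, F/m
Q_E = 1.602e-19    # elementary charge, C

def coaxial_capacitance_per_length(r_wire_m, t_ox_m, eps_r):
    """Capacitance per unit length of a cylindrical gate around a wire."""
    return 2 * math.pi * EPS0 * eps_r / math.log((r_wire_m + t_ox_m) / r_wire_m)

def induced_line_density(v_gate, r_wire_m=15e-9, t_ox_m=10e-9, eps_r=9.0):
    """Carriers induced per metre of wire length for a given gate voltage."""
    c_per_len = coaxial_capacitance_per_length(r_wire_m, t_ox_m, eps_r)
    return c_per_len * v_gate / Q_E   # carriers per metre

# e.g. 1 V on the gate of a 30 nm diameter wire with 10 nm high-k oxide
n_line = induced_line_density(1.0)
print(f"induced density = {n_line:.2e} carriers/m "
      f"(about {n_line * 1e-8:.0f} per 10 nm of wire)")
```

with these assumed numbers, a volt on the gate induces on the order of tens of carriers per 10 nm of wire, which is consistent with the text's point that gating alone can swing a thin, nominally undoped segment between depleted, n-type and p-type behaviour.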
this general description of the invention is detailed in the following. charge carrier types are commonly referred to as being either p-type or n-type. for the purpose of this application the charge carrier type can also be intrinsic, i.e. i-type. the p-type material has holes as majority charge carriers, and the n-type material has electrons as majority charge carriers, while the intrinsic-type material is a material without significant majority charge carrier concentration. hence, the intrinsic-type material may have either electrons or holes as charge carriers although at such a low concentration that the conductivity is due to other properties of the material than these charge carriers. as mentioned above the nanowire 105 may be homogenous with respect to composition and doping or the nanowire may have been subjected to band gap engineering e.g. by forming heterostructures along the nanowire. fig. 1 b schematically illustrates a semiconductor device according to one embodiment of the present invention comprising a first non-homogenous nanowire 105 grown in an orthogonal direction from a substrate 104 . a first wrap gate electrode 111 extends from the substrate along a portion of the nanowire and encloses a first lengthwise region 121 of the nanowire 105 with a dielectric material in-between. the nanowire 105 forms a transport channel, which is electrically connected by a top contact in one end portion of the nanowire 105 and by the substrate 104 at the other end of the nanowire 105 . the first nanowire 105 comprises at least one quantum well 115 , which may be in the form of a quantum dot enclosed by the first wrap gate electrode 111 and one wide bandgap barrier segment on each side of the quantum dot within the first lengthwise region 121 . 
the first lengthwise region 121 and the second lengthwise region 122 can be of the same or different conductivity type and moreover the conductivity properties can be changed by applying a voltage to one or more wrap gate electrodes. for example, in one embodiment of the present invention, a semiconductor device comprises at least a first nanowire 105 that is homogenously n-doped with a second lengthwise region 122 arranged in sequence with a first lengthwise region 121 along the length of the nanowire 105 . a first wrap gate electrode 111 is arranged at the first lengthwise region 121 of the first nanowire 105 to vary the charge carrier concentration so that the first region 121 , when a pre-determined voltage is applied to the first wrap gate electrode 111 , becomes a p-type region. accordingly a pn junction is actively formed. the charge carrier concentration can be varied in a plurality of lengthwise regions by arranging a plurality of wrap gate electrodes at the lengthwise regions. referring to fig. 2 a , a semiconductor device according to one embodiment of the present invention comprises at least a first nanowire 105 . the first nanowire 105 has a first wrap gate electrode 111 arranged at a first lengthwise region 121 of the first nanowire 105 and a second wrap gate electrode 112 arranged at a second lengthwise region 122 of the first nanowire 105 . each wrap gate electrode is adapted to vary the charge carrier concentration of the corresponding region 121 , 122 of said first nanowire 105 when voltages are applied to the wrap gate electrodes 111 , 112 . fig. 
2 b schematically illustrates such a double-gated nanowire 105 with the wrap gate electrodes activated such that the charge carrier concentrations of the first and second lengthwise regions are changed from originally intrinsic to p-type in the first lengthwise region 121 and n-type in the second lengthwise region 122 , thereby forming a pn- or pin junction 114 at the interface 116 between the first lengthwise region 121 and the second lengthwise region 122 . by changing the voltages applied, the properties of the pn-junction, such as the properties defined by the width and the position of a depletion region between the p-type region and the n-type region or the width of the p-type and n-type regions, can be varied. as appreciated by one skilled in the art, either one of the regions 121 , 122 can be made p-type or n-type and artificial pn-junctions can be formed also from originally n-type or p-type nanowires. thus, the variation of the charge carrier concentration of one or more of the first and second regions 121 , 122 may be used to form a junction 114 at the interface 116 between lengthwise regions. this junction is either not actually present in the first nanowire 105 before activation of the wrap gate electrodes 111 , 112 or a junction between regions of different conductivity type that already is present in the passive state may be moved along the length of the nanowire. this kind of junction is hereinafter referred to as an artificial junction or in the particular case with adjacent regions of p-type and n-type an artificial pn junction. while the invention has been illustrated by examples of embodiments having one or two wrap gate structures per nanowire, it is of course conceivable to have three or more wrap gate structures per nanowire. a plurality of wrap gate electrodes may be arranged at different positions along a nanowire to tailor the charge carrier concentration and/or type along the length of the nanowire. 
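for a rough feel of how the gate-controlled carrier densities set the width and position of the depletion region at such an artificial pn junction, the standard abrupt-junction textbook formula can be used. the sketch below uses assumed gaas-like parameters for illustration only, not data from this disclosure:

```python
import math

# Textbook abrupt pn-junction depletion estimate, used to illustrate how
# shifting the effective p- and n-side carrier densities (here, by gate
# voltages) shifts the depletion region's width and position.
EPS0, Q = 8.854e-12, 1.602e-19

def depletion(v_bi, n_a, n_d, eps_r=12.9):  # eps_r ~ GaAs (assumed)
    """Return (total width, p-side extent, n-side extent) in metres.
    Carrier densities in m^-3, built-in voltage in volts."""
    eps = eps_r * EPS0
    w = math.sqrt(2 * eps * v_bi / Q * (1 / n_a + 1 / n_d))
    x_p = w * n_d / (n_a + n_d)   # depletion reaches further into the lighter side
    x_n = w * n_a / (n_a + n_d)
    return w, x_p, x_n

w, x_p, x_n = depletion(v_bi=1.2, n_a=1e24, n_d=1e23)
print(f"W = {w*1e9:.1f} nm (p-side {x_p*1e9:.1f} nm, n-side {x_n*1e9:.1f} nm)")
```

raising or lowering either gate voltage effectively changes n_a or n_d in this picture, which both rescales the total width and slides the junction toward the more weakly gated region, matching the tunability described above.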
it should be noted that, when the voltage is applied to the first wrap gate electrode 111 that surrounds the first lengthwise region 121 , a portion 101 of the nanowire 105 associated with the first lengthwise region 121 changes charge carrier concentration. analogously, when the voltage is applied to a second or a third wrap gate electrode 112 , 113 that surrounds a second lengthwise region 122 or a third lengthwise region 123 , respectively, portions 102 , 103 of the nanowire 105 change charge carrier concentration. the magnitude of the voltage applied determines the extension of said portion and if the conductivity type is changed. figs. 3 a - i schematically illustrate embodiments of the present invention with different wrap gate electrode and conductivity type configurations. although the embodiments are illustrated at an active state when the applied voltage is relatively low and the portions that have changed conductivity type only extend partly into the nanowire or the adjacent regions it should be understood that at a higher voltage level said portions will have larger extension, i.e. the nanowire will change conductivity type over the whole width and over a complete region at a pre-determined voltage level. only at a certain voltage level a lengthwise junction is formed. a brief description of each of the figs. 3 a - i is given in the following. in fig. 3 a the first and second lengthwise regions 121 , 122 are of p-type, and when applying a voltage (potential) to the first wrap gate electrode 111 , which is arranged at the said first region 121 , at least a portion of the said first region is transferred to n-type. thus a pn-junction is eventually formed between the said first and second regions 121 , 122 . in fig. 3 b a first and a second region 121 , 122 are gated by a first and a second wrap gate electrode 111 , 112 , respectively. 
the nanowire is at least in said regions intrinsic and by applying voltages to the wrap gate electrodes 111 , 112 at least a portion of the first region becomes n-type and at least a portion of the second region becomes p-type, thereby eventually forming an artificial pn-junction between the first and second regions. the nanowire in fig. 3 c comprises an n-type region 123 and a p-type region with an intrinsic region in-between. by applying voltage to one or more of the wrap gate electrodes, one wrap gate electrode surrounding each region, the interfaces between the intrinsic region 121 and the adjacent regions 122 , 123 can be moved. in fig. 3 d the nanowire comprises a p-type material in the first region 121 and an n-type material in the second region 122 . by operating the device in accordance with fig. 3 a the pn-junction between the first and second regions can be erased. fig. 3 e is the same as fig. 3 a although having intrinsic regions 121 , 122 . in fig. 3 h the first region 121 is p-type and the second region 122 is n-type, but by applying voltages to wrap gate electrodes arranged at each region 121 , 122 the charge carrier type can be changed, i.e. the pn junction becomes an np junction. figs. 3 f - g are analogous to fig. 3 c , although with different voltages applied to the wrap gate electrodes or a different configuration of wrap gate electrodes active. fig. 3 i schematically illustrates how an interface between a p-type region and an n-type region can be moved. the activation of one or a plurality of wrap gates gives the possibility to locally force the band gap in one direction or the other. by having two adjacent wrap gate electrodes forcing the band gap in different directions an artificial pn junction may be accomplished. this makes it possible to replace conventional doping of nanowires. by way of example this enables the formation of an improved pn junction without space charges in the depletion region as in conventional devices. 
as mentioned, the nanowires of the present invention may be e.g. undoped (intrinsic) or only p- or n-doped, which simplifies the manufacturing of nanowire semiconductor devices. the nanowires can be homogenous with respect to doping, however not limited to this. this opens up new possibilities, such as the possibility to use thinner nanowires, which have a true one dimensional behaviour. the present invention allows the construction of a semiconductor device comprising inhomogeneous induction of regions where transport is carried by electrons and/or holes along a nanowire, where, for instance, one half of the nanowire will be electron-conducting and the other half be hole-conducting, thus effectively providing a tunable artificial pn junction along the length of the nanowire. one advantage of the present invention is that, in principle, undoped nanowires, for which carriers are provided from the gated regions, are used. this enables semiconductor devices, such as rectifiers and light-emitting diodes, which are intimately based on the unique opportunities offered by nanowires. although single pn junctions have been described above, other kinds of combination of regions behaving as n- and p-regions will be possible, e.g. a gate-induced n-p-n bipolar transistor configuration. fig. 4 a schematically illustrates local conversion of an otherwise depleted nominally undoped (60 nm diameter) gaas nanowire 105 according to fig. 2 b , wherein a first region 121 closest to a (p-type) substrate 104 is converted to p-type conductivity, and a second region 122 , closest to an n-type termination of the nanowire is converted to n-type conductivity when voltages are applied to the wrap gate electrodes 111 , 112 . these wrap gate electrodes 111 , 112 can be part of one electrical circuit having a common voltage source in-between, whereby the interface between the converted regions can be moved. 
for zero-potential on the gates the nanowire 105 is depleted and for +/−3 v on the two gates 111 , 112 n- and p-doped behaviour, respectively, is obtained. with an applied bias between substrate and the n-type termination, this will operate as an artificial pn junction, by way of example for use as a nano-led. in one embodiment of the present invention the semiconductor device is functional as such an led having at least two wrap gate electrodes allowing a recombination region of the led to be moved along the length of the nanowire, e.g. to obtain a wave-length tunable led having a graded composition along the length of the nanowire. the graded composition may comprise segments of different composition along the length of the nanowire. a varying dimension, i.e. diameter, along the length of the nanowire can be used alone or in combination with varying composition in order to accomplish the tunable led. fig. 4 b schematically illustrates the behaviour with the applied bias and fig. 4 c illustrates the spatial distribution of electrons and holes at 0v bias and at 1.3v bias. referring to figs. 5 a - b , one embodiment of a semiconductor device according to the present invention comprises a first nanowire 105 having a sequence of quantum wells 115 distributed along the length thereof. one or more wrap gate electrodes are arranged at different positions along the length of the nanowire which allows tuning of the recombination region to produce light to any of the quantum wells in order to generate light having a predetermined wavelength determined by the composition of the quantum well. in such a way switching between discrete wavelengths in a nanowire led device is conceivable. the wavelength of light emitted from a plurality of nanowires may also be combined to have a broader spectrum. fig. 5 a illustrates a nanowire 105 having two quantum wells of different composition in a position in-between the first and the second wrap gate electrode. 
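the wavelength tuning obtainable from a graded-composition segment can be illustrated with the commonly used linear fit for the direct band gap of al(x)ga(1−x)as. the linear grading profile and the composition range in the sketch below are assumptions for illustration, not taken from this disclosure:

```python
# Sketch: moving the recombination region along a composition-graded
# Al(x)Ga(1-x)As segment tunes the emission wavelength. The Eg(x) fit is
# the common textbook approximation; the grading profile is an assumption.

def bandgap_algaas(x):
    """Direct band gap of Al(x)Ga(1-x)As in eV (valid for x below ~0.45)."""
    return 1.424 + 1.247 * x

def emission_nm(x):
    """Emission wavelength in nm: lambda = hc / Eg = 1239.84 / Eg[eV]."""
    return 1239.84 / bandgap_algaas(x)

def wavelength_at(p, x_max=0.3):
    """Assume the Al fraction is graded linearly from 0 to x_max along the
    segment; p in [0, 1] is where the gates place the recombination zone."""
    return emission_nm(p * x_max)

for p in (0.0, 0.5, 1.0):
    print(f"relative position {p:.1f}: {wavelength_at(p):.0f} nm")
```

with this assumed grading, sliding the recombination zone across the segment sweeps the emission from roughly 870 nm down to about 690 nm, illustrating the continuous tunability described for the graded-composition led.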
by varying the voltages applied to the first and the second wrap gate electrodes 111 , 112 the extensions of the portions of the nanowire 105 that have changed charge carrier type from intrinsic to either p-type or n-type can be varied. thereby the recombination region can be moved to either of the quantum wells. fig. 5 b illustrates another embodiment comprising only a first gate 111 arranged at a first lengthwise region 121 having intrinsic conductivity type in the passive state. in a second lengthwise region 122 the nanowire is of p-type. the recombination region can be moved between two quantum wells of different composition in-between the first and the second regions 121 , 122 . as mentioned above, the doping of nanowires is challenging. in particular doping of nitride-based iii-v semiconductors, for example mg-doping of gan, is challenging. the performance of semiconductor devices made of this kind of materials, such as nanowire leds, can be improved by using wrap gates to increase the concentration of holes at the recombination region. referring to fig. 6 , one embodiment of a semiconductor device according to the present invention comprises at least a first nanowire 205 comprising a nanowire core 207 and at least a first shell layer 208 epitaxially arranged on the core 207 and at least partly surrounding the nanowire core 207 , providing a radial heterostructure. at least a first wrap gate electrode 211 is arranged at a first region 221 of the nanowire 205 . in one embodiment of the present invention both the core and one or more quantum wells defined in the first shell layer surrounding the core are conducting, with the carrier concentration in the shell layer being controlled by a first wrap-gate. in one implementation of this embodiment both the core and the shell layer are adapted to be electron-conducting by activation of the wrap gate electrode. 
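the discrete emission energies of the quantum wells between which the recombination region is moved can be estimated with a simple particle-in-a-box model. the sketch below assumes infinite barriers and gaas-like effective masses; all numbers are illustrative assumptions, not values from this disclosure:

```python
import math

# Illustrative estimate of why quantum wells of different thickness (or
# composition) emit at discrete, different energies - the basis for
# switching between discrete wavelengths by moving the recombination
# region with the wrap gates. Infinite-barrier model; generic values.
HBAR = 1.0546e-34   # J*s
Q = 1.602e-19       # C
M0 = 9.109e-31      # electron rest mass, kg

def confinement_energy_ev(width_m, m_eff):
    """Ground-state particle-in-a-box energy in eV (infinite barriers)."""
    return (HBAR * math.pi / width_m) ** 2 / (2 * m_eff * M0) / Q

def emission_ev(eg_well_ev, width_m, m_e=0.067, m_h=0.45):  # GaAs-like masses
    """Well band gap plus electron and hole ground-state confinement."""
    return (eg_well_ev
            + confinement_energy_ev(width_m, m_e)
            + confinement_energy_ev(width_m, m_h))

for w_nm in (5, 10):
    e = emission_ev(1.424, w_nm * 1e-9)
    print(f"{w_nm} nm well: {e:.3f} eV -> {1239.84 / e:.0f} nm")
```

even in this crude model the two wells emit tens of nanometres apart, so gating the recombination zone from one well to the other switches the device between clearly distinct wavelengths, as described above.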
in another implementation of this embodiment the core is adapted to be n-conducting and the shell to be p-conducting by activation of the wrap gate electrode. in yet another implementation of this embodiment the charge carrier type is tunable. one embodiment of a semiconductor device according to the present invention comprises a nanowire having a gaas core and an algaas shell layer. this core-shell structure offers an opportunity to form spatially indirect excitons, i.e. with electrons and holes separated radially. studies of pl from excitons recombining in the core and in the shell layer of the gaas/algaas core-shell structure are shown in fig. 6 . referring to figs. 7 a - b , in one embodiment of the present invention the semiconductor device is a thermoelectric element. wrap gate controlled nanowires 305 make it possible to use the thermoelements of the present invention in room-temperature thermoelectrics. in general nanowire based technology is considered to be an extremely promising candidate for thermoelectric materials with an energy-conversion efficiency that exceeds traditional cooling and power conversion technologies. one challenge in the field is however the need for both p- and n-type nanowires with equally good performance characteristics to form a thermocouple. n-type devices are usually considered due to the substantially higher mobility for the electrons than for the holes in a typical iii/v material. in this embodiment wrap-gate induced carrier conduction is used to define p- and n-type nanowires 305 , 306 from otherwise identical nanowires, and tune these such that their performance matches, thus optimizing the performance of the resulting thermoelectric element, such as e.g. a thermocouple or a peltier element. in one implementation of this embodiment an entire wafer with a checker-board pattern of n- and p-regions is operated to provide thermoelectric effects for heating/cooling. 
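matching the performance of the gate-defined p- and n-legs of such a thermocouple amounts to matching their thermoelectric figure of merit, zt = s²σt/κ. a minimal sketch with hypothetical room-temperature numbers (assumptions for illustration only):

```python
# The dimensionless thermoelectric figure of merit ZT = S^2 * sigma * T / kappa
# sets how well matched p- and n-legs of a thermocouple perform. The leg
# parameters below are hypothetical, gate-tuned example values.

def zt(seebeck_v_per_k, sigma_s_per_m, kappa_w_per_mk, t_kelvin=300.0):
    """Figure of merit from Seebeck coefficient (V/K), electrical
    conductivity (S/m), and thermal conductivity (W/(m*K))."""
    return seebeck_v_per_k ** 2 * sigma_s_per_m * t_kelvin / kappa_w_per_mk

# hypothetical n-leg (negative Seebeck) and p-leg (positive Seebeck):
zt_n = zt(-200e-6, 5e4, 1.5)
zt_p = zt(+190e-6, 4e4, 1.4)
print(f"ZT(n) = {zt_n:.2f}, ZT(p) = {zt_p:.2f}")
```

since zt depends on the square of the seebeck coefficient, the sign difference between the legs does not matter for the figure of merit itself; the wrap-gate tuning described above is what lets the two otherwise identical wires be brought to comparable zt.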
in another embodiment of the present invention, wherein the semiconductor device is functional as a thermoelectric element, the semiconductor device comprises a radial heterostructure as described above, i.e. a nanowire with an n-type core 307 and a p-type shell layer 308 , and at least a first wrap gate electrode 311 surrounding a first region 321 of the nanowire 305 together forming a single-nanowire peltier element. whereas an array of a very large number of such nano-peltier elements could be used for cooling or power generation, a single such element might also represent an extremely effective nano-spot cooler. one embodiment of the present invention is related to spintronics. in this embodiment wrap-gate-induced carrier-modulation is used for formation and manipulation of ferromagnetic properties of dilutely doped magnetic semiconductors. it is known that free carriers, i.e. free holes, are mediating and inducing the spin-coupling between the magnetic impurities, which in most cases are mn-impurities with concentrations up to the %-level. until now, this carrier-mediated spin-coupling leading to ferromagnetic behavior has been extremely difficult to control since the hole-concentration is intimately correlated with the mn-doping concentration. by arranging one or more wrap gates around nanowires comprising said magnetic semiconductors in a manner described above, the present invention makes it possible to separately tune the free-carrier concentration using the wrap-gate-induced carrier-modulation. in one implementation of this embodiment a semiconductor device according to the present invention comprises dense arrays of mn-doped iii-v nanowires, for which an external gate is used to switch the ferromagnetism on and off. this device could be used for magnetic storage. by arranging the nanowires, for example in rows and columns, single nanowires are easily addressed. 
the anisotropy determined by the one-dimensional nature of the nanowires and the two-dimensional array arrangement improves the performance at higher temperatures as compared to conventional storage media. analogously to the gating of nanowires in order to create artificial junctions and to provide tunable leds as described above, the ferromagnetic properties of multiple regions of one nanowire can be controlled by a plurality of wrap gates arranged along the length of the nanowire. the basic structure for the wrap-gate-induced carrier-modulation for formation and manipulation of ferromagnetic properties is best illustrated by fig. 1 a and fig. 2 a . the charge carrier concentration of the nanowire is locally controlled, not in order to change charge carrier type, but such that the ferromagnetic properties are changed. nanowires in semiconductor devices according to the present invention may have a smaller diameter than used in the prior art. the diameter of nanowires in prior art semiconductor devices is typically more than 30 nm, often in the range of 30-50 nm. the present invention allows the use of nanowires having a diameter less than 30 nm, preferably less than 20 nm, and more preferably in the range of 10-20 nm. this is possible since modulation of the charge carrier concentration and/or type of essentially undoped nanowires is used. the present invention is however not limited to homogeneous nanowires, nanowires having a graded or varying composition along the length thereof may be used. furthermore, radial heterostructures may be utilized, as explained above. the present invention makes it possible to manipulate the carrier concentration over large ranges, including carrier inversion, and to do so independently for different segments along nanowires. this approach offers a complete tuning of the fermi-energy in ideal one dimensional nanowires. 
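the carrier-mediated ferromagnetism discussed above can be caricatured with the zener mean-field scaling known for dilute magnetic semiconductors such as (ga,mn)as, where the curie temperature grows roughly as tc ∝ x_mn · p^(1/3). the prefactor in the sketch below is an arbitrary illustrative choice, not a value from this disclosure:

```python
# Toy model of carrier-mediated ferromagnetism: in the Zener mean-field
# picture for dilute magnetic semiconductors, Tc scales roughly with the
# Mn fraction times the cube root of the hole density. Gating the hole
# density therefore switches/tunes Tc. The prefactor is purely illustrative.

def curie_temperature(x_mn, p_holes_m3, prefactor=2000.0):
    """Toy Tc in kelvin; prefactor chosen so that 5% Mn with 1e26 m^-3
    holes gives ~100 K (an assumed, not measured, reference point)."""
    return prefactor * x_mn * (p_holes_m3 / 1e26) ** (1.0 / 3.0)

print(curie_temperature(0.05, 1e26))   # gate fully "on": holes present
print(curie_temperature(0.05, 1e24))   # gate depletes the holes: lower Tc
```

the point of the sketch is only the trend: a wrap gate that depletes or accumulates holes in a region moves that region's curie temperature, which is what allows the ferromagnetism to be switched on and off as described above.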
based on experiences in the creation of ultra-short gate-lengths (about 50 nm), it is possible to stack such wrap-gates vertically. this will enable control of the transport channel of a nanowire along the length thereof via single quantum dots or single electron turn-stile designs. while the invention has been described for single nanowires, it is to be understood that a very large number (few to millions of) nanowires can be collectively gated in identical fashion. while the invention has been described in connection with what is presently considered to be the most practical and preferred embodiments, it is to be understood that the invention is not intended to be limited to the disclosed embodiments; on the contrary, it is intended to cover various modifications and equivalent arrangements within the scope of the appended claims.
199-692-355-962-899
US
[ "US" ]
G06F3/01,G06F3/16,G06V10/70,G09B5/04,G09B19/00,G09B5/00,H04N5/225
2016-11-14T00:00:00
2016
[ "G06", "G09", "H04" ]
system and method for providing guidance or feedback to a user
a device for providing guidance or feedback to a user. the device includes a camera configured to detect image data indicating a user performance of an activity. the system includes a guidance unit connected to the camera. the guidance unit is configured to identify the activity based on image processing of the image data or an identification of the activity from the user. the guidance unit is also configured to determine a criteria associated with the activity. the guidance unit is also configured to determine a user performance of the activity based on the image data. the guidance unit is also configured to determine feedback based on a comparison of the criteria and the user performance of the activity, the feedback indicating an improvement or suggestion for the user. the system also includes an output unit connected to the guidance unit, the output unit configured to output the feedback.
1. a wearable neck device for providing guidance or feedback to a user, the wearable neck device comprising: a body having a neck portion configured to rest against a back of a neck of the user, a first side portion connected to the neck portion, and a second side portion connected to the neck portion, the first side portion configured to extend across a first shoulder of the user and rest on a front body portion of the user and the second side portion configured to extend across a second shoulder of the user and rest on the front body portion of the user; a camera located on the first side portion or the second side portion and configured to detect image data of a first angle of a user performance of an activity; a gps unit located within the body and configured to detect location data; a memory located within the body and configured to store a learned model associated with the activity; an activity detection unit located in the body, connected to the camera and the gps unit, and configured to automatically analyze the image data to identify a presence of one or more objects associated with the activity, compare the one or more objects associated with the activity to the learned model and identify the activity based on the location data and the comparison of the one or more objects associated with the activity to the learned model; a guidance unit located in the body, connected to the camera and the imu, and configured to: obtain, from a second camera of another device, additional image data of a second angle of the user performance of the activity, determine a series of instructions associated with the activity and including a plurality of steps to be performed by the user based on the learned model, determine an action being performed within the detected image data, determine a current step of the plurality of steps of the series of instructions associated with the activity that is being performed based on the action, determine that the current step of the 
activity has been completed based on the image data of the first angle of the user performance of the activity and the additional image data of the second angle of the user performance of the activity, and determine an instruction associated with a next step within the series of instructions in response to determining that the current step has been completed; and an output unit located on the body, connected to the guidance unit, and configured to output the instruction to the user. 2. the wearable neck device of claim 1 , further comprising an input unit configured to detect input data from the user identifying the activity. 3. the wearable neck device of claim 1 , wherein the learned model stored in the memory is periodically updated. 4. the wearable neck device of claim 1 , wherein the output unit includes a speaker configured to provide an audio output or a vibration unit configured to provide a tactile output. 5. the wearable neck device of claim 1 , wherein the one or more objects identified by the activity detection unit are one or more stationary objects unconnected to the user. 6. 
a wearable neck device for providing guidance or feedback to a user, the wearable neck device comprising: a body having a neck portion configured to rest against a back of a neck of the user, a first side portion connected to the neck portion, and a second side portion connected to the neck portion, the first side portion and the second side portion configured to extend across a shoulder of the user and rest on a front body portion of the user; a camera located on the first side portion or the second side portion and configured to detect image data of a first angle of a user performance of an activity; an inertial measurement unit (imu) located within the body and configured to detect movement data associated with the user during the user performance of the activity; a gps unit located within the body and configured to detect location data; an activity detection unit connected to the camera and the gps unit, and configured to: automatically analyze the image data to identify the activity based on a presence of one or more objects associated with the activity, and identify the activity based on the presence of the one or more objects associated with the activity and the location data; a guidance unit connected to the camera and the imu, and configured to: obtain, from a second camera of another device, additional image data of a second angle of the user performance of the activity, determine a series of instructions associated with the activity and including a plurality of steps to be performed by the user, determine an action being performed within the image data, determine a current step of the plurality of steps of the series of instructions associated with the activity that is being performed based on the image data, the additional image data and the movement data, determine that the current step of the activity has been completed based on the image data of the first angle of the user performance of the activity and the additional image data of the second angle 
of the user performance of the activity, and determine an instruction associated with a next step within the series of instructions; and an output unit connected to the guidance unit, the output unit configured to output the instruction. 7. the wearable neck device of claim 6 , further comprising an input unit configured to detect input data from the user indicating the activity. 8. the wearable neck device of claim 6 , further comprising: a memory configured to store a learned model; wherein the guidance unit is configured to determine that the current step of the activity has been completed by comparing the image data to the learned model stored in the memory. 9. the wearable neck device of claim 6 , wherein the output unit includes a speaker configured to provide an audio output or a vibration unit configured to provide a tactile output. 10. a method of providing guidance or feedback to a user of a wearable neck device, the method comprising: providing, by the wearable neck device, a body having a neck portion configured to rest against a back of a neck of the user, a first side portion connected to the neck portion, and a second side portion connected to the neck portion, the first side portion configured to extend across a first shoulder of the user and rest on a front body portion of the user and the second side portion configured to extend across a second shoulder of the user and rest on the front body portion of the user; detecting, by a camera located on the first side portion or the second side portion of the body of the wearable neck device, image data of a first angle of a user performance of an activity; detecting, by a gps unit, location data associated with the wearable neck device; storing, by a memory located within the body, a learned model associated with the activity; automatically analyzing, by an activity detection unit, the image data to identify a presence of one or more objects; identifying, by the activity detection unit, the presence of 
the one or more objects associated with the activity; comparing, by the activity detection unit, the one or more objects associated with the activity to the learned model; identifying, by the activity detection unit, the activity based on the location data and the comparison of the one or more objects associated with the activity to the learned model; obtaining, by a guidance unit and from a second camera of another device, additional image data of a second angle of the user performance of the activity; determining, by the guidance unit, a series of instructions associated with the activity and including a plurality of steps to be performed by the user based on the learned model; determining, by the guidance unit, an action being performed within the image data; determining, by the guidance unit, a current step of the plurality of steps of the series of instructions associated with the activity that is being performed based on the action; determining, by the guidance unit, that the current step of the activity has been completed based on the image data of the first angle of the user performance of the activity and the additional image data of the second angle of the user performance of the activity; determining, by the guidance unit, an instruction associated with a next step within the series of instructions; and outputting, by an output unit located on the body, the instruction. 11. the method of claim 10 , wherein identifying the activity includes receiving, by an input unit, input data from the user indicating the activity. 12. the method of claim 10 , wherein the method further comprises periodically updating the learned model stored in the memory.
background 1. field the present disclosure relates to providing information by a device, and more particularly to a system and a method for providing guidance or feedback to a user performing an activity. 2. description of the related art an individual performing an activity, such as cooking, repairing a vehicle, or playing a sport, may follow a set of instructions. for example, an individual who is cooking or baking may follow a recipe. in another example, an individual who is repairing a component of a vehicle may follow instructions for disassembling the component, repairing the component, and reassembling the component. however, following a set of instructions may not always be convenient. an individual following a recipe may print the recipe on a paper or may view the recipe on a mobile device, such as a tablet or a smartphone. however, the integrity of the paper may become compromised if subjected to water or foods spilling on the paper, and the mobile device may turn off or may dim the display, requiring periodic engagement with the screen. in addition, it is often the responsibility of the individual to ensure the instructions are being followed so that the activity is successfully completed with no other oversight. thus, there is a need for systems and methods for providing more convenient guidance and feedback to users. summary what is described is a system for providing guidance or feedback to a user. the system includes a camera configured to detect image data indicating a user performance of an activity. the system also includes a guidance unit connected to the camera. the guidance unit is configured to identify the activity based on image processing of the image data or an identification of the activity from the user. the guidance unit is also configured to determine a criteria associated with the activity. the guidance unit is also configured to determine a user performance of the activity based on the image data. 
the guidance unit is also configured to determine feedback based on a comparison of the criteria and the user performance of the activity, the feedback indicating an improvement or suggestion for the user. the system also includes an output unit connected to the guidance unit, the output unit configured to output the feedback. also described is a device for providing guidance or feedback to a user. the device includes a camera configured to detect image data indicating a user performance of an activity. the device also includes a guidance unit connected to the camera. the guidance unit is configured to identify the activity based on image processing of the image data or an identification of the activity from the user. the guidance unit is also configured to determine a set of instructions associated with the activity. the guidance unit is also configured to determine a current stage of the activity based on the image data. the guidance unit is also configured to determine a next instruction from the set of instructions to provide the user based on the current stage. the device also includes an output unit connected to the guidance unit, the output unit configured to output the next instruction. also described is a method for providing guidance or feedback to a user. the method includes detecting, by a camera, image data indicating a user performance of an activity. the method also includes identifying, by a guidance unit, the activity based on image processing of the image data or an identification of the activity from the user. the method also includes determining, by the guidance unit, a criteria or a set of instructions associated with the activity. the method also includes determining, by the guidance unit, a user performance of the activity based on the image data or a current stage of the activity based on the image data. 
the method also includes determining, by the guidance unit, feedback based on a comparison of the criteria and the user performance of the activity or a next instruction from the set of instructions to provide the user based on the current stage. the method also includes outputting, by an output unit, the feedback or the next instruction. brief description of the drawings other systems, methods, features, and advantages of the present invention will be or will become apparent to one of ordinary skill in the art upon examination of the following figures and detailed description. it is intended that all such additional systems, methods, features, and advantages be included within this description, be within the scope of the present invention, and be protected by the accompanying claims. component parts shown in the drawings are not necessarily to scale, and may be exaggerated to better illustrate the important features of the present invention. in the drawings, like reference numerals designate like parts throughout the different views, wherein: fig. 1a illustrates an exemplary use of a system for providing guidance or feedback to a user cooking food, according to an embodiment of the present invention; fig. 1b illustrates an exemplary use of a system for providing guidance or feedback to a user repairing a vehicle, according to an embodiment of the present invention; fig. 1c illustrates an exemplary use of a system for providing guidance or feedback to a user swinging a golf club, according to an embodiment of the present invention; fig. 1d illustrates an exemplary use of a system for providing guidance or feedback to a user administering first aid, according to an embodiment of the present invention; fig. 1e illustrates an exemplary use of a system for providing guidance or feedback to a user regarding food or drink consumption, according to an embodiment of the present invention; fig. 
1f illustrates an exemplary use of a system for providing guidance or feedback to a user regarding a piece of art, according to an embodiment of the present invention; fig. 1g illustrates an exemplary use of a system for providing guidance or feedback to a user regarding speech behavior, according to an embodiment of the present invention; fig. 1h illustrates an exemplary use of a system for providing guidance or feedback to a user regarding reminders for when the user leaves the user's house, according to an embodiment of the present invention; fig. 2 is a block diagram of components of a system for providing guidance or feedback to a user, according to an embodiment of the present invention; fig. 3 is a block diagram of components of a system for providing guidance or feedback to a user, according to another embodiment of the present invention; fig. 4 illustrates an exemplary device, according to an embodiment of the present invention; fig. 5 illustrates a method for providing feedback to a user of a device, according to an embodiment of the present invention; and fig. 6 illustrates a method for providing guidance to a user of a device, according to an embodiment of the present invention. detailed description disclosed herein are systems and methods for providing guidance or feedback to a user. the systems and methods disclosed herein determine an activity the user is engaged in, and automatically provides guidance or feedback to the user. the activity may be detected by a camera on a device worn by the user, or may be provided to the wearable device by the user. the user receives the guidance or feedback from the wearable device, allowing the user to perform the activity without occupying the user's hands. the guidance or feedback may be suggestions for the user or may be instructions for the user to follow. the systems and methods provide several benefits and advantages, such as providing updated, accurate, and personalized guidance or feedback for the user. 
additional benefits and advantages include the user not having to rely on memory to remember instructions for performing an activity. as such, the user may perform the activity at a higher level and may achieve more consistent and better results. further, the user may be more capable while using the systems and methods disclosed herein, as the user can access instructions on how to perform activities the user may not have previously been capable of performing. the systems and methods provide additional benefits and advantages such as allowing users to become less reliant on other human beings to teach the users how to do things and inform the users about suggestions or reminders. an exemplary system includes a camera configured to detect image data indicating a user performance of an activity or a user about to begin an activity. the system also includes a guidance unit connected to the camera. the guidance unit is configured to identify the activity based on image processing of the image data or an identification of the activity from the user. the guidance unit is also configured to determine a criteria associated with the activity. the guidance unit is also configured to determine a user performance of the activity based on the image data. the guidance unit is also configured to determine feedback based on a comparison of the criteria and the user performance of the activity, the feedback indicating an improvement or suggestion for the user. the system also includes an output unit connected to the guidance unit, the output unit configured to output the feedback. figs. 1a-1h illustrate various exemplary situations where the system for providing guidance or feedback to a user may be used. in each of the figs. 1a-1h , there is a user 102 of a device 100 . the system for providing guidance or feedback to the user includes the device 100 . 
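The criteria-comparison loop in the exemplary system above (identify the activity, look up its criteria, compare the user's performance, output feedback) can be sketched as follows. This is a minimal illustration only: the activity name, metric names, and threshold values are hypothetical assumptions, not part of the disclosure.

```python
# Hypothetical criteria table: per activity, metric -> (minimum score, suggestion).
CRITERIA = {
    "golf_swing": {
        "back_straightness": (0.8, "you should keep your back straight"),
        "left_arm_straightness": (0.9, "keep your left arm straight"),
    },
}

def determine_feedback(activity, performance):
    """Compare measured performance metrics against the activity's criteria
    and return suggestions for every metric that falls short."""
    feedback = []
    for metric, (minimum, suggestion) in CRITERIA.get(activity, {}).items():
        if performance.get(metric, 0.0) < minimum:
            feedback.append(suggestion)
    return feedback

# A swing with a bent back but a straight left arm yields one suggestion.
print(determine_feedback("golf_swing",
                         {"back_straightness": 0.6,
                          "left_arm_straightness": 0.95}))
```

In the device itself the performance scores would come from image processing rather than a literal dictionary; the sketch only shows the comparison step.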
the device 100 is illustrated as a wearable device resembling a necklace, but other devices, such as a wearable smart watch or a smartphone may be used. the device 100 is configured to provide an output 104 via an output unit including a speaker 110 , for example. the output 104 may be an audio output from a speaker 110 or a tactile output from a vibration unit. the output 104 may be guidance or feedback. when the output 104 is feedback, the output 104 may be a suggestion, a reminder, an improvement, or general information for the user 102 . when the output 104 is guidance, the output 104 may be an instruction for the user 102 in performing an activity. the device 100 may automatically identify the activity being performed by the user 102 using a camera 106 . the user 102 may provide an identification of the activity to the device 100 using an input unit 108 . the input unit 108 may be a touchpad, a keyboard, or a microphone, for example. fig. 1a illustrates the user 102 making cookies. the device 100 may identify that the user 102 is making cookies. the device 100 may detect image data using a camera 106 and the device 100 may analyze the detected image data to determine that the user 102 is making cookies. for example, the camera 106 detects a cookie box or mix or ingredients to make cookies and determines that the user 102 wants to make cookies. the device 100 may compare the detected image data with a learned model to determine that the activity performed by the user 102 is making cookies. part of the learned model may be recognition of objects 112 or actions associated with the activity. in fig. 1a , the learned model may include a bowl, a hand mixer, ingredients such as flour, or the action of scooping flour with a measuring spoon or measuring cup. 
alternatively, or in addition, the user 102 may speak into the input unit 108 an indication that the user 102 is making cookies, such as “hey, i'm making chocolate chip cookies, can you help me?” or “hey, teach me how to make chocolate chip cookies.” in addition, the user 102 may type into the input unit 108 an indication that the activity the user 102 is engaged in is making cookies. the device 100 may output an output 104 that is feedback. the device 100 may determine, based on image data, that the user 102 has not scooped enough flour, and the device 100 may output an output 104 such as “you might want to check how much flour you scooped.” the user 102 may prompt the device 100 for feedback using the input unit 108 . for example, the user 102 may say “hey, did i scoop enough flour?” and the device 100 may, based on detected image data, determine a response to the prompt provided by the user 102 . the device 100 may output an output 104 that is guidance. the device 100 may determine, based on image data, that the user has finished performing a step in a series of instructions. for example, the device 100 may detect that the user 102 has finished adding flour to the bowl, and that the user 102 should next add baking soda. the device 100 may provide an output 104 that is an instruction, such as “next, after the flour, you should add 2 teaspoons of baking soda.” the learned model may be stored locally on the device 100 or remotely. the learned model may be periodically updated. for example, the user 102 may identify a particular chocolate chip cookie recipe the user likes, or a most popular chocolate chip cookie recipe may be provided in the learned model. another example embodiment is illustrated in fig. 1b . in fig. 1b , the user 102 wearing the device 100 is repairing a vehicle. the device 100 may identify that the user 102 is repairing a vehicle. 
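The step-progression behavior described above (detect that the current step is done, then announce the next instruction) can be sketched as a simple lookup over an ordered series of steps. The recipe steps and instruction strings below are illustrative assumptions; in the device, step completion would be judged from image data rather than passed in as a flag.

```python
# Hypothetical ordered series: (step name, instruction for the following step).
RECIPE_STEPS = [
    ("add flour", "next, after the flour, you should add 2 teaspoons of baking soda"),
    ("add baking soda", "next, mix in the sugar and butter"),
    ("mix ingredients", "finally, scoop the dough onto the baking sheet"),
]

def next_instruction(current_step_index, step_completed):
    """Return the instruction for the next step, or None if the current
    step is still in progress or the series has been exhausted."""
    if not step_completed or current_step_index >= len(RECIPE_STEPS):
        return None
    return RECIPE_STEPS[current_step_index][1]

# After the flour step is judged complete, the baking-soda instruction follows.
print(next_instruction(0, True))
```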
the device 100 may detect image data using a camera 106 and the device 100 may analyze the detected image data to determine that the user 102 is repairing the vehicle. the device 100 may compare the detected image data with a learned model to determine that the activity performed by the user 102 is repairing the vehicle. in fig. 1b , the learned model may include objects 112 , such as a vehicle, a vehicle part, tools, or the action of engaging the vehicle part with the tool. alternatively, or in addition, the user 102 may speak into the input unit 108 an indication that the user 102 is repairing a vehicle, such as “hey, i'm trying to repair this engine for a 1982 car make x, car model y, can you help me?” or “hey, teach me how to replace a gasket in an engine.” in addition, the user 102 may type into the input unit 108 an indication that the activity the user 102 is engaged in is repairing a vehicle. as described herein, the learned model may be updated to provide the most up-to-date and accurate guidance or feedback possible. the device 100 may output an output 104 that is feedback. the device 100 may determine, based on image data, that the user 102 may have forgotten to replace a removed engine component, and the device 100 may output an output 104 such as “you might want to check if you replaced all of the bolts.” the user 102 may prompt the device 100 for feedback using the input unit 108 . for example, the user 102 may say “hey, did i miss anything when putting this engine back together?” and the device 100 may, based on detected image data, determine a response to the prompt provided by the user 102 . the device 100 may output an output 104 that is guidance. the device 100 may determine, based on image data, that the user has finished performing a step in a series of instructions. for example, the device 100 may detect that the user 102 has finished removing the bolts, and that the user 102 should next remove the cover plate and clean the surface of debris. 
the device 100 may provide an output 104 that is an instruction, such as “next, after you remove the bolts, remove the cover plate and clean the surface of any debris.” fig. 1c illustrates a user 102 wearing a device 100 , a second user 132 wearing a second device 130 , and a third device 134 . the device 100 may also be used with other devices (e.g., second device 130 and third device 134 ) to provide guidance or feedback to the user 102 . in some situations, the camera 106 of the device 100 may be unable to view the user 102 or the user's actions to properly assess the user's performance. in other situations, the device 100 may benefit from having additional image data of different angles of the activity being performed by the user 102 to provide more comprehensive feedback to the user 102 . as illustrated in fig. 1c , the user 102 is performing an activity of playing golf. the device 100 may identify that the user 102 is playing golf. the device 100 may use location data to determine that the user 102 is playing golf, when the location data indicates that the device 100 and the user 102 are at a golf course. the device 100 may detect image data using a camera 106 and the device 100 may analyze the detected image data to determine that the user 102 is playing golf. the detected image data may be the user 102 carrying a set of golf clubs. the device 100 may compare the detected image data with a learned model to determine that the activity performed by the user 102 is playing golf. the learned model may include objects 112 such as a golf club, wide areas of grass, or a golf ball. alternatively, or in addition, the user 102 may speak into the input unit 108 an indication that the user 102 is playing golf, such as “hey, i'm playing golf, how does my swing look?” or “hey, teach me how to swing a 5 iron properly.” in addition, the user 102 may type into the input unit 108 an indication that the activity the user 102 is engaged in is playing golf. 
the device 100 may not be able to view the user's swing and form from the perspective of the user 102 . the device 100 may communicate with other devices, such as second device 130 or third device 134 , to evaluate the user's actions, to provide feedback. the device 100 may communicate directly with the other devices using a device-to-device protocol such as bluetooth or wi-fi direct. the device 100 may communicate with the other devices via a remote server, such as a cloud based server, whereby the other devices (e.g., the second device 130 and the third device 134 ) communicate image data to the cloud based server, and the device 100 retrieves the image data from the cloud based server. the other devices, such as the second device 130 and the third device 134 may be wearable devices with cameras or may be other devices, such as a tablet, a smartphone, or a camera. in fig. 1c , the second device 130 is another wearable device similar to the device 100 of the user 102 , and the third device is a tablet mounted to a stand. the second device 130 has a camera 136 and the third device has a camera 138 . the second device 130 may detect image data of the user 102 swinging the golf club from an angle that the device 100 is unable to capture using the camera 106 of the device 100 . likewise, the third device 134 may further detect image data of the user 102 swinging the golf club from another angle that neither the device 100 nor the second device 130 are able to capture using their respective cameras. the device 100 , based on the image data from the device 100 , the second device 130 , and the third device 134 , may evaluate the user's performance of the activity based on the image data to provide feedback. the learned model may further include criteria by which the user's actions should be compared and the device 100 may determine the feedback based on a comparison of the user's performance and the criteria. the device 100 may output an output 104 that is feedback. 
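The multi-device arrangement above, where the second device 130 and third device 134 contribute camera angles that the device 100 cannot capture itself, amounts to merging per-angle image data from several sources. The sketch below assumes each device reports a dictionary mapping an angle label to a frame, with the wearer's own device taking precedence for any angle it covers; all names are illustrative.

```python
def merge_views(*device_frames):
    """Merge {angle: frame} dicts from several devices. Devices are given
    in priority order: the first device to report an angle wins, so the
    wearer's own device 100 is listed first and fills any angle it sees."""
    merged = {}
    for frames in device_frames:
        for angle, frame in frames.items():
            merged.setdefault(angle, frame)
    return merged

primary = {"front": "frame_a"}                      # device 100
secondary = {"side": "frame_b", "front": "frame_c"}  # second device 130
tripod = {"rear": "frame_d"}                         # third device 134
print(sorted(merge_views(primary, secondary, tripod)))
```

Whether frames arrive over a device-to-device link or via a cloud server, as the passage describes, only changes how the dictionaries are collected, not this merging step.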
the feedback may be critiques of the user's performance, such as “you should keep your back straight.” the user 102 may prompt the device 100 for feedback using the input unit 108 . for example, the user 102 may say “hey, was my left arm straight?” and the device 100 may, based on detected image data, determine a response to the prompt provided by the user 102 . the device 100 may output an output 104 that is guidance. the device 100 may determine, based on image data, that the user has finished performing a step in a series of instructions. for example, the device 100 may detect that the user 102 has finished gripping the club and getting ready to swing, and that the user 102 should next bring the club back for the backswing. the device 100 may provide an output 104 that is an instruction, such as “next, after you address the ball, begin your backswing, making sure to keep your left arm straight, your hips turned, your back straight, and your front heel on the ground.” the guidance provided by the device 100 may be particularly useful when there are many things to remember at once, and doing so may be challenging for a human being. the guidance provided by the device 100 may also be particularly useful when the user 102 has never performed a particular activity or when a situation is an emergency. for example, as illustrated in fig. 1d , the user 102 is administering first aid to a victim 142 . the user 102 may be performing cardiopulmonary resuscitation (cpr) on the victim 142 . the device 100 may identify that the user 102 is performing cpr. the device 100 may detect image data using a camera 106 and the device 100 may analyze the detected image data to determine that the user 102 is performing cpr. the device 100 may compare the detected image data with a learned model to determine that the activity performed by the user 102 is performing cpr. 
alternatively, or in addition, the user 102 may speak into the input unit 108 an indication that the user 102 is performing cpr, such as “hey, my friend was drowning, but we got him out of the water and he's not breathing, can you help me?” or “hey, teach me how to perform cpr.” in addition, the user 102 may type into the input unit 108 an indication that the activity the user 102 is engaged in is performing cpr. the device 100 may output an output 104 that is feedback. the device 100 may determine, based on image data, that the user 102 has not performed enough chest compressions, and the device 100 may output an output 104 such as “you are not pumping rapidly enough in your chest compressions—the target is 100-120 times per minute, or more than one per second.” the user 102 may prompt the device 100 for feedback using the input unit 108 . for example, the user 102 may say “hey, is this location the right one for chest compressions?” and the device 100 may, based on detected image data, determine a response to the prompt provided by the user 102 . the device 100 may output an output 104 that is guidance. the device 100 may determine, based on image data, that the user has finished performing a step in a series of instructions. for example, the device 100 may detect that the user 102 has finished performing chest compressions, and that the user 102 should next blow into the victim's mouth. the device 100 may provide an output 104 that is an instruction, such as “next, after chest compressions, you should tilt the victim's head back, lift the chin, pinch the nose, cover the mouth with yours and blow until you can see the victim's chest rise.” as described herein, the learned model and/or other data used by the device 100 to provide guidance or feedback may be stored locally on the device 100 or stored on a remote memory and accessed by the device 100 . 
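The compression-rate check in the CPR example above (a target of 100-120 compressions per minute) is directly computable once image analysis yields a timestamp for each compression. The sketch below assumes timestamps in seconds; the feedback strings echo the passage but are otherwise illustrative.

```python
def compression_rate(timestamps):
    """Compressions per minute over the observed interval, given one
    timestamp (in seconds) per detected compression."""
    if len(timestamps) < 2:
        return 0.0
    elapsed = timestamps[-1] - timestamps[0]
    return (len(timestamps) - 1) * 60.0 / elapsed

def cpr_feedback(timestamps):
    """Compare the observed rate against the 100-120 per-minute target."""
    rate = compression_rate(timestamps)
    if rate < 100:
        return "you are not pumping rapidly enough in your chest compressions"
    if rate > 120:
        return "you are pumping too rapidly; slow your compressions slightly"
    return "good compression rate"

# One compression every 0.8 s is 75 per minute, below the target band.
print(cpr_feedback([0.0, 0.8, 1.6, 2.4, 3.2]))
```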
the learned model and/or other data may be updated periodically so that the feedback and/or guidance provided is up-to-date and current. for example, when general first aid guidelines change, the device 100 is able to provide the updated instructions. in this way, use of the device 100 may be superior to relying on human knowledge, which may become outdated and/or inaccurate. the device 100 may provide situationally appropriate guidance or feedback without being prompted by the user 102 based on the detected activity. for example, when the device 100 detects an individual in distress, the device 100 may automatically provide cpr instructions to the user 102 . the output 104 provided by the device may be feedback or a reminder associated with a behavior identified by the user 102 . the behavior may be a limitation of an undesirable behavior. for example, as shown in fig. 1e , the user 102 may indicate to the device 100 that the user would like to limit consumption of a particular beverage, such as soda or coffee. the device 100 may detect, based on image data detected from the camera 106 that the user 102 is consuming the beverage and may provide an output 104 reminding the user 102 of the restriction. the output 104 may be an audio output of “remember to watch your consumption of coffee.” the device 100 may detect an object 112 associated with the particular beverage. the output 104 may be a tactile output of a series of vibrations when the device 100 determines the user 102 is participating in the undesirable behavior. the behavior may also be a limitation of calories consumed throughout the day. the camera 106 may detect image data of food as the user 102 is eating the food. the device 100 may identify the food being eaten based on the image data, and may determine nutritional data associated with the identified food. the nutritional data may be stored in a local or remote memory or may be provided by the user 102 . 
the user 102 may provide the nutritional data by identifying values of categories, such as calories, fat, sugar, or ingredients. the user 102 may also provide the nutritional data by holding up a nutritional label associated with the food so the camera 106 may capture an image of the nutritional label. the device 100 may determine nutritional feedback for the user 102 , such as “you have consumed your daily allotment of sugar and it is only 11 am. you may consider limiting your sugar intake for the rest of the day or exercising.” the device 100 may include an inertial measurement unit (imu) for detecting user activity to determine an approximate calories burned by the user 102 . the nutritional feedback provided by the device 100 may vary based on the user's activity, as detected by the imu. in addition to the image data detected by the camera 106 , the device 100 may use a microphone to detect audio data. the device 100 may use the audio data to assist in determining the user 102 is participating in an activity. for example, as shown in fig. 1f , the user 102 may instruct the device 100 to notify the user when the user says the word “umm.” the device 100 may detect audio data using the microphone, and when the device 100 detects the user 102 has said “umm,” the device 100 may output an output 104 that is an audio output or a tactile output to indicate to the user 102 that the user 102 has said “umm.” the output 104 may be information associated with a detected object. for example, as shown in fig. 1g , the device 100 may determine, using image data detected by the camera 106 , that the user 102 is looking at an object 112 , such as a painting 170 . the device 100 may identify the painting 170 by comparing the image data associated with the painting with a database of paintings. 
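The daily-allotment check behind the nutritional feedback described above can be sketched as a running total compared against per-nutrient limits. The limits, food log, and function names are illustrative assumptions, not the device's actual logic.

```python
# Sketch of the daily-allotment check behind the nutritional feedback.
# Limits and logged foods are illustrative assumptions.
DAILY_LIMITS = {"sugar_g": 36, "calories": 2000}

def nutritional_feedback(consumed_log, limits=DAILY_LIMITS):
    """Sum nutrients from identified foods and report any limit reached."""
    totals = {}
    for entry in consumed_log:
        for nutrient, amount in entry.items():
            totals[nutrient] = totals.get(nutrient, 0) + amount
    exceeded = [n for n, limit in limits.items() if totals.get(n, 0) >= limit]
    if exceeded:
        return "you have consumed your daily allotment of " + ", ".join(exceeded)
    return "within daily limits"

log = [{"sugar_g": 20, "calories": 450}, {"sugar_g": 18, "calories": 300}]
print(nutritional_feedback(log))  # sugar reaches 38 g >= 36 g limit
```

The IMU-based calorie-burn adjustment mentioned in the text would amount to raising the `calories` limit by the estimated expenditure before this comparison.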
in addition, the device 100 may determine a location of the user 102 based on the location data and may identify a painting associated with the location, as the location may be a museum or other landmark. in the example embodiment of fig. 1g , the output 104 may be information regarding the painting 170 , such as the artist, the year it was painted, the style of painting, the circumstances surrounding the painting, and a history of owners of the painting. the user 102 may provide an input to the device 100 inquiring about the object 112 (e.g., painting 170 ), or the device 100 may automatically provide the output 104 based on identifying the object 112 based on the image data. the output 104 may be a location-based reminder to the user 102 . for example, the user 102 may indicate to the device 100 that the user 102 would like to be reminded when the user 102 leaves his house, that the user 102 should make sure he has his wallet, keys, and cell phone. the device 100 may detect, based on location data detected by a gps unit, the location of the user 102 . when the user 102 is in a first location within the user's home and then goes to a second location outside of the user's home (as detected by the location data), the device 100 may provide the output 104 reminding the user 102 . the output 104 may be an audio output such as “don't forget your keys, wallet, and phone,” or may be a tactile output of a series of vibrations. in one implementation, and with reference to fig. 2 , a device 100 includes a guidance unit 202 , connected to a memory 204 , a sensor array 206 , an output unit 208 , a transceiver 210 , an activity detection unit 212 , and an input unit 108 . the guidance unit 202 may be one or more computer processors such as an arm processor, dsp processor, distributed processor, microprocessor, controller, or other processing device. 
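The location-based reminder described above can be sketched as an inside-to-outside transition test on successive GPS fixes. The home coordinates, region radius, and function names below are hypothetical.

```python
# Sketch of the location-based reminder: fire the reminder only when
# successive GPS fixes move from inside the home region to outside it.
# Coordinates and radius are hypothetical.
def inside(lat, lon, home, radius_deg=0.001):
    return abs(lat - home[0]) <= radius_deg and abs(lon - home[1]) <= radius_deg

def check_reminder(prev_fix, curr_fix, home, reminder):
    """Return the reminder only on an inside -> outside transition."""
    if inside(*prev_fix, home) and not inside(*curr_fix, home):
        return reminder
    return None

home = (35.0000, -80.0000)
msg = check_reminder((35.0000, -80.0000), (35.0100, -80.0000), home,
                     "don't forget your keys, wallet, and phone")
print(msg)  # -> don't forget your keys, wallet, and phone
```

Testing the transition rather than the raw position keeps the reminder from repeating on every fix taken outside the home.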
the guidance unit 202 may be located in the device 100 , may be a remote processor or it may be a pairing of a local and a remote processor. the memory 204 may be one or any combination of the following: a ram or other volatile or nonvolatile memory, a non-transitory memory or a data storage device, such as a hard disk drive, a solid state disk drive, a hybrid disk drive or other appropriate data storage. the memory 204 may further store machine-readable instructions which may be loaded into or stored in the memory 204 and executed by the guidance unit 202 . as with the guidance unit 202 , the memory 204 may be positioned on the device 100 , may be positioned remote from the device 100 or may be a pairing of a local and a remote memory. the memory 204 may also store learned model data, such that the activity detection unit 212 may compare the image data to the learned model data to determine an activity and/or the guidance unit 202 may compare the image data to the learned model data to determine guidance or feedback. the memory 204 may also store past performance data associated with the user performing the activity. the output 104 may be determined based on the past performance data. for example, in fig. 1a when the user 102 is making cookies, the output may include a reminder that the last time the user made cookies, the user 102 forgot to take them out of the oven in time. in another example, in fig. 1c when the user is playing golf, the output may include a reminder that the user's average score for this particular course is 82, that the user 102 typically shoots par on this particular hole, or that the user should use a particular club for that particular hole. the sensor array 206 includes a camera 106 , stereo cameras 216 , a gps unit 218 , an inertial measurement unit (imu) 220 , and a sensor 222 . the stereo cameras 216 may be a stereo camera pair including two cameras offset by a known distance. 
in that regard, the guidance unit 202 may receive image data from the stereo cameras 216 and may determine depth information corresponding to objects in the environment based on the received image data and the known distance between the cameras of the stereo cameras 216 . the stereo cameras 216 may be used instead of or in conjunction with the camera 106 to detect image data. the sensor 222 may be one or more sensors which provide further information about the environment in conjunction with the rest of the sensor array 206 such as one or more of a temperature sensor, an air pressure sensor, a moisture or humidity sensor, a gas detector or other chemical sensor, a sound sensor, a ph sensor, a smoke detector, an altimeter, a depth gauge, a compass, a motion detector, a light sensor, or other sensor. the gps unit 218 may detect location data and may be used to determine a geographical location. the map data stored in the memory 204 may also be used to determine the geographical location. the output unit 208 includes a speaker 110 and a vibration unit 224 . the speaker 110 may be one or more speakers or other devices capable of producing sounds and/or vibrations. the vibration unit 224 may be one or more vibration motors or actuators capable of providing haptic and tactile output. the transceiver 210 can be a receiver and/or a transmitter configured to receive and transmit data from a remote data storage or other device. the transceiver 210 may include an antenna capable of transmitting and receiving wireless communications. for example, the antenna may be a bluetooth or wi-fi antenna, a cellular radio antenna, a radio frequency identification (rfid) antenna or reader and/or a near field communication (nfc) unit. in another implementation and with reference to fig. 3 , the system 300 may include a device 100 and a secondary device 302 . 
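The source states only that depth follows from the known offset between the cameras of the stereo cameras 216. Assuming the standard pinhole-camera relation, depth = focal length × baseline / disparity, a minimal sketch is:

```python
# Sketch of depth from the stereo pair, assuming a pinhole model:
#   depth = focal_length * baseline / disparity.
# The source says only that depth follows from the known camera
# offset; the numbers here are illustrative.
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    if disparity_px <= 0:
        raise ValueError("object at infinity or unmatched pixel")
    return focal_px * baseline_m / disparity_px

# e.g. 700 px focal length, cameras 0.07 m apart, 35 px disparity
print(depth_from_disparity(700, 0.07, 35))  # -> 1.4 (metres)
```

In practice the disparity itself comes from matching the same object between the two images, which is the computationally expensive part of stereo depth.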
the device 100 is a wearable device, as described herein and includes the sensor array 206 , the input unit 108 , the output unit 208 , and the device transceiver 304 . the secondary device 302 may be a device communicatively coupled with the device 100 and configured to perform the processing of the detected data. the secondary device 302 may be a smartphone or tablet and includes the guidance unit 202 , memory 204 , a secondary device transceiver 306 , and an activity detection unit 212 . in the system 300 of fig. 3 , the device 100 may not be responsible for the processing of the detected data, such as the image data and the location data. the device 100 may communicate the detected data to the secondary device 302 via the respective transceivers (device transceiver 304 and secondary device transceiver 306 ). turning to fig. 4 , the device 100 may be a wearable device, which has an outer casing, or body 402 having a shape designed to be worn by a user. in particular, the body 402 has a neck portion 404 designed to rest against a back of a neck of the user. the body 402 also includes a first side portion 406 and a second side portion 408 each configured to extend across a shoulder of the user and to rest on a front of the user. in that regard, the wearable device 100 may be worn in a similar manner as a necklace. although the disclosure is directed to the wearable device 100 having the u-shape, one skilled in the art will realize that the features described herein can be implemented in a wearable computing device having another shape such as eyeglasses or earpieces. the wearable device 100 includes multiple components capable of receiving or detecting data. for example, the wearable device 100 may include an input unit 108 , a microphone 418 , and a camera 106 and/or a stereo pair of cameras (e.g., stereo cameras 216 ), each as described herein. the input unit 108 may include one or more buttons and/or a touchpad. 
each of the input unit 108 , the camera 106 , and the microphone 418 may be physically attached to the body 402 . in some embodiments, the microphone 418 is part of the input unit 108 . the microphone 418 may be capable of detecting audio data corresponding to the environment of the wearable device 100 . for example, the microphone 418 may be capable of detecting speech data corresponding to speech of the user or of another person. in some embodiments, the user may provide input data to the guidance unit 202 by speaking commands that are received by the microphone 418 . the microphone 418 may also be capable of detecting other sounds in the environment such as a scream, a siren from an emergency vehicle, or the like. the wearable device 100 includes one or more output devices including speakers 110 . the speakers 110 are physically attached to the body 402 . each of the speakers 110 is configured to output an audio output based on an instruction from the guidance unit 202 . the speakers 110 may be part of the output unit 208 , as described herein. in some embodiments, as shown in fig. 2 , the wearable device 100 also includes the guidance unit 202 , the memory 204 , and the activity detection unit 212 physically within the wearable device 100 . in other embodiments, as shown in fig. 3 , the wearable device 100 only includes the camera 106 , the input unit 108 , and the speakers 110 physically within the wearable device 100 . in these embodiments, the guidance unit 202 , the memory 204 and the activity detection unit 212 are physically located in a secondary device 302 , such as a smartphone or a tablet computer. as described herein, the components located in the wearable device 100 and the components located in the secondary device 302 are communicatively coupled and may communicate via respective transceivers configured to transmit and receive data (e.g., device transceiver 304 and secondary device transceiver 306 ). with reference now to fig. 
5 , a method 500 may be used by a device (e.g., device 100 ) or a system (e.g., system 300 ) for providing feedback to a user. the image data is detected by the camera 106 and/or the stereo cameras 216 of the device 100 (step 502 ). the image data may indicate a user performance of an activity. the guidance unit 202 identifies the activity (step 504 ). the activity detection unit 212 connected to the guidance unit 202 may detect the activity based on the image data, and the activity detection unit 212 may communicate the identified activity to the guidance unit 202 (step 506 ). alternatively, or in addition, the input unit 108 may detect input data from the user indicating the activity and the input unit 108 may communicate the identified activity to the guidance unit 202 (step 508 ). the guidance unit 202 determines a criteria associated with the activity (step 510 ). the criteria associated with the activity may be determined based on the learned model stored in the memory 204 . the guidance unit 202 may analyze the learned model to determine a criteria to identify in order to determine whether the user 102 is properly performing the activity. the guidance unit 202 determines a user performance of the activity based on the image data (step 512 ). the guidance unit 202 may perform image processing on the image data to construct a model of the user performance. the guidance unit 202 determines feedback based on a comparison of the criteria and the user performance of the activity (step 514 ). the feedback indicates an improvement or suggestion for the user, such as a suggestion to check an amount of an ingredient in a recipe (as shown in fig. 1a ), an improvement to the user's form in a sports activity (as shown in fig. 1c ), a suggestion to the user 102 in the form of relevant information regarding an object near the user 102 (as shown in fig. 1g ), or a suggestion to the user 102 to make sure the user has certain items in the user's possession (as shown in fig. 
1h ). the guidance unit 202 communicates the feedback to the output unit 208 and the output unit 208 outputs the feedback to the user 102 (step 516 ). for example, the output unit 208 includes a speaker 110 and the speaker outputs an audio output of the feedback. with reference now to fig. 6 , a method 600 may be used by a device (e.g., the device 100 ) or a system (e.g., the system 300 ) for providing guidance to a user. the image data is detected by the camera 106 and/or the stereo cameras 216 of the device 100 (step 602 ). the image data may indicate a user performance of an activity. the guidance unit 202 identifies the activity (step 604 ). an activity detection unit 212 connected to the guidance unit 202 may detect the activity based on the image data, and the activity detection unit 212 may communicate the identified activity to the guidance unit 202 (step 606 ). alternatively, or in addition, the input unit 108 may detect input data from the user indicating the activity and the input unit 108 may communicate the identified activity to the guidance unit 202 (step 608 ). the guidance unit 202 determines a set of instructions associated with the activity (step 610 ). the set of instructions associated with the activity may be determined based on the learned model stored in the memory 204 . the guidance unit 202 may analyze the learned model to determine a set of instructions to provide to the user 102 or the guidance unit 202 may retrieve the set of instructions from the memory 204 . the set of instructions in the memory 204 may be indexed by activity, allowing the guidance unit 202 to retrieve a set of instructions corresponding to a given activity. the guidance unit 202 determines a current stage of the activity based on the image data (step 612 ). 
the guidance unit 202 may perform image processing on the image data to determine an action being performed, and the determined action being performed may be associated with a corresponding current stage of the activity. the guidance unit 202 determines a next instruction from the set of instructions to provide the user based on the current stage (step 614 ). the set of instructions may be an ordered list of instructions such that for each instruction there is a next instruction, unless the current instruction is the final stage of the activity. for example, the current stage of the activity may be adding flour to a bowl and the next instruction may be to add chocolate chips (as shown in fig. 1a ). the guidance unit 202 communicates the next instruction to the output unit 208 and the output unit 208 outputs the next instruction to the user 102 (step 616 ). for example, the output unit 208 includes a speaker 110 and the speaker 110 outputs an audio output of the next instruction. exemplary embodiments of the methods/systems have been disclosed in an illustrative style. accordingly, the terminology employed throughout should be read in a non-limiting manner. although minor modifications to the teachings herein will occur to those well versed in the art, it shall be understood that what is intended to be circumscribed within the scope of the patent warranted hereon are all such embodiments that reasonably fall within the scope of the advancement to the art hereby contributed, and that that scope shall not be restricted, except in light of the appended claims and their equivalents.
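The sequencing in method 600 (steps 610-616) amounts to walking an ordered instruction list: identify the current stage, then emit the next instruction. A minimal sketch, with illustrative cookie-recipe steps following fig. 1a (the step names are assumptions, not taken from the source):

```python
# Sketch of method 600: given an ordered instruction list for an
# activity and the detected current stage, return the next instruction.
# The cookie-recipe steps are illustrative, loosely following fig. 1a.
INSTRUCTIONS = ["add flour", "add chocolate chips", "mix dough", "bake"]

def next_instruction(instructions, current_stage):
    """current_stage is the index of the step the user just completed."""
    if current_stage + 1 < len(instructions):
        return instructions[current_stage + 1]
    return "activity complete"

print(next_instruction(INSTRUCTIONS, 0))  # -> add chocolate chips
```

Method 500's feedback path differs only in the last step: instead of emitting the next list entry, it compares the observed performance against the criteria for the current stage.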
droplet actuator configurations and methods
droplet actuators for conducting droplet operations, such as droplet transport and droplet dispensing, are provided. in one embodiment, the droplet actuator may include a substrate including droplet operations electrodes arranged for conducting droplet operations on a surface of the substrate; and reference electrodes associated with the droplet operations electrodes and extending above the surface of the substrate. other embodiments of droplet actuators and methods of loading and using such droplet actuators are also provided.
1 . a droplet actuator comprising a substrate comprising: (a) droplet operations electrodes arranged for conducting droplet operations on a surface of the substrate; and (b) reference electrodes associated with the droplet operations electrodes and extending above the surface of the substrate. 2 . a droplet actuator comprising: (a) two substrates separated to form a gap; (b) droplet operations electrodes associated with at least one of the substrates and arranged for conducting droplet operations in the gap; and (c) reference electrodes associated with at least one of the substrates and extending into the gap. 3 . a droplet actuator comprising a substrate comprising droplet operations electrodes and reference electrodes configured for conducting droplet operations, wherein at least a subset of the reference electrodes is separated from a droplet operations surface by an insulator and/or dielectric material. 4 . a droplet actuator comprising: (a) two substrates separated to form a gap; (b) droplet operations electrodes associated with at least one of the substrates and arranged for conducting droplet operations in the gap; and (c) reference electrodes: (i) associated with at least one of the substrates; and (ii) separated from a droplet operations surface of the substrate of (c)(i) by an insulator and/or a dielectric material. 5 . a droplet actuator comprising a substrate comprising: (a) droplet operations electrodes configured for conducting one or more droplet operations; and (b) reference electrodes inset into and/or between and/or interdigitated with one or more droplet operations electrodes. 6 . the droplet actuator of claim 5 comprising a reference electrode inset into a droplet operations electrode. 7 . the droplet actuator of claim 5 comprising a reference electrode inset between two or more droplet operations electrodes. 8 . the droplet actuator of claim 5 comprising a reference electrode interdigitated with a droplet operations electrode. 9 - 11 . 
(canceled) 12 . a droplet actuator comprising an electrode that is rotationally but not reflectively symmetrical. 13 . the droplet actuator of claim 12 comprising a path and/or array of the electrodes. 14 . the droplet actuator of claim 13 wherein the electrodes are interdigitated. 15 . the droplet actuator of claim 13 wherein the electrodes are not interdigitated. 16 - 24 . (canceled) 25 . the droplet actuator of claim 12 wherein the rotational symmetry is x-fold, and for each electrode x is 3, 4, 5, 6, 7, 8, 9, and/or 10. 26 . the droplet actuator of claim 12 wherein the droplet actuator comprises a path and/or array of the electrodes, wherein adjacent electrodes are arranged such that no line can be drawn between two adjacent electrodes without overlapping one or both of the two adjacent electrodes. 27 . the droplet actuator of claim 26 wherein the electrodes are not interdigitated. 28 . the droplet actuator of claim 26 wherein the electrodes are interdigitated. 29 .- 38 . (canceled) 39 . a droplet actuator comprising: (a) a first substrate comprising droplet operations electrodes configured for conducting one or more droplet operations; and (b) a second substrate comprising: (i) a conductive layer at least partially contiguous with two or more of the droplet operations electrodes; and (ii) a perfluorophosphonate coating overlying at least a portion of the conductive layer; wherein the first substrate and the second substrate are separated to form a gap for conducting droplet operations mediated by the droplet operations electrodes. 40 . the droplet actuator of claim 39 wherein the conductive layer comprises indium tin oxide. 41 . 
a droplet actuator comprising: (a) two surfaces separated to form a gap; (b) electrodes associated with one or more surfaces and arranged for conducting one or more droplet operations; (c) a filler fluid in the gap; (d) a reservoir comprising a droplet fluid in the reservoir; (e) a fluid path from the reservoir into the gap; (f) optionally, a filler fluid opening arranged for permitting fluid to exit the gap and/or exit one portion of the gap into another portion of the gap; and (g) a pressure source configured to force dislocation of filler fluid in the gap and/or through the filler fluid opening and thereby force droplet fluid from the reservoir through the fluid path into the gap. 42 . the droplet actuator of claim 41 wherein the pressure source is configured such that the dislocation of filler fluid forces droplet fluid from the reservoir through the fluid path into the gap into sufficient proximity with one or more of the electrodes to enable one or more droplet operations to be mediated by the one or more of the electrodes. 43 . the droplet actuator of claim 41 wherein the pressure source comprises a negative pressure source. 44 . the droplet actuator of claim 41 wherein the pressure source comprises a positive pressure source. 45 . the droplet actuator of claim 41 comprising multiple reservoirs each arranged to permit a droplet fluid to be loaded into the gap. 46 . the droplet actuator of claim 42 wherein the droplet operation comprises a droplet dispensing operation in which a droplet is dispensed from the droplet fluid. 47 . 
a method of loading a droplet actuator, the method comprising: (a) providing: (i) a droplet actuator loaded with a filler fluid; (ii) a reservoir comprising a droplet fluid; and (iii) a fluid path extending from the reservoir into the droplet actuator; and (b) forcing filler fluid: (i) from one locus in the droplet actuator to another locus in the droplet actuator; or (ii) out of the droplet actuator; thereby causing droplet fluid to flow through the fluid path and into the droplet actuator. 48 . the method of claim 47 wherein step (b) comprises forcing droplet fluid into sufficient proximity with one or more electrodes to enable one or more droplet operations to be mediated in the droplet actuator by the one or more electrodes. 49 . the method of claim 47 wherein step (b) comprises forcing the filler fluid using a negative pressure source. 50 . the method of claim 47 wherein step (b) comprises forcing the filler fluid using a positive pressure source. 51 . the method of claim 47 wherein multiple droplet fluids are loaded from multiple reservoirs. 52 . the method of claim 48 wherein the droplet operation comprises a droplet dispensing operation in which a droplet is dispensed from the droplet fluid. 53 . a method of conducting a droplet operation on a droplet actuator, the method comprising: (a) using a negative pressure to flow a source fluid into a droplet actuator gap into proximity with a droplet operations electrode; and (b) using the droplet operations electrode along with other droplet operations electrodes to conduct the droplet operation. 54 . the method of claim 53 wherein the droplet operation comprises dispensing a droplet from the source fluid.
related applications in addition to the patent applications cited herein, each of which is incorporated herein by reference, this patent application is related to and claims priority to u.s. provisional patent application no. 61/012,567, filed on dec. 10, 2007, entitled “droplet actuator loading by displacement of filler fluid;” u.s. provisional patent application no. 61/014,128, filed on dec. 17, 2007, entitled “electrode configurations for a droplet actuator;” and u.s. provisional patent application no. 61/092,709, filed on aug. 28, 2008, entitled “electrode configurations for a droplet actuator;” the entire disclosures of which are incorporated herein by reference. grant information this invention was made with government support under gm072155-02 and dk066956-02, both awarded by the national institutes of health of the united states. the united states government has certain rights in the invention. field of the invention the invention relates to droplet actuators for conducting droplet operations, such as droplet transport and droplet dispensing, and to methods of loading and using such droplet actuators. background of the invention droplet actuators are used to conduct a wide variety of droplet operations, such as droplet transport and droplet dispensing. a droplet actuator typically includes a substrate with electrodes arranged for conducting droplet operations on a droplet operations surface of the substrate. electrodes may include droplet operations electrodes and reference electrodes. droplets subjected to droplet operations on a droplet actuator may, for example, be reagents and/or droplet fluids for conducting assays. there is a need for improved functionality when conducting droplet operations and for alternative approaches to configuring droplet actuators for conducting droplet operations. there are various ways of loading reagents and droplet fluids into droplet actuators. 
problems with such methods include the risk of introducing air into the fluid and the inability to reliably handle small droplet fluid volumes. because of these and other problems, there is a need for alternative approaches to loading droplet fluids into a droplet actuator. brief description of the invention the invention provides a droplet actuator. in one embodiment, the droplet actuator may include a substrate including droplet operations electrodes arranged for conducting droplet operations on a surface of the substrate; and reference electrodes associated with the droplet operations electrodes and extending above the surface of the substrate. in another embodiment, the droplet actuator may include two substrates separated to form a gap. droplet operations electrodes may be associated with at least one of the substrates and arranged for conducting droplet operations in the gap. reference electrodes may be associated with at least one of the substrates and may extend into the gap. in yet another embodiment, the invention provides a droplet actuator including a substrate including droplet operations electrodes and reference electrodes configured for conducting droplet operations, wherein at least a subset of the reference electrodes is separated from a droplet operations surface by an insulator and/or dielectric material. in still another embodiment, the invention provides a droplet actuator including two substrates separated to form a gap; droplet operations electrodes associated with at least one of the substrates and arranged for conducting droplet operations in the gap; and reference electrodes. the reference electrodes may be associated with at least one of the substrates and may be separated from a droplet operations surface of the substrate by an insulator and/or a dielectric material. 
further, the invention provides a droplet actuator including a substrate, which may have droplet operations electrodes configured for conducting one or more droplet operations; and reference electrodes inset into and/or between and/or interdigitated with one or more droplet operations electrodes. a reference electrode may be inset into a droplet operations electrode. a reference electrode may be inset between two or more droplet operations electrodes. a reference electrode may be interdigitated with a droplet operations electrode. the invention provides droplet operations electrodes that are rotationally but not reflectively symmetrical. these electrodes may be formed into paths and/or arrays. in some cases, these electrodes are interdigitated; in other cases, they are not. the rotational symmetry may in certain embodiments be x-fold, where x is 3, 4, 5, 6, 7, 8, 9, or 10. the rotational symmetry may in certain embodiments be x-fold, where x is greater than 10. in some cases, adjacent electrodes are arranged such that no straight line can be drawn between two adjacent electrodes without overlapping one or both of the two adjacent electrodes. in some cases, adjacent electrodes are not interdigitated but are arranged such that no straight line can be drawn between two adjacent electrodes without overlapping one or both of the two adjacent electrodes. the invention also provides a droplet actuator including an electrode having a shape that comprises a section of a rotationally but not reflectively symmetrical shape, the electrode having x-fold rotational symmetry, where x is 5, 6, 7, 8, 9, 10 or more. a droplet actuator may include a path or array including one or more of such electrodes. 
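The "rotationally but not reflectively symmetrical" electrode geometry can be checked numerically: rotate the electrode's vertex set by 2π/x and see whether it maps onto itself, then test mirror axes through the centroid. The pinwheel outline below is an illustrative 4-fold example, not a shape taken from the patent.

```python
# Numeric check of x-fold rotational symmetry without reflective
# symmetry, applied to a hypothetical pinwheel-shaped electrode outline.
import math

def rotate(pt, angle):
    c, s = math.cos(angle), math.sin(angle)
    return (pt[0] * c - pt[1] * s, pt[0] * s + pt[1] * c)

def same_set(a, b, tol=1e-9):
    def close(p, q):
        return abs(p[0] - q[0]) <= tol and abs(p[1] - q[1]) <= tol
    return len(a) == len(b) and all(any(close(p, q) for q in b) for p in a)

def has_rotational_symmetry(verts, x):
    ang = 2 * math.pi / x
    return same_set([rotate(v, ang) for v in verts], verts)

def has_reflective_symmetry(verts):
    # test mirror axes through the origin (centroid) at many angles
    for k in range(360):
        theta = math.pi * k / 360
        c, s = math.cos(2 * theta), math.sin(2 * theta)
        mirrored = [(vx * c + vy * s, vx * s - vy * c) for vx, vy in verts]
        if same_set(mirrored, verts):
            return True
    return False

# pinwheel: two base points replicated at 90-degree steps -> 4-fold, chiral
base = [(2.0, 0.0), (1.5, 0.5)]
pinwheel = [rotate(p, k * math.pi / 2) for k in range(4) for p in base]
print(has_rotational_symmetry(pinwheel, 4), has_reflective_symmetry(pinwheel))  # -> True False
```

A shape passing the first test and failing the second is rotationally but not reflectively symmetrical in the sense the text describes.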
the invention provides a droplet actuator including top and bottom substrates separated to form a gap, each substrate including electrodes configured for conducting droplet operations, the gap arranged to provide a distance between the substrates sufficient to permit independent droplet operations on a droplet operations surface of each substrate. the top substrate may, in some embodiments, include an arrangement of electrodes that is substantially congruent with an arrangement of electrodes on the bottom substrate. the top substrate may, in some embodiments, include an arrangement of electrodes that is substantially congruent with and in registration with an arrangement of electrodes on the bottom substrate. in some embodiments, the gap is sufficiently wide that: one or more droplets having a footprint which is from about 1 to about 5 times the size of the footprint of a droplet operations electrode can be subjected to droplet operations on the droplet operations surface of the top substrate without contacting the droplet operations surface of the bottom substrate; and one or more droplets having a footprint which is from about 1 to about 5 times the size of the footprint of a droplet operations electrode can be subjected to droplet operations on the droplet operations surface of the bottom substrate without contacting the droplet operations surface of the top substrate. the invention also provides a droplet actuator including: a first substrate including droplet operations electrodes configured for conducting one or more droplet operations; a second substrate including: a conductive layer at least partially contiguous with two or more of the droplet operations electrodes; and a perfluorophosphonate coating overlying at least a portion of the conductive layer. the first substrate and the second substrate are separated to form a gap for conducting droplet operations mediated by the droplet operations electrodes. 
the conductive layer may in some embodiments include indium tin oxide or a substitute therefor. the invention further provides a droplet actuator including: two surfaces separated to form a gap; electrodes associated with one or more surfaces and arranged for conducting one or more droplet operations; a filler fluid in the gap; a reservoir including a droplet fluid in the reservoir; a fluid path from the reservoir into the gap; and optionally, a filler fluid opening arranged for permitting fluid to exit the gap and/or exit one portion of the gap into another portion of the gap; a pressure source configured to force dislocation of filler fluid in the gap and/or through the filler fluid opening and thereby force droplet fluid from the reservoir through the fluid path into the gap. the pressure source may be configured such that the dislocation of filler fluid forces droplet fluid from the reservoir through the fluid path into the gap into sufficient proximity with one or more of the electrodes to enable one or more droplet operations to be mediated by the one or more of the electrodes. the pressure source may include a negative pressure source and/or a positive pressure source. in some cases, multiple reservoirs are provided, each arranged to permit a droplet fluid to be loaded into the gap. the droplet operation may, for example, include a droplet dispensing operation in which a droplet is dispensed from the droplet fluid. the invention also provides a method of loading a droplet actuator, the method including providing: a droplet actuator loaded with a filler fluid; a reservoir including a droplet fluid; a fluid path extending from the reservoir into the droplet actuator; forcing filler fluid: from one locus in the droplet actuator to another locus in the droplet actuator; or out of the droplet actuator; thereby causing droplet fluid to flow through the fluid path and into the droplet actuator.
droplet fluid may be forced into sufficient proximity with one or more electrodes to enable one or more droplet operations to be mediated in the droplet actuator by the one or more electrodes. filler fluid may be forced using a negative and/or positive pressure source. in some cases, multiple droplet fluids are loaded from multiple reservoirs. the droplet operation may, for example, include a droplet dispensing operation in which a droplet is dispensed from the droplet fluid. the invention also provides a method of conducting a droplet operation on a droplet actuator, the method including: using a negative pressure to flow a source fluid into a droplet actuator gap into proximity with a droplet operations electrode; and using the droplet operations electrode along with other droplet operations electrodes to conduct the droplet operation. the droplet operation can include dispensing a droplet from the source fluid. definitions as used herein, the following terms have the meanings indicated. “activate” with reference to one or more electrodes means effecting a change in the electrical state of the one or more electrodes which in the presence of a droplet results in a droplet operation. “droplet” means a volume of liquid on a droplet actuator that is at least partially bounded by filler fluid. for example, a droplet may be completely surrounded by filler fluid or may be bounded by filler fluid and one or more surfaces of the droplet actuator. droplets may, for example, be aqueous or non-aqueous or may be mixtures or emulsions including aqueous and non-aqueous components. droplets may take a wide variety of shapes; nonlimiting examples include generally disc shaped, slug shaped, truncated sphere, ellipsoid, spherical, partially compressed sphere, hemispherical, ovoid, cylindrical, and various shapes formed during droplet operations, such as merging or splitting or formed as a result of contact of such shapes with one or more surfaces of a droplet actuator. 
“droplet actuator” means a device for manipulating droplets. for examples of droplet actuators, see u.s. pat. no. 6,911,132, entitled “apparatus for manipulating droplets by electrowetting-based techniques,” issued on jun. 28, 2005 to pamula et al.; u.s. patent application ser. no. 11/343,284, entitled “apparatuses and methods for manipulating droplets on a printed circuit board,” filed on jan. 30, 2006; u.s. pat. no. 6,773,566, entitled “electrostatic actuators for microfluidics and methods for using same,” issued on aug. 10, 2004 and u.s. pat. no. 6,565,727, entitled “actuators for microfluidics without moving parts,” issued on may 20, 2003, both to shenderov et al.; pollack et al., international patent application no. pct/us2006/047486, entitled “droplet-based biochemistry,” filed on dec. 11, 2006, the disclosures of which are incorporated herein by reference. methods of the invention may be executed using droplet actuator systems, e.g., as described in international patent application no. pct/us2007/009379, entitled “droplet manipulation systems,” filed on may 9, 2007. in various embodiments, the manipulation of droplets by a droplet actuator may be electrode mediated, e.g., electrowetting mediated or dielectrophoresis mediated. “droplet operation” means any manipulation of a droplet on a droplet actuator.
a droplet operation may, for example, include: loading a droplet into the droplet actuator; dispensing one or more droplets from a source droplet; splitting, separating or dividing a droplet into two or more droplets; transporting a droplet from one location to another in any direction; merging or combining two or more droplets into a single droplet; diluting a droplet; mixing a droplet; agitating a droplet; deforming a droplet; retaining a droplet in position; incubating a droplet; heating a droplet; vaporizing a droplet; condensing a droplet from a vapor; cooling a droplet; disposing of a droplet; transporting a droplet out of a droplet actuator; other droplet operations described herein; and/or any combination of the foregoing. the terms “merge,” “merging,” “combine,” “combining” and the like are used to describe the creation of one droplet from two or more droplets. it should be understood that when such a term is used in reference to two or more droplets, any combination of droplet operations sufficient to result in the combination of the two or more droplets into one droplet may be used. for example, “merging droplet a with droplet b,” can be achieved by transporting droplet a into contact with a stationary droplet b, transporting droplet b into contact with a stationary droplet a, or transporting droplets a and b into contact with each other. the terms “splitting,” “separating” and “dividing” are not intended to imply any particular outcome with respect to size of the resulting droplets (i.e., the size of the resulting droplets can be the same or different) or number of resulting droplets (the number of resulting droplets may be 2, 3, 4, 5 or more). the term “mixing” refers to droplet operations which result in more homogenous distribution of one or more components within a droplet. examples of “loading” droplet operations include microdialysis loading, pressure assisted loading, robotic loading, passive loading, and pipette loading. 
in various embodiments, the droplet operations may be electrode mediated, e.g., electrowetting mediated or dielectrophoresis mediated. “filler fluid” means a fluid associated with a droplet operations substrate of a droplet actuator, which fluid is sufficiently immiscible with a droplet phase to render the droplet phase subject to electrode-mediated droplet operations. the filler fluid may, for example, be a low-viscosity oil, such as silicone oil. other examples of filler fluids are provided in international patent application no. pct/us2006/047486, entitled, “droplet-based biochemistry,” filed on dec. 11, 2006; and in international patent application no. pct/us2008/072604, entitled “use of additives for enhancing droplet actuation,” filed on aug. 8, 2008. the terms “top” and “bottom” are used throughout the description with reference to the top and bottom substrates of the droplet actuator for convenience only, since the droplet actuator is functional regardless of its position in space. when a liquid in any form (e.g., a droplet or a continuous body, whether moving or stationary) is described as being “on”, “at”, or “over” an electrode, array, matrix or surface, such liquid could be either in direct contact with the electrode/array/matrix/surface, or could be in contact with one or more layers or films that are interposed between the liquid and the electrode/array/matrix/surface. when a droplet is described as being “on” or “loaded on” a droplet actuator, it should be understood that the droplet is arranged on the droplet actuator in a manner which facilitates using the droplet actuator to conduct one or more droplet operations on the droplet, the droplet is arranged on the droplet actuator in a manner which facilitates sensing of a property of or a signal from the droplet, and/or the droplet has been subjected to a droplet operation on the droplet actuator. 
detailed description of the invention the invention provides modified droplet actuators, as well as methods of making and using such droplet actuators. among other things, the droplet actuators and methods of the invention provide improved functionality when conducting droplet operations and alternative approaches to configuring droplet actuators for conducting droplet operations. the invention also provides improved loading configurations for droplet actuators, as well as improved methods of loading droplet actuators, and reliable handling of small droplet fluid volumes. fig. 1 illustrates a side view of a section of droplet actuator 100 . droplet actuator 100 includes a top substrate 110 and a bottom substrate 120 that are separated to form a gap 124 therebetween. the top substrate may or may not be present. a set of droplet operations electrodes 128 , e.g., electrodes 128 a, 128 b, and 128 c, are associated with bottom substrate 120 . in one embodiment, the droplet operations electrodes comprise electrowetting electrodes. an insulator layer 132 is provided atop bottom substrate 120 and electrodes 128 . insulator layer 132 may be formed of any dielectric material, such as polyimide. additionally, a set of reference electrodes 136 (e.g., reference electrodes 136 a, 136 b, and 136 c ) are arranged between electrodes 128 , as shown in fig. 1 . a hydrophobic layer (not shown) may be disposed atop insulator layer 132 . reference electrodes 136 are provided as exposed posts or pillars that protrude through insulator layer 132 and into gap 124 where the reference electrodes may contact the droplet 140 . the function of the reference electrodes 136 is to bias droplet 140 at the ground potential or another reference potential. the reference potential may, for example, be a ground potential, a nominal potential, or another potential that is different from the actuation potential applied to the droplet operations electrodes.
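The electrode-mediated actuation described above can be abstracted: while the droplet is held at the reference potential, activating a droplet operations electrode adjacent to the droplet pulls the droplet onto that electrode. The following minimal Python model is only an illustrative sketch of that sequencing on a one-dimensional line of electrodes; the function name, the adjacency rule, and the electrode indices are assumptions, not anything specified in this description.

```python
# Minimal sketch (not the patent's implementation) of electrode-mediated
# droplet transport: a droplet biased at the reference potential moves to
# whichever adjacent droplet operations electrode is activated.

def transport(droplet_pos, activation_sequence):
    """Step a droplet along a 1-D line of electrodes, recording its position
    after each activation; only an adjacent activated electrode can pull it."""
    path = [droplet_pos]
    for active in activation_sequence:
        if abs(active - droplet_pos) == 1:   # field from a non-adjacent electrode
            droplet_pos = active             # does not overlap the droplet
        path.append(droplet_pos)
    return path

# Activate electrodes 1, 2, 3 in turn: a droplet at electrode 0 follows them.
print(transport(0, [1, 2, 3]))   # prints [0, 1, 2, 3]
```

Activating a non-adjacent electrode (e.g., electrode 2 while the droplet sits at 0) leaves the droplet in place, which is why real devices step droplets one electrode at a time.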
in a related embodiment, the tops of reference electrodes 136 are substantially flush with the insulator layer 132 . in another related embodiment, the tops of reference electrodes 136 are substantially flush with the hydrophobic layer (not shown). in yet another related embodiment, the tops of reference electrodes 136 are substantially flush with insulator layer 132 , and the hydrophobic layer (not shown) overlies the tops of reference electrodes 136 . further, in another related embodiment, the tops of reference electrodes 136 lie within insulator layer 132 but below a top surface of insulator layer 132 , e.g., as illustrated in fig. 2 . fig. 2 illustrates a side view of a section of droplet actuator 200 . droplet actuator 200 is substantially the same as droplet actuator 100 of fig. 1 , except that reference electrodes 136 of droplet actuator 100 that protrude through insulator layer 132 are replaced with reference electrodes 210 (e.g., reference electrodes 210 a, 210 b, and 210 c ) that extend into but do not protrude through insulator layer 132 . the inventors have unexpectedly found that droplet operations can be conducted using insulated reference electrodes by inducing a voltage in the droplet (e.g., by fringing fields). using insulated reference electrodes has the advantage that the device is easier to manufacture (e.g., no patterning of the insulator layer 132 is required). in one embodiment of fig. 2 , the top substrate 110 may include a conductive coating (not shown) over some portion or all of the surface exposed to the droplet actuator. an example of such a conductive coating is indium tin oxide (ito). the conductive coating may be electrically connected to reference electrode 210 through a resistor or a capacitor. the capacitor may be formed between the conductive coating and reference electrode 210 (serving as the plates of the capacitor) with the insulator layer 132 and the gap 124 serving as a composite dielectric.
in one embodiment, portions of the reference electrode 210 are not covered with the insulator 132 . in another embodiment, portions of the reference electrode 210 are not covered with the insulator 132 and protrude through the substrate as shown in fig. 1 . in yet another embodiment, the insulated reference electrodes may be provided on the top substrate 110 , e.g., as illustrated in fig. 3 . fig. 3 illustrates a side view of a section of droplet actuator 300 . droplet actuator 300 includes a top substrate 310 and a bottom substrate 320 that are arranged having a gap 324 therebetween. a set of electrodes 328 (e.g., electrodes 328 a, 328 b, and 328 c ) are associated with bottom substrate 320 . an insulator layer 332 is provided atop bottom substrate 320 and electrodes 328 . additionally, a reference electrode 336 is associated with top substrate 310 atop which is provided an insulator layer 340 . insulator layers 332 and 340 may be formed of any dielectric material, such as polyimide. a hydrophobic coating (not shown) may be disposed on the surface of the insulator exposed to the gap. in certain embodiments the thickness of insulator 332 is larger than the thickness of insulator 340 by, for example, a factor of 2, 3, 4, 5, 10, 25, 50, or 100. the factor need not be an integer and can be fractional. the larger the factor, the lower the voltage required for droplet operations. embodiments shown in fig. 2 and fig. 3 can also be combined to result in a device with reference electrodes on both substrates. the reference electrodes may be electrically connected to each other through a resistor or a capacitor. the capacitor may be formed between the two reference electrodes (serving as the plates of the capacitor) with the insulator layer 332 and the gap 324 serving as a composite dielectric. as noted with respect to droplet actuator 200 of fig.
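The relationship between the thickness factor and the required voltage can be illustrated with a simple series-capacitor estimate: with a conductive droplet between the two insulated electrodes, the applied voltage divides between the two insulators in proportion to thickness/permittivity, and the electrowetting effect on the bottom surface depends on the voltage dropped across the bottom insulator. The divider model and the polyimide dielectric constant below are illustrative assumptions, not values from this description.

```python
# Sketch (assumed model): voltage division across two series insulators when
# a conductive droplet sits between an actuation electrode (bottom insulator,
# thickness d_bot) and an insulated reference electrode (top, thickness d_top).

def bottom_insulator_fraction(d_bot, d_top, k_bot=3.4, k_top=3.4):
    """Fraction of the applied voltage appearing across the bottom insulator.

    Series capacitors per unit area: C_i ~ k_i / d_i, so the voltage divides
    in proportion to d_i / k_i.  k defaults to ~3.4 (typical polyimide).
    """
    w_bot = d_bot / k_bot        # inverse capacitance weight for bottom layer
    w_top = d_top / k_top
    return w_bot / (w_bot + w_top)

# Thickness factors from the text (d_bot = factor * d_top):
for factor in (2, 10, 100):
    frac = bottom_insulator_fraction(d_bot=float(factor), d_top=1.0)
    print(f"factor {factor:>3}: {frac:.1%} of applied V across bottom insulator")
```

At a factor of 2 only about two-thirds of the applied voltage is usefully dropped across the bottom insulator, while at a factor of 100 nearly all of it is, which is consistent with the statement that a larger factor lowers the voltage required for droplet operations.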
2 , the inventors have unexpectedly found that droplet operations can be conducted using insulated reference electrodes by inducing a voltage in the droplet. in one embodiment, the dielectric material is a hydrophobic material. for example, fluorinated ethylene propylene (fep; available from dupont as teflon® fep) is a suitable hydrophobic material. the hydrophobic material may, in some embodiments, serve as both the dielectric and the hydrophobic coating. this embodiment improves ease of manufacture, since an additional hydrophobic coating is not required. in a related embodiment, the dielectric material includes a laminated film. fep also serves as an example of a laminated film. using a film dielectric which is hydrophobic facilitates the use of perfluorinated solvents as filler fluids. perfluorinated solvents are ideal filler fluids for many applications, since they are immiscible with both aqueous and organic liquids. thus, aqueous and organic droplets can be subjected to droplet operations in such an environment. thus, the invention also provides a method of conducting a droplet operation on an organic droplet in a droplet actuator loaded with perfluorinated solvent as a filler fluid.
for example, the method provides for dispensing one or more organic droplets from a source organic droplet; splitting, separating or dividing an organic droplet into two or more organic droplets; transporting an organic droplet from one location to another in any direction; merging or combining two or more organic droplets into a single droplet; diluting an organic droplet; mixing an organic droplet; agitating an organic droplet; deforming an organic droplet; retaining an organic droplet in position; incubating an organic droplet; heating an organic droplet; vaporizing an organic droplet; condensing an organic droplet from a vapor; cooling an organic droplet; disposing of an organic droplet; transporting an organic droplet out of a droplet actuator; other droplet operations described herein; and/or any combination of the foregoing; in each case on a droplet actuator in which the droplet operations surface is coated with, in contact with or flooded with a perfluorinated solvent. the foregoing operations are suitably conducted on a droplet operations surface that is composed of or is coated with a hydrophobic perfluorinated solvent-tolerant coating, such as fep. fig. 4a illustrates an electrode pattern 400 for a droplet actuator (not shown). electrode pattern 400 includes a set of electrodes 410 . electrodes 410 are substantially h-shaped, leaving gaps in top and bottom regions for reference electrodes 414 . reference electrodes 414 are inset into the gaps in electrodes 410 . as shown, gaps are provided on two sides of electrodes 410 ; however in some embodiments, gaps may be provided on only one side or on more than two sides. further, the illustrated electrodes show single gaps with single reference electrodes inset therein; however, it will be appreciated that multiple gaps may be provided, and in some embodiments, the electrode 410 and the reference electrode 414 may be substantially interdigitated.
electrodes 410 may be used to conduct one or more droplet operations. reference electrodes 414 may be exposed to the gap, and in some cases, they may protrude into the gap, e.g., as described with reference to reference electrodes 136 of fig. 1 or they may be insulated, e.g., as described with reference to figs. 2 and 3 . one or more insulated wires 418 provide an electrical connection to reference electrodes 414 . fig. 4a shows a droplet 420 that is being manipulated along electrodes 410 using reference electrodes 414 . in one embodiment, the top or bottom substrate may include a conductive coating over some portion or all of the surface exposed to the droplet actuator. an example of such a conductive coating is indium tin oxide (ito). the conductive coating may itself be coated with a hydrophobic layer. a variety of materials are suitable for coating the conductive layer to provide a hydrophobic surface. one example is a class of compounds known as perfluorophosphonates. perfluorophosphonates may be useful for establishing a hydrophobic layer over a conductive layer, such as a metal. in one embodiment, a perfluorophosphonate is used to form a substantial monolayer over the conductive layer. for example, a droplet actuator may include a metal conducting layer coated with a perfluorophosphonate exposed to a region in which droplets are subjected to droplet operations. similarly, a droplet actuator may include a metal conducting layer coated with a perfluorophosphonate monolayer exposed to a region in which droplets are subjected to droplet operations. the perfluorophosphonate may be deposited on the conducting layer in an amount which facilitates the conducting of droplet operations. the perfluorophosphonate layer may reduce fouling during droplet operations relative to fouling that would occur in the absence of the phosphonate or perfluorophosphonate coating. the conducting layer may, in some embodiments, include ito. 
as another example, a droplet actuator comprising two substrates separated to form a gap, each substrate comprising electrodes configured for conducting droplet operations, may include ito on a top substrate coated with a perfluorophosphonate. a suitable perfluorophosphonate for use in accordance with the invention is 1-phosphonoheptadecafluorooctane (cf 3 (cf 2 ) 7 po 3 h 2 ). this material can be synthesized using known methods starting with 1-bromoheptadecafluorooctane (cf 3 (cf 2 ) 7 br). similar molecules of varying lengths can be synthesized using well-known techniques starting with known precursors. fig. 4b illustrates an electrode pattern 450 for a droplet actuator (not shown). electrode pattern 450 is substantially the same as electrode pattern 400 of fig. 4a , except that the geometries of electrodes 410 and the inset reference electrodes 414 differ from the electrode geometries illustrated in fig. 4a . electrodes 410 in fig. 4b are shaped to provide a gap between each adjacent pair of electrodes 410 . electrodes 414 are inset between electrodes 410 rather than inset into electrodes 410 . as described above with reference to fig. 4a , reference electrodes 414 in fig. 4b may also be exposed to the gap, and in some cases, they may protrude into the gap, e.g., as described with reference to reference electrodes 136 of fig. 1 or they may be insulated, e.g., as described with reference to figs. 2 and 3 . one or more insulated wires 418 provide an electrical connection to reference electrodes 414 . fig. 4b shows a droplet 420 that is being manipulated along electrodes 410 using reference electrodes 414 . electrodes 410 may be used to conduct one or more droplet operations. figs. 5a , 5 b, 5 c, 5 d and 5 e illustrate electrode patterns 510 , 520 , 530 , 540 , and 550 , respectively, which are yet other nonlimiting examples of electrode configurations of the invention.
electrode patterns 510 , 520 , 530 , 540 , and 550 illustrate configurations in which the electrodes are overlapping, but not interdigitated, in order to facilitate droplet overlap with adjacent electrodes. these electrode patterns can be combined with reference electrodes that are also inset into and/or between and/or interdigitated with the electrodes, e.g., as described with reference to fig. 4 . the illustrated overlapping electrodes exhibit rotational symmetry. the examples illustrated in fig. 5 show four-fold rotational symmetry, but it will be appreciated that the overlapping electrodes may exhibit a rotational symmetry which is x-fold, where x is 3, 4, 5, 6, 7, 8, 9, 10 or greater. further, a droplet actuator may combine electrodes with different x-fold rotational symmetries. electrodes with rotational symmetry are preferred for overlapping electrodes, since the symmetry causes the droplets to be centered over the electrode. further, the droplet shape will also have rotational symmetry, which permits overlap with adjacent electrodes or reference elements in all directions. in some embodiments, one or more of the electrodes have rotational symmetry but not reflection symmetry. in another embodiment, one or more of the electrodes have rotational symmetry and reflection symmetry, where the rotational symmetry is x-fold, and x is 5, 6, 7, 8, 9, 10 or more. in a further embodiment, the rotationally symmetrical overlapping electrodes are arranged such that no straight line can be drawn between two adjacent electrodes without overlapping one or both of the two adjacent electrodes. the invention also includes electrodes that are sections of rotationally symmetrical shapes, such as a quarter or half of a rotationally symmetrical shape. in some embodiments, the sections are characterized in that the lines creating the sections generally intersect the center point of the rotationally symmetrical shape, i.e., like slices of a pie. 
in still another embodiment, the overlapping regions of adjacent rotationally symmetrical electrodes generally fit together like pieces of a puzzle except that each point along adjacent edges of adjacent electrodes is separated by a gap from a corresponding point on the other of the adjacent electrodes. fig. 6 illustrates a side view of a droplet actuator 600 . droplet actuator 600 includes a top substrate 610 and a bottom substrate 614 that are arranged in a generally parallel fashion. top substrate 610 and bottom substrate 614 are separated to provide a gap 618 therebetween. both top substrate 610 and bottom substrate 614 include a set of electrodes 622 , e.g., droplet operations electrodes. in the embodiment shown, both substrates include an insulator layer 626 associated therewith, which forms a droplet operations surface 627 . insulator layers 626 may be formed of any dielectric material, such as polyimide. reference electrodes are not shown. a hydrophobic coating (not shown) may also be present. in one embodiment, droplet operations electrodes 622 include at least a subset of electrodes 622 associated with top substrate 610 which are substantially congruent (having substantially the same size and shape) with and/or in registration (being substantially aligned on opposite plates) with a subset of electrodes 622 associated with bottom substrate 614 (e.g., a perpendicular line passing through the center-point of an electrode 622 on the bottom substrate 614 would approximately pass through the center point of a corresponding electrode 622 on the top substrate 610 ). 
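The x-fold rotational (but not reflective) symmetry discussed above can be checked numerically for a candidate electrode outline: rotating the vertex set by 360/x degrees about its centroid must map it onto itself. The sketch below and its pinwheel coordinates are invented for illustration; they are not an electrode geometry from this description.

```python
import math

def has_x_fold_symmetry(points, x):
    """True if rotating the vertex set by 360/x degrees about its centroid
    maps it onto itself (coordinates compared after rounding to 6 decimals)."""
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    a = 2 * math.pi / x
    rotated = {(round((px - cx) * math.cos(a) - (py - cy) * math.sin(a) + cx, 6),
                round((px - cx) * math.sin(a) + (py - cy) * math.cos(a) + cy, 6))
               for px, py in points}
    return rotated == {(round(px, 6), round(py, 6)) for px, py in points}

# A pinwheel outline: 4-fold rotational symmetry but no reflective symmetry,
# like the rotationally-but-not-reflectively symmetrical electrodes described.
pinwheel = [(2, 0), (0, 2), (-2, 0), (0, -2), (2, 1), (-1, 2), (-2, -1), (1, -2)]
print(has_x_fold_symmetry(pinwheel, 4))   # True
print(has_x_fold_symmetry(pinwheel, 3))   # False
```

Reflecting the pinwheel across the x-axis sends (2, 1) to (2, -1), which is not a vertex, so the shape is rotationally but not reflectively symmetrical, which is the property that keeps droplets centered while still permitting directional overlap with neighbors.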
in one embodiment, gap 618 is sufficiently wide that: (a) one or more droplets 630 having a footprint which is from about 1 to about 5 times the size of the footprint of a droplet operations electrode 622 can be subjected to droplet operations on surface 627 of top substrate 610 without contacting surface 627 of bottom substrate 614 ; and (b) one or more droplets 634 having a footprint which is from about 1 to about 5 times the size of the footprint of a droplet operations electrode 622 can be subjected to droplet operations on surface 627 of bottom substrate 614 without contacting surface 627 of top substrate 610 . in this embodiment, droplets may be subjected to droplet operations along both substrates (top and bottom). in one embodiment, droplets may be merged by bringing a droplet on one surface into contact with a droplet on the other surface. droplet actuator cartridges of the invention may in various embodiments include fluidic inputs, on-chip reservoirs, droplet generation units and droplet pathways for transport, mixing, and incubation. the fluidic input port provides an interface between the exterior and interior of the droplet actuator. the design of the fluidic input port is challenging due to the discrepancy in the scales of real world samples (microliters-milliliters) and the lab-on-a-chip (sub-microliter). if oil is used as the filler fluid in the droplet actuator gap, there is also the possibility of introducing air bubbles during liquid injection. the dimensions of the fluidic input may be selected to ensure that the liquid is stable in the reservoirs and does not spontaneously flow back to the loading port after loading. the entrapment of air as bubbles in the filler fluid should be completely avoided or minimized during the loading process. in some embodiments the fluidic input port is designed for manual loading of the reservoirs using a pipette through a substrate of the droplet actuator. 
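The geometric condition for independent operation on both substrates amounts to keeping each droplet's height below the gap. A rough flat-disc model makes the check concrete; the volume, electrode size, footprint factor, and gap below are all invented numbers for illustration, not design values from this description.

```python
# Sketch (all numbers assumed): verify that a droplet flattened against one
# substrate does not reach the droplet operations surface of the opposite
# substrate.  The droplet is modeled as a flat disc of uniform height.

def droplet_height_um(volume_nl, electrode_side_um, footprint_factor):
    """Height (um) of a droplet whose footprint area is footprint_factor
    times the area of a square electrode of the given side length."""
    area_um2 = footprint_factor * electrode_side_um ** 2
    return (volume_nl * 1e6) / area_um2      # 1 nl = 1e6 um^3

gap_um = 300.0                               # assumed substrate separation
h = droplet_height_um(volume_nl=50, electrode_side_um=500, footprint_factor=2)
print(f"droplet height ~{h:.0f} um; clears the {gap_um:.0f} um gap: {h < gap_um}")
```

With these assumed numbers a 50 nl droplet spread over twice the electrode footprint is about 100 μm tall, so a 300 μm gap would leave room for a comparable droplet on the opposite substrate.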
the sample (or reagent) may, for example, be injected into the reservoir through a loading opening in the top substrate. the opening may, for example, be configured to fit a small volume (<2 μl) pipette tip. fig. 7 illustrates an embodiment in which the loading opening is connected to the reservoir by a narrow channel of width w, patterned in the spacer material. the liquid pressure in the reservoir is on the order of γ(1/r+1/h) where r is the radius of the reservoir, h is the height of the reservoir and γ is the interfacial tension of the liquid with the surrounding media. since r is typically much greater than h the pressure can be approximated as γ/h. the pressure in the channel connecting the loading port and the reservoir is γ(1/w+1/h). if w is on the order of h then the pressure in the channel is 2γ/h which is twice the pressure in the reservoir. therefore by choosing w to be close to h the liquid is forced to remain in the reservoir and not spontaneously flow back into the loading opening. this pressure difference is initially overcome by the positive displacement pipetting action, to fill the reservoir with the liquid. fig. 7 also illustrates steps for dispensing a droplet. in the specific embodiment illustrated, droplet dispensing from an on-chip reservoir occurs in the following steps. in step a, the reservoir electrode is activated. in step b, a liquid column is extruded from the reservoir by activating a series of electrodes adjacent to it until the column overlaps the electrode on which the droplet is to be formed. in step c, all the remaining electrodes are deactivated to form a neck in the column. in step d, simultaneously or subsequently to step c, the reservoir electrode is activated to pull the liquid back causing the neck to break completely and form a droplet. though simple in principle, the reliability and repeatability of the dispensing process is affected by several design and experimental parameters.
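The capillary-pressure argument above can be checked with the text's own formula, p = γ·Σ(1/L) over the confining lengths: with r much larger than h the reservoir pressure is roughly γ/h, while a channel of width w ≈ h sits near 2γ/h, so liquid stays in the reservoir. The interfacial tension and the specific dimensions below are illustrative assumptions.

```python
# Worked numbers for the reservoir-vs-channel pressure argument in the text.
# gamma, r, h, and w are assumed values chosen only to illustrate the scaling.

def laplace_pressure(gamma, *curvature_lengths):
    """Capillary pressure scale gamma * sum(1/L) over confining lengths (Pa)."""
    return gamma * sum(1.0 / L for L in curvature_lengths)

gamma = 0.040          # N/m, assumed water/oil interfacial tension
r = 1500e-6            # reservoir radius (m); note r >> h
h = 100e-6             # reservoir / gap height (m)
w = 100e-6             # loading-channel width, chosen close to h

p_reservoir = laplace_pressure(gamma, r, h)    # ~ gamma/h since r >> h
p_channel = laplace_pressure(gamma, w, h)      # ~ 2*gamma/h
print(f"reservoir ~{p_reservoir:.0f} Pa, channel ~{p_channel:.0f} Pa, "
      f"ratio {p_channel / p_reservoir:.2f}")
```

The channel pressure comes out close to twice the reservoir pressure, matching the 2γ/h versus γ/h comparison and explaining why the liquid does not spontaneously flow back toward the loading opening.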
the design parameters include the reservoir shape and size, shape and size of the pull-back electrode, size of the unit electrode (and correspondingly the unit droplet) and the spacer thickness. in one embodiment, the design parameters may be established as follows: the electrode size may be fixed, e.g., at about 500 μm, and most of the other design parameters may be chosen using this as the starting point. droplet dispensing for a water-silicone oil system may be suitably conducted using a droplet aspect ratio (diameter/height) greater than 5, and a water-air system may be suitably conducted using an aspect ratio greater than 10. thus, given an approximately 500 μm electrode size, the spacer thickness may be about 100 μm for a nominal droplet diameter of 500 μm. for this electrode size and spacer thickness combination the unit droplet volume is expected to be between about 25 and 50 nl. larger aspect ratios cause droplets to split easily even during transport. as a rule of thumb, an aspect ratio between about 4 and about 6 is optimal for droplet transport, dispensing and splitting for an electrowetting system in silicone oil. the reservoir size is essentially determined by the smallest pipette-loadable volume on the lower end and chip real-estate concerns on the higher end. in theory, the reservoirs could be made as large as possible and always filled with a smaller quantity of liquid as needed. in some embodiments, reservoir capacities may vary from about 500 to about 1500 nl. a tapering pull-back electrode (wider at the dispensing end) may be employed in some embodiments to ensure that the liquid stays at the dispensing end of the reservoir as the reservoir is depleted. in addition to the design parameters discussed above there are additional experimental factors which affect dispensing, and these include the volume of liquid in the reservoir, the length of the extruded liquid column and the voltage applied.
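The unit-droplet estimate above can be reproduced with a simple disc model: for a 100 μm spacer, a droplet whose diameter matches the 500 μm electrode holds about 20 nl, and the stated 25–50 nl range corresponds to droplet footprints somewhat larger than the electrode. Treating the droplet as a flat disc, and the 800 μm upper diameter, are assumptions made here for illustration.

```python
import math

# Disc-droplet volume for the text's design example: ~500 um electrode,
# ~100 um spacer; aspect ratio = diameter / height.

def unit_droplet_volume_nl(diameter_um, height_um):
    """Volume (nl) of a disc-shaped droplet of given diameter and height."""
    r = diameter_um / 2.0
    return math.pi * r * r * height_um / 1e6     # 1e6 um^3 per nl

spacer_um = 100.0
for d in (500.0, 800.0):    # footprints from ~1x to ~1.6x the electrode width
    print(f"d = {d:.0f} um: aspect ratio {d / spacer_um:.0f}, "
          f"volume ~{unit_droplet_volume_nl(d, spacer_um):.0f} nl")
```

The 500 μm case gives an aspect ratio of 5, consistent with the water-silicone oil guideline, while the larger footprint brackets the upper end of the 25-50 nl unit-volume range.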
it is generally observed that the volume variation is much higher for the last few droplets generated from a reservoir, i.e., when the reservoir is close to being empty. the length of the extruded column also determines the volume of a unit droplet. during the necking process the liquid in the extruded column drains, with half the volume going towards the reservoir and the other half towards the droplet. therefore, the longer the extruded finger, the larger the droplet volume. the volume variation is also larger when the droplet is formed farther away from the reservoir. the extruded liquid column also determines the minimum unusable dead volume in the reservoir. the invention provides droplet actuators and associated systems configured for loading one or more droplet fluids by displacement of filler fluid. the invention also provides methods of making and using such droplet actuators. in some cases, the droplet fluid loading approach of the invention relies on displacement of filler fluid in order to move a droplet fluid from a locus which is exterior to the gap to a locus which is inside the gap and/or from one portion of the gap to another. in one embodiment, the droplet fluid loading operation moves a droplet fluid from a position in which the droplet is not subject to droplet operations to a locus in which the fluid is subject to droplet operations. for example, a droplet fluid loading operation of the invention may be employed to move a droplet fluid from a locus in which the droplet fluid is not subject to electrode-mediated droplet operations into a locus in which the droplet fluid is subject to electrode-mediated droplet operations. in a specific example, an aliquot of droplet fluid may be transported into proximity with electrodes configured to dispense droplets of the droplet fluid, and the electrode arrangement may be used to dispense such droplets and may further be used to transport such droplets to downstream droplet operations, e.g.
for conducting an assay. various droplet fluid loading approaches of the invention work well for any droplet fluid volume, including small droplet fluid volumes; reduce, preferably entirely eliminate, the possibility of introducing air into the droplet actuator during loading; and reduce, preferably entirely eliminate, dead volume of droplet fluid. figs. 8a and 8b illustrate a side view and top view (not to scale), respectively, of a droplet actuator 800 . droplet actuator 800 is configured to make use of negative displacement of filler fluid for droplet fluid loading. droplet actuator 800 includes a top substrate 810 and a bottom substrate 814 arranged to provide a gap for conducting droplet operations. a reservoir electrode 818 and a set of electrodes 822 (e.g., droplet operations electrodes) are provided in association with bottom substrate 814 . the gap between top substrate 810 and bottom substrate 814 is filled with a volume of filler fluid 824 . a loading assembly 826 is provided atop top substrate 810 , as illustrated in fig. 8a . it will be appreciated that top substrate 810 and loading assembly 826 (as well as other loading assemblies described herein) may be a single structure comprising some or all elements of top substrate 810 and loading assembly 826 . loading assembly 826 includes a droplet fluid reservoir 830 that substantially aligns with an inlet opening 825 of top substrate 810 . droplet fluid reservoir 830 is configured to receive a volume of droplet fluid (not shown), which is to be loaded into the gap of droplet actuator 800 . loading assembly 826 may also include a negative pressure opening 834 that substantially aligns with an outlet opening of top substrate 810 . negative pressure opening 834 is configured to receive a volume of filler fluid 824 that is displaced during loading of the droplet fluid. fig.
8b illustrates gasket 838 arranged to direct droplet fluid (not shown) from droplet fluid reservoir 830 toward reservoir electrode 818 during a fluid loading operation. reservoir 830 is located a certain distance from reservoir electrode 818 in order to hinder or restrain droplet fluid (not shown) from retreating back into droplet fluid reservoir 830 once loaded into droplet actuator 800 . additional aspects of droplet actuator 800 in use are described with reference to figs. 9a, 9b, and 9c. fig. 9a illustrates a side view of droplet actuator 900 (not to scale) with droplet fluid reservoir 930 being loaded with droplet fluid. a droplet fluid source 950 , such as a pipette or syringe, may be used to deposit a volume of droplet fluid 954 into droplet fluid reservoir 930 . a negative pressure device 958 (not to scale), such as, but not limited to, a syringe, pipette, or pump, may be securely fitted to negative pressure opening 934 . the size of negative pressure opening 934 may be selected to couple the opening to a negative pressure device 958 , e.g., the tip of a pipette, syringe, or other negative pressure device or coupling for a negative pressure device, such as a capillary tube. initially, negative pressure device 958 is in a state of applying little or no significant negative pressure to filler fluid 924 , as illustrated in fig. 9a , and droplet fluid 954 is retained in droplet fluid reservoir 930 . fig. 9b illustrates a side view of droplet actuator 900 during a droplet fluid loading operation using negative pressure device 958 . negative pressure is applied to filler fluid 924 using negative pressure device 958 . droplet fluid 954 flows from droplet fluid reservoir 930 through opening 925 (shown in fig. 9a ), into droplet actuator 900 , and toward reservoir electrode 918 . the negative pressure device forces a volume of filler fluid 924 out of the gap, and the displaced filler fluid is replaced by a volume of droplet fluid 954 .
this action continues until a desired volume of droplet fluid 954 is drawn into sufficient proximity with reservoir electrode 918 to permit reservoir electrode 918 to be used to conduct one or more electrode-mediated droplet operations. as illustrated in fig. 9c , reservoir electrode 918 may be activated to induce loaded fluid to move into a locus which is generally atop the reservoir electrode 918 . fig. 9c illustrates a side view of droplet actuator 900 following the droplet fluid loading operation. a slug of droplet fluid 954 is positioned atop reservoir electrode 918 . a volume of filler fluid 924 has been removed from the gap due to the action of negative pressure device 958 . fig. 10a illustrates a side view (not to scale) of a droplet actuator 1000 that makes use of negative displacement for droplet fluid loading. droplet actuator 1000 is substantially the same as droplet actuator 800 that is described in fig. 8 , except that the negative pressure opening and the negative pressure device of loading assembly 1026 are constituted by a threaded negative pressure opening 1010 that has a screw 1014 therein. the action of backing screw 1014 out of threaded negative pressure opening 1010 creates negative pressure (i.e., vacuum pressure). fig. 10a illustrates screw 1014 substantially fully engaged within threaded negative pressure opening 1010 and a volume of droplet fluid 1054 present in droplet fluid reservoir 1030 . screw 1014 may be backed out of threaded negative pressure opening 1010 to force a volume of filler fluid 1024 out of the gap. the displaced filler fluid 1024 is replaced by droplet fluid 1054 as it is drawn into droplet actuator 1000 . fig. 10b illustrates a side view of droplet actuator 1000 with the droplet fluid loading operation complete. more specifically, fig.
10b illustrates a slug of droplet fluid 1054 atop reservoir electrode 1018 and a volume of filler fluid 1024 that is present within threaded negative pressure opening 1010 due to the action of backing out screw 1014 , which creates a negative pressure (i.e., vacuum pressure). referring again to fig. 8 , loading assembly 826 , which may include any of the active negative pressure mechanisms, may be permanently attached to the droplet actuator or, alternatively, may be attached to the droplet actuator during droplet fluid loading only and then removed. fig. 11a illustrates a side view (not to scale) of a droplet actuator 1100 . droplet actuator 1100 is substantially the same as the droplet actuator that is described in figs. 8 and 9 , except that the negative pressure opening and the negative pressure device of loading assembly 1126 are replaced with a negative pressure opening 1110 that has a septum 1114 therein. septum 1114 is configured to seal negative pressure opening 1110 and is formed of a material that is suitable for sealing, that is resistant to the filler fluid, and that may be easily punctured. for example, septum 1114 may be formed of any rubbery material, such as elastomer material. atop septum 1114 is an absorbent material 1118 , which may be any material that is suitable for absorbing filler fluid 1124 and that may be easily punctured. for example, absorbent material 1118 may be a sponge material or foam material. in operation, a volume of droplet fluid 1154 is deposited into droplet fluid reservoir 1130 , as illustrated in fig. 11a . subsequently, septum 1114 and absorbent material 1118 are punctured in a manner to form a capillary 1122 between filler fluid 1124 in the gap of droplet actuator 1100 and absorbent material 1118 , as illustrated in fig. 11b .
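Returning to the screw-based mechanism of figs. 10a and 10b: the volume of filler fluid drawn out per turn is just the cylindrical volume the retreating screw vacates. The thread pitch and screw diameter below are illustrative assumptions, not values from the text:

```python
import math

# Volume of filler fluid drawn per turn of the backing screw: each full turn
# retracts the screw by one thread pitch, vacating a cylinder that filler
# fluid from the gap must fill. Pitch and diameter are assumed values.

PITCH_M = 0.5e-3         # thread pitch (advance per turn), assumed
SCREW_DIAMETER_M = 3e-3  # screw diameter, assumed

def drawn_volume_nl(turns):
    """Filler-fluid volume (nL) displaced by backing the screw out `turns` turns."""
    cross_section_m2 = math.pi * (SCREW_DIAMETER_M / 2) ** 2
    return turns * PITCH_M * cross_section_m2 * 1e12  # m^3 -> nL
```

With these assumed dimensions a single turn vacates roughly 3.5 μl, far more than the 25-50 nl unit droplets discussed earlier, so fine control of the loaded volume would come from fractional turns.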
in this way, due to negative pressure created by capillary 1122 , which displaces filler fluid 1124 into absorbent material 1118 , droplet fluid 1154 displaces filler fluid 1124 as it is pulled into sufficient proximity with reservoir electrode 1118 such that reservoir electrode 1118 may be employed to conduct one or more droplet operations using droplet fluid 1154 . fig. 11b illustrates a side view of droplet actuator 1100 with the droplet fluid loading operation complete. more specifically, fig. 11b illustrates a slug of droplet fluid 1154 atop reservoir electrode 1118 and a volume of filler fluid 1124 that is present within capillary 1122 and absorbent material 1118 due to the creation of negative pressure when septum 1114 and absorbent material 1118 are punctured. referring again to fig. 11b , droplet fluid reservoir 1130 has a diameter D, the gap of droplet actuator 1100 has a height H, and capillary 1122 has a diameter d. in order to create the desired pressure differentials along droplet actuator 1100 that best encourage fluid flow from droplet fluid reservoir 1130 to capillary 1122 , D > H > d. fig. 12a illustrates a side view (not to scale) of a droplet actuator 1200 . droplet actuator 1200 makes use of a passive method of filler fluid displacement for droplet fluid loading. droplet actuator 1200 is substantially the same as the droplet actuator that is described in figs. 8 and 9 , except that the negative pressure opening of loading assembly 1226 that has the negative pressure device installed therein is replaced with a capillary 1210 , with no mechanism installed therein. additionally, droplet fluid reservoir 1230 has a diameter D, the gap of droplet actuator 1200 has a height H, and capillary 1210 has a diameter d. in order to create the desired pressure differentials along droplet actuator 1200 that promote fluid flow by capillary forces from droplet fluid reservoir 1230 into capillary 1210 , D > H > d.
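The ordering of reservoir diameter, gap height, and capillary diameter called for above can be motivated with a rough Young-Laplace sketch: capillary pressure scales inversely with feature size, so the smallest opening pulls hardest and fluid is driven from the reservoir, through the gap, toward the capillary. The interfacial tension and dimensions below are illustrative assumptions; only the ordering matters:

```python
# Young-Laplace capillary-pressure sketch. For a circular opening of
# diameter d the meniscus pressure drop is ~4*gamma/d; for a parallel-plate
# gap of height h it is ~2*gamma/h. Values are illustrative, not from the text.

GAMMA = 0.04  # N/m, assumed water/silicone-oil interfacial tension

def laplace_circular(diameter_m, gamma=GAMMA):
    """Meniscus pressure drop across a circular opening: 4*gamma/d."""
    return 4 * gamma / diameter_m

def laplace_gap(height_m, gamma=GAMMA):
    """Meniscus pressure drop in a parallel-plate gap: 2*gamma/h."""
    return 2 * gamma / height_m

# Reservoir diameter > gap height > capillary diameter (assumed values),
# so the capillary exerts the strongest pull and the reservoir the weakest.
RESERVOIR_DIAMETER_M, GAP_HEIGHT_M, CAPILLARY_DIAMETER_M = 2e-3, 300e-6, 100e-6
assert (laplace_circular(RESERVOIR_DIAMETER_M)
        < laplace_gap(GAP_HEIGHT_M)
        < laplace_circular(CAPILLARY_DIAMETER_M))
```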
the capillary 1210 is sealed, for example with tape (not shown), before fluid loading, and air is trapped within the capillary. in operation, when a volume of droplet fluid 1254 is loaded into droplet fluid reservoir 1230 , and the seal is removed, the capillary action of capillary 1210 pulls filler fluid 1224 therein and creates a negative pressure that allows a slug of droplet fluid 1254 to move into droplet actuator 1200 and displace filler fluid 1224 . fig. 12b illustrates a side view of droplet actuator 1200 with the droplet fluid loading operation complete. more specifically, fig. 12b illustrates a slug of droplet fluid 1254 atop reservoir electrode 1218 and a volume of filler fluid 1224 that is present within capillary 1210 due to the creation of negative pressure via capillary 1210 . figs. 13a and 13b illustrate a side view and top view (not to scale), respectively, of a droplet actuator 1300 . droplet actuator 1300 is formed of a top substrate 1310 and a bottom substrate 1314 , with a gap therebetween. a reservoir electrode 1318 and a set of electrodes 1322 (e.g., droplet operations electrodes) are provided on bottom substrate 1314 . the gap between top substrate 1310 and bottom substrate 1314 is filled with a volume of filler fluid 1326 . additionally, top substrate 1310 includes a fluid reservoir 1330 that substantially aligns with an inlet opening of top substrate 1310 , which is near reservoir electrode 1318 . fluid reservoir 1330 is configured to receive a volume of droplet fluid 1334 , which is to be loaded into droplet actuator 1300 . top substrate 1310 also includes one or more vent holes 1338 , which are disposed along electrodes 1322 and near a spacer 1342 that is between top substrate 1310 and bottom substrate 1314 . additionally, the one or more vent holes 1338 are sealed by a seal 1344 . in one example, seal 1344 may be a removable seal.
in another example, seal 1344 may be a seal that may be punctured, such as a seal that is formed of any rubbery material (e.g., elastomer material) or foil material. in any case, seal 1344 is formed of a material that is resistant to the filler fluid. furthermore, figs. 13a and 13b show a volume of air 1350 that is trapped in the gap of droplet actuator 1300 , and at the one or more vent holes 1338 . in operation, prior to loading filler fluid 1326 into droplet actuator 1300 , the one or more vent holes 1338 , which are negative pressure holes, are sealed via seal 1344 . with vent holes 1338 sealed, droplet actuator 1300 is then loaded with filler fluid 1326 , which causes a volume of air 1350 to be trapped in the gap, against spacer 1342 and at vent holes 1338 , as illustrated in figs. 13a and 13b . air 1350 is trapped under pressure because there is no path for venting air 1350 out of droplet actuator 1300 . the volume of air 1350 may be controlled, for example, by the placement of the one or more vent holes 1338 and/or by the geometry of spacer 1342 . droplet fluid 1334 is present in fluid reservoir 1330 , which is sealed with seal 1347 . thus, the contents of the droplet actuator are under pressure. in order to load droplet fluid 1334 into droplet actuator 1300 , seal 1344 is breached (e.g., removed, broken or punctured), which permits pressurized air 1350 to escape through vent holes 1338 , which causes droplet fluid 1334 to displace filler fluid 1326 as the filler fluid flows into the one or more vent holes 1338 . this action pulls a slug of droplet fluid 1334 onto reservoir electrode 1318 (not shown). additionally, fluid reservoir 1330 has a diameter D, the gap of droplet actuator 1300 has a height H, and vent holes 1338 have a diameter d. in order to create the desired pressure differentials along droplet actuator 1300 that best encourage fluid flow from fluid reservoir 1330 to vent holes 1338 , D > H > d.
various kinds of pressure sources, positive and/or negative, may be used to cause dislocation of filler fluid to result in the desired dislocation or movement of droplet fluid, e.g., vacuum pump, syringe, pipette, capillary forces, and/or absorbent materials. for example, negative pressure may be used to dislocate filler fluid and thereby move a droplet fluid from a locus which is exterior to the gap to a locus which is inside the gap and/or from one portion of the gap to another. the pressure source may be controlled via active and/or passive mechanisms. displaced filler fluid may be moved to another locus within the gap and/or transported out of the gap. in one embodiment, displaced filler fluid flows out of the gap, while a droplet fluid flows into the gap and into proximity with a droplet operations electrode. for examples of fluids that may be subjected to droplet operations using the electrode designs and droplet actuator architectures of the invention, see international patent application no. pct/us 06/47486, entitled, “droplet-based biochemistry,” filed on dec. 11, 2006. in some embodiments, the fluid includes a biological sample, such as whole blood, lymphatic fluid, serum, plasma, sweat, tear, saliva, sputum, cerebrospinal fluid, amniotic fluid, seminal fluid, vaginal excretion, serous fluid, synovial fluid, pericardial fluid, peritoneal fluid, pleural fluid, transudates, exudates, cystic fluid, bile, urine, gastric fluid, intestinal fluid, fecal samples, fluidized tissues, fluidized organisms, biological swabs and biological washes. in other embodiments, the fluid may be a reagent, such as water, deionized water, saline solutions, acidic solutions, basic solutions, detergent solutions and/or buffers. 
in still other embodiments, the fluid includes a reagent, such as a reagent for a biochemical protocol, such as a nucleic acid amplification protocol, an affinity-based assay protocol, a sequencing protocol, and/or a protocol for analyses of biological fluids. the foregoing detailed description of embodiments refers to the accompanying drawings, which illustrate specific embodiments of the invention. other embodiments having different structures and operations do not depart from the scope of the present invention. this specification is divided into sections for the convenience of the reader only. headings should not be construed as limiting of the scope of the invention. the definitions are intended as a part of the description of the invention. it will be understood that various details of the present invention may be changed without departing from the scope of the present invention. furthermore, the foregoing description is for the purpose of illustration only, and not for the purpose of limitation, as the present invention is defined by the claims as set forth hereinafter.
002-668-676-272-149
IB
[ "CN", "KR", "WO", "CA", "US", "DE", "GB", "EP" ]
H01Q1/24,H01Q1/38,H01Q13/08,H01Q13/10,H01Q1/36,H01Q1/44,H01Q13/18,H01Q1/42,H01Q9/00
2007-04-10T00:00:00
2007
[ "H01" ]
an antenna arrangement and antenna housing
an antenna arrangement comprising: an antenna occupying at least a first plane; a conductive structure that is isolated from the antenna but is arranged to be parasitically fed by the antenna, the conductive structure having a slot and occupying at least a second plane different to but adjacent the first plane.
claims 1. an antenna arrangement comprising: an antenna occupying at least a first plane; a conductive structure that is isolated from the antenna but is arranged to be parasitically fed by the antenna, the conductive structure having a slot and occupying at least a second plane different to but adjacent the first plane. 2. an antenna arrangement as claimed in claim 1 , wherein the conductive structure is part of a housing for the antenna arrangement. 3. an antenna arrangement as claimed in claim 2, wherein the housing is a housing for an apparatus and the antenna is an internal antenna located inside the housing. 4. an antenna arrangement as claimed in claim 2 or 3, wherein the housing comprises an edge and the slot terminates at the edge. 5. an antenna arrangement as claimed in any one of claims 2 to 4, wherein the housing has a face bounded by edges and the slot is positioned wholly within the face. 6. an antenna arrangement as claimed in any preceding claim, wherein the antenna has a first resonant frequency and the slot is dimensioned to have an electrical length that corresponds to one or more multiples of one quarter of the resonant wavelength corresponding to the first resonant frequency. 7. an antenna arrangement as claimed in any preceding claim wherein the slot has a minimum width and a length and the length is at least ten times greater than the minimum width. 8. an antenna arrangement as claimed in any preceding claim, wherein the slot has a constant width. 9. an antenna arrangement as claimed in any one of claims 1 to 7, wherein the slot has a variable width. 10. an antenna arrangement as claimed in any preceding claim, wherein the slot is straight. 11. an antenna arrangement as claimed in any one of claims 1 to 9, wherein the slot meanders. 12. an antenna arrangement as claimed in any one of claims 1 to 9, wherein the slot comprises one or more curved sections. 13.
an antenna arrangement as claimed in any preceding claim, comprising an electric circuit connected across the slot. 14. an antenna arrangement as claimed in claim 13, wherein the impedance of the electrical circuit tunes a resonant frequency of the antenna arrangement to a first resonant frequency. 15. an antenna arrangement as claimed in claim 13 or 14, wherein the electric circuit comprises a single component. 16. an antenna arrangement as claimed in any preceding claim, wherein a plastic housing comprises a metallic covering and the slot is defined by an absence of the metallic covering. 17. an antenna arrangement as claimed in any preceding claim, further comprising a matching circuit connected to the antenna. 18. an antenna arrangement as claimed in any preceding claim, wherein the antenna is a chip dielectric feeding antenna. 19. an apparatus comprising a housing, with exterior metallization, that defines an interior cavity and an antenna arrangement as claimed in any preceding claim, positioned within the cavity, wherein the exterior metallization provides the conductive structure and the slot provides an electromagnetic aperture to the interior cavity. 20. an apparatus as claimed in claim 19, wherein the slot is covered with a dielectric. 21. an apparatus comprising a conductive housing that defines an interior cavity, an opening in the conductive housing, and an antenna positioned within the cavity adjacent the opening. 22. an apparatus as claimed in claim 21 , wherein, in operation, the antenna feeds the conductive housing which operates as a resonator. 23. an apparatus as claimed in claim 21 or 22, wherein the antenna has a first resonant frequency and the opening has an electrical dimension corresponding to a resonance at the first resonant frequency. 24. an apparatus as claimed in claim 21 , 22 or 23, wherein the conductive housing comprises a dielectric substrate and an exterior metallization. 25.
an apparatus as claimed in claim 21 , 22, 23 or 24 wherein no conductive element or elements intervene between the antenna and the slot. 26. an antenna arrangement comprising: an antenna having a first resonant wavelength λ; a conductive housing that is isolated from the antenna but is indirectly fed by the antenna, the conductive housing having a slot that has an electrical length that corresponds to a multiple of λ/4. 27. a method comprising: directly feeding an antenna occupying at least a first plane; and using the antenna to indirectly feed a slotted conductive structure that is isolated from the antenna and occupies at least a second plane different to but adjacent the first plane.
title an antenna arrangement and antenna housing field of the invention embodiments of the present invention relate to an antenna arrangement and/or an apparatus housing an antenna arrangement. in particular, in some embodiments the housing is conductive. background to the invention as is well known a conductive enclosure shields the interior cavity defined by the enclosure from electromagnetic (em) radiation. the conductive material forms a block to photons and the effectiveness of the block depends upon the thickness of the material, the frequency of the photon and the electromagnetic properties of the material (electrical conductivity and magnetic permeability). for metal at radio frequencies, thin layers can provide effective high impedance shields. there is a current trend towards using metallic housings for electronic apparatuses. a metallic housing may be used for a number of reasons. it may, for example, provide a good electrical earth for the apparatus or it may, if applied as an exterior coat, provide a pleasing look and feel. it is now becoming common for an electronic apparatus to include wireless rf technology. such technology includes, for example, sensing technology such as rfid, mobile cellular technology such as umts, gsm etc, cable-less technology such as bluetooth and wireless usb and networking technology such as wlan. it would be desirable to provide an apparatus that is functional in one or more of these wireless technologies and uses a conductive housing. one solution would be to provide one or more external antennas for the apparatus but this is undesirable as it increases the size of the apparatus and also decreases its eye appeal.
brief description of the invention according to some embodiments of the invention there is provided an antenna arrangement comprising: an antenna occupying at least a first plane; a conductive structure that is isolated from the antenna but is arranged to be parasitically fed by the antenna, the conductive structure having a slot and occupying at least a second plane different to but adjacent the first plane. according to some embodiments of the invention there is provided an apparatus comprising a conductive housing that defines an interior cavity, an opening in the conductive housing, and an antenna positioned within the cavity adjacent the opening. according to some embodiments of the invention there is provided an antenna arrangement comprising: an antenna having a first resonant wavelength λ; a conductive housing that is isolated from the antenna but is indirectly fed by the antenna, the conductive housing having a slot that has a length that corresponds to a multiple of λ/4. the inventors have realized that the impedance a conductive housing presents may be tuned for a particular frequency by carefully positioning and sizing an opening in the conductive housing. the impedance of the housing can, for example, be tuned for a resonant frequency of an antenna thereby enabling the antenna to be placed in the interior of the housing. 
brief description of the drawings for a better understanding of the present invention reference will now be made by way of example only to the accompanying drawings in which: fig 1a schematically illustrates in plan view an apparatus comprising a slotted external conductive housing element that houses an antenna; fig 1b schematically illustrates a cross-sectional view of the apparatus illustrated in fig 1a; fig 2 schematically illustrates in plan view an apparatus comprising an external conductive housing element comprising a meandering slot; fig 3 schematically illustrates in plan view an apparatus comprising an external conductive housing element comprising a slot of variable width; and fig 4 schematically illustrates in plan view an apparatus comprising an external conductive housing element comprising a slot having an associated electrical tuning circuit. detailed description of embodiments of the invention the figures schematically illustrate an antenna arrangement 10 comprising: an antenna 2 occupying at least a first plane 6; and a conductive structure 12 that is not electrically connected to the antenna 2 but is parasitically fed by the antenna 2, the conductive structure 12 having a slot 14 and occupying at least a second plane different to but adjacent the first plane. in particular, figs 1a and 1b illustrate an apparatus 1 comprising an external conductive housing element 12 that houses an antenna 2. in this example, the housing element 12 forms a conductive structure that almost entirely surrounds a cavity 3 housing the internal antenna 2. the conductive housing element 12 comprises a slot 14 that facilitates the transfer of electromagnetic waves between the exterior of the housing 12 and the antenna 2. the slot 14 is defined by the absence of conductive material in the region of the slot 14.
the slot 14 may be an open aperture to the interior cavity 3 or it may be covered by a dielectric that is permeable to electromagnetic radiation such as plastic (other examples are ceramic and ferrite material). in one embodiment, the slot 14 may be engraved on a metal foil covering a plastic substrate. the slot 14 has a width w defined as the separation between opposing first and second terminating long edges 21 , 23 of the housing 12. the width w may be constant for the length of the slot or vary along the length of the slot 14. the slot 14 has a length l defined as the separation between opposing first and second terminating short edges 22, 24 of the housing 12. in the example illustrated in figs 1a and 1b, the slot is a region lying within a slot plane 16 and the housing element 12 provides a conductive structure that extends in the slot plane 16. at least a portion of the antenna 2 extends in an antenna plane 6, that is adjacent and parallel to (but separate from) the slot plane 16. the antenna 2 does not extend into the slot plane 16. the position of the antenna 2 relative to the slot 14 is such that it achieves very good or optimal coupling between the antenna 2 and the slotted housing 12. the antenna 2 and the conductive housing element 12 are galvanically isolated such that there is no dc current path between them. they are, however, arranged for electromagnetic coupling and together form an antenna arrangement 10. the antenna 2 has a resonant frequency f and the slot 14 is dimensioned to have an electrical length l' that corresponds to one or more multiples of one quarter of the resonant wavelength corresponding to the first resonant frequency f. l' = nλ/4 where n is a natural number, l' is the electrical length of the slot 14 and λ is the resonant wavelength. the dimensions of the slot result in the housing 12 parasitically resonating with the antenna 2.
this results in the characteristics of the antenna arrangement 10 such as bandwidth, efficiency etc being different to that of the antenna 2. the antenna 2 operates as a feed to the antenna arrangement 10. in the absence of a dielectric covering the slot 14, the electrical length l' may be the same as the physical length l of the slot. the characteristics of the resonance of the antenna arrangement 10 may be engineered by varying the physical and/or electrical characteristics of the slot 14. variations in the physical dimensions of the slot typically affect its associated electrical characteristics such as its electrical length and q-factor which affect the antenna arrangement's resonant frequency and bandwidth respectively. for example, varying the physical length l of the slot 14 varies its electrical length. varying the physical position of the slot 14 may affect its electrical characteristics. in fig 1 , the slot 14 terminates on an edge 18 of the housing 12, whereas in the examples illustrated in figs 2 and 3 the slot 14 does not terminate at an edge of the housing but is wholly contained within a face 13 of the housing 12. increasing the inductance associated with the slot 14 increases the slot's electrical length (which decreases the resonant frequency) and may decrease bandwidth. the electrical length may, for example, be increased by increasing the physical length of the slot. one option is to form the slot from one or more curved sections and another option is to meander the slot 14 as illustrated in fig 2 (instead of using a straight slot 14 as in figs 1 and 3). increasing the capacitance associated with the slot by, for example, decreasing the slot's width as illustrated in fig 3 (instead of having a constant width w as in figs 1 and 2) decreases the slot's electrical length (increasing the resonant frequency) and may increase bandwidth. 
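The relations above can be put numerically for the fundamental (n = 1) mode: l' = λ/4 = c/(4f), so lengthening the slot's electrical path (e.g. by meandering) lowers the resonant frequency and shortening it raises it. Free-space propagation is assumed; a dielectric covering makes the physical slot shorter than its electrical length:

```python
# Quarter-wave slot relations for the fundamental mode (n = 1).
# Free-space propagation assumed (no dielectric loading).

C = 299_792_458.0  # speed of light in vacuum, m/s

def slot_length_mm(freq_hz, n=1):
    """Electrical slot length l' = n * lambda / 4, in millimetres."""
    return n * (C / freq_hz) / 4 * 1e3

def slot_freq_ghz(electrical_length_mm, n=1):
    """Resonant frequency of a slot with electrical length l', in GHz."""
    return n * C / (4 * electrical_length_mm * 1e-3) / 1e9

# At the 2.45 GHz WLAN band the fundamental slot is ~30.6 mm, and a
# meandered (electrically longer) slot resonates lower:
assert slot_freq_ghz(40.0) < slot_freq_ghz(30.6)
```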
the electrical characteristics of the slot 14 may be engineered using lumped electrical components as an addition or as an alternative to changing the physical characteristics of the slot 14. fig 4 illustrates a slot 14 that has an electrical circuit 7 connected across the slot 14. the electrical component 7 may comprise one or more lumped components. the electrical characteristics of the antenna arrangement 10 can also be modified by attaching a matching circuit 8 to the antenna 2. the antenna arrangement is able to operate as a receiver and/or a transmitter at one or more of a large number of frequency bands including the following frequency bands: bluetooth (2400-2483.5 mhz); wlan (2400-2483.5 mhz); hlan (5150-5850 mhz); gps (1570.42-1580.42 mhz); us-gsm 850 (824-894 mhz); egsm 900 (880-960 mhz); eu-wcdma 900 (880-960 mhz); pcn/dcs 1800 (1710-1880 mhz); us-wcdma 1900 (1850-1990 mhz); wcdma 2100 (tx: 1920-1980 mhz rx: 2110-2180 mhz); pcs1900 (1850-1990 mhz); uwb lower (3100-4900 mhz); uwb upper (6000-10600 mhz); dvb-h (470-702 mhz); dvb-h us (1670-1675 mhz); wimax (2300-2400 mhz, 2305-2360 mhz, 2496-2690 mhz, 3300-3400 mhz, 3400-3800 mhz, 5250-5875 mhz); rfid uhf (433 mhz, 865-956 mhz, 2450 mhz). in one particular embodiment schematically illustrated in figs 1a and 1b, the apparatus is a mobile cellular telephone, the antenna 2 is a chip dielectric (ceramic) monopole feeding antenna and operates at the 2.45 ghz wlan band. it has dimensions of 9 mm x 3 mm x 2 mm (length, width, height) and is mounted on a piece of copper-free pwb 8 of size 9.75 mm x 7 mm. the length of the antenna 2 is orthogonal and transverse to the length of the slot 14. the distance between the antenna 2 and slot 14 is 1.1 mm. the housing 12 provides a homogeneous, metallic cover for the apparatus. the physical slot length l is about ¼ of the wavelength at 2.45 ghz. the slot 14 has a constant width w of 2.4 mm and a length l of 25 mm i.e. l > 10*w.
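The embodiment's figures can be cross-checked directly; slot length, slot width, and operating frequency are taken from the text, and the free-space wavelength is computed rather than quoted:

```python
# Cross-check of the 2.45 GHz embodiment's slot dimensions.

C = 299_792_458.0  # speed of light in vacuum, m/s

SLOT_LENGTH_MM = 25.0  # physical slot length, from the text
SLOT_WIDTH_MM = 2.4    # slot width, from the text
FREQ_HZ = 2.45e9       # WLAN band, from the text

wavelength_mm = C / FREQ_HZ * 1e3  # ~122.4 mm free-space wavelength

# The l > 10*w rule quoted in the text holds: 25 mm > 24 mm.
assert SLOT_LENGTH_MM > 10 * SLOT_WIDTH_MM

# 25 mm is ~0.20 free-space wavelengths, a little short of a quarter wave
# (~30.6 mm), consistent with dielectric loading making the slot
# electrically longer than its physical length.
fraction_of_wavelength = SLOT_LENGTH_MM / wavelength_mm
```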
the slot 14 terminates, as in figs 1a and 1b, at an edge of the housing 12. the slot 14 may be integrated into ventilation grates of the housing 12. the slot 14 may be covered with a plastic strip. although embodiments of the present invention have been described in the preceding paragraphs with reference to various examples, it should be appreciated that modifications to the examples given can be made without departing from the scope of the invention as claimed. whilst endeavoring in the foregoing specification to draw attention to those features of the invention believed to be of particular importance it should be understood that the applicant claims protection in respect of any patentable feature or combination of features hereinbefore referred to and/or shown in the drawings whether or not particular emphasis has been placed thereon. i/we claim:
002-842-755-267-816
TW
[ "CN", "US", "TW" ]
G06F3/041,G06F3/044
2012-03-13T00:00:00
2012
[ "G06" ]
electrode unit on touch-sensing element
an electrode unit on a touch-sensing element includes a first electrode and a second electrode. the first electrode includes a first conductive element and a plurality of second conductive elements. the first conductive element has a plurality of first funnel-shaped notches. the plurality of second conductive elements extends from the first conductive element. the second electrode includes a third conductive element, a plurality of fourth conductive elements, and a plurality of fifth conductive elements. the third conductive element has a plurality of second funnel-shaped notches. the plurality of fourth conductive elements extends outwards from the third conductive element. the plurality of fifth conductive elements extends outwards from the third conductive element.
1 . an electrode unit on a touch-sensing element, comprising: a first electrode, comprising: a first conductive element, having a plurality of first funnel-shaped notches; and a plurality of second conductive elements, extending from the first conductive element; and a second electrode, comprising: a third conductive element, having a plurality of second funnel-shaped notches; a plurality of fourth conductive elements, extending from the third conductive element; and a plurality of fifth conductive elements, extending from the third conductive element. 2 . the electrode unit of claim 1 , wherein the first electrode and the second electrode are not electrically connected. 3 . the electrode unit of claim 1 , wherein the first conductive element includes a wider part and a narrower part, and a width of the wider part is greater than a width of the narrower part. 4 . the electrode unit of claim 3 , wherein the plurality of first funnel-shaped notches are located in the wider part. 5 . the electrode unit of claim 1 , wherein the plurality of second conductive elements are strip-shaped, each of the plurality of second conductive elements includes a first section and a second section, the first section extends in directions in parallel with a first direction, and the second section is not in parallel with the first section. 6 . the electrode unit of claim 5 , further comprising: a plurality of sixth conductive elements, straightly extending from the first conductive element in a direction in parallel with the first direction, for outputting a sensing signal sensed by the first electrode. 7 . the electrode unit of claim 1 , wherein the third conductive element is a rectangle having the plurality of second funnel-shaped notches. 8 . 
the electrode unit of claim 1 , wherein the plurality of fourth conductive elements are strip-shaped, each of the plurality of fourth conductive elements includes a first section and a second section, the first section extends in directions in parallel with a first direction, and the second section is not in parallel with the first section. 9 . the electrode unit of claim 8 , wherein the plurality of fifth conductive elements are strip-shaped and straightly extend from the third conductive element in directions in parallel with the first direction. 10 . the electrode unit of claim 8 , wherein the plurality of fifth conductive elements are strip-shaped and straightly extend from the third conductive element in directions in parallel with a second direction, and the second direction is different from the first direction. 11 . the electrode unit of claim 1 , wherein the electrode unit is a fringe electrode.
background of the invention 1. field of the invention the disclosed embodiments of the present invention relate to a sensing pattern design, and more particularly, to an electrode unit with a perimeter-lengthened touch-sensing pattern on a touch-sensing element located at fringes of a touch panel. 2. description of the prior art regarding a single-layered capacitive touch panel, a touch-sensing element on a touch panel is usually implemented using longitudinal electrodes and transverse electrodes made of transparent conductive materials (e.g., indium tin oxide (ito)). when a finger touches a longitudinal electrode and a transverse electrode, the inductive capacitance between the touched longitudinal electrode and transverse electrode changes accordingly. the difference of the inductive capacitance before and after the touch can then be used to calculate where the contact is. please refer to fig. 1 , which is a schematic diagram illustrating an example of a sensing pattern of electrodes on a conventional touch panel tp. the touch panel tp includes a plurality of touch-sensing elements tu, where each of the touch-sensing elements tu has an electrode unit 100 thereon, and the electrode unit 100 includes at least a first transverse electrode 110 and a second longitudinal electrode 120 . as shown in fig. 1 , the touch-sensing elements tu are staggered as a rectangular pattern, the first electrodes 110 on the same row are series-connected as a sensing trace, and the second electrodes 120 on the same column are series-connected as a sensing trace. in this way, the touch panel tp would have a plurality of transverse sensing traces t 1 -tn and a plurality of longitudinal sensing traces s 1 -sm. in addition, the electrode unit 100 also has a separation unit constituted by insulation material and disposed on an intersection of the corresponding first electrode 110 and second electrode 120 . 
hence, the sensing traces t 1 -tn and sensing traces s 1 -sm would not be electrically connected. however, since the area able to induct the inductive capacitance between two adjacent electrodes (i.e., the first electrode 110 and the second electrode 120 ) on a touch-sensing element tu located on the fringes of the touch panel tp is smaller than the corresponding area on a touch-sensing element tu located in the middle of the touch panel tp, when the finger enters the touch panel tp from the fringe, the inductive capacitance sensed by the electrodes on the fringes is smaller than the inductive capacitance sensed by the electrodes in the effective sensing area, which is prone to cause misjudgment. therefore, there is a need to enhance the inductive capacitance sensed by the electrodes of the touch-sensing element located on the fringes of the touch panel, in order to decrease the likelihood of faulty calculation of contact on the fringes of the touch panel. summary of the invention in accordance with exemplary embodiments of the present invention, an electrode unit with a perimeter-lengthened touch-sensing pattern on a touch-sensing element located at fringes of a touch panel is proposed to solve the above-mentioned problem. according to an aspect of the present invention, an exemplary electrode unit is disclosed. the electrode unit includes a first electrode and a second electrode. the first electrode includes a first conductive element and a plurality of second conductive elements. the first conductive element has a plurality of first funnel-shaped notches. the plurality of second conductive elements extends from the first conductive element. the second electrode includes a third conductive element, a plurality of fourth conductive elements, and a plurality of fifth conductive elements. 
the third conductive element has a plurality of second funnel-shaped notches. the plurality of fourth conductive elements extends from the third conductive element. the plurality of fifth conductive elements extends from the third conductive element. therefore, when deployed on the touch-sensing element located on the fringes of the touch panel, the present invention can decrease the likelihood of faulty calculation of contact on the fringes of the touch panel. these and other objectives of the present invention will no doubt become obvious to those of ordinary skill in the art after reading the following detailed description of the preferred embodiment that is illustrated in the various figures and drawings. brief description of the drawings fig. 1 is a schematic diagram illustrating an example of a sensing pattern of electrodes on a conventional touch panel. fig. 2a is a top view illustrating an electrode unit on a touch-sensing element according to an embodiment of the present invention. fig. 2b is a schematic diagram illustrating an embodiment of the first electrode in fig. 2a . fig. 2c is a schematic diagram illustrating an embodiment of the second electrode in fig. 2a . fig. 3 is a schematic diagram illustrating a sensing pattern of electrode units shown in fig. 2a on a touch panel according to an embodiment of the present invention. detailed description certain terms are used throughout the description and following claims to refer to particular components. as one skilled in the art will appreciate, manufacturers may refer to a component by different names. this document does not intend to distinguish between components that differ in name but not function. in the following description and in the claims, the terms “include” and “comprise” are used in an open-ended fashion, and thus should be interpreted to mean “include, but not limited to . . . ”. also, the term “couple” is intended to mean either an indirect or direct electrical connection. 
accordingly, if one device is electrically connected to another device, that connection may be through a direct electrical connection, or through an indirect electrical connection via other devices and connections. when an object touches an electrode unit of a current touch-sensing element and an electrode unit on an adjacent touch-sensing element, a contact of the object can be determined by calculating the difference between an inductive capacitance c sensed by the electrode unit before and after the touch, and comparing the inductive capacitance c with an inductive capacitance c′ sensed by the electrode unit on the adjacent touch-sensing element. therefore, a concept of the present invention is to increase the inductive capacitance c sensed by a touch-sensing element located on a fringe of a touch panel by increasing contact areas being able to induct the inductive capacitance c on an electrode unit of the touch-sensing element, such that accuracy of determining the contact of the object can be improved. more specifically, since an electrode itself has a certain thickness, the present invention may increase the contact areas being able to induct the inductive capacitance c by increasing a perimeter of a sensing pattern formed by the electrode unit. please refer to fig. 2a , which is a top view illustrating an electrode unit 200 on a touch-sensing element tu′ according to an embodiment of the present invention. in fig. 2a , the electrode unit 200 includes a first electrode 210 and a second electrode 220 . in addition, the electrode unit 200 also has a separation unit (not shown in fig. 2a ) constituted by insulation material and disposed at an intersection of the first electrode 210 and the second electrode 220 and located in between the first electrode 210 and the second electrode 220 , such that the first electrode 210 and the second electrode 220 are not electrically connected. 
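The contact-determination step described above, computing the capacitance difference before and after a touch and comparing it with the value sensed by the adjacent element, can be sketched in a few lines. This is a minimal illustration; the function names, units and threshold are hypothetical, not taken from the patent:

```python
def capacitance_delta(baseline_pf, touched_pf):
    # difference of the inductive capacitance before and after the touch
    return touched_pf - baseline_pf

def locate_contact(deltas, threshold_pf):
    # the contact is attributed to the element with the largest delta,
    # provided that delta clears the detection threshold
    best = max(range(len(deltas)), key=lambda i: deltas[i])
    return best if deltas[best] >= threshold_pf else None

# a fringe electrode with a lengthened perimeter senses a larger delta,
# so one common threshold works at the edge as well as in the middle
deltas = [capacitance_delta(1.00, 1.02),   # untouched neighbour
          capacitance_delta(1.00, 1.35),   # touched element
          capacitance_delta(1.00, 1.05)]   # partially covered neighbour
```

With a weak fringe signal the touched element's delta could fall below the threshold, which is exactly the misjudgment the perimeter-lengthened pattern is meant to avoid.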
therefore, there may be an inductive capacitance inducted between the first electrode 210 and the second electrode 220 . please refer to fig. 2a and fig. 2b concurrently. fig. 2b is a schematic diagram illustrating an embodiment of the first electrode 210 shown in fig. 2a . the first electrode 210 includes a first conductive element 212 and a plurality of second conductive elements 214 _ 1 - 214 _ 4 . in this embodiment, the first conductive element 212 includes a wider part 212 _ 1 and a narrower part 212 _ 2 , where the wider part 212 _ 1 is located at a middle section of the first conductive element 212 , and a width w 1 of the wider part 212 _ 1 is greater than a width w 2 of the narrower part 212 _ 2 (i.e., w1>w2). the first conductive element 212 has a plurality of first funnel-shaped notches h 1 _ 1 and h 1 _ 2 located on both sides of the wider part 212 _ 1 , respectively. in addition, each of the second conductive elements 214 _ 1 - 214 _ 4 is strip-shaped, and includes at least a first section l 1 and a second section l 2 . the first section l 1 extends outward from the first conductive element 212 in directions in parallel with a first direction d 1 , and the second section l 2 is not in parallel with the first section l 1 . in other words, a joint of the first section l 1 and its corresponding second section l 2 of each of the second conductive elements 214 _ 1 - 214 _ 4 forms a bent part. please note that, in this embodiment, the first section l 1 and its corresponding second section l 2 of each of the second conductive elements 214 _ 1 - 214 _ 4 are perpendicular to each other, and each of the second conductive elements 214 _ 1 - 214 _ 4 has only one bent part. however, it is for illustrative purpose only, and is not meant for a limitation of the present invention. 
for example, in another embodiment, at least one conductive element in the second conductive elements 214 _ 1 - 214 _ 4 may include a first section, a second section and a third section, where the first section is not in parallel with the second section, and the second section is not in parallel with the third section. at this moment, the conductive element that includes the first section, the second section and third section has two bent parts. that is, each of the conductive elements in the second conductive elements 214 _ 1 - 214 _ 4 has at least one bent part, and different conductive elements may have different numbers of bent parts. besides, this embodiment uses 4 second conductive elements for illustrative purpose only, and it is not meant for a limitation of the present invention. those skilled in the art should readily increase/decrease the number of second conductive elements according to actual design requirement. in addition, the electrode unit 200 further includes a plurality of third conductive elements 216 _ 1 and 216 _ 2 straightly extending outward from two ends of the first conductive element 212 in directions in parallel with the first direction d 1 , respectively, so as to output a sensing signal sig sensed by the first electrode 210 . the third conductive elements 216 _ 1 and 216 _ 2 are substantially strip-shaped, respectively. that is, the third conductive elements 216 _ 1 and 216 _ 2 may be considered as conductive wires, respectively, for outputting the sensing signal sig. please note that, in this embodiment, the first conductive element 212 and the second conductive elements 214 _ 1 - 214 _ 4 may be realized by indium tin oxide (ito), and the third conductive elements 216 _ 1 and 216 _ 2 may also be realized by ito, or realized by conductive metal (i.e., implemented in a metal layer of the touch-sensing element) based on actual requirement of signal output layouts. 
however, it is for illustrative purpose only, and is not meant for a limitation of the present invention. please refer to fig. 2a and fig. 2c concurrently. fig. 2c is a schematic diagram illustrating an embodiment of the second electrode 220 shown in fig. 2a . the second electrode 220 includes a fourth conductive element 222 , a plurality of fifth conductive elements 224 _ 1 - 224 _ 4 and a plurality of sixth conductive elements 226 _ 1 - 226 _ 4 . the fourth conductive element 222 has a plurality of second funnel-shaped notches h 2 _ 1 and h 2 _ 2 located at both sides of the fourth conductive element 222 , respectively, and the fourth conductive element 222 together with the notches h 2 _ 1 and h 2 _ 2 may substantially form a rectangle. each of the fifth conductive elements 224 _ 1 - 224 _ 4 is strip-shaped, and includes a first section l 1 and a second section l 2 . the first section l 1 extends outward from the fourth conductive element 222 in directions in parallel with a second direction d 2 , and the second section l 2 is not in parallel with the first section l 1 . in other words, a joint of the first section l 1 and its corresponding second section l 2 of each of fifth conductive elements 224 _ 1 - 224 _ 4 forms a bent part. in addition, each of sixth conductive elements 226 _ 1 - 226 _ 4 is strip-shaped, straightly extending outward from the fourth conductive element 222 in directions in parallel with a third direction d 3 , and the third direction d 3 is different from the second direction d 2 (in this embodiment, the second direction d 2 is perpendicular to the third direction d 3 , but it is for illustrative purpose only). however, when the touch-sensing element tu′ is located at one of the corners of the touch panel (e.g. an upper-left corner, an upper-right corner, a lower-left corner or a lower-right corner), the sixth conductive elements 226 _ 1 - 226 _ 4 would straightly extend outward in directions in parallel with the second direction d 2 . 
please note that, in this embodiment, the first section l 1 and its corresponding second section l 2 of each of fifth conductive elements 224 _ 1 - 224 _ 4 are perpendicular to each other, and each of the fifth conductive elements 224 _ 1 - 224 _ 4 has only one bent part. however, it is for illustrative purpose only, and is not meant for a limitation of the present invention. for example, in another embodiment, at least one conductive element in the fifth conductive elements 224 _ 1 - 224 _ 4 may include a first section, a second section and a third section, where the first section is not in parallel with the second section, and the second section is not in parallel with the third section. at this moment, the conductive element that includes the first section, the second section and third section has two bent parts. that is, each of the conductive elements in the fifth conductive elements 224 _ 1 - 224 _ 4 has at least one bent part, and different conductive elements may have different numbers of bent parts. please note that, in this embodiment, the fourth conductive element 222 , the fifth conductive elements 224 _ 1 - 224 _ 4 and the sixth conductive element 226 _ 1 - 226 _ 4 may be realized by ito. in addition, this embodiment uses 4 fifth conductive elements and 4 sixth conductive elements for illustrative purpose only, and it is not meant for a limitation of the present invention. those skilled in the art should readily increase/decrease the number of fifth conductive elements and the number of sixth conductive elements according to actual design requirement. please refer to fig. 3 , which is a schematic diagram illustrating a sensing pattern of electrode units 200 on a touch panel tp′ according to an embodiment of the present invention. in this embodiment, the touch panel tp′ includes a plurality of touch-sensing elements tu shown in fig. 
1 that are orderly arranged in the middle of the touch panel tp′, and further includes a plurality of touch-sensing elements tu′ shown in fig. 2 that are accordingly arranged at the fringes of the touch panel tp′. in other words, the electrode units in the touch-sensing elements tu′ are fringe electrodes. as shown in fig. 3 , the touch-sensing elements tu are staggered in order to thereby form a rectangular pattern, first electrodes 110 on the same row are series-connected as a sensing trace, and second electrodes 120 on the same column are series-connected as a sensing trace. in addition, two ends of each sensing trace formed by series-connected first electrodes 110 are coupled to the corresponding first electrodes 210 on the touch-sensing elements tu′, and two ends of each sensing trace formed by series-connected second electrodes 120 are coupled to the corresponding second electrodes 220 on the touch-sensing elements tu′. in this way, the touch panel tp′ would have a plurality of transverse sensing traces t 1 ′-tn′ and a plurality of longitudinal sensing traces s 1 ′-sm′. those skilled in the art should readily understand the operations of the touch panel tp′ in fig. 3 after reading the above-mentioned paragraphs directed to the electrode unit 200 . hence, detailed descriptions and modifications may be referred to the above, and are therefore omitted here for brevity. to sum up, according to the present invention, the electrode unit 200 may increase the contact areas arranged for inducting the inductive capacitance c by increasing the perimeter of the sensing pattern formed by the first electrode 210 and the second electrode 220 , so as to increase the value of the inductive capacitance c sensed by the touch-sensing element located on the fringes of the touch panel, and thus may decrease the likelihood of faulty calculation of contact on the fringes of the touch panel. 
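To illustrate why notches lengthen the sensing perimeter without enlarging the electrode footprint, here is a small geometric sketch (the numbers are illustrative, not dimensions from the patent): cutting a trapezoidal, funnel-shaped notch with mouth w1, floor w2 and depth d into an edge replaces an edge segment of length w1 with two slanted walls plus the floor, so the perimeter grows while the bounding rectangle stays the same.

```python
import math

def rect_perimeter(width, height):
    # perimeter of a plain rectangular electrode
    return 2.0 * (width + height)

def funnel_notch_gain(mouth, floor, depth):
    """Extra perimeter contributed by one funnel-shaped (trapezoidal) notch:
    the mouth segment is removed, two slanted walls and the floor are added."""
    slant = math.hypot(depth, (mouth - floor) / 2.0)
    return 2.0 * slant + floor - mouth

plain = rect_perimeter(10.0, 4.0)                       # 28.0
notched = plain + 2 * funnel_notch_gain(3.0, 1.0, 2.0)  # one notch per side
```

The footprint (10 x 4) is unchanged, yet the sensing perimeter grows by roughly 18% in this toy case, which is the mechanism the patent relies on to raise the fringe capacitance.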
those skilled in the art will readily observe that numerous modifications and alterations of the device and method may be made while retaining the teachings of the invention. accordingly, the above disclosure should be construed as limited only by the metes and bounds of the appended claims.
003-823-364-984-60X
US
[ "US" ]
G06F12/08,G06F12/12
1983-06-20T00:00:00
1983
[ "G06" ]
set association memory system
a memory system for use in a computer which in the preferred embodiment provides two megabytes of capacity per board (up to four boards) is disclosed. an alu generates an address signal which selects a number of set locations in the main memory. simultaneously, a portion of the address field is fed to a set association logic circuit for parallel processing. the set association circuit contains tag storage memories and comparators which store tag values. these values are compared with address fields, and if a match occurs, one of the comparators selects a 128-bit word from the main memory. a hash function is also used to provide for dispersal of storage locations to reduce the number of collisions of frequently used addresses. because of hardware implementation of hashing and least recently used (lru) algorithm, a constant predetermined cycle time is realized since all accessing functions occur substantially in parallel. several sets of data are accessed simultaneously while a set association process is performed which selects one of the accessed sets, wherein access time is reduced because of the parallel accessing.
1. in a digital computer system which includes processing means, an address bus, and a data bus, a memory system comprising: a plurality of random-access memories (rams) for storing data, said rams coupled to said data bus and said address bus; set association means coupled to said data bus and said address bus, said set association means coupled to receive an address signal from said address bus and providing a first field of digital signals for a set association determination, said first field also being coupled to said rams to access sets of digital signals stored in said rams; said processing means coupled to said data bus and said address bus for providing said address signals to said set association means; said set association means further providing a second field of digital signals for said set association determination; said set association means for providing a select signal to said rams for selecting one of said sets of stored signals from those accessed by said first field such that accessing of sets of locations in said rams by said first field and said set association determination of one of said sets by said first and second fields occurs substantially simultaneously; whereby simultaneous accessing of said rams with said set association provides more rapid cycle times for said memory system. 2. the memory system defined in claim 1 wherein said first field is derived from a subset of said address signals. 3. the memory system defined by claim 2 wherein said first field is derived from the implementation of a hash function, said hash function for hashing selective bits of said address signal for providing said first field. 4. the memory system defined by claim 3 wherein said hash function implementation further includes a plurality of exclusive oring means for exclusively oring selective bits of said address signal respectively. 5. 
the memory system defined by claims 1 or 4 including an offset field derived from said address signal for selecting a stored word from said selected set and a field isolation unit (fiu) for selectively isolating a field of bits from said selected word from said rams, said fiu being coupled to said address bus and said data bus. 6. in a digital computer system which includes processing means, an address bus and a data bus, a memory system comprising: a plurality of random-access memories (rams) for storing data, said rams coupled to said data bus and said address bus; set association means for determining set association, said set association means providing a first field and second field of digital signals, said fields being derived from address signals provided by said processing means on said address bus, said set association means being coupled to said data bus, address bus and rams; said processing means for providing address signals to said rams and said set association means such that said rams are accessed substantially at the same time by said first field as said set association means determines said set association, said first field selecting sets of stored signals in said ram, said set association means comparing said first and second fields and providing a select signal to said rams for selecting one of said sets of stored signals from those accessed in said rams by said first field; whereby substantially simultaneous accessing of said rams and said set association means determination provides more rapid cycle times for said memory system. 7. the memory system defined by claim 6 wherein said set association means further includes circuit means for determining a least recently used (lru) value used by said set association means for generating said select signal. 8. 
the memory system defined by claim 7 wherein said set association means includes a plurality of tag store memories and comparators for generating said select signal, said first field addressing said tag store memories and providing said second field from said tag store memories, and said second field being compared in said comparators with said first field. 9. the memory system defined by claim 8 wherein said lru values are stored in said tag store memories. 10. the memory system defined by claim 9 further including coupling means for permitting one of said lru values from one of said tag store memories to be selected for coupling to other of said tag store memories. 11. the memory system defined by claim 10 wherein said tag store memories are static memories and said rams are dynamic memories. 12. the memory system defined by claims 6 or 11 wherein said first field is derived from a subset of said address signals using a hash function, said hash function for hashing selective bits of said address signals to provide said first field. 13. 
in a digital computer system which includes processing means, an address bus and a data bus, a memory system comprising: a plurality of random-access memories (rams) for storing digital signals, said rams coupled to said data bus for receiving said digital signals and coupled to said address bus for receiving an address signal; a plurality of tag storage memories for storing information relating to locations of said stored digital signals in said rams, said tag storage memories coupled to said data bus for receiving said information and coupled to said address bus for receiving first address signals, wherein each tag storage memory provides second address signals; a plurality of comparator means, each of which is associated with one of said tag storage memories, for comparing said first address signals and said second address signals and for providing an output signal based on said comparison, said first address signals being received from said address bus, said second address signals being received from its respective tag storage memory, said output signals from said comparator means being coupled to said rams for selecting one of a set of digital signals stored in said rams; first address signals from said address bus being coupled to said rams for selecting sets of stored digital signals; whereby said tag storage memories and comparator means provide set association for identifying a set of said digital signals from said sets of stored digital signals stored within said rams. 14. the memory system defined by claim 13 wherein said first address signals access said rams at the same time said first address signals access said tag storage memories. 15. the memory system defined by claim 13 wherein said first address signals are a subset of address signals generated by said processing means, said processing means coupled to said data bus and said address bus. 16. 
the memory system defined by claim 13 including hash function means coupled to receive said first address signals from said address bus and for providing a hashed output to said tag storage memories to provide more randomized distribution of said stored digital signals in said rams for the more frequently used addresses from said processing means. 17. the memory system defined by claim 16 wherein said hash function means exclusively ors selective bits of said first address signals. 18. in a digital computer system which includes processing means, an address bus, and a data bus, a memory comprising: a plurality of random-access memories (rams) for storing digital signals used by said processing means, said rams coupled to said address bus and said data bus for receiving said digital signals; a plurality of tag storage memories for storing information relating to locations of said stored digital signals in said rams, said tag storage memories coupled to said address bus and said data bus for receiving said information; a plurality of comparator means, each of which is associated with one of said tag storage memories for comparing a first field and a second field of digital signals and for providing an output signal based on said comparison, said first field of said signals being received from said address bus, said second field of said signals being received from its respective tag storage memory, said output signals from said comparator means being coupled to said rams for selecting a set of digital signals stored in said rams; circuit means for determining least recently used (lru) values such that from said lru values for each address applied to said tag storage memories it can be determined which set of locations within said rams was accessed the least, said circuit means making said determination of said lru values simultaneously while said rams are being accessed; whereby said tag storage memories enable the identification of least used memory locations within 
certain address ranges. 19. the memory system defined by claim 18 wherein said circuit means, after an output signal from one of said comparator means selects a set of stored digital signals within said rams, broadcasts the one of said lru value from said respective tag store memory, and wherein said lru values in the other said tag store memories remain unchanged if said lru values are greater than said broadcasted value, however, if said lru values are less than or equal to said broadcasted value said stored lru values are decremented, and wherein said circuit means causes said one lru value broadcasted to be set to a predetermined value. 20. in digital computer system which includes processing means, an address bus, and a data bus, a memory comprising: a plurality of random-access memories (rams) for storing digital signals used by said processing means, said rams coupled to said address bus and said data bus for receiving said digital signals; a plurality of tag storage memories coupled to said data bus for storing information relating to locations of said stored digital signals in said rams, and each of said tag storage memories providing a second field output; a first field derived from an address signal on said address bus; a hash function means coupled to said first field and said tag memories, said hash function means for exclusive oring selective bits of said first field and providing a hashed output to said tag memories, said hashed output for addressing said tag memories; a plurality of comparator means, each of which is associated with each one of said tag memories, said comparator means for comparing said first field and said second field and generating a hit set as an output; a circuit means coupled to said plurality of comparator outputs for selecting least recently used (lru) value from said tag memories, said circuit for comparing lru values stored in said tag memories to a lru value of said hit set, wherein if said stored lru value is greater 
than said hit set lru value, said stored lru value remains unchanged and if less than or equal to that of said hit set lru value, said stored lru value is decremented by one and restored; said circuit means functioning substantially simultaneously with said address signal, wherein said address signal is for accessing a set of locations in said rams and said comparator means is for accessing a particular set from a set of locations in said rams, whereby simultaneous accessing of said rams provides for more rapid cycle times for said memory system.
background of the invention 1. field of the invention the invention relates to a memory system for use in digital computers. 2. prior art countless memory systems are known for permitting a processing means (e.g., central processing unit, arithmetic logic unit, etc.) to select locations in a random-access memory (ram). for purposes of discussion, and recognizing the pitfalls in characterizing memory systems, the prior art is briefly discussed in two general categories. one category (non-virtual memories) receives a logical address and employs some means such as an address extender technique, memory management unit (mmu), bank switching, etc., to provide a larger, physical address for addressing a ram. in the second category, a larger logical address from the processing means is translated to a generally smaller, physical address for accessing the ram. as will be seen, the present invention is more like the second category of memory systems than the first. the first category of memory systems is typically used by microprocessors, and the like, and often uses an mmu. this unit receives a portion of the logical address and provides a portion of the physical address. for the mapping provided with this non-virtual storage, a physical address exists for each logical address. in the second category, often two memories for storing data are employed. one, commonly referred to as a data cache, is a smaller, higher speed ram (e.g., employing static devices and having a system cycle time of approximately 200 nsec.). data frequently addressed by the processing means is stored in the data cache memory. a larger ram (e.g., dynamic devices with system cycle times of 1-2 micro sec.) provides the bulk of the ram storage. in a typical process, usually more than 90% of the time, data sought by the processing means is in the data cache and if not there, much greater than 99% of the time the data is in the dynamic ram. 
a fast memory (address translation unit (atu) or translation look-aside buffer) is used to examine addresses from the processing means and to provide addresses for the rams. as many as three serial accesses can be required with this arrangement. the effective cycle time for this memory system is in the 300-400 nsec. range for the described examples. the effective cycle time is reduced from what would appear to be faster access in the data cache, since resolving a miss in the data cache and actually accessing the main ram requires approximately 1-2 microsec. because of the serial accessing. with the above-described memory, for each context switch, the atu cache must be reprogrammed, thus further reducing the speed of the memory system where context switching is required. as will be seen, the present invention employs only one type of memory for data storage (e.g., dynamic rams) without the equivalent of a data cache. an associative memory operation used to identify locations in ram occurs in parallel with the accessing of portions of the ram to accelerate the overall cycle time. context switching can occur much more quickly than with prior art systems. the cycle time in the invented system is slightly slower than in the above-described virtual memory systems. however, because of numerous operational advantages, the effective cycle time in many cases is faster without the complications inherent in prior art memory systems. for instance, the invented system has a guaranteed, constant cycle time (assuming the data is in memory). this is particularly important for "pinned" or "locked" pages. summary of the invention a memory system for use in a digital computer is described. the memory itself comprises a plurality of rams which store digital signals for the computer's processing means or like means. a plurality of tag storage memories are used for storing information relating to the locations of information stored in the rams.
these tag storage memories are programmed from the data bus. a plurality of comparators, each of which is associated with one of the tag storage memories, compares a first field and a second field of digital signals, one received from the address bus and the other from the tag storage memories. the comparators' output signals indicate a set association used in selection of sets in memory. a hardware implemented "hash function" greatly reduces the likelihood of collision. this logic receives the least significant bits of the segment, space and page offset of a universal or uniform address and provides a line address for the tag storage memories and rams. the invented memory system includes other unique features which shall be described in the body of the application, such as a hardware implemented page replacement algorithm. in general, the invented memory system provides performance equivalent to (or better than) that of the more commonly used virtual memory systems without many of the complications associated with these systems. brief description of the drawings fig. 1 is a block diagram illustrating the coupling of the invented memory system in a computer. fig. 2 is a block diagram used to describe the address bit distribution used in the presently preferred embodiment. fig. 3 is a block diagram of the portion of the invented memory system used for set association. fig. 4a is a diagram used in describing the hash function used in the presently preferred embodiment. fig. 4b is an electrical schematic of the hash function means used in the presently preferred embodiment. fig. 5 is a block diagram of the hardware implemented page replacement algorithm. detailed description of the invention a random-access memory system for use with a digital computer is described. in the following description numerous specific details are set forth, such as a specific number of bits, etc., in order to provide a thorough understanding of the present invention.
it will be obvious to one skilled in the art, however, that the present invention may be practiced without these specific details. in other instances, well-known circuits and processes have not been described in detail in order not to unnecessarily obscure the present invention. for a discussion of the particular computer system in which the memory system of the present invention is employed, see co-pending application ser. no. 602,154, filed apr. 19, 1984, entitled "computer bus apparatus with distributed arbitration" and assigned to the assignee of the present invention. block diagram of fig. 1 referring first to fig. 1, the memory system of the present invention is illustrated as memory 11 and field isolation unit 12. the memory 11 includes the rams (e.g., 64k "chips") which provide system storage and the circuits for accessing these rams. in the presently preferred embodiment, the memory system is used with an arithmetic logic unit 10 which provides a 67 bit address. 60 bits of this address are coupled to memory 11 and 7 to the field isolation unit (fiu) 12. the data bus 20 associated with the alu 10 is coupled to the memory 11 and fiu 12. the remainder of the computer system associated with the alu 10 such as input/output ports, etc., is not illustrated in fig. 1. the memory 11 in its presently preferred embodiment may employ one to four boards, each of which stores two megabytes. the 60-bit address coupled to the memory 11 selects a 128-bit word which is coupled to the field isolation unit 12 over the bus 20. the 7 bits on bus 15 select (isolate) 1 to 64 bits within the 128-bit word. thus, the alu 10 may address anything from a single bit to and including a 64-bit word. as presently implemented, the memory employs 64k dynamic nmos "chips" for main storage, although other memory devices may be employed. logical address bit distribution referring to fig. 2, the 67 bits of the universal or uniform address from the alu are shown at the top of the figure.
as discussed in conjunction with fig. 1, 7 bits of this address are coupled to the fiu 12. the remaining 60 bits are coupled to the memory boards (one to four) within the memory system. six bits of the 60 bits on each board are used for a page offset. fifty-four bits are used for set association on each board (block 25). these bits identify one set on the boards by association; there are four sets on each board and 512 lines within each set. eighteen bits of the 54 bits select from the 512 possible lines a 64×128 bit field (block 26). as illustrated by block 27, 6 bits (page offset) select a single 128-bit word from the 64×128 bits. then from the 128 bits, a one to 64-bit word is selected by the 7 bits coupled to the fiu as illustrated by block 28. in practice, 15 bits of the address begin accessing the ram memory to select four 128-bit word sets on each board present in the system. concurrently with this accessing, the set association occurs as illustrated by blocks 25, 26 and 27 of fig. 2 to select a single 128-bit word from all of the 128-bit words selected by all the boards. thus, the accessing of four 128-bit words on each board occurs in parallel with the set association required to select a single 128-bit word. parity bits used throughout the memory system are not discussed or shown to prevent unnecessary complications in the description. these bits may be implemented using well-known circuits and techniques. set association logic referring to fig. 3, the set association process on each board employs four tag store memories such as memories 40-43, and four comparators, one associated with each of the tag store memories shown as comparators 45-48 in fig. 3. these tag store memories and comparators are duplicated on each of the memory boards. each tag store memory is addressed by a 9-bit line number field and a four-bit set number field not relevant to the present discussion, and provides a 54-bit output to its respective comparator.
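the field carving described above can be sketched in software. the field widths (a 54-bit set-association field, a 6-bit page offset, and 7 bits for field isolation) come from the text; the ordering of the fields within the 67-bit word, and the function name, are assumptions for illustration only.

```python
# Sketch of the address-field carving of blocks 25-28 (fig. 2).
# Field widths follow the text; bit ordering is an assumption.

def split_address(addr):
    """Split a 67-bit universal address into the field widths of fig. 2."""
    assert 0 <= addr < 1 << 67
    fiu_bits = addr & 0x7F             # 7 bits: isolate a 1-64 bit field
    addr >>= 7
    page_offset = addr & 0x3F          # 6 bits: one 128-bit word of 64
    addr >>= 6
    set_assoc = addr & ((1 << 54) - 1) # 54 bits: set association / tag compare
    return set_assoc, page_offset, fiu_bits

set_assoc, page_offset, fiu_bits = split_address((1 << 67) - 1)
print(set_assoc.bit_length(), page_offset, fiu_bits)  # 54 63 127
```

the three widths sum to 67, matching the full universal address (60 bits to the boards plus 7 to the fiu).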
(a field of four additional bits occurs for the page replacement algorithm discussed later in addition to other outputs such as a parity bit.) this 54-bit tag value is compared in the comparators with a 54-bit field of the address from the alu. if the 54 bits from the tag store memory match the 54 bits of the logical address, then an output signal from the comparator selects a 128-bit word. tag values are written into the tag memories from data bus 20 and may be read on this bus. as again can be seen in fig. 3, 7 bits of the 67 bits of the physical address are used for field isolation, 6 bits for the page offset, and 54 bits are coupled to the comparators. eighteen bits are coupled to a hash function means 35 which is discussed in conjunction with figs. 4a and 4b. the output of this means is the 9-bit line address field used to address both the tag store memories and the rams. in the preferred embodiment there are two tag storage memories and two comparators per board. each pair is used twice per memory cycle. for purposes of explanation, four (4) separate memories and comparators are illustrated. hash function logic in fig. 4a, the universal or uniform address from the alu is shown as comprising a 32 bit name, a 3 bit space field, and a 32 bit offset. the name is further implemented as a 24 bit segment and an 8 bit virtual processor identification (vpid). the page offset includes a 19 bit page field, a 6 bit word field, and finally, 7 bits which are used to isolate a 1-64 bit field from a 128 bit word. in a typical application, the more significant bits of the segment and page and the most significant bit of the space field will vary very little. in contrast, the least significant bits of the segment, page and space field tend to vary a great deal.
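a minimal software sketch of this set-association compare follows. each set's tag store is modeled as a dict from line number to stored 54-bit tag value; a hit in a set occurs when its stored tag equals the 54-bit field of the address. the names and the dict model are illustrative, not the hardware itself.

```python
# Sketch of the set-association compare: one tag store (here a dict) and
# one comparator per set; a matching tag selects that set's 128-bit word.

NUM_SETS = 4  # four sets per board in the described embodiment

def select_set(tag_stores, line, tag_field):
    """Return the index of the hit set, or None on a miss (collision)."""
    for s in range(NUM_SETS):
        if tag_stores[s].get(line) == tag_field:
            return s
    return None

tag_stores = [{} for _ in range(NUM_SETS)]
tag_stores[2][0x1A3] = 0x123456   # a tag entry programmed via the data bus
print(select_set(tag_stores, 0x1A3, 0x123456))  # 2
print(select_set(tag_stores, 0x1A3, 0xBEEF))    # None
```

in the hardware, all four compares run in parallel while the rams are already being accessed; the loop here is only a functional model.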
if the memory system is implemented without a hash function, the repeated variations of the least significant bits, particularly of the segment and page, will cause repeated collisions, that is, address lines within the ram will not be available for many addresses. to reduce the probability of such collision, the hash function logic of fig. 4b provides exclusive oring of these highly varying, least significant bits. the hash function logic of fig. 3 comprises 9 exclusive or gates, 70a through 70i, as shown in fig. 4b. the exclusive or gate 70a receives the segment address bit 23 and the space address bit 2, and provides the line address bit 0. similarly, the exclusive or gate 70b receives the segment address bit 22 and space bit 1, and provides the line address bit 1. this hashing is represented by line 71 of fig. 4a. the gates 70c through 70i provide the segment/page hashing, and for instance, the gate 70c receives the segment address 21 and page address 12 and provides the line address 2. the remaining gates 70d through 70i receive the remaining least significant bits of the page address and segment address as indicated in fig. 4b and provide the remaining line addresses 3-8. this hashing is represented by line 72 of fig. 4a. this exclusive oring causes a wide dispersal, or mapping, of the most frequently used addresses, thereby reducing the probability of collision. it should be noted that the hash function is implemented in hardware, and substantially no cycle time is lost in hashing the addresses. the delay associated with the exclusive or gates 70a through 70i (e.g., 5 nanoseconds) is almost de minimis when compared to the cycle time of the mos rams. in some prior art memories, hash functions are implemented with software routines, and thus are effectively performed in series with memory processing. this increases cycle time, thereby reducing the usefulness of the hash function.
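the gate wiring can be sketched as below. gates 70a-70c follow the pairings given in the text; the pairings assumed here for gates 70d-70i simply continue the segment/page pattern downward, since the text defers the exact bits to fig. 4b.

```python
# Sketch of the fig. 4b hash: nine XOR gates form a 9-bit line address.
# Gates 70a-70c are wired as the text states; the pairings for gates
# 70d-70i are an assumed continuation of the segment/page pattern.

def bit(value, n):
    return (value >> n) & 1

def hash_line(segment, space, page):
    """XOR the highly varying low-order bits into a 9-bit line address."""
    pairs = [
        (bit(segment, 23), bit(space, 2)),  # gate 70a -> line bit 0
        (bit(segment, 22), bit(space, 1)),  # gate 70b -> line bit 1
        (bit(segment, 21), bit(page, 12)),  # gate 70c -> line bit 2
    ]
    # assumed continuation for gates 70d-70i -> line bits 3-8
    pairs += [(bit(segment, 20 - i), bit(page, 11 - i)) for i in range(6)]
    line = 0
    for n, (a, b) in enumerate(pairs):
        line |= (a ^ b) << n
    return line

print(hash_line(segment=1 << 23, space=1 << 2, page=0))  # 0: gate 70a sees 1^1
print(hash_line(segment=1 << 23, space=0, page=0))       # 1: gate 70a sees 1^0
```

addresses whose low segment/page bits differ hash to different lines, which is the dispersal effect the text describes; the xors cost only gate delay, not a memory cycle.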
page replacement algorithm the memory system includes a hardware implemented page replacement algorithm. it is used to identify least used lines when a collision occurs, allowing a page in ram to be displaced. as previously mentioned, a 4-bit field is associated with each line in each of the tag store memories. this field is used for implementing the page replacement algorithm. the value represented by these four bits is referred to as the least recently used (lru) value. as will be seen, this value is unique in all sets for any given line. when the memory system is addressed, two possible conditions can occur at the output of the comparators, such as comparators 45 through 48 of fig. 3. either no match occurs, indicating that there is no location in memory corresponding to that address (no hit), or a match occurs, indicating that a location exists in memory corresponding to that address (hit). the signal (and its complement) at the output of each comparator for these conditions is coupled to the circuit of fig. 5 on lines 74 and 75. the circuit of fig. 5 is used to implement the lru algorithm. for purposes of discussion, it will be assumed that the circuitry is repeated for each set, that is, up to 16 such circuits are used if four boards are employed. in actual practice, as was the case for the tag store memories and comparators, half this number of circuits is employed; each circuit is used twice per memory cycle. referring now to fig. 5, the lru value from the tag store memory is coupled to a register 77. a clocking signal applied to this register causes it to accept the 4 bit lru value from the tag storage memory. the output of the register 77 is coupled to the driver 78, comparator 79, adder 81, and multiplexer 82.
the "greater than" comparator 79 compares the two input signals to this comparator and provides a "one" output if the 4 bit digital number on bus 87 is greater than the number on bus 80, and conversely, a "zero" output if the number on bus 87 is equal to or less than the number on bus 80. a one at the output of comparator 79 causes the multiplexer 82 to select bus 87; otherwise, the multiplexer 82 selects the output of the "minus one" adder 81. the output of the multiplexer 82 is coupled to bus 86 through a driver 83 for a no-hit condition. first, assume that a particular set contains the data for a particular address and thus, a hit occurs. the set i hit/signal on line 74 is low, and since it is coupled through an inverting input terminal of the driver 78, this driver is selected. the 4 bit lru value for the selected set passes through the driver 78 and is broadcast to all the other sets on the lru bus 80. at the same time, driver 85 is selected and the maximum (most recently used) value is coupled through the driver 85 onto the bus 86. for four boards, this value is 15; for three boards, 11; for two boards, 7; and for one board, 3. the value is coupled to the tag store memory associated with the hit condition. assume now that the circuit of fig. 5 is part of one of those sets which did not hit. obviously, for this condition, the lru value coupled to register 77 is not put on the bus 80 since the driver 78 is not activated. that is, only one lru value, that corresponding to the hit set, is put on the bus 80. the lru value is coupled to the comparator 79, multiplexer 82, and adder 81. if the value on bus 87 is greater than the lru value for the hit set, this 4 bit value on bus 87 passes through the multiplexer 82, driver 83, and is re-stored into the tag store memory for that set. if, on the other hand, the value on bus 87 is less than, or equal to, the value on the bus 80, multiplexer 82 selects the output of the adder 81.
the adder 81 subtracts 1 from the value on bus 87 and this new lru value is coupled through the multiplexer 82 and driver 83 and is placed back into the tag store memory. (in practice, adder 81 is a rom used to also update the parity bit.) thus, as is apparent, the circuit of fig. 5, (i) for the hit set places in memory the maximum lru value; (ii) if the stored lru value is greater than the lru value for the hit set, the stored value is returned to memory, and finally, (iii) if the stored lru value is less than or equal to that of the hit set, it is decremented by 1 and re-stored. if no hit occurs in any of the sets (collision condition), the set with an lru value of zero is the least recently used set for the address line. this line is used (data replaced in rams) and the lru value for this line is set to the maximum value and all the other lru values are decremented. initially, the lru values for each line are set with a different value for each set based on a predetermined highest implemented set number. from analyzing the lru values from the circuit of fig. 5, it will become apparent that the lru values are always unique. thus, there will only be one set with a zero value lru number. it is important to note that the lru algorithm is implemented in hardware and the lru values are determined substantially in parallel while the memory is accessed. this eliminates the time required in some prior art memories to calculate new lru values. referring again to fig. 3, when the memory is to be accessed, ignoring for the moment the 7 bits to the field isolation unit, the 6 bits of the page offset and the 9 bits for the line address are immediately coupled to the memory since there is substantially no delay involved in the hash function logic means 35. these bits immediately begin accessing four 128 bit words on each board. while this is occurring, the tag store memories are addressed, and the comparison is completed within the comparators 45 through 48.
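a minimal sketch of this per-line lru aging follows. it is a functional model, not the fig. 5 circuit, and it assumes the comparator polarity that preserves the uniqueness invariant the text asserts: on a hit, the hit set takes the maximum value, sets whose stored value exceeds the hit set's old (broadcast) value slide down by one to fill the vacated slot, and the remaining sets keep their values, so the 16 values stay a permutation of 0-15.

```python
# Functional sketch of per-line LRU aging for a four-board (16-set) system.
# Assumption: sets with values above the broadcast value are the ones
# decremented, which keeps the per-line LRU values unique (0..MAX_LRU).

MAX_LRU = 15  # four boards -> 16 sets; 11, 7, 3 for fewer boards

def lru_hit(lru, hit_set):
    """Update one line's per-set LRU values after a hit in `hit_set`."""
    broadcast = lru[hit_set]       # old value of the hit set, on the lru bus
    for s in range(len(lru)):
        if s == hit_set:
            lru[s] = MAX_LRU       # hit set becomes most recently used
        elif lru[s] > broadcast:
            lru[s] -= 1            # slide down to fill the vacated slot
    return lru

def lru_victim(lru):
    """On a collision, the zero-valued set is the one replaced."""
    return lru.index(0)

lru = list(range(16))              # one unique value per set, as the text notes
lru_hit(lru, hit_set=5)
print(lru[5], sorted(lru) == list(range(16)))  # 15 True
print(lru_victim(lru))  # 0
```

the hardware performs all sixteen compare/decrement steps in parallel while the rams are accessed; the loop here only models the result.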
the tag store memories are static memories and their cycle time is much shorter than that of the rams. before the four 128-bit words on each board are selected, the results of the comparison are completed, and assuming a hit condition occurs, one 128-bit word is coupled to the fiu. the 7 bits to the field isolation unit simply allow some or all of these bits to appear on the data bus, and the "setting up" for this isolation occurs during the time that the rams are being accessed. consequently, the cycle for accessing the data or the like stored in the rams begins substantially when the address is available on the address bus with the set association performed through the tag storage memories and comparators occurring in parallel. thus, as mentioned, the cycle time of the memory is a known, constant time without the unusually long access times that occur in prior art systems, for instance, when the data is not located in the data cache. an important feature of the presently described memory is the fact that both the least recently used algorithm and hash functions are implemented in hardware in a manner that does not significantly increase access time or require interaction with the alu or cpu.
005-832-435-738-164
US
[ "CA", "EP", "US", "JP" ]
A63B71/06,A61B5/0205,A63B69/00,G16H20/30,G16H40/63,A61B5/113,A61B5/08,A61B5/00,A61B5/01,A61B5/024,A61B5/0402,A61B5/0476,A61B5/0488,A61B5/11,A61B5/145,A61B5/1455,A63B24/00,A61B5/04
2009-09-01T00:00:00
2009
[ "A63", "A61", "G16" ]
method and system for monitoring physiological and athletic performance characteristics of a subject
the present invention is directed to systems and methods for monitoring characteristics of a subject. a system according to an exemplary embodiment of the invention includes a sensor subsystem including at least one respiratory sensor disposed proximate to the subject and configured to detect a respiratory characteristic of the subject, wherein the sensor subsystem is configured to generate and transmit at least one respiratory signal representing the respiratory characteristic, and at least one physiological sensor disposed proximate to the subject and configured to detect a physiological characteristic of the subject, wherein the sensor subsystem is configured to generate and transmit at least one physiological signal representing the physiological characteristic, and a processor subsystem in communication with the sensor subsystem, the processor subsystem being configured to receive at least one of the at least one respiratory signal and the at least one physiological signal.
a fitness monitoring system for monitoring a subject engaged in a physical activity, the system comprising: a sensor subsystem including a first sensor and a second sensor, wherein the first and second sensors are responsive to changes in distance therebetween, wherein the sensor subsystem is configured to generate and transmit a distance signal representative of the distance between the first and second sensors; and a physiological sensor configured to generate and transmit a physiological signal representative of a physiologic parameter of the subject; and a processor subsystem in communication with the sensor subsystem and the physiological sensor, the processor subsystem being configured to receive the distance signal and the physiological signal, wherein the processor subsystem is configured to process the physiological signal to obtain a signal that is representative of a physiological parameter of the subject. the fitness monitoring system of claim 1, wherein the physiological sensor is configured to monitor at least one of electrical activity of the brain, electrical activity of the heart, pulse rate, blood oxygen saturation level, skin temperature, emg, ecg, eeg, and core temperature. the fitness monitoring system of claim 1, further comprising a monitoring subsystem configured to receive the distance signal, wherein the processor subsystem is configured to process the distance signal to obtain a signal that is representative of a respiratory parameter, and wherein the monitoring subsystem is configured to display a representation of the respiratory parameter. 
the fitness monitoring system of claim 3, wherein the processor subsystem comprises a plurality of stored respiratory benchmarks, and wherein the processor subsystem is further configured to compare the respiratory parameter to the plurality of stored respiratory benchmarks and to generate and transmit a status signal in response to a determination that the respiratory parameter corresponds to one of the stored respiratory benchmarks. the fitness monitoring system of claim 1, wherein the processor subsystem is further configured to determine a respiratory activity of the subject based on the distance signal and to generate and transmit a respiratory activity signal representative of the respiratory activity. the fitness monitoring system of claim 1, wherein the processor subsystem comprises a plurality of stored physiological benchmarks, and wherein the processor subsystem is further configured to compare the physiological parameter to the stored physiological benchmarks and to generate and transmit a status signal in response to a determination that the physiological parameter corresponds to one of the stored physiological benchmarks. a fitness monitoring system for monitoring a subject engaged in a physical activity, the system comprising: a sensor subsystem comprising: a first sensor and a second sensor, wherein the first and second sensors are responsive to changes in distance therebetween, wherein the sensor subsystem is configured to generate and transmit a distance signal representative of the distance between the first and second sensors; and a third sensor, wherein the third sensor is a spatial sensor configured to detect movement of the subject, wherein the sensor subsystem is configured to generate and transmit a spatial signal representative of a movement of a body part of the subject; and a processor subsystem in communication with the sensor subsystem, the processor subsystem being configured to receive the distance signal and the spatial signal.
the fitness monitoring system of claim 7, wherein the sensor subsystem comprises a plurality of sensors responsive to changes in distance therebetween. the fitness monitoring system of claim 7, wherein the spatial sensor includes at least one of an optical encoder, a proximity switch, a hall effect switch, a laser interferometry system, an inertial sensor, and a global positioning system. the fitness monitoring system of claim 7, further comprising a monitoring subsystem, wherein the processor subsystem is configured to process the distance signal to obtain a signal that is representative of a respiratory parameter, and wherein the monitoring subsystem is configured to display a representation of the respiratory parameter. the fitness monitoring system of claim 10, wherein the processor subsystem comprises a plurality of stored respiratory benchmarks, and wherein the processor subsystem is further configured to compare the respiratory parameter to the plurality of stored respiratory benchmarks and to generate and transmit a status signal in response to a determination that the distance signal corresponds to one of the stored respiratory benchmarks. the fitness monitoring system of claim 7, wherein the processor subsystem is further configured to determine a respiratory activity of the subject based on the distance signal, and to generate and transmit a respiratory activity signal representative of the respiratory activity. the fitness monitoring system of claim 7, wherein the processor subsystem comprises a plurality of stored spatial benchmarks, and wherein the processor subsystem is further configured to compare the spatial signal to the plurality of stored spatial benchmarks, and to generate and transmit a status signal in response to a determination that the spatial signal corresponds to one of the stored spatial benchmarks. 
a method for monitoring a subject engaged in a physical activity, the method comprising: generating a distance signal representative of the distance between a first sensor and a second sensor and transmitting the distance signal to a processor subsystem, wherein the distance signal is generated by a sensor subsystem, wherein the first and second sensors are responsive to changes in distance therebetween; generating a spatial signal representative of an orientation of a body part of the subject and transmitting the spatial signal to the processor subsystem; and receiving the distance signal and the spatial signal at the processor subsystem. the method of claim 14, further comprising: processing the distance signal to obtain a signal which is representative of a respiratory parameter of the subject, and comparing the respiratory parameter to a plurality of stored respiratory benchmarks; and generating and transmitting a status signal in response to a determination that the respiratory parameter corresponds to one of the stored respiratory benchmarks.
cross-reference to related applications this non-provisional application claims priority to u.s. provisional application no. 61/275,586, filed september 1, 2009 , which is incorporated herein by reference in its entirety. field of the invention the present invention relates generally to methods and systems for monitoring physiological and athletic performance characteristics of a subject. more particularly, the invention relates to improved methods and systems for determining a plurality of physiological and athletic performance characteristics, and characterizing respiratory activity and associated events, as well as spatial parameters, in real time. the methods and systems of the present invention can be applied in a variety of fields, e.g., health care, medical diagnosis and monitoring, and athletic monitoring and coaching. background of the invention in medical diagnosis and treatment of a subject, it is often necessary to assess one or more physiological characteristics; particularly, respiratory characteristics. a key respiratory characteristic is respiratory air volume (or tidal volume). respiratory air volume and other respiratory characteristics are also useful to assess athletic performance, for example, by aiding in detection of changes in physiological state and/or performance characteristics. monitoring physiological and performance parameters of a subject can be important in planning and evaluating athletic training and activity. a subject may exercise or otherwise engage in athletic activity for a variety of reasons, including, for example, maintaining or achieving a level of fitness, to prepare for or engage in competition, and for enjoyment. the subject may have a training program tailored to his or her fitness level and designed to help him or her progress toward a fitness or exercise goal. 
physiological and performance parameters of a subject can provide useful information about the subject's progression in a training program, or about the athletic performance of the subject. in order to accurately appraise the subject's fitness level or progress toward a goal, it may be useful to determine, monitor, and record various physiological or performance parameters, and related contextual information. various methods and systems utilizing heart rate have been introduced to approximate effort and physiological stress during exercise. convenient, practicable, and comfortable means of measuring pulmonary ventilation in non-laboratory conditions, however, have been scarce. while of good value, heart rate can only give an approximation as to the true physiological state of an athlete or medical patient, as it can be confounded by external factors including, for example, sleep levels, caffeine, depressants, beta blockers, stress levels, hydration status, temperature, etc. furthermore, accurate use of heart rate to gauge physiological performance requires knowledge of the amount of blood flowing to the muscles, which in turn requires knowledge of the instantaneous stroke volume of the heart as well as the rate of pumping. these parameters can be difficult to determine while a subject is engaging in a physical activity. various conventional methods and systems have been employed to measure (or determine) tidal volume. one method includes having the patient or subject breathe into a mouthpiece connected to a flow rate measuring device. flow rate is then integrated to provide air volume change. as is well known in the art, there are several drawbacks and disadvantages associated with employing a mouthpiece. a significant drawback associated with a mouthpiece and nose-clip measuring device is that the noted items cause changes in the monitored subject's respiratory pattern (i.e., rate and volume). 
tidal volume determinations based on a mouthpiece and nose-clip are, thus, often inaccurate. a mouthpiece is difficult to use for monitoring athletic performance as well as for long term monitoring, especially for ill, sleeping, or anesthetized subjects. it is uncomfortable for the subject, tends to restrict breathing, and is generally inconvenient for the physician or technician to use. monitoring respiratory characteristics using a mouthpiece is particularly impractical in the athletic performance monitoring context. during athletic activities, the mouthpiece interferes with the athlete's performance. the processing and collection accessories necessary to monitor the breathing patterns captured by the mouthpiece add further bulk to such devices. these systems also typically require an on-duty technician to set up and operate, further complicating their use. other conventional devices for determining tidal volume include respiration monitors. illustrative are the systems disclosed in u.s. patent no. 3,831,586, issued august 27, 1974 and u.s. patent no. 4,033,332, issued july 5, 1977 , each of which is incorporated by reference herein in its entirety. although the noted systems eliminate many of the disadvantages associated with a mouthpiece, the systems do not, in general, provide an accurate measurement of tidal volume. further, the systems are typically only used to signal an attendant when a subject's breathing activity changes sharply or stops. a further means for determining tidal volume is to measure the change in size (or displacement) of the rib cage and abdomen, as it is well known that lung volume is a function of these two parameters. a number of systems and devices have been employed to measure the change in size (i.e., δ circumference) of the rib cage and abdomen, including mercury in rubber strain gauges, pneumobelts, respiratory inductive plethysmograph (rip) belts, and magnetometers. see, d.l. 
wade, "movements of the thoracic cage and diaphragm in respiration", j. physiol., vol. 124, p. 193 (1954), and mead, et al., "pulmonary ventilation measured from body surface movements", science, vol. 196, pp. 1383-1384 (1967). rip belts are a common means employed to measure changes in the cross-sectional areas of the rib cage and abdomen. rip belts include conductive loops of wire that are coiled and sewn into an elastic belt. as the coil stretches and contracts in response to changes in a subject's chest cavity size, a magnetic field generated by the wire changes. the output voltage of an rip belt is generally linearly related to changes in the expanded length of the belt and, thus, changes in the enclosed cross-sectional area. in practice, measuring changes in the cross-sectional areas of the abdomen can increase the accuracy of rip belt systems. to measure changes in the cross-sectional areas of the rib cage and abdomen, one belt is typically secured around the mid-thorax and a second belt is typically placed around the mid-abdomen. rip belts can also be embedded in a garment, such as a shirt or vest, and appropriately positioned therein to measure rib cage and abdominal displacements, and other anatomical and physiological parameters, such as jugular venous pulse, respiration-related intra-pleural pressure changes, etc. illustrative is the vivometrics, inc. lifeshirt ® disclosed in u.s. patent no. 6,551,252, issued april 22, 2003 and u.s. patent no. 6,341,504, issued january 29, 2002 , each of which is incorporated by reference herein in its entirety. there are some drawbacks, however, to many rip belt systems. for example, rip belts are expensive in terms of material construction and in terms of the electrical and computing power required to operate them. in addition, the coils are generally large and tight on the chest and therefore can be cumbersome and uncomfortable for the athlete.
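as a concrete sketch of the linear voltage-to-area relationship described above, the following python fragment performs a hypothetical two-point calibration of an rip belt. the function names and calibration values are assumptions introduced for illustration only, not part of any disclosed system.

```python
# hypothetical two-point calibration for an rip belt, assuming the
# linear voltage/cross-sectional-area relationship described above.
# function names and calibration values are illustrative only.

def calibrate_rip_belt(v_low, area_low, v_high, area_high):
    """return (gain, offset) such that area = gain * voltage + offset."""
    gain = (area_high - area_low) / (v_high - v_low)
    offset = area_low - gain * v_low
    return gain, offset

def voltage_to_area(voltage, gain, offset):
    """map a raw belt voltage to an enclosed cross-sectional area."""
    return gain * voltage + offset

# calibrate against two known cross-sectional areas (cm^2)
gain, offset = calibrate_rip_belt(1.0, 600.0, 2.0, 700.0)
print(voltage_to_area(1.5, gain, offset))  # -> 650.0
```

because the relationship is linear over the belt's operational range, two reference points suffice; any additional points would only serve to verify linearity.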
other technologies have been developed in an attempt to monitor respiratory characteristics of a subject while avoiding the drawbacks of rip belt systems. these technologies generally work on a strain gauge principle and are often textile based. however, such technologies suffer significantly from motion interference that, by and large, renders them useless in athletic training applications where motion is necessarily at a relatively high level. in an attempt to rectify the drawbacks of the rip belt and strain gauge systems, various magnetometer systems have been recently developed to measure displacements of the rib cage and abdomen. respiratory magnetometer systems typically comprise one or more tuned pairs of air-core magnetometers or electromagnetic coils. other types of magnetometers sensitive to changes in distance therebetween can also be used. one magnetometer is adapted to transmit a specific high frequency ac magnetic field and the other magnetometer is adapted to receive the field. the paired magnetometers are responsive to changes in a spaced distance therebetween; the changes being reflected in changes in the strength of the magnetic field. to measure changes in (or displacement of) the anteroposterior diameter of the rib cage, a first magnetometer is typically placed over the sternum at the level of the 4th intercostal space and the second magnetometer is placed over the spine at the same level. using additional magnetometers can increase the accuracy of the magnetometer system. for example, to measure changes in the anteroposterior diameter of the abdomen, a third magnetometer can be placed on the abdomen at the level of the umbilicus and a fourth magnetometer can be placed over the spine at the same level. over the operational range of distances, the output voltage is linearly related to the distance between two magnetometers provided that the axes of the magnetometers remain substantially parallel to each other. 
as rotation of the axes can change the voltage, the magnetometers are typically secured to the subject's skin in a parallel fashion and rotation due to the motion of underlying soft tissue is minimized. as set forth herein, magnetometers can also be embedded in or carried by a wearable garment, such as a shirt or vest. the wearable monitoring garment eliminates the need to attach the magnetometers directly to the skin of a subject and, hence, resolves all issues related thereto. the wearable monitoring garment also facilitates repeated and convenient positioning of magnetometers at virtually any appropriate (or desired) position on a subject's torso. various methods, algorithms, and mathematical models have been employed with the aforementioned systems to determine tidal volume and other respiratory characteristics. in practice, "two-degrees-of-freedom" models are typically employed to determine tidal volume from rip belt-derived rib cage and abdominal displacements. the "two-degrees-of-freedom" models are premised on the inter-related movements by and between the thoracic cavity and the anterior and lateral walls of the rib cage and the abdomen, i.e., since the first rib and adjacent structures of the neck are relatively immobile, the moveable components of the thoracic cavity are taken to be the anterior and lateral walls of the rib cage and the abdomen. changes in volume of the thoracic cavity will then be reflected by displacements of the rib cage and abdomen. as is well known in the art, displacement (i.e., movement) of the rib cage can be directly assessed with an rip belt. diaphragm displacement cannot, however, be measured directly. but, since the abdominal contents are essentially incompressible, caudal motion of the diaphragm relative to the pelvis and the volume it displaces is reflected by outward movement of the anterolateral abdominal wall. 
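to make the linear voltage-distance relationship of paired magnetometers concrete, the sketch below converts a raw receiver-voltage trace from one magnetometer pair into a displacement signal relative to a baseline. the sensitivity (volts per cm) and baseline values are hypothetical.

```python
# illustrative conversion of a paired-magnetometer voltage trace into
# a displacement signal, assuming the linear voltage-distance behavior
# described above (coil axes held substantially parallel). the
# sensitivity (volts_per_cm) and baseline voltage are hypothetical.

def voltages_to_displacement(voltages, volts_per_cm, baseline_voltage):
    """map raw voltages to displacement (cm) relative to a baseline."""
    return [round((v - baseline_voltage) / volts_per_cm, 3)
            for v in voltages]

trace = [2.00, 2.10, 2.25, 2.10, 2.00]  # sampled receiver voltages (v)
print(voltages_to_displacement(trace, 0.5, 2.00))
# -> [0.0, 0.2, 0.5, 0.2, 0.0]
```

displacement traces of this kind, taken over the rib cage, abdomen, and chest wall, are the inputs to the volume models discussed next.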
the "two-degrees-of-freedom" model embraced by many in the field holds that tidal volume (v t ) is equal to the sum of the volume displacements of the rib cage and abdomen, i.e.: v t = α·rc + β·ab (equation 1), where rc and ab represent linear displacements of the rib cage and abdomen, respectively, and α and β represent volume-motion coefficients. the accuracy of the "two-degrees-of-freedom" model and, hence, methods employing same to determine volume-motion coefficients of the rib cage and abdomen, is limited by virtue of changes in spinal flexion that can accompany changes in posture. it has been found that v t can be over or under-estimated by as much as 50% of the vital capacity with spinal flexion and extension. see, mccool, et al., "estimates of ventilation from body surface measurements in unrestrained subjects", j. appl. physiol., vol. 61, pp. 1114-1119 (1986) and paek, et al., "postural effects on measurements of tidal volume from body surface displacements", j. appl. physiol., vol. 68, pp. 2482-2487 (1990). there are two major causes that contribute to the noted error and, hence, limitation. the first contributing cause of the error is the substantial displacement of the summed rib cage and abdomen signals that occurs with isovolume spinal flexion and extension or pelvic rotation. the second contributing cause of the error is posturally-induced changes in volume-motion coefficients. with isovolume spinal flexion, the rib cage comes down with respect to the pelvis and the axial dimension of the anterior abdominal wall becomes smaller. therefore, less of the abdominal cavity is bordered by the anterior abdominal wall. with a smaller anterior abdominal wall surface to displace, a given volume displacement of the abdominal compartment would be accompanied by a greater outward displacement of the anterior abdominal wall. the abdominal volume-motion coefficient would accordingly be reduced.
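the two-degrees-of-freedom computation itself is a simple linear combination. a minimal sketch follows; the coefficient values used are made up for illustration, since in practice α and β are subject-specific and must be obtained by calibration.

```python
# minimal sketch of the "two-degrees-of-freedom" model
# (v_t = alpha * rc + beta * ab). the coefficient values used here
# are made up for illustration; in practice they are subject-specific
# and determined by calibration.

def tidal_volume_2dof(d_rc, d_ab, alpha, beta):
    """tidal volume (l) from rib cage and abdominal displacements (cm)."""
    return alpha * d_rc + beta * d_ab

vt = tidal_volume_2dof(d_rc=1.2, d_ab=0.8, alpha=0.25, beta=0.30)
print(round(vt, 3))  # -> 0.54
```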
it has, however, been found that the addition of a measure of the axial motion of the chest wall, e.g., changes in the distance between the xiphoid and the pubic symphysis (xi), provides a third degree of freedom, which, when employed to determine tidal volume (v t ), can reduce the posture related error associated with the "two-degrees-of-freedom" model to within 15% of that measured by spirometry. see, paek, et al., "postural effects on measurements of tidal volume from body surface displacements", j. appl. physiol., vol. 68, pp. 2482-2487 (1990); and smith, et al., "three degree of freedom description of movement of the human chest wall", j. appl. physiol., vol. 60, pp. 928-934 (1986). several magnetometer systems are thus adapted to additionally measure the displacement of the chest wall. illustrative are the magnetometer systems disclosed in copending u.s. patent application no. 12/231,692, filed september 5, 2008 , which is incorporated by reference herein in its entirety. various methods, algorithms and models are similarly employed with the magnetometer systems to determine tidal volume (v t ) and other respiratory characteristics based on measured displacements of the rib cage, abdomen, and chest wall. the model embraced by many in the field is set forth in equation 2 below: v t = α·δrc + β·δab + γ·δxi (equation 2) where: δrc represents the linear displacement of the rib cage; δab represents the linear displacement of the abdomen; δxi represents axial displacement of the chest wall; α represents a rib cage volume-motion coefficient; β represents an abdominal volume-motion coefficient; and γ represents a chest wall volume-motion coefficient. there are, however, similarly several drawbacks and disadvantages associated with the noted "three-degrees-of-freedom" model. a major drawback is that posture related errors in tidal volume determinations are highly probable when a subject is involved in freely moving postural tasks, e.g., bending, wherein spinal flexion and/or extension is exhibited.
the most pronounced effect of spinal flexion is on the abdominal volume-motion coefficient (β). with bending, β decreases as the xiphi-umbilical distance decreases. various approaches and models have thus been developed to address the noted dependency and, hence, enhance the accuracy of tidal volume (v t ) determinations. in copending u.s. patent application no. 12/231,692 , a modified "three-degrees-of-freedom" model is employed to address the dependence of β on the xiphi-umbilical distance, i.e.: v t = α·δrc + (β u + ε·δxi)·δab + γ·δxi (equation 3) where: δrc represents the linear displacement of the rib cage; δab represents the linear displacement of the abdomen; δxi represents the change in the xiphi-umbilical distance from an upright position; α represents a rib cage volume-motion coefficient; β represents an abdominal volume-motion coefficient; β u represents the value of the abdominal volume-motion coefficient (β) in the upright position; ε represents the linear slope of the relationship of β as a function of the xiphi-umbilical distance xi; (β u + ε·δxi) represents the corrected abdominal volume-motion coefficient; and γ represents a xiphi-umbilical volume-motion coefficient. the "three-degrees-of-freedom" model reflected in equation 3 above and the associated magnetometer systems and methods disclosed in co-pending u.s. patent application no. 12/231,692 have been found to reduce the posture related error(s) in tidal volume (v t ) and other respiratory characteristic determinations. there are, however, several issues with the disclosed magnetometer systems and methods. one issue is that the magnetometer systems require complex calibration algorithms and associated techniques to accurately determine tidal volume (v t ) and other respiratory characteristics.
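the corrected model of equation 3 can likewise be expressed in a few lines. the sketch below is a direct transcription of the model with hypothetical placeholder coefficients; the actual values are obtained by subject calibration.

```python
# sketch of the modified "three-degrees-of-freedom" model (equation 3):
# v_t = alpha*d_rc + (beta_u + epsilon*d_xi)*d_ab + gamma*d_xi.
# all coefficient values below are hypothetical placeholders.

def tidal_volume_3dof(d_rc, d_ab, d_xi, alpha, beta_u, epsilon, gamma):
    """tidal volume with the abdominal coefficient corrected for the
    change in xiphi-umbilical distance (d_xi)."""
    beta_corrected = beta_u + epsilon * d_xi
    return alpha * d_rc + beta_corrected * d_ab + gamma * d_xi

# in the upright posture (d_xi = 0) the correction vanishes and the
# model reduces to the uncorrected three-degrees-of-freedom form
vt = tidal_volume_3dof(1.0, 1.0, 0.0, alpha=0.25, beta_u=0.30,
                       epsilon=0.05, gamma=0.10)
print(round(vt, 3))  # -> 0.55
```

with bending (a negative d_xi), the corrected abdominal coefficient shrinks, reflecting the posture-dependent behavior of β described above.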
a further issue, which is discussed in detail herein, is that the chest wall and respiratory data provided by the disclosed systems (and associated methods) is limited and, hence, limits the scope of respiratory characteristics and activity determined therefrom. brief summary of the invention the present invention provides apparatuses and methods for improved monitoring of a subject's respiratory characteristics, which is of particular use in the fields of athletic performance monitoring and medical evaluation. in accordance with the above objects and those that will be mentioned and will become apparent below, a fitness monitoring system for monitoring a subject engaged in a physical activity, in accordance with one embodiment of the invention, includes a first subsystem including a first plurality of paired electromagnetic coils disposed proximate to a subject, the first subsystem being configured to generate and transmit a plurality of coil signals, each of the plurality of coil signals representing a change in the distance between a pair of electromagnetic coils, and a second subsystem in communication with the first subsystem, the second subsystem being configured to receive the plurality of coil signals. the monitoring system can be configured to measure and/or calculate various performance parameters associated with an athlete's physical activity, as explained in further detail below. the monitoring system may include or communicate with one or more sensors for detecting information used to measure and/or calculate performance parameters. suitable sensors may include, for example, the sensors disclosed in commonly owned u.s. patent application no. 11/892,023, filed february 19, 2009 , titled "sports electronic training system, and applications thereof", commonly owned u.s. patent application no. 12/467,944, filed may 18, 2009 , titled "portable fitness monitoring systems, and applications thereof", and commonly owned u.s. patent application no. 
12/836,421, filed july 14, 2010 , titled "fitness monitoring methods, systems, and program products, and applications thereof", each of which is incorporated by reference herein in its entirety. in accordance with another embodiment of the invention, a method for monitoring a subject engaged in a physical activity is provided. the method includes transmitting a plurality of coil signals, wherein the plurality of coil signals is generated by a first plurality of paired electromagnetic coils disposed proximate to a subject, and wherein each of the plurality of coil signals is representative of a change in the distance between a pair of electromagnetic coils, and receiving the plurality of coil signals. in accordance with another embodiment of the invention, a monitoring system for noninvasively monitoring physiological parameters of a subject is provided. the monitoring system includes (i) a magnetometer subsystem having a first plurality of paired magnetometers, each of the first plurality of paired magnetometers being responsive to changes in a spaced distance therebetween, the magnetometer subsystem being adapted to generate and transmit a plurality of magnetometer signals, each of the magnetometer signals representing a change in spaced distance between a respective one of the first plurality of paired magnetometers, the first plurality of paired magnetometers being positioned at a plurality of first spaced magnetometer positions, at least a second plurality of the first plurality of paired magnetometers being positioned at second spaced magnetometer positions proximate the subject's chest region, and (ii) a processor subsystem in communication with the magnetometer subsystem and adapted to receive the plurality of magnetometer signals, the processor subsystem being programmed and adapted to control the magnetometer subsystem, the processor subsystem being further programmed and adapted to process the magnetometer signals, the processor subsystem including at 
least one empirical relationship for determining at least one respiratory characteristic from the plurality of magnetometer signals, and adapted to generate and transmit at least one respiratory characteristic signal representing the respiratory characteristic. in accordance with another embodiment of the invention, the monitoring system includes a data monitoring subsystem programmed and adapted to receive the respiratory characteristic signal, the data monitoring subsystem being programmed and adapted to recognize and display the respiratory characteristic represented by the respiratory characteristic signal. in accordance with another embodiment of the invention, the monitoring system includes a transmission subsystem adapted to control transmission of the first plurality of magnetometer signals from the magnetometer subsystem to the processor subsystem and the respiratory characteristic signal from the processor subsystem to the data monitoring subsystem. in accordance with another embodiment of the invention, the transmission subsystem includes a wireless communication network. in accordance with another embodiment of the invention, the monitoring system includes at least one physiological sensor adapted to detect at least one physiological characteristic associated with the subject, the physiological sensor being further adapted to generate and transmit at least one physiological parameter signal representing the detected physiological characteristic. in accordance with another embodiment of the invention, the monitoring system includes at least one spatial parameter sensor adapted to detect orientation and motion of the subject, the spatial parameter sensor being further adapted to generate and transmit a first spatial parameter signal representing a detected orientation of the subject and a second spatial parameter signal representing a detected motion of the subject. 
in accordance with another embodiment of the invention, the processor subsystem is further programmed and adapted to determine movement of the subject's chest wall based on the first plurality of magnetometer signals, and to generate and transmit a chest wall signal representing the chest wall movement. in accordance with another embodiment of the invention, the processor subsystem is further programmed and adapted to determine at least one respiratory activity of the subject based on the chest wall movement, and to generate and transmit a respiratory activity signal representing the respiratory activity. in accordance with another embodiment of the invention, the processor subsystem is further programmed and adapted to generate at least one three-dimensional model of the subject's chest wall from the first plurality of magnetometer signals. in accordance with another embodiment of the invention, the processor subsystem includes a plurality of stored adverse physiological characteristics, and the processor subsystem is further programmed and adapted to compare the detected physiological characteristic to the plurality of stored adverse physiological characteristics, and to generate and transmit a warning signal if the detected physiological characteristic is one of the plurality of stored adverse physiological characteristics. in accordance with another embodiment of the invention, the processor subsystem includes a first plurality of chest wall parameters, each of the first plurality of chest wall parameters having at least a third plurality of magnetometer signals and at least a first spatial parameter associated therewith, each of the first plurality of chest wall parameters representing a first respiratory characteristic and first anatomical parameter. 
in accordance with another embodiment of the invention, the processor subsystem is further programmed and adapted to compare the first plurality of magnetometer signals and the spatial parameter signals to the first plurality of chest wall parameters, to select a respective one of the first plurality of chest wall parameters based on the first plurality of magnetometer signals and the spatial parameter signals, and to generate and transmit at least a first chest wall parameter signal representing the selected first chest wall parameter. brief description of the figures further features and advantages will become apparent from the following and more particular description of the present invention, as illustrated in the accompanying drawings, and in which like referenced characters generally refer to the same parts or elements throughout the views. fig. 1 is a schematic illustration of a physiology monitoring system, according to one embodiment of the invention. fig. 2 is a schematic illustration of a dual-paired electromagnetic coil arrangement, according to one embodiment of the invention. fig. 3 is a side view of a subject, showing the position of the dual-paired electromagnetic coil arrangement of fig. 2 on the subject, according to one embodiment of the invention. fig. 4 is a perspective view of the subject, showing the position of electromagnetic coils on the front of the subject, according to one embodiment of the invention. fig. 5 is a plane view of the subject's back, showing the position of electromagnetic coils thereon, according to one embodiment of the invention. figs. 6 and 7 are schematic illustrations of a multiple-paired electromagnetic coil arrangement, according to one embodiment of the invention. fig. 8 is a perspective view of a subject, showing the position of the multiple-paired electromagnetic coils shown in fig. 6 on the front of the subject, according to one embodiment of the invention. fig. 
9 is a plane view of the subject's back, showing the position of electromagnetic coils thereon, according to one embodiment of the invention. figs. 10-12 are schematic illustrations of coil transmission axes provided by several multiple-paired coil embodiments of the invention. fig. 13 is a perspective view of a subject, showing alternative positions of the multiple-paired electromagnetic coils shown in fig. 6 on the front of the subject, according to another embodiment of the invention. fig. 14 is a plane view of the subject's back, showing the positioning of three pairs of electromagnetic coils thereon, according to another embodiment of the invention. fig. 15 is a plane view of the subject's back, showing alternative positions of the paired electromagnetic coils shown in fig. 14 thereon, according to another embodiment of the invention. fig. 16 is a perspective view of a subject, showing the position of six pairs of electromagnetic coils on the front and one side of the subject, according to another embodiment of the invention. fig. 17 is a plane view of the subject's back, showing the position of five pairs of electromagnetic coils on the back and both sides of the subject, according to another embodiment of the invention. detailed description of the invention before describing the present invention in detail, it is to be understood that this invention is not limited to particularly exemplified methods, apparatuses, systems, or circuits, as such may, of course, vary. thus, although a number of methods and systems similar or equivalent to those described herein can be used in the practice of the present invention, the preferred methods, apparatuses, and systems are described herein. it is also to be understood that the terminology used herein is for the purpose of describing particular embodiments of the invention only and is not intended to be limiting.
unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one having ordinary skill in the art to which the invention pertains. as used in this specification and the appended claims, the singular forms "a", "an", and "the" include plural referents unless the content clearly dictates otherwise. further, all publications, patents, and patent applications cited herein, whether supra or infra, are hereby incorporated by reference in their entirety. the publications discussed herein are provided solely for their disclosure prior to the filing date of the present application. nothing herein is to be construed as an admission that the present invention is not entitled to antedate such publication(s) by virtue of prior invention. further, the dates of publication may be different from the actual publication dates, which may need to be independently confirmed. definitions the terms "respiratory parameter" and "respiratory characteristic", as used herein, mean and include a characteristic associated with the respiratory system and functioning thereof, including, without limitation, breathing frequency (fb), tidal volume (v t ), inspiration volume (v i ), expiration volume (v e ), minute ventilation (ve), inspiratory breathing time, expiratory breathing time, and flow rates (e.g., rates of change in the chest wall volume). the terms "respiratory parameter" and "respiratory characteristic" further mean and include inferences regarding ventilatory mechanics from synchronous or asynchronous movements of the chest wall compartments. according to the present invention, flow rates and respiratory accelerations can be determined from a volume signal. further, numerous inferences regarding ventilatory mechanics can be drawn from the degree of asynchrony in movement occurring amongst the discrete compartments that make up the chest wall. 
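several of the respiratory parameters defined above are simple functions of one another; for example, minute ventilation is the product of breathing frequency and mean tidal volume. a minimal sketch of those relationships, using synthetic per-breath data, follows.

```python
# deriving breathing frequency (fb), mean tidal volume (v_t), and
# minute ventilation (ve = fb * v_t) from per-breath volumes, as the
# definitions above describe. the breath data are synthetic.

def minute_ventilation(breath_volumes_l, duration_s):
    """return (mean v_t in l, fb in breaths/min, ve in l/min)."""
    n = len(breath_volumes_l)
    vt_mean = sum(breath_volumes_l) / n
    fb = n * 60.0 / duration_s
    return vt_mean, fb, fb * vt_mean

vt, fb, ve = minute_ventilation([0.5, 0.6, 0.55, 0.55], duration_s=16.0)
print(fb)  # -> 15.0 breaths per minute
```

flow rates, by contrast, would be obtained by differentiating the continuous chest-wall volume signal rather than from per-breath summaries.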
the terms "respiratory system disorder", "respiratory disorder", and "adverse respiratory event", as used herein, mean and include any dysfunction of the respiratory system that impedes the normal respiration or ventilation process. the terms "physiological parameter" and "physiological characteristic", as used herein, mean and include, without limitation, electrical activity of the heart, electrical activity of other muscles, electrical activity of the brain, pulse rate, blood pressure, blood oxygen saturation level, skin temperature, and core temperature. the terms "spatial parameter" and "spatial characteristic", as used herein, mean and include a subject's orientation and/or movement. the terms "patient" and "subject", as used herein, mean and include humans and animals. pulmonary ventilation, tidal volume, respiratory rate, and other associated respiratory characteristics can provide a reliable and practical measure of oxygen and carbon dioxide transpiration in a living body. respiratory characteristics are directly related to exercise effort, physiological stress, and other physiological characteristics. one way to externally determine tidal volume is to measure the change in thoracic volume. change in thoracic volume is caused by the expansion and contraction of the lungs. as the gas pressure in the lungs at the maxima and minima of the pressure ranges is equilibrated to surrounding air pressure, there is a very close and monotonic relationship between the volume of the lungs and the volume of air inspired. accurate measurement of the change in thoracic volume involves measuring the change in the diameter of the chest at the ribcage. measurement of the change in the diameter of the chest below the ribcage can provide additional accuracy to the measurement. 
monitoring changes in the diameter of the chest below the ribcage can account for diaphragm-driven breathing, where the contraction and relaxation of the diaphragm muscle causes the organs of the abdomen to be pushed down and outwards, thereby increasing the available volume of the lungs. monitoring and analyzing respiratory characteristics can be particularly useful in athletic applications, as there is a direct link between performance and an athlete's processing of oxygen and carbon dioxide. for example, in many athletic training situations, it is helpful to know when the athlete's body transitions between aerobic exercise and anaerobic exercise, a transition point sometimes referred to as the athlete's ventilatory threshold. crossing over the ventilatory threshold level is an indicator of pending performance limitations during sport activities. for example, it can be beneficial for athletes to train in the anaerobic state for limited periods of time. however, for many sports, proper training requires only limited periods of anaerobic exercise interrupted by lower intensity aerobic exercises. it is difficult for an athlete to determine which state, anaerobic or aerobic, he or she is in without referencing physiological characteristics such as respiratory characteristics. therefore, respiratory monitoring and data processing can provide substantial benefits in athletic training by allowing for accurate and substantially instantaneous measurements of the athlete's exercise state. changes in an athlete's ventilatory threshold over time, as well as patterns of tidal volume during post-exercise recovery, can be valuable for measuring improvements in the athlete's fitness level over the course of a training regime. respiratory monitoring can further allow for monitoring and analyzing changes in a subject's resting metabolic rate.
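as a purely illustrative example of how such a transition might be flagged from monitored data, the sketch below scans stage-by-stage minute ventilation (ve) from an incremental test for a disproportionate rise. real ventilatory-threshold detection is considerably more involved; the break-factor heuristic here is an assumption introduced for illustration, not a disclosed method.

```python
# highly simplified ventilatory-threshold flagging: report the first
# exercise stage whose minute-ventilation (ve) increment is
# disproportionately larger than the previous increment. the
# break_factor heuristic is purely illustrative.

def threshold_stage(ve_per_stage, break_factor=1.5):
    """index of the first stage whose ve increment exceeds
    break_factor times the previous increment, else None."""
    for i in range(2, len(ve_per_stage)):
        prev_inc = ve_per_stage[i - 1] - ve_per_stage[i - 2]
        inc = ve_per_stage[i] - ve_per_stage[i - 1]
        if prev_inc > 0 and inc > break_factor * prev_inc:
            return i
    return None

print(threshold_stage([20, 25, 30, 36, 50]))  # -> 4 (nonlinear rise)
print(threshold_stage([20, 25, 30, 35, 40]))  # -> None (linear rise)
```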
a second ventilatory threshold exists at the point when the load on the body is such that the pulmonary ventilation is no longer sufficient to support life sustainably. dwelling too long in this state will lead to collapse and so determination of this point can be of value in medical applications, and particularly to first responders and other emergency response personnel. as indicated above, the present invention is directed to noninvasive methods and associated systems for monitoring the physiological status of a subject; particularly, the status of the subject's respiratory system. magnetometers can be used, and can be embedded in or carried by a wearable garment, such as a shirt or vest. the wearable monitoring garment eliminates the need to attach the magnetometers directly to the skin of a subject and, hence, resolves all issues related thereto. the wearable monitoring garment also facilitates repeated and convenient positioning of magnetometers at virtually any appropriate (or desired) position on a subject's torso. as will be readily appreciated by one having ordinary skill in the art, the methods and systems of the invention provide numerous significant advantages over conventional methods and systems for monitoring physiological status. among the advantages are the provision of physiology monitoring methods and systems that provide (i) accurate, real-time determination of a plurality of physiological characteristics, (ii) accurate determination of a plurality of respiratory parameters and characteristics, (iii) accurate assessment of chest wall movement(s) and the relationship(s) thereof to respiratory activity and respiratory associated events, such as speaking and coughing, (iv) real-time determination and characterization of respiratory events, and (v) real-time determination and characterization of a subject's orientation and movement. 
a further significant advantage is the provision of additional and pertinent data relating to chest wall movement that facilitates three-dimensional modeling of chest wall shape and movement of ambulatory subjects. another significant advantage of the present invention is the provision of systems and associated methods that facilitate evaluation and quantification of ventilatory mechanics, e.g., synchronous and asynchronous movement of the chest wall compartments. as will readily be appreciated by one having ordinary skill in the art, this has implications in many fields of use, including applications related to specific disease states, such as asthma and chronic obstructive pulmonary disease (copd), and acute disease states, such as pneumothorax and pulmonary embolism. another advantage of the present invention is the provision of systems for accurately determining tidal volume (v t ) and other respiratory characteristics that do not require complex calibration algorithms and associated methods. this similarly has significant implications in many fields of use, including applications related to specific disease states, such as copd. several embodiments of the physiology monitoring systems and associated methods of the invention will now be described in detail. it is understood that the invention is not limited to the systems and associated methods described herein. indeed, as will be appreciated by one having ordinary skill in the art, systems and associated methods similar or equivalent to the described systems and methods can also be employed within the scope of the present invention. further, although the physiology monitoring systems and associated methods are described herein in connection with monitoring physiological parameters and characteristics in a human body, the invention is in no way limited to such use.
the physiology monitoring systems and associated methods of the invention can also be employed to monitor physiological parameters in nonhuman bodies. the physiology monitoring systems and associated methods of the invention can also be employed in non-medical contexts, e.g., determining volumes and/or volume changes in extensible bladders used for containing liquids and/or gasses. referring first to fig. 1 , there is shown a schematic illustration of one embodiment of a physiology monitoring system according to the present invention. as illustrated in fig. 1 , the physiology monitoring system 10 preferably includes a data acquisition subsystem 20, a control-data processing subsystem 40, a data transmission subsystem 50, a data monitoring subsystem 60, and a power source 70, such as a battery.
data acquisition subsystem
in accordance with one embodiment of the invention, the data acquisition subsystem 20 includes means for acquiring anatomical parameters that can be employed to determine at least one respiratory characteristic, more preferably a plurality of respiratory characteristics, in cooperation with control-data processing subsystem 40, and, in some embodiments, data monitoring subsystem 60. the anatomical parameters may include changes in (or displacements of) the anteroposterior diameters of the rib cage and abdomen, and axial displacement of the chest wall. the means for acquiring the noted parameters can include, e.g., sensors. the sensors can include paired electromagnetic coils or magnetometers. although the present invention is described herein in terms of magnetometers and magnetometer systems, it is understood that other types of sensor systems capable of measuring changes in distance between two or more sensors in the system can be used in place of, or in addition to, magnetometers.
specifically, the invention is not limited to the use of electromagnetic coils or magnetometers to measure changes in the anteroposterior diameters of the rib cage and abdomen, and axial displacement of the chest wall. various additional means and devices that can be readily adapted to measure the noted anatomical parameters can be employed within the scope of the invention. such means and devices include, without limitation, hall effect sensors and electronic compass sensors. wireless sensors with the capability of measuring time delay in a signal sent from one sensor to another, and thereby determining the distance between the two sensors, can be substituted for or provided in addition to magnetometers in accordance with the present invention. according to the invention, at least two magnetometers can be employed to measure the noted subject parameters (or displacements). in some embodiments of the invention, two pairs of magnetometers are employed. in some embodiments, more than two pairs of magnetometers are employed. referring now to fig. 2 , there is shown one embodiment of a dual-paired electromagnetic coil arrangement for detecting and measuring displacement(s) of the rib cage, abdomen, and chest wall. as illustrated in fig. 2 , the electromagnetic coils include first transmission and receive coils 22a, 22b, and second transmission and receive coils 24a, 24b. in fig. 2 , the letter t designates the transmission coils and the letter r designates the receiving coils; however, the coils are not limited to such designations. the electromagnetic coils of embodiments of the present invention are described as "receiving" or "transmitting"; however, each receiving coil can alternatively and independently be a transmitting coil, and each transmitting coil can alternatively and independently be a receiving coil. coils can also perform both receiving and transmitting functions.
details of the noted arrangement and associated embodiments (discussed below) are set forth in co-pending u.s. patent application no. 12/231,692, filed september 5, 2008 , co-pending u.s. patent application no. 61/275,576, filed september 1, 2009 , and co-pending u.s. patent application no. 12/869,576 , filed concurrently herewith, each of which, as indicated above, is expressly incorporated by reference herein in its entirety. as set forth in the noted applications, in some embodiments of the invention, at least receive coil 24b is adapted to receive coil transmissions from each of transmission coils 22a, 24a (i.e., at least receive coil 24b may be a dual function coil, where "dual function coil" refers to a coil capable of receiving transmissions from a plurality of different transmission coils). in some embodiments, each receive coil 22b, 24b is adapted to receive transmissions from each transmission coil 22a, 24a. referring now to figs. 3-5 , there is shown the position of coils 22a, 22b, 24a, 24b on a subject or patient 100, in accordance with one embodiment of the invention. as illustrated in figs. 3-5 , first transmission coil 22a is preferably positioned on front 101 of subject 100 proximate the umbilicus of subject 100, and first receive coil 22b is preferably positioned proximate the same axial position, but on back 102 of subject 100. second receive coil 24b is preferably positioned on front 101 of subject 100 proximate the base of the sternum, and second transmission coil 24a is preferably positioned proximate the same axial position, but on back 102 of subject 100. as set forth in co-pending u.s. patent application no. 12/231,692 , the positions of transmission coils 22a, 24a and receive coils 22b, 24b can be reversed (i.e., transmission coil 22a and receive coil 24b can be placed on back 102 of subject 100 and transmission coil 24a and receive coil 22b can be placed on front 101 of subject 100).
both transmission coils 22a and 24a can also be placed on front 101 or back 102 of subject 100 and receive coils 22b and 24b can be placed on the opposite side. referring back to fig. 3 , an arrow 23 represents the chest wall or, in this instance, the xiphi-umbilical distance (xi) that is monitored. an arrow 25 represents the monitored rib cage distance, while an arrow 29 represents the monitored abdominal distance. in accordance with one embodiment of the invention, wherein coil 24b is a dual function coil, as subject or patient 100 breathes, displacement(s) of the rib cage and abdomen (i.e., changes in the distance between each pair of coils 22a, 22b and 24a, 24b, denoted, respectively, by arrow 29 and arrow 25), is determined from measured changes in voltage between paired coils 22a, 22b and 24a, 24b. the axial displacement of the chest wall, denoted by arrow 23, (e.g., xiphi-umbilical distance (xi)), is also determined from measured changes in voltage between transmission coil 22a and receive coil 24b. as indicated above, in some embodiments of the invention, more than two pairs of electromagnetic coils can be employed. as set forth in u.s. patent application no. 61/275,575, filed september 1, 2009 , and co-pending u.s. patent application no. 12/869,582 , filed concurrently herewith, each of which is incorporated by reference herein in its entirety, adding additional electromagnetic coils in anatomically appropriate positions on a subject provides numerous significant advantages over dual-paired coil embodiments. among the advantages is the provision of additional (and pertinent) data and/or information regarding chest wall movement(s) and the relationship(s) thereof to respiratory activity and respiratory associated events, such as speaking, sneezing, laughing, and coughing. 
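the voltage-to-distance conversion underlying the displacement measurements above can be sketched as follows. in the dipole-field approximation commonly applied to paired coils, the induced voltage falls off roughly as the inverse cube of coil separation; the cube-law model, the helper names, and the one-point calibration constant k below are assumptions for illustration, not the patent's stated algorithm.

```python
def calibrate_k(voltage, known_distance):
    # one-point calibration: solve v = k / d**3 for k at a known separation
    return voltage * known_distance ** 3

def voltage_to_distance(voltage, k):
    # invert the dipole-field cube law: d = (k / v)**(1/3)
    return (k / voltage) ** (1.0 / 3.0)
```

as the chest wall expands, the separation between a paired transmission and receive coil grows, the received voltage drops, and the inferred distance increases accordingly; tracking that distance over time yields the displacement waveforms denoted by arrows 23, 25, and 29.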
further, the multiple single, cross, and interaction axes of the electromagnetic coil transmissions that result from the additional coils (and placement thereof) provide highly accurate quantification of changes in chest wall volume, and facilitate three-dimensional modeling of chest wall shape and movement of ambulatory subjects, and the evaluation and quantification of ventilatory mechanics, e.g., synchronous and asynchronous movement of the chest wall compartments. referring now to figs. 6-17 , the multiple-paired coil embodiments of the invention will now be described in detail. it is, however, to be understood that the invention is not limited to the multiple-paired coil embodiments described herein. as will be appreciated by one having ordinary skill in the art, the multiple-paired coil embodiments can include any number of additional electromagnetic coils (e.g., 3, 4, 5, 6, 7, 8, 9, 10). for example, in embodiments using three magnetometers (e.g., electromagnetic coils), it is understood that the three electromagnetic coils can function as multiple pairs. specifically, referring to the coils as first, second, and third coils, the first coil can form a pair with the second coil and the first coil can also form a pair with the third coil. in addition, the second coil can also form a pair with the third coil. thus, a magnetometer system utilizing three electromagnetic coils can be configured to form one, two, or three pairs. each of the first, second, and third coils can be configured to transmit signals, receive signals, or to both receive and transmit signals. a magnetometer can communicate with a plurality of other magnetometers, and therefore a particular magnetometer can form a part of more than one pair. the position of the additional coils and the function thereof can also be readily modified and/or adapted for a particular application within the scope of the present invention. referring first to figs.
6-8 , there is shown one embodiment of the multiple-paired coil embodiment of the invention. as illustrated in fig. 7 , the noted embodiment similarly includes electromagnetic coils 22a, 22b, 24a, 24b. according to the invention, any of the aforementioned dual-paired coil embodiments associated with coils 22a, 22b, 24a, 24b can be employed with the multiple-paired coil embodiments of the invention. as also illustrated in figs. 6 and 7 , the multiple-paired coil embodiment can further include at least two additional pairs of electromagnetic coils: third transmission coil 32a, third receive coil 32b, fourth transmission coil 34a, and fourth receive coil 34b. in some embodiments of the invention, at least one of the two additional receive coils 32b, 34b is a dual function coil and, hence, adapted to receive transmissions from each of transmission coils 32a, 22a, 34a. in some embodiments, each receive coil 32b, 34b is adapted to receive transmissions from each transmission coil 32a, 22a, 34a. referring now to figs. 8 and 9 , there is shown the position of coils 22a, 22b, 24a, 24b, 32a, 32b, 34a, 34b on a subject or patient 100, in accordance with one embodiment of the invention. as illustrated in figs. 8 and 9 , first transmission coil 22a is preferably positioned on front 101 of subject 100 proximate the umbilicus of subject 100, and first receive coil 22b is preferably positioned proximate the same axial position, but on back 102 of subject 100. second receive coil 24b is preferably positioned on front 101 of subject 100 proximate the base of the sternum, and second transmission coil 24a is positioned proximate the same axial position, but on back 102 of subject 100. third transmission coil 32a is preferably positioned on front 101 of subject 100 and axially spaced to the right of first transmission coil 22a. fourth transmission coil 34a is preferably positioned on front 101 of subject 100 and axially spaced to the left of first transmission coil 22a.
in the illustrated embodiment, each transmission coil 32a, 22a, 34a is preferably positioned proximate the same axial plane (denoted "ap 1 " in figs. 6 and 7 ). third receive coil 32b is preferably positioned on front 101 of subject 100 and axially spaced to the right of second receive coil 24b. fourth receive coil 34b is preferably positioned on front 101 of subject 100 and axially spaced to the left of second receive coil 24b. preferably, each receive coil 32b, 24b, 34b is similarly positioned proximate the same axial plane (denoted "ap 2 " in figs. 6 and 7 ). as will readily be appreciated by one having ordinary skill in the art, the axial spacing of coils 32a, 32b, 34a, 34b will, in many instances, be dependent on the body size and structure of the subject, e.g., adult, female, male, adolescent. the distance between and amongst the coils can also vary with the degree of measurement precision required or desired. preferably, in the noted embodiment, the axial spacing between coils 32a, 32b, 34a, 34b and coils 22a, 22b, 24a, 24b is substantially equal or uniform. as indicated above, a significant advantage of the multiple-paired coil embodiments of the invention is the provision of multiple single, cross, and interaction coil transmission axes that facilitate three-dimensional modeling of chest wall shape and movement of ambulatory subjects, and evaluation and quantification of ventilatory mechanics, e.g., synchronous and asynchronous movement of the chest wall compartments. a further significant advantage of the multiple-paired coil embodiments of the invention is that real-time, three-dimensional models of the chest wall can be created by simultaneous monitoring of the chest wall with the multiple-paired coils of the invention. another advantage is that with sufficiently tight tolerances on the coil field strength(s), volume calibration would not be necessary.
measurement precision would, thus, be determined by the geometrical void spaces between the various coil pairs. referring now to figs. 10-12 , there are shown several schematic illustrations of coil transmission axes provided by three multiple-paired coil embodiments of the invention. referring first to fig. 10 , there is shown one embodiment, wherein each receive coil 32b, 24b, 34b, 22b is a single function coil. receive coil 32b is adapted to receive a transmission t 32 from transmission coil 32a. receive coil 24b is adapted to receive a transmission t 22 from transmission coil 22a. receive coil 34b is adapted to receive a transmission t 34 from transmission coil 34a. receive coil 22b is adapted to receive a transmission t 24 from transmission coil 24a. referring now to fig. 12 , there is shown another embodiment, wherein receive coil 24b is a dual function coil. receive coil 32b is adapted to receive transmission t 32 from transmission coil 32a, receive coil 34b is adapted to receive transmission t 34 from transmission coil 34a, and receive coil 22b is adapted to receive transmission t 24 from transmission coil 24a. receive coil 24b is, however, adapted to receive transmission t 32 from transmission coil 32a, transmission t 22 from transmission coil 22a, transmission t 34 from transmission coil 34a, and transmission t 24 from transmission coil 24a. in a further embodiment, illustrated in fig. 11 , each receive coil 32b, 24b, 34b, 22b is a dual function coil. as illustrated in fig. 11 , receive coil 32b is adapted to receive transmission t 32 from transmission coil 32a, transmission t 22 from transmission coil 22a, transmission t 34 from transmission coil 34a, and transmission t 24 from transmission coil 24a. receive coils 24b, 34b, and 22b are also adapted to receive transmission t 32 from transmission coil 32a, transmission t 22 from transmission coil 22a, transmission t 34 from transmission coil 34a, and transmission t 24 from transmission coil 24a.
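the measurement geometry of the fig. 11 style embodiment, where every receive coil hears every transmission, amounts to a simple enumeration: m transmission coils and n dual-function receive coils yield m x n independent transmission axes. the sketch below counts those axes; the coil labels follow the figures, but the helper itself is an illustrative construct, not part of the disclosure.

```python
def transmission_axes(tx_coils, rx_coils):
    """List every transmitter -> receiver axis available when each
    receive coil is dual function, i.e., hears every transmission."""
    return [(t, r) for r in rx_coils for t in tx_coils]
```

for the four-by-four arrangement of fig. 11 this gives sixteen axes, versus four in the single-function arrangement of fig. 10, which is the source of the additional single, cross, and interaction data discussed above.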
the noted multiple-paired coil embodiments significantly enhance the available data and information associated with chest wall movement and, hence, respiratory activity and respiratory associated events. the additional data and information also facilitates the evaluation and quantification of ventilatory mechanics, e.g., synchronous and asynchronous movement of the chest wall compartments. the supplemental coil transmissions (or signals) can also be readily employed to reduce or eliminate the frequency and impact of magnetic field interference and artifacts, which are commonly encountered in electromagnetic coil systems. as indicated above, the multiple-paired coil embodiments of the invention are not limited to the embodiment described above, wherein two additional pairs of electromagnetic coils are uniformly positioned on the front of a subject. referring now to figs. 13-17 , there are shown additional multiple-paired coil embodiments of the invention. referring first to fig. 13 , there is shown a multiple-paired coil embodiment, wherein the two additional coil pairs 32a, 32b, and 34a, 34b are non-uniformly positioned on front 101 of subject 100. as indicated, the additional coil pairs can be positioned at any appropriate (or desired) positions on the torso of subject 100. additional paired coils (e.g., transmission coil 36a paired with receive coil 36b, and transmission coil 38a paired with receive coil 38b) can also be positioned on back 102 of subject 100, as illustrated in fig. 14 . coils 36a, 36b, 38a, 38b can be positioned uniformly, as shown in fig. 14 , or non-uniformly, as illustrated in fig. 15 . referring now to figs. 16-17 , there is shown another multiple-paired coil embodiment, wherein additional paired coils are positioned on the torso of subject 100. as illustrated in fig. 
16 , additional paired coils (e.g., transmission coil 33a paired with receive coil 33b, and transmission coil 35a paired with receive coil 35b) can be positioned on front 101 of subject 100. in the noted embodiment, transmission coil 33a is preferably positioned above and between transmission coils 32a and 22a, and transmission coil 35a is preferably positioned above and between transmission coils 22a and 34a. receive coil 33b is preferably positioned above and between receive coils 32b and 24b, and receive coil 35b is preferably positioned above and between receive coils 24b and 34b. as illustrated in figs. 16 and 17 , additional paired coils (e.g., transmission coil 37a paired with receive coil 37b, and transmission coil 39a paired with receive coil 39b) can also be positioned on opposite sides of the subject 100. additionally, the transmission coils and receive coils disclosed herein need not necessarily be paired one-to-one. for example, a single receive coil may be configured to receive transmissions from multiple transmission coils, and a single transmission coil may be configured to transmit to multiple receive coils. as indicated above, the multiple-paired coil embodiments of the invention are not limited to the multiple-paired coil embodiments shown in figs. 6-17 . it is again emphasized that the multiple-paired coil embodiments can include any number of additional pairs of electromagnetic coils. further, the position of the additional coils and the function thereof can also be readily modified and/or adapted for a particular application within the scope of the present invention. in some embodiments of the invention, the data acquisition subsystem 20 can include means for directly monitoring the orientation and/or movement of subject 100, e.g., spatial parameters.
according to the invention, various conventional means can be employed to monitor or measure subject orientation and movement, including optical encoders, proximity and hall effect switches, laser interferometry, accelerometers, gyroscopes, global positioning systems (gps), and/or other spatial sensors. in one embodiment, the means for directly monitoring the orientation and movement of a subject includes at least one multi-function inertial sensor, e.g., 3-axis accelerometer or 3-axis gyroscope. as is well known in the art, orientation and motion of a subject can be readily determined from the signals or data transmitted by a multi-function accelerometer. according to the invention, the accelerometer can be disposed in any anatomically appropriate position on a subject. in one embodiment of the invention, an accelerometer (denoted "ac 1 " in fig. 8 ) is disposed proximate the base of the subject's sternum.
control-data processing subsystem
according to the present invention, control-data processing subsystem 40 can include programs, instructions, and associated algorithms for performing the methods of the invention, including control algorithms and associated parameters to control data acquisition subsystem 20 and, hence, the paired electromagnetic coils, e.g., coils 22a, 22b, 24a, 24b, 32a, 32b, 34a, 34b and the function thereof, and the transmission and receipt of coil transmissions, e.g., transmissions t 32 , t 22 , t 34 , and t 24 , as well as data transmission subsystem 50 and data monitoring subsystem 60. such is discussed in detail below.
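the orientation determination from a 3-axis accelerometer noted above can be illustrated with the standard static tilt-sensing formulas, which recover pitch and roll from the gravity components. this is an assumption of how such a determination might be made, not the patent's specific method, and it is valid only while the sensor is not otherwise accelerating.

```python
import math

def orientation_from_accel(ax, ay, az):
    # static tilt from the gravity vector: pitch about the lateral axis,
    # roll about the longitudinal axis, both returned in degrees
    pitch = math.degrees(math.atan2(-ax, math.hypot(ay, az)))
    roll = math.degrees(math.atan2(ay, az))
    return pitch, roll
```

with the sensor at the base of the sternum, such angles distinguish, e.g., a supine subject from an upright one, which is the kind of spatial parameter the processing subsystem consumes below.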
control-data processing subsystem 40 is further programmed and adapted to retrieve and process coil transmissions or signals from the electromagnetic coils (e.g., coils 22a, 22b, 24a, 24b, 32a, 32b, 34a, 34b) in order to determine physiological information associated with monitored subject 100, to retrieve, process, and interpret additional signals transmitted by additional spatial parameter and physiological sensors (discussed below), and to transmit selective coil data, physiological and spatial parameters, physiological characteristics, and subject information to data monitoring subsystem 60. in a preferred embodiment of the invention, control-data processing subsystem 40 further includes at least one "n-degrees-of-freedom" model or algorithm for determining at least one respiratory characteristic (e.g., v t ) from the retrieved coil transmissions or signals (e.g., measured displacements of the rib cage, abdomen, and chest wall). in one embodiment, control-data processing subsystem 40 includes one or more "three-degrees-of-freedom" models or algorithms for determining at least one respiratory characteristic (preferably, a plurality of respiratory characteristics) from the retrieved coil transmissions (or signals). preferred "three-degrees-of-freedom" models (or algorithms) are set forth in co-pending u.s. patent application no. 12/231,692 . in some embodiments, control-data processing subsystem 40 is further programmed and adapted to assess physiological characteristics and parameters by comparison with stored physiological benchmarks. control-data processing subsystem 40 can also be programmed and adapted to assess respiratory and spatial characteristics and parameters by comparison with stored respiratory and spatial benchmarks. control-data processing subsystem 40 can generate status signals if corresponding characteristics or parameters are present.
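a "three-degrees-of-freedom" model of the kind referenced above can be sketched as a weighted sum of the three measured displacements: rib cage, abdomen, and the axial (xiphi-umbilical) dimension. the coefficients below are illustrative placeholders for the subject-specific calibration values such a model would carry; the actual preferred models are those of the incorporated application no. 12/231,692, which this sketch does not reproduce.

```python
def tidal_volume_3dof(d_rc, d_ab, d_xi, a=1.0, b=1.0, c=1.0):
    """Estimate tidal volume v_t from rib-cage (d_rc), abdominal (d_ab),
    and xiphi-umbilical (d_xi) displacements via calibrated weights
    a, b, c (placeholder values; real weights come from calibration)."""
    return a * d_rc + b * d_ab + c * d_xi
```

the point of the third degree of freedom is visible in the signature: axial chest wall motion contributes its own term rather than being folded into the rib-cage and abdominal displacements.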
the benchmarks may indicate, for example, adverse conditions or fitness goals, and the status signals may include warnings or alarms. control-data processing subsystem 40 also preferably includes suitable algorithms that are designed and adapted to determine respiratory characteristics, parameters, and statuses from measured multiple, interactive chest wall displacements. the algorithms are also preferably adapted to discount measured chest wall displacements that are associated with non-respiration movement, e.g., twisting of the torso, to enhance the accuracy of respiratory characteristic (and/or parameter) determinations. control-data processing subsystem 40 additionally preferably includes suitable programs, algorithms, and instructions to generate three-dimensional models of a subject's chest wall from the measured multiple, interactive chest wall displacements. according to the invention, various programs and methods known in the mathematical arts (e.g., differential geometric methods) can be employed to process the signals (reflecting the chest wall distances and displacement) into a representation of the shape of the torso. indeed, it is known that providing sufficient distances defined on a two-dimensional surface (a metric) permits the shape of the surface to be constructed in a three-dimensional space. see, e.g., badler, et al., "simulating humans: computer graphics, animation, and control", (new york: oxford university press, 1993 ) and decarlo, et al., "integrating anatomy and physiology for behavior modeling", medicine meets virtual reality 3 (san diego, 1995 ). preferably, in some embodiments of the invention, control-data processing subsystem 40 is further programmed and adapted to determine additional and, in some instances, interrelated anatomical parameters, such as bending, twisting, coughing, etc., from the measured multiple, interactive chest wall displacements.
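the geometric principle cited above, that enough pairwise distances fix positions in space, can be shown in miniature with planar trilateration: the distances from one point to three fixed anchors determine its coordinates. this toy example (the anchor placement and names are assumptions for illustration) is far simpler than full chest-wall surface reconstruction, but it rests on the same idea.

```python
def trilaterate_2d(r1, r2, r3, dx, dy):
    """Recover (x, y) of a point from its distances r1, r2, r3 to
    anchors at (0, 0), (dx, 0), and (0, dy) respectively, by
    subtracting the circle equations pairwise."""
    x = (r1 * r1 - r2 * r2 + dx * dx) / (2.0 * dx)
    y = (r1 * r1 - r3 * r3 + dy * dy) / (2.0 * dy)
    return x, y
```

scaling this idea up, with many coil-to-coil distances distributed over the torso, is what permits the chest wall surface to be reconstructed in three dimensions.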
in one embodiment, control-data processing subsystem 40 is programmed and adapted to compare retrieved coil transmissions reflecting measured chest wall displacements with stored selective combinations of coil transmissions and chest wall parameters that are associated therewith (e.g., "normal respiration and bending", "normal respiration and coughing"). by way of example, in one embodiment, a first chest wall parameter (cwp 1 ) defined as (or reflecting) "normal respiration and twisting of the torso" is stored in control-data processing subsystem 40. the coil transmissions and data associated with the first chest wall parameter (cwp 1 ) include transmissions t 32 , t 22 , t 34 , and t 24 received by receive coil 24b that can represent displacements x, y, and z. during monitoring of subject 100, similar coil transmissions may be received by receive coil 24b. control-data processing subsystem 40 then compares the detected (or retrieved) transmissions to the stored transmissions and chest wall parameters associated therewith to determine (in real-time) the chest wall movement and, hence, respiratory activity based thereon; in this instance "normal respiration and twisting of the torso". in some embodiments, the signals transmitted by the accelerometer (e.g., spatial parameter signals) are employed with the detected coil transmissions to determine and classify chest wall movement and associated respiratory activity of the monitored subject. in the noted embodiments, each stored chest wall parameter also includes spatial parameter signals associated with the chest wall parameter (e.g., normal respiration and twisting of the torso). according to the invention, control-data processing subsystem 40 is adapted to compare retrieved coil transmissions and spatial parameter signals to the stored transmissions and spatial parameter signals, and the chest wall parameters associated therewith, to determine the chest wall movement and, hence, respiratory activity based thereon.
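the comparison step described above amounts to template matching. the sketch below labels a measured displacement vector with the stored chest wall parameter whose template it most closely matches, using a sum-of-squares distance; the parameter names and displacement vectors are invented for illustration, and the disclosure does not specify this particular distance metric.

```python
def classify_chest_wall(measured, templates):
    """Return the stored chest-wall parameter label whose template
    displacement vector is nearest (least squares) to the measurement."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(templates, key=lambda label: dist2(measured, templates[label]))
```

in a fuller implementation the template vectors would concatenate coil transmissions with spatial (and, below, audio) parameters, so that, e.g., twisting and bending produce separably distinct signatures.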
in some embodiments, the spatial parameter signals are used to generate a spatial model of the subject. the spatial model can be two-dimensional or three-dimensional, and can reflect the real-time orientation and movement of the subject. the spatial model can be displayed to provide the subject or another with a representation of the real-time orientation and movement of the subject. in some embodiments of the invention, control-data processing subsystem 40 is programmed and adapted to determine chest wall movement and respiratory activity based on retrieved coil transmissions, spatial parameter signals, and audio signals. in the noted embodiments, data acquisition subsystem 20 can also include an audio sensor, such as, e.g., a microphone, that is disposed in an anatomically appropriate position on a subject, e.g., proximate the throat. according to the invention, each stored chest wall parameter also includes at least one audio parameter (e.g., > n db, based on the audio signal) that is associated with the chest wall parameter (e.g., normal respiration and coughing). suitable speech and cough parameters (and threshold determinations) are set forth in u.s. patent no. 7,267,652, issued september 11, 2007 , which is incorporated by reference herein in its entirety. upon receipt of coil transmissions, spatial parameter signals, and audio signals, control-data processing subsystem 40 compares the retrieved coil transmissions, spatial parameter signals, and audio signals to the stored transmissions, spatial parameter signals, and audio parameters, and the chest wall parameters associated therewith, to determine the chest wall movement and respiratory activity based thereon (e.g., normal respiration and coughing). in some embodiments of the invention, control-data processing subsystem 40 is programmed and adapted to determine fitness activity based on retrieved coil transmissions, spatial parameter signals, and audio signals. 
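the "> n db" audio parameter described above can be checked with a frame-level rms computation like the one below. the reference level and function name are assumptions for illustration; the incorporated u.s. patent no. 7,267,652 defines the actual speech and cough parameters and thresholds.

```python
import math

def exceeds_db_threshold(samples, n_db, ref=1.0):
    """True when the rms level of an audio frame, in db relative to
    `ref`, exceeds the stored threshold n_db (e.g., a cough gate)."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    if rms == 0.0:
        return False
    return 20.0 * math.log10(rms / ref) > n_db
```

combined with the coil and spatial comparisons above, a frame that crosses the threshold while the chest wall shows a sharp expiratory displacement would support a classification such as "normal respiration and coughing".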
in the noted embodiments, data acquisition subsystem 20 may also include an audio sensor, such as, for example, a microphone, that is disposed in an anatomically appropriate position on a subject (e.g., proximate the throat). upon receipt of coil transmissions, spatial parameter signals, and audio signals, control-data processing subsystem 40 compares the retrieved coil transmissions, spatial parameter signals, and audio signals to the stored transmissions, spatial parameter signals, and audio parameters, and the chest wall parameters associated therewith, to determine a fitness activity of the subject (e.g., running, jogging, stretching, swimming, performing push-ups, performing sit-ups, performing chin-ups, performing arm curls, playing basketball, playing baseball, or playing soccer). control-data processing subsystem 40 is also referred to herein as "processor subsystem," "processing subsystem," and "data processing subsystem." the terms control-data processing subsystem, processor subsystem, processing subsystem, and data processing subsystem are used interchangeably in the present application.
data monitoring subsystem
according to embodiments of the invention, data monitoring subsystem 60 is designed and adapted to receive and, in some embodiments, to selectively monitor coil transmissions or signals (e.g., transmissions t 32 , t 22 , t 34 , and t 24 ) and to display parameters associated therewith (e.g., displacement(s) along a selective axis), and/or a chest wall parameter (e.g., cwp 1 ), and/or a respiratory characteristic (e.g., v t ) or event. data monitoring subsystem 60 is further preferably designed and adapted to display selective subject parameters, characteristics, information, and warnings or alarms. data monitoring subsystem 60 can also be adapted to display data or broadcast data aurally. the aurally presented data can be voice messages, music, or other noises signifying an event. data monitoring subsystem 60 can be adapted to allow headphones or speakers to connect to the data monitoring subsystem, either wireless or wired, to broadcast the aural data. data monitoring subsystem 60 can be adapted to include a display, or to allow a display to connect to the data monitoring subsystem, to display the data. such display can include, for example, a liquid crystal display (lcd), a plasma display, a cathode ray tube (crt) display, a light emitting diode (led) display, or an organic light emitting diode (oled) display. in some embodiments of the invention, data monitoring subsystem 60 is also adapted to receive and, in some embodiments, selectively monitor spatial parameter signals and signals transmitted by additional anatomical and physiological sensors (e.g., signals indicating skin temperature, or spo 2 ) and to display parameters and information associated therewith. the parameters can be associated with an athlete's physical activity. physical or anatomical parameters measured and/or calculated may include, for example, time, location, distance, speed, pace, stride count, stride length, stride rate, and/or elevation.
physiological parameters measured and/or calculated may include, for example, heart rate, respiration rate, blood oxygen level, blood flow, hydration status, calories burned, muscle fatigue, and/or body temperature. in an embodiment, performance parameters may also include mental or emotional parameters such as, for example, stress level or motivation level. mental and emotional parameters may be measured and/or calculated directly or indirectly, either by posing questions to the athlete or by measuring indicators such as, for example, trunk angle or foot strike characteristics while running.

in some embodiments of the invention, data monitoring subsystem 60 includes a local electronic module or local data unit (ldu). the term "local" as used in connection with an ldu is intended to mean that the ldu is disposed close to the electromagnetic coils, such as on or in a wearable garment containing the coils (discussed in detail below).

in some embodiments of the invention, the ldu is preferably adapted to receive and monitor coil transmissions (or signals), to preprocess the coil transmissions, to store the coil transmissions and related data, and to display selective data, parameters, physiological characteristics, and subject information. in some embodiments, the ldu is also adapted to receive and monitor the spatial parameter transmissions (or signals) and additional signals transmitted by additional anatomical and physiological sensors (if employed), to preprocess the signals, to store the signals and related data, and to display selective data, physiological and spatial parameters, physiological characteristics, and subject information via a variety of media, such as a personal digital assistant (pda), a mobile phone, and/or a computer monitor. in some embodiments, the ldu includes a remote monitor or monitoring facility.
in these embodiments, the ldu is further adapted to transmit selective coil and sensor data, physiological parameters and characteristics, spatial parameters, and subject information to the remote monitor or facility.

in some embodiments of the invention, the ldu includes the features and functions of control-data processing subsystem 40 (e.g., an integral control-processing/monitoring subsystem) and, hence, is also adapted to control data acquisition subsystem 20. the ldu is thus adapted to control the paired coils that are employed, to determine selective physiological characteristics and parameters, to assess physiological characteristics and parameters for adverse conditions, and to generate warnings or alarms if adverse characteristics or parameters are present. suitable ldus are described in co-pending international application no. pct/us2005/021433 (pub. no. wo 2006/009830 a2), published january 26, 2006, which is incorporated by reference herein in its entirety.

in some embodiments of the invention, monitoring subsystem 60 includes a separate, remote monitor or monitoring facility. according to embodiments of the invention, the remote monitor or facility is adapted to receive sensor data and information, physiological and spatial parameters, physiological characteristics, and subject information from control-data processing subsystem 40, and to display the selective coil sensor data and information, physiological and spatial parameters, physiological characteristics, and subject information via a variety of media, such as a pda, computer monitor, etc.

data transmission subsystem

according to embodiments of the invention, various communication links and protocols can be employed to transmit control signals to data acquisition subsystem 20 and, hence, the paired coils, and to transmit coil transmissions (or signals) from the paired coils to control-data processing subsystem 40.
various communication links and protocols can likewise be employed to transmit data and information, including coil transmissions (or signals) and related parameters, physiological characteristics, spatial parameters, and subject information, from control-data processing subsystem 40 to data monitoring subsystem 60. in some embodiments of the invention, the communication link between data acquisition subsystem 20 and control-data processing subsystem 40 includes conductive wires or similar direct communication means. in some embodiments, the communication link between data acquisition subsystem 20 and control-data processing subsystem 40, as well as between control-data processing subsystem 40 and data monitoring subsystem 60, is a wireless link. according to embodiments of the invention, data transmission subsystem 50 is programmed and adapted to monitor and control the noted communication links and, hence, transmissions by and between data acquisition subsystem 20, control-data processing subsystem 40, and data monitoring subsystem 60.

in some embodiments of the invention, data acquisition subsystem 20 includes at least one additional physiological sensor (preferably, a plurality of additional physiological sensors) adapted to monitor and record one or more physiological characteristics associated with monitored subject 100. the physiological sensors can include, without limitation, sensors that are adapted to monitor and record electrical activity of the brain, heart, and other muscles (e.g., eeg, ecg, emg), pulse rate, blood oxygen saturation level (e.g., spo2), skin temperature, and core temperature. physiological parameters measured and/or calculated may include, for example, heart rate, respiration rate, blood oxygen level, blood flow, hydration status, calories burned, muscle fatigue, and/or body temperature. exemplary physiological sensors are disclosed in u.s. patent no. 6,551,252, u.s. patent no. 7,267,652, and co-pending u.s. patent application no.
11/764,527, filed june 18, 2007, each of which is incorporated by reference herein in its entirety.

according to exemplary embodiments of the invention, the additional sensors can be disposed in a variety of anatomically appropriate positions on a subject. by way of example, a first sensor (e.g., a pulse rate sensor) can be disposed proximate the heart of subject 100 to monitor pulse rate, and a second sensor (e.g., a microphone) can be disposed proximate the throat of subject 100 to monitor sounds emanating therefrom (e.g., sounds reflecting coughing). as indicated above, data acquisition subsystem 20 can also include one or more audio sensors, such as, for example, a microphone, for monitoring sounds generated by a monitored subject, and a speaker to enable two-way communication by and between the monitored subject and a monitoring station or individual.

according to embodiments of the invention, the paired coils (e.g., electromagnetic coils 22a, 22b, 24a, 24b) and the aforementioned additional sensors can be positioned on or proximate a subject by various suitable means. thus, in some embodiments, the paired coils and/or additional sensors can be directly attached to the subject. according to embodiments of the invention, application of the coils and sensors to the body of subject 100 can be achieved via a large range of adhesive techniques providing appropriate strengths and duration of attachment, such as surgical tape and biocompatible adhesives. in some embodiments, the paired coils, additional sensors, and processing and monitoring systems (e.g., ldus, if employed) are embedded in or carried by a wearable garment or item that can be comfortably worn by a monitored subject. the associated wiring, cabling, and other power and signal transmission apparatuses and/or systems can also be embedded in the wearable garment.
according to embodiments of the invention, the wearable monitoring garment can be one or more of a variety of garments, such as a shirt, vest or jacket, belt, cap, patch, and the like. a suitable wearable monitoring garment (a vest) is illustrated and described in co-pending u.s. patent application no. 61/275,576, filed september 1, 2009, co-pending u.s. patent application no. 12/869,576, filed concurrently herewith, co-pending u.s. patent application no. 61/275,633, filed september 1, 2009, and co-pending u.s. patent application no. 12/869,627, filed concurrently herewith, each of which is incorporated by reference herein in its entirety. additional suitable garments are also disclosed in u.s. patent no. 7,267,652, issued september 11, 2007, u.s. patent no. 6,551,252, issued april 22, 2003, and u.s. patent no. 6,047,203, issued april 4, 2000, each of which is incorporated by reference herein in its entirety.

as set forth in the noted incorporated references, paired coils or magnetometers, and additional sensors, processing and monitoring systems, ldus, and other equipment can be arranged in or carried by the wearable monitoring garment, for example, in open or closed pockets, or attached to the garment, as by sewing, gluing, a hook and pile system (e.g., velcro®, such as that manufactured by velcro, inc.), and the like.

the methods and systems of the invention, described above, thus provide numerous significant advantages over conventional physiology monitoring methods and systems.
among the advantages are the provision of methods and systems that provide (i) accurate, real-time determination of a plurality of physiological characteristics, (ii) accurate determination of a plurality of respiratory parameters and characteristics, (iii) accurate assessment of chest wall movement(s) and the relationship(s) thereof to respiratory activity and respiratory-associated events, such as speaking and coughing, (iv) real-time determination and characterization of respiratory events, and (v) real-time determination and characterization of the orientation and movement of a subject. a further significant advantage is the provision of additional and pertinent data that facilitates three-dimensional modeling of chest wall shape and movement of ambulatory subjects.

another significant advantage of the invention is the provision of systems and associated methods that facilitate evaluation and quantification of ventilatory mechanics (e.g., synchronous and asynchronous movement of the chest wall compartments) and "real-time" three-dimensional modeling of the chest wall. as stated above, this has huge implications in the field of use, as well as applications to specific disease states, such as asthma and copd, and to acute disease states, such as pneumothorax and pulmonary embolism.

another advantage of the invention is the provision of systems for accurately determining tidal volume (vt) and other respiratory characteristics that do not require complex calibration algorithms and associated methods. this similarly has huge implications in the field of use, as well as applications for specific disease states, such as copd.

yet another advantage of the invention is the provision of monitoring systems that allow for measurement of front-to-back separation between magnetometers as well as vertical separation between different sets of magnetometers. this allows the system to separate a desired signal and information from motion artifacts caused by ambulatory motion.
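the last advantage above, separating a breathing signal from ambulatory motion artifacts by exploiting multiple magnetometer separations, can be sketched in its simplest form. the sampling rate, frequencies, and the assumption that one separation channel carries breathing plus motion while a second carries essentially motion alone are all invented for illustration; an actual system would use filtering or adaptive cancellation rather than exact subtraction.

```python
import math

# synthesize 4 s of data at an assumed 50 hz sample rate
n = 200
t = [i / 50.0 for i in range(n)]
breathing = [0.5 * math.sin(2 * math.pi * 0.3 * ti) for ti in t]  # ~18 breaths/min
motion    = [0.8 * math.sin(2 * math.pi * 2.0 * ti) for ti in t]  # gait artifact

# front-to-back separation: breathing plus motion artifact
chest_channel = [b + m for b, m in zip(breathing, motion)]
# vertical separation between magnetometer sets: motion artifact only (assumed)
reference_channel = motion

# common-mode subtraction recovers the breathing component
recovered = [c - r for c, r in zip(chest_channel, reference_channel)]
```

with real sensors the motion artifact would not be identical in both channels, so the reference would be scaled or adaptively filtered before subtraction; the sketch only shows why a second, motion-dominated separation measurement is useful.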
additional advantages and applications of the present invention are apparent with reference to the systems and methods disclosed in u.s. patent application no. 12/869,578, filed concurrently herewith, u.s. patent application no. 12/869,582, filed concurrently herewith, u.s. patent application no. 12/869,576, filed concurrently herewith, u.s. patent application no. 12/869,592, filed concurrently herewith, u.s. patent application no. 12/869,627, filed concurrently herewith, u.s. patent application no. 12/869,625, filed concurrently herewith, and u.s. patent application no. 12/869,586, filed concurrently herewith, each of which is incorporated by reference herein in its entirety.

without departing from the spirit and scope of this invention, one of ordinary skill can make various changes and modifications to the invention to adapt it to various usages and conditions. as such, these changes and modifications are properly, equitably, and intended to be, within the full range of equivalence of the invention.

further embodiments of the invention are mentioned as follows:
1.
a fitness monitoring system for monitoring a subject engaged in a physical activity, the system comprising: a sensor subsystem including a first sensor and a second sensor, wherein the first and second sensors are responsive to changes in distance therebetween, wherein the sensor subsystem is configured to generate and transmit a distance signal representative of the distance between the first and second sensors; a physiological sensor configured to generate and transmit a physiological signal representative of a physiologic parameter of the subject; and a processor subsystem in communication with the sensor subsystem and the physiological sensor, the processor subsystem being configured to receive the distance signal and the physiological signal, wherein the processor subsystem is configured to process the physiological signal to obtain a signal that is representative of a physiological parameter of the subject.
2. the fitness monitoring system of embodiment 1, wherein the first sensor is configured to be secured to the skin of the subject.
3. the fitness monitoring system of embodiment 1, wherein the first sensor is adhered to the skin by a biocompatible adhesive.
4. the fitness monitoring system of embodiment 3, wherein the second sensor is configured to be secured to the skin of the subject.
5. the fitness monitoring system of embodiment 1, wherein the first and second sensors comprise magnetometers.
6. the fitness monitoring system of embodiment 1, wherein the physiological sensor is configured to monitor at least one of electrical activity of the brain, electrical activity of the heart, pulse rate, blood oxygen saturation level, skin temperature, emg, ecg, eeg, and core temperature.
7.
the fitness monitoring system of embodiment 1, further comprising a monitoring subsystem configured to receive the distance signal, wherein the processor subsystem is configured to process the distance signal to obtain a signal that is representative of a respiratory parameter, and wherein the monitoring subsystem is configured to display a representation of the respiratory parameter.
8. the fitness monitoring system of embodiment 7, wherein the processor subsystem comprises a plurality of stored respiratory benchmarks, and wherein the processor subsystem is further configured to compare the respiratory parameter to the plurality of stored respiratory benchmarks and to generate and transmit a status signal in response to a determination that the respiratory parameter corresponds to one of the stored respiratory benchmarks.
9. the fitness monitoring system of embodiment 8, wherein the plurality of stored respiratory benchmarks comprise at least one of adverse fitness states and fitness goals.
10. the fitness monitoring system of embodiment 1, wherein the processor subsystem is further configured to determine a respiratory activity of the subject based on the distance signal and to generate and transmit a respiratory activity signal representative of the respiratory activity.
11. the fitness monitoring system of embodiment 1, wherein the processor subsystem comprises a plurality of stored physiological benchmarks, and wherein the processor subsystem is further configured to compare the physiological parameter to the stored physiological benchmarks and to generate and transmit a status signal in response to a determination that the physiological parameter corresponds to one of the stored physiological benchmarks.
12.
a fitness monitoring system for monitoring a subject engaged in a physical activity, the system comprising: a sensor subsystem comprising: a first sensor and a second sensor, wherein the first and second sensors are responsive to changes in distance therebetween, wherein the sensor subsystem is configured to generate and transmit a distance signal representative of the distance between the first and second sensors; and a third sensor, wherein the third sensor is a spatial sensor configured to detect movement of the subject, wherein the sensor subsystem is configured to generate and transmit a spatial signal representative of a movement of a body part of the subject; and a processor subsystem in communication with the sensor subsystem, the processor subsystem being configured to receive the distance signal and the spatial signal.
13. the fitness monitoring system of embodiment 12, wherein the first and second sensors comprise magnetometers.
14. the fitness monitoring system of embodiment 12, wherein the spatial sensor is configured to detect the orientation of a body part of the subject, and wherein the spatial signal includes information representative of the orientation of the body part.
15. the fitness monitoring system of embodiment 12, wherein the first and second sensors are configured to be secured directly to the subject's skin.
16. the fitness monitoring system of embodiment 15, wherein the first and second sensors are configured to be secured to the skin by surgical tape.
17. the fitness monitoring system of embodiment 15, wherein the first and second sensors are configured to be secured to the skin by a biocompatible adhesive.
18. the fitness monitoring system of embodiment 12, wherein the sensor subsystem comprises a plurality of sensors responsive to changes in distance therebetween.
19.
the fitness monitoring system of embodiment 18, wherein the sensor subsystem is configured to generate and transmit a plurality of distance signals, and wherein each distance signal is representative of a distance between at least two magnetometers.
20. the fitness monitoring system of embodiment 12, wherein the spatial sensor includes at least one of an optical encoder, a proximity switch, a hall effect switch, a laser interferometry system, an inertial sensor, and a global positioning system.
21. the fitness monitoring system of embodiment 12, further comprising a monitoring subsystem, wherein the processor subsystem is configured to process the distance signal to obtain a signal that is representative of a respiratory parameter, and wherein the monitoring subsystem is configured to display a representation of the respiratory parameter.
22. the fitness monitoring system of embodiment 21, wherein the processor subsystem comprises a plurality of stored respiratory benchmarks, and wherein the processor subsystem is further configured to compare the respiratory parameter to the plurality of stored respiratory benchmarks and to generate and transmit a status signal in response to a determination that the distance signal corresponds to one of the stored respiratory benchmarks.
23. the fitness monitoring system of embodiment 22, wherein the plurality of stored respiratory benchmarks comprises at least one of adverse fitness states and fitness goals.
24. the fitness monitoring system of embodiment 12, wherein the processor subsystem is further configured to determine a respiratory activity of the subject based on the distance signal, and to generate and transmit a respiratory activity signal representative of the respiratory activity.
25.
the fitness monitoring system of embodiment 12, wherein the processor subsystem comprises a plurality of stored spatial benchmarks, and wherein the processor subsystem is further configured to compare the spatial signal to the plurality of stored spatial benchmarks, and to generate and transmit a status signal in response to a determination that the spatial signal corresponds to one of the stored spatial benchmarks.
26. the fitness monitoring system of embodiment 25, wherein the plurality of stored spatial benchmarks comprises at least one of adverse fitness states and fitness goals.
27. the fitness monitoring system of embodiment 12, wherein the processor subsystem is further configured to determine a fitness activity of the subject based on the spatial signal, and to generate and transmit a fitness activity signal representative of the fitness activity.
28. the fitness monitoring system of embodiment 14, wherein the processor subsystem is further configured to generate a three-dimensional spatial model of the orientation and movement of the subject based on the spatial signal.
29. a method for monitoring a subject engaged in a physical activity, the method comprising: generating a distance signal representative of the distance between a first sensor and a second sensor and transmitting the respiratory signal to a processor subsystem, wherein the respiratory signal is generated by a sensor subsystem, wherein the first and second sensors are responsive to changes in distance therebetween; generating a spatial signal representative of an orientation of a body part of the subject and transmitting the spatial signal to the processor subsystem; and receiving the respiratory signal and the spatial signal at the processor subsystem.
30. the method of embodiment 29, further comprising generating a physiological signal representative of a physiological parameter of the subject and transmitting the physiological signal to a processor subsystem.
31.
the method of embodiment 29, further comprising displaying a representation of the respiratory parameter.
32. the method of embodiment 29, further comprising: processing the respiratory signal to obtain a signal which is representative of a respiratory parameter of the subject, and comparing the respiratory parameter to a plurality of stored respiratory benchmarks; and generating and transmitting a status signal in response to a determination that the respiratory parameter corresponds to one of the stored respiratory benchmarks.
33. the method of embodiment 32, wherein the plurality of stored respiratory benchmarks comprise at least one of adverse fitness states and fitness goals.
34. the method of embodiment 29, further comprising: determining a respiratory activity of the subject based on the respiratory signal; and generating and transmitting a respiratory activity signal representative of the respiratory activity.
35. the method of embodiment 29, further comprising: comparing the orientation of the body part to a plurality of stored spatial benchmarks; and generating and transmitting a status signal in response to a determination that the orientation of the body part corresponds to one of the stored spatial benchmarks.
36. the method of embodiment 35, wherein the plurality of stored spatial benchmarks comprises at least one of adverse fitness states and fitness goals.
37. the method of embodiment 29, further comprising: determining at least one fitness activity of the subject based on the spatial signal; and generating and transmitting a spatial activity signal representative of the spatial activity.
38. the method of embodiment 29, further comprising generating a three-dimensional spatial model of the orientation and movement of the subject based on the spatial signal.
39.
the method of embodiment 30, further comprising: comparing the physiological signal to a plurality of stored physiological benchmarks; and generating and transmitting a status signal in response to a determination that the physiological signal corresponds to one of the stored physiological benchmarks.
40. the method of embodiment 39, wherein the plurality of stored physiological benchmarks comprises at least one of adverse fitness states and fitness goals.
41. the method of embodiment 29, wherein the spatial signal further represents movement of a body part of the subject.
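embodiments 8, 22, and 32 above describe comparing a computed respiratory parameter against stored benchmarks (adverse fitness states and fitness goals) and emitting a status signal on a match. a minimal sketch of that comparison, with benchmark names and threshold values invented for illustration:

```python
# hypothetical stored benchmarks: an adverse state triggers above a minimum
# rate, a fitness goal triggers at or below a maximum rate (values invented).
RESPIRATORY_BENCHMARKS = {
    "adverse: hyperventilation": {"rate_min": 30.0},     # breaths/min
    "goal: recovery-zone breathing": {"rate_max": 16.0},
}

def check_benchmarks(resp_rate):
    """return status signals for every stored benchmark the
    computed respiratory rate corresponds to."""
    status = []
    for name, spec in RESPIRATORY_BENCHMARKS.items():
        if "rate_min" in spec and resp_rate >= spec["rate_min"]:
            status.append(name)
        if "rate_max" in spec and resp_rate <= spec["rate_max"]:
            status.append(name)
    return status

print(check_benchmarks(33.0))  # -> ['adverse: hyperventilation']
```

the same pattern applies to the spatial and physiological benchmarks of embodiments 25 and 39: compute a parameter, test it against each stored benchmark, and transmit a status signal for each match.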
relevant_id: 006-211-996-899-165
earliest_claim_jurisdiction: JP
jurisdiction: [ "KR", "US", "EP" ]
ipcr_codes: D01D4/02, D01D5/00, D04H1/56, D01D5/08, D04H3/16, B29C47/10, D01D5/098
earliest_claim_date: 2008-05-28
earliest_claim_year: 2008
ipcr_first_three_chars: [ "D01", "D04", "B29" ]
spinning apparatus and apparatus and process for manufacturing nonwoven fabric
a spinning apparatus comprising one or more exits for extruding liquid, and an exit for ejecting gas, located upstream of the exits for extruding liquid, wherein the apparatus comprises a columnar hollow for liquid, in which the exit for extruding liquid forms one end of the columnar hollow; the apparatus comprises a columnar hollow for gas having the exit for ejecting gas; a virtual column for liquid, extended from the columnar hollow for liquid, is adjacent to a virtual column for gas, extended from the columnar hollow for gas; the central axis of the columnar hollow for liquid is parallel to the central axis of the columnar hollow for gas; and there exists only one straight line having the shortest distance between an outer boundary of the cross-section of the columnar hollow for gas and an outer boundary of the cross-section of the columnar hollow for liquid, is disclosed.
1. a spinning apparatus comprising one or more exits for extruding liquid, which are capable of extruding a spinning liquid, and an exit for ejecting gas, which is located upstream of each of the exits for extruding liquid and is capable of ejecting a gas, wherein (1) the spinning apparatus comprises a columnar hollow for liquid (hl), in which the exit for extruding liquid forms one end of the columnar hollow for liquid, (2) the spinning apparatus comprises a columnar hollow for gas (hg) of which one end is the exit for ejecting gas, (3) a virtual column for liquid (hvl) which is extended from the columnar hollow for liquid (hl) is located adjacent to a virtual column for gas (hvg) which is extended from the columnar hollow for gas (hg), (4) a central axis of an extruding direction in the columnar hollow for liquid (hl) is parallel to a central axis of an ejecting direction in the columnar hollow for gas (hg), and (5) when the columnar hollow for gas and the columnar hollow for liquid are cross-sectioned with a plane perpendicular to the central axis of the columnar hollow for gas, there exists only one straight line having the shortest distance between an outer boundary of the cross-section of the columnar hollow for gas (hg) and an outer boundary of the cross-section of the columnar hollow for liquid (hl).
2. the spinning apparatus according to claim 1, wherein the spinning apparatus has one exit for extruding liquid.
3. an apparatus for manufacturing a nonwoven fabric, comprising the spinning apparatus according to claim 2 and a fibers collection means.
4.
a process for manufacturing a nonwoven fabric comprising the steps of: extruding a spinning liquid from a spinning apparatus for manufacturing a nonwoven fabric, wherein the spinning liquid is fiberized as fibers; and accumulating the fiberized fibers on a fibers collection means to obtain a nonwoven fabric, wherein said spinning apparatus comprises one or more exits for extruding liquid, the one or more exits for extruding liquid are capable of extruding the spinning liquid, and an exit for ejecting gas, the exit for ejecting gas is located upstream of each of the exits for extruding liquid and is capable of ejecting a gas, wherein (1) the spinning apparatus comprises a columnar hollow for liquid (hl), wherein one of the one or more exits for extruding liquid forms one end of the columnar hollow for liquid, (2) the spinning apparatus comprises a columnar hollow for gas (hg), wherein one end of the columnar hollow for gas is the exit for ejecting gas, (3) a virtual column for liquid (hvl) which is extended from the columnar hollow for liquid (hl) is located adjacent to a virtual column for gas (hvg) which is extended from the columnar hollow for gas (hg), (4) a central axis of an extruding direction in the columnar hollow for liquid (hl) is parallel to a central axis of an ejecting direction in the columnar hollow for gas (hg), (5) when the columnar hollow for gas and the columnar hollow for liquid are cross-sectioned with a plane perpendicular to the central axis of the columnar hollow for gas, there exists only one straight line having a shortest distance between an outer boundary of the cross-section of the columnar hollow for gas (hg) and an outer boundary of the cross-section of the columnar hollow for liquid (hl), and (6) a gas having a flow rate of 100 m/sec. or more is ejected from the exit for ejecting gas of the spinning apparatus.
5.
a spinning apparatus comprising two or more exits for extruding liquid, the two or more exits for extruding liquid are capable of extruding a spinning liquid, and an exit for ejecting gas, the exit for ejecting gas is located upstream of each of the two or more exits for extruding liquid and is capable of ejecting a gas, wherein (1) the spinning apparatus comprises columnar hollows for liquid, in which each of the two or more exits for extruding liquid forms one end of the corresponding columnar hollow for liquid, (2) the spinning apparatus comprises a columnar hollow for gas of which one end is the exit for ejecting gas, (3) a virtual column for liquid extends from each of the columnar hollows for liquid, each virtual column for liquid is located adjacent to a virtual column for gas which is extended from the columnar hollow for gas, (4) each central axis of an extruding direction in each of the columnar hollows for liquid is parallel to a central axis of an ejecting direction in the columnar hollow for gas, and (5) when the columnar hollow for gas and the columnar hollows for liquid are cross-sectioned with a plane perpendicular to the central axis of the columnar hollow for gas, there exists only one straight line having a shortest distance between an outer boundary of the cross-section of the columnar hollow for gas and an outer boundary of the cross-section of each of the columnar hollows for liquid, at any combination of the columnar hollow for gas and each of the columnar hollows for liquid.
6. the spinning apparatus according to claim 5, wherein an outer shape of each exit for extruding liquid is circular.
7. the spinning apparatus according to claim 5, wherein an outer shape of the exit for ejecting gas is circular.
8. an apparatus for manufacturing a nonwoven fabric, comprising the spinning apparatus according to claim 5 and a fibers collection means.
9.
a process for manufacturing a nonwoven fabric comprising the steps of: extruding a spinning liquid from a spinning apparatus for manufacturing a nonwoven fabric, wherein the spinning liquid is fiberized as fibers; and accumulating the fiberized fibers on a fibers collection means to obtain a nonwoven fabric, wherein said spinning apparatus comprises two or more exits for extruding liquid, the two or more exits for extruding liquid are capable of extruding the spinning liquid, and an exit for ejecting gas, the exit for ejecting gas is located upstream of each of the two or more exits for extruding liquid and is capable of ejecting a gas, wherein (1) the spinning apparatus comprises columnar hollows for liquid, in which each of the two or more exits for extruding liquid forms one end of a corresponding columnar hollow for liquid, (2) the spinning apparatus comprises a columnar hollow for gas, wherein one end of the columnar hollow for gas is the exit for ejecting gas, (3) a virtual column for liquid extends from each of the columnar hollows for liquid, each virtual column for liquid is located adjacent to a virtual column for gas which extends from the columnar hollow for gas, (4) each central axis of an extruding direction in each of the columnar hollows for liquid is parallel to a central axis of an ejecting direction in the columnar hollow for gas, (5) when the columnar hollow for gas and the columnar hollows for liquid are cross-sectioned with a plane perpendicular to the central axis of the columnar hollow for gas, there exists only one straight line having a shortest distance between an outer boundary of the cross-section of the columnar hollow for gas and an outer boundary of the cross-section of each of the columnar hollows for liquid, at any combination of the columnar hollow for gas and each of the columnar hollows for liquid, and (6) the spinning liquid is extruded from the exits for extruding liquid under two or more different extruding conditions.
10.
the process according to claim 9 , wherein two or more types of spinning liquids different in concentration are extruded. 11. the process according to claim 9 , wherein two or more types of spinning liquids containing different polymers are extruded. 12. the process according to claim 9 , wherein two or more types of spinning liquids containing different solvents are extruded.
cross reference to related applications

this application claims priority to japanese patent application numbers: 2008-139948, filed may 28, 2008; 2008-154679, filed jun. 12, 2008; and 2008-204830, filed aug. 7, 2008. the entire contents of each of the prior applications are incorporated herein by reference.

technical field

the present invention relates to a spinning apparatus, an apparatus comprising the same for manufacturing a nonwoven fabric, and a process for manufacturing a nonwoven fabric using the nonwoven fabric manufacturing apparatus.

background art

fibers having a small fiber diameter can impart various excellent properties, such as a separating property, a liquid-holding capacity, a wiping property, a shading property, an insulating property, or flexibility, to a nonwoven fabric, and therefore, it is preferable that the fibers forming a nonwoven fabric have a small fiber diameter. electrospinning is known as a process for manufacturing such fibers having a small fiber diameter. in this process, a spinning liquid is extruded from a nozzle and, at the same time, an electrical field is applied to the extruded spinning liquid, which draws the liquid and thins its diameter; the resulting fibers are collected directly on a fibers collection means to form a nonwoven fabric. electrospinning can produce a nonwoven fabric consisting of fibers having an average fiber diameter of 1 μm or less. however, a high voltage must be applied to the nozzle or the fibers collection means to generate the electrical field, so a complicated apparatus is required and the process wastes energy.
to solve these problems, patent literature 1 proposes, as shown in fig. 2, "an apparatus for forming a non-woven mat of nanofibers by using a pressurized gas stream includes parallel, spaced apart first (12), second (22), and third (32) members, each having a supply end (14, 24, 34) and an opposing exit end (16, 26, 36). the second member (22) is adjacent to the first member (12). the exit end (26) of the second member (22) extends beyond the exit end (16) of the first member (12). the first (12) and second (22) members define a first supply slit (18). the third member (32) is located adjacent to the first member (12) on the opposite side of the first member (12) from the second member (22). the first (12) and third (32) members define a first gas slit (38), and the exit ends (16, 26, 36) of the first (12), second (22) and third (32) members define a gas jet space (20). a method for forming a nonwoven mat of nanofibers by using a pressurized gas stream is also included." this apparatus does not require the application of a high voltage, and therefore, can solve the problems. however, because the flat first, second, and third members are arranged parallel to each other and the pressurized gas stream acts on a sheet-like spinning liquid, the spinning liquid is unlikely to take a fibrous form and the nonwoven fabric is expected to contain many droplets; even if fibers are obtained, their diameter would be thick.
as a similar spinning apparatus, patent literature 2 proposes "an apparatus for forming nanofibers by using a pressurized gas stream comprising a center tube, a first supply tube that is positioned concentrically around and apart from the center tube, a middle gas tube positioned concentrically around and apart from the first supply tube, and a second supply tube positioned concentrically around and apart from the middle gas tube, wherein the center tube and first supply tube form a first annular column, the middle gas tube and the first supply tube form a second annular column, the middle gas tube and second supply tube form a third annular column, and the tubes are positioned so that first and second gas jet spaces are created between the lower ends of the center tube and first supply tube, and the middle gas tube and second supply tube, respectively". this apparatus also does not require the application of a high voltage, and can solve the problems. however, because the pressurized gas stream is applied to an annularly extruded spinning liquid, spinning cannot be performed stably; the spinning liquid is unlikely to take a fibrous form and the nonwoven fabric contains many droplets.

citation list

[patent literature 1] japanese translation publication (kohyo) no. 2005-515316 (abstract, table 1, and the like)
[patent literature 2] u.s. pat. no. 6,520,425 (abstract, fig. 2, and the like)

summary of invention

technical problem

an object of the present invention is to solve the above problems, that is, to provide a simple spinning apparatus capable of producing a nonwoven fabric consisting of fibers having a small fiber diameter, an apparatus for manufacturing a nonwoven fabric comprising this spinning apparatus, and a process for manufacturing a nonwoven fabric using this apparatus for manufacturing a nonwoven fabric.
another object of the present invention is to provide a simple and energy-efficient spinning apparatus capable of producing, with a high productivity, a nonwoven fabric having an excellent uniformity and consisting of fibers having a small fiber diameter, and an apparatus for manufacturing a nonwoven fabric comprising this spinning apparatus. still another object of the present invention is to provide a process for manufacturing, with a low energy consumption and a high productivity, a nonwoven fabric having an excellent uniformity in which two or more types of fibers having a small fiber diameter and differing in fiber diameter, resin composition, or the like are uniformly mixed. the present invention also relates to a process for manufacturing nonwoven fabrics ranging from thin to thick.

solution to problem

the present invention relates to
[1] a spinning apparatus comprising one or more exits for extruding liquid, which are capable of extruding a spinning liquid, and an exit for ejecting gas, which is located upstream of each of the exits for extruding liquid and is capable of ejecting a gas, wherein
(1) the spinning apparatus comprises a columnar hollow for liquid (hl), in which the exit for extruding liquid forms one end of the columnar hollow for liquid,
(2) the spinning apparatus comprises a columnar hollow for gas (hg) of which one end is the exit for ejecting gas,
(3) a virtual column for liquid (hvl) which is extended from the columnar hollow for liquid (hl) is located adjacent to a virtual column for gas (hvg) which is extended from the columnar hollow for gas (hg),
(4) a central axis of an extruding direction in the columnar hollow for liquid (hl) is parallel to a central axis of an ejecting direction in the columnar hollow for gas (hg), and
(5) when the columnar hollow for gas and the columnar hollow for liquid are cross-sectioned with a plane perpendicular to the central axis of the columnar hollow for gas, there
exists only one straight line having the shortest distance between an outer boundary of the cross-section of the columnar hollow for gas (hg) and an outer boundary of the cross-section of the columnar hollow for liquid (hl),
[2] the spinning apparatus of [1], wherein the spinning apparatus has one exit for extruding liquid,
[3] an apparatus for manufacturing a nonwoven fabric, characterized by comprising the spinning apparatus of [2] and a fibers collection means,
[4] a process for manufacturing a nonwoven fabric, characterized by using the apparatus of [3], and ejecting a gas having a flow rate of 100 m/sec. or more from the exit for ejecting gas of the spinning apparatus,
[5] the spinning apparatus of [1], wherein the spinning apparatus has two or more exits for extruding liquid, and
(1) the spinning apparatus comprises columnar hollows for liquid, in which each of the exits for extruding liquid forms one end of the corresponding columnar hollow for liquid,
(2) the spinning apparatus comprises the columnar hollow for gas of which one end is the exit for ejecting gas,
(3) each virtual column for liquid which is extended from each of the columnar hollows for liquid is located adjacent to the virtual column for gas which is extended from the columnar hollow for gas,
(4) each central axis of the extruding direction in each of the columnar hollows for liquid is parallel to the central axis of the ejecting direction in the columnar hollow for gas, and
(5) when the columnar hollow for gas and the columnar hollows for liquid are cross-sectioned with a plane perpendicular to the central axis of the columnar hollow for gas, there exists only one straight line having the shortest distance between the outer boundary of the cross-section of the columnar hollow for gas and an outer boundary of the cross-section of each of the columnar hollows for liquid, at any combination of the columnar hollow for gas and each of the columnar hollows for liquid,
[6] the spinning apparatus of [5],
characterized in that the outer shape of each exit for extruding liquid is circular,
[7] the spinning apparatus of [5] or [6], characterized in that the outer shape of the exit for ejecting gas is circular,
[8] an apparatus for manufacturing a nonwoven fabric, characterized by comprising the spinning apparatus of any one of [5] to [7] and a fibers collection means,
[9] a process for manufacturing a nonwoven fabric, characterized by using the apparatus of [8],
[10] a process for manufacturing a nonwoven fabric, characterized by using the apparatus of [8], and comprising the steps of extruding a spinning liquid from the exits for extruding liquid under two or more different extruding conditions to be fiberized, and accumulating the fiberized fibers on the fibers collection means to obtain a nonwoven fabric,
[11] the process of [10], characterized by extruding two or more types of spinning liquids different in concentration,
[12] the process of [10], characterized by extruding two or more types of spinning liquids containing different polymers, and
[13] the process of [10], characterized by extruding two or more types of spinning liquids containing different solvents.

advantageous effects of invention

the spinning apparatus of [1] according to the present invention is a simple and energy-efficient apparatus capable of producing a nonwoven fabric consisting of fibers having a small fiber diameter.
the spinning apparatus of [2] according to the present invention is “a spinning apparatus comprising an exit for extruding liquid, which is capable of extruding a spinning liquid, and an exit for ejecting gas, which is located upstream of each of the exits for extruding liquid and is capable of ejecting a gas, wherein (1) the spinning apparatus comprises a columnar hollow for liquid (hl), in which the exit for extruding liquid forms one end of the columnar hollow for liquid,(2) the spinning apparatus comprises a columnar hollow for gas (hg) of which one end is the exit for ejecting gas,(3) a virtual column for liquid (hvl) which is extended from the columnar hollow for liquid (hl) is located adjacent to a virtual column for gas (hvg) which is extended from the columnar hollow for gas (hg),(4) a central axis of an extruding direction in the columnar hollow for liquid (hl) is parallel to a central axis of an ejecting direction in the columnar hollow for gas (hg), and(5) when the columnar hollow for gas and the columnar hollow for liquid are cross-sectioned with a plane perpendicular to the central axis of the columnar hollow for gas, there exists only one straight line having the shortest distance between an outer boundary of the cross-section of the columnar hollow for gas (hg) and an outer boundary of the cross-section of the columnar hollow for liquid (hl)”. in this apparatus, the spinning liquid extruded from the exit for extruding liquid is adjacent and parallel to the gas ejected from the exit for ejecting gas, and a shearing action of the gas and the accompanying airstream is single-linearly exerted on the spinning liquid, and therefore, fibers of which the diameter is thinned can be spun. this spinning apparatus is a simple and energy-efficient apparatus, because the application of a high voltage to the spinning liquid as well as the heating of the spinning liquid and the gas is not required. 
the apparatus of [3] for manufacturing a nonwoven fabric, according to the present invention, comprises the fibers collection means, and therefore, fibers of which the diameter is thinned can be accumulated thereon to produce a nonwoven fabric. in the process of [4] according to the present invention, when a gas having a flow rate of 100 m/sec. or more is ejected, generation of droplets can be avoided, and a nonwoven fabric comprising fibers of which the diameter is thinned can be efficiently produced. the spinning apparatus of [5] according to the present invention is “a spinning apparatus comprising two or more exits for extruding liquid, which are capable of extruding a spinning liquid, and an exit for ejecting gas, which is located upstream of each of the exits for extruding liquid and is capable of ejecting a gas, wherein (1) the spinning apparatus comprises columnar hollows for liquid, in which each of the exits for extruding liquid forms one end of the corresponding columnar hollow for liquid,(2) the spinning apparatus comprises the columnar hollow for gas of which one end is the exit for ejecting gas,(3) each virtual column for liquid which is extended from each of the columnar hollows for liquid is located adjacent to the virtual column for gas which is extended from the columnar hollow for gas,(4) each central axis of the extruding direction in each of the columnar hollows for liquid is parallel to the central axis of the ejecting direction in the columnar hollow for gas, and(5) when the columnar hollow for gas and the columnar hollows for liquid are cross-sectioned with a plane perpendicular to the central axis of the columnar hollow for gas, there exists only one straight line having the shortest distance between the outer boundary of the cross-section of the columnar hollow for gas and an outer boundary of the cross-section of each of the columnar hollows for liquid, at any combination of the columnar hollow for gas and each of the columnar hollows for 
liquid”. in this apparatus, each of the spinning liquids extruded from each of the exits for extruding liquid is independently adjacent and parallel to the gas ejected from the exit for ejecting gas, and the shearing action of the gas and the accompanying airstream is independently and single-linearly exerted on each of the spinning liquids, and therefore, fibers of which the diameter is thinned can be spun. this spinning apparatus is a simple and energy-efficient apparatus, because the application of a high voltage to each spinning liquid is not required. further, because the spinning liquids extruded from two or more exits for extruding liquid can be fiberized by the gas ejected from only one exit for ejecting gas, the amount of the gas can be reduced, and as a result, the scattering of fibers can be avoided, and a nonwoven fabric having an excellent uniformity can be produced with a high productivity. furthermore, this spinning apparatus is an energy-efficient apparatus, because the amount of the gas can be reduced, and a high-capacity suction apparatus is not required. in the spinning apparatus of [6] according to the present invention, because the outer shape of each of the exits for extruding liquid is circular, the shearing action of the gas ejected from the exit for ejecting gas and the accompanying airstream can be efficiently and single-linearly exerted on each cylindrical spinning liquid extruded from each of the exits for extruding liquid, and fibers of which the diameter is thinned can be easily spun. 
in the spinning apparatus of [7] according to the present invention, because the outer shape of the exit for ejecting gas is circular, wherever each exit for extruding liquid is arranged with respect to the exit for ejecting gas, each spinning liquid extruded from each exit for extruding liquid may be independently and single-linearly subjected to the shearing action of the gas ejected from the exit for ejecting gas and the accompanying airstream to easily spin fibers of which the diameter is thinned. the apparatus of [8] for manufacturing a nonwoven fabric, according to the present invention, comprises the fibers collection means, and therefore, fibers of which the diameter is thinned can be accumulated thereon to produce a nonwoven fabric with a high productivity. in the process of [9] or [10] according to the present invention, each of the spinning liquids extruded from each of the exits for extruding liquid is independently adjacent and parallel to the gas ejected from the exit for ejecting gas, and the shearing action of the gas and the accompanying airstream is independently and single-linearly exerted on each of the spinning liquids, and therefore, fibers of which the diameter is thinned can be spun. further, because the spinning liquids extruded from two or more exits for extruding liquid can be fiberized by the gas ejected from only one exit for ejecting gas, the amount of the gas can be reduced, and as a result, the scattering of fibers can be avoided, and a nonwoven fabric having an excellent uniformity can be produced with a high productivity. in this regard, this spinning apparatus is an energy-efficient apparatus, because the amount of the gas can be reduced, and a high-capacity suction apparatus as well as the application of a high voltage to each spinning liquid is not required.
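the gas savings claimed above can be put in rough numbers. the sketch below (python) computes the volumetric gas consumption q = v·a of a single exit for ejecting gas; the 100 m/sec. figure is the lower bound stated for process [4], while the 1 mm² exit area is a hypothetical value chosen from within the preferred range, not a dimension given in the source:

```python
def volumetric_flow_l_per_min(velocity_m_s, exit_area_mm2):
    """volumetric gas consumption q = v * a, converted to litres per minute"""
    area_m2 = exit_area_mm2 * 1e-6        # mm^2 -> m^2
    q_m3_per_s = velocity_m_s * area_m2   # m^3/s
    return q_m3_per_s * 1000.0 * 60.0     # m^3/s -> litres per minute

# 100 m/sec. (the lower bound of process [4]) through an assumed 1 mm^2 gas exit:
print(round(volumetric_flow_l_per_min(100.0, 1.0), 3))  # -> 6.0 (litres per minute)
```

because one exit for ejecting gas serves two or more exits for extruding liquid in the apparatus of [5], the liquid throughput can be multiplied without multiplying this gas figure, which is the point made in the paragraph above.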
furthermore, nonwoven fabrics ranging from thin to thick can be produced, because the amount of the gas can be reduced and the suction need not be enhanced. still furthermore, because one or more spinning liquids are extruded from the exits for extruding liquid under two or more different extruding conditions to be fiberized in the process of [10] according to the present invention, a nonwoven fabric having an excellent uniformity in which two or more types of fibers differing in fiber diameter, resin composition, or the like are uniformly mixed can be produced. in the process of [11] according to the present invention, a nonwoven fabric having an excellent uniformity in which two or more types of fibers different in fiber diameter are uniformly mixed can be produced by extruding two or more types of spinning liquid different in concentration. in the process of [12] according to the present invention, a nonwoven fabric having an excellent uniformity in which two or more types of fibers different in resin composition are uniformly mixed can be produced by extruding two or more types of spinning liquid containing different polymers. in the process of [13] according to the present invention, a nonwoven fabric having an excellent uniformity in which two or more types of fibers different in fiber diameter are uniformly mixed can be produced by extruding two or more types of spinning liquid containing different solvents.

brief description of drawings

fig. 1(a) is an enlarged perspective view showing the tip portion of an embodiment of the spinning apparatus of the present invention. fig. 1(b) is a cross-sectional view taken along plane c in fig. 1(a).
fig. 2 is a cross-sectional view of a conventional spinning apparatus.
fig. 3 is a cross-sectional plane view showing the arrangement of the nozzle for extruding liquid and the nozzle for ejecting gas used in comparative example 1.
fig. 4 is an enlarged perspective view showing the tip portion of another embodiment of the spinning apparatus of the present invention.
fig. 5(a) is a cross-sectional plane view of an embodiment, taken along the plane perpendicular to the central axis of the columnar hollow for gas (a cross-sectional plane view taken along plane c in fig. 4). figs. 5(b) to 5(e) are cross-sectional plane views of other embodiments, taken along the same plane.
figs. 6(a) to 6(c) are cross-sectional plane views of further embodiments, taken along the plane perpendicular to the central axis of the columnar hollow for gas.

description of embodiments

the spinning apparatus of the present invention will be explained with reference to fig. 1(a), an enlarged perspective view showing the tip portion of an embodiment of the spinning apparatus of the present invention, and fig. 1(b), a cross-sectional view taken along plane c in fig. 1(a).
the spinning apparatus of the present invention contains a single nozzle for extruding liquid (nl) having, at one end thereof, an exit for extruding liquid (el) capable of extruding a spinning liquid, and a single nozzle for ejecting gas (ng) having, at one end thereof, an exit for ejecting gas (eg) capable of ejecting a gas; the outer wall of the former nozzle (nl) is directly contacted with the outer wall of the latter nozzle (ng); and the exit for ejecting gas (eg) of the nozzle for ejecting gas (ng) is located upstream of the exit for extruding liquid (el). the nozzle for extruding liquid (nl) has a columnar hollow for liquid (hl) of which one end is the exit for extruding liquid (el), and the nozzle for ejecting gas (ng) has a columnar hollow for gas (hg) of which one end is the exit for ejecting gas (eg). a virtual column for liquid (hvl) which is extended from the columnar hollow for liquid (hl) is located adjacent to a virtual column for gas (hvg) which is extended from the columnar hollow for gas (hg), and the distance between these virtual columns corresponds to the sum of the wall thickness of the nozzle for extruding liquid (nl) and the wall thickness of the nozzle for ejecting gas (ng). the central axis of the extruding direction (al) of the columnar hollow for liquid (hl) is parallel to the central axis of the ejecting direction (ag) of the columnar hollow for gas (hg). as shown in fig. 1(b), a cross-sectional view taken along plane c perpendicular to the central axis of the columnar hollow for gas (hg), the outer shape of a cross-section of the columnar hollow for gas (hg) and the outer shape of a cross-section of the columnar hollow for liquid (hl) are circular, and only a single straight line (l1) having the shortest distance between the outer boundaries of these cross-sections can be drawn. in this spinning apparatus as shown in fig.
1, when a spinning liquid and a gas are supplied to the nozzle for extruding liquid (nl) and the nozzle for ejecting gas (ng), respectively, the spinning liquid flows through the columnar hollow for liquid (hl) and is extruded from the exit for extruding liquid (el) in the axis direction of the columnar hollow for liquid (hl), and simultaneously, the gas flows through the columnar hollow for gas (hg) and is ejected from the exit for ejecting gas (eg) in the axis direction of the columnar hollow for gas (hg). the ejected gas is adjacent to the extruded spinning liquid, the ejecting direction of the gas is parallel to the extruding direction of the spinning liquid, and there exists only a single point having the shortest distance between the ejected gas and the extruded spinning liquid on plane c; that is, the spinning liquid is single-linearly subjected to a shearing action of the gas and the accompanying airstream. therefore, the spinning liquid is spun in the axis direction of the columnar hollow for liquid (hl) while its diameter is thinned, and simultaneously, the spinning liquid is fiberized by evaporation of the solvent contained in the spinning liquid. as described above, the spinning apparatus as shown in fig. 1 does not require the application of a high voltage to the spinning liquid, or the heating of the spinning liquid and the gas, and is a simple and energy-efficient apparatus. the nozzle for extruding liquid (nl) may be any nozzle capable of extruding a spinning liquid, and the shape of the exit for extruding liquid (el) is not particularly limited. the shape of the exit for extruding liquid (el) may be, for example, circular, oval, elliptical, or polygonal (such as triangular, quadrangular, or hexagonal), and is preferably circular, because the shearing action of the gas and the accompanying airstream can then be single-linearly exerted on the spinning liquid, and generation of droplets can be avoided.
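the single shortest line (l1) between the two circular cross-sections can be illustrated numerically. a minimal sketch (python) with purely hypothetical dimensions (the source gives no radii or wall thicknesses): for two side-by-side, non-overlapping circles the shortest boundary-to-boundary distance is the centre distance minus both radii, attained at exactly one point pair on the line of centres:

```python
import math

def circle_gap(c1, r1, c2, r2):
    """shortest distance between the boundaries of two non-overlapping,
    non-concentric circles; the minimising segment lies on the line of
    centres and is unique, matching the single straight line (l1) above"""
    return math.dist(c1, c2) - r1 - r2

# hypothetical cross-section: gas hollow of radius 1.5 mm, liquid hollow of
# radius 0.5 mm, centres 2.4 mm apart, so the two nozzle walls between the
# boundaries sum to 0.4 mm (all dimensions illustrative only)
print(round(circle_gap((0.0, 0.0), 1.5, (2.4, 0.0), 0.5), 2))  # -> 0.4
```

the computed gap corresponds to the sum of the two wall thicknesses mentioned above, i.e. the distance between the virtual column for liquid (hvl) and the virtual column for gas (hvg).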
when the shape of the exit for extruding liquid (el) is polygonal, the shearing action of the gas and the accompanying airstream can be single-linearly exerted on the spinning liquid by arranging one vertex of the polygon at the side of the nozzle for ejecting gas (ng), and as a result, generation of droplets can be avoided. that is to say, when the columnar hollow for gas (hg) and the columnar hollow for liquid (hl) are cross-sectioned with a plane perpendicular to the central axis of the columnar hollow for gas (hg), only a single straight line having the shortest distance between the outer boundary of the cross-section of the columnar hollow for gas (hg) and the outer boundary of the cross-section of the columnar hollow for liquid (hl) can be drawn, and therefore, the extruded spinning liquid is single-linearly subjected to the shearing action of the gas and the accompanying airstream, and as a result, generation of droplets can be avoided. the size of the exit for extruding liquid (el) is not particularly limited, but is preferably 0.03 to 20 mm², more preferably 0.03 to 0.8 mm². when the size is less than 0.03 mm², it tends to become difficult to extrude a spinning liquid having a high viscosity. when the size is more than 20 mm², it tends to become difficult to exert the shearing action on the whole of the extruded spinning liquid, and therefore, droplets are liable to occur. the nozzle for extruding liquid (nl) may be formed of any material, such as a metal or a resin, and a resin or metal tube may be used as the nozzle. although fig. 1 shows a cylindrical nozzle for extruding liquid (nl), a nozzle having an acute-angled edge, in which the tip portion is cut away slantingly with a plane, may be used. this nozzle having an acute-angled edge is advantageous for spinning liquids having a high viscosity.
when the nozzle having an acute-angled edge is used so that the acute-angled edge is arranged at the side of the nozzle for ejecting gas, the spinning liquid may be effectively subjected to the shearing action of the gas and the accompanying airstream, and therefore, may be stably fiberized. the nozzle for ejecting gas (ng) may be any nozzle capable of ejecting a gas, and the shape of the exit for ejecting gas (eg) is not particularly limited. the shape of the exit for ejecting gas (eg) may be, for example, circular, oval, elliptical, or polygonal (such as triangular, quadrangular, or hexagonal), and is preferably circular, because the spinning liquid is then effectively subjected to the shearing action of the gas and the accompanying airstream. when the shape of the exit for ejecting gas (eg) is polygonal, and one of the vertices of the polygon is arranged at the side of the nozzle for extruding liquid (nl), the shearing action of the gas and the accompanying airstream can be efficiently exerted on the spinning liquid. that is to say, when the columnar hollow for gas (hg) and the columnar hollow for liquid (hl) are cross-sectioned with a plane perpendicular to the central axis of the columnar hollow for gas (hg), only a single straight line having the shortest distance between the outer boundary of the cross-section of the columnar hollow for gas (hg) and the outer boundary of the cross-section of the columnar hollow for liquid (hl) can be drawn, and therefore, the extruded spinning liquid is single-linearly subjected to the shearing action of the gas and the accompanying airstream, and as a result, generation of droplets can be avoided. the size of the exit for ejecting gas (eg) is not particularly limited, but is preferably 0.03 to 79 mm², more preferably 0.03 to 20 mm².
when the size is less than 0.03 mm², it tends to become difficult to exert the shearing action on the whole of the extruded spinning liquid, and therefore, stable fiberization tends to become difficult. when the size is more than 79 mm², a flow rate sufficient to exert the shearing action on the spinning liquid, that is, a large amount of gas, is required, which is wasteful. the size of the exit for ejecting gas (eg) is preferably the same as, or larger than, that of the exit for extruding liquid (el), because the spinning liquid is then effectively subjected to the shearing action of the gas and the accompanying airstream. the nozzle for ejecting gas (ng) may be formed of any material, such as a metal or a resin, and a resin or metal tube may be used as the nozzle. because the nozzle for ejecting gas (ng) is arranged so that the exit for ejecting gas (eg) is located upstream (i.e., at the side where the spinning liquid is supplied) of the exit for extruding liquid (el), the spinning liquid can be prevented from rising around the exit for extruding liquid. as a result, the exit for extruding liquid is not soiled with the spinning liquid, and spinning may be carried out over a long period. the distance between the exit for ejecting gas (eg) and the exit for extruding liquid (el) is not particularly limited, but is preferably 10 mm or less, more preferably 5 mm or less. when this distance is more than 10 mm, the shearing action of the gas and the accompanying airstream is not sufficiently exerted on the spinning liquid, and fiberization tends to become difficult. the lower limit of the distance between the exit for ejecting gas (eg) and the exit for extruding liquid (el) is not particularly limited, so long as the exit for ejecting gas (eg) does not coincide with the exit for extruding liquid (el). the columnar hollow for liquid (hl) is a passage through which the spinning liquid flows, and it forms the shape of the spinning liquid when extruded.
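the preferred dimensions quoted above (liquid exit 0.03 to 20 mm², gas exit 0.03 to 79 mm², gas exit at least as large as the liquid exit, and at most 10 mm upstream of it) can be gathered into a small checker. a minimal sketch in python; the function names and the example layout are hypothetical choices for illustration, not values from the source:

```python
import math

def equivalent_diameter_mm(area_mm2):
    """diameter of a circular exit with the given area: d = 2 * sqrt(a / pi)"""
    return 2.0 * math.sqrt(area_mm2 / math.pi)

def check_preferred_ranges(liquid_exit_mm2, gas_exit_mm2, exit_offset_mm):
    """compare a nozzle layout against the preferred (not mandatory) ranges
    stated above; returns a list of warnings, empty when all are satisfied"""
    warnings = []
    if not 0.03 <= liquid_exit_mm2 <= 20.0:
        warnings.append("liquid exit outside preferred 0.03-20 mm2")
    if not 0.03 <= gas_exit_mm2 <= 79.0:
        warnings.append("gas exit outside preferred 0.03-79 mm2")
    if gas_exit_mm2 < liquid_exit_mm2:
        warnings.append("gas exit preferably at least as large as liquid exit")
    if exit_offset_mm > 10.0:
        warnings.append("gas exit preferably no more than 10 mm upstream")
    return warnings

# hypothetical layout: 0.5 mm2 liquid exit, 2.0 mm2 gas exit, 3 mm offset
print(check_preferred_ranges(0.5, 2.0, 3.0))    # -> [] (all satisfied)
print(round(equivalent_diameter_mm(0.5), 2))    # -> 0.8 (mm, for a circular exit)
```

the diameter conversion is useful because the text states the preferred sizes as areas while nozzles are usually specified by bore diameter.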
the columnar hollow for gas (hg) is a passage through which the gas flows, and it forms the shape of the gas when ejected. the virtual column for liquid (hvl), which is extended from the columnar hollow for liquid (hl), is the flight route of the spinning liquid immediately after being extruded from the exit for extruding liquid (el). the virtual column for gas (hvg), which is extended from the columnar hollow for gas (hg), is the ejection route of the gas immediately after being ejected from the exit for ejecting gas (eg). the distance between the virtual column for liquid (hvl) and the virtual column for gas (hvg) corresponds to the sum of the wall thickness of the nozzle for extruding liquid (nl) and the wall thickness of the nozzle for ejecting gas (ng), and is preferably 2 mm or less, more preferably 1 mm or less. when this distance is more than 2 mm, the shearing action of the gas and the accompanying airstream is not sufficiently exerted on the spinning liquid, and fiberization tends to become difficult. the virtual column for liquid (hvl) and the virtual column for gas (hvg) are solid columns, that is, columns whose insides are filled. for example, in a case where a cylindrical virtual portion for liquid is covered with a hollow-cylindrical virtual portion for gas (or a cylindrical virtual portion for gas is covered with a hollow-cylindrical virtual portion for liquid), when the virtual column for gas and the virtual column for liquid are cross-sectioned with a plane perpendicular to the central axis of the virtual column for gas, there exist an infinite number of straight lines having the shortest distance between the outer boundary of the cross-section of the virtual portion for liquid and the inner boundary of the cross-section of the virtual portion for gas (or between the outer boundary of the cross-section of the virtual portion for gas and the inner boundary of the cross-section of the virtual portion for liquid).
therefore, the shearing action of the gas and the accompanying airstream is exerted on the spinning liquid at various points, and as a result, the spinning liquid is not sufficiently fiberized, and a lot of droplets occur. these “virtual columns” are portions which are extended from the inner walls of the nozzles, respectively. because the central axis of the extruding direction (al) of the columnar hollow for liquid (hl) is parallel to the central axis of the ejecting direction (ag) of the columnar hollow for gas (hg), the shearing action of the gas and the accompanying airstream can be single-linearly exerted on the extruded spinning liquid, and thus, fibers can be stably formed. when these central axes coincide with each other, for example, in a case where a cylindrical hollow portion for liquid is covered with a hollow-cylindrical hollow portion for gas, or in a case where a cylindrical hollow portion for gas is covered with a hollow-cylindrical hollow portion for liquid, the shearing action of the gas and the accompanying airstream cannot be single-linearly exerted on the spinning liquid, and as a result, the spinning liquid is not sufficiently fiberized, and a lot of droplets occur. alternatively, when these central axes are skew, or intersect with each other, the shearing action of the gas and the accompanying airstream is not exerted, or is not uniform if exerted, and thus, the spinning liquid is not stably fiberized. the term “parallel” means that the central axis of the extruding direction (al) of the columnar hollow for liquid (hl) and the central axis of the ejecting direction (ag) of the columnar hollow for gas (hg) are coplanar and parallel. the term “the central axis of the extruding (or ejecting) direction” means the line that is bounded by the center of the exit for extruding liquid (or for ejecting gas) and the center of the cross-section of the virtual column for liquid (or for gas). 
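As a numerical illustration only (the function and argument names below are not from the patent), the preferred dimensional ranges described above — the gas exit area of 0.03 to 79 mm², a gas exit as large as or larger than the liquid exit, an exit separation of 10 mm or less, and a virtual-column separation (sum of the two wall thicknesses) of 2 mm or less — can be gathered into a single validation sketch:

```python
# Hypothetical helper checking the preferred dimensional ranges described
# above for the single-nozzle spinning apparatus. All names are illustrative.

def check_geometry(gas_exit_area_mm2, liquid_exit_area_mm2,
                   exit_separation_mm, wall_thickness_sum_mm):
    """Return a list of warnings for values outside the preferred ranges."""
    warnings = []
    # Gas exit area: preferably 0.03 to 79 mm^2.
    if not (0.03 <= gas_exit_area_mm2 <= 79):
        warnings.append("gas exit area outside 0.03-79 mm^2")
    # Gas exit preferably as large as, or larger than, the liquid exit.
    if gas_exit_area_mm2 < liquid_exit_area_mm2:
        warnings.append("gas exit smaller than liquid exit")
    # Distance between the two exits: preferably 10 mm or less, but nonzero
    # (the exits must not coincide).
    if not (0 < exit_separation_mm <= 10):
        warnings.append("exit separation outside (0, 10] mm")
    # Distance between the virtual columns (sum of wall thicknesses): <= 2 mm.
    if wall_thickness_sum_mm > 2:
        warnings.append("virtual-column separation over 2 mm")
    return warnings
```

For example, `check_geometry(1.0, 0.5, 2, 1)` returns an empty list, while a 12 mm exit separation would be flagged.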
in the spinning apparatus of the present invention, when the columnar hollow for gas (hg) and the columnar hollow for liquid (hl) are cross-sectioned with a plane perpendicular to the central axis of the columnar hollow for gas (hg), only a single straight line having the shortest distance between the outer boundary of the cross-section of the columnar hollow for gas (hg) and the outer boundary of the cross-section of the columnar hollow for liquid (hl) can be drawn [ fig. 1( b )]. because the gas ejected from the columnar hollow for gas and the accompanying airstream single-linearly act on the spinning liquid extruded from the columnar hollow for liquid, the shearing action is single-linearly exerted on the spinning liquid to thereby perform stable spinning without generation of droplets. for example, when two straight lines can be drawn, because the shearing action is not stably exerted, for example, on one point and on another point by turns, droplets occur and stable spinning cannot be carried out. although not shown in fig. 1( a ), the nozzle for extruding liquid (nl) is connected to a reservoir for a spinning liquid (for example, a syringe, a stainless steel tank, a plastic tank, or a bag made of a resin, such as a vinyl chloride resin or a polyethylene resin), and the nozzle for ejecting gas (ng) is connected to a gas supply equipment (for example, a compressor, a gas cylinder, or a blower). although fig. 1 shows a set of spinning apparatus, two or more sets of spinning apparatus can be arranged. the productivity can be improved by arranging two or more sets of spinning apparatus. fig. 1 shows an embodiment in which the nozzle for extruding liquid (nl) and the nozzle for ejecting gas (ng) are fixed, but the present invention is not limited to this embodiment shown in fig. 1 , so long as these nozzles comply with the relations as described above. 
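The "only a single straight line" condition at the start of this passage has a simple geometric reading for circular cross-sections: when the two circles are strictly exterior to each other, the shortest segment between their outer boundaries lies on the line joining the centers and is therefore unique. A minimal sketch of that geometry (illustrative only, not part of the patent):

```python
import math

# For two exterior circles, the closest boundary points lie on the line
# joining the centers, so the shortest straight line between the outer
# boundaries is unique -- the condition shown in fig. 1(b).

def shortest_boundary_segment(c1, r1, c2, r2):
    """Closest points on the outer boundaries of two exterior circles."""
    dx, dy = c2[0] - c1[0], c2[1] - c1[1]
    d = math.hypot(dx, dy)
    if d <= r1 + r2:
        raise ValueError("cross-sections overlap or touch")
    ux, uy = dx / d, dy / d                    # unit vector from c1 toward c2
    p1 = (c1[0] + r1 * ux, c1[1] + r1 * uy)    # closest point on circle 1
    p2 = (c2[0] - r2 * ux, c2[1] - r2 * uy)    # closest point on circle 2
    return p1, p2, d - r1 - r2                 # the unique shortest distance

# Example: gas hollow of radius 1 mm at the origin, liquid hollow of
# radius 0.5 mm centered 2 mm away.
p1, p2, gap = shortest_boundary_segment((0, 0), 1.0, (2, 0), 0.5)
```

By contrast, for concentric circles (a cylindrical hollow covered by a hollow-cylindrical one) every radial direction attains the same shortest distance, which is why infinitely many such lines exist in that configuration.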
such nozzles may be prepared by, for example, boring a base material having a step height to form the columnar hollow for liquid (hl) and the columnar hollow for gas (hg). the spinning apparatus may comprise a means capable of freely adjusting the position of the exit for extruding liquid (el) of the nozzle for extruding liquid (nl) and/or the position of the exit for ejecting gas (eg) of the nozzle for ejecting gas (ng). the apparatus of the present invention for manufacturing a nonwoven fabric comprises a fibers collection means as well as the spinning apparatus as described above, and thus, a nonwoven fabric can be produced by collecting fibers. the fibers collection means may be any support capable of directly accumulating fibers thereon, for example, a nonwoven fabric, a woven fabric, a knitted fabric, a net, a drum, a belt, or a flat plate. because the gas is ejected in the present invention, it is preferable that an air-permeable support is used and a suction apparatus is arranged on the opposite side of the fibers collection means from the spinning apparatus, so that fibers are easily accumulated and the collected fibers are not disturbed by suction of the gas. it is preferable that the fibers collection means is arranged opposite to the exit for ejecting gas (eg) of the spinning apparatus, because fibers can be properly captured to produce a nonwoven fabric. it is most preferable that the fibers collection means is arranged so that the surface thereof for capturing fibers is perpendicular to the central axis of the ejecting direction of gas (ag).
in this regard, even if the fibers collection means is arranged so that the surface thereof for capturing fibers is parallel to the central axis of the ejecting direction of gas (ag), fibers can be accumulated on the fibers collection means, by locating the fibers collection means downward in the gravity direction and sufficiently far from the exit for ejecting gas so that the spinning force of the fibers is lost, or by applying a gas stream capable of changing the spinning direction. therefore, the central axis of the ejecting direction of gas (ag) of the spinning apparatus may intersect with the gravity direction. when the fibers collection means is arranged opposite to the exit for ejecting gas (eg) of the spinning apparatus, the distance between the fibers collection means and the exit for extruding liquid (el) of the spinning apparatus varies in accordance with the amount of a spinning liquid extruded or the flow rate of a gas, and is not particularly limited, but is preferably 50 to 1000 mm. when this distance is less than 50 mm, a nonwoven fabric sometimes cannot be obtained, because fibers are accumulated, while the solvent contained in the spinning liquid does not completely evaporate and remains, and the shape of each fiber accumulated cannot be maintained. when this distance is more than 1000 mm, the gas flow is liable to be disturbed, and therefore, the fibers are liable to be broken and scattered. in addition to the fibers collection means, the apparatus of the present invention for manufacturing a nonwoven fabric preferably comprises a container for spinning capable of containing the spinning apparatus and the fibers collection means. when the apparatus is equipped with the container for spinning, the diffusion of the solvent evaporated from the spinning liquid can be avoided and, in some cases, the solvent can be recovered to be re-used. 
when the spinning apparatus and the fibers collection means are contained in the spinning container, it is preferable that an exhaust apparatus other than the suction apparatus to suction the fibers is connected to the spinning container. when spinning is carried out, the concentration of solvent vapor in the spinning container is gradually increased to suppress the evaporation of the solvent, and as a result, unevenness of fiber diameters is liable to occur, and it tends to become difficult to be fiberized. however, the unevenness of fiber diameter can be lowered and fiberization can be stably performed, by exhausting the gas from the spinning container to maintain a constant concentration of the solvent contained in the spinning container. further, it is preferable that a supply equipment of a gas of which the temperature and humidity are controlled is connected to the spinning container, because the concentration of solvent vapor in the spinning container can be stabilized, and the unevenness of fiber diameter can be lowered. the process of the present invention for manufacturing a nonwoven fabric is a process using the above apparatus for manufacturing a nonwoven fabric, and ejecting a gas having a flow rate of 100 m/sec. or more from the exit for ejecting gas (eg) of the spinning apparatus. generation of droplets can be avoided, and a nonwoven fabric containing fibers of which the diameter is thinned can be efficiently produced by ejecting the gas having a flow rate of 100 m/sec. or more from the exit for ejecting gas (eg). the gas is ejected at a flow rate of, preferably 150 m/sec. or more, more preferably 200 m/sec. or more. the upper limit of the gas flow rate is not particularly limited, so long as the fibers accumulated on the fibers collection means are not disturbed. a gas having such a flow rate can be ejected by, for example, supplying the gas to the columnar hollow for gas (hg) from a compressor. 
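As a rough back-of-the-envelope illustration (not from the patent; the helper name is assumed, and incompressible flow through the exit is assumed), the volumetric supply a compressor must deliver to reach a given exit velocity follows from Q = v · A:

```python
# Volumetric gas supply needed to reach a target exit velocity through a
# given gas exit area, assuming incompressible flow (Q = v * A).
# Illustrative sketch only; names are not from the patent.

def required_flow_l_per_min(velocity_m_s, exit_area_mm2):
    area_m2 = exit_area_mm2 * 1e-6       # mm^2 -> m^2
    q_m3_s = velocity_m_s * area_m2      # volumetric flow in m^3/s
    return q_m3_s * 1000 * 60            # -> litres per minute

# Example: 200 m/s (within the preferred range) through a 1 mm^2 gas exit.
q = required_flow_l_per_min(200, 1.0)    # 12 L/min
```

This also makes the wastefulness of an oversized gas exit concrete: at a fixed target velocity the required gas volume grows linearly with the exit area.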
the gas is not particularly limited, but air, a nitrogen gas, an argon gas, or the like may be used, and use of air is economical. the gas can contain vapor of a solvent which has an affinity for the spinning liquid or vapor of a solvent which lacks an affinity for the spinning liquid. by controlling the amount of vapor of a solvent, an evaporation rate of the solvent from the spinning liquid, or a solidification rate of the spinning liquid can be controlled, and as a result, the stability of spinning can be improved, or the fiber diameter can be controlled. a spinning liquid used in the process of the present invention is not particularly limited, and may be any liquid prepared by dissolving a desired polymer in a solvent. more particularly, a spinning liquid prepared by dissolving one, or two or more polymers selected from, for example, polyethylene glycol, partially saponified polyvinyl alcohol, completely saponified polyvinyl alcohol, polyvinylpyrrolidone, polylactic acid, polyester, polyglycolic acid, polyacrylonitrile, polyacrylonitrile copolymer, polymethacrylic acid, polymethylmethacrylate, polycarbonate, polystyrene, polyamide, polyimide, polyethylene, or polypropylene, in one, or two or more solvents selected from, for example, water, acetone, methanol, ethanol, propanol, isopropanol, tetrahydrofuran, dimethylsulfoxide, 1,4-dioxane, pyridine, n,n-dimethylformamide, n,n-dimethylacetamide, n-methyl-2-pyrrolidone, acetonitrile, formic acid, toluene, benzene, cyclohexane, cyclohexanone, carbon tetrachloride, methylene chloride, chloroform, trichloroethane, ethylene carbonate, diethyl carbonate, or propylene carbonate, may be used. the viscosity of a spinning liquid when spinning is carried out is preferably 10 to 10000 mpa·s, more preferably 20 to 8000 mpa·s. when the viscosity is less than 10 mpa·s, the spinning liquid exhibits a poor spinnability due to a low viscosity, and it tends to become difficult to have a fibrous form. 
when the viscosity is more than 10000 mpa·s, the spinning liquid is difficult to be drawn, and it tends to become difficult to have a fibrous form. therefore, even if the viscosity at room temperature is more than 10000 mpa·s, such a spinning liquid may be used, provided that the viscosity falls within the preferable range by heating the spinning liquid per se or the columnar hollow for liquid (hl). by contrast, even if the viscosity at room temperature is less than 10 mpa·s, such a spinning liquid may be used, provided that the viscosity rises within the preferable range by cooling the spinning liquid per se or the columnar hollow for liquid (hl). the term “viscosity” as used herein means a value measured at the temperature same as that when spinning is carried out, using a viscometer, when the shear rate is 100 s −1 . the amount of a spinning liquid extruded from the exit for extruding liquid (el) is not particularly limited, because it varies depending on the viscosity of the spinning liquid or the flow rate of a gas. it is preferably 0.1 to 100 cm 3 /hour. the spinning apparatus of the present invention will be explained with reference to fig. 4 that is an enlarged perspective view showing the tip portion of an embodiment having two exits for extruding liquid and an exit for ejecting gas, and fig. 5( a ) that is a cross-sectional view taken along plane c in fig. 4 . 
the spinning apparatus of the present invention contains a first nozzle for extruding liquid (nl 1 ) having, at one end thereof, a first exit for extruding liquid (el 1 ) capable of extruding a spinning liquid, a second nozzle for extruding liquid (nl 2 ) having, at one end thereof, a second exit for extruding liquid (el 2 ) capable of extruding a spinning liquid, and a nozzle for ejecting gas (ng) having, at one end thereof, an exit for ejecting gas (eg) capable of ejecting a gas; the outer walls of the nozzles for extruding liquid (nl 1 , nl 2 ) are directly contacted with the outer wall of the nozzle for ejecting gas (ng) so that the nozzle for ejecting gas (ng) is sandwiched between the nozzles for extruding liquid (nl 1 and nl 2 ); and the exit for ejecting gas (eg) of the nozzle for ejecting gas (ng) is located upstream of each of the first exit for extruding liquid (el 1 ) and the second exit for extruding liquid (el 2 ). the first nozzle for extruding liquid (nl 1 ) has a first columnar hollow for liquid (hl 1 ) of which one end is the first exit for extruding liquid (el 1 ), the second nozzle for extruding liquid (nl 2 ) has a second columnar hollow for liquid (hl 2 ) of which one end is the second exit for extruding liquid (el 2 ), and the nozzle for ejecting gas (ng) has a columnar hollow for gas (hg) of which one end is the exit for ejecting gas (eg).
a first virtual column for liquid (hvl 1 ) which is extended from the first columnar hollow for liquid (hl 1 ) is located adjacent to a virtual column for gas (hvg) which is extended from the columnar hollow for gas (hg), and the distance between these virtual columns corresponds to the sum of the wall thickness of the first nozzle for extruding liquid (nl 1 ) and the wall thickness of the nozzle for ejecting gas (ng); and the second virtual column for liquid (hvl 2 ) which is extended from the second columnar hollow for liquid (hl 2 ) is located adjacent to a virtual column for gas (hvg) which is extended from the columnar hollow for gas (hg), and the distance between these virtual columns corresponds to the sum of the wall thickness of the second nozzle for extruding liquid (nl 2 ) and the wall thickness of the nozzle for ejecting gas (ng). the first central axis of the extruding direction (al 1 ) of the first columnar hollow for liquid (hl 1 ) is parallel to the central axis of the ejecting direction (ag) of the columnar hollow for gas (hg); and the second central axis of the extruding direction (al 2 ) of the second columnar hollow for liquid (hl 2 ) is parallel to the central axis of the ejecting direction (ag) of the columnar hollow for gas (hg).
when the columnar hollow for gas (hg) and the columnar hollows for liquid (hl 1 , hl 2 ) are cross-sectioned with a plane perpendicular to the central axis (ag) of the columnar hollow for gas (hg), the outer shape of a cross-section of the columnar hollow for gas (hg), and the outer shape of a cross-section of each of the columnar hollows for liquid (hl 1 , hl 2 ) are circular, and only one straight line (l 1 , l 2 ) having the shortest distance between the outer boundary of the cross-section of the columnar hollow for gas (hg) and the outer boundary of the cross-section of each of the columnar hollows for liquid (hl 1 , hl 2 ), at any combination of the columnar hollow for gas and each of the columnar hollows for liquid, can be drawn [see fig. 5( a )]. in this spinning apparatus as shown in fig. 4 , when spinning liquids are supplied to the first nozzle for extruding liquid (nl 1 ) and the second nozzle for extruding liquid (nl 2 ), and a gas is supplied to the nozzle for ejecting gas (ng), the spinning liquids supplied to the first and second nozzles flow through the first columnar hollow for liquid (hl 1 ) and the second columnar hollow for liquid (hl 2 ), and are extruded from the first exit for extruding liquid (el 1 ) and the second exit for extruding liquid (el 2 ), in the first axis direction of the first columnar hollow for liquid (hl 1 ) and the second axis direction of the second columnar hollow for liquid (hl 2 ), respectively, and simultaneously, the gas flows through the columnar hollow for gas (hg) and is ejected from the exit for ejecting gas (eg) in the axis direction of the columnar hollow for gas (hg). 
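The pairwise single-line condition of fig. 5(a) can be restated for circular cross-sections: each liquid hollow must be strictly exterior to the gas hollow, so that every gas-liquid pair admits exactly one shortest straight line between outer boundaries. A small sketch of that check (illustrative names, not from the patent):

```python
import math

# Checks, for one circular gas hollow and several circular liquid hollows
# (as in fig. 5(a)), that each liquid hollow is strictly exterior to the
# gas hollow -- the condition under which exactly one shortest straight
# line exists per gas-liquid pair. Illustrative sketch only.

def single_line_per_pair(gas_center, gas_radius, liquid_circles):
    """liquid_circles: list of ((x, y), radius) tuples."""
    for (cx, cy), r in liquid_circles:
        d = math.hypot(cx - gas_center[0], cy - gas_center[1])
        if d <= gas_radius + r:          # overlapping or concentric: fails
            return False
    return True

# Two liquid hollows sandwiching the gas hollow, as in fig. 4.
ok = single_line_per_pair((0, 0), 1.0, [((2.5, 0), 0.5), ((-2.5, 0), 0.5)])
```

With both liquid hollows 2.5 mm from the gas-hollow center, `ok` is `True`; moving one liquid hollow to within the sum of the radii would make the check fail.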
the ejected gas is adjacent to each of the extruded spinning liquids, the central axis (ag) of the ejected gas is parallel to the central axis (al 1 , al 2 ) of each of the extruded spinning liquids at the closest range of each exit for extruding liquid, and there exists only a single point having the shortest distance between the ejected gas and each of the extruded spinning liquids on plane c at any combination, that is, each spinning liquid is single-linearly subjected to the shearing action of the gas and the accompanying airstream, and therefore, each spinning liquid is spun in the first axis direction of the first columnar hollow for liquid (hl 1 ) or the second axis direction of the second columnar hollow for liquid (hl 2 ) while the diameter thereof is thinned, and simultaneously, each spinning liquid is fiberized by evaporating the solvent contained in each spinning liquid. as described above, the spinning apparatus as shown in fig. 4 does not require the application of a high voltage to each of the spinning liquids, and is a simple and energy-efficient apparatus. because two spinning liquids can be spun and fiberized by only a gas stream, the amount of the gas can be reduced, and as a result, the scattering of fibers can be avoided, and a nonwoven fabric having an excellent uniformity can be produced with a high productivity. further, the spinning apparatus is an energy-efficient apparatus, because the amount of the gas can be reduced, and a high-capacity suction apparatus is not required. furthermore, from a thin nonwoven fabric to a thick nonwoven fabric can be produced, because a suction is not necessary to be enhanced. the first nozzle for extruding liquid (nl 1 ) and the second nozzle for extruding liquid (nl 2 ) may be any nozzle capable of extruding a spinning liquid, and the outer shape of each of the first exit for extruding liquid (el 1 ) and the second exit for extruding liquid (el 2 ) is not particularly limited. 
the outer shape of each of the first and second exits for extruding liquid (el 1 , el 2 ) may be, for example, circular, oval, elliptical, or polygonal (such as triangle, quadrangle, or hexagonal), and is preferably circular, because the shearing action of the gas and the accompanying airstream can be single-linearly exerted on each of the spinning liquids, and generation of droplets can be avoided. that is to say, when the first and second nozzles for extruding liquid (nl 1 , nl 2 ) have a circular outer shape, and the columnar hollow for gas (hg) and the columnar hollows for liquid (hl 1 , hl 2 ) are cross-sectioned with a plane perpendicular to the central axis (ag) of the columnar hollow for gas (hg), there is a tendency that only one straight line (l 1 , l 2 ) having the shortest distance between the outer boundary of the cross-section of the columnar hollow for gas (hg) and the outer boundary of the cross-section of each of the columnar hollows for liquid (hl 1 , hl 2 ), at any combination of the columnar hollow for gas and each of the columnar hollows for liquid, can be drawn, and as a result, the shearing action of the gas and the accompanying airstream is single-linearly exerted on each of the spinning liquids, and generation of droplets can be avoided. the outer shape of the first exit for extruding liquid (el 1 ) may be the same as, or different from, that of the second exit for extruding liquid (el 2 ), but it is preferable that both outer shapes are circular. when the first and second exits for extruding liquid (el 1 , el 2 ) have a polygonal shape, it is preferable that these exits are arranged so that one vertex of each polygon is at the side of the nozzle for ejecting gas (ng), because the shearing action of the gas and the accompanying airstream is single-linearly exerted on each of the spinning liquids, and generation of droplets can be avoided. 
that is to say, in a case where the first and second nozzles for extruding liquid (nl 1 , nl 2 ) are arranged so that, when the columnar hollow for gas (hg) and the first and second columnar hollows for liquid (hl 1 , hl 2 ) are cross-sectioned with a plane perpendicular to the central axis (ag) of the columnar hollow for gas (hg), only one straight line [l 1 , l 2 in fig. 5( a ) to fig. 5( e )] having the shortest distance between the outer boundary of the cross-section of the columnar hollow for gas (hg) and the outer boundary of the cross-section of each of the first and second columnar hollows for liquid (hl 1 , hl 2 ), at any combination of the columnar hollow for gas and each of the columnar hollows for liquid, can be drawn, the shearing action of the gas and the accompanying airstream is single-linearly exerted on each of the spinning liquids, and as a result, stable spinning can be performed, and generation of droplets can be avoided. therefore, when the exit for ejecting gas (eg) has a circular shape, it is possible to arrange these nozzles so that one side of each of the first and second exits for extruding liquid (el 1 , el 2 ) is at the side of the nozzle for ejecting gas (ng) [see fig. 5( e )]. the size of each of the first exit for extruding liquid (el 1 ) and the second exit for extruding liquid (el 2 ) is not particularly limited, but is preferably 0.01 to 20 mm 2 , more preferably 0.01 to 2 mm 2 . when the size is less than 0.01 mm 2 , it tends to become difficult to extrude a spinning liquid having a high viscosity. when the size is more than 20 mm 2 , it tends to become difficult to single-linearly exert the action of the gas and the accompanying airstream on the spinning liquid, and therefore, it tends to become difficult to be stably spun. the first nozzle for extruding liquid (nl 1 ) and the second nozzle for extruding liquid (nl 2 ) may be formed of any material such as a metal or a resin, and a resin or metal tube may be used as the nozzles. 
although fig. 4 shows cylindrical first and second nozzles for extruding liquid (nl 1 , nl 2 ), a nozzle having an acute-angled edge in which a tip portion is slantingly cut away with a plane may be used as the nozzles. this nozzle having an acute-angled edge is advantageous to a spinning liquid having a high viscosity. when the nozzle having an acute-angled edge is used so that the acute-angled edge is arranged at the side of the nozzle for ejecting gas, the spinning liquid may be effectively subjected to the shearing action of the gas and the accompanying airstream, and therefore, may be stably fiberized. although fig. 4 shows two nozzles, i.e., the first and second nozzles for extruding liquid (nl 1 , nl 2 ), the number of the nozzles for extruding liquid is not limited to two, and may be three or more (see fig. 6 ). embodiments having many nozzles can efficiently use the gas to produce a nonwoven fabric with a high productivity. the nozzle for ejecting gas (ng) may be any nozzle capable of ejecting a gas, and the shape of the exit for ejecting gas (eg) is not particularly limited. the shape of the exit for ejecting gas (eg) may be, for example, circular, oval, elliptical, or polygonal (such as triangle, quadrangle, or hexagonal), and is preferably circular. this is because wherever each exit for extruding liquid is arranged with respect to the exit for ejecting gas, each spinning liquid extruded from each exit for extruding liquid may be independently and single-linearly subjected to the shearing action of the gas ejected from the exit for ejecting gas and the accompanying airstream to easily spin fibers of which the diameter is thinned. 
when the exit for ejecting gas (eg) has a polygonal shape, the shearing action of the gas and the accompanying airstream may be efficiently exerted on the spinning liquid, by arranging the nozzles so that one vertex of the polygon is at the side of the first nozzle for extruding liquid (nl 1 ) and another vertex thereof is at the side of the second nozzle for extruding liquid (nl 2 ). that is to say, as previously described, in a case where the first and second nozzles for extruding liquid (nl 1 , nl 2 ) are arranged so that, when the columnar hollow for gas (hg) and the first and second columnar hollows for liquid (hl 1 , hl 2 ) are cross-sectioned with a plane perpendicular to the central axis (ag) of the columnar hollow for gas (hg), only one straight line (l 1 , l 2 ) having the shortest distance between the outer boundary of the cross-section of the columnar hollow for gas (hg) and the outer boundary of the cross-section of each of the first and second columnar hollows for liquid (hl 1 , hl 2 ), at any combination of the columnar hollow for gas and each of the columnar hollows for liquid, can be drawn [see fig. 5( c ) to fig. 5( d )], the shearing action of the gas and the accompanying airstream is single-linearly exerted on each of the spinning liquids, and as a result, generation of droplets can be avoided. the size of the exit for ejecting gas (eg) is not particularly limited, but is preferably 0.01 to 79 mm 2 , more preferably 0.015 to 20 mm 2 . when the size is less than 0.01 mm 2 , it tends to become difficult to exert the shearing action on the overall spinning liquid extruded, and therefore, it tends to become difficult to be stably fiberized. when the size is more than 79 mm 2 , a flow rate sufficient to exert the shearing action on the spinning liquid, that is, a large amount of gas is required, and it is wasteful. 
the nozzle for ejecting gas (ng) may be formed of any material such as a metal or a resin, and a resin or metal tube may be used as the nozzle. because the nozzle for ejecting gas (ng) is arranged so that the exit for ejecting gas (eg) is located upstream (i.e., at the side where a spinning liquid is supplied) of the first and second exits for extruding liquid (el 1 , el 2 ), the spinning liquid can be prevented from rising around the first and second exits for extruding liquid (el 1 , el 2 ). as a result, the exit for extruding liquid is not soiled with the spinning liquid, and spinning may be carried out over a long period. the distance between the exit for ejecting gas (eg) and each of the first and second exits for extruding liquid (el 1 , el 2 ) is not particularly limited, but is preferably 10 mm or less, more preferably 5 mm or less. when this distance is more than 10 mm, the shearing action of the gas and the accompanying airstream is not sufficiently exerted on the spinning liquid at the first and second exits for extruding liquid (el 1 , el 2 ), and it tends to become difficult to be fiberized. the lower limit of the distance between the exit for ejecting gas (eg) and each of the first and second exits for extruding liquid (el 1 , el 2 ) is not particularly limited, so long as the exit for ejecting gas (eg) does not accord with each of the first and second exits for extruding liquid (el 1 , el 2 ). in this regard, the distance between the exit for ejecting gas (eg) and the first exit for extruding liquid (el 1 ) may be the same as, or different from, that between the exit for ejecting gas (eg) and the second exit for extruding liquid (el 2 ). when this distance is the same, the shearing action can be equally exerted on each spinning liquid to perform stable spinning, and therefore, it is preferable. 
the first columnar hollow for liquid (hl 1 ) and the second columnar hollow for liquid (hl 2 ) are passages which the spinning liquid flows through, and form the shape of the spinning liquid when extruded. the columnar hollow for gas (hg) is a passage which the gas flows through, and forms the shape of the gas when ejected. in the present invention, because each of the first and second columnar hollows for liquid (hl 1 , hl 2 ), and the columnar hollow for gas (hg) can generate a columnar spinning liquid and a columnar gas, respectively, the shearing action of the gas and the accompanying airstream can be sufficiently exerted on each spinning liquid, and each spinning liquid can be fiberized. the first virtual column for liquid (hvl 1 ), which is extended from the first columnar hollow for liquid (hl 1 ), is a flight route of the spinning liquid immediately after being extruded from the first exit for extruding liquid (el 1 ), and the second virtual column for liquid (hvl 2 ), which is extended from the second columnar hollow for liquid (hl 2 ), is a flight route of the spinning liquid immediately after being extruded from the second exit for extruding liquid (el 2 ). the virtual column for gas (hvg), which is extended from the columnar hollow for gas (hg), is an ejection route of the gas immediately after being ejected from the exit for ejecting gas (eg). the distance between the first virtual column for liquid (hvl 1 ) and the virtual column for gas (hvg) corresponds to the sum of the wall thickness of the first nozzle for extruding liquid (nl 1 ) and the wall thickness of the nozzle for ejecting gas (ng), and the distance between the second virtual column for liquid (hvl 2 ) and the virtual column for gas (hvg) corresponds to the sum of the wall thickness of the second nozzle for extruding liquid (nl 2 ) and the wall thickness of the nozzle for ejecting gas (ng). these distances are preferably 2 mm or less, more preferably 1 mm or less. 
when the distance is more than 2 mm, the shearing action of the gas and the accompanying airstream is not sufficiently exerted on the spinning liquid, and it tends to become difficult to be fiberized. the first virtual column for liquid (hvl 1 ), the second virtual column for liquid (hvl 2 ), and the virtual column for gas (hvg) are columns of which the inside is filled. for example, in a case where a cylindrical first or second virtual portion for liquid is covered with a hollow-cylindrical virtual portion for gas (or in a case where a cylindrical virtual portion for gas is covered with a hollow-cylindrical first or second virtual portion for liquid), when the virtual column for gas and the first or second virtual column for liquid are cross-sectioned with a plane perpendicular to the central axis (ag) of the virtual column for gas (hvg), there exist an infinite number of straight lines having the shortest distance between the outer boundary of the cross-section of the first or second virtual portion for liquid and the inner boundary of the cross-section of the virtual portion for gas (or between the outer boundary of the cross-section of the virtual portion for gas and the inner boundary of the cross-section of the first or second virtual portion for liquid). therefore, the shearing action of the gas and the accompanying airstream is exerted on the spinning liquid at various points, and as a result, the spinning liquid is not sufficiently fiberized, and a lot of droplets occur. these “virtual columns” are portions which are extended from the inner walls of the nozzles, respectively. 
because the first central axis of the extruding direction (al 1 ) of the first columnar hollow for liquid (hl 1 ) is parallel to the central axis of the ejecting direction (ag) of the columnar hollow for gas (hg), and the second central axis of the extruding direction (al 2 ) of the second columnar hollow for liquid (hl 2 ) is parallel to the central axis of the ejecting direction (ag) of the columnar hollow for gas (hg), the shearing action of the gas and the accompanying airstream can be single-linearly exerted on each of the extruded spinning liquids, and thus, fibers can be stably formed. when these central axes coincide with each other, for example, in a case where a cylindrical first or second hollow portion for liquid is covered with a hollow-cylindrical hollow portion for gas, or in a case where a cylindrical hollow portion for gas is covered with a hollow-cylindrical first or second hollow portion for liquid, the shearing action of the gas and the accompanying airstream cannot be single-linearly exerted on each of the spinning liquids, and as a result, the spinning liquid is not sufficiently fiberized, and a lot of droplets occur. alternatively, when these central axes are skew, or intersect with each other, the shearing action of the gas and the accompanying airstream is not exerted, or is not uniform if exerted, and thus, each of the spinning liquids is not stably fiberized. the term “parallel” means that the central axis of the extruding direction of the first or second columnar hollow for liquid and the central axis of the ejecting direction of the columnar hollow for gas are coplanar and parallel. the term “the central axis of the extruding (or ejecting) direction” means the line that is bounded by the center of the exit for extruding liquid (or for ejecting gas) and the center of the cross-section of the virtual column for liquid (or for gas). 
in the spinning apparatus of the present invention, when the columnar hollow for gas (hg) and the first and second columnar hollows for liquid (hl 1 , hl 2 ) are cross-sectioned with a plane perpendicular to the central axis (ag) of the columnar hollow for gas (hg), only a single straight line (l 1 ) having the shortest distance between the outer boundary of the cross-section of the columnar hollow for gas (hg) and the outer boundary of the cross-section of the first columnar hollow for liquid (hl 1 ) can be drawn, and only a single straight line (l 2 ) having the shortest distance between the outer boundary of the cross-section of the columnar hollow for gas (hg) and the outer boundary of the cross-section of the second columnar hollow for liquid (hl 2 ) can be drawn. because the gas ejected from the columnar hollow for gas (hg) and the accompanying airstream single-linearly act on each of the spinning liquid extruded from the first columnar hollow for liquid (hl 1 ) and the spinning liquid extruded from the second columnar hollow for liquid (hl 2 ), the shearing action is single-linearly exerted on each of the spinning liquids to thereby perform stable spinning without generation of droplets. for example, when two straight lines can be drawn, because the shearing action is not stably exerted, for example, on one point and on another point by turns, droplets occur and stable spinning cannot be carried out. although not shown in fig. 4 , the first and second nozzles for extruding liquid (nl 1 , nl 2 ) are connected to a reservoir for a spinning liquid (for example, a syringe, a stainless steel tank, a plastic tank, or a bag made of a resin, such as a vinyl chloride resin or a polyethylene resin), and the nozzle for ejecting gas (ng) is connected to a gas supply equipment (for example, a compressor, a gas cylinder, or a blower). although fig. 4 shows a set of spinning apparatus, two or more sets of spinning apparatus can be arranged. 
the productivity can be improved by arranging two or more sets of spinning apparatus. fig. 4 shows an embodiment in which the first nozzle for extruding liquid (nl 1 ), the second nozzle for extruding liquid (nl 2 ), and the nozzle for ejecting gas (ng) are fixed, but the present invention is not limited to this embodiment shown in fig. 4 , so long as these nozzles comply with the relations as described above. such nozzles may be prepared by, for example, boring a base material having step heights to form the first columnar hollow for liquid (hl 1 ), the second columnar hollow for liquid (hl 2 ), and the columnar hollow for gas (hg). the spinning apparatus may comprise a means capable of freely adjusting the position of the first exit for extruding liquid (el 1 ) of the first nozzle for extruding liquid (nl 1 ), the position of the second exit for extruding liquid (el 2 ) of the second nozzle for extruding liquid (nl 2 ), and/or the position of the exit for ejecting gas (eg) of the nozzle for ejecting gas (ng). the apparatus of the present invention for manufacturing a nonwoven fabric comprises a fibers collection means as well as the spinning apparatus as described above, and thus, a nonwoven fabric can be produced by collecting fibers. because two or more nozzles for extruding liquid are arranged with respect to one nozzle for ejecting gas in this apparatus, and the amount of the ejected gas can be reduced, the scattering of fibers can be avoided, and a nonwoven fabric having an excellent uniformity can be produced with a high productivity. further, this apparatus is energy-efficient, because the amount of the gas can be reduced, and a high-capacity suction apparatus is not required. the fibers collection means may be any support capable of directly accumulating fibers thereon, and the examples as previously described may be used.
it is preferable that an air-permeable support is used and a suction apparatus is arranged on the opposite side of the fibers collection means from the spinning apparatus, because of the same reasons as previously described. the fibers collection means may be arranged as previously described. when the fibers collection means is arranged opposite to the exit for ejecting gas (eg) of the spinning apparatus, the distance between the fibers collection means and the first and second exits for extruding liquid (el 1 , el 2 ) of the spinning apparatus varies in accordance with the amount of a spinning liquid extruded or the flow rate of a gas, and is not particularly limited, but is preferably 30 to 1000 mm. when this distance is less than 30 mm, a nonwoven fabric sometimes cannot be obtained, because fibers are accumulated, while the solvent contained in the spinning liquid does not completely evaporate and remains, and the shape of each fiber accumulated cannot be maintained. when this distance is more than 1000 mm, the gas flow is liable to be disturbed, and therefore, the fibers are liable to be broken and scattered. in addition to the fibers collection means, the apparatus of the present invention for manufacturing a nonwoven fabric preferably comprises a container for spinning capable of containing the spinning apparatus and the fibers collection means, because of the reasons as previously described. when a nonwoven fabric is produced by using the apparatus of the present invention for manufacturing a nonwoven fabric, the flow rate of the gas ejected from the exit for ejecting gas (eg) of the spinning apparatus, a method of ejecting the gas, and the type of the gas can be appropriately selected in a similar fashion as previously described. as previously described, a spinning liquid used in the process of the present invention is not particularly limited, and may be any liquid prepared by dissolving a desired polymer in a solvent. 
the viscosity of a spinning liquid when spinning is carried out is preferably 10 to 10000 mpa·s, more preferably 20 to 8000 mpa·s, because of the same reasons as previously described. the amount of each spinning liquid extruded from the exit for extruding liquid (el), the first exit for extruding liquid (el 1 ), and the second exit for extruding liquid (el 2 ) is not particularly limited, because it varies depending on the viscosity of each spinning liquid or the flow rate of a gas. it is preferably 0.1 to 100 cm 3 /hour. in this regard, the amount of a spinning liquid extruded from the first exit for extruding liquid (el 1 ) may be the same as, or different from, that of the second exit for extruding liquid (el 2 ). when the amounts are the same, fibers having a more uniform fiber diameter may be spun. another embodiment of the process of the present invention for manufacturing a nonwoven fabric is a process using the apparatus described above, and comprising the steps of extruding one or more spinning liquids from the exits for extruding liquid under two or more different extruding conditions to be fiberized, and accumulating the fiberized fibers on the fibers collection means to produce a nonwoven fabric. in this process, because the extruding conditions of the first nozzle for extruding liquid (nl 1 ) and the second nozzle for extruding liquid (nl 2 ) in fig. 4 are different, and the gas that acts on these extruded spinning liquids is the same, different types of fibers can be spun, and as a result, a nonwoven fabric having an excellent uniformity in which different types of fibers are uniformly mixed can be produced.
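The preferred operating windows quoted in this paragraph lend themselves to a simple range check; a sketch under the stated ranges (viscosity 10 to 10000 mPa·s, extrusion amount 0.1 to 100 cm³/hour), with illustrative parameter names:

```python
# Preferred ranges quoted in the description (illustrative sketch,
# not part of the patent).
PREFERRED_RANGES = {
    "viscosity_mpa_s": (10, 10000),
    "extrusion_cm3_per_h": (0.1, 100),
}

def out_of_range(params):
    """Return the parameters that fall outside the preferred ranges."""
    return {
        k: v for k, v in params.items()
        if k in PREFERRED_RANGES
        and not (PREFERRED_RANGES[k][0] <= v <= PREFERRED_RANGES[k][1])
    }
```

For instance, the Example 1 settings (970 mPa·s, 3 cm³/hour) fall inside both windows, so the check returns an empty mapping.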
the term “two or more different extruding conditions” as used herein means that each condition is not completely the same as the other condition(s), that is, each condition is different from the other condition(s) in one, or two or more conditions selected from, for example, the outer shape of the exit for extruding liquid, the size of the exit for extruding liquid, the distance between the exit for extruding liquid and the exit for ejecting gas, the amount of a spinning liquid extruded, the concentration of a spinning liquid, polymers contained in a spinning liquid, the viscosity of a spinning liquid, solvents contained in a spinning liquid, the ratio of polymers contained in a spinning liquid when the spinning liquid contains two or more polymers, the ratio of solvents contained in a spinning liquid when the spinning liquid contains two or more solvents, the temperature of a spinning liquid, or the type and/or the amount of an additive contained in a spinning liquid. among these conditions, when a polymer(s) contained in spinning liquids is the same, but the concentrations thereof in the spinning liquids are different, or when a polymer(s) contained in spinning liquids is the same, but solvents contained in the spinning liquids are different, a nonwoven fabric having an excellent uniformity in which two or more types of fibers having different fiber diameters are uniformly mixed can be produced. alternatively, when polymers contained in spinning liquids are different, a nonwoven fabric having an excellent uniformity in which two or more types of fibers containing different polymers are uniformly mixed can be produced. examples the present invention now will be further illustrated by, but is by no means limited to, the following examples. 
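The test for "two or more different extruding conditions" amounts to asking whether two condition sets differ in at least one entry. A minimal sketch; the dictionary keys are hypothetical labels for the conditions listed above:

```python
def differing_conditions(cond_a, cond_b):
    """Return, sorted, the keys on which two extruding-condition
    dictionaries differ; a non-empty result corresponds to 'two or more
    different extruding conditions'."""
    keys = set(cond_a) | set(cond_b)
    return sorted(k for k in keys if cond_a.get(k) != cond_b.get(k))

# e.g. same polymer and solvent, different concentrations
# (as in Example 3: 8 mass % vs. 11 mass % polyacrylonitrile in DMF)
liquid_a = {"polymer": "polyacrylonitrile", "solvent": "DMF",
            "concentration_mass_pct": 8}
liquid_b = {"polymer": "polyacrylonitrile", "solvent": "DMF",
            "concentration_mass_pct": 11}
```

Here the two liquids differ only in concentration, which by the passage above is enough to spin fibers of different diameters from the same gas stream.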
example 1 (preparation of spinning liquid) polyacrylonitrile (manufactured by aldrich) was dissolved in n,n-dimethylformamide so as to become a concentration of 10 mass % to prepare a spinning liquid (viscosity (temperature: 25° c.): 970 mpa·s). (preparation of apparatus for manufacturing nonwoven fabric) a manufacturing apparatus as shown in fig. 1 comprising the following parts was prepared. (1) reservoir for spinning liquid: syringe(2) air supply equipment: compressor(3) nozzle for extruding liquid (nl): metal nozzle(3)-1 exit for extruding liquid (el): circular, 0.4 mm in diameter (cross-sectional area: 0.13 mm 2 )(3)-2 columnar hollow for liquid (hl): cylindrical, 0.4 mm in diameter(3)-3 outer diameter of nozzle: 0.7 mm(3)-4 number of nozzles: 1(4) nozzle for ejecting gas (ng): metal nozzle(4)-1 exit for ejecting gas (eg): circular, 0.4 mm in diameter (cross-sectional area: 0.13 mm 2 )(4)-2 columnar hollow for gas (hg): cylindrical, 0.4 mm in diameter(4)-3 outer diameter of nozzle: 0.7 mm(4)-4 number of nozzles: 1(4)-5 positions: the nozzles were arranged so that the exit for ejecting gas (eg) was located 5 mm upstream of the exit for extruding liquid (el), and the outer walls of the nozzles were directly contacted with each other.(5) distance between virtual column for liquid (hvl) and virtual column for gas (hvg): 0.3 mm(6) central axis of extruding direction of liquid (al) and central axis of ejecting direction of gas (ag): parallel(7) number of straight lines having the shortest distance between the outer boundary of the cross-section of the columnar hollow for gas (hg) and the outer boundary of the cross-section of the columnar hollow for liquid (hl) when the columnar hollows are cross-sectioned with a plane perpendicular to the central axis of the columnar hollow for gas (hg): 1(8) fibers collection means: net (30 mesh)(8)-1 distance from exit for extruding liquid (el): 300 mm(9) suction apparatus for fibers: blower(10) container for spinning: acrylic 
case having a volume of 1 m 3(10)-1 gas supply equipment: precision air generator (manufactured by apiste, 1400-hdr) (manufacture of nonwoven fabric) fibers were accumulated on the fibers collection means (net) under the following conditions to produce a nonwoven fabric having a mass per unit area of 5 g/m 2 . (a) amount of spinning liquid extruded from nozzle for extruding liquid (nl): 3 cm 3 /hour(b) flow rate of air ejected: 200 m/sec.(c) moving speed of net: 0.65 mm/sec.(d) conditions for suctioning fibers: 30 cm/sec.(e) conditions for supplying gas: 25° c., 27% rh, 1 m 3 /min. comparative example 1 (preparation of spinning liquid) the same spinning liquid as that described in example 1 was prepared. (preparation of apparatus for manufacturing nonwoven fabric) a manufacturing apparatus comprising the following parts was prepared. (1) reservoir for spinning liquid: stainless steel tank(2) air supply equipment: compressor(3) nozzle for extruding liquid (nl): metal nozzle(3)-1 exit for extruding liquid: circular, 0.7 mm in diameter (cross-sectional area: 0.38 mm 2 )(3)-2 columnar hollow for liquid: cylindrical, 0.7 mm in diameter(3)-3 outer diameter of nozzle: 1.1 mm(3)-4 number of nozzles: 1(4) nozzle for ejecting gas (ng): metal nozzle(4)-1 exit for ejecting gas: circular, 2.1 mm in diameter (cross-sectional area: 3.46 mm 2 )(4)-2 columnar hollow for gas: cylindrical, 2.1 mm in diameter(4)-3 outer diameter of nozzle: 2.5 mm(4)-4 number of nozzles: 1(4)-5 positions: the nozzles were arranged so that the exit for ejecting gas was located 2 mm upstream of the exit for extruding liquid, and the nozzle for ejecting gas and the nozzle for extruding liquid were concentrically located. as a result, the exit for ejecting gas has an annular shape having an inner diameter of 1.1 mm and an outer diameter of 2.1 mm (see fig.
3 ).(5) distance between virtual column for liquid and virtual column for gas: 0.4 mm(6) central axis of extruding direction of liquid and central axis of ejecting direction of gas: coaxial(7) number of straight lines having the shortest distance between the inner boundary of the cross-section of the columnar hollow for gas and the outer boundary of the cross-section of the columnar hollow for liquid when the columnar hollows are cross-sectioned with a plane perpendicular to the central axis of the columnar hollow for gas: infinite(8) fibers collection means: net (30 mesh)(8)-1 distance from exit for extruding liquid: 300 mm(9) suction apparatus for fibers: blower(10) container for spinning: acrylic case having a volume of 1 m 3(10)-1 gas supply equipment: precision air generator (manufactured by apiste, 1400-hdr) (manufacture of nonwoven fabric) spinning was carried out under the following conditions to produce a nonwoven fabric, but almost all of extruded spinning liquids did not have a fibrous form, and a nonwoven fabric was not obtained. (a) amount of spinning liquid extruded from nozzle for extruding liquid: 3 cm 3 /hour(b) flow rate of air ejected: 200 m/sec.(c) moving speed of net: 0.65 mm/sec.(d) conditions for suctioning fibers: 30 cm/sec.(e) conditions for supplying gas: 25° c., 27% rh, 1 m 3 /min. example 2 (preparation of spinning liquid) polyacrylonitrile (manufactured by aldrich) was dissolved in n,n-dimethylformamide so as to become a concentration of 10.5 mass % to prepare a spinning liquid (viscosity (temperature: 23° c.): 1100 mpa·s). (preparation of apparatus for manufacturing nonwoven fabric) a manufacturing apparatus as shown in fig. 4 comprising the following parts was prepared. 
(1) reservoir for spinning liquid: syringe(2) air supply equipment: compressor(3) first nozzle for extruding liquid (nl 1 ): metal nozzle(3)-1 first exit for extruding liquid (el 1 ): circular, 0.33 mm in diameter (cross-sectional area: 0.086 mm 2 )(3)-2 first columnar hollow for liquid (hl 1 ): cylindrical, 0.33 mm in diameter(3)-3 outer diameter of nozzle: 0.64 mm(4) second nozzle for extruding liquid (nl 2 ): metal nozzle(4)-1 second exit for extruding liquid (el 2 ): circular, 0.33 mm in diameter (cross-sectional area: 0.086 mm 2 )(4)-2 second columnar hollow for liquid (hl 2 ): cylindrical, 0.33 mm in diameter(4)-3 outer diameter of nozzle: 0.64 mm(5) nozzle for ejecting gas (ng): metal nozzle(5)-1 exit for ejecting gas (eg): circular, 0.33 mm in diameter (cross-sectional area: 0.086 mm 2 )(5)-2 columnar hollow for gas (hg): cylindrical, 0.33 mm in diameter(5)-3 outer diameter of nozzle: 0.64 mm(5)-4 positions: the nozzles were arranged so that the exit for ejecting gas (eg) was located 2 mm upstream of each of the first exit for extruding liquid (el 1 ) and the second exit for extruding liquid (el 2 ), and the outer walls of the nozzles were directly contacted with each other.(6)-1 distance between first virtual column for liquid (hvl 1 ) and virtual column for gas (hvg): 0.31 mm(6)-2 first central axis of extruding direction of liquid (al 1 ) and central axis of ejecting direction of gas (ag): parallel(6)-3 number of straight lines (l 1 ) having the shortest distance between the outer boundary of the cross-section of the columnar hollow for gas (hg) and the outer boundary of the cross-section of the first columnar hollow for liquid (hl 1 ) when the columnar hollows are cross-sectioned with a plane perpendicular to the central axis (ag) of the columnar hollow for gas (hg): 1(7)-1 distance between second virtual column for liquid (hvl 2 ) and virtual column for gas (hvg): 0.31 mm(7)-2 second central axis of extruding direction of liquid (al 2 ) and central 
axis of ejecting direction of gas (ag): parallel(7)-3 number of straight lines (l 2 ) having the shortest distance between the outer boundary of the cross-section of the columnar hollow for gas (hg) and the outer boundary of the cross-section of the second columnar hollow for liquid (hl 2 ) when the columnar hollows are cross-sectioned with a plane perpendicular to the central axis (ag) of the columnar hollow for gas (hg): 1(8)-1 fibers collection means: a net (a mesh-type conveyor net of which the surface was coated with a fluororesin) was arranged so that the surface thereof for capturing fibers was perpendicular to the center axis of the extruding direction of each spinning liquid.(8)-2 distance between fibers collection means and first and second exits for extruding liquid (el 1 , el 2 ): 150 mm(9) suction apparatus: suction box (suction diameter: 50 mm×230 mm)(10) container for spinning: acrylic case having a volume of 1 m 3(10)-1 gas supply equipment: precision air generator (manufactured by apiste, 1400-hdr)(10)-2 exhaust apparatus: fan connected to suction box (suction apparatus) (manufacture of nonwoven fabric) fibers were accumulated on the fibers collection means (net) under the following conditions to produce a nonwoven fabric (average fiber diameter: approximately 300 nm). a nonwoven fabric having an excellent uniformity could be produced without the scattering of fibers and with a high productivity. (a) amount of spinning liquid extruded from the first nozzle for extruding liquid (nl 1 ) and the second nozzle for extruding liquid (nl 2 ): 3 g/hour(b) flow rate of air ejected: 250 m/sec.(c) amount of air ejected: 1.3 l/min.(d) moving speed of net: 30 cm/min.(e) conditions for suction of suction box: maximum air volume 18 m 3 /min. (0.1 kw)(f) conditions for supplying gas: air (23° c., 50% rh) was supplied at a flow rate of 200 l/min.(g) conditions for exhausting gas: 201.3 l/min. 
or more example 3 (preparation of spinning liquid) polyacrylonitrile (manufactured by aldrich) was dissolved in n,n-dimethylformamide so as to become a concentration of 8 mass % to prepare spinning liquid a (viscosity (temperature: 23° c.): 500 mpa·s). further, polyacrylonitrile (manufactured by aldrich) was dissolved in n,n-dimethylformamide so as to become a concentration of 11 mass % to prepare spinning liquid b (viscosity (temperature: 23° c.): 1600 mpa·s). (preparation of apparatus for manufacturing nonwoven fabric) the manufacturing apparatus described in example 2 was prepared. (manufacture of nonwoven fabric) fibers were accumulated on the fibers collection means (net) under the following conditions to produce a nonwoven fabric. a nonwoven fabric having an excellent uniformity could be produced without the scattering of fibers and with a high productivity. fibers having an average fiber diameter of 0.2 μm and fibers having an average fiber diameter of 0.4 μm were uniformly mixed in the nonwoven fabric. (a) extruding condition of the first nozzle for extruding liquid (nl 1 ): spinning liquid a was extruded at a rate of 3 g/hour.(b) extruding condition of the second nozzle for extruding liquid (nl 2 ): spinning liquid b was extruded at a rate of 3 g/hour.(c) flow rate of air ejected: 250 m/sec.(d) amount of air ejected: 1.3 l/min.(e) moving speed of net: 30 cm/min.(f) conditions for suction of suction box: maximum air volume 18 m 3 /min. (0.1 kw)(g) conditions for supplying gas: air (23° c., 50% rh) was supplied at a flow rate of 200 l/min.(h) conditions for exhausting gas: 201.3 l/min. or more example 4 (preparation of spinning liquid) polyacrylonitrile (manufactured by aldrich) was dissolved in n,n-dimethylformamide so as to become a concentration of 8 mass % to prepare spinning liquid c (viscosity (temperature: 23° c.): 500 mpa·s). 
further, a pvdf (polyvinylidene fluoride) copolymer (manufactured by arkema) was dissolved in n,n-dimethylformamide so as to become a concentration of 20 mass % to prepare spinning liquid d (viscosity (temperature: 23° c.): 680 mpa·s). (preparation of apparatus for manufacturing nonwoven fabric) the manufacturing apparatus described in example 2 was prepared. (manufacture of nonwoven fabric) fibers were accumulated on the fibers collection means (net) under the following conditions to produce a nonwoven fabric. a nonwoven fabric having an excellent uniformity could be produced without the scattering of fibers and with a high productivity. acrylic fibers having an average fiber diameter of 0.2 μm and pvdf fibers having an average fiber diameter of 0.2 μm were uniformly mixed in the nonwoven fabric. (a) extruding condition of the first nozzle for extruding liquid (nl 1 ): spinning liquid c was extruded at a rate of 3 g/hour.(b) extruding condition of the second nozzle for extruding liquid (nl 2 ): spinning liquid d was extruded at a rate of 3 g/hour.(c) flow rate of air ejected: 250 m/sec.(d) amount of air ejected: 1.3 l/min.(e) moving speed of net: 30 cm/min.(f) conditions for suction of suction box: maximum air volume 18 m 3 /min. (0.1 kw)(g) conditions for supplying gas: air (23° c., 50% rh) was supplied at a flow rate of 200 l/min.(h) conditions for exhausting gas: 201.3 l/min. or more example 5 (preparation of spinning liquid) polyacrylonitrile (manufactured by aldrich) was dissolved in n,n-dimethylformamide so as to become a concentration of 8 mass % to prepare spinning liquid e (viscosity (temperature: 23° c.): 500 mpa·s). further, polyacrylonitrile (manufactured by aldrich) was dissolved in dimethyl sulfoxide so as to become a concentration of 8 mass % to prepare spinning liquid f (viscosity (temperature: 23° c.): 1800 mpa·s). (preparation of apparatus for manufacturing nonwoven fabric) the manufacturing apparatus described in example 2 was prepared. 
(manufacture of nonwoven fabric) fibers were accumulated on the fibers collection means (net) under the following conditions to produce a nonwoven fabric. a nonwoven fabric having an excellent uniformity could be produced without the scattering of fibers and with a high productivity. acrylic fibers having an average fiber diameter of 0.2 μm and acrylic fibers having an average fiber diameter of 0.4 μm were uniformly mixed in the nonwoven fabric. (a) extruding condition of the first nozzle for extruding liquid (nl 1 ): spinning liquid e was extruded at a rate of 3 g/hour.(b) extruding condition of the second nozzle for extruding liquid (nl 2 ): spinning liquid f was extruded at a rate of 3 g/hour.(c) flow rate of air ejected: 250 m/sec.(d) amount of air ejected: 1.3 l/min.(e) moving speed of net: 30 cm/min.(f) conditions for suction of suction box: maximum air volume 18 m 3 /min. (0.1 kw)(g) conditions for supplying gas: air (23° c., 50% rh) was supplied at a flow rate of 200 l/min.(h) conditions for exhausting gas: 201.3 l/min. 
or more

reference signs list

nl, nl n : nozzle for extruding liquid
nl 1 : first nozzle for extruding liquid
nl 2 : second nozzle for extruding liquid
ng: nozzle for ejecting gas
el: exit for extruding liquid
el 1 : first exit for extruding liquid
el 2 : second exit for extruding liquid
eg: exit for ejecting gas
hl: columnar hollow for liquid
hl 1 : first columnar hollow for liquid
hl 2 : second columnar hollow for liquid
hg: columnar hollow for gas
hvl: virtual column for liquid
hvl 1 : first virtual column for liquid
hvl 2 : second virtual column for liquid
hvg: virtual column for gas
al: central axis of the extruding direction (liquid)
al 1 : first central axis of the extruding direction (liquid)
al 2 : second central axis of the extruding direction (liquid)
ag: central axis of the ejecting direction (gas)
c: plane perpendicular to the central axis of the columnar hollow for gas
l 1 : straight line having the shortest distance between outer boundaries
l 1 : straight line
l 2 : straight line
12 : first member
22 : second member
32 : third member
14 , 24 , 34 : supply end
16 , 26 , 36 : opposing exit end
18 : first supply slit
38 : first gas slit
20 : gas jet space
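The circular exit areas quoted in the examples follow from the area of a circle, A = πd²/4; a quick arithmetic check of the figures as rounded in the text (illustrative only, not part of the specification):

```python
import math

def circle_area_mm2(d_mm):
    """Cross-sectional area of a circular exit of diameter d_mm (mm^2)."""
    return math.pi * d_mm ** 2 / 4

print(round(circle_area_mm2(0.4), 2))    # Example 1 exits          -> 0.13
print(round(circle_area_mm2(0.33), 3))   # Example 2 exits          -> 0.086
print(round(circle_area_mm2(0.7), 2))    # Comp. Example 1, liquid  -> 0.38
print(round(circle_area_mm2(2.1), 2))    # Comp. Example 1, gas bore -> 3.46
```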
sensor and heat exchanger
a sensor (10), comprising a housing (11), a circuit board (12), and a sensing chip (121) provided on the circuit board (12). an accommodating cavity (110) is provided in the housing (11); the housing (11) comprises a top wall (112), a bottom wall (111), and a side wall (113); the sensor (10) is provided with a second channel (151) for discharging liquid water; the second channel (151) runs through the bottom wall (111) or the side wall (113). when the sensor (10) is applied in the heat exchanger (100) and monitors the temperature and/or humidity near an outer surface of the heat exchanger (100), the monitoring accuracy may be improved.
1 . a sensor, comprising: a housing, a circuit board, and a sensor chip fixed on the circuit board; wherein a material of the housing is metal, the housing defines a receiving cavity and a first channel extending through the housing, and the first channel is in fluid communication with the receiving cavity and an outside of the sensor; wherein the circuit board is at least partially received in the receiving cavity, and at least a part of the circuit board is bonded and fixed to the housing by a thermal conductive glue; and wherein the sensor chip is adapted for sensing at least one of a humidity signal and a temperature signal of an environment in the receiving cavity. 2 . the sensor according to claim 1 , wherein the material of the housing includes at least one of aluminum, stainless steel and copper; a material of the thermal conductive glue includes at least one of aluminum nitride, boron nitride, silicon nitride, aluminum oxide, magnesium oxide and silicon oxide; and a material of the circuit board includes aluminum nitride and/or aluminum oxide. 3 . the sensor according to claim 2 , wherein the housing defines a first opening and a second opening, a second channel extending through the housing is formed between the first opening and the second opening, and the second channel is adapted for discharging liquid water out of the housing; and wherein the first opening is closer to the receiving cavity than the second opening, and the second opening and the circuit board are located on opposite sides of the first opening, respectively. 4 . 
the sensor according to claim 3 , wherein the housing comprises a top wall, a bottom wall and a side wall, the side wall is provided on an outer peripheral side of the receiving cavity, and the side wall connects the top wall and the bottom wall; and wherein the circuit board is fixed to the side wall or to the top wall by the thermal conductive glue, the second channel is provided extending through the bottom wall or the side wall, and the first channel is provided extending through the side wall or the top wall. 5 . the sensor according to claim 4 , wherein the sensor further defines a third channel, the third channel is capable of being used for a wire to enter and exit, the third channel is provided extending through the side wall or the top wall, and each of the third channel and the first channel is provided at different positions of the side wall of the housing. 6 . the sensor according to claim 3 , wherein the second channel is one or more of a through hole, a slit or a gap. 7 . the sensor according to claim 4 , wherein the bottom wall includes an inclined wall and a matching portion, the inclined wall is inclinedly arranged in a direction away from the top wall with respect to a wall thickness direction of the side wall; the matching portion protrudes from an end of the inclined wall in a direction away from the top wall; and the second channel is provided extending through the matching portion. 8 . the sensor according to claim 7 , wherein at least a part of the inclined wall is connected to a lower end of the side wall, and the second channel is a circular through hole extending through the matching portion. 9 . the sensor according to claim 4 , wherein at least a part of the bottom wall forms as a concave wall, a center of the concave wall is recessed inwardly relative to an edge of the concave wall toward the top wall, and the second channel is closer to the side wall than the center of the concave wall. 10 . 
the sensor according to claim 4 , wherein an inner surface of the bottom wall is a straight wall surface, an included angle between the inner surface and a wall thickness direction of the side wall is recorded as a first included angle, and the first included angle is greater than or equal to 0° and less than 90°. 11 . the sensor according to claim 1 , wherein at least a part of the inner surface of the housing is coated with a hydrophilic coating or a hydrophobic coating. 12 . the sensor according to claim 1 , wherein the sensor chip is an integrated chip integrating a humidity detection function and a temperature detection function. 13 . the sensor according to claim 4 , wherein the housing of the sensor is further provided with a stab portion extending from the side wall to a side away from the receiving cavity, and a plurality of saw-tooth portions are provided on an outer periphery of the stab portion. 14 . a heat exchanger, wherein the heat exchanger comprises a sensor according to claim 1 , the heat exchanger includes at least one collecting pipe, a plurality of heat exchange tubes and at least one fin, the heat exchange tube is fixed to the collecting pipe, an inner channel of the heat exchange tube is in fluid communication with an inner cavity of the collecting pipe, the fin is located between two adjacent heat exchange tubes; and wherein the sensor is fixed to the fin, and the housing of the sensor is in contact with at least a part of a surface of the fin and/or a surface of the heat exchange tube. 15 . 
the heat exchanger according to claim 14 , wherein the heat exchanger includes a first collecting pipe and a second collecting pipe, the heat exchange tube includes a first end and a second end located at opposite ends of the heat exchange tube in a length direction, the first end is connected to the first collecting pipe, the second end is connected to the second collecting pipe, an inner channel of the heat exchange tube is in fluid communication with an inner cavity of the first collecting pipe and an inner cavity of the second collecting pipe; and wherein in the length direction of the heat exchange tube, one of the first collecting pipe and the second collecting pipe is closer to the sensor than the other of the first collecting pipe and the second collecting pipe. 16 . a sensor, comprising: a housing defining a receiving cavity, a first channel and a second channel, the first channel being in fluid communication with the receiving cavity, the first channel being adapted for air entering in the receiving cavity or leaving from the receiving cavity, the second channel being in fluid communication with the receiving cavity, and the second channel being adapted for discharging liquid water out of the housing; a circuit board being at least partially received in the receiving cavity, the circuit board being fixed to the housing, the circuit board being disposed above the second channel in a top-to-bottom direction; and a sensor chip being fixed on the circuit board, the sensor chip being adapted for sensing at least one of a humidity signal and a temperature signal of an environment in the receiving cavity; wherein each of the first channel and the second channel is provided at different positions of the housing, and the second channel is closer to a bottom of the sensor than the first channel in the top-to-bottom direction. 17 . 
the sensor according to claim 16, wherein the housing comprises a top wall, a bottom wall and a side wall, the top wall and the bottom wall are located at opposite ends in the top-to-bottom direction, the side wall is provided around an outer peripheral side of the receiving cavity, and the side wall connects the top wall and the bottom wall; wherein the first channel is provided at at least one of the side wall or the top wall and extends therethrough, and the second channel is provided at the bottom wall and extends therethrough.

18. the sensor according to claim 17, wherein at least a part of the circuit board is fixed to the top wall through a glue, the second channel is located below the circuit board, and a thickness direction of the circuit board is co-directional with an axial direction of the second channel.

19. the sensor according to claim 18, wherein the bottom wall extends from the side wall in a direction away from the top wall, the bottom wall is funnel-shaped, and the second channel is disposed in a central part of the bottom wall.

20. the sensor according to claim 18, further defining a third channel, the third channel being used for a wire to go therethrough, both of the third channel and the second channel being provided at the side wall and extending therethrough, and the third channel and the second channel being disposed at two opposite sides of the side wall.
cross-reference to related applications

this application is a bypass continuation of international (pct) patent application no. pct/cn2020/112298, filed on aug. 29, 2020, which claims priority to chinese patent application no. 201910810768.8, filed on aug. 29, 2019 and titled "sensor, heat exchanger and heat exchange system", the entire content of which is incorporated herein by reference.

technical field

the present disclosure relates to the field of sensors, and in particular to sensors and heat exchangers.

background

frosting of a heat exchanger causes its heat transfer coefficient to decrease, and the air ducts between the fins become blocked so as to reduce the air volume, which directly affects the heat exchange efficiency of the heat exchanger of the heat pump system and the pressure drop on the air side. therefore, it is necessary to detect frosting of the heat exchanger. in related technologies, a temperature and humidity sensor is used to detect the temperature and humidity of the heat exchanger; however, the measurement accuracy of such sensors still needs to be improved.

summary

according to one aspect of the present disclosure, a sensor is provided. the sensor includes a housing, a circuit board, and a sensor chip fixed on the circuit board; wherein a material of the housing is metal, the housing defines a receiving cavity and a first channel extending through the housing, and the first channel is in fluid communication with the receiving cavity and an outside of the sensor; wherein the circuit board is at least partially received in the receiving cavity, and at least a part of the circuit board is bonded and fixed to the housing by thermal conductive glue; and wherein the sensor chip is adapted for sensing at least one of a humidity signal and a temperature signal of an environment in the receiving cavity.
in the present disclosure, the housing of the sensor is made of metal, and at least a part of the circuit board is bonded and fixed to the housing through the thermal conductive glue. this is beneficial to transferring the environment temperature sensed by the metal housing to the circuit board through the thermal conductive glue, so that the environment temperature where the sensor chip is located is close to the temperature of the housing. the first channel is conducive to the communication between the air in the receiving cavity and the air outside the sensor. correspondingly, it is more beneficial to ensure that the temperature and humidity environment where the sensor chip is located is closer to the surface temperature and humidity environment of an object to be detected, thereby improving the accuracy of the corresponding detection signal of the sensor chip.

according to another aspect of the present disclosure, a heat exchanger is provided and includes the above-mentioned sensor. the heat exchanger is a multi-channel heat exchanger or a tube-fin heat exchanger. the sensor is fixed on an outer surface of the heat exchanger and is in contact with at least a part of the outer surface of the heat exchanger. the surface temperature of the heat exchanger sensed by the metal housing can be transferred to the circuit board through the thermal conductive glue, so that the temperature of the environment where the sensor chip is located is close to the temperature of the housing. the first channel is conducive to the communication between the air in the receiving cavity and the air outside the sensor. correspondingly, it is more beneficial to ensure that the temperature and humidity environment where the sensor chip is located is closer to the surface temperature and humidity environment of the heat exchanger, thereby improving the accuracy of the corresponding detection signal of the sensor chip.

brief description of drawings

fig.
1 is a schematic structural view of a sensor in accordance with an embodiment of the present disclosure;

fig. 2 is a schematic view of an exploded structure of the sensor in the embodiment of the present disclosure shown in fig. 1;

fig. 3 is a schematic top view of the embodiment of the present disclosure shown in fig. 2;

fig. 4 is a schematic cross-sectional view of the sensor of the embodiment shown in fig. 3 along the a-a direction;

fig. 5 is a schematic structural view of a housing of the sensor in accordance with another embodiment of the present disclosure;

fig. 6 is a schematic view of an exploded structure of the sensor in the another embodiment of the present disclosure shown in fig. 5;

fig. 7 is a schematic top view of the embodiment of the present disclosure shown in fig. 6;

fig. 8 is a schematic cross-sectional view of the housing of the sensor of the another embodiment shown in fig. 7 along the b-b direction;

fig. 9 is a schematic structural view of a heat exchanger provided with a sensor in an embodiment of the present disclosure; and

fig. 10 is a schematic view of an exemplary heat exchange system of the present disclosure.

reference signs

sensor 10, 20; heat exchanger 100; heat exchange tube 20, 21, 22; first end of the heat exchange tube 211, 212; second end 212, 221; fin 30; wave crest portion 31; wave trough portion 32; side wall 33; heat exchange system 1000; compressor 1; first sensor 2; throttling device 3; second sensor 4; reversing device 5.

detailed description

the exemplary embodiments will be described in detail here, and examples thereof are shown in the drawings. when the following description refers to the drawings, unless otherwise indicated, the same numbers in different drawings indicate the same or similar elements. the embodiments described in the following do not represent all embodiments consistent with the present disclosure.
on the contrary, they are merely examples of devices and methods consistent with some aspects of the present disclosure as detailed in the appended claims. the terms used in the present disclosure are only for the purpose of describing specific embodiments, and are not intended to limit the present disclosure. in the description of the present disclosure, it should be understood that the terms "center", "longitudinal", "transverse", "length", "width", "thickness", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", "clockwise", "counterclockwise" and other directions or positional relationships are based on the positions or positional relationships shown in the drawings, and are only for the convenience of describing the disclosure and simplifying the description. they do not indicate or imply that the devices or elements referred to must have specific orientations, or be constructed and operated in specific orientations, and thus they cannot be understood as limitations of the present disclosure. in addition, the terms "first" and "second" are only used for descriptive purposes, and cannot be understood as indicating or implying relative importance or implicitly indicating the number of indicated technical features. thus, the features defined with "first" and "second" may explicitly or implicitly include one or more of these features. in the description of the present disclosure, "a plurality of" means two or more than two, unless otherwise specifically defined. in the description of the present disclosure, it should be noted that, unless otherwise clearly specified and limited, the terms "installation", "connected" and "connection" should be understood in a broad sense.
for example, it can be a fixed connection, a detachable connection or an integral connection; it can be a mechanical connection or an electrical connection; it can be a direct connection or an indirect connection through an intermediate medium, including the connection between two internal elements or the interaction between two elements. for those of ordinary skill in the art, the specific meanings of the above-mentioned terms in the present disclosure can be understood according to specific circumstances. in the present disclosure, unless otherwise clearly defined and limited, a first feature located "upper" or "lower" of a second feature may mean that the first feature and the second feature are in direct contact with each other, or that the first feature and the second feature are not in direct contact but contact through other features therebetween. moreover, the first feature located "above", "over" or "on top of" the second feature includes the first feature being directly above or obliquely above the second feature, or it simply means that the level of the first feature is higher than that of the second feature. the first feature located "below", "under" or "at the bottom of" the second feature includes the first feature being directly below or obliquely below the second feature, or it simply means that the level of the first feature is lower than that of the second feature. the exemplary embodiments of the present disclosure will be described in detail below with reference to the drawings. in the case of no conflict, the following embodiments and features in the embodiments can be mutually supplemented or combined with each other. the terms used in the present disclosure are only for the purpose of describing specific embodiments, and are not intended to limit the present disclosure. the singular forms of "a", "said" and "the" used in the present disclosure and the appended claims are also intended to include plural forms, unless the context clearly indicates otherwise.
the exemplary embodiments of the present disclosure will be described in detail below with reference to the drawings. in the case of no conflict, the following embodiments and features in the embodiments can be combined with each other. when heating in winter, the temperature of an outdoor heat exchanger is always lower than the environment air temperature. when it is lower than the dew point temperature of the environment air, condensed water is generated on surfaces of fins of the heat exchanger. when the temperature of the heat exchanger further drops below 0° c., the condensed water turns into frost and adheres to the surfaces of the fins. when the frosting is severe, the air ducts between the fins are partially or completely occupied by the frost. this causes the heat transfer coefficient of the heat exchanger to decrease and blocks the air ducts between the fins, which reduces the air volume and directly affects the heat exchange efficiency of the heat exchanger of a heat pump system and the pressure drop on the air side. therefore, because there is a possibility of frosting on the surface of the heat exchanger, the accuracy of monitoring frosting needs to be improved, so that measures can be taken in advance to avoid frosting and maintain the heat exchange efficiency of the heat pump air conditioning system. in order to monitor the frosting of the heat exchanger, some related technologies use a temperature sensor to determine whether the heat exchanger is frosted based on a 0° c. threshold. however, determining frost formation from the temperature signal alone is error-prone. for example, the humidity in the yangtze river basin is high: although the environment temperature t>0° c., the surface of the heat exchanger may already be frosted. the northern area is dry and has low humidity: although the temperature t<0° c., there may be no frost on the surface of the heat exchanger.
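the two climate examples above can be made concrete with a small numerical sketch. the dew-point computation below uses the well-known magnus approximation; the climate values, the decision rules and all function names are illustrative assumptions, not taken from the disclosure:

```python
import math

def dew_point_c(temp_c, rh_percent):
    """magnus approximation of the dew point temperature (deg c).

    standard magnus coefficients for water (b = 17.62, c = 243.12 deg c),
    reasonable between roughly -45 deg c and +60 deg c.
    """
    gamma = math.log(rh_percent / 100.0) + (17.62 * temp_c) / (243.12 + temp_c)
    return 243.12 * gamma / (17.62 - gamma)

def frost_risk_temperature_only(air_temp_c):
    # the flawed rule discussed above: frost whenever air temperature <= 0 deg c
    return air_temp_c <= 0.0

def frost_risk_dew_point(air_temp_c, rh_percent, surface_temp_c):
    # frost forms when the (colder) surface is at or below both the dew
    # point of the ambient air and the freezing point
    dp = dew_point_c(air_temp_c, rh_percent)
    return surface_temp_c <= dp and surface_temp_c <= 0.0

# humid climate: air at +3 deg c, 90 % rh, surface at -1.5 deg c
print(frost_risk_temperature_only(3.0))          # False (misses the frost)
print(frost_risk_dew_point(3.0, 90.0, -1.5))     # True

# dry climate: air at -5 deg c, 20 % rh, surface at -7 deg c
print(frost_risk_temperature_only(-5.0))         # True (false alarm)
print(frost_risk_dew_point(-5.0, 20.0, -7.0))    # False
```

with these assumed numbers the dew point of +3 deg c / 90 % air is about +1.5 deg c, so a sub-zero surface frosts even though the air is above freezing, while -5 deg c / 20 % air has a dew point near -25 deg c and produces no frost.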
there are also some related technologies that use the dew point temperature to determine whether frost is formed, which requires temperature and humidity sensors to detect the environment temperature and humidity and to calculate the dew point temperature. at the same time, the temperature of the heat exchanger is detected and compared with the dew point temperature to determine whether the heat exchanger has frost. this calculation method is more complicated. according to the regnault principle, when a certain volume of humid air is uniformly cooled under a constant total pressure until the water vapor in the air reaches a saturated state, this state is called the dew point. in other words, if a smooth metal surface is placed in air with a relative humidity lower than 100% and allowed to cool, then when the temperature drops to a certain value, the relative humidity near the surface reaches 100%, and dew (or frost) will form on the surface. the sensor used for the heat exchanger in the related art detects the temperature and humidity in the environment, and cannot accurately reflect the surface temperature and humidity of the heat exchanger. in fact, the surface temperature of the heat exchanger is lower than the environment temperature, and the humidity on the surface of the heat exchanger is greater than the environment humidity. when the humidity sensor detects that the humidity is close to 100%, frost has formed on the surface of the heat exchanger. the sensor of the embodiment of the present disclosure adopts a metal housing with good thermal conductivity, a ceramic circuit board such as aluminum nitride, and a thermal conductive sealant, so that the temperature of the housing and the temperature of the ceramic circuit board can be close to the surface temperature of the heat exchanger. therefore, the humidity sensor on the circuit board can detect the relative humidity of the surface of the sensor more accurately.
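because the chip reads a surface-referenced relative humidity, the detection reduces to a simple threshold check. the sketch below is illustrative only: the 98 % threshold and the sample values are assumptions, not values from the disclosure:

```python
def frost_alert(surface_rh_percent, threshold=98.0):
    """flag imminent frosting from surface-referenced relative humidity.

    a sensor thermally coupled to the heat exchanger reads an rh that
    tracks the exchanger surface, so a reading near 100 % means
    condensation or frost is starting on that surface; no dew-point
    calculation is needed. the threshold is an assumed example value.
    """
    return surface_rh_percent >= threshold

readings = [72.0, 85.5, 93.0, 98.6]   # hypothetical surface rh samples (%)
print([frost_alert(rh) for rh in readings])   # [False, False, False, True]
```

in practice the alert would be forwarded over the sensor wire so the system can start a defrost action before the air ducts between the fins are blocked.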
there is no need to calculate the dew point temperature. when the humidity sensor detects that the current humidity signal is close to 100%, it indicates that the surface humidity (rh) of the heat exchanger is also close to 100%, so the surface of the heat exchanger will be frosted. by reporting this frosting information and acting on it, frosting on the surface can be delayed. embodiments of the present disclosure provide sensors that can relatively improve the accuracy of temperature and/or humidity monitoring. the use of the sensor in conjunction with the heat exchanger can improve the accuracy of temperature and/or humidity monitoring on or near the surface of the heat exchanger. when the heat exchanger cooperates with the sensor in the operation of the heat exchange system, the accuracy of monitoring frost or fog on the surface can be relatively improved. it is easy to understand that, in addition to being used in heat exchangers and heat pump systems, these sensors can also be used in other occasions where temperature and/or humidity need to be monitored. there is no limitation here. as shown in fig. 1 and fig. 2, specific embodiments of the sensor 10 of the present disclosure will be described, in conjunction with other drawings when necessary. fig. 1 is a schematic view of a structure of the sensor 10 according to an embodiment of the present disclosure. fig. 2 is a schematic view of an exploded structure of the sensor 10 in the embodiment of the present disclosure shown in fig. 1. as shown in fig. 1, the sensor 10 includes a housing 11. the housing 11 is a metal housing, which has good thermal conductivity. the housing 11 defines a receiving cavity 110. in some embodiments, the housing 11 includes a bottom wall 111, a top wall 112 and a side wall 113. the top wall 112 and the bottom wall 111 are located at opposite ends in a height direction of the sensor (the x direction in fig. 9).
the side wall 113 connects the top wall 112 and the bottom wall 111. the receiving cavity 110 is formed by enclosing the top wall 112, the bottom wall 111 and the side wall 113. in other words, the top wall 112 and the bottom wall 111 are located at opposite ends of the receiving cavity 110 in the height direction, the side wall 113 is disposed on a peripheral side of the receiving cavity 110, and the side wall 113 is connected to the top wall 112 and the bottom wall 111. it should be noted that the sensor 10 described in the embodiment in fig. 1 is substantially a cuboid, and the bottom wall 111 and the top wall 112 are substantially square. in some other embodiments, the structure of the sensor 10 may also be a cube, a cylinder, etc., which can be set as required, and there is no limitation here. as shown in fig. 2, the sensor 10 includes a circuit board 12. the circuit board 12 is provided with at least one sensor chip 121. the sensor chip 121 can sense at least one of a humidity signal and a temperature signal of the air in the receiving cavity 110. the circuit board 12 is at least partially received in the receiving cavity 110, and the circuit board 12 is fixed to the housing 11. specifically, the circuit board 12 and the top wall 112 may be bonded and fixed by a thermal conductive glue 13. in some other embodiments, the circuit board 12 is directly or indirectly connected to the side wall 113. the material of a main body of the circuit board 12 may be a ceramic material. the ceramic material can be one of, or a mixture of, aluminum nitride and aluminum oxide. the thermal conductive glue 13 includes a polymer bonding material and a thermal conductive material, and is prepared by filling the thermal conductive material into the polymer bonding material. optionally, the thermal conductive material includes one or more of aluminum nitride, boron nitride, silicon nitride, aluminum oxide, magnesium oxide, and silicon oxide.
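the benefit of filling the polymer with a ceramic thermal conductor can be illustrated with a one-dimensional conduction estimate. all dimensions, conductivities and the leakage heat flow below are assumed for illustration and are not values from the disclosure:

```python
def conduction_resistance(thickness_m, conductivity_w_mk, area_m2):
    # one-dimensional steady-state conduction: R = t / (k * A), in K/W
    return thickness_m / (conductivity_w_mk * area_m2)

# assumed illustrative values: 10 mm x 10 mm bond area, 0.2 mm glue layer
area = 10e-3 * 10e-3
glue_filled   = conduction_resistance(0.2e-3, 3.0, area)   # filled glue, k ~ 3 W/(m K)
glue_unfilled = conduction_resistance(0.2e-3, 0.2, area)   # plain polymer, k ~ 0.2 W/(m K)

heat_flow_w = 0.05  # assumed few tens of milliwatts leaking through the joint
print(f"filled glue drop:   {heat_flow_w * glue_filled * 1000:.1f} mK")    # ~33 mK
print(f"unfilled glue drop: {heat_flow_w * glue_unfilled * 1000:.1f} mK")  # ~500 mK
```

with these assumed numbers the filled glue keeps the circuit board within a few hundredths of a kelvin of the housing, which is the point of minimizing the thermal resistance of the joint.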
the thermal conductivity of the thermal conductive glue 13 is relatively high, or in other words, the thermal resistance of the thermal conductive glue 13 is relatively small. with this arrangement, when the sensor 10 is used to test the temperature of the heat exchanger surface, the temperature of the sensor can be much closer to the surface temperature of the heat exchanger. in this embodiment, the circuit board 12 of the sensor 10 and the top wall 112 are connected by the thermal conductive glue 13. in this embodiment, the side wall 113 and the top wall 112 can also be connected by the thermal conductive glue 13 or directly welded. at least a part of an inner surface 114 of the housing 11 of the sensor 10 is coated with a coating 115. the coating 115 is a hydrophilic coating or a hydrophobic coating. the coating 115 facilitates the drainage of the condensed water in the housing 11: the condensed water does not collect in the coated area and will not cling to the inner walls of the sensor, thereby avoiding affecting the accuracy with which the sensor measures the surface humidity of the heat exchanger. as shown in fig. 4, an inner surface of the side wall 113 of the housing 11 of the sensor 10 is entirely coated with the coating 115, and an inner surface of the bottom wall 111 is also coated with the coating 115. with this arrangement, the inner surface 114 of the housing 11 of the sensor 10 facilitates the drainage of the condensed water and helps the sensor 10 to measure the humidity on or near the surface of the heat exchanger. the sensor 10 defines a first channel 141 which allows the air to enter and exit. the first channel 141 extends through the side wall 113 or the top wall 112. as shown in fig. 1 or fig. 2 in combination with fig. 3 and fig. 4, the first channel 141 in this embodiment is located at the side wall 113. the first channel 141 is a through hole.
the diameter of the through hole is 0.1 μm to 1 mm. this arrangement facilitates the ingress and egress of the air, and can prevent dust and other debris from entering the receiving cavity 110 of the sensor 10 and damaging the sensor 10. in theory, the diameter of the through hole should be as small as possible; however, due to process and cost constraints, it is sufficient that it meets actual needs. in some other embodiments, the diameter of the through hole is 100 nm to 500 μm. in some other embodiments, the first channel 141 may also have other shapes, as long as the need can be achieved, and there is no limitation. the number of the first channels 141 may be one or more than two, as long as the test requirements are met, and it is not limited here. the sensor 10 defines a third channel 142 which allows a wire (not shown in the drawings) to enter and exit. the wire is used to electrically connect the sensor 10 and other devices. the detection signal data of the sensor 10 can be imported through the wire into other data processing equipment, data collection equipment, or other equipment. the third channel 142 extends through the side wall 113 or the top wall 112. the third channel 142 and the first channel 141 are staggered. in other words, the third channel 142 and the first channel 141 are disposed at different positions of the housing 11. in some embodiments, the axial direction of the third channel 142 may be parallel to or coincide with the axial direction of the first channel 141. as shown in figs. 1, 2 and 4, the third channel 142 is formed extending through the side wall 113. there is only one third channel 142, and the third channel 142 and the first channel 141 are disposed on opposite sides of the side wall 113. in some other embodiments, there may be more than two third channels 142, which can be set as required. the third channel 142 may be a through hole.
it should be noted that the aperture of the third channel 142 is adapted to the size of the wire passing through it. this arrangement can prevent dust and other debris from entering the receiving cavity 110 of the sensor 10 and damaging the sensor 10. in some embodiments, a sealant may be used to fix the wire and the housing 11 together, in order to prevent the wire from falling off when pulled by an external force. in other embodiments, the third channel 142 may also overlap with the first channel 141; that is, the air inlet and outlet channel and the wire channel can be the same. it is noted that, in the sensor shown in figs. 1 to 8, at least a part of the wall portion 1420 forming the third channel 142 is formed by the side wall 113 extending outwardly in a direction away from the receiving cavity 110. the wall portion 1420 can be used to fix the wire, so that the wire is firmly fixed and, to a certain extent, prevented from falling off. in some other embodiments, the wall portion 1420 may not be provided. in some environments with high humidity and low temperature, condensed water may be generated in the housing of the sensor. if the condensed water cannot be discharged in time, the test results will be inaccurate, and in serious cases it may even damage the electronic components and thus the sensor. in the present disclosure, the sensor 10 defines a second channel 151 for liquid water discharge. the second channel 151 extends through the bottom wall 111 or the side wall 113. the housing 11 defines a first opening 1511 and a second opening 1522, and the second channel 151 is formed between the first opening 1511 and the second opening 1522. one of the first opening 1511 and the second opening 1522 is located on the inner surface of the housing 11, and the other of the first opening 1511 and the second opening 1522 is located on the outer surface of the housing 11.
for example, the first opening 1511 is closer to the receiving cavity 110 than the second opening 1522. in addition, the second opening 1522 and the circuit board 12 are located on opposite sides of the first opening 1511, respectively. in some embodiments, the inner surface of the bottom wall 111 is a straight wall surface. an included angle between the inner surface of the bottom wall 111 and a wall thickness direction of the side wall 113 is recorded as a first included angle. the first included angle is greater than or equal to 0°, and the first included angle is less than 90°. for example, the bottom wall 111 may be perpendicular to the side wall 113; that is, the first included angle between the inner surface of the bottom wall 111 and the thickness direction of the side wall 113 is 0°. the second channel 151 may be located at the middle position of the bottom wall 111. of course, the inner surface of the bottom wall 111 may also form a certain angle with the thickness direction of the side wall 113; that is, the inner surface of the bottom wall 111 may be inclined upwardly or downwardly. in this way, the condensed water can flow along the inner surface of the bottom wall 111 under the action of gravity and finally be discharged from the second channel 151. the second channel 151 is a through hole extending through the bottom wall 111. in some other embodiments, the second channel 151 may be a slit or a gap. there can also be more than two second channels 151, which can be set according to specific needs. as shown in fig. 1 or fig. 2, the housing 11 of the sensor 10 is further provided with a stab portion 16. the stab portion 16 is disposed at the side wall 113 and is formed to extend outwardly from the side wall 113 in a direction away from the receiving cavity 110. a plurality of saw-tooth portions 161 are provided on the outer periphery of the stab portion 16.
the saw-tooth portions 161 facilitate the use of the sensor 10 in conjunction with other devices, such as a microchannel heat exchanger: through the saw-tooth portions 161, the sensor can be inserted between the fins of the microchannel heat exchanger and used in cooperation with it. of course, the sensor 10 may not be provided with the stab portion 16; the sensor 10 can then be directly fixed to a position where the temperature or humidity needs to be monitored, which can be set as required. the exploded schematic view of the sensor 10 shown in fig. 2 includes the circuit board 12. the circuit board 12 is provided with a temperature sensor element 121, a humidity sensor element 122 and a filter capacitor 123. the temperature sensor element 121 can sense temperature, the humidity sensor element 122 can sense humidity, and the filter capacitor 123 can reduce interference in the temperature or humidity measurement process. in some other embodiments, the circuit board 12 is only provided with the temperature sensor element 121 or the humidity sensor element 122. in other words, the temperature sensor element 121 and the humidity sensor element 122 can be arranged separately or in combination, and there is no limitation here. optionally, the circuit board 12 is provided with at least one sensor chip which can sense temperature and/or humidity. optionally, the circuit board is provided with more than two sensor chips which can monitor temperature and/or humidity. the sensing area of the sensor chip is covered with a waterproof and/or dustproof film. the film keeps out dust and water, so that the measurement accuracy of the sensor is high and the service life of the sensor can be relatively prolonged. the waterproof and dustproof film can have an ip67 protection rating. optionally, the circuit board includes a filter capacitor which can reduce the noise of monitoring and make the monitoring data more accurate.
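the filter capacitor suppresses noise in hardware; downstream processing often smooths the sampled readings in a similar way. the moving-average routine below is a software analogue offered purely for illustration (window size and sample values are assumptions, not from the disclosure):

```python
from collections import deque

def moving_average_filter(samples, window=4):
    """smooth noisy sensor samples with a simple moving average.

    software analogue of the noise suppression the filter capacitor
    provides in hardware; the window size is an arbitrary illustration.
    """
    buf = deque(maxlen=window)   # sliding window of the most recent samples
    out = []
    for s in samples:
        buf.append(s)
        out.append(sum(buf) / len(buf))
    return out

noisy = [95.0, 99.0, 94.0, 100.0]     # hypothetical rh readings (%)
print(moving_average_filter(noisy))   # [95.0, 97.0, 96.0, 97.0]
```

smoothing like this keeps a single noisy spike from triggering a spurious frost alert when the reading is compared against a near-100 % humidity threshold.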
optionally, there are a plurality of filter capacitors. fig. 3 is a schematic top view of the sensor 10 according to the embodiment of the present disclosure in fig. 2. fig. 4 is a schematic cross-sectional view of the sensor 10 of the embodiment in fig. 3 along the a-a direction. as shown in figs. 3 and 4, the sensor 10 has the housing 11. the housing 11 has the inner cavity 110, the bottom wall 111, the top wall 112 and the side wall 113. the first channel 141 and the third channel 142 extend through the side wall 113. the sensor 10 also has the stab portion 16, which is disposed on the side wall 113. the stab portion 16 has a plurality of saw-tooth portions 161. the plurality of saw-tooth portions 161 are convex, so that the outer periphery of the stab portion 16 forms saw-teeth with alternating concavities and convexities. the sensor 10 also includes the circuit board 12. the circuit board 12 is fixed to the top wall 112 by the thermal conductive glue 13. the circuit board 12 is provided with the humidity sensor element 122, the temperature sensor element 121 and the filter capacitor 123. the sensing openings of the humidity sensor element 122 and the temperature sensor element 121 both face the inner cavity 110. when the sensor 10 is actually used, the sensing openings of the humidity sensor element 122 and the temperature sensor element 121 may be arranged facing downwardly. such an arrangement discourages dust and the like from adhering to the sensing area, thereby avoiding affecting the accuracy of the sensor elements. in addition, it is also conducive to the discharge of the condensed water by gravity, which improves the test accuracy and can extend the life of the sensor elements and the sensor to a certain extent. in some other embodiments, the circuit board 12 may also be disposed on the side wall 113. the sensing opening may not face down completely, as long as it can meet the needs. as shown in fig.
4, the second channel 151 is provided on the bottom wall 111. specifically, all or a part of the bottom wall 111 forms an inclined wall 101. a first end 1011 of the inclined wall 101 is connected to a first end 1131 of the side wall 113. a second end 1012 of the inclined wall 101 is an end extending from the first end 1011 in a direction away from the top wall 112. the bottom wall 111 further includes a matching portion 102. the matching portion 102 protrudes from an end of the inclined wall 101 in a direction away from the top wall 112, and the second channel 151 extends through the matching portion 102. the matching portion 102 may be cylindrical, which is convenient for processing and manufacturing. of course, the bottom wall 111 may not have the matching portion 102; the second end 1012 of the inclined wall 101 enclosing the second channel 151 can also achieve the drainage function. in this way, since the bottom wall 111 is generally funnel-shaped, it is advantageous for the condensed water to drain out of the sensor 10. in some other embodiments, the bottom wall 111 may also have a straight wall section; that is, a part of the bottom wall 111 is a flat wall section and a part of the bottom wall 111 is the inclined wall 101. the first end 1011 of the inclined wall 101 is connected with the flat wall section of the bottom wall 111, and the second end 1012 of the inclined wall 101 extends from the first end 1011 in a direction away from the top wall 112. fig. 5 is a schematic structural view of a housing of a sensor 20 in accordance with another embodiment of the present disclosure. fig. 6 is an exploded schematic view of the sensor of the embodiment shown in fig. 5. fig. 7 is a schematic top view of the housing of the sensor 20 of the another embodiment shown in fig. 6 or fig. 5. fig. 8 is a schematic cross-sectional view of the sensor 20 shown in fig. 7 along the b-b direction. as shown in figs.
5 to 8 , the structure of the sensor 20 is substantially the same as that of the sensor 10 , which will not be repeated here. the difference is that at least a part of the bottom wall 111 forms a concave wall 103 . the first end 1011 of the concave wall 103 is connected to the first end 1131 of the side wall 113 . the concave wall 103 extends from the side wall in a direction approaching the top wall 112 . that is, a center of the concave wall 103 is recessed in a direction approaching the top wall 112 relative to an edge of the concave wall 103 . the second channel 151 is closer to the side wall 113 than the center of the concave wall 103 ; that is, the second channel 151 is provided in the concave wall 103 near its edge adjacent to the side wall 113 . this arrangement facilitates the discharge of condensed water from the sensor 20 . fig. 9 is a schematic structural view of a heat exchanger 100 having the sensor 10 in accordance with an embodiment of the present disclosure. the heat exchanger 100 is a multi-channel heat exchanger. in some other embodiments, the heat exchanger may also be a tube-fin heat exchanger or another heat exchanger that needs to monitor temperature or humidity, etc., which is not limited here. the heat exchanger 100 of an embodiment of the present disclosure may include a collecting pipe, a plurality of heat exchange tubes 20 and fins 30 . the collecting pipe has an inner cavity for refrigerant to flow. the shape of the collecting pipe is a circular tube, and a length direction of the collecting pipe is its axial direction. the collecting pipe includes two collecting pipes, namely a first collecting pipe 41 and a second collecting pipe 42 . the first collecting pipe 41 and the second collecting pipe 42 are arranged substantially in parallel. it is noted that the air generally exchanges heat with the heat exchanger 100 only once as it passes through, so such a heat exchanger is often referred to as a single-layer heat exchanger in the industry.
of course, in some other embodiments, the collecting pipe may also be a d-shaped or square pipe, and its specific shape is not limited, as long as its burst pressure meets the needs of the system. the relative position of the collecting pipe is also not limited, as long as it meets the actual installation requirements. the number of collecting pipes can also be only one, as long as the heat exchange requirement is met, which is not limited here. the collecting pipe in the embodiment of the present disclosure is a round pipe as an example. a plurality of heat exchange tubes 20 are provided. each of the heat exchange tubes 20 has a length direction, a width direction and a height direction. the plurality of heat exchange tubes 20 are arranged along the axial direction of the collecting pipe and arranged substantially in parallel. each of the plurality of heat exchange tubes 20 has a first end and a second end. as shown in fig. 9 , the heat exchange tube 20 includes a first heat exchange tube 21 and a second heat exchange tube 22 arranged side by side. the first heat exchange tube 21 has a first end 211 and a second end 212 . the direction extending from the first end 211 to the second end 212 of the heat exchange tube 21 is defined as the length direction of the heat exchange tube (the x direction in the drawings). at its two ends in the thickness direction (the z direction in the drawings), the heat exchange tube 21 has a first top wall 213 and a first bottom wall 214 . the first top wall 213 and the first bottom wall 214 are substantially parallel to each other. the height direction of the heat exchange tube 20 may also be referred to as the thickness direction of the heat exchange tube. the first end 211 of the first heat exchange tube 21 is connected to the first collecting pipe. the second end 212 of the first heat exchange tube 21 is connected to the second collecting pipe.
similarly, the first end 221 of the second heat exchange tube 22 is connected to the first collecting pipe. the second end 222 of the second heat exchange tube 22 is connected to the second collecting pipe. the first heat exchange tube 21 and the second heat exchange tube 22 are arranged substantially in parallel. the heat exchange tube 20 has an inner channel (not shown in the drawings) for the refrigerant to flow. such a connection places the inner channel of the heat exchange tube 20 in fluid communication with the inner cavity of the collecting pipe 40 so as to form a refrigerant flow passage (not shown in the drawings) of the heat exchanger 100 . the refrigerant can flow in the refrigerant flow passage, and heat exchange can be realized through the heat exchanger 100 . it should be noted that the heat exchange tube 20 is also referred to as a flat tube in the industry, and it has the inner channel for refrigerant to flow inside. each of the first collecting pipe 41 and the second collecting pipe 42 has a pipe wall 401 , a heat exchange tube insertion hole 402 and an inner cavity (not labeled in the drawings). the axial direction of the first collecting pipe 41 and the second collecting pipe 42 is defined as the length direction of the collecting pipe 40 (i.e., the z direction in the drawings). the distribution structure in the embodiment of the present disclosure is not limited to single-layer heat exchangers, but can also be used in other multi-layer heat exchangers. the multi-layer heat exchanger can be a heat exchanger in which the heat exchange tubes are bent, or a heat exchanger in which adjacent collecting pipes are connected through a connection module. their structures are roughly the same, so the description is not repeated here. it should be noted that when the multi-layer heat exchanger is a heat exchanger with a bent heat exchange tube, the length direction of the heat exchange tube is the extending direction of the heat exchange tube.
in other words, the length direction is not limited to a linear direction. the heat exchanger 100 in the embodiment of the present disclosure includes the fins 30 . it is worth noting that in the related technology the surface of the heat exchanger is coated with functional materials, such as corrosion-resistant materials; specifically, the coating is applied to all or a part of the outer surface of the entire heat exchanger. the functional material may be a corrosion-resistant material or a moisture-absorbing material, etc., which can be set as required, and will not be repeated here. the fin 30 is a window fin and has a wave crest portion and a wave trough portion. it is noted that, in other embodiments, the fin may also be a fin without windows. the shape of the fin can be roughly corrugated or profiled. the cross section of the fin can be a sine wave or an approximate sine wave, or a saw-tooth wave, as long as it meets the requirements, and its specific structure is not limited. of course, the fin 30 can be coated with functional materials as required, which is not limited here. the fin 30 in the embodiment of the present disclosure is a corrugated fin. the fin 30 has a wave crest portion 31 , a wave trough portion 32 , and a side wall portion 33 connecting the wave crest portion 31 and the wave trough portion 32 . the wave crest portion 31 and the wave trough portion 32 are arranged at intervals in a longitudinal direction of the fin 30 . a plurality of side wall portions 33 are provided. it is noted that the phrase "a plurality of" in the present disclosure refers to two or more, unless otherwise specified. the side wall portion 33 can be provided with or without windows, according to heat exchange requirements. the fin 30 is arranged between two adjacent heat exchange tubes 20 . the wave crest portion 31 is at least partially in contact with the first heat exchange tube 21 .
the wave trough portion 32 is at least partially in contact with the second heat exchange tube 22 . the direction in which the wave crest portions 31 and the wave trough portions 32 of the fin 30 , arranged at intervals, extend is defined as the length direction of the fin 30 (the x direction in the drawings). it can be found that the length direction of the fin 30 is the same as the length direction of the heat exchange tube 20 (the x direction in the drawings). the direction of the distance between the heat exchange tubes 20 is the height direction of the fin 30 (the z direction in the drawings). at least a part of the housing of the sensor 10 is inserted into the fin 30 or a gap formed by the fin 30 . specifically, the sensor 10 can be fixed to the fin 30 through the stab portion 16 . the stab portion 16 may be clamped on the fin 30 . the housing of the sensor 10 is in contact with at least a partial area of the surface of the fin 30 and/or the surface of the heat exchange tube 20 . this arrangement allows heat to be conducted from the surface of the fin 30 and/or the surface of the heat exchange tube 20 through the metal housing of the sensor 10 . as a result, the housing temperature of the sensor 10 is closer to the surface temperature of the heat exchanger 100 , and the temperature of the circuit board 12 is also closer to the surface temperature of the heat exchanger 100 . correspondingly, the temperature of the environment where the sensor element 121 is located is close to the temperature of the heat exchanger 100 . in this way, the sensor 10 monitors the temperature and/or humidity near the outer surface of the heat exchanger 100 more accurately. referring to fig. 9 , in the length direction of the heat exchange tube 20 , one of the first collecting pipe 41 and the second collecting pipe 42 is closer to the sensor 10 than the other of the first collecting pipe 41 and the second collecting pipe 42 .
based on the specific placement position of the heat exchanger 100 in the actual application environment and different working conditions, the temperature at different positions of the heat exchanger 100 along the length direction of the heat exchange tube 20 may also be different. the initial frosting position of the heat exchanger 100 may be close to one of the two collecting pipes; for example, the frost layer may gradually spread upwardly from the bottom of the heat exchanger. in this way, arranging the sensor 10 closer to one of the collecting pipes makes it possible, in combination with the actual placement position of the heat exchanger, to better track the temperature of the initial frosting position of the heat exchanger. therefore, determining whether the heat exchanger is frosted based on the humidity information becomes more accurate. as shown in fig. 10 , a heat exchange system 1000 is disclosed in an exemplary embodiment of the present disclosure. the heat exchange system 1000 at least includes a compressor 1 , a first heat exchanger 2 , a throttling device 3 , a second heat exchanger 4 and a reversing device 5 . optionally, the compressor 1 of the heat exchange system 1000 may be a horizontal compressor or a vertical compressor. optionally, the throttling device 3 may be an expansion valve. in addition, the throttling device 3 can also be another component that has the function of reducing pressure and adjusting the flow rate of the refrigerant. the present disclosure does not specifically limit the type of the throttling device, which can be selected according to the actual application environment, and will not be repeated here. it should be noted that in some systems, the reversing device 5 may not be provided. the heat exchanger 100 described in the present disclosure can be used in the heat exchange system 1000 as the first heat exchanger 2 and/or the second heat exchanger 4 .
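the passage above ties frost detection to combined temperature and humidity information near the heat exchanger surface, but does not give a concrete rule. as a hedged illustration only, the sketch below uses the well-known magnus dew-point approximation: frost is plausible when the monitored surface is at or below freezing and at or below the dew point of the ambient air. the function names, coefficients and threshold logic are illustrative assumptions, not part of the disclosure.

```python
import math

def dew_point_c(temp_c: float, rel_humidity_pct: float) -> float:
    """magnus-formula dew point approximation (valid roughly -40..50 c).
    coefficients are the common magnus constants, an assumption here."""
    a, b = 17.625, 243.04
    gamma = math.log(rel_humidity_pct / 100.0) + a * temp_c / (b + temp_c)
    return b * gamma / (a - gamma)

def frost_likely(surface_temp_c: float, air_temp_c: float,
                 rel_humidity_pct: float) -> bool:
    """illustrative rule: frost can form when the surface is below freezing
    and at or below the dew point of the surrounding air."""
    return (surface_temp_c <= 0.0
            and surface_temp_c <= dew_point_c(air_temp_c, rel_humidity_pct))
```

for example, a surface at -2 °c in air at 5 °c and 90 % relative humidity (dew point ≈ 3.5 °c) would be flagged, while the same surface in dry air at 20 % relative humidity (dew point ≈ -16 °c) would not.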
in this heat exchange system 1000 , the compressor 1 compresses the refrigerant; the temperature of the refrigerant rises after compression, and the refrigerant then enters the first heat exchanger 2 ; the refrigerant transfers heat to the outside through the heat exchange between the first heat exchanger 2 and the outside; then, after passing through the throttling device 3 , the refrigerant becomes a liquid state or a gas-liquid two-phase state; the temperature of the refrigerant at this time decreases, the lower-temperature refrigerant flows to the second heat exchanger 4 , and after exchanging heat with the outside in the second heat exchanger 4 it enters the compressor 1 again, so as to realize a circulation of the refrigerant. when the second heat exchanger 4 is used as an outdoor heat exchanger to exchange heat with the air, the heat exchanger is arranged as required with reference to the above-mentioned embodiments. the above descriptions are only preferred embodiments of the present disclosure, and do not limit the present disclosure in any form. although the present disclosure has been disclosed as above in preferred embodiments, they are not intended to limit the present disclosure. those of ordinary skill in the art, without departing from the scope of the technical solution disclosed in the present disclosure, can use the technical content disclosed above to make changes or modifications resulting in equivalent embodiments. however, without departing from the content of the technical solutions of the present disclosure, any simple modifications, equivalent changes and modifications made to the above embodiments based on the technical essence of the present disclosure still fall within the scope of the technical solutions of the present disclosure.
006-587-510-558-738
FI
[ "EP", "FI", "ES", "US", "CN", "WO", "JP", "KR", "SG", "TW" ]
H01L21/60,B23K26/00,C03C27/00,G02F1/13,B23K26/244,B23K26/50,G02F1/1339,G02F1/1345,H01L21/56,H01L23/00,H05K7/06,B23K26/24,H05K13/04,B23K26/402,H01L23/02,H01L21/58,H01L/,H01L21/50,B23K26/20
2010-05-18T00:00:00
2010
[ "H01", "B23", "C03", "G02", "H05" ]
method of sealing and contacting substrates using laser light and electronics module
the invention concerns a method of fusing and electrically contacting a first insulating substrate (28a) having at least one first conductive layer (29a) thereon with at least one second insulating substrate (28b) having at least one second conductive layer (29b) thereon, the method comprising: stacking the first and second substrates (28a, 28b) such that an interface zone is formed between them, the interface zone comprising an electrical contacting zone where at least one first conductive layer (29a) faces and is at least partially aligned with at least one second conductive layer (29b), and a substrate fusing zone where the insulating substrates (28a, 28b) directly face each other; focusing to the interface zone of the substrates (28a, 28b) through one of the substrates (28a, 28b) a plurality of sequential focused laser pulses from a laser source, the pulse duration, pulse frequency and pulse power of the laser light being chosen to provide local melting of the substrate (28a, 28b) materials and the conductive layers (29a, 29b); and moving the laser source and the substrate with respect to each other at a predetermined velocity and path so that a structurally modified zone is formed at the interface zone, the structurally modified zone overlapping with said electrical contacting zone and said substrate fusing zone. the invention provides a convenient way of manufacturing well-sealed joints and electrical contacts for multifunction electronic devices, for example.
a method of fusing and electrically contacting a first insulating substrate (28a) having at least one first conductive layer (29a) thereon with at least one second insulating substrate (28b) having at least one second conductive layer (29b) thereon, the method comprising - stacking the first and second substrates such that an interface zone is formed between them, the interface zone comprising - an electrical contacting zone where at least one first conductive layer (29a) faces and is at least partially aligned with at least one second conductive layer (29b), and - a substrate fusing zone where the insulating substrates (28a, 28b) directly face each other, - focusing to the interface zone of the substrates (28a, 28b) through one of the substrates a plurality of sequential focused laser pulses from a laser source (20), the pulse duration, pulse frequency and pulse power of the laser light being chosen to provide local melting of the substrate (28a, 28b) materials and of the conductive layers (29a, 29b), - moving the laser source (20) and the substrate with respect to each other at a predetermined velocity and path so that a structurally modified zone is formed at the interface zone, the structurally modified zone overlapping with said electrical contacting zone and said substrate fusing zone. the method according to claim 1, wherein the structurally modified zone comprises a continuous hermetically sealed weld seam. the method according to any of the preceding claims, wherein the path of the pulsed laser light forms a closed loop, preferably around a moisture- or oxygen-sensitive element contained in one of said substrates (28a, 28b). the method according to any of the preceding claims, wherein at least one of the substrates (28a, 28b) comprises a microchip or a display panel having a plurality of contact terminals as said conductive layers. 
the method according to any of the preceding claims, wherein at least one of the insulating substrates (28a, 28b) comprises a glass panel. the method according to claim 5, comprising focusing said laser light to the interface zone through said glass panel. the method according to any of the preceding claims, wherein at least one of the substrates (28a, 28b) comprises a silicon microchip, the first or second contacting layers forming the contact terminals of the silicon microchip. the method according to any of the preceding claims, wherein a complete local fusion of the first and second substrates (28a, 28b) together is produced as said structurally modified zone at the substrate fusing zone and a complete local fusion of the first and second conductive layers (29a, 29b) together is produced at the electrical contacting zone. the method according to any of the preceding claims, wherein - the pulse duration is 20 - 100 ps, - the pulse frequency is at least 1 mhz, in particular at least 4 mhz, - the moving velocity of the pulsed laser is adjusted such that successive pulses overlap with each other. the method according to claim 9, wherein the distance between successive pulses is less than 1/5, in particular less than 1/10, preferably less than 1/20 of the diameter of the focal spot of the pulses. the method according to any of the preceding claims, wherein the thickness of the conductive layers is less than 1 µm. 
an electronics module comprising a first substrate (28a) comprising a pattern of first conductive zones (29a) applied as a layer thereon, and a second substrate (28b) comprising a pattern of second conductive zones (29b) applied as a layer thereon, and wherein the first and second substrates (28a, 28b) are in stacked configuration such that an interface zone is formed between them, the interface zone comprising - an electrical contacting zone where at least one first conductive zone (29a) faces and is at least partially aligned with at least one second conductive zone (29b), - a substrate fusing zone where the insulating substrates (28a, 28b) directly face each other, - a continuous weld line going through said fusing zone and said electrical contacting zone, the weld line comprising substrate (28a, 28b) materials being locally fused with each other at the substrate fusing zone and conductive zones (29a, 29b) being locally fused with each other at the electrical contacting zone. the electronics module according to claim 12, wherein - the first substrate (28a) is a glass substrate, and - the second substrate (28b) is a microchip. the electronics module according to claim 12 or 13, wherein the continuous weld line forms both a hermetic seal between the substrates and an electrical connection between the substrates.
field of the invention the invention relates to processing substrates using laser. in particular, the invention relates to welding of glass and/or semiconductor substrates containing electrical contact areas together using pulsed laser light. the substrates may comprise e.g. sapphire, quartz or silicon. background of the invention ep 1369912 discloses a method of bonding a flip chip to a chip carrier using a laser beam. the method comprises aligning a contact area of the chip and a contact area of the chip carrier and projecting a laser beam through the chip or carrier to the aligned contact areas to electrically bond them to each other. however, the surroundings of the contact area remain exposed to ambient air (oxygen) and humidity, which may have a detrimental effect on the device being manufactured. us 2004/207314 , us 2005/174042 , us 2003/197827 and jp 2005/028891 disclose further methods utilizing laser welding for contacting or joining parts of semiconductor or glass substrates. none of these methods, however, is capable of producing a simultaneously well-contacted and well-sealed structure. summary of the invention it is an aim of the invention to achieve an improved method of electrical contacting of substrates using laser light, the method also providing protection against particles, oxygen and humidity. it is a further aim of the invention to provide a well-sealed electronics module having electrical contacts. the aims are achieved by the method and electronics module according to the independent claims. the invention is based on the finding that sweeping pulsed laser light over the interface zone of the substrates can fuse together both the substrate materials, which are generally insulating, and the conductive layers applied thereon. a practically complete fusion (welding) of both these areas is achieved.
in one embodiment, the invention provides a method of fusing and electrically contacting a first insulating substrate, preferably a glass substrate, having at least one first conductive layer, i.e., contact terminal, thereon with at least one second insulating substrate, preferably a glass or silicon substrate, having at least one second conductive layer thereon. the method comprises stacking the first and second substrates such that an interface zone is formed between them, the interface zone comprising an electrical contacting zone where at least one first conductive layer faces and is at least partially aligned with at least one second conductive layer, and a substrate fusing zone where the insulating substrates face each other, focusing to the interface zone of the substrates through one of the substrates a plurality of sequential focused laser pulses from a laser source, the pulse duration, pulse frequency and pulse power of the laser light being chosen to provide local melting of the substrate materials and the conductive layers, and moving the laser source and the substrate with respect to each other at a predetermined velocity and path so that a structurally modified zone is formed at the interface zone, the structurally modified zone overlapping with said electrical contacting zone and said substrate fusing zone. the term "insulating substrate" refers to all non-conductive substrates, including intrinsic semiconducting substrates, which are frequently used as wafers in microelectronics. the conductive layer is typically a metal layer. the invention provides significant advantages. first, as the mechanical and electrical connecting of the substrates is carried out in the same processing stage, the method is simple and provides both time and cost savings. second, the weld seam can be made completely hermetic because of direct fusion of materials.
third, the same laser exposure scheme can be used for purely mechanical or electrical connecting of other substrates or components in the same electrical device. fourth, a very high-quality and pinhole-free weld seam can be produced. according to one embodiment, at least one of the substrate bodies is transparent for the laser wavelength used. this allows the laser to be guided through the substrate and focused to the interface, where the intensity per volume is high enough to achieve heating and welding of the substrates or their contact areas. one aim of the invention is to produce laser-induced welding and electrical contacting of substrates in which the weld seam produced is of higher quality, that is, essentially free of microcracks. this is achieved, in particular, by using picosecond-scale laser pulses, which induce in the substrate, in addition to nonlinear absorption, also a considerable linear absorption effect, provided that they are directed to the substrate temporally and spatially frequently enough. therefore, when a subsequent pulse is directed to the substrate such that it significantly overlaps with the spot of the previous pulse while that spot is still hot enough, additional absorption of laser energy into the substrate is gained due to linear absorption. in addition to increased absorption, a high pulse repetition rate will reduce the microcracking susceptibility of the substrate material(s). this is because a preceding pulse can make the material less rigid, so that the shock wave of the succeeding pulse is dampened.
an apparatus can be used which comprises a pulsed laser source for emitting laser pulses having a predefined duration, pulsing frequency and focal spot diameter, means for holding the substrates such that laser light can be guided from the pulsed laser source to the interface zone of the substrates through one of the substrates, and means for moving the substrates with respect to the pulsed laser source with a predefined velocity and along a predefined path. alternatively, the laser beam can be guided using mirror optics, for example, to avoid movement of the laser source and/or the substrates. the effective optical distance between the laser source and the substrate is arranged to be such that the laser pulses are focused to the interface zone of the substrates. this means that enough energy to locally melt the substrate material(s) is absorbed from each individual pulse into both substrates. the method according to the invention has been found to yield processed substrates having a low amount of microcracks within the processed materials and thus high bending strength of processed components. in this document, the term "substrate" means broadly any target material or material combination in which structural changes (melting and re-solidification) take place upon proper pulsed laser exposure. the substrate may be substantially homogeneous or it may comprise a plurality of regions or layers made from different materials. the regions or layers may initially be connected. the processing may be directed to one individual layer or region or to the interface of two or more layers or regions, depending on the desired effect. further embodiments and advantages of the invention are described in the following detailed description with reference to the attached drawings. brief description of the drawings fig. 1 shows a side view of the welding process according to one embodiment of the invention. fig. 2 illustrates a side view of the welded product resulting from the process shown in fig.
1 . figs. 3a-3c show top and side views of a) a microchip, b) a glass substrate and c) an electronics module comprising the components of a) and b) fused and electrically connected according to the invention. fig. 3d shows a top view of a multifunction electronics module manufactured with the aid of the invention. figs. 4a - 4d illustrate welding of an (o)led display panel according to one embodiment of the invention. figs. 5a and 5b show diagrams of the number of laser pulses hitting each location as a function of frequency for two different focal spot diameters. fig. 6 shows a cross-sectional image of a microstructure processed according to the invention in a glass substrate. fig. 7 shows a cross-sectional image of an interface produced with the aid of the invention. detailed description of embodiments fig. 1 shows one way of carrying out the present method. there is provided a first substrate 28a (e.g. a glass substrate), which contains first electrical contact terminals 29a, and a second substrate 28b (e.g. a semiconductor chip), which contains second electrical contact terminals 29b. the substrates 28a, 28b are placed on top of each other as a stack 28 such that the contact terminals 29a, 29b are aligned with each other at their interface zone. thereafter, a laser source 20 is used to produce, through optics 22, a pulsed laser beam 24, which is focused through one of the substrates to the interface zone so as to produce a plurality of sequential and overlapping laser-induced spots at the interface zone. as shown in fig. 2 , after the process, the stack 28 has transformed into a fused stack 28' in which the substrates 28a, 28b have fully fused together at the regions 27 with no contact terminals 29a, 29b. at the region of the contact terminals 29a, 29b, the contact terminals have fully fused together so as to provide electrical connection zones 29 between the substrates. fig.
3a shows a microchip comprising a substrate 32 comprising an electronic function portion 33 and a plurality of contact terminals 34. fig. 3b shows a substrate 31 comprising contact terminals 36 which are adapted to mate with the contact terminals 34 of the microchip when stacked. fig. 3c shows the microchip and the substrate in stacked configuration and a weld line 37 provided between the elements using the method of the invention. the weld line overlaps with the contact terminals 36, 34 facing each other and also with areas outside the contact terminals and, in this case, forms a closed loop. thus, a hermetic protection for the core of the microchip is achieved against outside moisture and oxygen diffusing between the substrates. fig. 3d shows a multifunction device comprising a substrate 30 onto which a plurality of functional components are affixed using the present method. it should be noted that not all the components need to contain both types of fusing zones (direct substrate fusion and electrical contacting zones). for example, a moisture-sensitive sensor may be sealed and contacted with the larger substrate using the present method, but a display element may only be sealed to the larger substrate without contacting (which is carried out by some other means). if hermeticity is not required, only electrical contacting may be carried out. it is an advantage of the invention that the same laser exposure scheme can be used for each of these cases, whereby manufacturing of such multifunction devices is simplified. the invention is particularly usable for welding glass and/or semiconductor substrates, such as silicon, technical glasses such as quartz, fused silica, borosilicate, lime glass, temperature expansion co-efficient tuned glasses, sapphire, ceramics such as zirconium oxide, litao etc. and combinations of these materials.
the substrates may contain conductive zones made of chrome, copper, gold, silver, molybdenum or indium-tin-oxide (ito), for example. particularly preferred material combinations (substrate 1/conductive material 1 - conductive material 2/substrate 2) which can be welded using the present method are: glass/chrome - chrome/glass, glass/copper - copper/glass, glass/copper - copper/silicon, glass/gold - gold/glass, glass/gold - gold/silicon, glass/silver - silver/silicon, glass/molybdenum - molybdenum/glass, glass/ito - ito/glass. the laser light is typically directed through a glass substrate. the thickness of the substrate the laser pulses are directed through is typically 100-500 µm. the thickness of the lower substrate is irrelevant, but at least thicknesses of 300-1000 µm can be processed successfully. the thicknesses of the metallizations on the substrates are typically 0.1 - 5 µm, in particular 0.1 - 3 µm. according to one embodiment, a pulse duration of 20 - 100 ps is used, and the pulsing frequency and the moving velocity are adjusted such that the pulses significantly overlap, the distance between successive pulses being less than 1/5 of the diameter of the focal spot. the pulsing frequency is preferably at least 1 mhz. in this parameter range, it has been found that both nonlinear and linear absorption of laser power are most efficiently utilized, resulting in higher total absorptivity than in known methods. thus, since the target spot is still hot from the previous pulse when the subsequent pulse arrives, the material is locally not transparent to the wavelength used but already has significant absorptivity, i.e. a high number of free charge carriers. in other words, because of the previous pulses, the number of electrons in the conduction band is very high and the material appears as a metal-like target having high absorptivity for the laser radiation.
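the overlap condition stated above (successive pulses spaced by less than 1/5, preferably less, of the focal spot diameter) couples the pulsing frequency and the scanning velocity. a minimal sketch of that bookkeeping follows; the unit choices and function names are mine, not the document's:

```python
def pulse_spacing_um(velocity_mm_s: float, freq_hz: float) -> float:
    # distance the focal spot travels between two successive pulses,
    # converted from mm to micrometres
    return velocity_mm_s * 1e3 / freq_hz

def spacing_ok(velocity_mm_s: float, freq_hz: float,
               spot_diameter_um: float, max_fraction: float = 1 / 5) -> bool:
    # overlap criterion from the text:
    # inter-pulse spacing < max_fraction * focal spot diameter
    return pulse_spacing_um(velocity_mm_s, freq_hz) < max_fraction * spot_diameter_um
```

at 4 mhz and 1000 mm/s the spacing is 0.25 µm, which satisfies the 1/5 criterion for a 2 µm spot (limit 0.4 µm); at 1 mhz the spacing grows to 1 µm and the criterion fails.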
in typical applications, the focal spot diameter is in the range of 1 - 10 µm, resulting in a typical maximum distance between pulses in the range of 200 nm - 2 µm. a more detailed description of the physical phenomena occurring in the substrate is given in our earlier pct application no pct/fi2009/050474. an additional advantage of the described processing scheme is that a lower peak power of laser light (typically less than 10^12 w/cm^2) can be utilized, the average power still being higher than or at least at the same level as in known methods. thus, a laser-induced shock wave caused by each individual pulse is followed by a significant thermal wave contributed by subsequent pulses directed to the immediate vicinity of the impact zone of the pulse. one benefit of this is that local cracks caused by individual pulses are automatically repaired, as the melting effect in the vicinity is high. thus, the structurally modified zone resulting from the processing according to the invention is consistent and of high quality. typically, the peak power used is 10^10 - 10^12 w/cm^2, in particular 10^10 - 5*10^11 w/cm^2. this is significantly less than is required in femtosecond pulse processing or multiphoton absorption processing methods and has the consequence that the number of laser-induced defects is greatly reduced. according to one embodiment, the pulsing frequency is increased or the moving velocity is decreased such that the distance between successive structurally modified spots is less than 1/10, preferably less than 1/20, of the diameter of said focal spot. this further increases the linear absorption effect taking place in the substrate and aids in achieving a more homogeneous processing line. the processing frequency is preferably at least 4 mhz and it may be up to 20 mhz or even more. in the metallized areas, the electrons of the metal foil cause the linear absorption effect to increase, compared with areas having no metallizations.
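as a rough aid for the 10^10 - 10^12 w/cm^2 window quoted above, the peak power density at the focus can be estimated from pulse energy, pulse duration and spot size. the sketch below assumes a rectangular temporal pulse profile and illustrative values (1 µj pulse energy, 20 ps duration, 5 µm spot); none of these figures come from the patent text.

```python
import math

def peak_power_density_w_cm2(pulse_energy_j: float, pulse_duration_s: float,
                             spot_diameter_cm: float) -> float:
    """Peak power density, assuming a rectangular temporal pulse profile."""
    spot_area_cm2 = math.pi * (spot_diameter_cm / 2.0) ** 2
    peak_power_w = pulse_energy_j / pulse_duration_s
    return peak_power_w / spot_area_cm2

# assumed example: 1 µJ pulse, 20 ps duration, 5 µm (= 5e-4 cm) focal spot
density = peak_power_density_w_cm2(1e-6, 20e-12, 5e-4)
print(f"{density:.2e} W/cm^2")       # on the order of 2.5e11 W/cm^2
print(1e10 <= density <= 1e12)       # True: inside the stated window
```

with these assumed values the estimate lands inside the preferred 10^10 - 5*10^11 w/cm^2 sub-range, illustrating why picosecond pulses can stay well below femtosecond-processing intensities.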
there is formed a plasma cloud whose electrons increase absorption of light not strictly in the metallized areas but also in the glass or semiconductor substrate in their vicinity. generally, the percentage of overlap of successive pulses can be characterized by the formula (1 - (processing speed * (time between pulses) / focal spot diameter)). figs. 5a and 5b show the number of pulses hitting each location of the substrate calculated with the aid of this formula for 2 µm and 6 µm spot diameters, respectively, and for three exemplary processing speeds, as a function of processing frequency. the preferred pulsing parameter ranges disclosed above can be used for processing substrates which are in their normal state totally or partly transparent at the wavelength used. this is because in practice impurities or lattice defects of the material initiate the photoionization process and further the impact ionization process. it is to be noted that so-called multiphoton absorption, which plays a key role in processing substrates by shorter pulses, in particular by femtosecond-scale pulses, does not significantly take place and is not even necessary. according to a preferred embodiment, the wavelength used is in the near infrared range, i.e. 0.75-1.4 µm. this range has been proven to be suitable not only for silicon processing, but also for high band gap materials such as sapphire and quartz, which are difficult to process, at least in any industrial way, using known low-frequency and/or femtosecond-scale processing methods. according to one embodiment, nonpolarized laser light is used. this causes the electromagnetic field direction in the substrate to be arbitrary and makes the method more immune to the lattice parameters of the substrate. in other words, nonpolarized light has been found to be effective for a wider variety of substrates. fig. 6 shows a cross-sectional image of a microstructure processed according to the invention into a glass substrate.
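the overlap formula quoted above, and the per-location pulse count of the kind plotted in figs. 5a and 5b, can be written out directly. the sketch below uses assumed example values (100 mm/s processing speed, 4 mhz pulsing, 2 µm spot) for illustration only.

```python
def overlap_fraction(speed_um_s: float, frequency_hz: float,
                     spot_diameter_um: float) -> float:
    """1 - (processing speed * time between pulses / focal spot diameter)."""
    return 1.0 - speed_um_s * (1.0 / frequency_hz) / spot_diameter_um

def pulses_per_location(speed_um_s: float, frequency_hz: float,
                        spot_diameter_um: float) -> float:
    """Approximate number of pulses each point receives (spot / spacing)."""
    return spot_diameter_um * frequency_hz / speed_um_s

# assumed example: 100 mm/s (1e5 µm/s), 4 MHz pulsing, 2 µm focal spot
print(overlap_fraction(1e5, 4e6, 2.0))     # 0.9875, i.e. 98.75 % overlap
print(pulses_per_location(1e5, 4e6, 2.0))  # 80 pulses per location
```

raising the frequency or lowering the speed increases both figures, consistent with the statement above that a spot spacing below 1/10 or 1/20 of the spot diameter strengthens the linear absorption effect.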
the laser has been directed to the substrate from above and the melting process has initiated at the tapered end (see arrow) of the feature shown. it can be seen that a pulse having a duration of 20 ps or more provides a round shape at the initiation point, contrary to shorter pulses, in particular sub-ps pulses, which have sharp initiation points and high cracking probabilities in the vicinity of the initiation point. it can also be seen that the diameter of the resulting feature in glass is so wide that the power density is not enough for multiphoton absorption and that the linear absorption effect strengthens towards the upper portion of the feature. fig. 7 shows a cross-sectional image of an interface produced with the aid of the invention, which interface comprises a substrate fusing region 72, where the substrates are fully fused with each other, and electrical connecting regions 71a, 71b on lateral sides of the substrate fusing region, where electrically conducting metal layers (not clearly visible) are fully fused with each other. it can be seen that in the substrate fusing region 72 the structural modification extends several micrometers into each substrate and the fusion is very thorough (hermetic). in this region, the connection between the substrates may be characterized as 'diffused deep bonding'. on the other hand, in the electrical connecting regions 71a and 71b the depth of the structural modification is smaller due to more local absorption of laser energy by the metal-containing layers, i.e. 'surface bonding'. according to a preferred embodiment, the laser source used is a fiber laser source. fiber lasers have the advantage that they are capable of producing light in the megahertz frequency range, which has been found to be the most interesting as regards both processing speed and quality, as discussed above. fiber lasers in this context mean lasers in which the active gain medium is a doped optical fiber.
the doping can be achieved with rare-earth elements such as erbium, ytterbium, neodymium, dysprosium, praseodymium, and thulium. the present invention has the advantage that very high processing speeds can be achieved in welding due to the absence of a separate contacting stage. in addition, the weld seam can be manufactured hermetically sealed and of very high quality. the invention can be used for welding silicon crystal wafers and other semiconductor materials used in the fabrication of integrated circuits and other microdevices. such wafers contain microelectronic device(s) built in and/or over the wafer by any known microfabrication process, such as doping, ion implantation, etching, deposition, and photolithographic patterning, and electronic terminals for conducting electronic current and/or potential to the device(s). particular advantages are achieved with very thin wafers (e.g. < 200 µm, in particular < 100 µm), which are used, for example, for manufacturing display panels (e.g. lcd panels and (o)led panels). however, the invention can in principle be used for wafers of any thickness. according to one embodiment, the invention is used for welding at least two superimposed layers having an interface zone, the method comprising focusing the laser pulses to said interface zone for achieving local melting at the interface zone and for welding the layers together through re-solidification. the welding application is schematically illustrated in fig. 2. in the method, a laser source 20 and optics 22 are used for producing and focusing a laser light beam 24 to the interface of two separate layers 28a and 28b of a substrate 28. the plurality of overlapping pulses applied to a moving substrate gives rise to a weld seam 26 connecting the layers 28a and 28b according to the principle described above.
according to one example, the substrate comprises two superimposed glass panels which are welded together at the fringe areas of at least one of the panels by a contiguous seam. thus, for example, display panels or light sensing panels can be manufactured using the present method. figs. 4a and 4b show an example of manufacturing an oled display panel. the panel 48 comprises a base layer 48a comprising an active layer 49 having an array of individual light-emitting units, and a front glass layer 48b. initially, the layers 48a and 48b are placed on top of each other such that the active layer 49, which needs to be hermetically protected, remains between them. after that, the present invention is used for producing a welded seam 46 around the whole active layer. preferably, the welded seam is unbroken (contiguous). thus, an effective barrier against dust and humidity can be formed for the active layer, at the same time efficiently affixing the layers of the panel together without any additional components, such as adhesives. due to the frequent pulsing and complete melting and re-solidification of the glass layers, the seam is very impermeable. preferably, electrical contacting of the panels 48a, 48b can be carried out at the same time, as discussed above. figs. 4c and 4d show detailed views of two alternatives for carrying out the substrate welding. in the process of fig. 4c, the glass layers 48a and 48b are spaced from each other at the interface zone and a weld seam 46a is produced directly between them. in the process of fig. 4d, an additional bridging layer 47 is provided between the glass layers 48a and 48b. the bridging layer 47 decreases the free distance between the glasses and ensures that complete unification of the layers takes place. the weld seam 46b is thus produced between the bridging layer and the front glass 48b. the bridging layer 47 can be a metal layer.
there may also be bridging metal layers on both substrates, whereby the welding comprises fusing the metal layers together. in addition to manufacturing display panels, the present welding method can be used for fusing any other laser-weldable components and substrates in applications requiring both hermetic sealing and electrical contacting. such needs may arise e.g. in bonding microsensors and other microcomponents to substrates, wafer-level packaging applications, temperature-sensitive component packaging, integration of optical components and integration of microfluidic components. the above-described embodiments and examples and the attached drawings are given for illustrative purposes and are intended to be non-limiting. the scope of the invention is defined in the following claims, which are to be interpreted in their full breadth and taking equivalents into account.
008-767-107-640-695
US
[ "US" ]
A61L9/01,B01D47/00
1995-03-31T00:00:00
1995
[ "A61", "B01" ]
arrangement for air purification; and method
a method and arrangement for conducting air filtration by biofilter operation is provided. the arrangement generally includes at least one bioreactor bed, through which air to be purified is passed. preferably the arrangement is configured so that air flow through each tank is from the top downwardly. in general, the biofiltration operation is conducted under pressures of less than ambient, to advantage.
1. a method of treating air for reduction of a presence of contaminating material therein; said method comprising steps of: (a) directing air to be treated into a first bioreactor treatment tank; (i) the first bioreactor treatment tank having at least one bioreactor bed, a closed top, a closed bottom and a sidewall; said closed top and sidewall defining an enclosed head space above the at least one bioreactor bed; (ii) said step of directing air to be treated comprising directing air into the enclosed head space and above the at least one bioreactor bed; and, (b) drawing air from the enclosed head space downwardly through said at least one bioreactor bed; (iii) said step of drawing being conducted under an air pressure, within said at least one bioreactor bed, of less than ambient. 2. a method according to claim 1 wherein: (a) said step of drawing air downwardly through said at least one bioreactor bed comprises passing air downwardly through at least one bioreactor bed which is at least one foot thick. 3. a method according to claim 1 further comprising a step of: (a) directing air, after it has been passed through the at least one bioreactor bed, through an exhaust stack arrangement. 4. a method according to claim 1 further comprising steps of: (a) directing air which has been drawn downwardly through said at least one bioreactor bed into an enclosed head space above at least one bioreactor bed in a second treatment tank; and, (b) drawing air from the enclosed head space in the second treatment tank downwardly through the at least one bioreactor bed therein; (i) said step of drawing being conducted under an air pressure, within the second treatment tank, of less than ambient. 5. a method according to claim 1 further comprising a step of: (a) feeding water into the at least one bioreactor bed in the treatment tank during said step of drawing air downwardly therethrough. 6. 
a method according to claim 5 wherein: (a) said step of feeding water comprises feeding water downwardly into the at least one bioreactor bed. 7. a method according to claim 1 further comprising a step of: (a) feeding nutrient material into the at least one bioreactor bed in the treatment tank during said step of drawing air downwardly therethrough. 8. a method according to claim 7 wherein: (a) said step of feeding nutrient material comprises feeding nutrient material downwardly into the at least one bioreactor bed. 9. a method according to claim 1 wherein: (a) said step of drawing air downwardly through the at least one bioreactor bed comprises drawing air downwardly through at least one bed comprising active bacteria-loaded material positioned above lower strata comprising rock. 10. a method according to claim 1 wherein: (a) said step of drawing air downwardly through at least one bioreactor bed comprises drawing air downwardly through at least one bed comprising wood or bark chips. 11. a method according to claim 1 wherein: (a) said step of drawing air downwardly through at least one bioreactor bed comprises drawing air downwardly through at least one bed comprising wood or bark chips mixed with soil. 12. a method of treating air for reduction of a presence of contaminating material therein; said method comprising steps of: (a) directing air to be treated into an enclosed head space of a treatment tank above at least one bioreactor bed; and (b) drawing air from the enclosed head space downwardly through said at least one bioreactor bed; (i) said step of drawing being conducted under an air pressure, within said at least one bioreactor bed, of less than ambient; (c) wherein said method is conducted upon air to be treated, when it is directed into said head space, having a first contaminant level, until a contaminant level in the air is reduced by at least 80%, through bioreactor treatment. 13.
a method according to claim 12 wherein: (a) said treating is conducted until a contaminant level in the air is reduced by at least 90%, through bioreactor treatment.
field of the invention the present invention relates to air filtration arrangements and processes. it particularly relates to arrangements which use, at least in part, biofiltration or bioreactor techniques for reduction of organic contaminants present in air. background of the invention a wide variety of industries generate volatile and/or other air-borne generally non-particulate materials (for example odors, toxic organics, etc.) as byproducts. these include, for example, businesses such as: wood laminating; composting facilities; rendering plants; breweries; plastics molding; and, fiberglass manufacturing. in general, off-gases from processes conducted in such industries include substantial amounts of undesirable, and sometimes toxic, air-borne materials therein. also, volatiles or other air-borne contaminants (such as odors) may be released from waste materials of these processes. it is generally undesirable that the process gases or volatiles from said activities be vented directly to the atmosphere. indeed, environmental legislation, such as the 1990 clean air act amendment, limits the extent to which certain air-borne materials from such industries can be released. a variety of techniques are available for removal of such materials from airstreams. one well-known method is incineration. although incineration is effective, it is not, in many instances, a practical approach. first, there are generally high capital costs and operating costs associated with incineration. secondly, there are generally air permit problems associated with implementation of new incinerators at various locations. activated carbon filters are sometimes used for removal of organics. this generally involves passage of the air through a carbon filter. a problem is that in time the process generates contaminated carbon, which needs to be disposed of appropriately.
thus, while the technique has some positive effect for removing organics from air, it does not avoid the problem of disposal of contaminated materials. also, activated carbon systems are not especially effective at removing certain organics which are relatively soluble in water. other types of systems which have found some beneficial use include biomass purification systems. in general, in such arrangements the contaminated airstream is passed through a bioreactor or biofilter including an active biomass. bacteria within the biomass operate on the contaminating materials, often generating acceptable carbon dioxide off-gases while consuming the organics. when the process is completed, the biomass, comprising the bacteria, typically does not contain substantial amounts of contaminants, and thus can be readily discarded in a safe manner. while biomass arrangements provide significant advantage, to date they have not been as flexible and efficient as desired. what has been needed has been improvements in biomass air filtration arrangements, to enhance operation. summary of the invention according to the present invention, a method is provided for conducting reduction in volatile and/or other air-borne contaminants in an airflow stream. in this context, the term "volatile and/or air-borne contaminants" includes within its scope odors, volatile and semi-volatile organics, and in some instances materials such as sulfur-containing materials, i.e. potentially almost any volatile or semi-volatile material carried in an air flow stream. the method generally includes a step of directing air to be purified through at least one bioreactor bed, with the operation being conducted under a pressure of less than ambient. in general, the method is effected by directing the air through a bioreactor bed under a suction or draw, from downstream of the reactor bed.
in preferred operations, the reactor bed and airflow conduits are configured so that airflow through the reactor bed is from the top downwardly. in certain preferred applications, after treatment in the bioreactor system, the air is exhausted through an exhaust stack. in some applications, before direction through the exhaust stack arrangement, the air is treated for reduction in moisture and particulates therein. also according to the present invention a bioreactor or biofilter arrangement for air treatment is provided. (herein the terms "bioreactor" and "biofilter" are used interchangeably and without specific regard to whether the bed is operating to digest organics, filter material, or both.) the arrangement generally comprises: a first biofilter or bioreactor treatment tank having an air inlet and an air outlet; and, an air draw apparatus constructed and arranged to draw air through the first bioreactor treatment tank while maintaining a pressure within the first tank of less than ambient. this is generally done by utilizing a blower or fan (i.e. an air draw apparatus of some type) positioned downstream from the first bioreactor treatment tank, to draw air through the tank for treatment. preferably the first bioreactor treatment tank is configured for downward air flow therethrough during treatment. in variations according to the present invention, the biofilter arrangement may include more than one bioreactor treatment tank. in some configurations, more than one tank is positioned in series, and in others the tanks are in parallel, and in some instances both. preferably in the various arrangements, air flow is from the top downwardly through each tank, and the air draw apparatus is constructed and arranged for operation of each tank under a pressure of less than ambient. in certain preferred systems, the bioreactor treatment tank includes at least one, one-foot thick, bioreactor bed therein.
preferably each tank includes at least a two-foot deep bioreactor bed or mass through which the air is passed. the at least two-foot deep bed in some systems may be separated into two or more sections, preferably each at least one-foot thick. preferably each tank is at least one-foot deep, more preferably at least two-feet deep. typically each tank will be 2.5 feet deep or more. arrangements according to the present invention may be utilized in a wide variety of systems, with various flow rates, etc. in some, relatively high flow rates, for example up to about 7500 acfm (actual cubic feet per minute) or more, can be accomplished while utilizing plastic components for the treatment tanks and many of the pipes. in certain preferred systems, 9000 gallon pvc (polyvinyl chloride) tanks are used for the bioreactor tanks. in some preferred embodiments, the tanks include a reactor bed comprising vertically stacked strata of more than one material. in certain preferred arrangements, the lower strata comprises rock, with certain strata above comprising a mixture of active bacteria-loaded material, such as peat and topsoil. by "active" in this context it is meant that the bacteria in the bed are active for reduction in the level of the volatile or other air-borne materials. in some arrangements, wood or bark chips (in some instances mixed with soil) are utilized in the bacteria-loaded mix to improve porosity. in general, arrangements and techniques described herein can readily be applied to obtain 80% efficiency or greater, and often 90% efficiency or greater, at removing selected materials from air. in certain preferred arrangements, the reactor bed comprises a plurality of sub-beds, oriented in a vertical stack, each comprising various strata as defined. in the drawings, relative material thicknesses and component sizes may be shown exaggerated, for clarification. brief description of the drawings fig.
1 is a top schematic representation of a biomass air filtration system according to the present invention. fig. 2 is a fragmentary, side elevational view of a biomass filtration tank utilized in fig. 1; fig. 2 being taken from the viewpoint of line 2--2, fig. 1, and having portions broken away to show internal detail. fig. 3 is a fragmentary, top schematic representation of an alternate biomass air filtration system according to the present invention. fig. 4 is a fragmentary, side elevational view of a biomass filtration tank utilized in fig. 3; fig. 4 being taken from the viewpoint of line 4--4, fig. 3, and having portions broken away to show internal detail. detailed description of the invention i. general principles of improved systems conventional biomass air filtration systems generally operate with a blower removing air from the room in which the contaminated process gases are generated, and forcing or driving the air through an air purification bed. typically, the air is forced from the bottom upwardly through the bed or biomass reactor. one of the reasons that the airflow in conventional systems is generally directed upwardly through the biomass is that directing the air, under pressure, downwardly through the biomass would tend to cause biomass packing and eventual build-up of an undesirable pressure head. thus, in conventional forced air systems, upward flow is typical. however, when the air is directed from the bottom up, loose material in the top of the bed in the reactor can easily be picked up in the air stream and blown through the system. another problem with directing forced air upwardly through the biomass is that the biomass may tend to "bump" or "burp" in time (to release pressure build up under the bed, by bed shift), increasing the likelihood of channelling; i.e. developing air flow bypass channels that reduce efficiency of air purification.
a variety of other problems are associated with conventional biomass filters, which operate in the manner described above. for example, since the airflow through the biomass is under pressure, relatively large blowers are often needed to operate the systems. also, any leak in the lines, tanks, etc. would be associated with a leak of contaminated material to the ambient, since the systems generally operate under internal pressure relative to ambient. this generates an associated need for relatively expensive tanks, line seals, etc., to help ensure against breakage or leaks. further, it creates a potential for contamination at the site, should a leak occur; and, undesirable downtime, while the leak is being repaired. further, it requires the development of biomass tanks of appropriate size and material that will resist undesirable "pack" (which would undesirably increase the pressure) under the pressure of airflow. in addition, conventional systems have not been designed for efficient collection and removal of the airflow stream from the downstream side of the biomass. rather, they have often been left open. this means that efficient discharge and dispersion of odoriferous off-gas streams have sometimes been problematic. in general, the present disclosure concerns variations and improvements in biomass filters to significant advantage. a principal difference from typical prior arrangements is that, for arrangements according to the present invention, the equipment lines upstream of the biomass generally operate "under vacuum" relative to ambient. that is, rather than a forced air system directing the air into the biomass, a suction draw from downstream of the biomass is used, to pull the air through the biomass. this means that the airflow upstream from the biomass is not under substantial pressure, but rather is under a vacuum relative to ambient.
thus, a leak in the line upstream from the biomass will not generally result in leakage of contaminant to the atmosphere, but rather it will result in a draw of air from the atmosphere (ambient) into the system. secondly, in preferred arrangements disclosed herein, the airflow within the biomass is directed downwardly, rather than upwardly. this can reduce the likelihood that channeling such as that described above will result. also, it can facilitate preferred water and nutrient flow. in addition, downstream equipment, such as blowers used to cause the airflow through the biomass, can be used to efficiently collect the air and direct it through a stack in an efficient manner for dispersion to the atmosphere. another advantage to certain of the preferred arrangements according to the present invention is that they are well adapted for generation of "modular" air cleaning systems. thus, they can readily be expanded on site to accommodate variations in the processing. further, they can be readily assembled from relatively inexpensive and readily obtained components. indeed, in some applications otherwise discarded materials, such as spent agricultural tanks, can be used. ii. a typical improved system in fig. 1, a schematic depiction of an arrangement for implementing principles according to the present invention is depicted. while arrangements utilizing some or all of the advantageous principles of the present invention may be applied in a wide variety of specific systems, the particular system of fig. 1 is typical and illustrative. from the following descriptions, variations will be understood. referring to fig. 1, the reference numeral 1 generally indicates an air purification system according to the present invention. the particular system 1 of fig. 1 is shown in relation to a building, with which it might be used. an exterior wall of the building is generally indicated at 4, with the interior indicated at 5 and the exterior at 6. for the system 1 of fig. 
1, the biomass is retained within the building, along with operating blowers and related equipment. in this manner, the environment of the biomass can be more readily controlled. it is not a requirement of systems according to the present invention, however, that the biomass be retained indoors. if needed, other means (for example heaters or insulated systems) may be used to provide a desirable and controlled environment for the biomass. indeed, in some locations, average outdoor conditions can provide for a good, stable, biomass without further temperature or other environmental control. referring to fig. 1, reference numeral 10 generally indicates an inlet for air to be treated, to the system, i.e. an air flow inlet. the air will generally have been removed from some process conducted upstream of inlet 10. for the particular arrangement of fig. 1, upstream of inlet 10, a hooded evaporator unit 11 is shown. thus, the particular application shown in fig. 1 involves the purification of (or reduction in the presence of organics in) air that has been removed from a hooded evaporator unit. the typical use of such an arrangement would be, for example, as follows. indeed, a test system was developed according to figs. 1 and 2, for such a use. in a process involving wood laminating, washing of the equipment may be conducted periodically. the water wash would include organic contaminants and particulate contaminants therein. the waste water from the washing process might be evaporated, leaving nonvolatile residue. when the water is evaporated, some volatile and semi-volatile materials will be discharged to the air. a hooded evaporator such as that shown in fig. 1 at 11 can be used to conduct this process. the bioreactor air purification system shown in fig. 1, then, is operated to reduce the levels of volatile and semi-volatile materials in the off-gases resulting from operation of the hooded evaporator 11.
also, upstream from inlet 10, is located a fresh air intake or bleed 16. this can be used to provide a fresh air flow through the biomass, or a partial bleed of fresh air into the biomass if desired. it is an advantage to the conduct of processes with arrangements according to the present invention, with the airflow stream at inlet 10 being generally under vacuum relative to ambient, that such inlets or bleeds can be efficiently utilized. still referring to fig. 1, the reference numeral 20 generally indicates the air flow exit from the biomass purification system. downstream from exit 20, the air, after purification by the biomass, is directed through pipeline 21, through downstream water remover 22, filter 23 and air draw apparatus, i.e. blower 25. the exit or exhaust from blower 25 is generally indicated at 26. from this point, the air can be directed, for example, through a stack 27 for efficient dispersion in the atmosphere. in general, blower 25 is constructed and arranged for operation to provide a suction or draw from pipeline 21. thus, at least between inlet 10 and exit 20, system 1 is generally operated under vacuum relative to ambient, to advantage. still referring to fig. 1, reference numeral 29 generally indicates the biomass filtration system 29. system 29 includes a plurality of air filter tanks 30. as will be apparent from the further detailed description, the tanks 30 are oriented so that selected tanks can be put on or off line as desired. further, it will be apparent that the system can be readily expanded to accommodate still more tanks. tanks 30 are sometimes referred to herein as "bioreactor treatment tanks" or "biofilter tanks". preferably each tank has a top, a bottom and a sidewall arrangement, and each is at least one-foot deep, more preferably at least two feet deep. typically each is greater than 2.5 feet deep. for the arrangement shown in fig. 
1, the tanks 30 are oriented in three pairs of two tanks, each pair being in parallel with the other two pairs, and with the individual tanks in each pair being oriented in series relative to one another. thus, arrangement 29 comprises tanks 36, 37, 38, 39, 40 and 41, with tanks 36 and 37 oriented in series relative to one another, tanks 38 and 39 oriented in series relative to one another, and tanks 40 and 41 oriented in series relative to one another. further, the pair of tanks 45 comprising tanks 36 and 37 is oriented in parallel to pair 46 (comprising tanks 38 and 39) and also in parallel to pair 47 (comprising tanks 40 and 41). in general, the inlet feed pipeline or manifold for pairs 45, 46 and 47 is indicated generally at 50. the outlet feed pipeline or manifold for pairs 45, 46 and 47 is indicated generally at 51. operation will be apparent from examination of tank pair 45, i.e., tanks 36 and 37. referring to fig. 1, inlet feed manifold 50 extends from inlet 10, bringing air to be treated from the process. at joint 55, line 56 engages manifold 50, directing air through tanks 36 and 37. line 56 includes a first segment 57, having control valve 58 therein. segment 57 is directed into the top of tank 36. off-gases from tank 36 are removed from bottom outlet 60 and are directed by pipeline segment 61 into the top of tank 37. off-gases from tank 37 are removed from bottom outlet 64 and are directed through segment 65, through control valve 66, to joint 67. at joint 67, the off-gases are directed to outlet feed manifold 51. sampling ports are indicated generally at 69. the sampling ports can be used to check such parameters as air temperature, velocity, volumetric flow rate, and pressure drop across various points in the system. analogously, tank pair 46 is positioned in line 71 and tank pair 47 is positioned at line 72. attention is directed to ends 75 and 76 of manifolds 50 and 51, respectively. each of ends 75 and 76 is capped by caps 80 and 81, respectively. 
in general, biomass filtration system 29 can be readily expanded to handle still more air passage therethrough, for example by extending manifolds 50 and 51 to accommodate still more tanks. throughout biomass filtration unit 29 are located control valves 85 and ports 86, for easy control and sampling. through appropriate operation of the valves, any of tanks 36-41 can be isolated and taken off line for service or replacement. for the arrangement shown in fig. 1, each of tanks 36-41 is depicted positioned on a pallet 90, for ease of movement. the pipes used throughout the system would preferably include appropriate joints, etc. that can be easily disconnected to facilitate this operation. in general, it may be desirable to intermittently, or even continuously, introduce water and/or other nutrients into selected ones of tanks 36-41. an optional line for accommodating this with respect to tank 40 is indicated generally in fig. 1 at 95 (in phantom). of course a similar line could, optionally, be used with each tank. still referring to fig. 1, airflow from outlet manifold 51 is directed into water remover 22. the particular remover or separator 22 shown comprises a pair of tanks 101 operated with appropriate float water controls, in a conventional manner, to control water depth. outlet air flow from moisture separator 22 is indicated at 102, with the air being directed outwardly from the biomass filtration system 29 to downstream particle filter 23. a conventional particle filter arrangement may be used. attention is now directed to fig. 2. in fig. 2, a side elevational view of tank 40 is provided. the view in fig. 2 is with portions broken away to indicate internal detail. referring to fig. 2, tank 40 is depicted with inlet segment 72 directed into a top portion 110 thereof. air outlet 111 of tank 40 is oriented in a lower region 112 below any material packed within the tank 40.
should any water collect in the lower region 112 of tank 40, it can be removed through drain 113. in fig. 1, water drains from the tanks are fed into recirculation lines 114, and back to evaporator 11. still referring to fig. 2, the bed in the interior of tank 40 is generally stratified for advantageous biofiltering operation. in general, the bed in tank 40 is divided into a lower stratified region 117 and an upper stratified region 118. the stratified regions 117 and 118 are separated by open space 119. separation of the bed in tank 40 into upper and lower stratified regions 118 and 117, respectively, inhibits the likelihood of channeling through the biomass and ensures good porosity. still referring to fig. 2, tank 40 includes internal lower bracket 121 on which porous grate or screen 122 is positioned. porous screen 122 prevents biomass material (or packing material, i.e. the bed) from settling into lower region 112. for the particular treatment tank 40 shown, above porous screen 122 is located a bed comprising a region or stratum of volcanic rock 124, above that a porous region or stratum 125 of wood or bark chips mixed with soil, and above that a region or stratum 126 containing a bacteria-loaded mixture of peat, topsoil, vermiculite, and some bark. a variety of packing materials may be used, however. in general, the upper stratified region 118 is analogous, comprising bracket 128, porous screen 129, a stratum of volcanic rock 130, a stratum of wood or bark chips mixed with soil 131, and a stratum 132 of topsoil, vermiculite, bark, and peat. such a mix is good for a high flow, wet air stream, since it resists packing. again, a variety of materials can be utilized to form the "bed" or "beds" of tank systems according to the present invention.
the particular examples given for tank 40, with lower and upper separated stratified regions 117 and 118 with some head space between the beds, are particularly useful for avoiding undesirable packing, maintaining air flow and maintaining a desired porosity. variations may be desired to handle selected airflows, purification rates or particular contaminants. generally, in preferred arrangements each bed (or sub-bed) is at least one foot thick; and, if more than one bed is positioned in a single tank, preferably the total thickness of all beds therein is at least two feet. in this context, the thickness (sum) of the beds is inclusive of all materials and layers therein; i.e. rock, bark chips, soil, etc. still referring to fig. 2, tank 40 is provided with optional nutrient or water inlet 95, indicated in phantom lines. it is an advantage of arrangements according to the present invention that a wide variety of tanks may be used. a reason for this is that the arrangement generally operates under vacuum relative to ambient, and thus tanks and lines which can accommodate substantial pressures therein are not required. the particular system depicted in the schematics of figs. 1 and 2 is shown utilizing stainless steel drums. the particular drums depicted are 85 gallon salvage drums (27 inch diameter, 36 inches deep), the interiors of which have been coated with enamel to resist corrosion. referring to fig. 1, two different sized airflow pipes are shown. for example, the larger pipes, comprising manifolds 50, 51, can be schedule 40 pvc (polyvinyl chloride) 4 inch diameter pipes; and the smaller pipes, for example pipes 56, 71, 72, can be schedule 40 pvc 2 inch diameter pipes. the pipes 136, 137, making the immediate connections to the blower 25, are preferably galvanized steel piping (for example, 2.5 inch diameter).
a reason for galvanized steel piping at this location is that in the immediate vicinity of the blower 25, substantial heat may be generated in the air stream, which can damage pvc piping. the particular arrangement shown in the schematic of fig. 1, made with equipment of the size indicated in the previous three paragraphs, is configured for an air flow of up to about 450 acfm. it can readily be operated with a commercially available, 10 horsepower, regenerative blower. it can readily withstand operation with a pressure drop, across the system, of up to about 40 inches of water. it will be preferred to construct the arrangement for operation with a pressure drop across the system of about 20 inches of water or less. it is an advantage to arrangements of the present invention that they can be constructed to withstand such vacuum draws, while using readily available and relatively inexpensive components. in general, the grate or screen (122, 129) positioned within the tanks should be a relatively strong material, for example steel which has been appropriately coated to reduce rusting. alternatives can be used, however. a variety of additional equipment can be used in association with arrangements according to the present invention, to facilitate operation. for example, various thermocouples can be used in association with the tanks to allow for monitoring of bed temperature. also, various pressure gauges 138, etc. can be used to monitor operation. as will be apparent from the review with respect to fig. 1, biomass systems in arrangements according to the present invention are made relatively porous, with porosity enhanced by the use of filler such as volcanic rock, gravel, wood chips or similar material in the bed. 
this is advantageous in many systems according to the present invention, especially those in which the flow rate is relatively high, since it helps ensure that a good air flow rate under the vacuum draw can be readily obtained without development of a substantial pressure drop across the bed. example of an arrangement for high flow operation attention is now directed to the schematic of fig. 3. in this schematic, the principles of the present invention are depicted embodied in an arrangement for handling an even higher flow (volume) of air to be treated, therethrough. the particular arrangement shown will readily handle a flow up to about 7500 acfm. this can be accomplished in some systems incorporating the principles of the present invention, through use, as a blower, of an industrial fan rated for 25 horsepower operation. the particular schematic of fig. 3 was designed for operation to treat odoriferous off-gases from partially composted municipal trash. in particular, this waste will typically be positioned in aeration trays, in an isolated building, for passage of air therethrough to remove volatiles. in the schematic of fig. 3, the building for aerating the partially composted waste is not shown. air flow from the building to the biofilter system 202 for treatment is shown at line 201. line 203 is a bypass line, for air drawn from the aerating building to bypass the biofilter system. the biofilter system 202 of fig. 3 generally comprises four treatment tanks 208, 209, 210 and 211. for the configuration of air flow of system 202, air only passes through one of the tanks 208-211, during treatment. that is, the air flow stream is divided up into four substreams, and each substream passes through only one tank. alternately phrased, the tanks 208-211, are operated with air flow in parallel, rather than series. thus, all four tanks are simultaneously used, but any given volume of air only passes through one tank. still referring to fig. 
3, reference 213 indicates the inlet manifold for air to be filtered. spur lines 214-217 direct air from inlet manifold 213 into each of tanks 208-211, respectively. as with the arrangement of fig. 1, the air passes downwardly through each of the tanks 208-211 in operation. each of tanks 208-211 includes two outlets positioned in the bottom thereof, for air to be removed. outlet lines for tank 208 are indicated at 220 and 221. analogously, the outlets for tank 209 are indicated at 223, 224; the outlets for tank 210 at 226, 227; and, the outlets for tank 211 at 229 and 230. in preferred systems for high flow operation, it will be preferred to use a smooth pipe material without square turns, to avoid unnecessary contribution to the pressure differential resulting from friction between the air and the piping. it will also be preferred to use piping which is resistant to corrosion or moisture deterioration. smooth pvc piping has been found to be useful. referring to the schematic of fig. 3, outlets 220 and 223 from tanks 208 and 209, respectively, join at manifold 232, for removal. similarly, outlets 221 and 224 join at manifold 233; outlets 226 and 229 join at manifold 234; and outlets 227 and 230 join at manifold 235. manifolds 232, 233, 234 and 235 ultimately join at outlet trunk 238, for withdrawal of air from the biofilter system 202, to the operating fan or blower 240 (i.e. air draw apparatus). from the operating fan or blower 240, the air is exhausted through line 243 and stack 244. various valves, such as indicated at 245, 246, provide for selected operation. an optional hose connector for connecting a water/nutrient line is shown on tank 211 at 260. similar hose connectors can be used with respect to each tank. each of the tanks 208-211 can include an optional liquid drain in the bottom thereof, appropriately valved for operation. 
it has been found, however, that in general operation can often be conducted without the need for utilization of the drains, provided the amount of moisture introduced into each tank can be balanced with the rate of air flow such that moisture loss from standing water in the bottom of the tank approximately matches its introduction. the system of fig. 3 was designed to be operable, if desired, with utilization of 9000 gallon pvc agricultural drums which, at the end of their use on a farm, would otherwise have been discarded. the tanks depicted in figs. 3 and 4 are such pvc agricultural drums, each of which includes a filling spout (see fig. 4). it was also designed to utilize pvc pipe (various sizes from about 12 inches to about 26 inches in diameter, depending on flow requirements) generally throughout, except in the immediate vicinity of the blower and downstream therefrom, at which point steel would likely be used. in general, such plastic components utilized for tanks in a system operating under vacuum will implode if the pressure head within the system is at about 20 inches of water or greater. the overall system was designed to handle a relatively high flow of air, for treatment, with the total pressure drop across the system held at no more than about 14 or 15 inches of water. thus, the system can be readily operated, efficiently, for air treatment, without the need for reinforced structure to withstand the vacuum therein. it is noted that the tanks in the schematic of fig. 3 could be enclosed within a building. in general, composting generates heat. it is believed that for many environments the heat generated by the composting will warm the air sufficiently to maintain a stable bioreactor, provided, at least for the winter, the tanks are simply enclosed within a shelter. however, heaters or other systems to maintain reactor bed stability may be used. also, various control valves, monitors, sample ports, etc. (not shown) could be used.
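the vacuum limits just described (unreinforced plastic tanks imploding near 20 inches of water, with the system held to about 14 or 15 inches) can be expressed as a simple design check. the sketch below is illustrative only; the helper name and the margin policy are not from the original text, only the two quoted figures are.

```python
# Illustrative check of operating vacuum against the tank implosion limit.
# The 20 in. H2O implosion threshold and ~15 in. H2O design drop are the
# figures quoted in the text; everything else here is a sketch.

IMPLOSION_LIMIT_IN_H2O = 20.0  # unreinforced plastic tanks implode near this
DESIGN_DROP_IN_H2O = 15.0      # maximum pressure drop the system is held to

def vacuum_margin(design_drop, limit=IMPLOSION_LIMIT_IN_H2O):
    """Return remaining margin (in. H2O) before the implosion limit."""
    if design_drop >= limit:
        raise ValueError("design pressure drop exceeds tank implosion limit")
    return limit - design_drop

print(vacuum_margin(DESIGN_DROP_IN_H2O))  # 5.0
```

a check of this kind makes explicit why the system can use inexpensive, unreinforced tanks: the design drop is deliberately held several inches of water below the failure point.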
at 265 a framework, for support of certain components of the system, is shown. for the particular preferred arrangement shown, it is important to operate the system with packing in the tanks being no denser than will allow for ready flow of air thereacross, without development of a pressure differential above about 14-15 inches of water. for the system shown, this is accomplished by filling each of the tanks in a preferred manner, for example as shown in fig. 4. referring to fig. 4, treatment tank 211 is depicted. it is shown with portions broken away, so that internal arrangements can be depicted. in the bottom of the tank, there is a layer 270 of about 2 feet of washed river rock, which generates a good air flow distribution in the bottom of the tank. it is foreseen that in operation typically about 2 inches of water will stand in the bottom of the tank. as indicated above, an optional drain can be used in the bottom of the tank, to ensure an appropriate moisture level. above the rock, a porous mixture 271, containing the microorganisms to conduct the treatment, is positioned. the particular bacteria-loaded mix shown comprises a mix of peat, compost, vermiculite, and pine bark. the large pieces of pine bark and vermiculite ensure the appropriate porosity of the system.

selection of biomass bed material

the design of filter media for biofilters varies depending on the importance of a number of design criteria. a principal criterion for the design of a biofilter which involves a pressure of less than ambient (i.e. vacuum) is the porosity of the overall biofilter bed. porosity is important because adequate flow through the bed must be maintained. adequate flow through the bed determines the size of the blower and the ultimate volume of air that can be moved through the system. the downward flowing air current of the described system has the potential effect of compacting the biofilter bed.
thus the biofilter bed should be designed to resist compaction that would undesirably reduce the porosity and the ultimate gas flow rate through the system. another important design criterion for the biofilter media is the surface area available for the bacteria in the biomass to adhere to. gas entering the system should come into contact with the greatest amount of bed material surface area reasonably possible, to effect the highest obtainable removal rates. as the contaminants in the gas stream enter the system, they are absorbed into minute quantities of water that partially fill the porous structure of the biofilter media. the bacteria cultures exist in abundance on the surfaces of these water films. digestion of the contaminants that dissolve in the water films is a primary method of biodegradation (and thus filtering) in the described biofilter. other important design criteria include: the organic material available to the bacterial cultures as nutrients, the ability of the media mix to self-regulate or buffer ph, the ability of the media to support its own weight without compacting, the presence of indigenous bacteria in the biofilter material, the cost of the media, the availability of the media in bulk quantities, and the relative ease of handling (transporting, loading and unloading the biofilter). based on these basic design criteria, an optimum biofilter media can be constructed to meet the requirements of a wide variety of systems. the following list presents sample media components that can be used to construct the biofilter media, with the principal benefit of each. of course the list is exemplary only, not exhaustive.

- coarse hardwood bark (3" cross-sectional diameter): increases porosity of biofilter, good surface area, resists physical degradation, low cost, readily available, light weight, absorbs water.
- peat (non-sterilized): high organic content, very light, relatively inexpensive, readily available.
- vermiculite: bulking agent, high surface area, resists physical degradation, very light, somewhat available.
- perlite: a type of obsidian; has a high surface area and operates as a bulking agent.
- top soil (non-sterilized): high organic content, readily available, indigenous bacteria cultures, absorbs water, low cost.
- activated carbon (spent): bulking agent, relatively low cost, very porous, may adsorb some contaminants, buffering ability, odor buffering.
- oyster shell: ph buffering ability, relatively porous.
- lime: ph buffering ability, may stabilize some contaminants, readily available, low cost, odor buffering.
- compost: high organic content, readily available, indigenous bacteria cultures, bulking agent, absorbs water.
- pea gravel: filter material, prevents particulate escape, supports biofilter, stable, low cost, high surface area.
- bulking agents (plastic formed column packing media): increase porosity and surface area, stable.
- manure (non-sterilized): high organic content, indigenous bacteria cultures, readily available.
- volcanic rock: filter material, prevents particulate escape, supports biofilter material, relatively low weight, high porosity, readily available.

in instances where the soil has a high clay content, or is very fine, it may be desirable to ensure that a substantial amount of bulking agent or filler is mixed with it, to avoid compaction in use. the media that was selected for the biofilter of figs. 1 and 2 was composed of top soil, vermiculite, volcanic rock, large hardwood bark, and peat. each drum held approximately 8 cubic feet of media (not counting the volcanic rock) in two discrete beds.
the approximate formulation for the media was:

- large hardwood bark chips: 4 cubic feet
- topsoil: 2 cubic feet
- peat: 1 cubic foot
- vermiculite: 1 cubic foot

the media was mixed by hand, with a shovel, in a 3 yard "roll off" container. in the material immediately above the volcanic rock, a higher percentage of bark chips was used than in higher layers, to help ensure porosity. when the desired consistency was reached, the material was transferred to the drums using shovels and scoops. the material was deposited onto a layer of approximately 2 inches of volcanic rock at the bottom of the drum. the media was added until the top of the first bed nearly reached the intermediate drum supporting screen. the second layer was constructed in the same manner, over 2 inches of volcanic rock on the intermediate supporting screen, to within 3 inches of the top of the drum. the main design criteria for the biofilter media for this system were porosity, organic content, availability, and low cost. this biofilter, when tested, achieved a measured removal efficiency in excess of 95% for formaldehyde. the media selected for the biofilter of figs. 3 and 4 was composed of washed river rock, peat, top soil, vermiculite, compost and coarse hardwood bark. each biofilter vessel held approximately 21.25 cubic yards of media in one layer atop washed river rock. the approximate formulation of the media was:

- large hardwood bark chips: 8.5 cubic yards
- topsoil: 7.0 cubic yards
- peat: 3.5 cubic yards
- vermiculite: 1.25 cubic yards
- compost: 1.0 cubic yards

the media was constructed by first establishing stockpiles of the materials in the staging area. materials were spread out by a front-end loader in long windrows and thoroughly mixed together using a scarab compost turner.
when the appropriate mixture was reached, the material was conveyed into the top of each biofilter vessel, atop approximately 2 feet of washed river rock. the biofilter media was added to a depth of 8 feet in each vessel. the main design criteria for the biofilter media for the system of figs. 3 and 4 were extremely high porosity, resistance to compaction, organic content, moisture holding capability, availability, and low cost. the efficiency of this biofilter has been demonstrated by the absence of odors and odor complaints at a facility when installed for testing.

engineering considerations

the following recitations indicate engineering considerations and analyses useful in developing specific arrangements, for the test circumstances reflected by the figures. the considerations will be useful in developing a particular biofilter arrangement according to the present invention, for installation in other situations.

example 1: the biofilter design of figs. 1 and 2

the design of the experimental pilot scale biofilter of figs. 1 and 2 began with a study to investigate the options for treating formaldehyde emissions from an evaporative waste water system. formaldehyde is an organic contaminant that has a low henry's law constant (5.8x10^-5), which means that generally it is very soluble in water. compounds that are soluble in water will work well with biofilters, due to the fact that the bacteria that digest these contaminants exist in the thin film aqueous environment. for this biofilter, it was determined that formaldehyde would be readily absorbed by the system and broken down by the microorganisms indigenous to the biofilter. additional contaminants such as particulates, metals or certain toxic organic and inorganic substances can effectively preclude untreated gaseous waste streams from biofiltration.
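the henry's law reasoning above can be made concrete with a small calculation. the constant 5.8x10^-5 is the value quoted in the text for formaldehyde; the dimensionless equilibrium relation c_gas = h x c_aq is the standard form, and the function name below is illustrative.

```python
# Illustrative partitioning calculation for formaldehyde, using the
# dimensionless Henry's law constant quoted in the text (5.8e-5).
# At equilibrium, c_gas = H * c_aq, so a small H means the compound
# concentrates strongly in the water films where the bacteria live.

H_FORMALDEHYDE = 5.8e-5  # dimensionless (gas-phase / aqueous-phase)

def aqueous_enrichment(h):
    """Equilibrium aqueous concentration per unit gas-phase concentration."""
    return 1.0 / h

# roughly a 17,000-fold enrichment in the aqueous phase
print(round(aqueous_enrichment(H_FORMALDEHYDE)))
```

this is why a low henry's law constant is favorable for biofiltration: nearly all of the contaminant reaching equilibrium ends up dissolved in the water films, where it is accessible to the bacteria.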
there were no contaminants in the waste water to be treated by this system that would have a deleterious effect on the system or the microorganisms in the system. the general principles of filter design considered for this project were similar to those reported below for example 2. the total volume of the air that would need to be removed from the waste water evaporation tray was a function of the total volume of the venting hood and the tray (100 cf) and the number of desired air exchanges per unit of time. based on the total volume of the evaporative tray and hood assembly, the minimum air flow through the system would be approximately 350 cfm. based on this flow rate, the hood would be kept under slight negative pressure, thus not allowing formaldehyde emissions to escape. the hood formed a tight seal over the evaporative tray; thus the ability of the system to collect odors (capture efficiency) was assumed to be 100%. the volume of the biofilter media was calculated based on the velocity of gas that would be passed through the system, and the desired contact time of the gas and the biofilter media. contact times for various biofiltration systems have been related to biodegradation rates for certain contaminants. generally, the higher the biodegradation rate, the faster the air flow through the system (holding efficiency constant). the biodegradation rate for formaldehyde is relatively high (1.39x10^-6), which means that the air flow through the system could be increased, and thus the residency time reduced, relative to systems designed to handle other contaminants. the initial contact time chosen for this biofilter was approximately 10 seconds which, according to emissions testing, provides for adequate reaction time. the thicker beds of the negative pressure modular biofilter allow for more media to come into contact with the contaminants as they pass through the system, thus offsetting reductions in actual contact time while allowing velocities to increase.
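the example 1 sizing arithmetic above can be sketched as follows. the 100 cf hood volume, ~350 cfm minimum flow, and ~10 second contact time are the figures given in the text; the 3.5 exchanges-per-minute rate is inferred from them (350 cfm / 100 ft^3) and is not stated in the original.

```python
# Sketch of the Example 1 sizing arithmetic. Hood volume, flow rate, and
# contact time are the figures quoted in the text; the air-exchange rate
# is inferred from them.

HOOD_VOLUME_FT3 = 100.0   # venting hood plus evaporative tray
EXCHANGES_PER_MIN = 3.5   # inferred: 350 cfm / 100 ft^3
CONTACT_TIME_S = 10.0     # chosen gas/media contact time

flow_cfm = HOOD_VOLUME_FT3 * EXCHANGES_PER_MIN   # minimum air flow, ft^3/min
media_ft3 = flow_cfm * (CONTACT_TIME_S / 60.0)   # required media volume, ft^3

print(flow_cfm)             # 350.0
print(round(media_ft3, 1))  # 58.3
```

the roughly 58 ft^3 result is consistent with the six drums of approximately 10 cubic feet each that this system actually used.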
the net effect is that a smaller biofilter can treat more gas more efficiently. the volume of this biofilter was sized to fit into six 85 gallon drums, each with an internal volume of approximately 10 cubic feet. the drums were painted on the inside with epoxy paint to resist corrosion. after the type and size of the biofilter was selected, the blower was selected. the selection of the type and size of fan is dependent upon the maximum air flow rate desired, the anticipated pressure drop in the system due to the biofilter bed and the associated piping, the energy usage and economics of the blower, the physical size and energy requirements of the blower, and the blower materials. for this project, the pressure drop across the system was estimated using the pressure loss nomographs from the piping distributors, and an estimate of pressure drop through the biofilter beds. pressure loss through a biofilter bed can be estimated using the formula described in example 2, for the biofilter design of figs. 3 and 4. the maximum expected pressure drop of this system was 25 inches of water. the biofilter exit gas was assumed to be dry, at ambient temperature and noncorrosive. the biofilter would have the effect of buffering the ph of the incoming waste water vapor and stabilizing the temperature. the blower that was chosen for this site was a 10 hp regenerative blower designed to provide approximately 450 cfm against a vacuum head of up to 60 inches of water, in a slightly corrosive environment. regenerative blowers are more efficient in moving air against a high vacuum than other types of blowers or fans. regenerative blowers are also quite durable and low maintenance. the blower was modified on-site to match the ducting of the biofilter and the discharge stack. piping was selected for the biofilter that would be low cost, easily handled by the construction crew, and resistant to weather and the expected corrosive nature of the biofilter inlet gas.
the piping that was selected for this system was available through a local supplier, thus reducing the delivery costs and the lead time for ordering. initial emission testing of this pilot scale biofilter indicated that the removal efficiency of the system was over 95% for formaldehyde.

example 2: the biofilter design of figs. 3 and 4

the design of the biofilter began with an identification of the contaminant to be removed by the system. in this case, the principal contaminant that was to be removed from the gas stream was odors from a municipal trash composting facility. odors are complex in nature and are generally not attributed to any one chemical compound. odors from composting facilities, however, have been characterized in the literature. the major components of the odors can (typically) include: hydrogen sulfide (h2s), di- and trimethyl sulfide and disulfides, ammonia (nh3), odorous amines (c2h3nh2), short-chain fatty acids, and aldehydes and mercaptans. odors and odorous compounds have been, in some systems, effectively treated by biofilters of various designs, for over 50 years. the actual designing of the biofilter began with determining the volume of material that would be needed in the system, the amount of time that the gas bearing the contaminant must be in contact with the biofilter media, the pressure drop associated with the biofilter media and associated piping, the physical properties and requirements for gas conditioning, the site parameters for siting the biofilter, cost and permitting considerations, and the desired removal efficiency for the system. for this biofilter system, it was initially determined that the odors and the odorous compounds from the facility could be readily addressed by a biofilter. this decision was based on the fact that composting facilities have used biofilters in the past to successfully treat odors from some composting facilities.
there was nothing unusual or unique in the site or the operation of the composting facility that would have precluded the use of a biofilter, with the possible exception of extreme cold conditions sometimes experienced during the winter. upon interviewing the site personnel, it was learned that odors were not considered a problem at the facility during the winter months. furthermore, the biofilter that would eventually be proposed (negative pressure, modular, top-down flow) could be situated near the main composting building due to its compact size, and thus receive heat from the building to keep the media active, if winter operation became a desired design parameter. the total volume of the air that would need to be removed from the composting building to keep it under negative pressure and prevent odors from escaping was the next design parameter. based on the total volume of the interior of the building and the desired number of air exchanges per hour, it was determined that the minimum air flow through the system would be approximately 6500 cfm. based on this flow rate, the building would be kept under slight negative pressure, thus not allowing odors to escape. this design criterion was predicated on the ability of the composting building to effect an efficient seal over the compost. the volume of the biofilter media was calculated based on the velocity of gas that would be passed through the system, and the desired contact time of the gas and the biofilter media. this relationship can be expressed in the following equation: volume (ft^3) = (desired contact time (sec) / 60) x flow rate (cfm). contact times for biofiltration of odors have been investigated by a number of researchers and generally range from a few seconds to over 1 minute. the contact time chosen for this biofilter was approximately 20 seconds, which was believed to be sufficient to provide for adequate reaction time.
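the media-volume relation above can be implemented directly; note that the contact time must be converted from seconds to minutes so that the units are consistent with a flow rate in cfm. the figures used (6500 cfm, 20 seconds) are those quoted in the text.

```python
# The media-volume relation from the text, with the seconds-to-minutes
# conversion made explicit so the units are consistent with cfm.

def media_volume_ft3(contact_time_s, flow_cfm):
    """volume (ft^3) = contact time (min) * flow rate (ft^3/min)"""
    return (contact_time_s / 60.0) * flow_cfm

print(round(media_volume_ft3(20.0, 6500.0)))  # 2167
```

for the quoted design point this gives on the order of 2200 cubic feet of media, which is the scale of bed the multi-tank system described here is meant to supply.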
the relatively thick beds of a negative pressure modular biofilter, according to the present invention, allow for more media to come into contact with the odors as they pass through the system, thus offsetting reductions in actual contact time while allowing velocities to increase. the net effect is that a smaller biofilter can treat more gas more efficiently. the volume of this biofilter was designed to fit into eight 9000 gallon tanks, each with an internal volume of approximately 25 cubic yards. due to financial constraints, only four tanks were initially installed, as shown in fig. 3. it is anticipated that expansion may eventually be conducted. after the type and size of the biofilter have been selected, the blower or fan must be selected. the selection of the type and size of fan is dependent upon the maximum air flow rate desired, the anticipated pressure drop in the system due to the biofilter bed and the associated piping, the energy usage and economics of the blower, the physical size and energy requirements of the blower, and the blower materials. for the project of fig. 3, the pressure drop across the system was estimated using the pressure loss nomographs from the piping distributors, and an estimate of pressure drop through the biofilter beds. pressure loss through a biofilter bed can be estimated using the following formula: ##equ1## the maximum expected pressure drop of this system was 15 inches of water. the biofilter exit gas was assumed to be dry, at ambient temperature and non-corrosive. the biofilter would have the effect of buffering the incoming compost odors, removing moisture from the gas and stabilizing the temperature. in the area in which the biofilter was to be installed, there was an energy credit for blowers and motors that achieved a certain efficiency of operation.
the blower that was chosen for this site was a large industrial 25 hp fan designed to provide for approximately 7500 cfm against a vacuum head of up to 20 inches of water, in a slightly corrosive environment. the motor for the fan met the energy efficiency requirements of the location and was in stock at the time of the order. the fan was modified on-site to match the ducting of the biofilter and the discharge stack. pvc piping was selected for the biofilter because it was low cost, easily handled by the construction crew, and resistant to weather and the expected corrosive nature of the biofilter inlet gas. the piping connections were selected to reduce the amount of pressure loss that would be attributed to the entire gas transmission system. the piping that was selected for this system was available through a local supplier, thus reducing the delivery costs and the lead time for ordering.
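as a rough cross-check on the blower sizing, the ideal (100%-efficient) air power of a fan can be estimated with the standard relation ahp = q × Δp / 6356, with q in cfm and Δp in inches of water. the python sketch below is illustrative only and is not from the patent; actual fan selection, as described above, relies on manufacturer curves, fan efficiency and service factors:

```python
def air_horsepower(flow_cfm: float, dp_in_h2o: float) -> float:
    """Ideal fan air horsepower: AHP = Q * dP / 6356 (Q in cfm, dP in inches of water)."""
    return flow_cfm * dp_in_h2o / 6356.0

# Figures quoted in the text: 7500 cfm against up to 20 inches of water.
ahp = air_horsepower(7500.0, 20.0)  # about 23.6 hp of ideal air power
```

at the maximum design vacuum the ideal air power is already close to the 25 hp motor rating, which is consistent with 20 inches of water being quoted as an upper bound rather than a continuous operating point.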
009-277-255-110-244
GB
[ "CN", "US", "EP", "GB" ]
H02M3/02,G05F3/08,H02M7/44,H02M3/155,H02J3/38,H02M3/156,H02M3/158,H01L31/042
2012-03-05T00:00:00
2012
[ "H02", "G05", "H01" ]
direct current link circuit
the invention presents an electronic circuit for converting power from a floating source of dc power to a dual direct current output. the electronic circuit may include a positive input terminal and a negative input terminal connectible to the floating source of dc power. the dual dc output may be connectible to the input of a dc/ac inverter. the circuit may include a positive output terminal connected to the positive input terminal of the dc/ac inverter, together with a negative output terminal and a ground terminal which may be connected to the input of the dc/ac inverter. a series connection of a first power switch and a second power switch may be connected across the positive input terminal and the negative input terminal. a negative return path may include a first diode and a second diode connected between the negative input terminal and the negative output terminal. a resonant circuit may connect between the series connection and the negative return path.
1 - 15 . (canceled) 16 . an electronic circuit comprising: first and second terminals adapted to be connected across a floating source of direct current (dc) power; third and fourth terminals; first and second switches in series across the first and the second terminals, wherein a first node is formed between the first and the second switches, and wherein the first switch is between the first node and the third terminal; a first diode between the fourth terminal and a second node; and a first resonant circuit in series between the first node and a second node, wherein the electronic circuit is configured such that when the first switch is closed and the second switch is open, the first resonant circuit is connected across the first and the second terminals, and such that when the first switch is open and the second switch is closed, the third terminal, the fourth terminal, the first diode, and the first resonant circuit are in series across the first and the second terminals. 17 . the electronic circuit of claim 16 , wherein the first diode is directly connected to the second node. 18 . the electronic circuit of claim 16 , further comprising a charge storage device across the first and second terminals. 19 . the electronic circuit of claim 16 , wherein the first resonant circuit comprises an inductor and a capacitor in series between the first and second nodes. 20 . the electronic circuit of claim 16 , further comprising: a first capacitor in series between the third terminal and ground; a second capacitor in series between the fourth terminal and the ground; and an inverter having a first inverter input connected to the third terminal, a second inverter input connected to the fourth terminal, and a ground inverter input connected to the ground. 21 . 
the electronic circuit of claim 16 , further comprising first and second drive circuits adapted to gate the first and the second switches alternatively with a pulse width modulation (pwm) cycle such that the first switch is closed while the second switch is open during a first half of the pwm cycle and the second switch is closed while the first switch is open during a second half of the pwm cycle. 22 . the electronic circuit of claim 16 , further comprising first and second drive circuits adapted to gate the first and the second switches alternatively, with less than a fifty percent duty cycle, with a pulse width modulation (pwm) cycle such that the first switch is closed while the second switch is open during a first half of the pwm cycle and the second switch is closed while the first switch is open during a second half of the pwm cycle. 23 . the electronic circuit of claim 16 , further comprising first and second drive circuits adapted to gate the first and the second switches alternatively with a pulse width modulation (pwm) cycle such that the first switch is closed while the second switch is open during a first half of the pwm cycle and the second switch is closed while the first switch is open during a second half of the pwm cycle, wherein the first and the second drive circuits are configured to open and close the first and the second switches with substantially zero current through the first and the second switches. 24 . the electronic circuit of claim 16 , further comprising a second diode, wherein the second node is between the first diode and the second diode. 25 . 
the electronic circuit of claim 16 , further comprising: third and fourth switches in series across the first and the second terminals, wherein a third node is formed between the third and the fourth switches; a second diode between the fourth terminal and a fourth node; and a second resonant circuit in series between the third and the fourth nodes, wherein the electronic circuit is configured such that when the third switch is closed and the fourth switch is open, the second resonant circuit is connected across the first and the second terminals, and such that when the third switch is open and the fourth switch is closed, the third terminal, the fourth terminal, the second diode, and the second resonant circuit are in series across the first and the second terminals. 26 . the electronic circuit of claim 16 , further comprising: third and fourth switches in series across the first and the second terminals, wherein a third node is formed between the third and the fourth switches; a second diode between the fourth terminal and a fourth node; and a second resonant circuit in series between the third and the fourth nodes, wherein the electronic circuit is configured such that when the third switch is closed and the fourth switch is open, the second resonant circuit is connected across the first and the second terminals, and such that when the third switch is open and the fourth switch is closed, the third terminal, the fourth terminal, the second diode, and the second resonant circuit are in series across the first and the second terminals, and wherein the first and the fourth switches are configured to be opened and closed together, and wherein the second and the third switches are configured to be opened and closed together. 27 .
an electronic circuit comprising: first and second terminals adapted to be connected across a floating source of direct current (dc) power; third and fourth terminals adapted to be connected to a load; first and second switches in series across the first and the second terminals, wherein a first node is formed between the first and the second switches, and wherein the first switch is between the first node and the third terminal; a first diode between the fourth terminal and a second node; and a resonant circuit in series between the first node and a second node, wherein the electronic circuit is configured such that when the first switch is closed and the second switch is open, first current flows from the floating source of dc power through the resonant circuit, and such that when the first switch is open, the second switch is closed, and the load is connected to the third and fourth terminals, a second current flows from the first terminal through the load, the first diode, the resonant circuit, and the second terminal in series. 28 . the electronic circuit of claim 27 , wherein when the first switch is closed and the second switch is open, the first current flows through the resonant circuit in a first direction, and when the first switch is open and the second switch is closed, the second current flows through the resonant circuit in a second direction opposite the first direction. 29 . the electronic circuit of claim 27 , further comprising first and second drive circuits adapted to gate the first and the second switches alternatively, with less than a fifty percent duty cycle, with a pulse width modulation (pwm) cycle such that the first switch is closed while the second switch is open during a first half of the pwm cycle and the second switch is closed while the first switch is open during a second half of the pwm cycle. 30 . 
the electronic circuit of claim 27 , further comprising a second diode, wherein the second node is between the first diode and the second diode. 31 . a method comprising: connecting a floating source of direct current (dc) power across first and second terminals of a circuit, the circuit comprising: third and fourth terminals; first and second switches connected in series across the first and the second terminals, wherein a first node is formed between the first and the second switches, and wherein the first switch is between the first node and the third terminal; a first diode between the fourth terminal and a second node; and a resonant circuit in series between the first node and a second node, wherein the electronic circuit is configured such that when the first switch is closed and the second switch is open, the resonant circuit is connected across the first and the second terminals, and such that when the first switch is open and the second switch is closed, the third terminal, the fourth terminal, the first diode, and the resonant circuit are in series across the first and the second terminals; and causing the first and the second switches to be gated alternatively, such that the first switch is closed and the second switch is open during a first phase of a pulse width modulation (pwm) cycle thereby charging the resonant circuit from the floating source of dc power, and such that the first switch is open and the second switch is closed during a second phase of the pwm cycle thereby discharging the resonant circuit to provide converted power to a load connected to the third terminal, the fourth terminal, and to ground. 32 . the method of claim 31 , wherein the causing comprises causing the first and second switches to be gated alternatively with less than a fifty percent duty cycle. 33 . 
the method of claim 31 , wherein the causing comprises causing the first and second switches to open and close with substantially zero current through the first and the second switches. 34 . the method of claim 31 , wherein the circuit further comprises a second diode, wherein the second node is between the first diode and the second diode. 35 . the method of claim 31 , further comprising causing the converted power to be inverted prior to being supplied to the load.
technical field aspects of this disclosure relate to distributed power systems, particularly a photovoltaic power harvesting system and, more particularly to a direct current link circuit connected between a photovoltaic array and a 3-phase inverter circuit. background in a conventional photovoltaic power harvesting system configured to feed a single phase or a three phase alternating current (ac) power grid, dual (positive and negative) direct current (dc) power may be generated first from solar panels. the three phase inverter powered by the dual (positive and negative) dc power produces three phase ac power at the output of the three phase inverter. conventionally, sufficiently high dc voltage may be provided to the input of the three phase inverter by connecting solar panels in series. however, in order to increase overall power conversion efficiency, the sum of positive and negative dc rails required by the inverter may be over 600 volts. in north america, an input of voltage over 600 volts may create an issue with safety agency approval. an approach to avoid the safety issue may include inputting less than 600 volts to a boost circuit or transformer-isolated circuit to generate dual dc rails internally for the inverter input. the additional boost or transformer-isolated circuit increases cost and complexity especially since the additional power converter module generally requires dedicated control and protection features. additionally, the boost or transformer-isolated circuit may also generate electromagnetic interference (emi) and may cause reduction in overall efficiency of conversion of dc power to three phase ac power. 
thus there is need for and it would be advantageous to have a dc link circuit with a low voltage input, which does not cause significant reduction in overall efficiency of conversion of dc power to three phase ac power and which provides a sufficiently high dc input voltage to the ac inverter to generate an ac output of the inverter of required magnitude. brief summary embodiments include an electronic circuit for converting power from a floating source of dc power to a dual direct current (dc) output. the electronic circuit may include a positive input terminal and a negative input terminal connectible to the floating source of dc power. a positive output terminal and a negative output terminal and a ground terminal which may be connected to the dual dc output. the positive output terminal may be connected to the positive input terminal. the positive output terminal, the negative output terminal and the ground terminal may feed a three phase inverter. a charge storage device may be connected in parallel to the positive input terminal and the negative input terminal. the charge storage device may be charged from the positive input terminal and the negative input terminal. a series connection of a first power switch and a second power switch connected across the positive input terminal and the negative input terminal. the series connection may provide a power output terminal between the first power switch and the second power switch and a negative return current path between the negative output terminal and the negative input terminal. the series connection may also include a first power terminal of the first power switch which connects to the positive output terminal and the positive input terminal. a second power terminal of the first power switch which connects to a third power terminal of the second power switch to provide the power output terminal. a fourth power terminal of the second power switch connects to the negative input terminal. 
the negative return path may include a first diode and a second diode. the cathode of the first diode connects to the negative input terminal and the cathode of the second diode connects to the anode of the first diode to provide a diode terminal. the anode of the second diode connects to the negative output terminal and a resonant circuit may connect between the power output terminal and the diode terminal. the resonant circuit may be adapted to alternately charge the resonant circuit and discharge the resonant circuit to the negative output terminal by an alternating switching signal applied to respective drive terminals of the first power switch and the second power switch. the alternating switching signal causes both the first power switch and the second power switch to turn on and turn off with substantially zero current. further embodiments include a second series connection of a third power switch and a fourth power switch. the second series connection may include a fifth power terminal of the third power switch connected to the positive output terminal and the positive input terminal. a sixth power terminal of the third power switch connected to a seventh power terminal of the fourth power switch to give a second power output terminal. an eighth power terminal of the fourth power switch connected to the negative input terminal. a third diode and a fourth diode connected in series between the negative output terminal and the negative input terminal. a cathode of the third diode connects to the negative input terminal. a cathode of the fourth diode connects to an anode of the third diode to give a second diode terminal. an anode of the fourth diode connects to the negative output terminal. a second resonant circuit connected between the second power output terminal and the second diode terminal. 
the second resonant circuit may be adapted to alternately charge the second resonant circuit and discharge the second resonant circuit to the negative output terminal by the alternating switching signal applied to respective drive terminals of the third power switch and the fourth power switch. the alternating switching signal causes both the third power switch and the fourth power switch to turn on and turn off with substantially zero current. embodiments include a method to convert power from a floating source of dc power to a dual direct current (dc) output with respect to electrical earth. the floating source of dc power may include a positive input terminal and a negative input terminal. the dual dc output may include a positive output terminal, a negative output terminal and a ground terminal. the positive input terminal connects to the positive output terminal. a cathode of a second diode and an anode of a first diode may be connected together. the anode of the second diode connects to the negative output terminal and the cathode of the first diode connects to the negative input terminal. the method charges a resonant circuit in a first switching cycle applied to a first power switch. the first switching cycle connects the resonant circuit across the positive input terminal and to the negative input terminal through the first diode. the resonant circuit discharges in a second switching cycle applied to a second power switch. the second switching cycle connects the resonant circuit in series between the negative input terminal and the negative output terminal through the second diode. embodiments include an electronic circuit for converting power from a floating source of dc power to a dual direct current (dc) output. the electronic circuit may include a positive input terminal and a negative input terminal connectible to the floating source of dc power. 
a positive output terminal and a negative output terminal and a ground terminal which may be connected to the dual dc output. the negative output terminal may be connected to the negative input terminal. the positive output terminal, the negative output terminal and the ground terminal may feed a three phase inverter. a charge storage device may be connected in parallel to the positive input terminal and the negative input terminal. the charge storage device may be charged from the positive input terminal and the negative input terminal. a series connection of a first power switch and a second power switch connected across the positive input terminal and the negative input terminal. the series connection may provide a power output terminal between the first power switch and the second power switch and a positive return current path between the positive output terminal and the positive input terminal. the series connection may also include a first power terminal of the first power switch which connects to the positive output terminal and the positive input terminal. a second power terminal of the first power switch which connects to a third power terminal of the second power switch to provide the power output terminal. a fourth power terminal of the second power switch connects to the negative input terminal. the positive return path may include a first diode and a second diode. the cathode of the first diode connects to the positive output terminal and the cathode of the second diode connects to the anode of the first diode to provide a diode terminal. the anode of the second diode connects to the positive input terminal and a resonant circuit may connect between the power output terminal and the diode terminal. 
the resonant circuit may be adapted to alternately charge the resonant circuit and discharge the resonant circuit to the positive output terminal by an alternating switching signal applied to respective drive terminals of the first power switch and the second power switch. the alternating switching signal causes both the first power switch and the second power switch to turn on and turn off with substantially zero current. further embodiments include a second series connection of a third power switch and a fourth power switch. the second series connection may include a fifth power terminal of the third power switch connected to the positive output terminal and the positive input terminal. a sixth power terminal of the third power switch connected to a seventh power terminal of the fourth power switch to give a second power output terminal. an eighth power terminal of the fourth power switch connected to the negative input terminal. a third diode and a fourth diode connected in series between the positive output terminal and the positive input terminal. a cathode of the third diode connects to the positive output terminal. a cathode of the fourth diode connects to an anode of the third diode to give a second diode terminal. an anode of the fourth diode connects to the positive input terminal. a second resonant circuit connected between the second power output terminal and the second diode terminal. the second resonant circuit may be adapted to alternately charge the second resonant circuit and discharge the second resonant circuit to the positive output terminal by the alternating switching signal applied to respective drive terminals of the third power switch and the fourth power switch. the alternating switching signal causes both the third power switch and the fourth power switch to turn on and turn off with substantially zero current. 
brief description of the drawings certain embodiments are illustrated by way of example, and not by way of limitation, in the accompanying figures, wherein like reference numerals refer to the like elements throughout: fig. 1 shows a photovoltaic power harvesting system according to conventional art. fig. 2 shows a power harvesting system in accordance with one or more embodiments described herein. fig. 3 shows a method for the power harvesting system shown in fig. 2 according to one or more embodiments described herein. fig. 4 a shows a circuit according to one or more embodiments described herein. fig. 4 b shows a circuit which may be an interleaved topology version of the circuit shown in fig. 4 a , according to according to one or more embodiments described herein. fig. 4 c shows a method, according to one or more embodiments described herein. detailed description reference will now be made in detail to features of the present invention, examples of which are illustrated in the accompanying figures. the features are described below to explain the present invention by referring to the figures. before explaining features of the invention in detail, it is to be understood that the invention is not limited in its application to the details of design and the arrangement of the components set forth in the following description or illustrated in the figures. the invention is capable of other features or of being practiced or carried out in various ways. also, it is to be understood that the phraseology and terminology employed herein is for the purpose of description and should not be regarded as limiting. 
for example, the indefinite articles “a” and “an” used herein, such as in “a switch” and “a dc output,” have the meaning of “one or more,” e.g., “one or more switches” and “one or more dc outputs.” it should be noted that, although the discussion herein relates primarily to photovoltaic systems, the present invention may, by non-limiting example, alternatively be configured using other distributed power systems including (but not limited to) wind turbines, hydro turbines, fuel cells, storage systems such as battery, super-conducting flywheel, and capacitors, and mechanical devices including conventional and variable speed diesel engines, stirling engines, gas turbines, and micro-turbines. the term “switch” as used herein refers to any of: silicon controlled rectifier (scr), insulated gate bipolar junction transistor (igbt), bipolar junction transistor (bjt), field effect transistor (fet), junction field effect transistor (jfet), mechanically operated single pole double throw switch (spdt), spdt electrical relay, spdt reed relay, spdt solid state relay, insulated gate field effect transistor (igfet), diode for alternating current (diac), and triode for alternating current (triac). the term “switch” as used herein refers to a three terminal device. two of the three terminals are referred to herein as “power terminals” and are equivalent to the collector and emitter of a bjt or the source and drain of a fet, for example. the remaining “drive terminal” of the three terminal device is the equivalent of the base of a bjt or the gate of a fet, for example. the term “positive current” as used herein refers to a direction of flow of a current from a higher potential point in a circuit to a lower potential point in the circuit. the term “negative current” as used herein refers to a flow of return current from a negative dc output to a negative input terminal.
the term “zero current switching” (or “zcs”) as used herein means that the current through a switch is reduced to substantially zero amperes before the switch is turned either on or off. the term “power converter” as used herein applies to dc-to-dc converters, ac-to-dc converters, dc-to-ac inverters, buck converters, boost converters, buck-boost converters, full-bridge converters and half-bridge converters or any other type of electrical power conversion/inversion known in the art. the terms “power grid” and “mains grid” are used herein interchangeably and refer to a source of alternating current (ac) power provided by a power supply company and/or a sink of ac power provided from a distributed power system. the term “period of a resonant circuit” refers to a time period of a substantially periodic waveform produced by the resonant circuit. the time period is equal to the inverse of the resonant frequency of the resonant circuit. the term “low input voltage” as used herein refers to a floating (i.e., not referenced to a ground potential) dc voltage input across two terminals of less than 600 volts or other voltage as specified by a safety regulation. the term “dual dc” input or output refers to positive and negative terminals that may be referenced to a third terminal, such as ground potential, electrical ground or a neutral of an alternating current (ac) supply which may be connected to electrical ground at some point. the term “two level inverter” as used herein may refer to its output. the ac phase output of the two level inverter has two voltage levels with respect to a negative terminal. the negative terminal is common to the ac phase output and the direct current (dc) input to the two level inverter. the alternating current (ac) phase output of the two level inverter may be a single phase output, a two phase output or a three phase output. therefore, the single phase output has two voltage levels with respect to the negative terminal.
the two phase output has two voltage levels with respect to the negative terminal for each of the two phases. the three phase output has two voltage levels with respect to the negative terminal for each of the three phases. similarly, the term “three level inverter” as used herein may refer to an alternating current (ac) phase output of the three level inverter. the ac phase output has three voltage levels with respect to a negative terminal. the negative terminal is common to the ac phase output and the direct current (dc) input to the three level inverter. the alternating current (ac) phase output of the three level inverter may be a single phase output, a two phase output or a three phase output. therefore, the single phase output has three voltage levels with respect to the negative terminal. the two phase output has three voltage levels with respect to the negative terminal for each of the two phases. the three phase output has three voltage levels with respect to the negative terminal for each of the three phases. compared with the two level inverter, the three level inverter may have a cleaner ac output waveform, may use smaller size magnetic components and may have lower losses in power switches, since more efficient lower voltage devices may be used. three level inverter circuits may have dual (positive and negative) direct current (dc) inputs. reference is made to fig. 1 , which shows a photovoltaic power harvesting system 10 according to conventional art. a photovoltaic string 109 includes a series connection of photovoltaic panels 101 . photovoltaic strings 109 may be connected together in parallel in an interconnected array 111 , which provides a parallel direct current (dc) power output at dc power lines x and y. the parallel dc power output supplies the power input of a direct-current-to-alternating-current (dc-to-ac) three phase inverter 103 on dc power lines x and y.
the three phase ac power output of inverter 103 (phases w, u and v) connects across an ac load 105 . ac load 105 by way of example may be a three phase ac motor or a three phase electrical power grid. reference is now made to fig. 2 , which illustrates a power harvesting system 20 according to a feature of the present invention. system 20 includes interconnected photovoltaic array 111 , which may provide a floating direct current voltage (dc) on positive input terminal a and negative input terminal b. the floating dc voltage may also be provided from other distributed power systems such as a dc voltage generator for example. connected across positive and negative input terminals a and b is charge storage device c 1 , which may be a capacitor. connected to positive input terminal a is the collector of an insulated gate bipolar transistor (igbt) igbt 1 . the emitter of igbt 1 connects to node c. igbt 1 may include an integrated diode with an anode connected to the emitter and a cathode connected to the collector. connected to negative input terminal b is the emitter of an insulated gate bipolar transistor (igbt) igbt 2 . the collector of igbt 2 connects to node c. igbt 2 may include an integrated diode with an anode connected to the emitter and a cathode connected to the collector. drive circuits g 1 and g 2 are connected to the bases of igbt 1 and igbt 2 respectively and may be referenced to ground. an inductor l 1 connects between nodes c and d, where node d may connect to the ground and the ground input of inverter 103 a . a diode cr 1 has an anode connected to positive input terminal a and a cathode connected to node v+. diode cr 1 provides a positive current path between nodes v+ and positive input terminal a. a capacitor c 2 connects between node d and node v+. node v+ provides a dc positive voltage to the input of inverter 103 a . a diode cr 2 has a cathode connected to negative input terminal b and an anode connected to node v−. 
diode cr 2 provides a negative return current path between node v− and node b. capacitor c 3 connects between node d and node v−. node v− provides a dc negative voltage to the input of inverter 103 a . capacitors c 2 and c 3 may have substantially equal capacitance values. inverter 103 a may have a 3 level inverter topology with dual dc input from nodes v+, v− and node d, which may be converted to a single phase or a 3 phase ac voltage output, which supplies a load 105 , which may be a single phase or a 3 phase load. reference is now made to fig. 3 , which shows a method 301 applied to power harvesting system 20 shown in fig. 2 , according to a feature of the present invention. in step 303 , capacitor c 1 may be charged by the floating dc voltage of array 111 by virtue of capacitor c 1 being directly connected across array 111 at positive and negative input terminals a and b. igbt 1 and igbt 2 may be gated alternately such that when igbt 1 is turned on, igbt 2 is off and vice versa by respective drive circuits g 1 and g 2 . igbt 1 and igbt 2 may be gated alternately with less than a 50% duty cycle so as to avoid cross-conduction between igbt 1 and igbt 2 (i.e. to avoid igbt 1 and igbt 2 being on at the same time). a floating voltage provided from array 111 substantially provides a positive voltage on node v+ and a negative voltage on node v− with respect to the ground. the voltages on node v+ and node v− may be substantially equal in magnitude to the floating voltage. step 303 , which charges capacitor c 1 , may continue during alternate gating of switches igbt 1 and igbt 2 . when switch igbt 1 is turned on (and igbt 2 turned off), current flows from array 111 and a discharge current flows (step 305 a ) from storage capacitor c 1 through the collector and emitter of igbt 1 , through inductor l 1 , into capacitor c 3 and the input load of inverter 103 a between ground (node d) and node v−. inductor l 1 and capacitor c 3 form a series resonant circuit. 
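the alternate gating of igbt 1 and igbt 2 with less than a 50% duty cycle can be sketched as a simple timing function. this is a minimal illustration only; the period and duty values below are assumptions, not taken from the description:

```python
def gate_states(t, period, duty):
    """Complementary gate drive for IGBT1/IGBT2.

    Each switch conducts for `duty` (< 0.5) of the PWM period; the
    remaining time is dead time, so the two switches are never on
    simultaneously (no cross-conduction).
    """
    if duty >= 0.5:
        raise ValueError("duty must be below 50% to guarantee dead time")
    phase = (t % period) / period        # position within the PWM cycle
    g1_on = phase < duty                 # IGBT1 conducts early in the cycle
    g2_on = 0.5 <= phase < 0.5 + duty    # IGBT2 conducts in the second half
    return g1_on, g2_on

# sample one cycle and confirm the switches never overlap
period, duty = 200e-6, 0.45              # hypothetical 5 kHz cycle, 45% duty
for k in range(1000):
    g1, g2 = gate_states(k * period / 1000, period, duty)
    assert not (g1 and g2)               # no cross-conduction at any instant
```

the dead time between the two conduction windows is what prevents a shoot-through path from input terminal a to input terminal b.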
the diode across igbt 1 is reverse biased with respect to the voltage at positive input terminal a. the input voltage to inverter 103 a with respect to ground (node d) and node v− may be derived across capacitor c 3 . the resonant frequency of inductor l 1 and capacitor c 3 is given by eq. 1 and the corresponding resonant periodic time t is given by eq. 2: f0 = 1/(2π√(l1 × c3)) (eq. 1); t = 1/f0 (eq. 2). when igbt 1 initially turns on, there may be both zero current through inductor l 1 and through the collector and emitter of igbt 1 . after igbt 1 initially turns on, the current through l 1 and the current through the collector and emitter of igbt 1 may increase and then fall sinusoidally. when igbt 1 turns off (the on period of the switch corresponds to half of the resonant periodic time t) there may be close to zero current through inductor l 1 and through the collector and emitter of igbt 1 . a negative current path between node v− and negative input terminal b may be completed through diode cr 2 corresponding to half of the resonant periodic time t. step 303 continues as capacitor c 1 is still being charged by the floating dc voltage of array 111 by virtue of capacitor c 1 being directly connected across array 111 at positive and negative input terminals a and b. when igbt 2 is turned on (and igbt 1 is turned off), current flows from array 111 and a discharge current flows (step 305 b ) from storage capacitor c 1 through diode cr 1 , through the input load of inverter 103 a between ground (node d) and node v+, through c 2 , through inductor l 1 and through the collector and emitter of igbt 2 . inductor l 1 and capacitor c 2 form a series resonant circuit. the diode across igbt 2 may be reverse biased with respect to the voltage at negative input terminal b. the input voltage to inverter 103 a with respect to ground (node d) and node v+ is derived across capacitor c 2 . 
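eq. 1 and eq. 2 can be evaluated directly. the component values below are illustrative assumptions (the description gives no values for l1 and c3):

```python
import math

def resonant_params(L, C):
    """Resonant frequency (eq. 1) and periodic time (eq. 2) of the
    series L-C tank, plus the half-period switch on-time noted in
    the text (on period = t/2)."""
    f0 = 1.0 / (2.0 * math.pi * math.sqrt(L * C))  # eq. 1
    T = 1.0 / f0                                   # eq. 2
    return f0, T, T / 2.0

# hypothetical values: L1 = 10 uH, C3 = 100 uF
f0, T, t_on = resonant_params(10e-6, 100e-6)
print(f"f0 = {f0:.1f} Hz, t = {T*1e6:.1f} us, on-time = {t_on*1e6:.1f} us")
# → f0 ≈ 5032.9 Hz, t ≈ 198.7 us, on-time ≈ 99.3 us
```

with these example values the gate drive would hold each switch on for roughly 99 µs so that turn-off coincides with the zero crossing of the resonant current.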
capacitor c 2 may have the same value as capacitor c 3 ; therefore, the resonant frequency of inductor l 1 and capacitor c 2 and the corresponding resonant periodic time t may be substantially the same. when igbt 2 initially turns on, there may be both zero current through inductor l 1 and through the collector and emitter of igbt 2 , and there may be substantially zero power loss at turn on of igbt 2 . after igbt 2 initially turns on, the current through l 1 and the current through the collector and emitter of igbt 2 may increase and then fall sinusoidally. when igbt 2 turns off (the on period of the switch corresponds to half of the resonant periodic time t) there may be close to zero current in inductor l 1 and close to zero current through the collector and emitter of igbt 2 . therefore, there may be zero power loss at turn off of igbt 2 . a positive current path between node v+ and positive input terminal a is completed through diode cr 1 corresponding to half of the resonant periodic time t. zero current switching (zcs) may, therefore, be provided for both turn on and turn off of both switches igbt 1 and igbt 2 . zero current switching (zcs) may permit the use and implementation of slower switching speed transistors for igbt 1 and igbt 2 , which may have a lower voltage drop between collector and emitter. thus, both switching losses and conduction losses may be reduced. similarly, slower integrated diodes of igbt 1 and igbt 2 with lower voltage drop may be used. slower diodes cr 1 and cr 2 may also be used. the resonant current shape through the collector and emitter of igbt 1 and igbt 2 may also reduce the turn-on losses in the diodes cr 1 and cr 2 , as well as generated electromagnetic interference (emi). another approach to generate dual dc rails, according to conventional art, may be to use boost or transformer-isolated circuits. if a boost circuit is used, the boost circuit conduction and switching losses may be very high. 
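the zero-current-switching behaviour described above, where the switch current rises and falls sinusoidally and is near zero both at turn-on and at turn-off after half a resonant period, can be checked numerically. the peak current and component values are illustrative assumptions:

```python
import math

def tank_current(t, L, C, I_pk):
    """Idealized resonant tank current: sinusoidal over the conduction
    interval, zero at turn-on (t = 0) and at turn-off (t = T/2), which
    is the zero-current-switching (ZCS) condition."""
    w0 = 1.0 / math.sqrt(L * C)            # resonant angular frequency
    return I_pk * math.sin(w0 * t)

L1, C3, I_pk = 10e-6, 100e-6, 20.0         # hypothetical values
T = 2.0 * math.pi * math.sqrt(L1 * C3)     # resonant periodic time

assert abs(tank_current(0.0, L1, C3, I_pk)) < 1e-9       # zero at turn-on
assert abs(tank_current(T / 2, L1, C3, I_pk)) < 1e-6     # ~zero at turn-off
assert abs(tank_current(T / 4, L1, C3, I_pk) - I_pk) < 1e-6  # peak mid-pulse
```

because both switching instants land on zero crossings of this sinusoid, the product of switch voltage and current at the switching edges is negligible, which is why slower, lower-drop devices become usable.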
the boost inductor may be large and lossy and a reverse recovery problem of the output diode of the boost circuit may also be significant. using a silicon carbide diode for the output of the boost circuit may remove the reverse recovery problem but may also increase the conduction loss. the overall cost of the boost circuit may be high if a number of expensive carbide diodes are paralleled together to accommodate high power levels. also, some circuit topologies to generate dual dc rails from a solar panel may make the solar panel voltage vary with respect to ground. if the solar panel voltage is changing at a fast rate, ground circulating currents may be created and current levels set by safety agencies may be exceeded. the circuit topology described in various features and aspects below may address the above mentioned design considerations of circuit topologies to generate dual dc rails from a solar panel. reference is now made to fig. 4 a which shows a circuit 40 a according to an aspect of the present invention. interconnected photovoltaic array 111 is connected across capacitor c 1 at nodes a and b. connected to node a is the collector of transistor igbt 1 . the emitter of igbt 1 is connected to the collector of transistor igbt 2 at node c. both transistors igbt 1 and igbt 2 have an integral diode with an anode connected to the emitter and a cathode connected to the collector of each transistor respectively. drive circuits g 1 and g 2 are connected to the bases of igbt 1 and igbt 2 respectively. the emitter of igbt 2 is connected to node b and the cathode of diode cr 1 . the anode of diode cr 1 connects to the cathode of diode cr 2 at node f. one end of inductor l 1 connects to node c and the other end of inductor l 1 connects to one end of capacitor c 4 . the other end of capacitor c 4 connects to node f. the anode of diode cr 2 connects to the negative direct current (dc) input v− of dc to alternating current (ac) inverter 103 a . 
the anode of diode cr 2 also connects to one end of capacitor c 3 , the other end of c 3 connects to ground or neutral center-point node d. node d connects to the ground input to inverter 103 a . one end of capacitor c 2 connects to node d, the other end of capacitor c 2 connects to node a and the positive direct current (dc) input v+ of the dc to ac inverter 103 a . inverter 103 a may have a 3 level inverter topology with dual dc input from nodes v+, v− and node d which may be converted to a single phase or a 3 phase ac voltage output which supplies a load 105 , which may be a single phase or a 3 phase load. alternately in circuit 40 a , diodes cr 1 and cr 2 may be placed in a series connection between node a and node v+. the series connection has the anode of diode cr 2 connected to node a and the collector of igbt 1 . the cathode of cr 2 is connected to the anode of diode cr 1 . the cathode of diode cr 1 is connected to node v+ and one end of capacitor c 2 . tank circuit t 1 still has one end of l 1 connected to node c and the other end of l 1 connected to one end of capacitor c 4 . the other end of c 4 connects to the cathode of diode cr 2 . the emitter of igbt 2 and node b are now connected to node v− and one end of capacitor c 3 . reference is now made to fig. 4 c which shows a method 401 , according to a feature of the present invention. igbt 1 and igbt 2 in circuit 40 a are gated alternately with a pulse width modulation (pwm) cycle by drive circuits g 1 and g 2 . igbt 1 and igbt 2 in circuit 40 a are gated alternately with up to almost 50% duty cycle so as to avoid cross conduction between igbt 1 and igbt 2 . during the first half of the pwm cycle applied by drive circuit g 1 , igbt 1 is turned on at zero current (loss-less turn-on). current then flows between the collector and emitter of igbt 1 into the series connected resonant tank t 1 formed by inductor l 1 and capacitor c 4 , through diode cr 1 , and is returned to the negative input terminal (node b) of panel 111 . 
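the two halves of the pwm cycle of method 401 amount to a charge-pump style rail doubler: c4 is charged to the input voltage in one half-cycle and then stacked in series with the input in the other, so that v+ − v− is substantially twice v_in. a minimal, idealized (lossless) bookkeeping sketch, with a hypothetical panel voltage and the assumption that ground sits midway between the rails:

```python
def doubler_rails(v_in):
    """Idealized rail voltages of circuit 40a after both PWM half-cycles.

    First half  (IGBT1 on): the resonant tank charges C4 up to v_in.
    Second half (IGBT2 on): the charged C4 is placed in series with
    v_in, driving the negative rail below ground by v_in while v+ sits
    at +v_in, so v+ - v- is substantially 2 * v_in.
    (The symmetric split around ground is an assumed assignment; the
    description only fixes the v+ to v- difference.)
    """
    v_c4 = v_in            # charged to the full input voltage
    v_plus = v_in          # positive rail follows the panel voltage
    v_minus = -v_c4        # stacked C4 pulls the negative rail down
    return v_plus, v_minus

v_plus, v_minus = doubler_rails(400.0)   # hypothetical 400 V panel
assert v_plus - v_minus == 2 * 400.0     # output is twice the input
```

this also illustrates why no voltage feedback loop is needed: the rail-to-rail voltage tracks the input by construction rather than by regulation.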
full solar panel 111 voltage v in (across nodes a and b) is applied to resonant tank t 1 . as the current through the resonant tank t 1 rises, capacitor c 4 charges (step 403 ). when the voltage of capacitor c 4 reaches the input voltage v in (across nodes a and b), the current in tank t 1 is reduced to be substantially zero. by the time igbt 1 turns off, the current through igbt 1 and tank t 1 is already substantially zero, and turn-off of igbt 1 is also substantially loss-less. during the second half of the pwm cycle igbt 1 is off and igbt 2 turns on at zero current. the charged capacitor c 4 buffered by l 1 is connected in series with the input voltage v in (across nodes a and b). the voltage at the cathode of diode cr 2 goes negative so that diode cr 2 begins to conduct. a current path is formed from the positive input terminal (node a), through the load and output filter capacitance provided by c 2 , c 3 and inverter 103 a , through cr 2 , through capacitor c 4 and inductor l 1 , through igbt 2 and to the negative input terminal (node b). the current path flowing through resonant tank t 1 discharges (step 405 ) capacitor c 4 . just as with igbt 1 , both turn-on and turn-off of igbt 2 occur at zero current, due to the sinusoidal current in tank t 1 . resonant action of tank t 1 may therefore allow the use of slower, lower cost silicon output diodes cr 1 and cr 2 , possibly without the reverse recovery problems of diodes used in conventional topologies to produce dual dc rails from a single dc source. similarly, igbt 1 and igbt 2 can be slower, have a lower voltage drop and therefore may be less expensive. the output voltage across terminals v+ and v− is substantially equal to twice the input voltage v in . with circuit 40 a no voltage feedback is needed to regulate the two dc outputs v+ and v−. reference is now made to fig. 4 b which shows a circuit 40 b which is an interleaved topology version of circuit 40 a shown in fig. 
4 a , according to an aspect of the present invention. the interleaved topology version has additional transistors igbt 3 and igbt 4 , inductor l 2 , capacitor c 5 , and diodes cr 3 and cr 4 . both transistors igbt 3 and igbt 4 have an integral diode with an anode connected to the emitter and a cathode connected to the collector of each transistor respectively. connected to node a is the collector of transistor igbt 3 . the emitter of igbt 3 is connected to the collector of transistor igbt 4 at node e. drive circuits g 1 and g 2 are also connected to the bases of igbt 4 and igbt 3 respectively. the emitter of igbt 4 is connected to node b and the cathode of diode cr 3 . the anode of diode cr 3 connects to the cathode of diode cr 4 at node g. one end of inductor l 2 connects to node e and the other end of inductor l 2 connects to one end of capacitor c 5 . the series connection of inductor l 2 and capacitor c 5 forms resonant tank t 2 . the other end of capacitor c 5 connects to node g. the anode of diode cr 4 connects to the negative direct current (dc) input v− of dc to alternating current (ac) inverter 103 a. alternately in circuit 40 b , diodes cr 1 , cr 2 , cr 3 and cr 4 may be placed in series connections between node a and node v+. the series connection between cr 1 and cr 2 has the anode of diode cr 2 connected to node a and the collectors of igbt 1 and igbt 3 . the cathode of cr 2 is connected to the anode of diode cr 1 . the cathode of diode cr 1 is connected to node v+ and one end of capacitor c 2 . similarly, the series connection between cr 3 and cr 4 has the anode of diode cr 4 connected to node a and the collectors of igbt 1 and igbt 3 . the cathode of cr 4 is connected to the anode of diode cr 3 . the cathode of diode cr 3 is connected to node v+ and one end of capacitor c 2 . tank circuit t 1 still has one end of l 1 connected to node c and the other end of l 1 connected to one end of capacitor c 4 . the other end of c 4 connects to the cathode of diode cr 2 . 
the emitters of igbt 2 , igbt 4 and node b are now connected to node v− and one end of capacitor c 3 . similarly, tank circuit t 2 still has one end of l 2 connected to node e and the other end of l 2 connected to one end of capacitor c 5 . the other end of c 5 connects to the cathode of diode cr 4 . at high power, in conventional circuit topologies, semiconductor switches and output diodes usually may be paralleled together. in practice it may not be feasible to parallel silicon diodes directly. likewise, not all types of igbts may be paralleled directly. instead, as with circuit 40 b , the same number of switches and diodes may be re-arranged into an interleaved topology, as shown in fig. 4 b . ripple current ratings of the c 1 , c 2 and c 3 capacitors can be greatly reduced (along with cost and size) due to partial cancellation of ripple currents in them. the diodes cr 1 , cr 2 , cr 3 , cr 4 and igbt 1 , igbt 2 , igbt 3 and igbt 4 share a load (input to inverter 103 a ) without having to be paralleled directly, so sharing of power to be delivered to the load may no longer be an issue. although selected features of the present invention have been shown and described, it is to be understood the present invention is not limited to the described features. instead, it is to be appreciated that changes may be made to these features without departing from the principles and spirit of the invention, the scope of which is defined by the claims and the equivalents thereof.
010-596-492-233-284
US
[ "US" ]
C10L5/00,C10L5/04,F26B1/00
1988-03-28T00:00:00
1988
[ "C10", "F26" ]
preagglomeration of fine coal before thermal dryer in a preparation plant
method and apparatus for reducing the dustiness and increasing the size distribution of a coal product in a coal preparation plant by mixing dewatered coal fines and recycled fine coal from the cyclone separator with a binder to form an agglomerated/size enlarged coated product which can be passed through a thermal dryer.
1. the method of fine coal recovery in a coal processing plant to minimize dustiness and amount of additives required for coal handling comprising the steps of: (a) combining dewatered coal fines from the vacuum disc filter with recycled unprocessed fine coal from a thermal dryer cyclone; (b) agglomerating the coal fines and fine coal with a binder in a pinmixer; (c) gas drying the agglomerates to a predesired moisture content in a thermal dryer; (d) separating unprocessed fine coal from the drying gas in a thermal dryer cyclone; and (e) recycling the unprocessed fine coal for mixture with the coal fines in the pinmixer.
background of the invention 1. field of the invention preparation plant fine coal from the vacuum disc filter is agglomerated together with cyclone recycle fines and a binder in a pinmixer to produce +28 mesh particles. the agglomeration is before the thermal dryer. 2. summary of the prior art it is well-known in many arts to agglomerate materials by mixing the material fines with a binder to cause the fines to adhere to produce particle growth. for example, u.s. pat. no. 3,651,179 discloses passing wet raw poorly fusing petroleum coke through a dryer and crusher and then adding a binder in a mixer before pelletizing. u.s. pat. no. 3,830,943 discloses a method for agglomerating dry food particles in a drum and u.s. pat. no. discloses an apparatus for making granular superphosphate. summary of the invention it is an object of this invention to increase the size distribution of a coal preparation plant product by placing dewatered coal fines in an agglomerating device with cyclone recycle fines before passing to a thermal dryer. it is also an object of this invention to mix preparation plant fine coal from the vacuum disc filter with cyclone recycled coal fines in a pinmixer prior to passing through a thermal dryer. a binder can be added to the pinmixer. this will reduce the dustiness of the product and reduce or eliminate the water and other chemical sprays required on the coal product to reduce dustiness and freezing. brief description of the drawing the single figure of the drawing is a schematic illustration of the process of this invention, diagrammatically illustrating the apparatus involved. description of the preferred embodiment coal is agglomerated during coal processing in a preparation plant. coal can be agglomerated by compaction or agitation. briquetters, extruders and pellet mills are types of compaction equipment. agitation agglomeration methods include pinmixers, disc pelletizers, drum pelletizers and liquid phase agglomeration using high shear mixing. 
most forms of coal agglomeration methods use either an organic binder such as lignosulfonate, petroleum pitch, latex or polymers, or an inorganic binder such as cement or bentonite. binder choice depends principally on the cost of the binder and the product quality required. in the conventional coal processing plant, the coal fines are recovered by passing a coal slurry through a dewatering device such as a vacuum disc filter and then to a thermal dryer before mixing with the coarse coal. water and other dust preventative additives and binders are added after the thermal dryer. the fines blown out of the dryer and collected in the cyclone are presently mixed with the coarse coal product. in current practice, during the winter months, antifreeze agents also have to be added to the coal product to prevent freezing. with this background, it is the purpose of this invention to place the dewatered coal fines from the vacuum disc filter and recycled fines from the thermal dryer cyclone underflow into a pinmixer and add a binder to produce +28 mesh agglomerates. this product is then passed through the thermal dryer, with the coal fines from the thermal dryer cyclone underflow again being recycled to the pinmixer for further processing. with this type of apparatus and process, all the fine coal in the prep plant is treated with binder, agglomerated into +28 mesh particles and heated and dried in the thermal dryer. with the recycling of the fines from the cyclone underflow, these fines are not mixed with the other plant product in the conventional manner to create dust, but are reprocessed into +28 mesh agglomerates. since all fines are exposed to a binder, there would be no water sprayed on the product after the thermal dryer, as in the conventional practice. further, the product from the thermal dryer would be less prone to freeze because of its coarser size and lower moisture content (no water added as a dust suppressant after the thermal dryer). 
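because the cyclone underflow fines are recycled to the pinmixer rather than blended into the product, the pinmixer sees a steady-state throughput larger than the fresh filter-cake feed. a simple recycle mass balance illustrates this; the feed rate and carryover fraction below are hypothetical, not from the description:

```python
def pinmixer_throughput(fresh_feed, carryover_fraction):
    """Steady-state pinmixer load with cyclone recycle.

    A fraction of the dryer throughput is blown over as fines,
    captured by the cyclone and returned to the pinmixer:
        M = fresh_feed + carryover_fraction * M
    so  M = fresh_feed / (1 - carryover_fraction).
    """
    if not 0 <= carryover_fraction < 1:
        raise ValueError("carryover fraction must be in [0, 1)")
    return fresh_feed / (1.0 - carryover_fraction)

# hypothetical numbers: 50 t/h of dewatered fines, 10% blown to the cyclone
M = pinmixer_throughput(50.0, 0.10)
print(f"pinmixer load: {M:.1f} t/h, recycle stream: {0.10 * M:.1f} t/h")
```

the balance shows that the pinmixer and thermal dryer must be sized slightly above the fresh feed rate to absorb the closed recycle loop.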
preagglomeration/size enlargement before the thermal dryer can increase the amount of coal cleaned, while still using the thermal dryer, and produce a non-dusty product. attention is now directed to the drawing which schematically illustrates the flow of the prep plant process with the various processing apparatus being diagrammatically illustrated. a coal fine slurry is dewatered in a conventional vacuum disc filter 10 and deposited on a transporter 12 and into a pinmixer 14. the pinmixer is a horizontal cylindrical casing 16 enclosing a shaft 18 driven by a motor "m" and containing several rods, pins or paddles 20 extending outwardly to a short distance from the inside surface of the casing 16. the fines from the underflow of the cyclone are added to the pinmixer along with a binder, and the agitation of the fines in the mixer will cause particulate growth. (a pneumatic transporter such as blower "b" in the line from the cyclone to the pinmixer will transport these fines). the agglomerated product passes out of the pinmixer onto a transporter 22 and into the thermal dryer where the pelletized product is dried and thereafter directed out of the dryer. the moisture laden gas with unprocessed fines passes through the cyclone 24 which further separates the fines from the flue gas, which is passed to a scrubber for removal of toxic gases and fine particles before release from the prep plant to the atmosphere. the fines from the cyclone underflow are recycled to the pinmixer to be preagglomerated with the dewatered fine coal from the vacuum disc filter. it can thus be seen that with the method and apparatus of this invention, all the fine coal from the preparation plant can be agglomerated and dried producing a dust-free product. as illustrated in the drawing, the coarse coal can also be added to the agglomerated fines at the transporter 22 for passage through the thermal dryer to minimize the moisture content of the combined product.
011-572-348-051-38X
EP
[ "EP", "PL" ]
D06F58/20,D06F58/26,D06F58/44
2013-07-19T00:00:00
2013
[ "D06" ]
method for operating a steam generation unit in a laundry dryer and method of operating a laundry dryer
the invention relates to a method for operating a steam generation unit (90) in a laundry dryer (2), the laundry dryer comprising a laundry storing compartment (17) for receiving laundry (19) to be treated; a steam generation unit (90) for generating steam for laundry steam treatment, wherein the steam generation unit (90) is an inline steam generator comprising a heater (282); means for controlling the flow rate of water provided to the steam generation unit (90), wherein the supply rate for supplying water to the steam generation unit is controlled by controlling the activation of a water supply pump supplying water from a water reservoir (140), or by controlling the opening/closing of a valve (280) that is connected to a water reservoir (140) or connected to a water mains line. the method comprises: starting the control of the heater (282) for heating the steam generation unit (90), and thereafter starting the control of the water supply pump or of the valve (280); wherein after starting the control of the water supply pump or valve (280), the water supply pump or the valve is controlled according to a predetermined time sequence, in particular independent of the current operation status of the heater (282) or the current temperature of the steam generation unit (90).
method of operating a laundry dryer (2), wherein the laundry dryer (2) comprises: a laundry storing compartment (17) for storing laundry (19) to be treated, a rear channel (20b, 20c) for guiding process air (a) at the backside of the laundry storing compartment (17), a back wall (74) of the laundry storing compartment, wherein the compartment back wall (74) comprises a plurality of back wall openings (84) designed for passing process air (a) from the rear channel (20b, 20c) into the laundry storing compartment (17), a rear wall (94) forming at least a portion of a back cover (95) of the dryer (2), a steam generation unit (90) for generating steam to be supplied into the laundry storing compartment (17), and means (12) for heating the process air (a) for drying the laundry in the laundry storing compartment, wherein the means (12) for heating the process air (a) is a heat pump condenser or an electrical resistor heater, characterized by a nozzle unit (88, 300) comprising: one or a plurality of nozzle outlets (92) for injecting steam generated in the steam generation unit (90) into the laundry storing compartment (17), a steam conduit (106) for providing steam from the steam generation unit (90) to the nozzle unit (88, 300), and optionally a drain outlet (308) for draining water from within the nozzle unit (88, 300) to the outside, wherein at least a portion of the steam conduit (106) is guided within or in thermal contact with walls delimiting the rear channel (20b, 20c), so that there is a thermal connection (284) between the rear channel (20a, 20b) and the steam conduit (106), and wherein the method for operating the laundry dryer (2) comprises: activating the means for heating the process air (a) before activating the steam generation unit (90), and activating a fan (8) for guiding heated process air (a) into said rear channel (20b, 20c) for warming-up the steam conduit and/or the nozzle unit due to the thermal connection (284) before guiding steam through the 
steam conduit and/or the nozzle unit. method according to claim 1, wherein the method further comprises: deactivating the means for heating the process air (a) before activating the steam generation unit (90), or reducing the heating power of the means for process air heating before activating the steam generation unit (90), when the process air reaches a predetermined temperature. method according to claim 1 or 2, wherein the laundry dryer (2) further comprises one of, more of or all of the following: a front wall (60) with a front loading opening (54) for loading laundry (19) into the laundry storing compartment (17), wherein the compartment back wall (74) is opposite to the loading opening (54), and a rear frame (72) including said compartment back wall (74), wherein the laundry storing compartment (17) is formed or is essentially formed by a cylindrical, rotatable drum (18) which is open at the both axial ends and wherein the compartment back wall (17) is stationary and closes the backside of the drum and a loading door (55) and a portion of a front frame (68) are closing the front side of the drum. method according to claim 1, 2, or 3, wherein a nozzle outlet (92) of the nozzle unit (88, 300) is arranged between said compartment back wall (74) and said rear wall (94) inside said rear channel (20b, 20c) so that steam ejected from the nozzle outlet (92) passes through at least one back wall opening (84) of the compartment back wall (74) before entering the laundry storing compartment (17). 
method according to any of the previous claims, wherein the laundry dryer (2) further comprises a heat-pump system (4) having a refrigerant temperature sensor (292), and wherein the method comprises, before activating the steam generation unit, detecting a temperature signal from the temperature sensor (292), if the temperature is below a predetermined temperature threshold, activating the heat-pump system (4) for heating the process air, and activating the steam generation unit (90) for generating steam. method according to any of the previous claims, wherein the laundry dryer (2) further comprises a heat-pump system (4) having a refrigerant temperature sensor (292), and wherein the predetermined temperature of the process air (a) is determined by means of the refrigerant sensor (292). method according to any of the previous claims, wherein the method further comprises at least one of the following: reducing the heating power (p) of or deactivating the heater (282) of the steam generation unit (90) while the or a means for process air heating is active. method according to any of the previous claims, wherein the method further comprises deactivating the or a means for heating the process air after a predetermined period of time or when the steam conduit (106) has reached a temperature (q') within a predetermined temperature range of about 30°c to 40°c. method according to any of the previous claims, wherein the or a nozzle unit (88, 300) further comprises a drain outlet (308) adapted for draining the water from the nozzle unit (300) to the rear channel (20b, 20c), or a separation chamber (302) for separating steam and water, wherein at least a portion of the separation chamber (302) is arranged within the rear channel (20b, 20c). 
method according to any of the previous claims, wherein the duration of the activation of the means (12) for heating the process air and/or the heating power used for heating the process air and/or the duration of the activation of the fan and/or the power of the fan are controlled depending on the temperature inside the process air channel. method according to any of the previous claims, wherein the method further comprises keeping the steam generation unit (90) deactivated, while the means (12) for heating the process air (a) are activated.
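the temperature-gated start-up described in the claims above (detect the refrigerant temperature; if it is below a threshold, activate the heat-pump system for process air heating before activating the steam generation unit) can be sketched as a small control routine. the threshold value and the hardware-interface callables are assumptions for illustration only:

```python
def start_steam_cycle(read_refrigerant_temp, activate_heat_pump,
                      activate_steam_unit, threshold_c=30.0):
    """Illustrative start-up logic: if the refrigerant temperature is
    below the threshold, pre-heat the process air with the heat pump,
    then activate the steam generation unit."""
    preheated = False
    if read_refrigerant_temp() < threshold_c:
        activate_heat_pump()       # warm up the process air first
        preheated = True
    activate_steam_unit()          # steam generation starts afterwards
    return preheated

# illustrative use with stub hardware callables
events = []
cold_start = start_steam_cycle(lambda: 18.0,
                               lambda: events.append("heat_pump"),
                               lambda: events.append("steam"))
assert cold_start and events == ["heat_pump", "steam"]
```

the ordering matters: a pre-warmed rear channel and steam conduit reduce condensation of the injected steam before it reaches the laundry.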
the invention relates to a method for operating a steam generation unit in a laundry dryer. ep 1 887 127 a1 discloses laundry treating machines having means for applying a steam treatment to laundry. the steam is directed inside a rotatable drum containing the laundry to be treated. such steam treatment is used for removing odours from laundry or for relaxing and removing wrinkles from clothes. wo 2004/059070 a1 teaches a laundry dryer with a laundry storing compartment defined by a cylindrical rotatable drum, a loading opening at the front end of the drum and a drum back wall at the rear end. this laundry dryer contains a processing unit having an evaporator for generating steam in order to remove odours from the laundry disposed in the drum. the steam is injected into the laundry storing compartment by an outlet of the process air channel fluidly connected to the laundry storing compartment at its rear end. the processing unit and its evaporator are arranged outside the laundry storing compartment adjacent to the mentioned outlet of the process air channel. ep 1 889 966 b1 discloses a water supply control for a steam generator of a fabric treatment appliance using a temperature sensor. the fabric treatment appliance comprises a steam generator with a steam generation chamber configured to hold water, a temperature sensor configured to sense a temperature representative of the steam generation chamber at a predetermined water level in the chamber, and a controller coupled to the sensor. the controller is configured to control the flow of water based on the sensed temperature in order to control the level of water in the steam generation chamber. ep 2 390 404 a1 discloses a washing machine with a dryer function comprising a heat pump system. a steam generator may be provided within the body of the laundry treatment apparatus, but arranged external to the storing compartment and the air circulation channel. 
then a duct or connection provides steam from the steam generator into the storing compartment. de 10 2008 028 177 a1 discloses a method for operating a dryer with a steam generator. the steam generator is located above a drum of the dryer and steam is supplied to the laundry at a front side of the drum via a steam conduit and nozzle. to determine the laundry load, a process air heater is activated and the time is measured until the process air outlet temperature reaches a temperature limit. based on the laundry load, the amount of steam or the steam supply duration is determined. the process air heater is deactivated before the heater of the steam generator is activated. it is an object of the invention to provide a method for operating a steam generation unit in a laundry dryer by which the steam generation for steam treatment of the laundry is further improved. the invention is defined in claim 1. particular embodiments of the invention are set out in the dependent claims. according to the invention, a method for operating a laundry dryer relates to a laundry dryer which comprises a rear channel, a back wall of the laundry storing compartment, a rear wall forming at least a portion of a back cover of the dryer, a nozzle unit, a steam conduit, and means for heating the process air. the rear channel is arranged for guiding process air at the backside of the laundry storing compartment, and the compartment back wall comprises a plurality of back wall openings designed for passing process air from the rear channel into the laundry storing compartment. a steam generation unit is arranged for generating steam to be supplied into the laundry storing compartment. the nozzle unit comprises one or a plurality of nozzle outlets for injecting steam generated in the steam generation unit into the laundry storing compartment and optionally a drain outlet for draining water from within the nozzle unit to the outside. 
the steam conduit is arranged for providing steam from the steam generation unit to the nozzle unit. at least a portion of the steam conduit is in thermal connection with walls delimiting the rear channel for guiding the process air. the method comprises activating the means for heating the process air before activating the steam generation unit. this method may be beneficial for example if only a steam laundry treatment is required by a laundry treatment program and therefore the drying process is not activated. in this case, the method provides for an appropriate warm-up of the steam conduit and/or the nozzle unit before guiding steam through the steam conduit and/or the nozzle unit, which can drastically reduce the amount of water droplets reaching the interior of the laundry treatment compartment. it shall be understood that the method introduced here and all of its embodiments can be used independently of or in any combination with the embodiments of the method described below. the method further comprises activating a fan for guiding heated process air into said process air channel and/or the process air fan is (already) activated during the warm-up phase in which the process air is heated. in embodiments (representing normal cases), when steam is to be supplied into the laundry storing compartment which is a rotatable drum, the drum is rotated and the motor rotating the drum is also rotating the fan for driving the process air through the laundry storing compartment. in this case, for example, the process air fan is already activated before the heating of the process air by the means for heating is activated, and the fan does not have to be additionally activated. thus in the invention the fan is already activated at the time when the means for heating the process air is activated. preferably the fan for driving the process air is activated at least over the period of heating the process air and/or over the period of steam generation.
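the activation order described above (fan running, then process air heating, and only thereafter the steam generation unit) can be sketched as follows. this is a minimal illustration, assuming hypothetical actuator objects with `on()`/`off()` methods and a fixed warm-up time; none of these names or values are from the original disclosure.

```python
import time

class Actuator:
    """Hypothetical stand-in for a fan, heater, or steamer switch."""
    def __init__(self, name):
        self.name = name
        self.on_state = False
    def on(self):
        self.on_state = True
    def off(self):
        self.on_state = False

def steam_cycle_startup(fan, air_heater, steamer, warmup_s=30.0):
    """Activate process-air heating before the steam generation unit."""
    if not fan.on_state:      # fan may already run together with the drum motor
        fan.on()
    air_heater.on()           # warm up steam conduit / nozzle unit first
    time.sleep(warmup_s)      # fixed warm-up period (could also be temperature-based)
    steamer.on()              # only now start generating steam
```

a temperature-based end of the warm-up period instead of the fixed `warmup_s` would be an equally valid variant of this sequence.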
a pre-heating by first heating the process air before activating the steam generation unit is also applicable to the method described in the following and the detailed description. all individual elements or features or any arbitrary combination relating to the below described method are applicable to the above described method. a further method for operating a steam generation unit is related to a laundry dryer comprising a laundry storing compartment for storing the laundry to be treated and a steam generation unit for generating steam for laundry treatment. preferably, the steam generation unit is an inline steam generator comprising a heater. the laundry dryer further comprises means for controlling the flow rate of water provided to the steam generation unit. the supply rate of water to the steam generation unit is controlled by said means by controlling the activation of a water supply pump and/or by controlling the opening and/or closing of a valve. said water supply pump and/or said valve can be connected to a water reservoir and/or to a water mains line. the method for operating the steam generation unit comprises starting the control of the heater for heating the steam generation unit followed by starting the control of the water supply pump and/or of the valve. after starting the control of the water supply pump and/or of the valve, the water supply pump and/or the valve is controlled according to a predetermined time sequence. the predetermined time sequence is set in advance and cannot be changed by the means for controlling the flow rate. thus the predetermined time sequence is fixed and invariant during the operation of the steam generation unit. in particular the predetermined time sequence is not dependent on any parameter (like temperature) during the control. by applying the predetermined time sequence the pump activation and/or valve closing/opening is controlled with the fixed sequence over time.
the time sequence once applied or selected is not changed or adapted. controlling the water supply pump and/or the valve according to the predetermined time sequence represents a feed-forward control and is not a feed-back control reactive to any current operation parameter of the steam generation unit. preferably the control of the water supply to the steam generating unit, in particular the below mentioned predefined sequence(s) of water supply to the steam generating unit, is adapted such that forming of condensed water and ejecting condensed water droplets into the laundry treatment chamber is minimized. for example at the time of storing fixed parameter settings in a storing device of the apparatus (by the manufacturer of the apparatus) the parameter setting which is used for generating the predetermined time sequence is set in accordance with fixed and known hardware parameters of the steam generating unit (and preferably the steam supply arrangement (steam conduit and/or nozzle unit)). for example at the time of manufacturer programming of or data storing to the memory device (rom or prom) one or more of the following are known and thus fixed hardware parameters for the steam generating unit and connected elements: heating power, maximum flow rate, heat masses of the conduit and/or nozzle, heat mass of the steam generating unit. in an embodiment a storing device is associated with the means for controlling the flow rate, wherein one or more fixed predetermined time sequences are stored in the storing device. this or one of these fixed predetermined time sequences is used by the means for controlling during the control of the water supply. once the predetermined time sequence is retrieved from the storing device there is no change or modification and the time sequence is executed without change and without change of the predetermined time sequence to another one.
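the feed-forward character of this control can be illustrated with a short sketch: the pump simply follows a fixed, pre-stored on/off schedule, and no sensor value enters the decision. the particular durations and the repeating schedule are illustrative assumptions, not values from the original text.

```python
# (pump_on, duration_s) pairs fixed at manufacturing time; purely illustrative.
PREDETERMINED_SEQUENCE = [
    (True, 5.0), (False, 15.0),
    (True, 5.0), (False, 15.0),
]

def pump_state_at(t, sequence=PREDETERMINED_SEQUENCE):
    """Return the commanded pump state at time t (seconds from sequence start).

    The schedule repeats periodically; no temperature or other feedback
    enters here, which is what makes the control purely feed-forward.
    """
    period = sum(duration for _, duration in sequence)
    t = t % period
    for state, duration in sequence:
        if t < duration:
            return state
        t -= duration
    return sequence[-1][0]
```

a feed-back variant would instead consult e.g. a steamer temperature sensor inside the loop; the point of the sketch is precisely that no such input exists.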
even the one or more predetermined time sequences stored in the storing device are preferably not modified. preferably the means for controlling the flow rate of water supplied to the steam generation unit is a control unit for controlling the overall operation of the apparatus and/or the storing device is the program memory of the apparatus. in particular the predetermined time sequence is independent of the operation status of the heater and/or the current temperature of the steam generation unit. when starting the control of the water supply pump and/or of the valve, the control of the water supply pump and/or the valve is independent of any operation status of the heater and/or any operating temperature of the steam generation unit. in a preferred embodiment, the inline steam generating unit, which may be a flow-through or flow-type steam generator, has a low water storing capacity, stores a limited amount of water temporarily, and/or transforms water to steam essentially at the rate of water supply. thus water input to the inline steam generating unit is essentially vaporized as steam when leaving the output. as an important advantage compared to other steam generators (in particular as compared to boiler-type steam generators), the inline steam generator has a very short reaction time due to its lower volume of stored water. as a result, its steam generation rate can be adjusted very accurately and quickly by controlling its water input rate and/or heating power input. in a preferred embodiment, the inline steam generating unit is designed such that it reaches its operation temperature (e.g. the predetermined upper temperature threshold) within less than 20, 15, preferably 10, 8 or 5 seconds. preferably, the steam generating unit is arranged at a bottom or lower section of the apparatus. more preferably, the steam generating unit is arranged at or on top of a battery top cover or basement shell.
preferably the predetermined time sequence is determined according to the laundry treatment program. the time sequence may be determined according to an operation status and in particular according to the temperature of the steam generation unit measured before or at the beginning of the time sequence. more preferably the water supply pump and/or the valve is/are controlled by a sequence of two or more time sequences, each of which is predetermined according to the laundry treatment program and/or an operation status such as a temperature of the steam generation unit, wherein the operation status is measured before or at the beginning of the respective time sequence. in an embodiment, the method further comprises heating the steam generation unit to a predetermined upper temperature threshold after starting the control of the heater. preferably, the control of the water supply pump and/or the valve is started after the predetermined temperature threshold has been reached or exceeded. in another embodiment, the control of the water supply pump and/or the valve is started when a predetermined time has elapsed after starting the control of the heater. preferably, the time to elapse between starting the control of the heater and starting the control of the water supply pump and/or the valve is determined according to an operation status such as a temperature of the steam generation unit before or at the time of starting the control of the heater. preferably, the control of the heater comprises energizing the heater when a measured temperature of the steam generation unit drops or is below a first predetermined temperature limit. alternatively or preferably additionally the heater control further comprises de-energizing the heater when a measured temperature of the steam generation unit rises or is above a second predetermined temperature limit. preferably the second predetermined temperature limit is above the first predetermined temperature limit.
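the heater control described here (energize below a first temperature limit, de-energize above a second, higher limit) is a classical two-point (hysteresis) control. a minimal sketch, assuming illustrative limit values that are not taken from the original text:

```python
T_LOW = 95.0    # first predetermined temperature limit (°C), illustrative
T_HIGH = 105.0  # second predetermined temperature limit (°C), illustrative

def heater_command(measured_temp, currently_on):
    """Return the next heater state given the measured steamer temperature."""
    if measured_temp < T_LOW:
        return True            # energize when temperature drops below the lower limit
    if measured_temp > T_HIGH:
        return False           # de-energize when temperature rises above the upper limit
    return currently_on        # inside the band: keep the previous state (hysteresis)
```

keeping the previous state inside the band is what prevents rapid on/off switching around a single threshold.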
the first predetermined temperature limit and/or the second temperature limit may be above the upper temperature threshold explained above, which may be used for triggering the control of the water supply pump and/or the valve. in an embodiment, the method for operating the steam generation unit further comprises introducing water into the steam generation unit and repeatedly increasing and decreasing the flow rate of water provided to the steam generation unit. preferably said repeated increasing and decreasing of said flow rate is achieved by controlling the activation and/or de-activation of the water supply pump and/or by controlling the opening and/or closing of the valve according to a predetermined time sequence. this predetermined time sequence may be, or may be part of, the predetermined time sequence determining the overall control of the water supply pump and/or the valve as described above. in another embodiment it may be an additional predetermined time sequence applied for modulation of the overall control of the water supply pump and/or the valve. in an embodiment, said repeated increasing and decreasing may be implemented as a periodic modulation of the water flow rate during a limited period of time. preferably, said repeated increasing and decreasing of the flow rate of water provided to the steam generation unit may be controlled so that the water flow rate follows a sequence of predetermined target flow rates. it shall be understood that the real flow rate of water provided to the steam generation unit at any time may deviate from the related target flow rate e.g. due to inaccuracies of actuators (pumps, valves, etc.), sensors (temperature, flow, water level, etc.), and means of processing (amplifiers, calculations, signal lines, etc.). in embodiments, the predetermined time sequence and/or sequence of target flow rates may be changed during a laundry treatment program of the laundry dryer.
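following a sequence of predetermined target flow rates, either stepwise ("essentially discontinuous") or gradually ("continuous"), can be sketched as below. the profile points and flow units are illustrative assumptions only.

```python
def target_flow_rate(t, points, gradual=True):
    """Return the target flow rate at time t.

    points: list of (time_s, flow_ml_per_min) pairs, sorted by time.
    gradual=True interpolates linearly (continuous sequence of flow rates);
    gradual=False holds each value until the next point (stepwise sequence).
    """
    if t <= points[0][0]:
        return points[0][1]
    for (t0, f0), (t1, f1) in zip(points, points[1:]):
        if t0 <= t <= t1:
            if not gradual:
                return f0                      # stepwise: hold until next point
            frac = (t - t0) / (t1 - t0)
            return f0 + frac * (f1 - f0)       # continuous: interpolate
    return points[-1][1]

# illustrative profile: ramp up over 30 s, then hold
profile = [(0.0, 10.0), (30.0, 25.0), (60.0, 25.0)]
```

the same function with `gradual` toggled per program phase would also cover the combination of both variants within one laundry treatment program.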
preferably, the predetermined time sequence and/or sequence of target flow rates may be chosen based on purpose and/or state of the laundry treatment program. in an embodiment, the flow rate of water provided to the steam generation unit may be decreased and increased gradually resulting in a continuous sequence of flow rates. in another embodiment, the flow rate of water may be changed stepwise resulting in an essentially discontinuous sequence of flow rates. of course, it is possible to combine various embodiments of the method (e.g. with continuous vs. essentially discontinuous sequences of flow rates) in a laundry dryer so that different laundry treatment programs may use different embodiments or the appropriate embodiment may be chosen based on purpose and/or state of the laundry treatment program and/or based on the state of the steam generation unit. for example, a laundry treatment program may comprise both a continuous and an essentially discontinuous sequence of flow rates. furthermore, for example, in a laundry treatment program the water supply pump and/or the valve may be controlled according to a first predetermined time sequence that is independent of the current operation status of the heater. in an embodiment, the laundry dryer further comprises a nozzle unit, a steam conduit, and optionally a drain outlet. the nozzle unit comprises one or a plurality of nozzle outlets for injecting steam generated in the steam generation unit into the laundry storing compartment. the steam conduit is arranged for providing steam from the steam generation unit to the nozzle unit. the optional drain outlet is arranged for draining water from within the nozzle unit to the outside. in an embodiment, the method further provides for a warm-up phase of the steam generation unit and/or a steam conduit and/or a nozzle unit.
preferably, during the warm-up phase the heating power or the average heating power of the steam generation unit is higher than during normal operation. preferably, during the warm-up phase the water supply rate or the average water supply rate for supplying water to the steam generation unit for steam generation is lower than during normal operation. more preferably, during the warm-up phase the heating power (or the average heating power) of the steam generation unit is higher than during normal operation and the water supply rate (or the average water supply rate) for supplying water to the steam generation unit for steam generation is lower than during normal operation. preferably, at the end of or during the warm-up phase, the heating power (or the average heating power) is decreased towards the heating power applied during normal operation and/or the water supply rate (or the average water supply rate) for supplying water to the steam generation unit is increased towards the water supply rate applied during normal operation. the described warm-up phase is arranged for achieving a soft-start of the steam generation unit which is beneficial as it can drastically reduce the condensation of water droplets in the steam generation unit and/or steam conduit and/or nozzle unit while one or several of them have not yet reached their final operating temperature and thus are still relatively "cold". as a consequence, the warm-up phase helps to reduce the amount of water droplets reaching the laundry inside the laundry storing compartment, while allowing use of the system's full steam generation capacity after the warm-up phase. preferably, the control of the water supply and/or the control of the heater during the warm-up phase are arranged to minimize the amount of water droplets leaving the nozzle outlet or the plurality of nozzle outlets.
preferably, the control of the water supply and/or the control of the heater after the warm-up phase are arranged to minimize the amount of water droplets leaving the nozzle outlet or the plurality of nozzle outlets. more preferably, the predetermined time sequence for controlling the water supply (i.e. the water supply pump and/or the valve) and/or the repeated increase and decrease of the flow rate of water and/or the control of the heater are arranged to minimize the amount of water droplets leaving the nozzle outlet or the plurality of nozzle outlets. in an embodiment, the steam generation unit is operated intermittently and the duty rate of operating the steam generation unit is decreasing over time during or at the end of the warm-up phase. in an embodiment, the water supply to the steam generation unit is operated intermittently and the duty cycle of supply is increasing over time during or at the end of the warm-up phase. preferably, both the steam generation unit and the water supply to the steam generation unit are operated intermittently and the duty rate of the steam generation unit is decreasing and/or the duty rate of the water supply is increasing over time during or at the end of the warm-up phase. it shall be understood that said increasing and/or decreasing may be a gradual or a step-like change over time. in an embodiment of the method, the predetermined time sequence comprises a repeated decrease and increase of the liquid supply rate to the steam generation unit and/or comprises repeated stops and starts of the liquid supply to the steam generation unit according to predetermined time intervals t_on and t_off. preferably, the time interval t_on is in the range of 3 to 30, 4 to 8, 4 to 6 or 5 to 20 seconds or is preferably around 5 seconds. additionally or alternatively, the time interval t_off is preferably in the range of 6 to 60, 10 to 40, 12 to 20 or 13 to 18 seconds or is preferably around 15 seconds.
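the intermittent water supply with intervals t_on and t_off, whose duty cycle increases over the warm-up phase, can be sketched as follows. here the duty cycle is raised by shortening t_off cycle by cycle; the concrete values (t_on ≈ 5 s, t_off ramping from ≈ 15 s down) follow the preferred ranges above, while the linear ramp itself is an assumption for illustration.

```python
T_ON = 5.0          # water supply on-interval in seconds (≈ 5 s per the text)
T_OFF_START = 15.0  # initial off-interval at the beginning of warm-up (≈ 15 s)
T_OFF_END = 5.0     # off-interval after warm-up; illustrative assumption

def water_duty_cycle(cycle_index, warmup_cycles=5):
    """Return (t_on, t_off) for the given supply cycle.

    During warm-up the off-interval shrinks linearly, so the water-supply
    duty rate t_on / (t_on + t_off) increases over time, as described.
    """
    if cycle_index >= warmup_cycles:
        return T_ON, T_OFF_END                    # normal operation after warm-up
    frac = cycle_index / warmup_cycles
    t_off = T_OFF_START + frac * (T_OFF_END - T_OFF_START)
    return T_ON, t_off                            # duty rises as t_off shrinks
```

a step-like change instead of the linear ramp would be an equally valid embodiment per the text.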
in an embodiment, the laundry dryer further comprises at least one temperature sensor arranged for measuring a temperature of the steam generation unit or a temperature of the steam generated by the steam generation unit. the measured temperature may be used for example for choosing a time sequence for control of the water supply pump and/or the valve, for controlling the heating power of the steam generation unit, for controlling the warm-up phase etc. preferably, the duration of the activation of the means for heating the process air and/or the heating power used for heating the process air and/or the duration of the activation of the fan and/or the power of the fan are controlled depending on the temperature inside the process air channel. in an embodiment the means for heating the process air is a heat pump system. preferably, the temperature inside the process air channel is determined by means of a temperature sensor. alternatively or in addition, if the means for heating the process air is a heat pump system, the temperature inside the process air channel may be determined or calculated indirectly from a refrigerant temperature in the heat pump system, because the refrigerant temperature can be used as a measure for the process air temperature. in an embodiment, the delay between activating the means for heating the process air and activating the steam generation unit is a predetermined time. preferably the predetermined time is chosen according to a temperature inside the process air channel at or before the beginning of the activation of the means for heating the process air. preferably the temperature inside the process air channel is detected by a temperature sensor. the temperature sensor may detect the process air temperature directly or indirectly. 
an example of an indirect temperature detection in an apparatus having a heat pump system is the detection by the refrigerant temperature sensor, for example a sensor detecting the refrigerant temperature at the compressor or condenser. in case of using an electrically operated heater, the process air temperature is preferably detected using a temperature sensor arranged in the process air path between the electrical heater and the inlet of the laundry storing compartment. preferably the method further comprises deactivating the means for heating the process air or reducing the heating power of the means for process air heating, when the process air reaches a predetermined temperature. for example, the process air heating phase may be shortened or skipped if the process air channel is already heated up. in an embodiment, the laundry dryer further comprises a front wall with a front loading opening for loading laundry into the laundry storing compartment and/or a rear frame including said compartment back wall. preferably, the compartment back wall is opposite to the loading opening. preferably, the nozzle outlet or nozzle outlets of the nozzle unit is/are arranged between said compartment back wall and said rear wall inside said rear channel so that steam ejected from said nozzle outlet passes through at least one back wall opening of the compartment back wall before entering the laundry storing compartment. preferably, the laundry dryer further comprises a heat-pump system having a refrigerant temperature sensor and the method further comprises detecting a temperature signal from the temperature sensor before activating the steam generation unit. if the temperature is below a predetermined temperature threshold, the heat-pump system for heating the process air is activated and, preferably, the steam generation unit for generating steam is activated thereafter as described above.
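the temperature-based start decision described above (read a refrigerant or process air temperature before activating the steam generation unit; if it is below a threshold, activate the process air heating first) can be sketched as below. the threshold and the delays are illustrative assumptions; the idea of choosing a longer pre-heating delay for a colder start follows the earlier passage on the predetermined time.

```python
PREHEAT_THRESHOLD_C = 35.0  # predetermined temperature threshold; illustrative

def startup_plan(sensed_temp_c):
    """Return (activate_heating, preheat_delay_s) before steam generation.

    A channel that is already warm skips the process-air heating phase;
    a colder start gets a longer heating phase before the steamer starts.
    """
    if sensed_temp_c >= PREHEAT_THRESHOLD_C:
        return False, 0.0     # already warm: start steam generation directly
    delay = 30.0 if sensed_temp_c >= 20.0 else 60.0  # colder -> longer pre-heat
    return True, delay
```

the same function could equally be fed by an indirect measurement, e.g. the refrigerant temperature at the compressor or condenser.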
more preferably the refrigerant temperature sensor may be used instead or additionally for determining the temperature of the process air. the temperature sensor for detecting the refrigerant temperature may be arranged at or may be in thermal contact with the outlet region of the compressor, the inlet or outlet region of the condenser or the inlet region of the expansion device. furthermore, the temperature signal obtained from the refrigerant temperature sensor may be used for choosing, determining, or calculating the predetermined temperature of the process air to be reached. in an embodiment, the method further comprises keeping the heating power of the heater of the steam generation unit off (steam generation unit heater deactivated), while the means for process air heating is activated. after heating the process air and preferably after switching off or deactivating the heater means for heating the process air, the heater of the steam generation unit is switched on or is activated. this is especially useful if the total power that can be provided to certain components of the laundry dryer is limited. in particular it may be beneficial to deactivate the heater or to postpone the activation of the steam generating unit heater during time periods where the means for process air heating is operated at a high power. preferably, the heating power of the heater of the steam generation unit is controlled depending on the power at which the means for process air heating is operated. preferably there may be periods of time while operating the means for process air heating during which the heater of the steam generation unit is deactivated and/or there may be other periods of time during which the heater of the steam generation unit is operated at reduced heating power. in case of heating the process air using an electrical heater (e.g. 
electrical resistance heater), the electrical heater is preferably completely switched off before switching on the steam generating unit heater. in case of heating the process air with a heat pump system (condenser thereof), the compressor may be switched off or preferably may be operated at lower or lowest power consumption mode before activating the steam generating unit heater, if for example a drying process is executed after the steam supply cycle in which the steam is to be supplied from the steam generating unit. preferably, the method further comprises deactivating the means for process air heating after a predetermined period of time or when the steam conduit has reached a temperature within a predetermined temperature range of about 30°c to 40°c. as mentioned above, the means for heating the process air may be a heat pump system, in particular the heat exchanger (condenser) transferring heat from the refrigerant to the process air. in another embodiment the means for heating the process air may be an electrical resistor heater. preferably, a heat pump system and an electrical resistor heater may be combined to form the means for heating the process air. in particular, the electrical resistor heater may be used when the heat pump compressor is not activated. preferably, the electrical resistor heater may be used to support process air heating when the heat power provided by the heat pump compressor is not sufficient for heating the required amount of process air, possibly depending on the state of a laundry drying program. preferably, the nozzle unit further comprises a drain outlet adapted for draining the water from the nozzle unit to the rear channel and/or a separation chamber for separating steam and water. more preferably, at least a portion of the separation chamber is arranged within the rear channel. 
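the power sharing described above, where the steamer heater is kept off or derated while the means for process air heating draws high power, can be sketched as a simple power-budget interlock. the total budget and wattage figures are purely illustrative assumptions.

```python
TOTAL_BUDGET_W = 2200.0  # total power available to both heaters; illustrative

def steamer_heater_power(process_air_power_w, steamer_max_w=1800.0):
    """Return the power allowed for the steam generation unit heater.

    The steamer heater only gets the headroom left by process-air heating,
    so it is deactivated while the process-air heater runs at high power
    and runs at reduced power while the process-air heater runs at partial
    power.
    """
    headroom = TOTAL_BUDGET_W - process_air_power_w
    if headroom <= 0:
        return 0.0                        # deactivate during high-power air heating
    return min(steamer_max_w, headroom)   # otherwise run at (possibly reduced) power
```

with an electrical process-air heater this reduces to switching it fully off before the steamer heater is switched on; with a heat pump it corresponds to operating the compressor in a low-power mode.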
reference is made in detail to preferred embodiments of the invention, examples of which are illustrated in the accompanying figures, which show:
fig. 1 a schematic view of a laundry dryer,
fig. 2 a perspective view of the condenser dryer of fig. 1 - partially disassembled,
fig. 3 the front view of the dryer of fig. 2,
fig. 4 another perspective view of the dryer of fig. 1 - partially disassembled,
fig. 5 a front view of the rear frame and parts of a base section of the dryer of fig. 1,
fig. 6 an enlarged view of the detail vi in fig. 5,
fig. 7 the sectional view of the compartment back wall, nozzle unit and rear wall along line vii-vii in fig. 6,
fig. 8 a perspective view of the nozzle unit,
fig. 9 a rear view of the nozzle unit of fig. 8,
fig. 10 the sectional view of the nozzle unit along line x-x in fig. 9,
fig. 11 an enlarged view of the detail xi in fig. 10,
fig. 12 a rear view of the rear frame with mounted nozzle unit and steam conduit,
fig. 13 the sectional view of the rear frame along line xiii-xiii in fig. 12,
fig. 14 an enlarged view of the detail xiv in fig. 13,
fig. 15 a perspective view of the backside of the rear frame with mounted nozzle unit and steam conduit connected to the nozzle unit and the steam generation unit,
fig. 16 a perspective view of the front side of the rear frame according to fig. 15,
fig. 17 a front view of the rear frame of fig. 12,
fig. 18 a perspective view of a front frame, a rear frame and in between a piping with a branching element for branching up a pump unit conduit of a condensation-type laundry dryer,
fig. 19 another perspective view of the dryer of fig. 18,
fig. 20 a perspective view of the course of the piping of fig. 18 between a pump unit, a steamer tank and a drain tank,
fig. 21 an enlarged view of the detail xxi in fig. 18,
fig. 22 a side view of the branching element shown in fig. 21,
fig. 23 a sectional side view of the branching element of fig. 22,
fig. 24 the branching element of fig. 23 in a closed position,
fig. 25 the branching element of fig. 23 in an open position,
fig. 26 a perspective view of another model of dryer - in the assembled state,
fig. 27 the perspective view of the dryer of fig. 26 - with disassembled left cover,
fig. 28 a perspective view of a dryer's base section carrying a steam generation unit and showing the steamer tank, the drain tank and the piping,
fig. 29 a front view to the left front of the dryer parts of fig. 28,
fig. 30 a perspective view according to fig. 28, without the base section,
fig. 31 a front view to the left front of the dryer parts of fig. 30,
fig. 32 a perspective view of the piping between the drain pump, the steamer tank and the drain tank,
fig. 33 the piping according to fig. 32 in a disassembled state,
fig. 34 a front view of an enlarged detail of a piping part according to fig. 33 comprising the branching element,
fig. 35 the sectional view of the branching element along line xxxv-xxxv in fig. 34,
fig. 36 a side view of the piping part according to fig. 33 comprising the branching element,
fig. 37 the sectional view of the branching element along line xxxvii-xxxvii in fig. 36,
fig. 38 a rear view of the rear frame with mounted nozzle unit according to another embodiment and steam conduit,
fig. 39 the sectional view of the rear frame along line a-a in fig. 38,
fig. 40 an enlarged view of the detail b in fig. 39,
fig. 41 a front view of the rear frame of fig. 38 (compare fig. 17),
fig. 42 a perspective view of the nozzle unit,
fig. 43 a front view of the nozzle unit of fig. 42,
fig. 44 a left view of the nozzle unit of fig. 42,
fig. 45 a rear view of the nozzle unit of fig. 42,
fig. 46 the sectional view of the nozzle unit along line a-a in fig. 43,
fig. 47 an enlarged view of the detail b in fig. 46,
fig. 48 a schematic view of an embodiment of the invention of the laundry dryer 2,
fig. 49 the temporal variation of heating power, steamer temperature, and water flow rate in an embodiment for operating a steam generation unit,
fig. 50 the temporal variation of heating power, steamer temperature, and water flow rate in another embodiment for operating a steam generation unit,
fig. 51 a flow diagram of an embodiment of a method for operating a steam generation unit,
fig. 52 the temporal variation of heating power and steamer temperature in another embodiment for operating a steam generation unit, and
fig. 53 the temporal variation of heating power, steamer temperature, water flow rate, and temperature of a steam conduit in an embodiment for operating a steam generation unit.
the figures are not drawn to scale and are provided for illustrative purposes. the embodiment of the invention can best be seen in figure 48. fig. 1 shows a schematically depicted laundry dryer 2. the dryer 2 comprises a heat pump system 4, including a closed refrigerant loop 6 which comprises in the following order of refrigerant flow b: a first heat exchanger 10 acting as evaporator for evaporating the refrigerant and cooling process air, a compressor 14, a second heat exchanger 12 acting as condenser for cooling the refrigerant and heating the process air, and an expansion device 16 from where the refrigerant is returned to the first heat exchanger 10. together with the refrigerant pipes connecting the components of the heat pump system 4 in series, the heat pump system 4 forms the refrigerant loop 6 through which the refrigerant is circulated by the compressor 14 as indicated by arrow b. the process air flow a within the dryer 2 is guided through a laundry storing compartment 17 of the dryer 2, i.e. through a compartment for receiving articles to be treated, e.g. a drum 18. the articles to be treated are textiles, laundry 19, clothes, shoes or the like. the process air flow is indicated by arrows a in fig. 1 and is driven by a process air blower 8.
the process air channel 20 guides the process air flow a outside the drum 18 and includes different sections, including the section forming the battery channel 20a in which the first and second heat exchangers 10, 12 are arranged. the process air exiting the second heat exchanger 12 flows into a rear channel 20b in which the process air blower 8 is arranged. the air conveyed by blower 8 is guided upward in a rising channel 20c to the backside of the drum 18. the air exiting the drum 18 through the drum outlet (which is the loading opening 53 of the drum 18) is filtered by a fluff filter 22 arranged close to the drum outlet in or at the channel 20. the optional fluff filter 22 is arranged in a front channel 20d forming another section of channel 20 which is arranged behind and adjacent the front cover of the dryer 2. the condensate formed at the first heat exchanger 10 is collected and guided to the condensate collector 30. the condensate collector 30 is connected via a drain conduit 46, a drain pump 36 and a drawer pipe 50 to an extractable condensate drawer 40. i.e. the collected condensate can be pumped from the collector 30 to the drawer 40 which is arranged at an upper portion of the dryer 2 from where it can be comfortably withdrawn and emptied by a user. the dryer 2 comprises a control unit 51 for controlling and monitoring the overall operation of the dryer 2. for example and as shown in fig. 1 , the control unit 51 receives a temperature signal from a temperature sensor 41 which is arranged at the outlet of the second heat exchanger 12 (condenser) and which is indicative of the refrigerant temperature at this position. according to fig. 1 , the control unit 51 also controls the drain pump 36. additionally, the control unit 51 is able to control other parts of the dryer 2. fig. 2 shows a front perspective view of a partially disassembled condenser dryer that uses a heat pump system 4. 
in the shown state the loading door of the dryer 2, the right cover, the lower shell of a bottom unit and a bottom panel are removed. the outer appearance of the depicted dryer 2 is defined by a top cover 56, a left cover or wall 58, a front cover 60 having a loading opening 54 and a front top panel 62. the front top panel 62 frames a drawer cover 64 of the condensate drawer 40, wherein here the drawer 40 has a condensate container that is completely pushed into a drawer compartment located at the upper part of the dryer 2. the right portion of the front top panel 62 forms an input section 66, wherein here the details of the input section 66 are not shown (like indicators, a display, switches and so on). the loading opening 54 is surrounded by a loading frame 68 which is formed in the front cover 60. fig. 26 shows a loading door 55 for closing the loading opening 54 in a closed state. in loading direction behind the bottom section of the loading frame 68, a filter compartment/process air channel 20 is arranged which is adapted to receive the fluff filter 22 and which is formed in a front frame 70. at the back side of the loading opening 54 in the front frame 70 the drum 18 is arranged. in the embodiment shown the drum 18 is a rotating drum cylinder that is extending between the back side of the front frame 70 and the front side of a rear frame 72 ( fig. 4 , fig. 5 ). the open rear end of the cylindrical rotatable drum 18 is closed by a compartment back wall 74 ( fig. 3 ) which is mounted at the rear frame 72 ( fig. 5 ). back wall 74 is preferably provided as an element separate from the rear frame 72, formed for example from a metal plate. the compartment back wall 74 is stationary, whereas the rotatable drum 18 is rotatably coupled to the compartment back wall 74.
in the shown embodiment the rotation axis of the drum 18 is horizontal; however, the rotation axis may be inclined with respect to the horizontal axis or may even be vertical with some modifications to the shown embodiment, without requiring modification of other groups of the dryer 2. below the condensate drawer 40 and adjacent to the left upper corner of the front cover 60, or left above the middle of the loading opening 54, a window panel 76 is inserted into a front cover window opening 78 ( fig. 3 , fig. 4 ). the window opening 78 and the window panel 76 allow visual inspection into the inside of the dryer outer body to check the liquid level of a liquid reservoir, particularly a steamer (liquid storing) tank 140 (see more detail below). as indicated in fig. 3 showing the dryer of fig. 2 in front view, the condensate drawer 40 has a draw handle 82 at the drawer cover 64 to be gripped by the user for pushing the condensate drawer 40 in or pulling it out of the condensate drawer compartment 37 that is extending into the interior of the dryer 2 ( fig. 18 , fig. 19 ). fig. 3 gives a view onto the compartment back wall 74 which has a plurality of back wall openings 84 through which process air a enters the laundry storing compartment 17 from the back side or rear side of the drum 18. in the center of the compartment back wall 74, surrounded by the back wall openings 84, a cone 86 is arranged which is extending into the laundry storing compartment 17 (preferably with a tapered end) and has a laundry detangling function in this embodiment. the dryer comprises the following parts described in more detail below: a nozzle unit 88 ( fig. 7 - fig. 10 ) and a steam generation unit 90 (in short 'steamer'; see figs. 15, 16 ). the nozzle unit 88 has a nozzle outlet 92 for injecting steam generated in the steam generation unit 90 into the laundry storing compartment 17. as can be seen from fig.
7 , the nozzle unit 88 is mounted at a rear wall 94 which is forming at least a portion of a back cover 95 of the dryer 2. the compartment back wall 74 and the rear wall 94 define a portion of the rear channel 20b and the rising channel 20c. the compartment back wall 74 comprises a plurality of the back wall openings 84 designed for passing process air from the rear channel 20b, 20c into the laundry storing compartment 17. the nozzle unit 88 comprises a base portion 96 mounted at the back side of the rear wall 94. for mounting, the base portion is perforated by mounting holes 98 interacting with mounting screws 100 or the like ( fig. 7 , fig. 8 ). according to fig. 7 , a steam guiding portion 102 is fluidly connecting the base portion 96 to the nozzle outlet 92. the steam guiding portion 102 is extending from the base portion 96 into the rear channel 20b, 20c such that it spans substantially just the distance between the rear wall 94 and the compartment back wall 74 (i.e. the depth of the rear channel 20b, 20c), so that the nozzle outlet 92 is in contact with a respective back wall opening 84 at the back side of the compartment back wall 74. the nozzle unit 88 comprises a connection portion 104 which is adapted to connect a steam conduit 106 which fluidly connects the steam generation unit 90 to the nozzle unit 88 ( fig. 10 , fig. 13 , fig. 15 ). the nozzle outlet 92 is arranged at the back side of the compartment back wall 74 in such a manner that steam ejected from the nozzle outlet 92 passes through a respective back wall opening 84 before entering the laundry storing compartment 17 ( fig. 7 ). in the embodiments, several elements of the nozzle unit 88 are formed as a single-piece or monolithic piece or single-molded part.
these elements are the base portion 96, a separation chamber 108 contained in the base portion 96 for separating the supplied steam and water, the nozzle outlet 92, the steam guiding portion 102, the connection portion 104 and a substantially planar mounting socket 110 for mounting the nozzle unit 88. the water that is separated in the separation chamber may be formed by condensation of the supplied steam - for example in the starting phase of steam supply when the steam conduit and nozzle unit are at low temperature as compared to the steam temperature. thus, the whole nozzle unit 88 is mountable simply by fixing the mounting socket 110 via the mounting holes 98 and some screws 100. the separation chamber 108 defined by the inner geometry of the base portion 96 is closed by a chamber cover 112. both parts 96 and 112 are joined together by a welding joint 114 (e.g. ultrasonic welding) such that these parts are integrally fixed and connected to each other in an inseparable monolithic manner. consequently, the separation chamber 108 is water and steam proof. the mounting socket 110 is part of the base portion 96 and is mounted at the back side of the rear wall 94. in this regard, the rear wall 94 is perforated by a nozzle port 116, thus allowing the steam guiding portion 102 to extend from the base portion 96 through this nozzle port 116 into the rear channel 20b, 20c. to avoid any escape of process air out of the rear channel 20b, 20c in the region of the nozzle port 116, there is provided a flat sealing element 101 clamped between the back side of the rear wall 94 and the mounting socket 110 ( fig. 7 , fig. 10 ). as can be seen from fig. 15 and fig. 16 , the steam generation unit 90 is arranged in a base section 118 of the dryer 2. the steam conduit 106 is passing through a conduit port 120 contained in a bottom section of the rear frame 72 which is forming a portion of the back cover of the dryer 2 in this embodiment.
the extension of the steam conduit 106 is such that a portion 122 of the steam conduit 106 extends at the back side of the rear frame 72 and the rear wall 94 from the conduit port 120 to the connection section 104 of the nozzle unit 88 ( fig. 15 ). the nozzle unit 88 and the steam conduit 106 are designed such that steam is supplied from the steam generation unit 90 to the nozzle unit 88 and condensed liquid (water) is drained from the nozzle unit 88 to the steam generation unit 90. for this purpose, the separation chamber 108 has a steam inlet 124 in fluid connection towards the steam generation unit 90 and a chamber outlet 126 in fluid connection towards the nozzle outlet 92 ( fig. 10 , fig. 14 ). the chamber outlet 126 is in fluid communication with the steam guiding portion 102 for guiding the steam from the separation chamber 108 to the nozzle outlet 92. the connection portion 104 comprises a conduit stub 128 for mounting the steam conduit 106, particularly its steam conduit portion 122, thereto ( fig. 9 ). the steam inlet 124 is arranged at a lower section of the separation chamber 108, whereas the chamber outlet 126 is arranged at an upper section of the separation chamber 108. simultaneously, the steam conduit portion 122 is descending from the connection portion 104 and the steam inlet 124 towards the steam generation unit 90 thus forming a draining conduit for draining water from the separation chamber 108 towards the steam generation unit 90. thus, separation of steam and condensed water is realized in a natural physical manner without any complex design. in this regard, the flow axis direction of the steam inlet 124 (or the allocated/associated connection portion 104) and the flow axis direction of the steam guiding portion 102 are perpendicular to each other. in other embodiments, these flow axes are inclined to each other in an angle different from 90°. 
the nozzle unit 88 comprises a single nozzle outlet 92 which is associated to one predefined back wall opening 84 ( fig. 7 , fig. 14 ). in further embodiments, the nozzle unit 88 comprises a plurality of nozzle outlets 92 and each one of these nozzle outlets 92 is assigned to a predefined one of a plurality of back wall openings 84. the nozzle outlet 92 is designed to direct a steam flow exiting this nozzle outlet 92 directly through its associated back wall opening 84 into the laundry storing compartment 17. in this regard, the nozzle outlet 92 abuts with its front surface portion 132 against an opening rim 130 of the respective associated back wall opening 84 such as to form a sealing between the nozzle outlet 92 and the compartment back wall 74. the nozzle outlet 92 is arranged such that its inner cross section area is centrally aligned to the cross section area of the associated wall opening 84. according to fig. 17 , a first horizontal plane 134 running through the center of the laundry storing compartment 17 is defined and a second horizontal plane 136 running through the highest point of the laundry storing compartment 17 is defined. the distance between these two planes 134, 136 defines a vertical range 138. along this range 138, the one nozzle outlet 92 or a plurality of nozzle outlets 92 is assigned to respective back wall openings 84. in other embodiments here not shown the assigned back wall opening(s) 84 is/are arranged in the upper third or in the upper fourth or in the upper fifth of the range 138. the condensation-type laundry dryer 2 according to fig. 18 comprises in principle the elements and parts shown in fig. 1 . in particular, a drain tank (i.e. condensate drawer 40), a steam generation unit 90, a steamer tank 140 for storing liquid to be supplied to the steam generation unit 90 for generating the steam, and a pump unit (i.e. drain pump 36) for pumping the liquid collected in the condensation collection unit (i.e. 
condensate collector 30) to the drain tank 40 and the steamer tank 140 are provided. additionally, a branching element 142 is provided. this element 142 is made for branching a pump unit conduit 144 into a steamer tank conduit 146 and into a drain tank conduit 148 ( fig. 20 ). the pump unit conduit 144 is connecting the branching element 142 to the pump unit 36. the steamer tank conduit 146 is connecting the branching element 142 to the steamer tank 140. the drain tank conduit 148 is connecting the branching element 142 to the drain tank 40. the conduits 144, 146, 148 form a piping 150 for conveying the condensate to different destinations in the dryer. the branching element 142 comprises a backflow-preventing member 152 preventing a backflow of liquid from the steamer tank 140 towards the pump unit 36. the backflow-preventing member 152 shown in fig. 23 is a one-way valve arranged in the branching element 142. furthermore, the backflow-preventing member 152 is arranged in the branch 154 of the branching element 142 where the liquid flows towards the steamer tank conduit 146. the member 152 comprises a valve seat 156 at a valve passage 158 and a valve member 160 which is adapted to cooperate with the valve seat 156. the movable valve member 160 is constituted by a ball or sphere and is urged against the valve seat 156 when the pump unit 36 is not activated and liquid subsequently tends to flow back from the line 146 leading towards the steamer tank 140, i.e. back towards the branching element 142 and towards the pump unit 36. if this is the case, the valve member 160 and the valve seat 156 cooperate to close the valve passage 158, i.e. the valve member 160 is in a closed position ( fig. 24 ). the liquid is then retained in the branch between the backflow-preventing member 152 and the upper hydraulic point of the steamer tank conduit 146. if the valve member 160 is actuated by the pressure of liquid pressurized by the pump unit 36, the valve passage 158 will be opened, i.e.
the valve member 160 is in an open position ( fig. 23, fig. 25 ). within the valve passage 158 and opposite to the valve seat 156 there is arranged a stopping element 162 for restricting the opening path of the valve member 160 when the liquid is flowing in the forward direction 164 of the one-way backflow-preventing member 152. in other words, the stopping element 162 is designed to provide a clearance passage 166 for the liquid flow which bypasses the valve member 160 in its open position ( fig. 25 ). thus, the backflow-preventing member 152 additionally provides a liquid flow restriction. the liquid flow restriction function of the branching element 142 is adapted to reduce the liquid flow into the steamer tank conduit 146 in comparison to the liquid flow into the drain tank conduit 148. due to the valve member 160 in its open position according to fig. 25 , the flow resistance between the branching element 142 and the steamer tank 140 is higher than the flow resistance of the drain tank conduit 148 between the branching element 142 and the drain tank 40. the valve member 160 and the stopping element 162 form a liquid flow restricting element of the branching element 142 by providing a reduced liquid flow cross section towards the steamer tank conduit 146 in comparison to the liquid flow cross section towards the drain tank conduit 148. the liquid flow cross section towards the steamer tank conduit 146 is defined particularly by the clearance passage 166 and an orifice 168 arranged in the axial end region of the branch 154 and having a diameter or cross section area that is less than the inner diameter 170 or cross section area of the branch providing the fluid connection to the drain tank conduit 148. in fig. 20 , the branching element 142 is arranged in a region of the base section 118 of the dryer 2 (see also fig. 18 ). in further embodiments the branching element 142 is arranged at an upper region 172 of the cabinet of the dryer 2 ( fig. 28 - fig. 31 ).
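the effect of the reduced cross section towards the steamer tank conduit 146 can be illustrated with a deliberately simplified model. the sketch below assumes that, at equal pressure drop, the flow through each branch is proportional to its cross-sectional area; the diameters and the proportionality assumption are illustrative only and are not taken from the description.

```python
import math

def area(diameter_mm: float) -> float:
    # cross-sectional area of a circular passage in mm^2
    return math.pi * (diameter_mm / 2.0) ** 2

def flow_split(d_steamer_mm: float, d_drain_mm: float) -> float:
    # fraction of the pumped condensate routed to the steamer tank branch,
    # assuming flow proportional to passage area (hypothetical model)
    a_steamer = area(d_steamer_mm)
    a_drain = area(d_drain_mm)
    return a_steamer / (a_steamer + a_drain)

# hypothetical diameters: restricted steamer branch 3 mm, drain branch 8 mm
share_to_steamer = flow_split(3.0, 8.0)  # roughly 0.12, i.e. about 12 %
```

under this simplified model the steamer branch receives only a small fraction of the condensate, matching the intent stated in the description that most of the condensate goes to the drain tank.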
in this regard, the branching element 142 is preferably arranged at a height level within the dryer which is at least 3/4 or 4/5 or 5/6 of the total height of the dryer 2. as seen from fig. 22 - fig. 25 , the branching element 142 is made as a t-junction. according to fig. 20 or fig. 28 , the highest point 174 of the steamer tank conduit 146 has a height level which is lower than the highest point 176 of the drain tank conduit 148. in particular, the height level of the steamer tank conduit 146 is at least 3/4 or 4/5 or 5/6 of the height level of the drain tank conduit 148. in other embodiments, the highest point 174 of the steamer tank conduit 146 has the same height or is even higher than the highest point 176 of the drain tank conduit 148. regarding fig. 28 - fig. 31 , it can be seen that the conduit 146 arranged between the branching element 142 and the steamer tank 140 is designed such that its connection length between the branching element 142 and the steamer tank 140 is minimized with respect to the connection line provided by the conduits 144, 148 between the pump unit 36 and the drain tank 40. hereby a second piping 184 for supplying the condensate to the steamer tank 140 and the removable tank 40 is provided. in fig. 28 and fig. 29 it can be seen that the steam generation unit 90 is arranged in the region of the base section 118 of the dryer 2. the steam generation unit 90 is supplied with liquid to generate steam in order to convey this steam to the nozzle unit 88, as described above. the liquid is supplied to the steam generation unit from the steamer tank 140 via a connection conduit 178 ( fig. 28 - fig. 31 ). fig. 34 - fig. 37 show a branching element 142 in a second piping 184 having a design different to the design of the piping 150 according to fig. 20 - fig. 25 . the branching element 142 according to fig. 34 - fig.
37 does not have a backflow-preventing function but only a liquid flow reducing function, such that the flow resistance between the branching element 142 and the steamer tank 140 is higher than the flow resistance of the drain tank conduit 148 between the branching element 142 and the drain tank 40. this liquid flow reduction towards the steamer tank 140 occurs by a conduit passage 180 in the branch 154 having locally a smaller diameter 182 than the inner diameter 170 in the branching element 142 towards the drain tank conduit 148 and towards the drain pump 36. in the above, the reason for reducing the flow rate of condensate pumped by the pump unit 36 towards the steamer tank 140 as compared to the higher flow rate pumped towards the condensate drawer 40 (drain tank) is the expectation that only a small portion of the condensate is needed for steam treatment of the laundry. thus, most of the condensate formed in a laundry drying cycle will normally not be required for steam treatment. the steamer tank 140 is provided with an overflow conduit 190 shown in fig. 30 , by which excess water that cannot be stored by the steamer tank 140 flows back to the condensate collector 30. from there it is pumped upward to the tanks 40 and 140 again. by reducing the ratio of the flow rate to the steamer tank 140, excessive activation of the pump 36 can be avoided. in both embodiments of the above piping 150 or 184, a backflow prevention member (compare 152) and/or a flow restriction element (compare 166 or 170) can be provided at the branching element 142. alternatively the backflow prevention member can be provided at any position of the steamer tank conduit 146 between the branching element and the inlet to the steamer tank 140. in the following, a modified nozzle unit 300 is described in detail. as compared to the nozzle unit 88, nozzle unit 300 has a few modifications and is a preferred embodiment of the present invention.
apart from these modifications, the nozzle unit 300 is preferably embodied like the above nozzle unit 88, as can be seen from figs. 38 to 47 . for example, the mounting and piping structure as well as the positioning of the nozzle outlet are the same as for nozzle unit 88. it shall be understood that all the advantages and details of nozzle unit 88 also apply to the modified nozzle unit 300 and will therefore not be repeated here except when specific differences or advantages are to be highlighted. as can be seen from figs. 38 and 39 , the nozzle unit 300 is mounted at a rear wall 94 which is forming at least a portion of a back cover 95 of the dryer 2. as in the above embodiment, the compartment back wall 74 and the rear wall 94 define a portion of the rear channel 20b and the rising channel 20c (cf. figs. 7 , 39 , and 40 ). the compartment back wall 74 comprises a plurality of the back wall openings 84 designed for passing process air from the rear channel 20b, 20c into the laundry storing compartment 17. the nozzle unit 300 preferably comprises a base portion 301 mounted at the back side of the rear wall 94, see fig. 40 . it is particularly beneficial to arrange the nozzle outlet 92 at the back side of the compartment back wall 74 in such a manner that steam ejected from the nozzle outlet 92 passes through a respective back wall opening 84 before entering the laundry storing compartment 17 (see also figs. 41 and 7 / 17 ). the nozzle unit 300 comprises at least one drain outlet 308 for draining water from within the nozzle unit to the outside, as can be seen in figs. 40 , 42, 43, 44 , 46, and 47 . in particular, it is beneficial for the nozzle unit and the drain outlet(s) to be arranged such that the water is drained from the nozzle unit to the rear channel 20b, 20c. a preferred embodiment of this arrangement is shown in figs. 39 and 40 .
draining condensed water out of the nozzle unit provides the advantage that less water remains within the steam path, so that less water needs to flow back to the steam generation unit and the probability of condensate droplets being ejected through the nozzle outlet onto laundry in the drum is lowered. draining the condensed water to the rear channel further provides the advantage that the water can evaporate into the process air that may be guided through the rear channel, in which case it may also reach the laundry as evaporated steam together with the process air flowing into the laundry storing compartment 17 through the back wall openings 84. fig. 40 shows that in preferred embodiments of the nozzle unit 300, the steam guiding portion 102 of nozzle unit 88 may be partially or completely replaced by a separation chamber 302. in such embodiments the steam guiding portion 102 - if present - extends from the separation chamber 302 towards the rear side of the compartment back wall 74. the separation chamber 302 serves for separating condensed water from the flow of steam so as to avoid water droplets reaching the laundry 19 inside the laundry storing compartment 17 (compare separation chamber 108 described above). condensed water may be formed by (partial) condensation of the supplied steam - for example in the starting phase of the steam supply when the steam conduit and nozzle unit are at low temperature as compared to the steam temperature. fig. 40 is a sectional view of the nozzle unit 300 mounted to the rear wall 94 of the laundry dryer and depicts a preferred embodiment of the separation chamber 302. as can be seen, the separation chamber 302 preferably has at least one steam inlet 124 in fluid connection with the steam generation unit 90, e.g., by means of a steam conduit 106, and furthermore has one or more steam outlets 126 (see also figs. 46 and 47 ) in fluid connection with one or more nozzle outlets 92.
in the embodiment shown, the drain outlet 308 is arranged at the separation chamber such that condensed water is drained out of the separation chamber. as can be seen in fig. 40 , it is particularly beneficial to design the separation chamber 302 to have a portion 304 arranged within the rear channel 20b, 20c and/or another portion 306 arranged at the back side of the rear wall 94. this embodiment has several advantages. first, it makes it possible to optimize the space requirements for a given size (here: depth) of the separation chamber. second, having a portion 306 of the separation chamber 302 at the back side of the rear wall 94 provides a simple way for arranging a lateral conduit stub 128, which in turn reduces the amount of space needed for the connection of the steam conduit 106 to the nozzle unit 300. third, having a portion 306 of the separation chamber 302 at the back side of the rear wall with a lateral conduit stub 128 makes it possible to arrange for a significant deflection of the steam path direction inside the separation chamber, which is beneficial for an efficient separation of condensed water from the steam. furthermore, having a portion 304 of the separation chamber 302 within the rear channel 20b, 20c provides a simple way for draining water from the separation chamber into the rear channel, since no guiding, conduit, sealing or similar means is needed between the drain outlet(s) 308 and the rear channel. according to figs. 40 , 42, 43, 44 , 46, and 47 , it is preferable to arrange the drain outlet(s) at or close to the lowest portion of the nozzle unit 300 or the separation chamber 302, particularly because condensed water will accumulate at the lower parts of the steam path due to its higher density as compared to the steam. condensed water will therefore accumulate at the drain outlet and will be pushed towards the outside of the nozzle unit 300 by the pressure of the steam.
in preferred embodiments, the separation chamber is designed or formed so that condensed liquid is guided towards the drain outlet. as depicted by figs. 39 and 40 the compartment back wall 74 and the rear wall 94 are arranged to form at least part of the rear channel 20b, 20c. in this way it is simple to arrange the drain outlet 308 of the nozzle unit 300 inside the rear channel as described above. further details of preferred embodiments of the nozzle unit according to the present invention are depicted in figs. 42 to 47 . thereof figs. 42, 43, 44 and 45 show the nozzle unit 300 at different viewing sides, namely a perspective front/left-side view, a front view, a left-side view and a rear view, respectively. in the following, a method for operating a steam generation unit in a laundry dryer 2 (compare fig. 1 ) will be explained in more detail (exemplified for steam generation unit 90). fig. 48 provides a schematic view of some components of the laundry dryer 2 relevant for understanding the operation of the steam generation unit. as already described in detail above, the exemplary laundry dryer comprises a laundry storing compartment 17 for storing laundry to be treated and means (here a heat pump system 4) for heating a flow of process air a which is introduced into the laundry storing compartment 17 by means of a fan 8 and a process air channel 20b, 20c. the laundry dryer 2 further comprises a steamer tank 140 serving as a liquid or water reservoir and containing liquid or water that can be guided to a steam generation unit 90 through connection conduit 178. the steam generation unit comprises a heater 282. a valve 280 is arranged for controlling the flow rate of liquid flowing into the steam generation unit. in addition to or instead of the valve 280, preferably there is a water supply pump for supplying liquid to the steam generation unit. the pump and/or valve are preferably operated under the control of the control unit 51. 
the control unit 51 retrieves the parameters for executing the program from a program memory 52. the control unit 51 can store current program status data, user settings and other data (e.g. error codes for maintenance) in the memory 52. the steam generation unit (in short also called "steamer") is arranged for converting the supplied liquid into a flow of steam that is directed through a steam conduit (see 106) and into a nozzle unit 88 or 300. preferably, the steam generation unit is an inline steam generation unit. the nozzle unit serves for injecting the steam into the laundry storing compartment and may have its nozzle outlet within the compartment or preferably arranged behind a back wall 74 of the compartment (compare units 88 or 300). fig. 48 also shows several heat/temperature sensors 292, 294, 296, 298 arranged in thermal contact at and/or assigned to the heat pump system 4, the process air channel 20b, 20c, and the steam generation unit 90. furthermore, in the depicted embodiment there is a thermal connection 284 between the process air channel 20b and the steam conduit 106. here and in the following, the expressions "liquid" and "water" shall be used interchangeably, meaning that the liquid stored in steamer tank or guided to the steam generation unit may be pure water or may be a liquid or a mixture of liquids (possibly including water) appropriate for steam generation in the steam generation unit and applicable for laundry treatment in a laundry dryer. although the arrangement shown in fig. 48 is useful for explaining various embodiments for operating a steam generation unit 90, it is to be understood as an example only. other embodiments may not comprise all of the shown components and/or may have additional components and/or may have arranged the components differently. for example, an embodiment may have no or fewer heat sensors or may have heat sensors applied to other components than proposed in the figure. 
furthermore, it shall be understood that the various embodiments of the method described below can be combined with the various embodiments of the laundry dryer 2 and its components described above as may be appropriate for obtaining the required benefits. in an embodiment for operating a steam generation unit 90 in a laundry dryer 2, the flow rate of liquid provided from the reservoir 140 to the steam generation unit 90 is controlled by means of a water supply pump and/or by a valve 280. by controlling the activation of the pump and/or the opening/closing of the valve 280 (preferably using control unit 51), it is possible to dose the amount of liquid introduced into the steam generation unit in a given time interval, i.e. to control the flow rate of liquid supplied to the steamer 90. preferably, the heating power supplied to the steam generation unit for generating steam can be controlled by controlling the heater 282 (preferably using control unit 51), e.g., by switching or adjusting the power supply of the heater 282. preferably, the method comprises starting the control of the heater 282 for heating the steam generation unit 90 and thereafter starting the control of the water supply to the steam generation unit 90 by starting the control of the water supply pump and/or the valve 280. in particular it is beneficial to switch on the heater 282 first and start supplying liquid to the steam generation unit only after the steam generation unit 90 has reached a predefined temperature threshold required for proper operation. preferably thereafter, the control of the water supply pump and/or the valve 280 is independent of the operation status of the heater 282 or the current temperature of the steam generation unit 90.
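the start-up order described above (heater first, water supply only once a temperature threshold is reached) can be sketched as follows. the heater and valve objects, the read_temperature callable and the threshold value are illustrative assumptions, not part of the description.

```python
import time

Q_START = 95.0  # assumed start threshold in deg C (illustrative value)

def start_steamer(heater, valve, read_temperature, poll_s=0.5):
    # switch on the heater first, then wait until the steam generation
    # unit has reached the start threshold before opening the water supply
    heater.on()
    while read_temperature() < Q_START:
        time.sleep(poll_s)
    valve.open()  # water supply control starts only now
```

after this start-up, the water supply control can run independently of the heater state, as stated above.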
preferably in addition, the overall control of the laundry dryer 2 (preferably using control unit 51) is adapted such that the control of the steam generation unit 90 with its heater 282 terminates when the control of the water supply terminates, or vice versa. in this way it may be ensured, for example, that the heater 282 stops heating the steam generation unit 90 when no more water is supplied to the steam generation unit 90, and/or that no more water is supplied to the steam generation unit 90 when the heater 282 stops heating the steam generation unit 90. in an embodiment of the method, the water supply, i.e., the water supply pump and/or the valve 280, is controlled by a predetermined time sequence which is independent of the current operation status of the heater 282 and/or the current temperature of the steam generation unit 90. the memory shown in fig. 48 stores the settings for at least one predetermined time sequence. preferably two or more settings for predetermined time sequences are stored which are different from each other. the settings for a predetermined time sequence are fixed and invariantly stored in the memory or a memory section; the memory may be a rom type memory (e.g. eprom) which is programmed at the factory. fig. 49 is a diagram showing the variations over time of the heating power p provided to the heater 282, of the liquid flow rate r provided to the steamer 90, and of the temperature q of the steamer. here and in the following, 'operating the heater' means supplying heating power p to the heater. first the control of the heater 282 is activated and the heating power is switched on until the steam generation unit 90 has reached a threshold q2 of its temperature q. a certain period of time after the activation of the heater control, the control of the water supply is activated.
in the example shown, the control of the water supply is adapted such that the rate r of water flowing to the steam generation unit 90 oscillates between a lower value r1 and a higher value r2. preferably, the lower value r1 may correspond to no water flowing to the steamer. however, in other embodiments it may be preferred to have water flowing to the steamer at a non-zero rate r1 (standby operation while heating up the steamer) before the control of the water supply proper is started (i.e. when the temperature threshold q2 is reached and the flow rate may be raised up to r2 because the flow control is fully active). preferably, said predetermined time sequence for controlling the water supply is a sequence of target flow rates of water provided to the steam generation unit 90, and the water supply pump and/or the valve 280 are controlled so that the effective water flow rate follows the predetermined sequence of target flow rates. it shall be understood that the effective or controlled flow rate of water provided to the steam generation unit at any time may deviate from the related target flow rate for various reasons, e.g. due to inaccuracies of actuators (pumps, valves, etc.), sensors (temperature, flow, water level, etc.), and/or means of processing (amplifiers, calculations, signal lines, etc.) - but such deviations are not the intended operation. the predetermined time sequence or, if there are two or more predetermined time sequences, each one of the time sequences, is fixed in that, as soon as the (or one of the) time sequence(s) has been selected, the time behavior of the pump (activation/inactivation) and/or valve (open/closed) is fixed and will not be adapted in dependency on any other current operational parameter of the steam generation unit 90. in terms of control engineering, the application of the predetermined time sequence to the pump and/or valve is a feed-forward control, not a feedback control. 
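the feed-forward nature of the water-supply control can be made concrete with a minimal sketch: the target flow rate is a pure function of elapsed time, with no sensor input at all. the numeric values (r1, r2, period, duty fraction) are assumptions for illustration.

```python
# Minimal sketch of a feed-forward water-supply control: the target flow
# rate follows a fixed, predetermined time sequence oscillating between
# r1 and r2, regardless of heater state or steamer temperature (no
# feedback). All numeric values are illustrative assumptions.

def target_flow_rate(t, r1=0.0, r2=5.0, period=30.0, on_fraction=1/3):
    """Predetermined target flow rate [ml/s] at time t [s] after activation.

    Within each period, the flow is r2 for the first on_fraction of the
    period and r1 for the remainder. Being a function of time only, this
    is feed-forward control: no measured quantity enters the decision.
    """
    phase = t % period
    return r2 if phase < on_fraction * period else r1

# The sequence repeats identically every period, whatever the heater does.
assert target_flow_rate(5.0) == 5.0    # within the "on" part of the cycle
assert target_flow_rate(20.0) == 0.0   # within the "off" part
assert target_flow_rate(35.0) == target_flow_rate(5.0)  # periodic
```

the effective flow rate delivered by the real pump/valve may deviate from these targets (actuator and sensor tolerances), as noted above; the sketch only models the target sequence.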
in an embodiment, the control of the water supply may be started immediately or a predetermined period of time after activating the control of the heater 282. in another embodiment, after activating the control of the heater 282, the steam generation unit 90 may first be heated until its temperature q reaches a predetermined upper temperature threshold. the control of the water supply may then be started immediately or a predetermined period of time after the predetermined upper temperature threshold has been reached or exceeded. in the example depicted in fig. 49 , the control of the water supply pump and/or the valve 280 is arranged so that after its activation the flow r of liquid to the steamer oscillates between two rates r1 and r2 according to a predetermined time sequence. preferably r1 = 0. as can be seen, this oscillation is independent of the operation status of the heater 282 (after reaching the predetermined upper temperature threshold) and of the temperature q of the steam generation unit 90. on the other hand, the control of the heater 282 in this example is arranged so that the heater is switched off when the temperature q of the steam generation unit 90 is at or rises above a second threshold q2, and the heater is switched on when the temperature q of the steam generation unit 90 is at or drops below a first threshold q1. for measuring the temperature q, a temperature sensor 296 may be provided at the steam generation unit 90. of course, other temperature sensors, e.g. attached to or integrated in the steam conduit, may be used for this purpose instead or in addition. noticeably, depending on the physical design of the steam generation unit, the temperature q of the steam generation unit 90 may continue to increase after the heater has been switched off, and the temperature q of the steam generation unit 90 may continue to decrease after the heater has been switched on. 
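the two-threshold heater control described above is a classic hysteresis (bang-bang) controller. the following sketch shows the switching logic; the threshold values are assumptions, not figures from the patent.

```python
# Sketch of the two-threshold (hysteresis) heater control: switch the
# heater off at or above q2, on at or below q1, and keep the current
# state in between. Threshold values are illustrative assumptions.

Q1, Q2 = 90.0, 110.0  # assumed lower/upper steamer temperature thresholds [deg C]

def heater_control(q, heater_on):
    """Return the new heater state for measured steamer temperature q."""
    if q >= Q2:
        return False  # at or above upper threshold: switch off
    if q <= Q1:
        return True   # at or below lower threshold: switch on
    return heater_on  # between thresholds: keep current state (hysteresis)

on = True
trace = []
for q in (80.0, 100.0, 112.0, 100.0, 88.0):
    on = heater_control(q, on)
    trace.append(on)
# the heater stays on through 100, turns off at 112, stays off back at 100,
# and turns on again only at 88
```

the dead band between q1 and q2 prevents rapid on/off chattering; the physical overshoot noted in the text (temperature still rising after switch-off) is not modelled here.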
one of the thresholds q1 or q2 may be identical to the mentioned predetermined upper temperature threshold used for triggering the activation of the control of the water supply. in another embodiment, said predetermined upper temperature limit may be chosen separately and, in particular, the second threshold q2 at which the heater is deactivated may be above said predetermined upper temperature limit. in particular, it can be beneficial to choose a predetermined time sequence (temporal profile) for controlling the water supply pump and/or the valve 280 in such a way that no or essentially no water is supplied to the steamer 90 while the heater 282 is on, and/or that all or most of the water is supplied to the steamer 90 during time periods where the heater 282 is off. this is not an effect of the control of the water supply and the heating power as such, but the consequence of selecting the predetermined temporal profile, the heating power strength and the heating control parameters in dependency on the steam generator as the device to be controlled, in such a way that the periods of heating and of water supply do not overlap. generally it is to be noted that, in the embodiment operating with the predetermined time sequence for the water supply, the water supply over time is fixed according to the predefined sequence, which specifically means that it is independent of the temperature and the temperature control (except, for example, that the time sequence starts only when the predetermined heater temperature threshold is reached), while the temperature control is also reactive to the temporal temperature changes caused by the water supply. an example of control by a predetermined time sequence of water supply is shown in the diagram of fig. 50 . again, the control of the heater is switched on first and the heater 282 heats the steam generation unit 90. thereafter, the control of the water supply is activated and then follows a predetermined time sequence. 
as can be seen, in this embodiment the selected time sequence results in water being supplied to the steamer only during time periods when the heater 282 is off, i.e. when the heating power p is zero or close to zero. fig. 51 is a flow diagram showing the major steps of an embodiment of the method for operating a steam generation unit 90 in a laundry dryer 2. as already explained above, the control of the heater 282 is activated first. when the temperature q of the steam generation unit 90 is at or rises above a temperature threshold q2, the heater is switched off and the control of the water supply, i.e. of the water supply pump and/or the valve 280, is activated. the control of the water supply thereafter is independent of the operation status of the heater 282 and of the temperature of the steam generation unit 90. when the steamer temperature q is at or drops below a temperature threshold q1, the heater 282 is switched on until the temperature threshold q2 is reached again. this process is repeated, e.g. until no more steam generation is required by the laundry treatment program or, e.g., until steam generation is to be interrupted. in that case the control of the water supply and the control of the heater are deactivated. as depicted in fig. 52 , it may be particularly beneficial if, during controlling of the heater, the heater 282 is switched off when the temperature q of the steam generation unit 90 rises above a temperature limit q2 and the heater 282 is switched on when the temperature q of the steam generation unit 90 drops below a temperature limit q1, wherein the temperature limit q1 is above the temperature limit q2. due to the fact that the temperature of the steam generation unit 90 tends to continue to rise after switching off the heater and tends to continue to drop after switching on the heater (see above), this can lead to a more constant temperature profile of the steam generation unit 90. 
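the flow of fig. 51 (heat to q2, activate the water supply once, then let the heater cycle between q1 and q2 while the water supply runs independently) can be simulated with a toy thermal model. the heating/cooling rates and thresholds below are assumptions for illustration, not values from the patent.

```python
# Rough simulation of the control loop of fig. 51 under a toy thermal model:
# heat until q2 is first reached, then activate the fixed water-supply
# control; the heater thereafter cycles between q1 and q2 while the water
# supply stays active on its own. All numeric values are assumptions.

Q1, Q2 = 90.0, 110.0          # assumed steamer temperature thresholds [deg C]
HEAT_RATE, COOL_RATE = 4.0, 2.0  # assumed temperature change per tick [deg C]

def simulate(ticks=60):
    q, heater_on, water_active = 20.0, True, False
    activations = []
    for _ in range(ticks):
        q += HEAT_RATE if heater_on else -COOL_RATE
        if q >= Q2:
            heater_on = False
            water_active = True  # water supply activated at the first q2 crossing
        elif q <= Q1:
            heater_on = True
        activations.append(water_active)
    return q, activations

q_final, acts = simulate()
# once activated, the water supply stays active regardless of heater cycling,
# and the steamer temperature settles into the q1..q2 band
```

the simulation also illustrates the termination rule from the text only implicitly: deactivating both controls (end of steam demand) would simply stop the loop.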
in an embodiment of the method, the predetermined time sequence comprises a repeated decrease and increase of the liquid supply rate to the steam generation unit and/or comprises repeated stops and starts of the liquid supply to the steam generation unit according to predetermined time intervals t_on and t_off (see fig. 50 ). preferably, the time interval t_on is in the range of 3 to 30 seconds. preferably, the time interval t_off is in the range of 6 to 60 seconds. more preferably the time interval t_on is in the range of 3 to 30 seconds and the time interval t_off is in the range of 6 to 60 seconds. in a further embodiment, the repeated decrease and increase of the liquid supply rate may be an additional modulation applied to predetermined average liquid supply rates. the parameters of the modulation such as, e.g., duty cycle, frequency, and/or amplitude may be chosen depending on, e.g., the current state of the laundry treatment program and/or the purpose of the laundry treatment program and/or the state (e.g. measured temperature q) of the steam generation unit, etc. in embodiments of the method, the water supply pump and/or valve 280 may be controlled according to a sequence of predetermined time sequences, wherein each predetermined time sequence is chosen as appropriate for the current state of the laundry dryer 2, the state and/or purpose of the laundry drying program, and/or the state of one or more components of the laundry dryer. in particular, the method may provide for a warm-up phase for warming up the steam generation unit 90, a steam conduit 106, and/or a nozzle unit 88 or 300. the predetermined time sequence applied during the warm-up phase may be arranged so that the heating power or the average heating power of the heater 282 is higher than during normal operation of the steam generation unit, e.g. after the warm-up phase. 
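the repeated stop/start pattern with intervals t_on and t_off can be written as a simple periodic predicate. the interval values used here (t_on = 10 s, t_off = 20 s) are assumptions chosen from within the preferred ranges stated above.

```python
# Sketch of the predetermined on/off water-supply sequence built from the
# intervals t_on and t_off described above: water flows for t_on seconds,
# then stops for t_off seconds, repeating. The chosen values (10 s / 20 s)
# are assumptions from within the stated preferred ranges (3-30 s / 6-60 s).

def water_on(t, t_on=10.0, t_off=20.0):
    """True if water is supplied at time t [s] in the repeating cycle."""
    return (t % (t_on + t_off)) < t_on

# the pattern depends on time only (feed-forward), never on temperature
assert water_on(5.0) is True    # within the first t_on interval
assert water_on(15.0) is False  # within the following t_off interval
assert water_on(35.0) is True   # cycle repeats after t_on + t_off
```

a modulation of a non-zero average flow rate, as mentioned in the further embodiment, could be obtained by returning two different non-zero rates instead of a boolean.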
alternatively or in addition, the predetermined time sequence applied during the warm-up phase may be arranged so that the rate of water supplied to the steamer 90 is lower than during normal operation of the steam generation unit, e.g. after the warm-up phase. fig. 53 is a diagram showing the variations over time of the heating power p applied to the heater 282, of the liquid flow rate r supplied to the steamer 90, of the temperature q of the steamer, and of the temperature q' of a steam conduit, as they may arise in an embodiment of the method comprising a warm-up phase t_warm. in this example, both a reduced water flow rate r2 and an increased heating power p2 are applied during the warm-up phase as compared to the water flow rate r3 and the heating power p1 applied after the warm-up phase. the warm-up phase according to fig. 53 ends when the temperature q' of the steam conduit reaches a temperature threshold q'1. a temperature sensor 298 (see fig. 48 ) may be attached to or integrated in the steam conduit 106 for measuring the temperature of the steam conduit and/or the temperature of the steam generated by the steam generation unit 90. the temperature threshold q'1 may be a predetermined value or it may be determined from a state of the laundry dryer and its components, such as, e.g., the purpose and/or state of the laundry treatment program, the temperature of the steam generation unit, the ambient air temperature, etc. a preferable method for operating a laundry dryer 2 as described above and as depicted e.g. in fig. 48 comprises activating a means for heating the process air a before activating the steam generation unit 90. preferably, said means for heating the process air a may be a heat pump system 4, in particular the heat exchanger 12 of the heat pump system 4. alternatively or in addition, an electrical resistor heater may be provided for heating the process air. 
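the warm-up scheme of fig. 53 boils down to a setpoint switch keyed to the steam-conduit temperature q': below the threshold q'1, use increased power p2 and reduced flow r2; afterwards, normal power p1 and flow r3. the numeric values in this sketch are illustrative assumptions.

```python
# Sketch of the warm-up phase of fig. 53: while the steam-conduit
# temperature q' is below its threshold q'1, a reduced water flow r2 and
# an increased heating power p2 are applied; thereafter the normal values
# r3 and p1 are used. All numeric values are illustrative assumptions.

QP1 = 60.0               # assumed conduit temperature threshold q'1 [deg C]
P1, P2 = 1000.0, 1500.0  # assumed normal / warm-up heating power [W]
R2, R3 = 1.0, 3.0        # assumed warm-up / normal water flow rate [ml/s]

def setpoints(conduit_temp):
    """Return (heating_power, flow_rate) depending on the warm-up state."""
    if conduit_temp < QP1:
        return P2, R2  # warm-up: more heating power, less water
    return P1, R3      # normal operation after the warm-up phase

assert setpoints(25.0) == (1500.0, 1.0)  # still warming up
assert setpoints(70.0) == (1000.0, 3.0)  # warm-up phase ended
```

in the dryer, the conduit temperature would come from sensor 298; the threshold could also be computed from the program state rather than fixed, as noted above.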
preferably, at least a portion of the steam conduit 106 is guided close to, is guided in, or is in thermal contact with the process air channel 20b, 20c, so that there is a thermal connection 284 between the process air channel and the steam conduit. optionally, the method comprises activating the fan 8 for blowing heated process air through the process air channel 20b, 20c. this method is beneficial for reducing the amount of water droplets reaching the laundry in the laundry storing compartment, as the heated process air heats the process air channel, which in turn supports heating of the steam conduit due to the thermal connection 284. as a consequence, there will be less condensation of steam in the steam conduit. heating the process air channel according to this method is particularly useful when the laundry dryer is used with a program comprising steam treatment of the laundry only, so that the drying process has not been activated and thus the process air channel has not been heated before. preferably, the duration and/or power of the heating of the process air and/or the delay before activating the steam generation unit are adapted according to the current temperature in the process air channel. for this purpose, at least one temperature sensor 294 may be attached to or integrated in the process air channel, and/or the temperature sensor of the heat pump system 4 is used for detecting the process air temperature indirectly. if, for example, the refrigerant temperature sensor of the compressor or at the outlet of the compressor is used, the refrigerant temperature is detected, which is a measure for the refrigerant temperature in heat exchanger 12. heat exchanger 12 transfers the refrigerant heat to the process air, and thus the refrigerant temperature is a measure for the process air temperature. 
thus in an embodiment, the laundry dryer comprises a heat pump system 4 that has a refrigerant temperature sensor 292 and the method comprises detecting the temperature signal from the refrigerant temperature sensor 292 before activating the steam generation unit 90. if the temperature corresponding to the detected signal is below a predetermined threshold, then the heat pump system 4 is activated in order to heat the process air a. preferably, in such an embodiment, the temperature of the process air a is determined or at least estimated indirectly from the temperature of the refrigerant of the heat pump system 4. preferably, the method further comprises deactivating the means for heating the process air or reducing its heating power before activating the steam generation unit 90. in this way, the total power required by the laundry dryer 2 during steam generation can be reduced, which is especially beneficial if it is desired to not exceed a certain maximum power limit. for the same benefit, the method may additionally or instead comprise reducing the heating power of or deactivating the heater of the steam generation unit while the means for process air heating is active. preferably, the means for heating the process air is deactivated or its heating power is reduced after a predetermined period of time. more preferably, the means for heating the process air is deactivated or its heating power is reduced when the temperature of the process air a and/or the process air channel 20b, 20c and/or the steam conduit 106 reaches a predetermined temperature threshold or is within a predetermined temperature range. more preferably, the means for heating the process air is deactivated or its heating power is reduced when the temperature of the steam conduit is within a predetermined temperature range of about 30°c to 40°c. preferably, at least one temperature sensor 294, 298 is attached to or integrated in the process air channel and/or the steam conduit 106. 
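the pre-heating decision described above (read the refrigerant temperature as a proxy for the process-air temperature and run the heat pump first if it is too low, to limit total power draw during steam generation) can be sketched as below. the threshold and the returned action names are assumptions for illustration.

```python
# Sketch of the pre-heating decision: before activating the steam
# generation unit, read the refrigerant temperature sensor (a proxy for
# the process-air temperature via heat exchanger 12) and run the heat
# pump first if it is below a threshold, delaying the steamer. The
# threshold value and the action names are illustrative assumptions.

T_PREHEAT = 30.0  # assumed refrigerant temperature threshold [deg C]

def prepare_steam_cycle(refrigerant_temp):
    """Return the actions to take before starting the steamer."""
    if refrigerant_temp < T_PREHEAT:
        # process air (and, via thermal connection 284, the steam conduit)
        # is too cold: heat the process air first, delay the steamer
        return {"run_heat_pump": True, "start_steamer": False}
    # warm enough: steamer may start; heat pump heating can be reduced or
    # deactivated to keep the total power draw within its limit
    return {"run_heat_pump": False, "start_steamer": True}
```

keeping the two heat sources mutually exclusive in this sketch mirrors the power-limit consideration in the text: heating power is shifted between process-air heating and the steamer rather than drawn for both at once.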
in the above, of course, the overall control of the laundry dryer or at least one laundry treatment program of the laundry dryer is designed so that the control of the water supply pump and/or the valve is terminated when the control of the steam generation unit is terminated, or vice versa.

reference numeral list:
2 laundry dryer
4 heat pump system
6 refrigerant loop
8 blower
10 first heat exchanger
12 second heat exchanger
14 compressor
16 expansion device
17 laundry storing compartment
18 drum
19 laundry
20 process air channel
20a battery channel
20b rear channel
20c rising channel
20d front channel
22 fluff element
30 condensate collector
36 drain pump
37 condensate drawer
40 condensate drawer
41 temperature sensor
46 drain conduit
50 drawer pipe
51 control unit
52 program memory
54 loading opening
55 loading door
56 top cover
58 left cover
60 front cover
62 front top panel
64 drawer cover
66 input section
68 loading frame
70 front frame
72 rear frame
74 compartment back wall
76 window panel
78 front cover window opening
82 drawer handle
84 back wall opening
86 detangling cone
88 nozzle unit
90 steam generation unit
92 nozzle outlet
94 rear wall compartment
95 back cover
96 base portion
98 mounting hole
100 mounting screw
101 sealing element
102 steam guiding portion
104 connection portion
106 steam conduit
108 separation chamber
110 mounting socket
112 chamber cover
114 welding joint
116 nozzle port
118 base section
120 conduit port
122 steam conduit portion
124 steam inlet
126 chamber outlet
128 conduit stub
130 opening rim
132 front surface portion
134 first horizontal plane
136 second horizontal plane
138 range
140 steamer tank
142 branching element
144 pump unit conduit
146 steamer tank conduit
148 drain tank conduit
150 piping
152 backflow-preventing member
154 branch
156 valve seat
158 valve passage
160 valve member
162 stopping element
164 forward direction
166 clearance passage
168 orifice
170 inner diameter
172 upper region
174 highest point
176 highest point
178 connection conduit
180 conduit passage
182 smaller diameter
184 piping
190 overflow conduit
280 valve
282 heater
284 thermal connection
292, 294, 296, 298 temperature sensors
300 nozzle unit
301 base portion
302 separation chamber
304, 306 separation chamber portions
308 drain outlet
a process air flow
b refrigerant flow
p heating power of the steam generation unit
p1, p2 heating power levels
q steamer temperature
q1, q2 steamer temperature thresholds
q' temperature of steam conduit
q'1 steam conduit temperature threshold
r water flow rate
r1, r2, r3 water flow rates
t_on heater-on time interval
t_off heater-off time interval
t_warm warm-up phase
012-094-550-563-592
GB
[ "DE", "GB", "US", "JP" ]
D05C15/18,D05C15/34
1985-06-19T00:00:00
1985
[ "D05" ]
tufting machines
a yarn feed roller assembly for a tufting machine has a number of sets of rollers about which yarn is adapted to be wound and fed to the needles of the tufting machine. a number of drive shafts each carrying a number of clutches and a like number of feed gears are each driven at a different rotational speed. each clutch has a first part fixed for rotation with the shaft and a second part rotatably fastened to the corresponding feed gear and axially moveable relative to that feed gear. the feed gear is rotatably mounted on each drive shaft and upon actuation of a particular clutch the feed gear associated therewith is drivingly connected to the shaft for rotation therewith. respective feed gears of the corresponding shafts are in meshing relationship with one another with a corresponding gear on a respective set of yarn feed rollers so that upon coupling of a particular feed gear to its shaft the yarn feed roller set associated therewith is driven at a speed related to that shaft. each shaft is formed from a number of assembled shaft sections coupled end to end and readily disassembled therefrom so that upon failure of a particular clutch removal of a shaft section with that clutch thereon can be made without disturbing the yarn on the corresponding feed roller set.
1. a yarn feed roller assembly for a tufting machine comprising a plurality of sets of rollers about which yarn is adapted to be wound and fed, a plurality of rotatably mounted drive shafts each adapted to be driven at a different speed, a feed gear rotatably mounted on each drive shaft, corresponding feed gears of the respective shafts being in meshing relationship and defining a gear set, each gear set corresponding to each set of rollers, a selectively operable clutch corresponding to each feed gear, each clutch having a first part fixed on the respective drive shaft for rotation therewith, and a second part rotatably fastened to the corresponding feed gear and axially moveable relative to the respective drive shaft toward and away from said first part, whereby selective actuation of a clutch couples the corresponding feed gear to the respective drive shaft for rotation therewith and each gear of a gear set is driven at a speed determined by the speed of the drive shaft carrying the actuated clutch, and gear means associated with each set of rollers disposed in meshing relationship with a corresponding feed gear of the respective gear set for driving each roller set at a speed determined by the speed of the corresponding gear set. 2. a yarn feed roller assembly as recited in claim 1, wherein each drive shaft comprises a plurality of shaft sections, each section having a male formation at one end and a female formation at the other end for coupling adjacent shaft sections in end-to-end disposition. 3. a yarn feed roller assembly as recited in claim 2, wherein each shaft section is supported by a bearing, and adjacent ends of successive sections are mounted within a bearing cover, said cover having locating means for axially locating the ends of the respective shafts. 4. 
a yarn feed roller assembly as recited in claim 2, wherein each shaft section supports a plurality of feed gears and corresponding clutches, the shaft section together with the associated feed gears and clutches defining a module, each module being independently moveable from adjacent modules of said assembly. 5. a yarn feed roller assembly as recited in claim 4, wherein each shaft section is supported by a bearing, and adjacent ends of successive sections are mounted within a bearing cover, said cover having locating means for axially locating the ends of the respective shafts, said cover comprising at least a pair of members readily disassembled for permitting removal of a module. 6. a yarn feed roller assembly as recited in claim 1, wherein said first part of each clutch carries a slip ring rotatable therewith, at least one contact brush associated with each slip ring for transmitting electrical signals to said clutch, and means for pivotably mounting a plurality of said brushes for movement into and out of operative engagement selectively with the corresponding slip ring. 7. a yarn feed roller assembly as recited in claim 6, wherein each drive shaft comprises a plurality of shaft sections, each section having a male formation at one end and a female formation at the other end for coupling adjacent shaft sections in end-to-end disposition. 8. a yarn feed roller assembly as recited in claim 7, wherein each shaft section is supported by a bearing, and adjacent ends of successive sections are mounted within a bearing cover, said cover having locating means for axially locating the ends of the respective shafts. 9. 
a yarn feed roller assembly as recited in claim 7, wherein each shaft section supports a plurality of feed gears and corresponding clutches, the shaft section together with the associated feed gears and clutches defining a module, each module being independently moveable from adjacent modules of said assembly, the brushes corresponding to the clutches of a module being mounted on a common member. 10. a yarn feed roller assembly as recited in claim 9, wherein said member comprises a lid overlying the shaft section of respective gear sets, whereby pivotable movement of said lid disconnects electricity from all the clutches corresponding to the module in the gear sets associated with said lid. 11. a yarn feed roller assembly as recited in claim 1, wherein all the sets of rollers lie in a single row such that yarn strands wound on the respective set of rollers are disposed in substantial alignment.
background of the invention this invention relates to tufting machines, and more particularly to a drive arrangement for a yarn feed roller pattern assembly having a plurality of feed roller sets, cooperating roller sets being drivingly connected with a selected one of a number of continuously driven shafts for rotating at a respective and different speed, thereby to deliver yarn to the tufting machine in accordance with patterning requirements. wide use is made of yarn feed roller pattern attachments or assemblies for producing variations in pile height in tufted pile fabrics such as carpeting. representative of such feed roller pattern assemblies are those disclosed in the following u.s. pat. nos. 2,862,465 to card; 2,875,714 to nix; 2,966,866 to card; 3,001,388 to maccaffray; 3,075,482 to card; 3,103,187 to hammel; 3,134,529 to beasley; 3,272,163 to erwin, et al; 3,489,326 to singleton; 3,605,660 to short; 3,752,094 to short; 3,847,098 to hammel; 3,926,132 to lear, et al; 3,955,514 to prichard, et al and 4,134,348 to scott. these assemblies include a plurality of yarn feed rollers which feed yarn to the needles of the tufting machine. each of the feed rollers is selectively driven at one of a plurality of different speeds independently of the other feed rollers by means of clutches controlled by a pattern control. the amount of yarn supplied to the needles of a tufting machine is determined by the rotational speed of the feed rollers about which the yarn is wound, so that with a fixed needle stroke the amount of yarn supplied to each needle determines the pile height of the fabric produced. to create patterned pile effects the amount of yarn fed to the individual needles may be varied by driving the feed rollers selectively at the different speeds. early in the development of yarn feed roller pattern assemblies, such as exemplified in card u.s. pat. nos. 
2,862,465 and 2,966,866, the feed rollers were mounted in parallel relationship upon respective shafts extending perpendicularly to the row of needles, and each shaft carried a high speed clutch and a low speed clutch. all of the high speed clutches were driven by chain and sprocket mechanisms from a high speed shaft, while all of the low speed clutches were driven by another chain and sprocket mechanism from a low speed shaft, one or the other of the clutches being selectively engaged to couple its drive to the shaft. later in the development of the prior art, the rollers were separated into upper and lower roll sets with each roll set coupled to a drive shaft, each drive shaft carrying a high speed clutch and a low speed clutch. again, all the high speed clutches were driven by chain and sprocket mechanisms from a high speed shaft and all the low speed clutches were driven by a chain and sprocket mechanism from a low speed shaft. actuation of a respective clutch selectively coupled its drive means to the drive shaft to rotate the rolls of each set at the speed of the respective high or low speed shaft. such constructions are exemplified by hammel u.s. pat. no. 3,103,187 and erwin, et al 3,272,163. the electromagnetic clutch members of the feed roller assemblies wear out or become defective before other parts thereof, and considerable disassembly of these prior art drive mechanisms had to be carried out before the clutches could be replaced. such disassembly and replacement resulted in considerable "down-time" which in many cases was well out of proportion to the seriousness of the fault. to reduce such "down-time" the prior art developed yarn feed roller assemblies in which the clutch is within the roll. for example, in u.s. pat. nos. 3,489,326 to singleton and 3,926,132 to lear, et al the clutches were placed within respective rollers. 
here two or more drive shafts are driven at different speeds, each carrying a number of feed rolls rotatable relative to the respective shafts. corresponding rolls on each shaft form a roll set which are coupled together so that each set of rolls may rotate at the same speed. the clutches are selectively engaged to couple the roll associated with the engaged clutch to the shaft on which the clutch is mounted for driving that roll and the other rolls in that set from the shaft carrying the engaged clutch. a number of shaft sections were coupled end to end so that a shaft section may be removed for repair or replacement of a defective clutch. however, although highly successful and effective in reducing "down-time", problems still resulted from the handling of a large number of feed rolls and clutches on a single shaft section. if a clutch in the middle feed roll of a shaft section is defective, then all the remaining feed rolls and clutch members between the defective clutch and the end of the shaft section must be removed before the problem can be remedied. moreover, since the rolls are removed with the associated clutches, the yarn wound about the rolls is disturbed and when a shaft section is replaced the yarn must again be threadedly wound about the rolls in each roll set associated with the replaced shaft section. in another effort to reduce the "down-time" a module construction was proposed in hammel u.s. pat. no. 3,847,098 wherein each roll set is mounted within a modular housing having a chain and sprocket drive for each roll of the set, with each drive being operable to connect a first shaft on which each roll of the set and its associated internal clutch is mounted to another shaft which carries a gear extending from the module. the latter gears are coupled to a corresponding gear mounted on a shaft in a drive housing when the module is mounted in an operative position. 
actuation of one of the clutches within the module couples its roll, and by virtue of gearing between the rolls in a set all the rolls in the set, to the drive and thus to the shaft within the drive housing associated therewith. when a clutch must be replaced or repaired, the module including the associated rolls, clutches, chains and sprocket drive is detached and replaced as a unit. although a more convenient arrangement is provided for changing a worn-out clutch in a module by replacing the entire module, all the rolls, clutches, gears and drive members are removed with the module, and the yarn associated with the replaced module must still be pulled away from the original module and then threadedly rewound about the replacement unit, albeit only the yarn associated with that module need be disturbed. moreover, because of the bulkiness of the modules, they must be mounted in groups spaced vertically apart so as to provide the required number of modules for a tufting machine. this is an inconvenience in servicing the rollers, as when the yarn is initially threaded about the various rollers of the yarn feed assembly, since it requires working at two different levels, one of which requires a ladder. furthermore, the size of the rolls is tied to the size of the clutch, as with any yarn feed roller attachment having the clutch within the rolls. this limits the size of the clutch and its associated elements, thereby resulting in faster wearing characteristics. since the size of the rolls must be large enough to receive the clutches, the rolls are of a larger size than necessary, resulting in the yarn feed attachment being placed high up on the tufting machine, adding to the inconvenience of threading the yarn about the rollers and of servicing the modules. 
summary of the invention consequently, it is a primary object of the present invention to provide a clutch actuated yarn feed roller assembly for tufting machines in which the clutch assembly and the rollers are separated one from the other so that the clutch assembly can be serviced, replaced or repaired without disturbing yarn on the rollers, the clutches being mounted in modular sub-assemblies for rapid replacement when necessary. it is another object of the present invention to provide a yarn feed roller assembly for tufting machines wherein the assembly includes clutches separated from the yarn feed rollers such that maintenance of the clutches can be performed without disturbing yarn wound about the rollers and whereby the size of the rollers is independent of the size of the clutches and vice versa, so that the roller assembly may be mounted in a single row for ease of threading yarn about the rollers and larger clutches may be used for better wear characteristics. it is a further object of the present invention to provide a yarn feed roller assembly for tufting machines including clutch assemblies for selectively coupling respective rollers to selective drive shafts, the clutches and rollers being separately mounted, the clutches being mounted in modular sub-assemblies on shaft sections removeable as a unit from the drive shaft housing, the housing having a pivotably mounted closure lid which when closed may supply electrical energy to the clutches, but which when opened automatically disconnects the electricity for servicing. 
Accordingly, the present invention provides a drive arrangement for yarn feed rollers of a tufting machine, the arrangement comprising a plurality of drive shafts, feed gears rotatably mounted on the drive shafts, corresponding yarn feed gears of the shafts being coupled together for unitary motion, a respective selectively operable clutch acting between each yarn feed gear and the shaft upon which the gear is mounted for coupling that gear and the corresponding gears to that drive shaft, and at least one yarn feed roller drivingly connected with respective yarn feed gears for unitary motion therewith at the speed of the selected drive shaft. In the preferred embodiment, the clutch has two elements, a first of which is fast on the respective shaft and the other of which is coupled to a respective gear and selectively movable into and out of coupling engagement with the first element when actuated so as to couple the gear associated therewith to the shaft, and the yarn feed gears and respective yarn feed roller are drivingly connected together by gear teeth lying in a common plane extending transversely to the drive shaft. Thus, the rollers and respective feed gears and clutches are independently mounted, and when a clutch needs servicing the drive shaft on which that clutch is mounted may be readily removed as a unit from the drive and replaced by a like unit without disturbing the rollers and the yarn thereon. According to a preferred feature of the invention, each drive shaft comprises a respective plurality of shaft sections, the shaft sections of a given shaft being arranged and axially aligned in end-to-end disposition and being drivingly connected by cooperating male and female formations.
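The selection principle described above can be sketched in code. The sketch below is illustrative only: the shaft names, speeds and the default 1:1 feed-gear/ring-gear mesh are assumptions, not figures from the specification. It simply models the stated behavior that engaging the clutch on a given drive shaft couples the feed gear, and through the meshed gear teeth the whole roller set, to that shaft.

```python
# Hypothetical sketch of clutch selection: one clutch per drive shaft, and
# the roller set adopts the speed of whichever shaft has its clutch engaged.
# A 1:1 mesh between feed gear and roller ring gear is assumed by default.

def roller_speed(shaft_rpms, engaged_shaft, gear_ratio=1.0):
    """Return the roller-set speed (rpm) when the clutch on `engaged_shaft`
    is actuated; `gear_ratio` covers a non-unity feed-gear/ring-gear mesh."""
    if engaged_shaft not in shaft_rpms:
        raise KeyError("no such drive shaft: %r" % engaged_shaft)
    return shaft_rpms[engaged_shaft] * gear_ratio

shafts = {"high": 1200, "low": 400}      # hypothetical shaft speeds (rpm)
print(roller_speed(shafts, "high"))      # rollers run at the high-shaft speed
```

Because the clutches act between gear and shaft rather than inside the rolls, swapping a shaft (or, in code terms, the `shaft_rpms` entry) leaves the roller model untouched, mirroring the serviceability argument made above.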
Preferably, a bearing is provided at each respective end of each shaft section and adjacent ends of successive sections are mounted in a bearing cover, there being location means in the cover engageable with the flanks or ends of the bearings for locating the shaft ends in a requisite relative disposition in the axial direction of the shaft. According to a further feature of the invention, the clutch includes a slip ring at the periphery thereof and further includes at least one feed contact brush engageable with the slip ring, the contact brushes being mounted for pivotable motion about a remote axis to and from a position in engagement with the slip ring. The contact brushes are mounted in a plate forming the closure lid for the drive shaft housing so that when the lid is closed electricity may flow to the clutches, but when the lid is open for service electrical contact is automatically disconnected. Preferably, plural feed contact brushes are provided for each clutch.

BRIEF DESCRIPTION OF THE DRAWINGS

The particular features and advantages of the invention as well as other objects will become apparent from the following description taken in connection with the accompanying drawings, in which:

FIG. 1 is a schematic side elevational view of a tufting machine incorporating a yarn feed roller assembly constructed according to the principles of the present invention;

FIG. 2 is a diagrammatic side elevational view of the yarn feed roller assembly illustrated in FIG. 1 but at an enlarged scale;

FIG. 3 is an elevational view of a portion of one of the yarn feed roller assembly drive constructions shown partly in section; and

FIG. 4 is an end elevational view showing the mounting of a clutch shaft sub-assembly.

DESCRIPTION OF THE PREFERRED EMBODIMENT

Referring now to the drawings, and particularly to FIG.
1, a tufting machine 10 is illustrated having a head 12 in which a plurality of laterally spaced push rods 14 (only one of which is illustrated) is reciprocably mounted, the push rods carrying a needle bar 16 at the lower end thereof for supporting needles 18 which cooperate in conventional manner with loopers or hooks (not illustrated) in the bed 20 of the machine. A single row of a multiplicity of respective pairs or sets of yarn feed rollers 22, 24 is rotatably supported in cantilever fashion on shaft members 222, 224 respectively and in side-by-side disposition at the front of the machine on a support bracket 26 carried by a frame 28 mounted on the head 12. Because of the construction of the feed roller assembly, the frame which supports the entire assembly may be disposed directly above the tube bank 30 through which the strands of yarn 32 are fed. The yarn 32 is fed to the rollers 22, 24 from a creel (not shown) and is guided to the individual needles 18 of the tuft forming instrumentalities through the tube bank 30 to direct the yarn from the feed roller sets to the desired needle as required. The individual rollers of each respective set of yarn feed rollers 22, 24, which define the feed roller means, are drivingly connected together at an end thereof by the intermeshing of ring gears 34, 36 respectively provided thereon so that the rolls of a set are driven together at the same speed. The rollers 22, 24 are driven at a selected one of the respective speeds of at least two drive shafts 38, 40 through yarn feed gears 42, 44 freely mounted on the respective shafts 38, 40. The feed gears 42, 44 are in mesh with each other and with a ring gear 34 associated with each roller 22. The yarn feed gears 42, 44 are selectively engageable with the respective drive shaft 38, 40 by means of electromagnetic clutches 50, 52 operating between the respective feed gears 42, 44 and the drive shafts 38, 40 on which they are mounted.
The clutches 50, 52 may be selectively operated in accordance with patterning requirements drivingly to connect rollers 22, 24 with either the drive shaft 38 or the drive shaft 40, thereby to deliver yarn to the needles 18 at a rate determined by the speed of the corresponding roller set. It will be appreciated that by separating the clutches 50, 52 from the yarn feed rollers 22, 24 and by providing a direct gear connection between the yarn feed gears 42, 44 and the respective rollers 22, 24, removal of the drive shafts 38, 40 and the clutches 50, 52 mounted thereon can be effected without the need to disturb the yarn feed rollers 22, 24 and the yarn 32 wound thereon. The arrangement thus far described provides for two rates of yarn feed, corresponding to the respective speeds of the drive shafts 38, 40. By including a third drive shaft 54 having respective clutches 58 and feed gears 60 thereon, the feed gears 60 being in mesh with corresponding feed gears 42, and the drive shaft 54 being driven at a speed different from that of either of the shafts 38, 40, three rates of feed can be provided. The shafts 38, 40 and 54 may be driven at the three rates in conventional manner at one end thereof by drive means (not illustrated) driven in timed relationship to the reciprocation of the needle bar 16, the relationship being established by the main shaft (not illustrated) which is journalled in the head 12 of the machine. By providing the individual drive shafts 38, 40, 54, together with the respective clutches 50, 52, 58 and respective feed gears 42, 44, 60 mounted thereon, in modular form, removal and replacement of a clutch in the event of failure can be effected with minimal machine down time and, as has previously been mentioned, without any disturbance of the yarn feed rollers 22, 24 and the yarns thereon.
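The three-shaft arrangement above gives the pattern control a choice of three yarn feed rates per stitch. A rough sketch of that arithmetic follows; the shaft speeds, roller diameter, stitch rate and pattern are all hypothetical values chosen for illustration, not figures from the specification. Yarn delivered per stitch is simply the roller surface speed divided by the stitch rate.

```python
# Hypothetical numbers only: three drive shafts at different speeds, and a
# pattern that selects one shaft (via its clutch) for each stitch.
import math

SHAFT_RPM = {"high": 1200, "medium": 800, "low": 400}  # hypothetical speeds
ROLLER_DIAMETER_MM = 50.0                              # hypothetical roller size
STITCHES_PER_MINUTE = 600                              # hypothetical needle-bar rate

def yarn_per_stitch(rate):
    """Yarn length (mm) delivered during one stitch at the selected rate:
    roller surface speed (mm/min) divided by stitches per minute."""
    surface_mm_per_min = math.pi * ROLLER_DIAMETER_MM * SHAFT_RPM[rate]
    return surface_mm_per_min / STITCHES_PER_MINUTE

pattern = ["high", "low", "low", "medium"]   # one clutch selection per stitch
feeds = [round(yarn_per_stitch(r), 1) for r in pattern]
print(feeds)
```

The point mirrored here is that pile height variation comes entirely from which clutch is energized at each stitch; the gearing and rollers themselves never change.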
In order further to facilitate removal and replacement of a module, the present invention also proposes a quick release drive arrangement connecting successive sections of the individual drive shafts 38, 40, 54 and also the separation of the electrical feed to the clutches 50, 52, 58 mounted on the drive shaft sections. Thus, referring now to FIGS. 2 through 4, each drive shaft 38, 40, 54 comprises a plurality of shaft sections 218, illustrated in FIG. 3 with regard to the shaft 40, each shaft section having a length determined by requirements, the opposite ends of each shaft section being provided with cooperable male and female coupling means 62, 64 respectively, for engagement with the complementary coupling means of an aligned and adjacent shaft section 218. In practice, it appears that each shaft section will carry approximately 10 clutch and feed gear assemblies. The male coupling means 62 comprises a flanged cap 66 engaged with a respective end of the shaft section and supporting a bearing 68 thereon, there being a narrow bar 70 extending diametrically of the flanged outer end of the cap 66. The female coupling means 64 comprises a cap 72 engaged with the end of the shaft section 218, the cap 72 having a blind bore 74 at its outer end and there being two diametrically opposed slots 76 in the wall 78 defined by the bore 74. The slots 76 constitute a female formation and are sized and dimensioned to receive the bar 70 of an adjacent shaft section into coupling engagement therewith. The female coupling means 64 also includes a bearing 80 supported on a reduced portion thereof. The connection between two successive shaft sections 218 is supported in a bearing cover 82 which serves also axially to locate the sections, the bearing cover 82 being shown in FIG. 4 and, in part, at the left hand end of FIG. 3.
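The figure of roughly 10 clutch and feed gear assemblies per shaft section allows a quick back-of-envelope count of modules. The sketch below is illustrative only; the number of roller sets is a made-up example, and the "10 per section" figure is the approximation stated above.

```python
# Back-of-envelope sketch: how many modular shaft sections one drive shaft
# needs, given the approximate capacity of 10 clutch/feed-gear assemblies
# per section stated in the description.
import math

CLUTCHES_PER_SECTION = 10   # approximate figure from the description

def sections_needed(roller_sets, per_section=CLUTCHES_PER_SECTION):
    """Shaft sections required per drive shaft to serve `roller_sets`
    side-by-side roller sets (one clutch per set on each shaft)."""
    return math.ceil(roller_sets / per_section)

print(sections_needed(64))   # hypothetical machine with 64 roller sets
```

Each drive shaft (38, 40, 54) would need this many sections, since every roller set takes one clutch on every shaft.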
The bearing cover 82 comprises an upper part 84 having a recess 86 therein of semi-circular form dimensioned to receive the outer bearing races 88, 90 of the bearings 68, 80 respectively, and a lower part 92 having a recess 94 generally of semi-circular form, the recess 94 being enlarged at 96 adjacent the mating face 98 of the lower part to facilitate engagement with the respective bearings 68, 80. The upper and lower parts 84, 92 are secured together by bolts 100. The lower part 92 of each bearing cover further includes two locating pins 102, 104 engageable with the adjacent flanks or sides of the outer bearing races 88, 90 of the respective bearings 68, 80, thereby to locate the bearings 68, 80 in a predetermined relative disposition. Correct angular alignment of the shaft sections 218 on assembly of the shaft from the separate sections thereof is ensured by providing a three screw fixing of the end caps 66, 72 on the shaft ends, such fixing providing a reference for determining the angular disposition of the shaft relative to the cooperable male and female formations. For example, screws such as 106 connect the end caps and the corresponding end of the shaft section, the screws 106 being disposed in an offset relationship such as illustrated at 107 in FIG. 4. To reduce bearing wear, the clutches 50, 52 and 58 are of the rotary field type wherein the rotary field 108 is keyed to the shaft section 218 and the armature 110 is slidably splined on the hub 112 of the respective feed gear 42, 44, 60. When the clutch is actuated, the armature is pulled toward the field by the electromagnetic force and couples the corresponding feed gear to the clutch and thus to the shaft section to drive the feed gear at the speed of that shaft. Electrical feed to the clutches 50, 52, 58 is through respective slip rings 114, and the clutches are grounded through the respective shafts 38, 40 and 54.
The feed contacts 116, 118, which carry the brushes that contact the slip rings to supply current for the clutches, are supported in a hinged lid 120, as illustrated in FIG. 2, which overlies the shafts and closes the top of the housing in which they are mounted, related contacts 116, 118 being electrically connected through respective connector plates 122 such that on failure of one of a related pair of contacts the other contact of that pair remains operative. The lid 120 is held in the closed condition by a ball catch 123, and a handle 124 is provided to facilitate opening and closing of the lid, which is pivotable about the hinge 126. As will be appreciated, a requisite number of feed contacts can be provided in appropriate disposition in the lid, being electrically insulated from the lid, and can be brought into operative relationship with the individual clutches on closure of the lid and can be readily disengaged, permitting service of the sub-assembly of clutches, gears and shafts. Inspection of the feed contacts can be effected and any worn or damaged contacts replaced in a simple manner. When a clutch is defective or worn so as to require replacement, the lid 120 is opened and the required modular shaft section 218 is removed by disassembling the bearing covers 82 at each end of the shaft section 218. A new modular shaft section 218 is then installed, the bearing covers 82 reassembled, the lid 120 closed, and the machine is again operative. A guard 128 may be provided at the front of the yarn feed rollers 22, 24, the guard 128 being hingedly mounted on the tube bank 30 and being held in an operative position by ball catch means 130. Numerous alterations of the structure herein disclosed will suggest themselves to those skilled in the art. However, it is to be understood that the present disclosure relates to the preferred embodiment of the invention, which is for purposes of illustration only and not to be construed as a limitation of the invention.
All such modifications which do not depart from the spirit of the invention are intended to be included within the scope of the appended claims.